\section{Introduction}
\label{n1}
Since Anderson's seminal paper in 1958~\cite{1an}
the metal-insulator transition has been studied in a wide range of
systems. The scaling theory~\cite{2scal} predicts
that there is no metal-insulator transition in one-dimensional (1D) systems with
randomly-distributed impurities.
On the other hand, 1D quasiperiodic systems~\cite{1PRB,2PRA,3PRA,He,Gramsch}
can host localized, extended or critical eigenstates.
The Aubry-Andr\'{e} (AA) model~\cite{7aubry} is an important paradigm of 1D quasiperiodic systems.
It can be derived from the reduction of a two-dimensional quantum Hall system
in the magnetic field~\cite{20SU}. Due to recent advances in experimental techniques, the AA model has been realized
in ultracold atoms~\cite{8BILLY,9ROATI} and photonic crystals~\cite{10PRL,11PRL}.
The phase diagram of the AA model
has been well understood through extensive research~\cite{21SO,22SOU,23ZD,24GE,25MA,26WI},
and many variations of the AA model have also been studied.
By including a long-range hopping term or modulating the on-site potentials,
some authors found a mobility edge in the spectrum which can
be precisely addressed by the duality symmetry~\cite{12PRL,13PRL}.
Others addressed a transition from the topological superconducting
phase to the localized phase in the AA model with p-wave pairing interaction~\cite{14PRL,15PRL,16PRB,Cao}.
Among these variations, the off-diagonal Aubry-Andr\'{e} model
displays a rich phase diagram. Depending on how the nearest-neighbor hopping amplitude is modulated,
it either hosts zero-energy topological edge modes~\cite{17PRL}
or preserves critical states in a large region of parameter space~\cite{18PRB}.
\begin{figure}
\centering
\includegraphics[width=0.5
\textwidth]{1.eps}\\
\caption{(Color online) Phase diagram of the off-diagonal Aubry-Andr\'{e}
model. Four different regions are separated by the critical lines AB
($V=1-\lambda$), BC ($V=\lambda-1$), and AD ($V=1+\lambda$).}
\label{001}
\end{figure}
In a very recent paper~\cite{19PRB}, the authors
combined both the commensurate and incommensurate
modulations to explore the corresponding phase diagram. The model Hamiltonian
is expressed as
\begin{equation}\label{eq:ham}
\hat H=-\sum_{i}^{L-1}(t+\lambda_{i}+V_{i})(\hat{c}_{i}^\dag \hat{c}_{i+1}+h.c.),
\end{equation}
where $\hat c_i$ is a fermionic annihilation operator,
$L$ is the total number of sites, and $\lambda_{i}=\lambda\cos(2\pi b{i})$
and $V_{i}=V\cos(2\pi\beta{i}+\phi)$ denote the commensurate and incommensurate
modulations, respectively. A typical choice of the parameters is $b=1/2$, $\beta=(\sqrt{5}-1)/2$
and $\phi = 0$. For convenience, $t = 1$ is set as the energy unit.
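As an illustrative aside (not part of the original analysis), Hamiltonian~(\ref{eq:ham}) is tridiagonal in the single-particle basis and can be diagonalized directly. A minimal sketch with the parameters quoted above; the helper name \texttt{aa\_hamiltonian} is our own choice:

```python
import numpy as np

def aa_hamiltonian(L, lam, V, t=1.0, b=0.5,
                   beta=(np.sqrt(5) - 1) / 2, phi=0.0):
    """Single-particle matrix of the off-diagonal AA model, open boundaries."""
    i = np.arange(1, L)                       # bond index i = 1, ..., L-1
    J = t + lam * np.cos(2 * np.pi * b * i) \
          + V * np.cos(2 * np.pi * beta * i + phi)
    return -(np.diag(J, 1) + np.diag(J, -1))  # -J_i (c_i^dag c_{i+1} + h.c.)

E, psi = np.linalg.eigh(aa_hamiltonian(987, lam=0.5, V=0.2))
```

With $b=1/2$ the commensurate term simply alternates the bond strength as $t\pm\lambda$, which is why the Schr\"{o}dinger equation below is conveniently written out over groups of sites.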
It was argued that the phase diagram of this model can be divided into
three regions, which are the extended, the topologically-nontrivial localized and the topologically-trivial localized phases, respectively.
In this paper we revisit this model by using the symmetry and the multifractal
analysis. Our findings are summarized in the phase diagram (Fig.~\ref{001}).
The main results are: (i) there exists an additional region of the extended phase (region~II),
which can be mapped onto region~I by a newly-discovered symmetry transformation;
and (ii) regions~III and~IV are not localized phases but mixed phases
in which most of the eigenstates are critical.
The rest of the paper is organized as follows. In Sec.~\ref{n2}, we
present the symmetry transformation for Hamiltonian~(\ref{eq:ham}), and
use it to determine the boundaries between different phases.
We further show that regions~I and~II are in the extended phase
by calculating the inverse participation ratio numerically.
In Sec.~\ref{n3},
we apply the multifractal analysis in two different approaches.
In both approaches we verify that regions~III and~IV are mixed phases in which most of the eigenstates are critical.
\section{Symmetry transformation and inverse participation ratio}
\label{n2}
We identify the phase boundaries of the off-diagonal Aubry-Andr\'{e} model
by finding its symmetry transformation.
For our purpose, the Schr\"{o}dinger equation is expressed every four sites as
\begin{widetext}
\begin{equation}
\begin{split}
& -\left(t+\lambda+V\cos\left(2\pi\beta\left(4n\right)\right)\right) \psi_{4n+1} - \left(
t-\lambda +V\cos\left(2\pi\beta\left(4n-1\right)\right)\right) \psi_{4n-1} = E\psi_{4n} \\
& -\left(t-\lambda+V\cos\left(2\pi\beta \left(4n+1\right)\right)\right) \psi_{4n+2} - \left(
t+\lambda +V\cos\left(2\pi\beta\left(4n\right)\right)\right) \psi_{4n} = E\psi_{4n+1}\\
& -\left(t+\lambda+V\cos\left(2\pi\beta \left(4n+2\right)\right)\right) \psi_{4n+3} - \left(
t-\lambda +V\cos\left(2\pi\beta \left(4n+1\right)\right)\right) \psi_{4n+1} = E\psi_{4n+2} \\
& -\left(t-\lambda+V\cos\left(2\pi\beta \left(4n+3\right)\right)\right) \psi_{4n+4} - \left(
t+\lambda +V\cos\left(2\pi\beta \left(4n+2\right)\right)\right) \psi_{4n+2} = E\psi_{4n+3},
\end{split}
\end{equation}
\end{widetext}
where $\psi_{j}$ denotes the eigenfunction in the first-quantization language,
$E$ is the eigenenergy, and $n$ is an arbitrary integer.
We find a transformation with a period of four sites which keeps the Schr\"{o}dinger equation
invariant. The transformation is $t \to \lambda$, $\lambda \to t$, $\beta \to \beta +1/2$, $\psi_{4n}\to \psi_{4n}$,
$\psi_{4n+1}\to \psi_{4n+1}$, $\psi_{4n+2}\to -\psi_{4n+2}$ and $\psi_{4n+3}\to -\psi_{4n+3}$.
This transformation changes the sign of the wave function at some sites, but
has no influence on whether the wave function is localized or not. The shift of $\beta$ by $1/2$
keeps the absolute value of $V_j$ invariant at each site. Therefore, exchanging $t$ and $\lambda$
while keeping $V$ fixed alters the wave function only by these sign changes.
The transformation of $\left(t,\lambda,V\right)$
from $\left( 1,\lambda,V\right)$ to $\left( \lambda,1,V\right)$
does not change whether the eigenstate is localized or extended.
Furthermore, multiplying the Hamiltonian by an arbitrary number does not
change its eigenfunctions. As $t=1$ is fixed,
the simultaneous transformation $\lambda\to 1/\lambda$ and $V\to V/\lambda$
must relate two points in the phase diagram that belong to the same phase.
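This duality admits a minimal numerical check (a sketch under the stated choice $b=1/2$, $\phi=0$, open boundaries; the helper name \texttt{spectrum} is ours): the spectra at $(t,\lambda,V,\beta)$ and $(\lambda,t,V,\beta+1/2)$ should coincide exactly, since the residual sign changes of the wave function do not affect the eigenvalues.

```python
import numpy as np

beta = (np.sqrt(5) - 1) / 2

def spectrum(L, t, lam, V, beta, b=0.5, phi=0.0):
    """Eigenvalues of the off-diagonal AA chain with open boundaries."""
    i = np.arange(1, L)
    J = t + lam * np.cos(2 * np.pi * b * i) + V * np.cos(2 * np.pi * beta * i + phi)
    return np.linalg.eigvalsh(-(np.diag(J, 1) + np.diag(J, -1)))

E1 = spectrum(200, t=1.0, lam=0.5, V=0.3, beta=beta)
E2 = spectrum(200, t=0.5, lam=1.0, V=0.3, beta=beta + 0.5)  # t <-> lambda, beta -> beta + 1/2
assert np.allclose(E1, E2)  # the two parameter points share one spectrum
```

On a bond-by-bond level, the swap leaves even bonds unchanged and flips the sign of odd bonds; in an open 1D chain such sign patterns are pure gauge, which is why the assertion holds to machine precision.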
\begin{figure}
\centering
\includegraphics[width=0.5
\textwidth]{2.eps}\\
\caption{(Color online) MIPR as a function of $\lambda$ for different values of $V$.
The dashed lines mark where MIPR changes abruptly. They are located at $\lambda = 1\pm V$.
The total number of sites is set to $L=987$. We choose the open boundary condition.}
\label{002}
\end{figure}
Under this transformation, region~I in the phase diagram (Fig.~\ref{001})
is mapped onto region~II. For example, the boundary $AB$ ($V=1-\lambda$) becomes, after the transformation, the boundary $BC$ ($V=\lambda-1$). In particular, the entire region below $AB$ is mapped onto the region below $BC$. Therefore, regions~I and~II are dual to each other and belong to the same phase.
The boundary $AD$ ($V=\lambda+1$) is invariant under the symmetry transformation,
and regions~III and~IV are each mapped onto themselves.
By numerically calculating the inverse participation ratio (IPR) and the mean inverse participation ratio (MIPR), we find that regions~I and~II both belong to the extended phase.
The IPR of a normalized wave function $\psi$ is defined as~\cite{27TH,28KO}
\begin{equation}
\text{IPR}_n =\sum_{j=1}^{L} \left|\psi^n_{j}\right|^{4},
\end{equation}
where $L$ denotes the total number of sites and $n$ is the energy-level index.
It is well known that the IPR of an extended state scales as $L^{-1}$ and thus vanishes
in the thermodynamic limit, whereas for a
localized state the IPR remains finite even in the thermodynamic limit.
For a critical state the IPR scales as $L^{-\theta}$ with $0<\theta<1$.
There are $L$ different eigenfunctions for a specific Hamiltonian.
We then define the mean inverse participation
ratio (MIPR) as
\begin{equation}
\text{MIPR}=\sum_{n=1}^{L}\text{IPR}_{n}/L,
\end{equation}
where $\text{IPR}_{n}$ denotes the IPR of the $n$th eigenstate.
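These definitions translate directly into a few lines of code. The sketch below is our own illustration (parameters from region~I; the builder \texttt{aa\_hamiltonian} is a hypothetical helper reproducing Eq.~(\ref{eq:ham}) with open boundaries):

```python
import numpy as np

def aa_hamiltonian(L, lam, V, t=1.0, beta=(np.sqrt(5) - 1) / 2):
    i = np.arange(1, L)
    J = t + lam * np.cos(np.pi * i) + V * np.cos(2 * np.pi * beta * i)  # b = 1/2, phi = 0
    return -(np.diag(J, 1) + np.diag(J, -1))

E, psi = np.linalg.eigh(aa_hamiltonian(987, lam=0.5, V=0.2))  # a region-I point
ipr = np.sum(np.abs(psi) ** 4, axis=0)   # IPR_n, one value per eigenvector column
mipr = ipr.mean()                        # MIPR; O(1/L) deep in the extended phase
```

Since `eigh` returns normalized eigenvectors as columns, the Cauchy-Schwarz bound $\text{IPR}_n \ge 1/L$ holds automatically, and an MIPR of order $10^{-3}$ at $L=987$ is the extended-phase signature discussed below.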
\begin{figure}
\centering
\includegraphics[width=0.5
\textwidth]{3.eps}\\
\caption{(Color online) The distribution of IPR among all the eigenstates for different
$(\lambda,V)$ selected from the region (a) I with $(\lambda,V)=(0.5,0.2)$,
(b) II with $(\lambda,V)=(1.5,0.2)$, (c) III with $(\lambda,V)=(0.5,0.55)$ and (d) IV with $(\lambda,V)=(0.5,1.55)$.
The $x$-axis represents the eigenenergy $E$.
The number of sites is set to $L=10000$.
The red dots represent the zero-energy modes, which are present in
the topologically-nontrivial phase.}
\label{003}
\end{figure}
In Fig.~\ref{002}, we plot MIPR as a function of $\lambda$
at three disorder amplitudes: $V=0.2$, $V=0.5$, and $V=0.8$.
Here we used $L =987$. There are two turning points of MIPR,
located at $\lambda = 1-V$ and $\lambda =1+V$, respectively,
at which MIPR changes steeply.
We also checked that with an increasing number of sites $L$ the change of MIPR
at the turning points becomes even sharper. In the thermodynamic limit $L\rightarrow\infty$,
a singular behavior of MIPR is expected, signaling a transition between the extended phase and the localized (or critical) phase.
The thermodynamically vanishing MIPR thus indicates that the system is in the extended phase for $\lambda < 1-V$ and $\lambda >1+V$.
We further study the distribution of IPR with different eigenstates.
The results are plotted in Fig.~\ref{003}.
We find zero-energy modes (the red dots) in regions~I, II, and~III.
Thus, these three regions are in the topologically-nontrivial phase,
while region~IV is topologically trivial.
Fig.~\ref{003}(a) and
Fig.~\ref{003}(b) plot the IPR distribution
in regions~I and~II, respectively. The distribution has the same
characteristics in these two regions:
for almost all the eigenstates,
the IPRs are close to each other and very low (around $10^{-4}$).
This indicates that regions~I and~II have a pure energy spectrum
in which all the eigenstates are extended; extended, critical
and localized states do not coexist in these two regions.
On the other hand, regions~III and~IV show a significantly
different IPR distribution (see Fig.~\ref{003}(c) and
Fig.~\ref{003}(d)). In these two regions, the IPR
is at least one order of magnitude larger than that of
an extended state ($> 10^{-3}$). At the same time,
the IPRs of different eigenstates are dispersed
over more than two orders of magnitude (from $10^{-3}$ to
$10^{-1}$). Such a broad distribution suggests that
regions~III and~IV are not pure localized phases;
instead, critical states should exist in these two regions.
To clarify their nature, we apply the multifractal analysis
to the eigenfunctions in the next section.
\section{Multifractal analysis}
\label{n3}
We carry out the multifractal analysis in two different approaches.
In the first approach, we fix the total number of sites in the system, while
dividing the system into a series of boxes with the box length tunable.
This is called the box-counting method~\cite{29SI,30HU}.
In the second approach, we
obtain the scaling behavior of the wave functions by changing
the total number of sites.
\subsection{Box-counting method}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{4.eps}\\
\caption{(Color online) $\kappa$ as a function of $l$ on the logarithmic scale.
The crosses are for $(\lambda,V)=(0.5,0.55)$ which is located in region~III
of the phase diagram. The stars are for $(\lambda,V)=(0.5,1.55)$ which is located in region~IV.
Different colors represent different eigenstates.
The inset plots $\overline{\kappa}$ which is the average of $\kappa$
over all the eigenstates. The total number of sites is set to $L=8000$.}
\label{004}
\end{figure}
Let us consider a normalized wave function $\psi$ defined over a chain of $L$ sites.
The probability of finding the particle at site $j$ is given by $p^n_j = \left|\psi^n_j\right|^2$
which satisfies $\sum_{j=1}^L p^n_j =1 $. In the multifractal analysis of wave functions,
$p^n_j\geq 0$ is viewed as the measure assigned to site $j$ for the $n$th eigenstate, and this measure is assumed
to satisfy a local power law everywhere along the chain.
We divide the chain into $L/l$ segments, each containing $l$ sites.
The total measure in the $m$th segment is $P^n_m=\sum_{j=(m-1)l+1}^{ml} p^n_j$.
We then introduce a partition function
\begin{equation}
\kappa_n(q) = \sum_{m=1}^{L/l}\left( P^n_m\right)^q.
\end{equation}
The partition function obeys a power law $\kappa\sim (l/L)^{\tau}$ where the exponent $\tau$
is related to the multifractal dimension by $D(q)=\tau(q)/(q-1)$.
Following the previous literature we set $q=2$. It has been found that
$\tau(2)=D(2)$ tends to $0$ for a localized state and to $1$ for an extended state, while
$0<\tau(2)<1$ signals a critical state.
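The box-counting estimate of $\tau(2)$ can be sketched as follows (our own illustration; segment lengths must divide $L$, and the synthetic test vectors are limiting cases rather than eigenstates of the model). A perfectly uniform state gives $\tau(2)=1$ and a single-site state gives $\tau(2)=0$, bracketing the critical values:

```python
import numpy as np

def tau2(psi_n, lengths):
    """Exponent tau(2) from kappa(l) = sum_m P_m^2 ~ (l/L)^tau."""
    p = np.abs(psi_n) ** 2
    L = p.size
    kappa = [np.sum(p.reshape(L // l, l).sum(axis=1) ** 2) for l in lengths]
    # slope of log kappa versus log(l/L) is the box-counting exponent tau(2)
    return np.polyfit(np.log(np.array(lengths) / L), np.log(kappa), 1)[0]

L = 8000
lengths = [4, 8, 10, 16, 20, 40, 80, 100]
uniform = np.full(L, L ** -0.5)          # extended limit: tau(2) = 1
delta = np.zeros(L); delta[0] = 1.0      # localized limit: tau(2) = 0
```

Applied to a genuine eigenstate, a single power-law fit is meaningful only when the log-log curve has no turning point, which is exactly the diagnostic used below.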
We select two typical points in region~III and~IV, which are $\left(\lambda,V\right) = \left(0.5,0.55\right)$
and $\left(0.5,1.55\right)$, respectively.
Fig.~\ref{004} plots the corresponding $\kappa$ as a function of $l$ on the logarithmic scale.
By carefully checking the eigenstates in regions~III and~IV, we find that the system has a mixed spectrum in which each eigenstate is either localized or critical. Here we take a few typical eigenstates as examples.
For the $668$th eigenstate at $V=0.55$ (red crosses in Fig.~\ref{004}),
$\kappa$ scales as $l^{0.708}$ for $l<10^2$, but $\tau(2)$ tends to $0$ for larger segments.
The existence of a turning point in the curve of $\kappa$ is the typical feature of a localized state;
indeed, localized states display multifractal features only up to the localization length.
Similarly, the $393$rd eigenstate at $V=1.55$
is also localized (see the red stars in Fig.~\ref{004}).
But for the $759$th eigenstate at $V=0.55$ (blue crosses) and the $635$th eigenstate
at $V=1.55$ (blue stars), $\kappa$ scales as $l^{0.613}$ over the whole domain of $l$.
The absence of a turning point, together with a fractional $\tau$, indicates that
these two states are critical. Therefore, in regions~III and~IV the system
has a mixed spectrum in which localized and critical states coexist.
We further study the average of $\kappa$ over different eigenstates, defined as
\begin{equation}
\overline{\kappa}=\sum_{n=1}^{L}\kappa_{n}(2)/L,
\end{equation}
where $\kappa_n$ denotes the partition function of the $n$th eigenstate.
The inset of Fig.~\ref{004} plots $\overline{\kappa}$ as a function
of $l$ on the logarithmic scale. In both regions~III and~IV,
$\overline{\kappa}$ scales approximately as $l^\tau$ over the whole domain
of $l$, with no obvious turning point and a fractional exponent $0<\tau<1$.
This indicates that most of the eigenstates in regions~III and~IV
are critical states. Otherwise, $\overline{\kappa}$ would exhibit a turning
point and a plateau at larger $l$, because for large $l$ the exponent $\tau(2)$
tends to $0$ for a localized state but remains above $0$ for a critical state.
If there were a significant fraction of localized states in the spectrum,
their contribution to $\overline{\kappa}$ would dominate, and $\overline{\kappa}$
would then display such a plateau.
It is worth mentioning that the Legendre transformation of $\tau(q)$
is precisely the singularity spectrum, an important quantity
characterizing the multifractal nature of the system. We discuss the
singularity spectrum in the next subsection, where, for numerical stability,
we employ a different method for calculating it.
\subsection{Finite size scaling}
\label{n4}
In the box-counting method, the segment length is tunable. An extreme case
is that each segment contains only a single site. The segment length is
then $1/L$. Note that the length of the whole chain is usually normalized to unity
in the multifractal analysis. Therefore, the segment length can also
be changed by changing the total number of sites $L$.
According to previous works~\cite{1PRB,31HI,32WA}, it is convenient to choose $L=F_m$
where $F_{m}$ is the $m$th Fibonacci number.
The advantage of this choice is that the golden ratio can be expressed as
$\beta = (\sqrt{5}-1)/2 =\lim_{m \rightarrow \infty} \frac{F_{m-1}}{F_{m}}$.
In a multifractal system the measure $p^n_j$ at any site satisfies a local power law:
\begin{equation}
p^n_j \sim \left(1/F_{m}\right)^{\gamma^n_j},
\end{equation}
where $\gamma^n_j$ is the singularity exponent. The set of sites
that share the same singularity exponent $\gamma$ is a fractal set of dimension $f(\gamma)$.
$f(\gamma)$ is just the so-called singularity spectrum. The approach of obtaining
$f(\gamma)$ is summarized as follows. The partition function is now defined as
$Z_m(q)=\sum_{j=1}^{F_m} \left(p^n_j\right)^q$. We then introduce the free energy
$G_m(q) = \ln Z_m(q)/m$. The singularity spectrum is the Legendre transformation
of the free energy, given by
\begin{equation}
f(\gamma) = q\gamma - G_m(q)/\epsilon,
\end{equation}
with $\gamma=\displaystyle\frac{1}{\epsilon}\displaystyle\frac{d G_m}{dq}$ and $\epsilon=\ln\beta$.
For a critical wave function, $f(\gamma)$ is nonzero in an interval
$\left(\gamma_{min},\gamma_{max}\right)$ in which $f(\gamma)$ changes continuously.
In the thermodynamic limit $m\to \infty$, the value of $\gamma_{min}$ can be used
to distinguish the extended state ($\gamma_{min}=1$), the localized state ($\gamma_{min}=0$)
and the critical state ($0< \gamma_{min} <1$). Since there are $F_m$ eigenstates
for each Hamiltonian,
we use the average of $ \gamma_{min}$ over all the $F_m$ eigenstates as the indicator,
which is denoted by $\overline{\gamma_{min}}$.
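The prescription above can be sketched numerically as follows (our own illustration, not the authors' code; the derivative $dG_m/dq$ is taken on a discrete $q$ grid). For a perfectly uniform state on $L=F_{16}=987$ sites, $G_m(q)$ is linear in $q$, so $\gamma$ is $q$-independent and $f(\gamma)$ collapses to the single point $f=\gamma=\ln F_m/(-m\epsilon)$, which approaches $1$ as $m\to\infty$:

```python
import numpy as np

def singularity_spectrum(psi_n, m, qs=np.linspace(-4, 4, 161)):
    """gamma(q) and f(gamma) from the free energy G_m(q) = ln Z_m(q) / m."""
    eps = np.log((np.sqrt(5) - 1) / 2)          # epsilon = ln(beta) < 0
    p = np.abs(psi_n) ** 2
    G = np.array([np.log(np.sum(p ** q)) for q in qs]) / m
    gamma = np.gradient(G, qs) / eps            # gamma = (1/eps) dG_m/dq
    f = qs * gamma - G / eps                    # Legendre transform
    return gamma, f

m, L = 16, 987                                  # L = F_16
gamma, f = singularity_spectrum(np.full(L, L ** -0.5), m)
```

For a critical eigenstate the same routine yields a genuinely curved $f(\gamma)$, and $\gamma_{min}$ is read off from the small-$\gamma$ end of the support; repeating this for increasing $m$ and extrapolating in $1/m$ gives the indicator used below.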
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{5.eps}\\
\caption{(Color online) $\overline{\gamma_{min}}$ as a function of $1/m$ for
$(\lambda,V)=(0.5,0.2)$, $(1.5,0.2)$, $(0.5,0.55)$ and $(0.5,1.55)$.
These four points are located in region~I, II, III and IV, respectively.}
\label{005}
\end{figure}
We plot $\overline{\gamma_{min}}$ as a function of $ 1/m $ for different $(\lambda,V)$
in Fig.~\ref{005}.
We find that $\overline{\gamma_{min}}$ extrapolates to $1$ for
$(\lambda,V)=(0.5,0.2)$ and $(1.5,0.2)$, which are both in the extended phase,
in agreement with our expectation.
$\overline{\gamma_{min}}$ extrapolates to $0.35$ for $(\lambda,V)=(0.5,0.55)$,
and to $0.27$ for $(\lambda,V)=(0.5,1.55)$.
The fractional $\overline{\gamma_{min}}$ provides further evidence
that most of the states in regions~III and~IV are critical states.
\section{Conclusions}
\label{n5}
In summary, we have clarified the phase diagram of the off-diagonal
AA model. For this purpose, we discovered a symmetry transformation
which changes the sign of the wave function but keeps its amplitude invariant.
We also applied the box-counting and the finite-size scaling methods
to analyze the eigenfunctions. Both approaches
show that the wave functions display multifractal behavior.
We believe that the methods employed in this paper are useful
for a wide range of variations of the AA model.
\begin{acknowledgments}
This work was supported by the NSF of Zhejiang Province (Grant No. Z15A050001),
the NSF of China (Grant Nos. 11374266 and 11304280), and the Program for New Century Excellent Talents in University.
\end{acknowledgments}
\section{Introduction}
\label{intro}
Rapidly-oscillating Ap (roAp) stars are magnetic chemically-peculiar stars pulsating in non-radial,
magnetoacoustic {\it p-}modes of periods close to 10~min. Pulsations in roAp stars are believed to be
driven by the opacity mechanism operating in the hydrogen-ionization zone
\citep{balmforth:2001}. The presence of strong magnetic fields in roAp stars enhances the
driving of the high-overtone oscillations by the suppression of convection, influences pulsation
frequencies, and determines the global geometry of the pulsational perturbations
\citep{dziembowski:1996,saio:2005}.
Strongly inhomogeneous vertical distributions of chemical elements combined with the rapid
transformation of the outwardly-propagating pulsation waves are responsible for the unique
spectroscopic pulsation signature of roAp stars \citep{ryabchikova:2002}. In particular, the
lines of rare-earth elements (REEs) and the cores of the hydrogen lines pulsate with a factor of 10--100
higher amplitudes than the remaining spectral features
\citep{kochukhov:2001b,mkrtichian:2003,kurtz:2006b}.
The observed roAp instability strip is limited to the $T_{\rm eff}$\ range 6400--8100~K, although theoretical
stability calculations \citep{cunha:2002} predict pulsations in stars as hot as 9500~K and are unable to
account for the unstable modes observed in stars with $T_{\rm eff}$\,$\le$\,7400~K. At the same time, the coexistence
of pulsating and apparently constant Ap stars in the same region of the H-R diagram has been a long-standing
puzzle \citep[e.g.,][]{martinez:1994}. It is now understood that the time-resolved photometric techniques
employed to detect and study variability in most of the 37 known roAp stars are insensitive to the
low-amplitude pulsations observed in spectroscopic time-series analyses
\citep{hatzes:2004,elkin:2005}. This leads to the suggestion that all magnetic Ap stars in a certain
temperature range may oscillate, but some have amplitudes below the photometric detection threshold
\citep{kochukhov:2002}.
We test this hypothesis by completing a high-precision survey of a sample of bright cool magnetic Ap stars
using the High Accuracy Radial velocity Planet Searcher (HARPS) spectrograph at the European Southern
Observatory (ESO). The first result of our observations -- the discovery of 10.9-minute oscillations in the
Ap star HD\,115226 -- was reported by \citet{kochukhov:2008b}. Here we present the discovery of a new
rapidly oscillating Ap star, \object{HD\,75445}, which pulsates with one of the lowest radial-velocity (RV) amplitudes measured for roAp stars.
\section{Observations and data reduction}
\label{obs}
We used the HARPS spectrograph \citep{mayor:2003} at the ESO 3.6-m telescope at La Silla to monitor HD\,75445\ in
the context of our search for low-amplitude variability in bright cool Ap stars (ESO program 079.D-0118). The star
was observed on the night of April 15, 2007. The observations started at the barycentric JD 2454205.47129 and
continued for 3.8 hours. We collected 120 consecutive 80~s exposures, separated by a dead time of 31~s. The
resulting time resolution of 111~s enabled us to detect variations with frequencies as high as $\nu=4.5$~mHz
($P=3.7$~min).
The extraction of one-dimensional spectra and the barycentric velocity correction of the wavelength scale were
performed with the help of the HARPS pipeline. Our spectra have a nominal resolving power
$R\equiv\lambda/\Delta\lambda=115\,000$, and cover a wavelength range from 3780 to 6910~\AA, with a 30~\AA\ gap close to
5320~\AA. Individual exposures of HD\,75445\ have a peak signal-to-noise ratio of 70 per 15~m\AA\ pixel
at $\lambda$~6000~\AA.
In the final reduction step, one-dimensional extracted spectra of HD\,75445\ were post-processed to achieve
consistent continuum normalization following the procedure described in \citet{kochukhov:2007}.
We did not employ the simultaneous ThAr method available at HARPS, avoiding contamination of the stellar
signal. Instead, we acquired a ThAr reference spectrum at the beginning and end of the stellar observations.
Using these calibrations, we estimated that the instrumental drift within the time series was below 0.1~m\,s$^{-1}$. For
the moderate signal-to-noise ratio of individual spectra of HD 75445, the dominant source of noise in the radial-velocity measurements was photon noise ($\ge$\,2~m\,s$^{-1}$) rather than the instrumental precision, which is similar to the
measured drift.
\section{Basic properties of HD\,75445}
\label{params}
The southern chemically-peculiar star HD\,75445\ (HIP\,43257, CD $-38^{\rm o}$4907) was classified as a Sr-Eu object by
\citet{bidelman:1973}. Its Str\"omgren photometric indices, $b-y=0.159$, $m_1=0.218$, $c_1=0.729$
\citep{vogt:1979}, H$\beta=2.801$ \citep{Maitzen:2000}, indicate $T_{\rm eff}$\,=\,7600--7700~K according to the
calibrations by \citet{moon:1985} and \citet{napiwotzki:1993}. Geneva colours yield $T_{\rm eff}$\,=\,7680~K
\citep{kochukhov:2006}, in good agreement with the Str\"omgren photometry.
\citet{kochukhov:2006} investigated the evolutionary state of HD\,75445\ using Hipparcos parallax and photometric $T_{\rm eff}$.
They determined $\log{L}=1.17\pm0.06$\,$L_\odot$, $M=1.81\pm0.05$\,$M_\odot$ and a stellar age that is a factor 0.56--0.72 of
the main-sequence lifetime. \citet{ryabchikova:2004a} included HD\,75445\ in their abundance analysis study of a sample of
roAp and non-pulsating Ap stars. Adopting $T_{\rm eff}$\,=\,7700~K and $\log g$\,=\,4.3, they showed that HD\,75445\ has close
to solar Fe abundance, moderate enhancement of Cr and Mn, 1.6~dex overabundance of Co, and a large
overabundance of several REEs. As for many known roAp stars, HD\,75445\ exhibits an ionization anomaly of Pr and Nd,
with doubly ionized lines of these elements yielding a 1.3--2.0~dex higher abundance than the lines of the first ions.
\citet{ryabchikova:2008} examined the Ca stratification and isotopic composition of HD\,75445. They reported a 2.0~dex
step-like change of the Ca concentration at $\log\tau_{5000}=-0.9$ and detected the presence of heavy Ca isotopes
($^{46}$Ca and $^{48}$Ca) in the upper atmospheric layers. This study also inferred a spectroscopic $T_{\rm eff}$\,=\,7650~K
using the H$\alpha$ line.
\begin{figure}[!t]
\fifps{8.5cm}{11419_f1.eps}
\caption{Comparison of the 6135--6165~\AA\ region in the spectra of the new roAp star HD\,75445\ and the
well-known bright roAp star $\gamma$\,Equ. The UVES spectrum of $\gamma$\,Equ\ is shown on top, with the
identifications of the strongest spectral features. The mean
HARPS spectrum of HD\,75445\ (middle, thick curve) is compared with the UVES observation of this star
(bottom, thin curve) obtained 5.4 years before the HARPS observations. The UVES spectra are shifted
vertically for display purposes.
}
\label{fig1}
\end{figure}
\citet{ryabchikova:2004a} commented on the spectroscopic similarity of HD\,75445\ and the bright roAp star $\gamma$\,Equ\
(\object{HD\,201601}). This point is illustrated in Fig.~\ref{fig1} with new high-quality, $R=115\,000$
spectra available for both stars (mean HARPS spectrum for HD\,75445\ and mean UVES spectrum derived from the
archival time series data set of $\gamma$\,Equ). The spectra of these two stars are almost identical, the only
difference being slightly broader line profiles of $\gamma$\,Equ\ due to the stronger mean surface field strength of
this star. However, there is a discrepancy between the spectra of the two stars in the region of the resonance \ion{Li}{i}
doublet at $\lambda$ 6708~\AA, which is strong in $\gamma$\,Equ\ but entirely absent in HD\,75445\ \citep{kochukhov:2008a}.
\citet{mathys:1997b} detected Zeeman splitting in the \ion{Fe}{ii} 6149~\AA\ line of HD\,75445\ and measured a mean
field modulus $\langle B \rangle$\,=\,$2985\pm42$~G from 9 spectra recorded over a period of 450~d in 1994--1995.
\citet{ryabchikova:2004a} provided three additional measurements of $\langle B \rangle$, 2915, 2957, and 2873~G, for the spectra obtained
in 2000--2001. The splitting of \ion{Fe}{ii} 6149~\AA\ in our mean HARPS spectrum (2007.3) and in the UVES
spectrum from 2001 \citep{ryabchikova:2008} is consistent with $\langle B \rangle$\,=\,3030~G. In summary, the full set of 14 $\langle B \rangle$\
measurements shows no evidence of periodic field strength variation.
\begin{figure}[!t]
\fifps{8.5cm}{11419_f2.eps}
\caption{Amplitude spectrum for the ASAS photometry of HD\,75445\ obtained after 2003.6.}
\label{fig2}
\end{figure}
We searched for long-period rotational brightness modulation in HD\,75445\ using the Hipparcos epoch
photometry \citep{ESA:1997} and the ASAS database \citep{pojmanski:2002}. No periodic variability with an
amplitude larger than 5~mmag was detectable in the Hipparcos light curve. The ASAS photometry of HD\,75445\ exhibited erratic
brightness changes before 2003, which were probably instrumental in nature. Measurements obtained after
2003.6 did not deviate significantly from the mean value $V=7.14$. The amplitude spectrum computed for the ASAS
observations of HD\,75445\ during 2003.6--2008.8 shows a marginal 7~mmag variability with a 29.5~d period
(Fig.~\ref{fig2}). This is consistent with our estimate of $v_{\rm e}\sin i$\,$\le$\,2~km\,s$^{-1}$\ obtained by fitting profiles
of magnetically insensitive \ion{Fe}{i} lines at $\lambda$ 5434, 5576, and 5691~\AA. Comparison of the mean
HARPS spectrum with the UVES observation obtained 5.4 years before our observing run shows no detectable
changes in the line profiles (Fig.~\ref{fig1}), suggesting a very long rotation period.
The spectroscopic similarity of HD\,75445\ to $\gamma$\,Equ, its prominent REE ionization anomaly, and
its effective temperature of $T_{\rm eff}$\,$<$\,8000~K make
this star an obvious candidate for an oscillation search \citep{ryabchikova:2004a}. However,
no photometric pulsation signature exceeding one mmag was detected for this star by Martinez (private
communication). Here, we demonstrate that HD\,75445\ is indeed a roAp star but pulsating with an amplitude well below the
current detection threshold of the ground-based, time-resolved photometry.
\section{Analysis of radial velocity variation}
\label{rv}
We measured radial velocities of lines in the spectrum of HD\,75445\ using the centre-of-gravity technique
\citep{kochukhov:2001b}. Spectral line identification was based on the atomic line data extracted from the
VALD
database \citep{kupka:1999}, which includes the
DREAM
compilation of the REE line parameters
\citep{biemont:1999}. The list of \ion{Nd}{iii}\ transitions was further extended using the study by
\citet{ryabchikova:2006}.
Previous time-resolved spectroscopic analyses of roAp stars
\citep{kochukhov:2001b,mkrtichian:2003,ryabchikova:2007b} demonstrated that
maximum pulsation amplitudes are always found in singly and doubly ionized REE absorption
features, such as \ion{Nd}{ii}, \ion{Nd}{iii}, and \ion{Pr}{iii}. A number of strong and medium-strength lines of REE
ions are present in the spectrum of HD\,75445. However, the observational data available to us were of insufficient signal-to-noise ratio to detect pulsations in individual lines. We
reduced the noise in the velocity curves by averaging RV measurements for all lines of a given
REE ion. Among the rare-earth elements, only \ion{Nd}{ii}\ and \ion{Nd}{iii}\ have a sufficient number of lines in the
spectrum of HD\,75445\ to yield precise combined RV measurements.
Using 29 lines of \ion{Nd}{iii}\ and 56 lines of \ion{Nd}{ii}, we achieved a noise level of 3--5~m\,s$^{-1}$\ in the
amplitude spectra and revealed conspicuous amplitude peaks in the 1.8--2.0~mHz frequency range, implying the presence of oscillations with amplitudes of 20--30~m\,s$^{-1}$
(Fig.~\ref{fig3}, Table~\ref{tbl1}). These oscillation signatures are highly significant. The
probability that noise would produce a peak of the observed amplitude at \textit{any} frequency
in the studied range \citep[False Alarm Probability,][]{horne:1986} is $7\times10^{-5}$
for \ion{Nd}{ii}\ and $4\times10^{-6}$ for \ion{Nd}{iii}. We also applied a bootstrap randomization technique
\citep{kuerster:1997}, which is a more rigorous method of establishing the statistical
significance of a peak in amplitude spectrum. Of the $10^5$ randomly shuffled data sets
created from the original mean \ion{Nd}{ii}\ and \ion{Nd}{iii}\ RV curves, none exhibited spurious peaks of the
observed amplitude in the frequency range 0--4.5~mHz. Thus, the probability that noise would
create the signal detected in \ion{Nd}{ii}\ and \ion{Nd}{iii}\ lines is less than $10^{-5}$.
\begin{figure}[!th]
\fifps{8.5cm}{11419_f3.eps}
\caption{From the top panel to the bottom panel: amplitude spectra for the average
radial-velocity curves of 15 telluric lines,
49 lines of \ion{Fe}{i} and {\sc ii}, 29 lines of \ion{Nd}{iii}, and 56 lines of \ion{Nd}{ii}. The vertical
dashed line shows the main pulsation frequency $\nu=1.85$~mHz. In each panel,
the amplitude spectrum of the entire data set (thick curve) is compared to that of the first 1.8~h of the
spectroscopic monitoring (thin curve). Horizontal dotted lines show corresponding noise levels.}
\label{fig3}
\end{figure}
Applying a similar analysis procedure to 15 telluric lines in the 6275--6315~\AA\ region, we
found no oscillations above 8~m\,s$^{-1}$\ with a noise level of 3~m\,s$^{-1}$. Similarly, for the combined RV
curve of 49 \ion{Fe}{i} and \ion{Fe}{ii} lines, which are not expected to show significant variation
in a low-amplitude roAp star, we found a maximum amplitude of 6~m\,s$^{-1}$\ and a noise level of 2~m\,s$^{-1}$.
The stability of the telluric lines and the stellar Fe features confirms that the variation detected in the Nd
lines of HD\,75445\ is not an instrumental artifact.
\begin{table}[!t]
\centering
\caption{Frequency analysis of the average radial velocity curves of telluric lines, \ion{Fe}{i} and {\sc ii},
\ion{Nd}{iii}\ and \ion{Nd}{ii}. $N$ indicates the number of lines measured. $A_{\rm max}$ gives the highest radial velocity
amplitude, followed by the estimate of False Alarm Probability of the corresponding signal. The last
two columns give the amplitude of the variation with $\nu=1.85$~mHz ($P=9.01$~min) and the noise estimate.
\label{tbl1}}
\begin{tabular}{lccccc}
\hline
\hline
Ion & $N$ & $A_{\rm max}$ & FAP & $A$ & $\sigma$ \\
& & (m\,s$^{-1}$) & & (m\,s$^{-1}$) & (m\,s$^{-1}$) \\
\hline
\multicolumn{6}{c}{Full data set}\\
Telluric & 15 & 7.9 & 0.31 & $2.5\pm2.5$ & 2.6 \\
\ion{Fe}{i},{\sc ii} & 49 & 6.2 & 0.31 & $1.5\pm1.8$ & 1.8 \\
\ion{Nd}{ii}\ & 56 & 29.1 & $6.6\times10^{-5}$ & $24.6\pm4.8$ & 4.9 \\
\ion{Nd}{iii}\ & 29 & 20.5 & $3.6\times10^{-6}$ & $20.4\pm3.0$ & 3.0 \\
\multicolumn{6}{c}{First 1.8~h}\\
\ion{Nd}{ii}\ & 56 & 46.2 & $8.7\times10^{-6}$ & $44.2\pm5.5$ & 5.5 \\
\ion{Nd}{iii}\ & 29 & 36.5 & $1.1\times10^{-6}$ & $36.4\pm3.8$ & 3.7 \\
\hline
\end{tabular}
\end{table}
The complex appearance of the \ion{Nd}{ii}\ and \ion{Nd}{iii}\ amplitude spectra in Fig.~\ref{fig3} suggests a multiperiodic
pulsation. The presence of several excited modes in HD\,75445\ became apparent when the mean RV data were analysed in
the time domain. We found that in the first 55 observations of HD\,75445, corresponding to the initial 1.8~h of our
time-resolved observations, pulsation variability was clearly evident. The amplitude spectra of this partial
data set, illustrated in Fig.~\ref{fig3}, indicated an almost monoperiodic pulsation. The least-squares fitting
of this part of the oscillation curve yielded amplitudes of 36 and 46~m\,s$^{-1}$\ as well as pulsation periods of
$8.93\pm0.05$~min and $9.04\pm0.04$~min for the singly and doubly ionized Nd, respectively. There was a small
phase lag of $0.44\pm0.19$~rad between the RV maxima of the two Nd ions. As in other roAp stars
\citep{ryabchikova:2007b}, \ion{Nd}{iii}\ in HD\,75445\ showed a later maximum than \ion{Nd}{ii}.
The prominent sinusoidal variation in the Nd lines was subdued after about 2~h from the beginning of our observations, presumably
due to beating of several excited modes as seen in other multiperiodic roAp stars \citep[e.g.,][]{sachkov:2008}. Tentative
least-squares analysis suggested the presence of at least three significant frequencies: $\nu_1=1.81$~mHz
($P_1=9.20$~min), $\nu_2=1.85$~mHz ($P_2=9.01$~min), and $\nu_3=1.99$~mHz ($P_3=8.37$~min). The length of
our time series does not allow $\nu_1$ and $\nu_2$ to be fully resolved. On the other hand, $\nu_2$ and $\nu_3$
were resolved, and $\nu_3$ was found to have a higher relative amplitude for \ion{Nd}{ii}.
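The least-squares analysis above amounts to simultaneously fitting a sum of fixed-frequency sinusoids to the mean RV curve. A minimal sketch follows; the function name is ours, and the frequencies are assumed to be known and held fixed during the fit.

```python
import numpy as np

def fit_frequencies(t, rv, freqs_hz):
    """Simultaneous least-squares fit of fixed-frequency sinusoids:
    rv ~ c + sum_k [a_k cos(2 pi nu_k t) + b_k sin(2 pi nu_k t)].
    Returns the amplitude and phase of each frequency component."""
    cols = [np.ones_like(t)]
    for f in freqs_hz:
        cols += [np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)]
    X = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(X, rv, rcond=None)
    amps = [np.hypot(coef[1 + 2 * k], coef[2 + 2 * k])
            for k in range(len(freqs_hz))]
    phases = [np.arctan2(coef[2 + 2 * k], coef[1 + 2 * k])
              for k in range(len(freqs_hz))]
    return np.array(amps), np.array(phases)
```

Fitting all components simultaneously, rather than one frequency at a time, is what allows closely spaced modes such as $\nu_1$ and $\nu_2$ to share the variance consistently even when they are only marginally resolved.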
\section{Discussion}
\label{discus}
We have established the presence of multiperiodic pulsations in the cool magnetic Ap star HD\,75445\
using combined RV measurements of the lines belonging to \ion{Nd}{ii}\ and \ion{Nd}{iii}. The star exhibits oscillations
with three frequencies, which have different amplitude ratios for the two Nd ions. The phase lag between RV
curves of \ion{Nd}{ii}\ and \ion{Nd}{iii}\ can be interpreted in the framework of the outwardly propagating pulsational
perturbation, which first reaches the layer where \ion{Nd}{ii}\ lines form and, after some delay, is seen in the
higher atmospheric layer probed by stronger \ion{Nd}{iii}\ lines \citep{ryabchikova:2007b,mashonkina:2005}. The
difference in the amplitude ratios of the frequency components of the \ion{Nd}{ii}\ and \ion{Nd}{iii}\ RV curves can be
ascribed to different vertical cross-sections of the three pulsation modes.
HD\,75445\ is the 38th known roAp star. Its discovery is significant because the star's pulsation amplitude is
noticeably lower than for other roAp stars discovered to date using time-resolved spectroscopy. For
example, HD\,218994 \citep{gonzalez:2007} and HD\,115226 \citep{kochukhov:2008b} pulsate with an amplitude
$\ge$\,500~m\,s$^{-1}$, while for HD\,116114 \citep{elkin:2005} and HD\,154708 \citep{kurtz:2006b} pulsations with
an amplitude of 50--100~m\,s$^{-1}$\ were reported. Only for $\beta$\,CrB (HD\,137909) were comparable RV amplitudes of
20--30~m\,s$^{-1}$\ found in individual lines of singly ionized REEs \citep{kurtz:2007a}. However, $\beta$\,CrB
is in many respects different from other roAp stars. It is an evolved star with a long pulsation period and
a chemical composition deviating from that of a typical roAp star \citep{ryabchikova:2004a}. In contrast, HD\,75445\
appears to have average roAp characteristics and is, in fact, a spectroscopic twin of the well-known roAp
star $\gamma$\,Equ. Nevertheless, it pulsates with an unusually low RV amplitude. This shows that although the
atmospheric chemical composition, in particular the REE ionization anomaly, is helpful in selecting roAp
candidates, it has no direct connection with the amplitude of oscillations in the line-forming
region.
\citet{kurtz:2006b} noted the tendency for weaker roAp oscillations to be found in stars with stronger fields.
However, HD\,75445\ has a mean field modulus that is significantly weaker than those of many roAp stars, yet it shows an exceptionally low
pulsation amplitude. We therefore conclude that a low-amplitude roAp pulsation can be present in cool Ap stars
of any field strength. Although there are theoretical reasons to believe that the magnetic field alters the
amplitude of the photospheric oscillations in a few strong-field stars, a parameter other than the field intensity
defines pulsation amplitude for other roAp stars.
The detection of very low-amplitude pulsations in HD\,75445\ suggests that the roAp excitation mechanism produces
oscillations with no apparent lower amplitude threshold. Thus, many cool Ap stars may possess pulsations
with RV amplitudes $\ll$\,100~m\,s$^{-1}$, which can be currently detected by time-resolved spectroscopy only in
bright sharp-line stars such as HD\,75445.
\section*{ACKNOWLEDGMENTS}
This work was supported in part by the topical research
program (2009 -T-1) of Asia Pacific Center for
Theoretical Physics.
\vskip 5.4mm
\section{Introduction}\label{sec:intro}
Optimal design for generalized linear models (GLMs) \citep{sitter1995d-optimal,khuri2006design,silvey2013optimal, fedorov2013optimal} is an important topic in the design of experiments area.
In recent years, there have been new developments on both theoretical and algorithmic fronts, such as \cite{woods2011continuous,yang2011optimal, burghaus2014optimal, 14wu, waite2015designs, wong2019optimal} among many others.
A key challenge of optimal design for GLMs is that the design criterion often depends on the regression model assumption, including the specification of the link function, the linear predictor and the values of the unknown regression coefficients.
Many existing works focus on local optimal designs given a certain model specification, such as in \cite{09yang, li2009some, 12yang, 14wu,li2018efficient}.
In contrast to local optimal designs, one type of global optimal design takes parameter uncertainty into consideration along two directions.
One direction is to consider a prior distribution of the unknown parameters, and construct the so-called Bayesian optimal design \citep{khuri2006design, amzal2006bayesian, woods2017bayesian}.
The design criterion is typically the integral of the local design criterion or efficiency with respect to the prior of the parameters.
When such integration is not analytically available, a standard solution is to sample from the prior distribution and use the weighted average of local design criteria or efficiencies as the objective function \citep{atkinson2015designs}.
Another direction is to use the minimax/maximin approach to minimize the design criterion or maximize the efficiency under the ``worst-case'' scenario.
\cite{sitter1992robust} introduced a minimax procedure for obtaining a design to deal with parameter uncertainty.
\cite{king2000minimax} proposed an efficient algorithm to construct a maximin design for the logistic regression model under D-optimality.
\cite{imhof2000graphical} developed an algorithm to maximize the minimum efficiency under two competing optimality criteria with a graphical method.
Note that the existing literature on maximin/minimax designs often focuses on D-optimality and the uncertainty of the unknown parameters.
The biggest challenge in maximin/minimax designs is that the design construction can be quite difficult \citep{atkinson2015designs}.
Besides the unknown parameters, there could be other uncertainties involved in a GLM, such as the specification of the link function and the linear predictor.
The literature on the designs for GLMs to deal with such kind of model uncertainty is relatively scarce.
\cite{woods2006designs} proposed a compromise design that minimizes a weighted average of criteria, where each criterion is based on a potential model.
Later, \cite{dror2006robust} proposed using clustered local optimal designs, and showed the resulting design had a comparable performance with the compromise design through numerical examples.
In this work, we propose a new maximin $\Phi_p$-efficient design (denoted as Mm-$\Phi_p$) criterion for GLMs using the $\Phi_p$-efficiency \citep{75kiefer} and develop an efficient algorithm for design construction.
The proposed design, namely the Mm-$\Phi_{p}$ design, can accommodate several types of uncertainties, including (i) uncertainty of the unknown parameter values; (ii) uncertainty of the linear predictor; and (iii) uncertainty of the link function.
Here, we focus on \emph{approximate design} \citep{75kiefer,atkinson2014optimal}, which describes the design as a probability measure on a group of support points.
It provides the framework for us to investigate theoretical properties of the proposed design criterion, and lays a theoretical foundation for constructing an efficient algorithm with desirable convergence properties.
The key idea of this work is to adopt a continuous and convex relaxation (i.e., the ``log-sum-exp'' approximation) as a tight approximation of the worst-case $\Phi_p$-efficiency with respect to uncertainty of model specifications.
With this relaxation, we arrive at a tractable design criterion, which facilitates the theoretical investigation for developing an efficient algorithm to construct the corresponding design.
The merit of this idea is not restricted to the $\Phi_p$ criterion, even though $\Phi_p$ is already a quite general criterion that includes the A-, D-, E-, and I-optimality criteria as special cases.
Through the demonstration of the proposed approach based on the $\Phi_p$-criterion, it is apparent that this convex and smooth relaxation idea can be applied to other maximin designs as long as the criterion is convex in the design.
The framework we have developed, including the general equivalence theorem and the design construction algorithm as well as its convergence, can be extended to other maximin designs as well.
Other main contributions of this work are summarized as follows.
First, the proposed Mm-$\Phi_p$ design criterion is very general, covering various design criteria, such as D-, A-, E-optimality for estimation accuracy and I-, EI-optimality for prediction accuracy \citep{li2018efficient}.
Second, different from the Bayesian optimal design, the proposed Mm-$\Phi_p$ design is a maximin design, which avoids the choice of prior distributions on the model specifications.
Third, the proposed Mm-$\Phi_p$ design can flexibly accommodate the aforementioned three types of model uncertainties in GLM.
Finally, the proposed algorithm has impressive computational efficiency with sound theoretical properties, and can be easily modified to construct compromise designs and Bayesian optimal designs.
The rest of the article is organized as follows.
Section \ref{sec:crit+GET} describes the Mm-$\Phi_p$ design criterion and investigates the theoretical properties.
In Section \ref{sec:algorithm}, an efficient algorithm is developed.
Numerical examples are conducted in Section \ref{sec:examples} to examine the performance of the proposed method.
We summarize the work with some discussions in Section \ref{sec:discussion}. All the technical proofs are detailed in the Appendix.
\section{The Mm-$\Phi_p$ Design Criterion and Its Properties}\label{sec:crit+GET}
Consider an experiment with $d$ design variables, $\boldsymbol{x} = [x_1,...,x_d]$, and $x_j\in \Omega_j$, where $\Omega_j$ is a measurable domain of all possible values for $x_j$.
The experimental region, $\Omega$, is a certain measurable subset of $\Omega_1\times\cdots\times\Omega_d$.
For a GLM, the response $Y(\boldsymbol{x})$ is assumed to follow a distribution in the exponential family.
The link function, $h: \mathbb{R}\rightarrow\mathbb{R}$, provides the relationship between the linear predictor, $\eta=\boldsymbol{\beta}^\top\boldsymbol{g}(\boldsymbol{x})$, and $\mu(\boldsymbol{x})$, the mean of the response $Y(\boldsymbol{x})$ as
$\mu(\boldsymbol{x}) = \mathbb{E}[Y(\boldsymbol{x})] = h^{-1}\left(\boldsymbol{\beta}^\top\boldsymbol{g}(\boldsymbol{x})\right)$,
where $\boldsymbol{g} = [g_1,...,g_l]^\top$ are the known basis functions of the design variables, $\boldsymbol{\beta}=[\beta_1,\beta_2,...,\beta_{l}]^\top$ are the corresponding regression coefficients, and $h^{-1}$ is the inverse function of $h$.
The approximate design $\xi$ is defined as $\xi = \left\{\begin{array}{ccc}
\boldsymbol{x}_1,&...,&\boldsymbol{x}_n\\
\lambda_1,&...,&\lambda_n
\end{array}\right\}$,
where $\boldsymbol{x}_1, \ldots, \boldsymbol{x}_{n}$ are the support points, and $0<\lambda_i<1$ represents the probability mass allocated to the corresponding support point $\boldsymbol{x}_i$.
We use $M = (h,\boldsymbol{g},\boldsymbol{\beta})$ to denote the model specification of a GLM whose link function is $h$, basis functions are $\boldsymbol{g}$, and the vector of the regression coefficients is $\boldsymbol{\beta}$.
The Fisher information matrix of the GLM $M$ is
\begin{equation}\label{eqn:fisher}
{\mathsf I}(\xi,M) = \sum\limits_{i=1}^n\lambda_i\boldsymbol{g}(\boldsymbol{x}_i)w(\boldsymbol{x}_i,M)\boldsymbol{g}^\top(\boldsymbol{x}_i),
\end{equation}
where
$w(\boldsymbol{x}_i,M) = \left[\var(Y(\boldsymbol{x}_i))[h^{'}(\mu(\boldsymbol{x}_i))]^2\right]^{-1}$.
Clearly, ${\mathsf I}(\xi,M)$ depends on all three components of $M=(h, \boldsymbol{g}, \boldsymbol{\beta})$.
Various local optimal design criteria in the literature are all based on the Fisher information with a specified $M$.
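For concreteness, the Fisher information matrix in \eqref{eqn:fisher} can be evaluated numerically as follows. This is a sketch with our own function names; the logistic weight $w(\boldsymbol{x}) = \mu(\boldsymbol{x})(1-\mu(\boldsymbol{x}))$ for a binary GLM with the logit link is shown as one example of the weight function.

```python
import numpy as np

def fisher_information(points, weights, beta, basis, w_fun):
    """Fisher information I(xi, M) = sum_i lambda_i g(x_i) w(x_i, M) g(x_i)^T."""
    l = len(beta)
    info = np.zeros((l, l))
    for x, lam in zip(points, weights):
        g = basis(x)                       # basis functions g(x), shape (l,)
        info += lam * w_fun(g @ beta) * np.outer(g, g)
    return info

def w_logistic(eta):
    """Weight for the logistic model: w = mu(1 - mu) with mu = 1/(1 + exp(-eta))."""
    mu = 1.0 / (1.0 + np.exp(-eta))
    return mu * (1.0 - mu)
```

Other models only require swapping in the appropriate weight function $w(\boldsymbol{x},M)$; the accumulation over the support points is unchanged.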
\subsection{The Mm-$\Phi_p$ Design Criterion}\label{subsec:criterion}
To represent the uncertainties of a GLM, we denote the set of candidate link functions, the set of the candidate basis functions, and the domain of the regression coefficients as $\mathcal{H}$, $(\mathcal{G}|\mathcal{H})$, and ($\mathcal{B}|\mathcal{H},\mathcal{G})$, respectively.
The conditional notation represents the dependence of the basis functions $\boldsymbol{g}$ on the choice of the link function $h$, and the dependence of the regression coefficients $\boldsymbol{\beta}$ on the choice of both $h$ and $\boldsymbol{g}$.
The set $\mathcal{M} = \{M = (h,\boldsymbol{g},\boldsymbol{\beta}): h\in\mathcal{H}, \boldsymbol{g}\in(\mathcal{G}|\mathcal{H}), \boldsymbol{\beta}\in (\mathcal{B}|\mathcal{H},\mathcal{G}) \}$ contains all model specifications of interest.
In the optimal design theory, \emph{efficiency} is a popular and scale-free performance measurement to compare the designs for a given criterion.
Specifically, for a generic design criterion $\Psi(\xi,\mathcal{M})$, which is to be minimized, the efficiency of a design $\xi$ relative to another design $\xi'$ is defined as \citep{06atk}
\begin{equation}\label{defn:minefficiency}
\eff_{\Psi}(\xi,\xi';\mathcal{M}) = \frac{\Psi(\xi',\mathcal{M})}{\Psi(\xi,\mathcal{M})}.
\end{equation}
Using such a definition of efficiency, the design $\xi$ is more efficient than design $\xi'$ as long as the efficiency in \eqref{defn:minefficiency}
is larger than 1.
When a single model specification is considered, i.e., $\mathcal{M} = \{M\}$, the criterion $\Psi$ becomes a local optimal design criterion.
When multiple specifications are considered, the criterion $\Psi$ corresponds to a global optimal design criterion, such as Bayesian optimality, compromise design optimality, minimax/maximin optimality, etc.
Throughout this work, for a specified model $M$, we use the generalized $\Phi_p$-optimality introduced in \cite{kiefer1974general}, which is
\begin{eqnarray}\label{eq:phi_p}
\Phi_p(\xi,M) = \left(q^{-1}\tr\left[\frac{\partial \boldsymbol{f}(\boldsymbol{\beta})}{\partial \boldsymbol{\beta}^{\top}}{\mathsf I}(\xi,M)^{-1}\left(\frac{\partial \boldsymbol{f}(\boldsymbol{\beta})}{\partial \boldsymbol{\beta}^{\top}}\right)^{\top}\right]^p\right)^{1/p},\,\,\,0<p<\infty,
\end{eqnarray}
where $\boldsymbol{f}(\boldsymbol{\beta}) = [f_1(\boldsymbol{\beta}),...,f_q(\boldsymbol{\beta})]^{\top}$ are some functions of $\boldsymbol{\beta}$.
Common examples are linear contrasts of the coefficients, such as $\beta_k$ and $\beta_j-\beta_{j'}$.
Note that the $\Phi_p$-optimality is essentially D-optimality as $p\rightarrow 0$ and E-optimality as $p\rightarrow\infty$.
Let us denote ${\xi^{\opt}_M}$ to be the local optimal design which minimizes the $\Phi_p$-criterion for model $M$.
According to \eqref{defn:minefficiency}, the $\Phi_p$-efficiency of any design $\xi$ relative to local optimal design ${\xi^{\opt}_M}$ given a specific $M = (h,\boldsymbol{g},\boldsymbol{\beta})$ is
\begin{eqnarray}\label{eqn:phieff}
\eff_{\Phi_p}(\xi,{\xi^{\opt}_M};M) = \frac{\Phi_p({\xi^{\opt}_M},M)}{\Phi_p(\xi,M)}.
\end{eqnarray}
It is obvious that $0 \le \eff_{\Phi_p}(\xi,{\xi^{\opt}_M};M) \le 1$ for any $\xi$, and the larger the $\Phi_p$-efficiency is, the more efficient the design $\xi$ is.
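As an illustration, for the special case $\boldsymbol{f}(\boldsymbol{\beta}) = \boldsymbol{\beta}$, the $\Phi_p$-criterion in \eqref{eq:phi_p} reduces to $(q^{-1}\tr[{\mathsf I}(\xi,M)^{-p}])^{1/p}$ and can be computed from the eigenvalues of the (symmetric) information matrix. The function names below are ours.

```python
import numpy as np

def phi_p(info, p=1.0):
    """Phi_p criterion for f(beta) = beta: (q^{-1} tr[I(xi)^{-p}])^{1/p},
    computed via the eigenvalues of the symmetric information matrix."""
    eigvals = np.linalg.eigvalsh(info)
    return np.mean(eigvals ** (-p)) ** (1.0 / p)

def phi_p_efficiency(info, info_opt, p=1.0):
    """Phi_p-efficiency of a design relative to the local optimum:
    Phi_p(xi_opt, M) / Phi_p(xi, M)."""
    return phi_p(info_opt, p) / phi_p(info, p)
```

Note that $p=1$ recovers (a scaled) A-optimality, while a large $p$ approaches E-optimality, i.e., the reciprocal of the smallest eigenvalue, consistent with the limits stated above.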
Under the idea of global maximin design, we consider the maximin $\Phi_p$-efficient design, which maximizes the smallest possible $\eff_{\Phi_p}(\xi,\xi^{\opt}_M;M)$ over all $M\in \mathcal{M}$.
That is, we consider a maximin design as
\begin{align}\label{eqn:orirobustdesign}
\xi^{*}&=\argmax\limits_{\xi} \inf_{M\in\mathcal{M}} \eff_{\Phi_p}(\xi,{\xi^{\opt}_M};M).
\end{align}
In the optimization problem \eqref{eqn:orirobustdesign}, the infimum is used instead of the minimum because it is not certain whether the minimum is attainable.
To simplify the problem, we take a closer look at the model set $\mathcal{M}$.
In practice, $\mathcal{H}$ usually contains a few potential link functions.
For example, the link function of Poisson regression for counting data is $h(\mu(\boldsymbol{x})) = \ln(\mu(\boldsymbol{x}))$, and the link function of a GLM for binary data could be logistic function $h(\mu(\boldsymbol{x})) = \ln\left(\frac{\mu(\boldsymbol{x})}{1-\mu(\boldsymbol{x})}\right)$, or probit function $h(\mu(\boldsymbol{x})) = \Phi^{-1}(\mu(\boldsymbol{x}))$, or a complementary log-log function $h(\mu(\boldsymbol{x})) = \ln(-\ln(1-\mu(\boldsymbol{x})))$.
The set of the candidate basis functions $(\mathcal{G}|\mathcal{H})$ is often finite too.
The typical basis functions used in GLMs are linear and/or higher-order polynomials of $\boldsymbol{x}$.
Note that $(\mathcal{B}|\mathcal{H},\mathcal{G})$, the domain of $\boldsymbol{\beta}$, often is uncountable since $\boldsymbol{\beta}$ is considered to be continuous.
Consequently, the set $\mathcal{M}$ is an uncountable set, which may not ensure an attainable minimum.
A common remedy \citep{dror2006robust, woods2006designs,atkinson2015designs,woods2017bayesian} is to discretize $(\mathcal{B}|\mathcal{H, G})$ and create a finite and countable subset ($\mathcal{B}'|\mathcal{H},\mathcal{G})$.
The corresponding surrogate set $\mathcal{M}' = \{M = (h,\boldsymbol{g},\boldsymbol{\beta}): h\in\mathcal{H}, \boldsymbol{g}\in(\mathcal{G}|\mathcal{H}), \boldsymbol{\beta}\in (\mathcal{B}'|\mathcal{H},\mathcal{G})\}$ is also a subset of the original $\mathcal{M}$.
Replacing $\mathcal{M}$ by $\mathcal{M'}$ in \eqref{eqn:orirobustdesign}, the solution of
\begin{eqnarray}\label{eqn:robustdesign}
\xi^{*} = \argmax \limits_{\xi} \min_{M\in\mathcal{M}'} \left[\eff_{\Phi_p}(\xi,{\xi^{\opt}_M};M)\right]
\end{eqnarray}
is a sub-optimal solution of \eqref{eqn:orirobustdesign}.
When the discretization is adequate to form a close approximation of $\mathcal{M}$, the sub-optimal solution is expected to be close to the original optimal solution.
The design criterion in \eqref{eqn:robustdesign} still poses a challenging optimization problem due to the non-smooth objective function $\min_{M\in\mathcal{M}'} \left[\eff_{\Phi_p}(\xi,{\xi^{\opt}_M};M)\right]$
\citep{wong1992unified, wong1993heteroscedastic, schwabe1997maximin, king1998optimal, atkinson2015designs}.
We consider using ``Log-Sum-Exp" as a tight and smooth approximation to the minimum function, which is widely used in machine learning \citep{calafiore2014optimization}.
With the ``Log-Sum-Exp", one can have
\begin{align} \label{eq:lsebounds}
& \left[\ln\left(\sum\limits_{j=1}^m \exp\left(\frac{1}{\eff_{\Phi_p}(\xi,{\xi^{\opt}_{M_j}};M_j)}\right) \right)\right]^{-1} \le \min_{M\in\mathcal{M}'} \eff_{\Phi_p}(\xi,{\xi^{\opt}_M};M) \nonumber \\
& \leq \left[\ln\left(\sum\limits_{j=1}^m \exp\left(\frac{1}{\eff_{\Phi_p}(\xi,{\xi^{\opt}_{M_j}};M_j)}\right)\right)-\ln(m)\right]^{-1},
\end{align}
where $m$ is the cardinality of $\mathcal{M}'$, i.e., the number of potential model specifications in $\mathcal{M}'$.
The equality in the first inequality is obtained when $m=1$, and the equality in the second inequality holds when $\eff_{\Phi_p}(\xi,{\xi^{\opt}_{M_j}};M_j)$ remains the same for all $M_j\in\mathcal{M}'$.
Thus
maximizing $\left [ \ln\left(\sum\limits_{j=1}^m \exp\left(\frac{1}{\eff_{\Phi_p}(\xi,{\xi^{\opt}_{M_j}};M_j)}\right)\right)\right ]^{-1}$ leads to maximizing both the lower and upper bound of the worst (or the smallest) $\Phi_p-$efficiency.
Therefore, instead of solving \eqref{eqn:robustdesign}, which involves an inner minimization of $\Phi_p$-efficiency,
we propose to use the ``Log-Sum-Exp" approximation of the worst-case $\Phi_p$-efficiency as the design criterion, which is to minimize
\begin{equation}\label{eqn:lse}
\lse(\xi,\mathcal{M}') \triangleq \ln\left(\sum\limits_{j=1}^m \exp\left(\frac{1}{\eff_{\Phi_p}(\xi,{\xi^{\opt}_{M_j}};M_j)}\right)\right).
\end{equation}
Minimizing $\lse(\xi,\mathcal{M}')$ is the same as maximizing $\left [ \ln\left(\sum\limits_{j=1}^m \exp\left(\frac{1}{\eff_{\Phi_p}(\xi,{\xi^{\opt}_{M_j}};M_j)}\right)\right)\right ]^{-1}$
since $\ln\left(\sum\limits_{j=1}^m \exp\left(\frac{1}{\eff_{\Phi_p}(\xi,{\xi^{\opt}_{M_j}};M_j)}\right)\right)>0$.
We call the $\lse(\xi,\mathcal{M}')$, which aims at maximizing the minimal $\Phi_p$-efficiency, the Mm-$\Phi_p$ criterion.
The design that minimizes $\lse(\xi,\mathcal{M}')$ is called the Mm-$\Phi_p$ design for the surrogate model set $\mathcal{M}'$, denoted by ${\xi^{\mr}_{\mathcal{M}'}}$.
It is obvious that minimizing $\lse(\xi,\mathcal{M}')$ is equivalent to minimizing
\begin{align}\label{eqn:criterion}
\se(\xi,\mathcal{M}') \triangleq \sum\limits_{j=1}^m \exp\left(\frac{1}{\eff_{\Phi_p}(\xi,{\xi^{\opt}_{M_j}};M_j)}\right)
= \sum\limits_{j=1}^m \exp\left(\frac{\Phi_p(\xi,M_{j})}{\Phi_p({\xi^{\opt}_{M_j}},M_{j})}\right).
\end{align}
That is to say ${\xi^{\mr}_{\mathcal{M}'}} = \argmin\limits_{\xi} \se(\xi,\mathcal{M}')=\argmin\limits_{\xi} \lse(\xi,\mathcal{M}')$.
In Sections \ref{subsec:robvscom} and \ref{subsec:theory}, we first compare the proposed maximin $\Phi_p$-efficient design ${\xi^{\mr}_{\mathcal{M}'}}$ with the well-known compromise design in \cite{woods2006designs}
and then show the convexity of $\se(\xi,\mathcal{M}')$ with respect to $\xi$, as well as the necessary and sufficient conditions of the Mm-$\Phi_p$ design ${\xi^{\mr}_{\mathcal{M}'}}$.
\subsection{Connection to Compromise Design}\label{subsec:robvscom}
\cite{woods2006designs} proposed a compromise design that optimizes the weighted average of certain criteria, where each criterion is based on a potential model from some prior.
It means that the compromise design requires a prior distribution $p(M)$ for the model specifications $M\in\mathcal{M}'$.
The prior distribution can be as simple as a uniform distribution or other informative distributions.
There can be two different ways to define a compromise design.
The first way aims at maximizing a weighted average of the local $\Phi_p$-efficiencies.
That is
\[
{\xi^{\effcom}_{\mathcal{M}'}} = \argmax_{\xi} \sum_{j=1}^m p(M_j)\eff_{\Phi_p}(\xi,{\xi^{\opt}_{M_j}};M_j),
\]
and it is henceforth called the eff-compromise design.
Clearly, this average of the local efficiencies is not smaller than the reciprocal of $\lse(\xi,\mathcal{M}')$ since
\[
[\lse(\xi,\mathcal{M}')]^{-1}\leq \min_{M\in\mathcal{M}'} \eff_{\Phi_p}(\xi,{\xi^{\opt}_M};M)\leq \sum_{j=1}^m p(M_j)\eff_{\Phi_p}(\xi,{\xi^{\opt}_{M_j}};M_j).
\]
Thus the eff-compromise design maximizes only an upper bound of the worst $\Phi_p$-efficiency.
This is not as ideal as $\lse(\xi,\mathcal{M}')$:
minimizing $\lse(\xi,\mathcal{M}')$ simultaneously maximizes both a lower and an upper bound of the worst $\Phi_p$-efficiency (see \eqref{eq:lsebounds}),
even though the two upper bounds $\left[\lse(\xi,\mathcal{M}')-\ln(m)\right]^{-1}$ and $\sum_{j=1}^m p(M_j)\eff_{\Phi_p}(\xi,{\xi^{\opt}_{M_j}};M_j)$ can both be attained, depending on the prior distribution.
Another type of compromise design is to minimize the weighted average of local $\Phi_p$-criterion. That is
\[
{\xi^{\phicom}_{\mathcal{M}'}} = \argmin_{\xi} \sum_{j=1}^m p(M_j)\Phi_p(\xi, M_j),
\]
which is henceforth called the $\Phi_p$-compromise design.
Such a design criterion is more consistent with the classic Bayesian optimal design.
According to \cite{woods2006designs} and \cite{atkinson2015designs}, the Bayesian optimal design can be considered as a special case of the compromise design, as the former only deals with the uncertainty of the unknown parameters of the GLMs, whereas the compromise design handles all three kinds of uncertainties that are listed previously, including uncertainty of the parameters.
We would like to point out that the $\Phi_p$-compromise design can be sensitive to the choice of the prior distribution, especially when the optimal criterion values of different model specifications are very different. On the contrary, $\lse(\xi,\mathcal{M}')$ does not assume any prior distribution and is robust to all the choices of the prior distribution of model specifications.
\subsection{General Equivalence Theorem}\label{subsec:theory}
To develop an efficient algorithm to construct the Mm-$\Phi_p$ design, we study the convexity of the objective function $\se(\xi,\mathcal{M}')$ with respect to $\xi$, and summarize the necessary and sufficient conditions of the Mm-$\Phi_p$ design ${\xi^{\mr}_{\mathcal{M}'}}$ in a General Equivalence Theorem.
To make this part concise, we list the major results here and place the lemmas and the proofs in the Appendix.
For a model specification $M_j\in \mathcal{M}'$, we simplify the notation of the information matrix ${\mathsf I}(\xi,M_j)$ to be ${\mathsf I}_j(\xi)$, the weight function $w(\boldsymbol{x},M_j)$ in \eqref{eqn:fisher} to be $w_j(\boldsymbol{x})$, the $\Phi_p$-criterion value of a design $\Phi_p(\xi, M_j)$ to be $\Phi_p^j(\xi)$, and the $\Phi_p$-criterion value $\Phi_p({\xi^{\opt}_{M_j}}, M_j)$ of the local optimal design to be $\Phi_p^{\opt_j}$.
Then, we can rewrite $\se(\xi,\mathcal{M}')$ as $\se(\xi,\mathcal{M}')=\sum\limits_{j=1}^m \exp\left(\frac{\Phi_p^j(\xi)}{\Phi_p^{\opt_j}}\right)$.
Lemma \ref{lem:SeConvex} in the Appendix proves the convexity of $\se(\cdot,\mathcal{M}')$ with respect to $\xi$.
Given two designs $\xi$ and $\xi'$, the directional derivative of $\se(\xi,\mathcal{M}')$ in the direction of $\xi'$ is defined as follows.
\begin{equation}\label{defn:dirder}
\nabla_{\xi'}\se(\xi,\mathcal{M}'):=\phi(\xi',\xi) = \lim\limits_{\alpha\rightarrow 0^+}\frac{\se((1-\alpha)\xi+\alpha \xi',\mathcal{M}')-\se(\xi,\mathcal{M}')}{\alpha}, \quad \alpha\in [0,1].
\end{equation}
Lemma \ref{lem:dirder} in the Appendix derives the concrete formula of $\phi(\xi',\xi)$.
If $\xi'$ only contains a single support point $\boldsymbol{x}$ with corresponding weight $\lambda=1$, the directional derivative of $\se(\xi,\mathcal{M}')$ in the direction of $\xi'$ is a special case of Lemma \ref{lem:dirder}.
We denote this directional derivative as $\phi(\boldsymbol{x},\xi)$, and give its formula in Lemma \ref{lem:dirder2} in the Appendix.
Following Lemma \ref{lem:dirder2}, we also provide the specific formulas of $\phi(\boldsymbol{x},\xi)$ for the D-, A- and EI-optimality.
With these results, we can obtain the General Equivalence Theorem \ref{thm:equi_thm} for the Mm-$\Phi_p$ design that maximizes $\lse(\xi,\mathcal{M}')$, or equivalently minimizes $\se(\xi,\mathcal{M}')$.
\begin{thm}[General Equivalence Theorem]\label{thm:equi_thm}
The following two conditions on a design ${\xi^{\mr}_{\mathcal{M}'}}$ are equivalent:
\begin{enumerate}
\item
The design ${\xi^{\mr}_{\mathcal{M}'}}$ minimizes $\lse(\xi,\mathcal{M}')$ and $\se(\xi,\mathcal{M}')$.
\item
$\phi(\boldsymbol{x},{\xi^{\mr}_{\mathcal{M}'}})\geq 0$ holds for any $\boldsymbol{x}\in\Omega$, and the inequality becomes equality if $\boldsymbol{x}$ is a support point of the design ${\xi^{\mr}_{\mathcal{M}'}}$.
\end{enumerate}
\end{thm}
The General Equivalence Theorem \ref{thm:equi_thm} for the $\lse$ criterion in \eqref{eqn:lse} provides important guidelines on how the support points of the Mm-$\Phi_p$ design should be added in a sequential manner.
The proposed algorithm for the Mm-$\Phi_p$ design (detailed in Section \ref{sec:algorithm}) iterates between adding a support point and updating the weights $\lambda_i$, and can be considered a Fedorov--Wynn type of algorithm \citep{dean2015handbook}.
In each step of an iteration, we add one design point $\boldsymbol{x}^*$ into the current design as a support point, if $\boldsymbol{x}^*$ meets the following two conditions.
The first condition is that its directional derivative is negative, $\phi(\boldsymbol{x}^*,\xi)<0$.
Otherwise, if there does not exist an $\boldsymbol{x}\in \Omega$ such that $\phi(\boldsymbol{x},\xi)<0$, then $\xi$ has already reached the optimum.
The second condition is that the directional derivative at $\boldsymbol{x}^*$ reaches the minimum, i.e., its \emph{size} is maximal among all candidate points whose directional derivative values are also negative.
This condition leads to the maximum reduction of $\se(\xi,\mathcal{M}')$ if $\boldsymbol{x}^*$ is added to $\xi$.
After the design point $\boldsymbol{x}^*$ is added, the weights of all design points in the current design need to be updated.
Thus, it is important to investigate the property of the optimal weights when the design points are given.
Given design points $\boldsymbol{x}_1,\boldsymbol{x}_2,...,\boldsymbol{x}_n$, the weight vector $\boldsymbol{\lambda} = [\lambda_1,\lambda_2,\ldots,\lambda_n]^\top$ is the only variable for the design.
We emphasize this by adding a superscript $\boldsymbol{\lambda}$ in the notation of the design and denote it as
$\xi^{\boldsymbol{\lambda}} = \Big\{\begin{array}{ccc}
\boldsymbol{x}_1,&...,&\boldsymbol{x}_n\\
\lambda_1,&...,&\lambda_n
\end{array}\Big\}$.
Consider $\se(\xi^{\boldsymbol{\lambda}},\mathcal{M}')$ as a function of $\boldsymbol{\lambda}$, i.e.,
\begin{eqnarray}\label{eqn:weightlb}
\se(\cdot,\mathcal{M}'): \{\boldsymbol{\lambda} = (\lambda_1,\cdots,\lambda_n): \lambda_i > 0,\sum\lambda_i=1\}\mapsto \sum\limits_{j=1}^m \exp\left(\frac{\Phi_p(\xi^{\boldsymbol{\lambda}},M_j)}{\Phi_p^{\opt_j}}\right).
\end{eqnarray}
The optimal weight vector $\boldsymbol{\lambda}^*$ should be the one that minimizes $\se(\xi^{\boldsymbol{\lambda}},\mathcal{M}')$ with the given support points $\boldsymbol{x}_1,...,\boldsymbol{x}_n$.
Lemma \ref{lem:ConvWeights} in the Appendix proves the convexity of $\se(\xi^{\boldsymbol{\lambda}},\mathcal{M}')$ with respect to $\boldsymbol{\lambda}$.
Corollary \ref{thm:equi_weight} provides a sufficient and necessary condition on the optimal weights for a design whose support points are fixed.
It is a special case of Theorem \ref{thm:equi_thm} when the experimental region is restricted to the set $\Omega = \{\boldsymbol{x}_1,...,\boldsymbol{x}_n\}$.
\begin{cor}[Conditions of Optimal Weights]\label{thm:equi_weight}
Given a set of design points $\boldsymbol{x}_1,...,\boldsymbol{x}_n$, the following two conditions on the weight vector $\boldsymbol{\lambda}^* = [\lambda^*_1,...,\lambda^*_n]^\top$ are equivalent:
\begin{enumerate}
\item
The weight vector $\boldsymbol{\lambda}^*$ minimizes $\lse(\xi^{\boldsymbol{\lambda}},\mathcal{M}')$ and $\se(\xi^{\boldsymbol{\lambda}},\mathcal{M}')$.
\item
For all $\boldsymbol{x}_i$ with $\lambda_i^*>0$, $\phi(\boldsymbol{x}_i,\xi^{\boldsymbol{\lambda}^*}) = 0;$
for all $\boldsymbol{x}_i$ with $\lambda_i^*=0$, $\phi(\boldsymbol{x}_i,\xi^{\boldsymbol{\lambda}^*}) \geq 0.$
\end{enumerate}
\end{cor}
\section{Efficient Algorithm of Constructing Mm-$\Phi_{p}$ Design}\label{sec:algorithm}
This section details the proposed sequential algorithm, named as \textbf{Mm-$\Phi_{p}$ Algorithm}, to construct the Mm-$\Phi_p$ design ${\xi^{\mr}_{\mathcal{M}'}}$.
The proposed algorithm has a sound theoretical rationale as well as impressive computational efficiency.
The key idea of the proposed algorithm is as follows.
In each sequential iteration, a new design point $\boldsymbol{x}^* = \argmin\limits_{\boldsymbol{x}\in\Omega} \phi(\boldsymbol{x},\xi)$ with a negative directional derivative value $\phi(\boldsymbol{x}^*,\xi)<0$ is added to the current design,
and then the Optimal-Weight Procedure (detailed in Section \ref{sec: weight updating}) is used to optimize the weights of the current design points.
The stopping rule of the proposed sequential algorithm can be determined based on the efficiency of the obtained design.
The proposed \textbf{Mm-$\Phi_{p}$ Algorithm}, in a similar spirit to sequential Wynn-Fedorov type algorithms, adds a new design point only after the optimal weights of the existing design points are achieved.
In each iteration, the design point that minimizes the directional derivative $\phi(\boldsymbol{x},\xi)$ will be added into the design to gain a maximum reduction of $\se$ criterion value.
Then, the weights of all design points in the current design are optimized, which will be described in Section \ref{sec: weight updating}.
Theoretically, the algorithm should not terminate until the directional derivatives of all candidate design points in the experimental region are nonnegative.
However, this stopping rule is impractical since it requires many iterations before all the directional derivative values become nonnegative (numerically, values of exactly zero are unlikely to occur).
To address this issue, we consider terminating the algorithm when the design efficiency is large enough, say close to 1.
Such a stopping criterion is much better than terminating the algorithm when $\min\limits_{\boldsymbol{x}\in\Omega}\phi(\boldsymbol{x},\xi)>\epsilon$ for a small negative $\epsilon$.
The drawback of the latter rule is that the choice of $\epsilon$ does not directly reflect the quality of the achieved design, since $\phi(\boldsymbol{x},\xi)$ is only a directional derivative.
Following the general definition of design efficiency in \eqref{defn:minefficiency}, we denote the efficiency of a design $\xi$ relative to the Mm-$\Phi_p$ design ${\xi^{\mr}_{\mathcal{M}'}}$ that minimizes the Mm-$\Phi_p$ criterion $\lse$ as:
\begin{equation}\label{eqn:robusteff}
\Eff_{\lse}(\xi,{\xi^{\mr}_{\mathcal{M}'}};\mathcal{M}') = \frac{\lse({\xi^{\mr}_{\mathcal{M}'}},\mathcal{M}')}{\lse(\xi,\mathcal{M}')}.
\end{equation}
Since $\Eff_{\lse}(\xi,{\xi^{\mr}_{\mathcal{M}'}};\mathcal{M}')$ involves ${\xi^{\mr}_{\mathcal{M}'}}$, which is unknown, we derive a lower bound of it in Theorem \ref{thm:lowerboundeff}.
Instead of using $\Eff_{\lse}(\xi,{\xi^{\mr}_{\mathcal{M}'}};\mathcal{M}')$ itself, we can therefore use its lower bound as the stopping rule.
\setcounter{lem}{4}
\begin{lem}\label{lem:ineqofdirder}
For any design $\xi$ and the Mm-$\Phi_p$ design ${\xi^{\mr}_{\mathcal{M}'}}$ that minimizes $\se(\xi,\mathcal{M}')$ or equivalently minimizes $\lse(\xi,\mathcal{M}')$, the following inequality holds:
$$\min\limits_{\boldsymbol{x}\in\Omega} \phi(\boldsymbol{x},\xi)\leq \phi({\xi^{\mr}_{\mathcal{M}'}},\xi)\leq \se({\xi^{\mr}_{\mathcal{M}'}},\mathcal{M}')-\se(\xi,\mathcal{M}')\leq 0,$$
where $\phi(\boldsymbol{x},\xi)$ and $\phi({\xi^{\mr}_{\mathcal{M}'}},\xi)$ are the directional derivatives defined in \eqref{defn:dirder}.
\end{lem}
\begin{thm}[A Lower Bound of $\lse$-Efficiency]\label{thm:lowerboundeff}
Design ${\xi^{\mr}_{\mathcal{M}'}}$ is the Mm-$\Phi_p$ design that minimizes $\lse$ criterion in \eqref{eqn:lse}.
The $\lse$-efficiency defined in \eqref{eqn:robusteff} of any design $\xi$ relative to ${\xi^{\mr}_{\mathcal{M}'}}$ is bounded below by
$$\Eff_{\lse}(\xi,{\xi^{\mr}_{\mathcal{M}'}};\mathcal{M}') \geq 1+2\frac{\min\limits_{\boldsymbol{x}\in\Omega} \phi(\boldsymbol{x},\xi)}{\se(\xi,\mathcal{M}')}.$$
\end{thm}
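For illustration, the stopping check based on Theorem \ref{thm:lowerboundeff} amounts to a one-line computation. In the following Python sketch, the criterion value and the minimum directional derivative are made-up numbers used only to show the arithmetic.

```python
def lse_efficiency_lower_bound(min_dir_deriv, se_value):
    """Lower bound of the lse-efficiency: 1 + 2 * min_x phi(x, xi) / se(xi, M')."""
    return 1.0 + 2.0 * min_dir_deriv / se_value

# With se(xi, M') = 12.3 and min_x phi(x, xi) = -0.04 (illustrative values),
# the design is guaranteed to be at least ~99.3% lse-efficient, so an
# algorithm with a tolerance of 0.99 would stop here.
bound = lse_efficiency_lower_bound(-0.04, 12.3)
```
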
\begin{comment}
\begin{proof}
When $1+2\frac{\min\limits_{\boldsymbol{x}\in\Omega} \phi(\boldsymbol{x},\xi)}{\se(\xi,\mathcal{M}')}<0$, $\Eff_{\lse}(\xi,{\xi^{\mr}_{\mathcal{M}'}};\mathcal{M}') \geq 1+2\frac{\min\limits_{\boldsymbol{x}\in\Omega} \phi(\boldsymbol{x},\xi)}{\se(\xi,\mathcal{M}')}$ holds automatically.
When $1+2\frac{\min\limits_{\boldsymbol{x}\in\Omega} \phi(\boldsymbol{x},\xi)}{\se(\xi,\mathcal{M}')}\geq 0$, that is, $\frac{\min\limits_{\boldsymbol{x}\in\Omega} \phi(\boldsymbol{x},\xi)}{\se(\xi,\mathcal{M}')}\geq -0.5$, define $\frac{\se({\xi^{\mr}_{\mathcal{M}'}},\mathcal{M}')}{\se(\xi,\mathcal{M}')} = a >0$, then it follows immediately from Lemma \ref{lem:ineqofdirder} that
\begin{equation}\label{inequ:eff}
1\geq a = \frac{\se({\xi^{\mr}_{\mathcal{M}'}},\mathcal{M}')}{\se(\xi,\mathcal{M}')}\geq 1+\frac{\min\limits_{\boldsymbol{x}\in\Omega} \phi(\boldsymbol{x},\xi)}{\se(\xi,\mathcal{M}')}\geq 0.5.
\end{equation}
Since the function $\frac{\ln(a)}{\ln(\se(\xi,\mathcal{M}'))}+1-a$ is an increasing function of $\se(\xi,\mathcal{M}')$, $\se(\xi,\mathcal{M}')\geq e$ because of its definition and $a\geq 0.5$, we have
\begin{eqnarray*}
&&\left|\Eff_{\lse}(\xi,{\xi^{\mr}_{\mathcal{M}'}};\mathcal{M}')-\frac{\se({\xi^{\mr}_{\mathcal{M}'}},\mathcal{M}')}{\se(\xi,\mathcal{M}')}\right|= \left|\frac{\ln(a\se(\xi,\mathcal{M}'))}{\ln(\se(\xi,\mathcal{M}'))}-\frac{a\se(\xi,\mathcal{M}')}{\se(\xi,\mathcal{M}')}\right|\\
&=& \left|\frac{\ln(a)}{\ln(\se(\xi,\mathcal{M}'))}+1-a\right|\leq \max(\left|\ln(a)+1-a\right|,\left|1-a\right|)\\
&=& \max(-\ln(a)-1+a,1-a)=1-a.
\end{eqnarray*}
Thus, together with \eqref{inequ:eff}, $\Eff_{\lse}(\xi,{\xi^{\mr}_{\mathcal{M}'}};\mathcal{M}')\geq 2 a-1\geq 1+2\frac{\min\limits_{\boldsymbol{x}\in\Omega} \phi(\boldsymbol{x},\xi)}{\se(\xi,\mathcal{M}')}$.
\end{proof}
\end{comment}
Using the lower bound of $\lse$-efficiency in Theorem \ref{thm:lowerboundeff} as the stopping criterion,
the proposed algorithm terminates when the lower bound $1+2\frac{\min\limits_{\boldsymbol{x}\in\Omega} \phi(\boldsymbol{x},\xi)}{\se(\xi,\mathcal{M}')}$ exceeds a user-specified value, $Tol_{\text{eff}}$.
Here $Tol_{\text{eff}}$ should be set close to 1, say $Tol_{\text{eff}} = 0.99$, or equivalently $\frac{\min\limits_{\boldsymbol{x}\in\Omega} \phi(\boldsymbol{x},\xi)}{\se(\xi,\mathcal{M}')}\geq -0.005$.
With this stopping rule, the sequential algorithm to construct the Mm-$\Phi_p$ design is described in Algorithm \ref{alg:sequential}.
The $MaxIter_1$ is the maximum number of iterations of adding design points, and we set it to be 200.
\begin{algorithm}
\caption{ (\textbf{Mm-$\Phi_p$ Algorithm}) The Sequential Algorithm for Mm-$\Phi_p$ Design. \label{alg:sequential}}
\begin{algorithmic}[1]
\State For each model specification $M_j\in \mathcal{M}'$, construct the local optimal design and calculate the corresponding optimality criterion value $\Phi_p^{\opt_j}$.
\State Generate a candidate pool $\mathcal{C}$ of $N$ points.
\State Choose an initial design points set $\mathcal{X}^{(0)} = \left\{\boldsymbol{x}_1,\cdots,\boldsymbol{x}_{l+1}\right\}$ containing $l+1$ points.
\State Obtain optimal weights $\boldsymbol{\lambda}^{(0)}$ of initial design points set $\mathcal{X}^{(0)}$ using Algorithm \ref{alg:weight} (\textbf{Optimal-Weight Procedure}) and form the initial design $\xi^{(0)} = \left\{\begin{array}{cc} \mathcal{X}^{(0)}\\ \boldsymbol{\lambda}^{(0)}
\end{array}\right\}$.
\State Calculate the lower bound of $\lse$-efficiency of $\xi^{(0)}$:
\[\text{eff.low} = 1+2\frac{\min\limits_{\boldsymbol{x}\in\mathcal{C}} \phi(\boldsymbol{x},\xi^{(0)})}{\se(\xi^{(0)},\mathcal{M}')}.\]
\State Set $r=1$.
\While {$\text{eff.low}<Tol_{\text{eff}}$ and $r< MaxIter_1$}
\State
Add the point $\boldsymbol{x}_r^* = \argmin \limits_{\boldsymbol{x} \in \mathcal{C}} \phi(\boldsymbol{x},\xi^{(r-1)})$ to the current design points set, i.e., $\mathcal{X}^{(r)} = \mathcal{X}^{(r-1)}\cup \{\boldsymbol{x}_r^*\}$,
where $\phi(\boldsymbol{x},\xi^{(r-1)})$ is given in Lemma \ref{lem:dirder2}.
\State Obtain optimal weights $\boldsymbol{\lambda}^{(r)}$ of the current design points set $\mathcal{X}^{(r)}$ using Algorithm \ref{alg:weight} (\textbf{Optimal-Weight Procedure}) and form the current design $\xi^{(r)} = \left\{\begin{array}{cc} \mathcal{X}^{(r)}\\ \boldsymbol{\lambda}^{(r)}
\end{array}\right\}$.
\State Calculate the lower bound of $\lse$-efficiency of $\xi^{(r)}$,
\[\text{eff.low} = 1+2\frac{\min\limits_{\boldsymbol{x}\in\mathcal{C}} \phi(\boldsymbol{x},\xi^{(r)})}{\se(\xi^{(r)},\mathcal{M}')}.\]
\State $r=r+1$.
\EndWhile
\end{algorithmic}
\end{algorithm}
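To make the add-point/reweight structure of Algorithm \ref{alg:sequential} concrete, the following Python sketch implements the analogous Fedorov-Wynn loop for the simplified single-model case of local D-optimality for a univariate logistic model on a grid; the coefficients and grid are illustrative only and not part of the formal development. In this single-model case the design is optimal once $\max_{\boldsymbol{x}} d(\boldsymbol{x},\xi)\leq q$ with $d(\boldsymbol{x},\xi) = w(\boldsymbol{x})\boldsymbol{g}^{\top}(\boldsymbol{x}){\mathsf I}(\xi)^{-1}\boldsymbol{g}(\boldsymbol{x})$, so the most negative directional derivative corresponds to the largest value of $d$.

```python
import numpy as np

# Single-model analogue of the sequential algorithm: D-optimality for a
# univariate logistic model on a grid; coefficients are illustrative only.
beta = np.array([0.0, 1.0])
X = np.linspace(-1.0, 1.0, 51)               # candidate pool C
G = np.stack([np.ones_like(X), X], axis=1)   # basis g(x) = (1, x)^T
p = 1.0 / (1.0 + np.exp(-(G @ beta)))
w = p * (1.0 - p)                            # logistic GLM weights w(x)
q = G.shape[1]

def info(idx, lam):
    # I(xi) = sum_i lam_i w(x_i) g(x_i) g(x_i)^T over the support set
    return (G[idx] * (lam * w[idx])[:, None]).T @ G[idx]

def var_fn(idx, lam):
    # d(x, xi) = w(x) g(x)^T I(xi)^{-1} g(x); D-optimal iff max_x d <= q
    Minv = np.linalg.inv(info(idx, lam))
    return w * np.einsum('ij,jk,ik->i', G, Minv, G)

idx, lam = [0, 25, 50], np.full(3, 1.0 / 3.0)   # initial support: -1, 0, 1
for _ in range(200):                             # outer loop: add points
    for _ in range(500):                         # inner loop: optimal weights
        d_sup = var_fn(idx, lam)[idx]
        lam = lam * d_sup / q                    # multiplicative update
        lam /= lam.sum()
    d = var_fn(idx, lam)
    j = int(np.argmax(d))                        # most negative phi = largest d
    if d[j] <= q + 1e-6:                         # all phi(x, xi) >= 0: stop
        break
    idx.append(j)
    lam = np.append(lam, lam.mean())
    lam /= lam.sum()

support = sorted(float(X[i]) for i, l in zip(idx, lam) if l > 1e-3)
```

For this symmetric example, the loop terminates with equal weights on the two boundary points $\pm 1$, the locally D-optimal design on this restricted domain.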
In Section \ref{sec: convergence}, we provide some theoretical properties on the convergence of the Mm-$\Phi_p$ Algorithm.
Note that the \textbf{Mm-$\Phi_p$ Algorithm} requires optimizing the weights $\boldsymbol{\lambda}^{(r)}$ of the current design points in each sequential iteration.
Section \ref{sec: weight updating} describes the procedure on how to optimize the weight given the design points.
\subsection{Convergence of the Mm-$\Phi_p$ Algorithm}\label{sec: convergence}
The sequential nature of the proposed \textbf{Mm-$\Phi_p$ Algorithm} (i.e., Algorithm \ref{alg:sequential}) makes it efficient in computation as it adds one design point in each iteration.
Moreover, we can establish the theoretical convergence of Algorithm \ref{alg:sequential}, which is stated as follows.
\begin{thm}[Convergence of Algorithm \ref{alg:sequential} (Mm-$\Phi_p$ Algorithm)]\label{thm:cong-algo2}
Assume the candidate pool $\mathcal{C}$ contains all the support points of the Mm-$\Phi_p$ design ${\xi^{\mr}_{\mathcal{M}'}}$.
The design constructed by Algorithm \ref{alg:sequential} converges to ${\xi^{\mr}_{\mathcal{M}'}}$ that minimizes $\lse(\xi,\mathcal{M}')$, i.e.,
\[\lim\limits_{r\rightarrow\infty} \lse(\xi^{(r)},\mathcal{M}') = \lse({\xi^{\mr}_{\mathcal{M}'}},\mathcal{M}').\]
\end{thm}
\begin{comment}
\begin{proof}
We show the proof for the scenario $p>0$ in the $\Phi_p$-criterion. The proof for $p=0$ could be done similarly.
The proof is established by proof of contradiction.
Suppose that the Algorithm 1 does not converge to the Mm-$\Phi_p$ design $\xi^*$, then we have
$$\lim_{r\rightarrow\infty} \se(\xi^{(r)},\mathcal{M}') > \se({\xi^{\mr}_{\mathcal{M}'}},\mathcal{M}').$$
For any iteration $r+1\geq 1$, since $\mathcal{X}^{(r)}\subset \mathcal{X}^{(r+1)}$ and the Optimal-Weight Procedure returns optimal weight vector that minimizes $\se$ criterion,
the design $\xi^{(r+1)}$ cannot be worse than the design in the previous iteration $\xi^{(r)}$, i.e.,
$$\se(\xi^{(r+1)},\mathcal{M}')\leq \se(\xi^{(r)},\mathcal{M}').$$
Thus, for all $r\geq 0$, there exists $a>0$, such that,
$$\se(\xi^{(r)},\mathcal{M}')>\se({\xi^{\mr}_{\mathcal{M}'}},\mathcal{M}')+a.$$
According to Lemma \ref{lem:ineqofdirder},
$$\phi(\boldsymbol{x}_r^*,\xi^{(r)})\leq \phi({\xi^{\mr}_{\mathcal{M}'}},\xi^{(r)})\leq \se({\xi^{\mr}_{\mathcal{M}'}},\mathcal{M}')-\se(\xi^{(r)},\mathcal{M}')<-a,$$
for any $r\geq 0$.
Then, the Taylor expansion of $\se((1-\alpha)\xi^{(r)}+\alpha\boldsymbol{x}_r^*,\mathcal{M}')$ is upper bounded by
\begin{eqnarray}\label{eqn:taylorlb}
\se((1-\alpha)\xi^{(r)}+\alpha\boldsymbol{x}_r^*,\mathcal{M}') &=& \se(\xi^{(r)},\mathcal{M}')+\phi(\boldsymbol{x}_r^*,\xi^{(r)})\alpha+\frac{u}{2}\alpha^2\nonumber\\
&<& \se(\xi^{(r)},\mathcal{M}')-a\alpha+\frac{u}{2}\alpha^2,
\end{eqnarray}
where $u\geq 0$ is the second-order directional derivative of $\se$ evaluated at a value between 0 and $\alpha$.
For Algorithm 1, the criterion $\se$ is minimized, for all $0\leq\alpha\leq 1$ we have
\begin{eqnarray*}
\se(\xi^{(r+1)},\mathcal{M}') &\leq& \se((1-\alpha)\xi^{(r)}+\alpha\boldsymbol{x}_r^*,\mathcal{M}')\\
&<& \se(\xi^{(r)},\mathcal{M}')-a\alpha+\frac{u}{2}\alpha^2,
\end{eqnarray*}
or equivalently,
\begin{eqnarray*}
&&\se(\xi^{(r+1)},\mathcal{M}') - \se(\xi^{(r)},\mathcal{M}') < -a\alpha+\frac{u}{2}\alpha^2 = \frac{u}{2}\left(\alpha-\frac{a}{u}\right)^2-\frac{a^2}{2u}\\
&<&\left\{\begin{array}{ll}-\frac{a^2}{2u}<0,\,\,\,\,&\text{choosing}\,\,\alpha = \frac{a}{u}\,\,\text{if}\,\,\,\,\,a\leq u\\
\frac{u-4a}{8}<0,\,\,\,\,&\text{choosing}\,\,\alpha = 0.5\,\,\text{if}\,\,\,\,\,a> u
\end{array}\right..
\end{eqnarray*}
As a result, $\lim\limits_{r\rightarrow \infty}\se(\xi^{(r)},\mathcal{M}') = -\infty$, which contradicts with the fact that $\se(\xi^{(r)},\mathcal{M}')\geq 0$ for any design $\xi^{(r)}$. Thus,
$$\lim\limits_{r\rightarrow\infty} \se(\xi^{(r)},\mathcal{M}') = \se({\xi^{\mr}_{\mathcal{M}'}},\mathcal{M}').$$
Since $\ln(\cdot)$ on $[1,\infty)$ is a continuous function,
$$\lim\limits_{r\rightarrow\infty} \lse(\xi^{(r)},\mathcal{M}') = \lse({\xi^{\mr}_{\mathcal{M}'}},\mathcal{M}').$$
\end{proof}
\end{comment}
Besides its theoretically guaranteed convergence, Algorithm \ref{alg:sequential} also converges fast in practice: it took no more than 50 iterations in all the numerical examples, although the maximum number of iterations is set to be 200. More details about the speed of convergence and computational time are reported in Section \ref{sec:examples}.
We would like to remark that, at the beginning of Algorithm \ref{alg:sequential}, the local optimal design and the corresponding optimality criterion value $\Phi_p^{\opt_j}$ need to be calculated for each model specification $M_j \in \mathcal{M}'$.
This is because they are involved in $\se(\xi, \mathcal{M}')$ and all its derivatives.
However, we only need to compute them once. Using the algorithm proposed by \cite{li2018efficient}, we can construct local $\Phi_p$-optimal designs for GLMs efficiently with guaranteed convergence.
\subsection{An Optimal-Weight Procedure Given Design Points}\label{sec: weight updating}
Based on Corollary \ref{thm:equi_weight}, with a given set of design points $\boldsymbol{x}_1,\cdots,\boldsymbol{x}_n$, a sufficient condition that $\boldsymbol{\lambda}^*$ minimizes $\se(\xi^{\boldsymbol{\lambda}},\mathcal{M}')$ is:
\[
\phi(\boldsymbol{x}_i,\xi^{\boldsymbol{\lambda}^*}) = 0, \mbox{ for } i=1, \ldots, n,
\]
or equivalently (based on Lemma \ref{lem:dirder2}),
\begin{equation}\label{eqn:suffweight1}
\small
\left\{\begin{array}{rll}
q\sum\limits_{j=1}^m\tilde{\Phi}_0^j(\xi^{\boldsymbol{\lambda}^*}) &= \sum\limits_{j=1}^m \tilde{\Phi}_0^j(\xi^{\boldsymbol{\lambda}^*}) w_j(\boldsymbol{x}_i)\boldsymbol{g}_j^{\top}(\boldsymbol{x}_i){\mathsf M}_j(\xi^{\boldsymbol{\lambda}^*})\boldsymbol{g}_j(\boldsymbol{x}_i), & p=0;\\
q^{1/p}\sum\limits_{j=1}^m\tilde{\Phi}_p^j(\xi^{\boldsymbol{\lambda}^*})\Phi_p^j(\xi^{\boldsymbol{\lambda}^*}) &= \sum\limits_{j=1}^m\tilde{\Phi}_p^j(\xi^{\boldsymbol{\lambda}^*})w_j(\boldsymbol{x}_i)\left(\tr\left({\mathsf F}_j(\xi^{\boldsymbol{\lambda}^*})\right)^p\right)^{1/p-1}\boldsymbol{g}_j^{\top}(\boldsymbol{x}_i) {\mathsf M}_j(\xi^{\boldsymbol{\lambda}^*})\boldsymbol{g}_j(\boldsymbol{x}_i),& p>0.
\end{array}\right.
\end{equation}
where
$\tilde{\Phi}^j_p(\xi) = \left[\Phi_p^{\opt_j}\right]^{-1}\exp\left(\frac{\Phi_p^j(\xi)}{\Phi_p^{\opt_j}}\right)$ and ${\mathsf M}_j(\xi) = {\mathsf I}_j(\xi)^{-1}{\mathsf B}_j^{\top}{\mathsf F}_j(\xi)^{p-1}{\mathsf B}_j{\mathsf I}_j(\xi)^{-1}$ with ${\mathsf B}_j = \left.\frac{\partial \boldsymbol{f}(\boldsymbol{\beta})}{\partial \boldsymbol{\beta}^{\top}}\right|_{\boldsymbol{\beta} = \boldsymbol{\beta}_j}$ and ${\mathsf F}_j(\xi) = {\mathsf B}_j{\mathsf I}_j(\xi)^{-1}{\mathsf B}_j^{\top}$.
For convenience, we denote the right-hand side of \eqref{eqn:suffweight1} as $d_p(\boldsymbol{x}_i,\xi^{\boldsymbol{\lambda}^*})$.
For \emph{any} weight vector $\boldsymbol{\lambda} = [\lambda_1,\ldots,\lambda_n]^\top$, with simple linear algebra, it is easy to obtain
\begin{equation}\label{eqn:suffweight2}
\left\{\begin{array}{rlll}
q\sum\limits_{j=1}^m\tilde{\Phi}_0^j(\xi^{\boldsymbol{\lambda}})
&=& \sum\limits_{i=1}^n\lambda_id_0(\boldsymbol{x}_i,\xi^{\boldsymbol{\lambda}}), & p=0;\\
q^{1/p}\sum\limits_{j=1}^m\tilde{\Phi}_p^j(\xi^{\boldsymbol{\lambda}})\Phi_p^j(\xi^{\boldsymbol{\lambda}}) &=& \sum\limits_{i=1}^n \lambda_i d_p(\boldsymbol{x}_i,\xi^{\boldsymbol{\lambda}}), & p>0.
\end{array}\right.
\end{equation}
Combining \eqref{eqn:suffweight1} and \eqref{eqn:suffweight2}, the sufficient condition of the optimal weights is equivalent to
\begin{equation}\label{eqn:suffweight3}
\sum\limits_{s=1}^n\lambda_s^*d_p(\boldsymbol{x}_s,\xi^{\boldsymbol{\lambda}^*}) = d_p(\boldsymbol{x}_i,\xi^{\boldsymbol{\lambda}^*}), \,\,\,\, p\geq 0,
\end{equation}
for all design points $\boldsymbol{x}_1,\cdots,\boldsymbol{x}_n$. To obtain optimal weight $\boldsymbol{\lambda}^*$ that minimizes $\se(\xi^{\boldsymbol{\lambda}},\mathcal{M}')$, the current weights of the design points could be adjusted according to the two sides of \eqref{eqn:suffweight3}.
For a design point $\boldsymbol{x}_i$, if $d_p(\boldsymbol{x}_i,\xi^{\boldsymbol{\lambda}})>\sum\limits_{s=1}^n\lambda_sd_p(\boldsymbol{x}_s,\xi^{\boldsymbol{\lambda}})$, then the weight of point $\boldsymbol{x}_i$ should be increased based on \eqref{eqn:suffweight3}. Conversely, if $d_p(\boldsymbol{x}_i,\xi^{\boldsymbol{\lambda}})<\sum\limits_{s=1}^n\lambda_sd_p(\boldsymbol{x}_s,\xi^{\boldsymbol{\lambda}})$, the weight of point $\boldsymbol{x}_i$ should be decreased.
Thus, following the similar idea in classic multiplicative algorithms \citep{78silvey, 10yu},
the ratio $\left(d_p(\boldsymbol{x}_i,\xi^{\boldsymbol{\lambda}})\left/\sum\limits_{s=1}^n\lambda_sd_p(\boldsymbol{x}_s,\xi^{\boldsymbol{\lambda}})\right.\right)^{\delta}$ would be a good adjustment for the weight of design point $\boldsymbol{x}_i$.
Since this weight updating scheme is inspired by the classic multiplicative algorithm, we call it a modified multiplicative procedure and describe it in Algorithm \ref{alg:weight} in Appendix.
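As an illustration of this modified multiplicative update for $p=0$, the following Python sketch applies it to a two-model univariate logistic setting with $\boldsymbol{\beta}_1=[-1.4,2.3]^{\top}$ and $\boldsymbol{\beta}_2=[0.5,1.2]^{\top}$ and design points fixed at $\{-1,0,1\}$ (the same setting as Example 1 below). Several simplifying assumptions are made here: $\Phi_0$ is taken as $\det({\mathsf I}(\xi))^{-1/q}$, the usual $p\to 0$ limit of the $\Phi_p$ class; the local optima $\Phi_0^{\opt_j}$ are approximated over the same three points rather than the full design region; and a damped step $\delta=0.5$ is used. Because of these simplifications, the resulting weights need not match the exact values reported in Example 1; the sketch only demonstrates that the update drives $d_0(\boldsymbol{x}_i,\xi^{\boldsymbol{\lambda}})$ to a common value over the support, as the optimality condition requires.

```python
import numpy as np

# Two logistic models on design points {-1, 0, 1}; basis g(x) = (1, x)^T.
# Phi_0 is taken as det(I(xi))^{-1/q} (assumed p -> 0 limit of Phi_p).
betas = [np.array([-1.4, 2.3]), np.array([0.5, 1.2])]
xs = np.array([-1.0, 0.0, 1.0])
G = np.stack([np.ones_like(xs), xs], axis=1)
q = 2

def glm_weights(beta):
    prob = 1.0 / (1.0 + np.exp(-(G @ beta)))
    return prob * (1.0 - prob)               # logistic weights w_j(x_i)

W = [glm_weights(b) for b in betas]

def info(lam, w):
    return (G * (lam * w)[:, None]).T @ G    # I_j(xi^lam)

def phi0(lam, w):
    return np.linalg.det(info(lam, w)) ** (-1.0 / q)

# crude stand-ins for Phi_0^{opt_j}: locally D-optimal weights over the
# same three points, via the classical update lam_i <- lam_i d_i / q
opts = []
for w in W:
    lam = np.full(3, 1.0 / 3.0)
    for _ in range(2000):
        Minv = np.linalg.inv(info(lam, w))
        d = w * np.einsum('ij,jk,ik->i', G, Minv, G)
        lam = lam * d / q
        lam /= lam.sum()
    opts.append(phi0(lam, w))

def d0(lam):
    # d_0(x_i, xi^lam) = sum_j PhiTilde_0^j(xi^lam) w_j(x_i) g^T I_j^{-1} g
    out = np.zeros(3)
    for w, opt in zip(W, opts):
        tilde = np.exp(phi0(lam, w) / opt) / opt
        Minv = np.linalg.inv(info(lam, w))
        out += tilde * w * np.einsum('ij,jk,ik->i', G, Minv, G)
    return out

delta = 0.5                                   # damped multiplicative step
lam = np.full(3, 1.0 / 3.0)
for _ in range(5000):
    dd = d0(lam) ** delta
    lam = lam * dd / (lam @ dd)               # modified multiplicative update
    lam /= lam.sum()
# at a fixed point, d_0(x_i) is constant over the support of lam
```
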
\begin{comment}
\begin{algorithm}[ht]
\caption{(\textbf{Optimal-Weight Procedure}) A Modified Multiplicative Approach. \label{alg:weight}}
\begin{algorithmic}[1]
\State Assign a uniform initial weight vector $\boldsymbol{\lambda}^{(0)} = [\lambda_1^{(0)},...,\lambda_n^{(0)}]^{\top}$, and $k=0$.
\While {$change>Tol$ and $k<MaxIter_2$}
\For {$i = 1, \ldots, n$}
\State Update the weight of design point $\boldsymbol{x}_i$:
\begin{eqnarray}\label{for:multialg}
&& \lambda_i^{(k+1)} = \lambda_i^{(k)} \frac{\left(d_p(\boldsymbol{x}_i,\xi^{\boldsymbol{\lambda}^{(k)}})\right)^{\delta}}{\sum\limits_{s=1}^n\lambda_s^{(k)}\left(d_p(\boldsymbol{x}_s,\xi^{\boldsymbol{\lambda}^{(k)}})\right)^{\delta}},\nonumber\\
&=& \left\{\begin{array}{ll}
\lambda_i^{(k)} \frac{\left(\sum\limits_{j=1}^m \tilde{\Phi}_0^j(\xi^{\boldsymbol{\lambda}^{(k)}}) w_j(\boldsymbol{x}_i)\boldsymbol{g}_j^{\top}(\boldsymbol{x}_i){\mathsf M}_j(\xi^{\boldsymbol{\lambda}^{(k)}})\boldsymbol{g}_j(\boldsymbol{x}_i)\right)^\delta}{\sum\limits_{s=1}^n \lambda_s^{(k)}\left(\sum\limits_{j=1}^m \tilde{\Phi}_0^j(\xi^{\boldsymbol{\lambda}^{(k)}}) w_j(\boldsymbol{x}_i)\boldsymbol{g}_j^{\top}(\boldsymbol{x}_i){\mathsf M}_j(\xi^{\boldsymbol{\lambda}^{(k)}})\boldsymbol{g}_j(\boldsymbol{x}_i)\right)^\delta}, & p=0;\\
\lambda_i^{(k)} \frac{\left(\sum\limits_{j=1}^m\tilde{\Phi}_p^j(\xi^{\boldsymbol{\lambda}^{(k)}})w_j(\boldsymbol{x}_i)\left(\tr\left({\mathsf F}_j(\xi^{\boldsymbol{\lambda}^{(k)}})\right)^p\right)^{1/p-1}\boldsymbol{g}_j^{\top}(\boldsymbol{x}_i){\mathsf M}_j(\xi^{\boldsymbol{\lambda}^{(k)}})\boldsymbol{g}_j(\boldsymbol{x}_i)\right)^\delta}{\sum\limits_{s=1}^n \lambda_s^{(k)}\left(\sum\limits_{j=1}^m\tilde{\Phi}_p^j(\xi^{\boldsymbol{\lambda}^{(k)}})w_j(\boldsymbol{x}_i)\left(\tr\left({\mathsf F}_j(\xi^{\boldsymbol{\lambda}^{(k)}})\right)^p\right)^{1/p-1}\boldsymbol{g}_j^{\top}(\boldsymbol{x}_i){\mathsf M}_j(\xi^{\boldsymbol{\lambda}^{(k)}})\boldsymbol{g}_j(\boldsymbol{x}_i)\right)^\delta }, & p>0.
\end{array}\right.
\end{eqnarray}
\State $change=\max\limits_{i=1,\cdots,n}(|\lambda_i^{(k+1)}-\lambda_i^{(k)}|)$.
\State $k=k+1$.
\EndFor
\EndWhile
\end{algorithmic}
\end{algorithm}
There are three user-specified parameters $\delta$, $Tol$, and $MaxIter_2$ in Algorithm \ref{alg:weight}.
$Tol$ is the tolerance of convergence, and we usually set it to be $Tol=1e-15$.
$MaxIter_2$ is the maximum number of iterations and we set it to be $MaxIter_2=200$ in all numerical examples.
The parameter $\delta\in (0,1]$ plays the same role as in the classical multiplicative algorithm \citep{78silvey, 78titterington}, which is to control the speed of the convergence.
According to the numerical study by \cite{74fellman} and \cite{83tor}, $\delta$ is often chosen as 1 for D-optimality, and 0.5 for A- or EI-optimality.
\end{comment}
We should remark that \cite{10yu} proved the convergence of the classical multiplicative algorithm \citep{78silvey} for constructing local optimal designs under a class of optimality criteria $\tr({\mathsf I}(\xi^{\boldsymbol{\lambda}}, M)^p), p<0$, and \cite{li2018efficient} extended the results to a more general class of $\Phi_p$-optimality.
However, the proof in \cite{10yu} cannot be easily extended to prove the convergence of Algorithm \ref{alg:weight}, since the derivative of $\se(\xi^{\boldsymbol{\lambda}}, \mathcal{M}')$ with respect to $\lambda_i$ cannot be reformulated into the general form in Equation (2) of \cite{10yu}, where only one model is involved.
Nevertheless, Lemma \ref{lem:ConvWeights} has shown that the optimization problem solved by Algorithm \ref{alg:weight} is the convex optimization problem
\begin{equation}\label{eqn:optprob}
\begin{array}{rrclcl}
\displaystyle \min_{\boldsymbol{\lambda}} & \multicolumn{3}{l}{\se(\xi^{\boldsymbol{\lambda}},\mathcal{M}') = \sum\limits_{j=1}^m \exp\left(\frac{\Phi_p^j(\xi^{\boldsymbol{\lambda}})}{\Phi_p^{\opt_j}}\right)} \\
\textrm{s.t.} & \mathbf{1}^{\top}\boldsymbol{\lambda} = 1, \ \boldsymbol{\lambda} \geq \mathbf{0}
\end{array}
\end{equation}
with linear constraints.
Existing optimization tools are available to solve such a convex optimization problem.
Based on our empirical study, Algorithm \ref{alg:weight} converges to a solution as good as those from the commonly-used optimization tools, but with a much faster computational speed.
To show the strength of the proposed Algorithm \ref{alg:weight}, we use a small and simple example with three design points and two $\boldsymbol{\beta}$ values. In the following \emph{Example 1}, we compare Algorithm \ref{alg:weight} with two existing convex optimization tools: the \verb|fmincon| function in \textsc{Matlab} using the interior-point method, and the \verb|CVX| toolbox in \textsc{Matlab} for convex optimization.
To solve an optimization problem with an exponential objective function, \verb|CVX| uses a successive approximation heuristic that approximates the exponential function locally by a polynomial and solves the approximate model using symmetric primal/dual solvers \citep{grant2009cvx}.
\emph{Example 1.} Consider a univariate logistic regression model with the experimental domain $\Omega = [-1,1]$, basis function $\boldsymbol{g} = [1,x]^{\top}$ and a parameter space $\mathcal{B} = \{\boldsymbol{\beta}_1, \boldsymbol{\beta}_2\}$ consisting of only two possible regression coefficients $\boldsymbol{\beta}_1 = [-1.4,2.3]^{\top}$ and $\boldsymbol{\beta}_2 = [0.5,1.2]^{\top}$.
The model space is $\mathcal{M} = \{M_1 = (h,\boldsymbol{g},\boldsymbol{\beta}_1),M_2 = (h,\boldsymbol{g},\boldsymbol{\beta}_2)\}$, where $h$ is the link function of logistic regression.
Given design points $x \in \{-1,0,1\}$, all three optimization methods return the same optimal weights,
\[\boldsymbol{\lambda}^* = \{0.3832,0.2660,0.3508\}.\]
Table \ref{tab:comptime} reports the computational times of the three comparison methods.
The results clearly show that Algorithm \ref{alg:weight} is far more efficient than both \verb|CVX| and \verb|fmincon|.
Furthermore, Algorithm \ref{alg:weight} boosts the speed of sequential Algorithm \ref{alg:sequential} dramatically as finding the optimal weights is done in every iteration of the sequential algorithm.
\begin{table}[ht]
\centering
\caption{Computational Times (in seconds) of Three Optimization Methods. \label{tab:comptime}}
\begin{tabular}{|c|c|c|}\hline
CVX & fmincon & Algorithm \ref{alg:weight} (Optimal-Weight Procedure)\\\hline
4.04 & 1.44 & 0.17 \\\hline
\end{tabular}%
\end{table}
It is worth pointing out that, occasionally, $\tilde{\Phi}_p^j(\xi^{(r)}) = \left[\Phi_p^{\opt_j}\right]^{-1}\exp\left(\frac{\Phi_p^j(\xi^{(r)})}{\Phi_p^{\opt_j}}\right)$ and $\tilde{\Phi}_p^j(\xi^{\boldsymbol{\lambda}^{(k)}}) = \left[\Phi_p^{\opt_j}\right]^{-1}\exp\left(\frac{\Phi_p^j(\xi^{\boldsymbol{\lambda}^{(k)}})}{\Phi_p^{\opt_j}}\right)$ in \eqref{for:multialg} of Algorithm \ref{alg:weight}, as well as the directional derivative $\phi(\boldsymbol{x},\xi^{(r)})$ in Algorithm \ref{alg:sequential}, can become extremely large and cause overflow, a well-recognized issue with the Log-Sum-Exp approximation in the literature.
One remedy is to introduce a constant $c$ and write $\exp\left(\frac{\Phi_p^j(\xi^{(r)})}{\Phi_p^{\opt_j}}\right) = \exp(c)\exp\left(\frac{\Phi_p^j(\xi^{(r)})}{\Phi_p^{\opt_j}}-c\right)$.
This constant scaling factor $\exp(c)$ is eventually canceled in \eqref{for:multialg} of Algorithm \ref{alg:weight} and does not affect the search for the next design point in Algorithm \ref{alg:sequential}.
We set $c = \left\lceil\max\limits_j\left(\frac{\Phi_p^j(\xi^{\boldsymbol{\lambda}^{(k)}})}{\Phi_p^{\opt_j}}\right)-500\right\rceil$ in Algorithm \ref{alg:weight} and $c = \left\lceil\max\limits_j\left(\frac{\Phi_p^j(\xi^{(r)})}{\Phi_p^{\opt_j}}\right)-500\right\rceil$ in Algorithm \ref{alg:sequential} whenever overflow occurs.
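Computationally, this remedy is the standard numerically stable evaluation of exponential ratios. The sketch below uses the simpler shift $c = \max_j r_j$ (rather than the ceiling-based choice described here) and made-up exponents, to show that the common factor $\exp(c)$ cancels in any ratio of weighted sums.

```python
import numpy as np

def stable_exp_weights(r):
    """Compute exp(r_j) / sum_s exp(r_s) without overflow by factoring out
    exp(c) with c = max_j r_j; the common factor cancels in the ratio."""
    c = np.max(r)
    e = np.exp(r - c)          # all shifted exponents are <= 0
    return e / e.sum()

# exponents of this size would overflow np.exp if evaluated directly
r = np.array([1000.0, 998.0, 990.0])
weights = stable_exp_weights(r)
```
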
\section{Numerical Examples}\label{sec:examples}
In this section, we conduct several numerical examples to evaluate the performance of the proposed Mm-$\Phi_p$ design under different types of model uncertainty.
The performance of the proposed Mm-$\Phi_p$ design is compared to the compromise design proposed by \cite{woods2006designs}.
As we have clarified in Section \ref{subsec:robvscom}, there are two types of compromise design.
The eff-compromise design ${\xi^{\effcom}_{\mathcal{M}'}}$ aims at maximizing the average $\Phi_p$-efficiency and the $\Phi_p$-compromise design ${\xi^{\phicom}_{\mathcal{M}'}}$ aims at minimizing the average $\Phi_p$-optimality criterion.
The latter coincides with the Bayesian optimal design when only the uncertainty from unknown regression coefficients is considered.
For all the designs in the examples, the candidate pool $\mathcal{C}$ is constructed by grid points and each dimension of $\boldsymbol{x}$ has 51 equally spaced grid points.
We use the default uniform prior distribution on the model specification for the compromise designs. For $\boldsymbol{f}(\boldsymbol{\beta}) = [f_1(\boldsymbol{\beta}),...,f_q(\boldsymbol{\beta})]^{\top}$ in $\Phi_p(\xi,M)$ in \eqref{eq:phi_p}, we set $f_{j}(\boldsymbol{\beta}) = \beta_{j}$.
\subsection{Model Uncertainty}
In the following \emph{Example 2}, we investigate the performance of the Mm-$\Phi_p$ design and algorithm when the uncertainties are involved in both the link functions and basis functions in the model space $\mathcal{M}$.
\emph{Example 2.} For an experiment with $d=2$ input variables and one binary response, consider both logistic regression model and probit model, and possible polynomial basis functions up to degree 2, i.e.,
$$\mathcal{G} = \left\{\boldsymbol{g}_1 = (1,x_1,x_2)^{\top},
\boldsymbol{g}_2 = (1, x_1,x_2,x_1x_2)^{\top}, \boldsymbol{g}_3 = (1, x_1,x_2,x_1x_2,x_1^2,x_2^2)^{\top} \right\}.$$
For the basis $\boldsymbol{g}_3$, the regression coefficients $\boldsymbol{\beta}_3 = [\beta_{3,1},\cdots,\beta_{3,6}]^{\top}$ are drawn randomly from the standard multivariate normal distribution.
For the basis $\boldsymbol{g}_2$, the regression coefficients $\boldsymbol{\beta}_2 = [\beta_{2,1},\cdots, \beta_{2,4}]^{\top}$ are drawn independently with $\beta_{2,j} \sim \text{N}(\beta_{3,j}, (0.5\beta_{3,j})^2)$, for $j = 1,2,3,4$.
The variance $(0.5\beta_{3,j})^2$ that depends on the regression coefficient $\beta_{3,j}$ allows a larger perturbation for $\beta_{2,j}$ when the corresponding $\beta_{3,j}$ is large.
It is to accommodate the fact that the values of the regression coefficients are likely to change when the quadratic terms are removed.
For the basis $\boldsymbol{g}_1$, the regression coefficients $\boldsymbol{\beta}_1 = [\beta_{1,1},\beta_{1,2},\beta_{1,3}]^{\top}$ are drawn independently with $\beta_{1,i} \sim \text{N}(\beta_{3,i}, (0.5\beta_{3,i})^2)$, for $i=1,2,3$. Thus, the model space $\mathcal{M}$ consists of six models: $\mathcal{M} = \{M_1 = (\text{probit}, \boldsymbol{g}_1,\boldsymbol{\beta}_1), M_2 = (\text{probit}, \boldsymbol{g}_2,\boldsymbol{\beta}_2), M_3 = (\text{probit}, \boldsymbol{g}_3,\boldsymbol{\beta}_3), M_4 = (\text{logit}, \boldsymbol{g}_1,\boldsymbol{\beta}_1), M_5 = (\text{logit}, \boldsymbol{g}_2,\boldsymbol{\beta}_2), M_6 = (\text{logit}, \boldsymbol{g}_3,\boldsymbol{\beta}_3)\}$.
We generate 100 parameter sets $\mathcal{B} = \{\boldsymbol{\beta}_1, \boldsymbol{\beta}_2, \boldsymbol{\beta}_3\}$ to form 100 model sets.
For each generated model set, the Mm-$\Phi_p$ design, eff-compromise design, and $\Phi_p$-compromise design are constructed, respectively.
To compare the designs, we use the $\Phi_p$-efficiency defined in \eqref{eqn:phieff} as a larger-the-better performance measure.
In particular, we consider $\Phi_0(\xi,M)$ (i.e., $\lim\limits_{p\rightarrow 0}\Phi_p(\xi,M)$) which is the D-optimality and $\Phi_1(\xi,M)$ which is the A-optimality.
For each model space, we compute the $\Phi_p$-efficiency in \eqref{eqn:phieff} of all three designs relative to the corresponding local optimal design, and the local optimal design ${\xi^{\opt}_M}$ is obtained by the algorithm of \cite{li2018efficient}.
For each model space, we can then calculate the worst-case efficiency as $\min\limits_{M_i\in\mathcal{M}}\eff_{\Phi_p}(\xi, \xi^{\opt}_{M_i}; M_i)$.
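For a concrete sketch of this computation (the quadratic basis, coefficient vectors, candidate design, and the uniform reference design standing in for the local optimal design below are all illustrative, not the ones used in the study):

```python
import numpy as np

def info_matrix(design, beta, basis):
    # Normalized information matrix M(xi) = sum_i w_i * lambda(eta_i) g(x_i) g(x_i)^T
    # for a logistic model, where lambda(eta) = p(1 - p) is the GLM weight.
    M = np.zeros((len(beta), len(beta)))
    for x, w in design:
        g = basis(x)
        p = 1.0 / (1.0 + np.exp(-(g @ beta)))
        M += w * p * (1.0 - p) * np.outer(g, g)
    return M

def d_eff(M, M_opt):
    # D-efficiency relative to a reference design
    return (np.linalg.det(M) / np.linalg.det(M_opt)) ** (1.0 / M.shape[0])

def a_eff(M, M_opt):
    # A-efficiency relative to a reference design
    return np.trace(np.linalg.inv(M_opt)) / np.trace(np.linalg.inv(M))

basis = lambda x: np.array([1.0, x, x * x])                        # quadratic basis on [-1, 1]
models = [np.array([3.0, -3.0, 8.0]), np.array([2.0, -2.0, 6.0])]  # illustrative betas

xi = [(-1.0, 1 / 3), (0.35, 1 / 3), (1.0, 1 / 3)]       # candidate design (point, weight)
ref = [(x, 1 / 5) for x in np.linspace(-1.0, 1.0, 5)]   # stand-in for the local optimum

worst_d = min(d_eff(info_matrix(xi, b, basis), info_matrix(ref, b, basis)) for b in models)
worst_a = min(a_eff(info_matrix(xi, b, basis), info_matrix(ref, b, basis)) for b in models)
```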
\begin{figure}[hbtp]
\centering
\subfloat[A-optimality]
{{\includegraphics[width=6cm]{2dProLogAEffBoxPlot.eps}}}
\qquad
\subfloat[D-optimality]
{{\includegraphics[width=6cm]{2dProLogDEffBoxPlot.eps}}}
\caption{Boxplot of Worst-Case A- and D-Efficiency of Mm-$\Phi_p$ Design, Eff-Compromise Design, and $\Phi_p$-Compromise Design across 100 Randomly Generated Model Spaces}
\label{fig:2dProLogBoxPlot}
\end{figure}
\begin{table}[htbp]
\centering
\caption{Minimum and Median of the Worst-Case A- and D-Efficiency across 100 Randomly Generated Model Spaces for Comparison of Designs}
\begin{tabular}{@{\extracolsep{4pt}}|lcccc|@{}}
\hline
& \multicolumn{2}{c}{Worst-Case A-Efficiency} & \multicolumn{2}{c}{Worst-Case D-Efficiency}\vline\\
\cline{2-3} \cline{4-5}
& $\min$ & $\median$ & $\min$ & $\median$ \\
\hline
Mm-$\Phi_p$ Design & 0.55 & 0.75 &0.68 & 0.86\\
Eff-Compromise Design & 0.31 & 0.73 &0.58 & 0.85\\
$\Phi_p$-Compromise Design & 0.32 & 0.65 &0.55 & 0.82\\
\hline
\end{tabular}%
\label{tab:2dProLogADEff}%
\end{table}
Figure \ref{fig:2dProLogBoxPlot} shows the boxplot of the worst-case A- and D-efficiency of the Mm-$\Phi_p$ design, eff-compromise design and $\Phi_p$-compromise design across 100 different model sets. The red asterisks ``$*$" in the boxplot denote the minimum worst-case A- and D-efficiency, and the larger the minimum, the better the design. Table \ref{tab:2dProLogADEff} summarizes the minimum and median of the worst-case A- and D-efficiency of the three designs.
The results show that the Mm-$\Phi_p$ design returns the largest values on the minimum and median of the worst-case efficiency.
We also notice that the eff-compromise design often gives the highest mean efficiency for a given model space, which is expected since it is designed to maximize the mean efficiency.
However, the mean A- and D-efficiency of all three designs are comparable on average over the 100 model sets.
The computation times of Algorithm 1 to construct the Mm-$\Phi_p$ design are 7.59 and 6.18 seconds for A- and D-optimality, respectively.
\subsection{Uncertain Regression Coefficients}
In the following \emph{Example 3}, we further illustrate the advantages of the Mm-$\Phi_p$ design through an example with uncertain regression coefficients, where the link function $h$ and basis functions $\boldsymbol{g}$ are specified.
Note that when the regression coefficient space $\mathcal{B}$ is continuous, a discretization is needed.
In \emph{Example 3}, the performance of the proposed design and algorithm over the unsampled values of regression coefficient $\boldsymbol{\beta}$ is investigated.
\emph{Example 3}. For a univariate logistic regression model with experimental domain $\Omega = [-1,1]$ and a quadratic basis, i.e. $\boldsymbol{g} (x) = [1,x,x^2]^{\top}$, consider a regression coefficient space $\mathcal{B} = \{\beta_1\in[0,6],\beta_2\in[-6,0],\beta_3\in[5,11]\}$.
Since $\mathcal{B}$ is continuous, we choose a Sobol sample of size twenty-six and the centroid $\boldsymbol{\beta}_c = [3,-3,8]^{\top}$ of $\mathcal{B}$, i.e. $m=27$, to form the surrogate coefficient set $ \mathcal{B}'$.
The Sobol sequence is a low-discrepancy sequence that converges to the uniform distribution on a bounded set, and it is widely used in Monte Carlo methods \citep{sobol1967distribution}.
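A minimal sketch of this sampling step using the \texttt{scipy.stats.qmc} module (a power-of-two sample size of 32 is shown since Sobol points balance best at powers of two; the study's size of twenty-six plus the centroid is handled analogously):

```python
import numpy as np
from scipy.stats import qmc

# Coefficient box B = [0,6] x [-6,0] x [5,11] and its centroid
lower = np.array([0.0, -6.0, 5.0])
upper = np.array([6.0, 0.0, 11.0])
centroid = (lower + upper) / 2.0                    # [3, -3, 8]

sobol = qmc.Sobol(d=3, scramble=False)
sample = qmc.scale(sobol.random(32), lower, upper)  # 32 Sobol points in B
surrogate = np.vstack([sample, centroid])           # append the centroid
```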
The surrogate model set is $\mathcal{M}' = \{M = (h,\boldsymbol{g},\boldsymbol{\beta}): \boldsymbol{\beta}\in \mathcal{B}'\}$, where $h$ is the logistic link function and $\boldsymbol{g}$ is the quadratic basis.
Four designs are considered: (1) Mm-$\Phi_p$ design ${\xi^{\mr}_{\mathcal{M}'}}$; (2) eff-compromise design ${\xi^{\effcom}_{\mathcal{M}'}}$; (3) local optimal design $\xi^{\text{center}}$ of the centroid of $\mathcal{B}$, i.e. $\boldsymbol{\beta}_c = [3,-3,8]^{\top}$, which can be viewed as either Mm-$\Phi_p$ or compromise design with $m=1$, and (4) Bayesian optimal design $\xi^{\text{Bayesian}}_{\mathcal{M}'}$ with uniform prior, which is also the $\Phi_p$-compromise design.
Figure \ref{fig:1dLogitPoints} shows the constructed designs under D- and A-optimality, respectively.
To compare the four designs, we use the $\Phi_p$-efficiency defined in \eqref{eqn:phieff} as a performance measure.
Specifically, we generate a Sobol sample of size 10,000 from the original continuous region $\mathcal{B}$.
For each sampled $\boldsymbol{\beta}$, we compute the $\Phi_p$-efficiency in \eqref{eqn:phieff} of all four designs relative to the corresponding local optimal design,
and the local optimal design ${\xi^{\opt}_M}$ is obtained in the same way as in Example 1.
Figure \ref{fig:1dLogitBoxPlot} shows the boxplot of A- and D-efficiency of ${\xi^{\mr}_{\mathcal{M}'}}$, ${\xi^{\effcom}_{\mathcal{M}'}}$, $\xi^{\text{center}}$, and $\xi^{\text{Bayesian}}_{\mathcal{M}'}$ over 10,000 randomly sampled $\boldsymbol{\beta}$ values. The red asterisks ``$*$" in the boxplot denote the worst-case A- and D-efficiency, and the larger the worst-case efficiency, the better the design. Table \ref{tab:1dLogitIDEff} summarizes the minimum and median A- and D-efficiency of the four designs.
It is seen that the Mm-$\Phi_p$ design ${\xi^{\mr}_{\mathcal{M}'}}$ outperforms the other three designs in terms of the worst-case design efficiency, especially for A-optimality.
Specifically, the worst-case A-efficiency of the Mm-$\Phi_p$ design is 0.41, and is much larger than those of the other three designs.
The worst-case D-efficiency of the Mm-$\Phi_p$ design is 0.86, which is only slightly larger than those of the other designs.
We also find that the maximum A-efficiency of the Mm-$\Phi_p$ design is the smallest,
which is not surprising since the Mm-$\Phi_p$ design maximizes the worst-case efficiency, not the best-case efficiency.
To illustrate the computational efficiency of the proposed Algorithm \ref{alg:sequential}, Figure \ref{fig:1dLogitIterVsObg} shows how the Mm-$\Phi_p$ design criterion $\lse(\xi^{(r)},\mathcal{M}')$ decreases with respect to the number of iterations.
The computation times of Algorithm 1 to construct ${\xi^{\mr}_{\mathcal{M}'}}$ are 1.68 and 1.47 seconds for A- and D-optimality, respectively.
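The smoothing idea behind this convergence plot can be sketched with the standard log-sum-exp smooth maximum, which we assume here is the form of $\lse$; it overshoots the exact maximum by at most $\log(m)/t$ for smoothing parameter $t$ and $m$ terms:

```python
import numpy as np

def lse(values, t):
    # Smooth maximum: max(v) <= lse(v, t) <= max(v) + log(len(v)) / t
    v = np.asarray(values, dtype=float)
    m = v.max()                                   # shift for numerical stability
    return m + np.log(np.exp(t * (v - m)).sum()) / t

# One-sided losses of a design across hypothetical models
deficiencies = np.array([0.45, 0.30, 0.59, 0.14])
for t in (1.0, 10.0, 100.0):
    gap = lse(deficiencies, t) - deficiencies.max()
    print(f"t = {t:6.1f}: approximation gap = {gap:.4f}")
```

As $t$ grows, the surrogate converges to the exact maximum while remaining smooth, which is what makes the criterion amenable to convex optimization.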
\begin{figure}[hbtp]
\centering
\subfloat[A-optimality]
{{\includegraphics[width=6cm]{1dALogitPoints.eps}}}
\qquad
\subfloat[D-optimality]
{{\includegraphics[width=6cm]{1dDLogitPoints.eps}}}
\caption{Mm-$\Phi_p$ Design, Eff-Compromise Design, Centroid Optimal Design and Bayesian Optimal Design}
\label{fig:1dLogitPoints}
\end{figure}
\begin{table}[htbp]
\centering
\caption{Minimum and Median of A- and D- Efficiency across 10,000 Sampled $\boldsymbol{\beta}$ for Comparison of Four Designs}
\begin{tabular}{@{\extracolsep{4pt}}|lcccc|@{}}
\hline
& \multicolumn{2}{c}{A-Efficiency} & \multicolumn{2}{c}{D-Efficiency}\vline\\
\cline{2-3} \cline{4-5}
& $\min$ & $\median$ & $\min$ & $\median$ \\
\hline
Mm-$\Phi_p$ Design & 0.41 & 0.70 &0.86 & 0.98\\
Eff-Compromise Design & 0.21 & 0.71 &0.83 & 0.98\\
Centroid Optimal Design & 0.16 & 0.71 & 0.81& 0.98\\
Bayesian Optimal Design & 0.26 & 0.69 &0.84 & 0.98\\
\hline
\end{tabular}%
\label{tab:1dLogitIDEff}%
\end{table}
\begin{figure}[hbtp]
\centering
\subfloat[A-optimality]
{{\includegraphics[width=6cm]{1dALogitBoxPlot.eps}}}
\qquad
\subfloat[D-optimality]
{{\includegraphics[width=6cm]{1dDLogitBoxPlot.eps}}}
\caption{Boxplot of A- and D-Efficiency of Four Designs at 10,000 Sampled $\boldsymbol{\beta}$}
\label{fig:1dLogitBoxPlot}
\end{figure}
\begin{figure}[hbtp]
\centering
\subfloat[A-optimality]
{{\includegraphics[width=6cm]{1dALogitIterVsObj.eps}}}
\qquad
\subfloat[D-optimality]
{{\includegraphics[width=6cm]{1dDLogitIterVsObj.eps}}}
\caption{$\lse(\xi^{(r)},\mathcal{M}')$ of the $r$-th iteration in Algorithm \ref{alg:sequential}.}
\label{fig:1dLogitIterVsObg}
\end{figure}
\subsection{Potato Packing Example}\label{sec: potato}
We consider a real-world example, the potato packing example in \cite{woods2006designs}, to further evaluate the proposed Mm-$\Phi_p$ design.
The experiment contains $d=3$ quantitative variables: the vitamin concentration in the prepackaging dip and the amounts of two kinds of gas in the packing atmosphere.
The response is binary representing the presence or absence of liquid in the pack after 7 days.
The basis functions of the logistic regression model always include the linear and quadratic terms of the input variables, but one set of basis functions contains the interaction terms while the other does not.
The estimates of regression coefficients from a preliminary study in \cite{woods2006designs} are given in Table \ref{tab:PotatoPackModel} in the Appendix.
Since enhancing prediction accuracy is a major goal for the experiment, we use the prediction-oriented I-optimality \citep{atkinson2014optimal} to evaluate the design efficiency.
Note that the I-optimality shares the same mathematical structure as $\Phi_1$-optimality.
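To sketch that shared structure: both criteria evaluate $\operatorname{tr}\{W M(\xi)^{-1}\}$ for a fixed weight matrix $W$, with $W=I$ for A-optimality ($\Phi_1$) and the moment matrix $W=\int_\Omega \boldsymbol{g}(x)\boldsymbol{g}(x)^{\top}\,dx$ for I-optimality; the one-dimensional quadratic basis below is an illustrative stand-in for the potato-packing basis:

```python
import numpy as np

def moment_matrix(basis, a=-1.0, b=1.0, n=2001):
    # W = int_a^b g(x) g(x)^T dx via composite trapezoid weights
    xs = np.linspace(a, b, n)
    wts = np.full(n, (b - a) / (n - 1))
    wts[0] *= 0.5
    wts[-1] *= 0.5
    G = np.array([basis(x) for x in xs])      # n x k matrix of basis evaluations
    return G.T @ (G * wts[:, None])

def i_criterion(M, W):
    # I-optimality objective tr(W M^{-1}); A-optimality is the special case W = I
    return np.trace(W @ np.linalg.inv(M))

basis = lambda x: np.array([1.0, x, x * x])   # illustrative quadratic basis
W = moment_matrix(basis)
```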
The design points of the designs are shown in Figure \ref{fig:PotatoPackPoints} in the Appendix.
Table \ref{tab:PotatoPackIEff} summarizes the I-efficiency of the Mm-$\Phi_p$ design, eff-compromise design, and I-compromise design of the three potential model specifications.
In terms of worst-case efficiency (i.e., smallest value of I-efficiency among $M_1$, $M_{2}$ and $M_{3}$), the proposed Mm-$\Phi_p$ design outperforms the other two designs by a large margin.
\begin{comment}
\begin{table}[htbp]
\centering
\caption{Model Space $\mathcal{M}$ of Potato Packing Example}
\begin{tabular}{|lrrr|}
\hline
Term & First-Order $M_1$ & With interaction $M_2$ & Second-order $M_3$ \\
\hline
Intercept & -0.28 & -1.44 & -2.93 \\
$x_1$ & 0 & 0 & 0 \\
$x_2$ & -0.76 & -1.95 & -0.52 \\
$x_3$ & -1.15 & -2.36 & -0.79 \\
$x_1x_2$ & & 0 & 0 \\
$x_1x_3$ & & 0 & 0 \\
$x_2x_3$ & & -2.34 & -0.66 \\
$x_1^2$ & & & 0.94 \\
$x_2^2$ & & & 0.79 \\
$x_3^2$ & & & 1.82 \\
\hline
\end{tabular}%
\label{tab:PotatoPackModel}%
\end{table}%
\begin{figure}[ht]
\centering
\includegraphics[width=12cm]{PotatoPackPoints.eps}
\caption{Design Points of Mm-$\Phi_p$ Design and Compromise Designs}
\label{fig:PotatoPackPoints}
\end{figure}
\end{comment}
\begin{table}[htbp]
\centering
\caption{I-Efficiency of Mm-$\Phi_p$ Design, Eff-Compromise Design and I-Compromise Design}
\begin{tabular}{|lrrr|}
\hline
& $M_1$ & $M_2$ & $M_3$ \\
\hline
Mm-$\Phi_p$ Design & 0.64 & 0.71 & 0.82 \\
Eff-Compromise Design & 0.52 & 0.78 & 0.92 \\
I-Compromise Design & 0.49 & 0.80 & 0.92 \\
\hline
\end{tabular}%
\label{tab:PotatoPackIEff}%
\end{table}
\section{Discussion}\label{sec:discussion}
In this article, we proposed a new maximin $\Phi_p$-efficiency criterion Mm-$\Phi_p$ for GLMs that aims at maximizing the worst-case design efficiency when various kinds of model uncertainties are considered, including uncertainties in link function, linear predictor and regression coefficients.
An efficient algorithm to construct the Mm-$\Phi_p$ design is also developed based on sound theoretical properties of the criterion.
The proposed Mm-$\Phi_p$ design and the algorithm can be easily extended to more general settings such as the nonlinear models in \cite{13yang}.
There are several directions for further research to enhance the proposed Mm-$\Phi_p$ design and algorithm.
First, to construct the Mm-$\Phi_p$ design, one needs to form a set of possible model specifications.
An interesting direction is how to extend the proposed design when such information about model specifications is limited or unavailable.
Second, it is of interest to rigorously establish the convergence property of the optimal-weight procedure (Algorithm \ref{alg:weight}),
which requires developing additional mathematical results.
Third, the log-sum-exp approximation can be applied to other maximin designs with convex design criteria, and the theoretical and algorithmic developments can be adapted similarly.
We plan to extend the framework to this more general setting for other maximin designs with convex criteria.
\section{Introduction}
By an inner product $\langle\cdot,\cdot\rangle$ on a complex vector space $V$ we shall mean in this paper a (not necessarily positive definite) non-degenerate symmetric sesquilinear form. An inner product on a module $V$ over a $\ast$-algebra $\mathcal{A}$ is called invariant if $\langle av,w\rangle = \langle v,a^\ast w\rangle$ for all $a\in \mathcal{A}$ and $v,w\in V$.
An important problem in the representation theory of $\ast$-algebras is the question of existence of such forms on modules, and to determine when such a form is positive definite. Modules which can be equipped with an invariant inner product are called pseudo-unitarizable, and unitarizable if the form can be chosen positive definite. The pseudo-unitarizable modules form essentially a ``real line'' inside the moduli space of all representations (because they are stable under taking the contragredient module). In many cases at most one invariant inner product exists up to equivalence. For example this is the case in general for finite-dimensional indecomposable modules \cite{MazTur2001}.
A classical result states that every complex finite-dimensional representation of (the convolution $\ast$-algebra of complex-valued $L^1$ functions on) a compact topological group $G$ is unitarizable (see e.g. \cite[Prop.~4.6]{Kna1996}). Other examples from Lie theory include the celebrated discrete series of unitary irreducible highest weight modules over the Virasoro algebra \cite{KacRai1987}, and the classification of pseudo-unitarizable simple weight modules with finite-dimensional weight spaces over a semi-simple complex finite-dimensional Lie algebra with respect to the Chevalley involution \cite{MazTur2001b}.
In this paper we consider a family of $\ast$-algebras $\mathcal{A}(\mathscr{L})$ called \emph{noncommutative Kleinian fiber products} \cite{HarRos2016,Har2016}. They depend on a certain vertex configuration $\mathscr{L}$ and are noncommutative deformations of the algebra of functions on a fiber product of two type $A$ Kleinian singularities \cite{Har2016}. Examples include central extensions of noncommutative Kleinian singularities introduced by Hodges \cite{Hod1993}, and quotients of the enveloping algebra of the affine Lie algebra $A_1^{(1)}$ and of the finite W-algebra $\mathcal{W}(\mathfrak{sl}_4,\mathfrak{sl}_2\oplus\mathfrak{sl}_2)$ \cite{Har2016}. Simple weight $\mathcal{A}(\mathscr{L})$-modules were classified in \cite{Har2016} and are parametrized by pairs $(D,\xi)$ where $D$ is a connected component of a twisted cylinder minus the edges of $\mathscr{L}$, and $\xi\in\mathbb{C}$. The algebras $\mathcal{A}(\mathscr{L})$ are examples of rank two twisted generalized Weyl algebras \cite{MazTur1999}. Pseudo-unitarizable simple and indecomposable weight modules with real support over noncommutative Kleinian singularities, and more generally arbitrary generalized Weyl algebras of rank one, were classified in \cite{Har2011}, covering in particular $U_q(\mathfrak{sl}_2)$ at roots of unity $q$. Bounded and unbounded $\ast$-representations of twisted generalized Weyl constructions were studied in \cite{MazTur2002}.
\subsection{Summary of paper}
In Section \ref{sec:pre} we review the definition of noncommutative Kleinian fiber products as given in \cite{Har2016}.
In Section \ref{sec:Delta} we prove the existence of square roots of the polynomial functions $P_i^\mathscr{L}$ which still solve the MTE, and use this to construct a one-parameter family $\Delta_\xi$ of representations of $\mathcal{A}(\mathscr{L})$. These representations are shown to be pseudo-unitarizable if $|\xi|=1$ in Section \ref{sec:pseudo}.
In Section \ref{sec:simple-weight} we review the classification of simple integral weight modules from \cite{Har2016} and determine necessary and sufficient conditions for them to be pseudo-unitarizable. Along the way we prove that the Casimir element $C$ for $\mathcal{A}(\mathscr{L})$ defined in \cite{Har2016} is unitary (Section \ref{sec:casimir}) and prove a polynomial formula for shifts of products of the square roots of $P_i^\mathscr{L}$ (Lemma \ref{lem:q-ord}).
In Section \ref{sec:semi} we prove that $\Delta_\xi$ are completely reducible and that every simple integral weight $\mathcal{A}(\mathscr{L})$-module occurs as a subrepresentation of $\Delta_\xi$ for some $\xi$.
Lastly Section \ref{sec:signature} contains the description of the signature of the unique (up to nonzero real multiples) invariant inner product on the simple integral weight modules. In particular we obtain necessary and sufficient conditions for them to be unitarizable.
We end with some examples in Section \ref{sec:examples}.
\section{Preliminaries}
\label{sec:pre}
\subsection{Noncommutative Kleinian fiber products}
Let $(m,n)$ be a pair of relatively prime non-negative integers, and $(\alpha_1,\alpha_2)=(\alpha,\beta)\in\mathbb{R}^2\setminus\{(0,0)\}$ with $m\alpha+n\beta=0$. Put
\begin{alignat}{2}
F &= \mathbb{Z}\alpha+\mathbb{Z}\beta &\qquad\qquad V &= F+(\alpha+\beta)/2\\
E_i &= F+\alpha_i/2 &\qquad\qquad E &= E_1\cup E_2
\end{alignat}
\begin{Definition}
An \emph{$(m,n)$-periodic higher spin vertex configuration} $\mathscr{L}=(\mathscr{L}_1,\mathscr{L}_2)$ is a pair of functions $\mathscr{L}_i:E_i\to \mathbb{N}=\{0,1,2,\ldots\}$ with $|\mathscr{L}_i^{-1}([1,\infty))|<\infty$ satisfying the \emph{current conservation rule}:
\begin{equation}\label{eq:CMTE}
\mathscr{L}_1(v+\beta/2)+\mathscr{L}_2(v+\alpha/2)=\mathscr{L}_1(v-\beta/2)+\mathscr{L}_2(v-\alpha/2)\qquad \text{for all $v\in V$}.
\end{equation}
\end{Definition}
Let $\tilde{\mathcal{A}}=\tilde{\mathcal{A}}(\mathscr{L})$ be the associative algebra generated by $\{H,X_1^+,X_1^-,X_2^+,X_2^-\}$
subject to defining relations
\begin{equation}\label{eq:rels}
[H,X_i^\pm]=\pm \alpha_i X_i^\pm\qquad X_i^\pm X_i^\mp = P_i^\mathscr{L}(H\mp\alpha_i/2)\qquad [X_1^\pm,X_2^\mp]=0
\end{equation}
where $[a,b]=ab-ba$ and
\begin{equation}\label{eq:P-def}
P_i^\mathscr{L}(u) = \prod_{e\in E_i} (u-e)^{\mathscr{L}_i(e)}\qquad \text{for $i=1,2$}.
\end{equation}
Let $\mathcal{A}=\mathcal{A}(\mathscr{L})=\tilde{\mathcal{A}}/\mathcal{I}$ where
\begin{equation}\label{eq:CI-def}
\mathcal{I}=\{a\in \tilde{\mathcal{A}}\mid \text{$p(H)a=0$ for some nonzero polynomial $p$}\}.
\end{equation}
\begin{Definition}
$\mathcal{A}$ is the \emph{noncommutative Kleinian fiber product associated to $\mathscr{L}$}.
\end{Definition}
Note that $(p_1,p_2)=(P_1^\mathscr{L},P_2^\mathscr{L})$ is a solution to the \emph{Mazorchuk-Turowska Equation (MTE)}
\begin{equation}\label{eq:MTE}
p_1(u+\alpha_2/2)p_2(u+\alpha_1/2)=p_1(u-\alpha_2/2)p_2(u-\alpha_1/2)
\end{equation}
which is necessary and sufficient for $\mathcal{A}(\mathscr{L})$ to be nontrivial \cite[Prop.~1.11]{Har2016}.
Conversely, up to affine transformations any solution $(p_1,p_2)$ to \eqref{eq:MTE} is a product of such lattice solutions $(P_1^\mathscr{L},P_2^\mathscr{L})$ \cite{HarRos2016,Har2016}.
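As a numerical illustration (the edge multiplicities below are our own choice of a small $(2,1)$-periodic configuration satisfying the current conservation rule, not one taken from the text), one can verify \eqref{eq:MTE} directly:

```python
import numpy as np

# Illustrative (2,1)-periodic configuration with (alpha1, alpha2) = (1, -2):
# P1(u) = (u - 1/2)(u - 3/2) from L1(1/2) = L1(3/2) = 1, and
# P2(u) = u (u - 1)^2 (u - 2)  from L2(0) = 1, L2(1) = 2, L2(2) = 1.
p1 = lambda u: (u - 0.5) * (u - 1.5)
p2 = lambda u: u * (u - 1.0) ** 2 * (u - 2.0)

a1, a2 = 1.0, -2.0
u = np.linspace(-5.0, 5.0, 101)          # both sides are degree-6 polynomials,
lhs = p1(u + a2 / 2) * p2(u + a1 / 2)    # so agreement on 101 points proves the identity
rhs = p1(u - a2 / 2) * p2(u - a1 / 2)
assert np.allclose(lhs, rhs)
```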
\section{Realization by difference operators on line bundles}
\label{sec:Delta}
In this section we construct a natural family of representations $\Delta_\xi$ of $\mathcal{A}(\mathscr{L})$ by difference operators acting on global sections of a complex line bundle $L_\xi$ over the one-dimensional lattice $F$. Later we show that every irreducible integral weight representation of $\mathcal{A}(\mathscr{L})$ is a subrepresentation of $\Delta_\xi$ for appropriate $\xi$. Thus this provides a concrete realization of all simple integral weight modules.
The key result in the construction of $\Delta_\xi$, established in Section \ref{sec:sqrt}, is the existence of square roots (in fact, logarithms) of solutions to the MTE. The subtlety lies in proving that there exists a consistent choice of square root $P_i^\mathscr{L}(e)^{1/2}$ in such a way that the pair of functions still solves the MTE. This choice can be expressed combinatorially directly in terms of the configuration $\mathscr{L}$ (Remark \ref{rem:l}) and gives rise to both the fundamental symmetry $J$ of the inner product space (Remark \ref{rem:J}) and formulas for the signature of the inner product given in Section \ref{sec:signature}.
\subsection{$\sqrt{\text{MTE}}$} \label{sec:sqrt}
The following shows that solutions to the Mazorchuk-Turowska equation have square roots that are also solutions.
\begin{Lemma} \label{lem:square-roots}
There exists a pair of functions $(q_1,q_2)$, $q_i:E_i\to\mathbb{C}$ such that
\begin{enumerate}[{\rm (i)}]
\item $[q_i(e)]^2=P_i^\mathscr{L}(e)$ for all $e\in E_i$ and $i\in\{1,2\}$,
\item $(q_1,q_2)$ is a solution to the MTE \eqref{eq:MTE}.
\end{enumerate}
\end{Lemma}
\begin{proof}
Put $p_i(u)=P_i^\mathscr{L}(u)$ for $i=1,2$.
Consider the following pair of functions $(l_1,l_2)$:
\begin{equation}\label{eq:li-def}
l_i(u)=\sum_{e\in E_i,\, e>u} \mathscr{L}_i(e)\qquad \text{for $u\in E_i$ and $i=1,2$.}
\end{equation}
Here $e>u$ is the usual order on $\mathbb{R}$.
We claim that these satisfy the following two properties:
\begin{equation}\label{eq:PvsL}
\frac{p_i(u)}{|p_i(u)|} = (-1)^{l_i(u)}
\end{equation}
and
\begin{equation}\label{eq:li-CMTE}
l_1(v+\alpha_2/2)+l_2(v+\alpha_1/2)=l_1(v-\alpha_2/2)+l_2(v-\alpha_1/2)\qquad \text{for all $v\in V$.}
\end{equation}
To check \eqref{eq:PvsL}, use the definition of $p_i(u)$ and that
\begin{equation}
\frac{(u-e)^{\mathscr{L}_i(e)}}{|u-e|^{\mathscr{L}_i(e)}}=
\begin{cases}(-1)^{\mathscr{L}_i(e)}& e>u\\
1& e<u
\end{cases}
\end{equation}
To prove \eqref{eq:li-CMTE}, we substitute \eqref{eq:li-def} into \eqref{eq:li-CMTE} and cancel terms to obtain the following, where we assume without loss of generality that $\alpha_1\in\mathbb{Z}_{<0}$ and $\alpha_2\in\mathbb{Z}_{>0}$:
\begin{equation}\label{eq:li-CMTE-2}
\sum_{\substack{e\in E_1\\ v-\alpha_2/2<e\le v+\alpha_2/2}} \mathscr{L}_1(e) =
\sum_{\substack{e\in E_2\\ v+\alpha_1/2<e\le v-\alpha_1/2}} \mathscr{L}_2(e)\qquad \text{for all $v\in V$.}
\end{equation}
Since this equation is additive in $\mathscr{L}$, we may without loss of generality assume that $\mathscr{L}$ consists of a single generalized Dyck path of period $(m,n)$.
To this end, it is easy to verify \eqref{eq:li-CMTE-2} for the maximum area path consisting of $n$ steps of $\alpha_2$ followed by $m$ steps of $\alpha_1$. An induction argument shows that if $(l_1,l_2)$ solves \eqref{eq:li-CMTE} for a certain generalized Dyck path $\mathscr{L}$, then it also holds for the path in which a $21$ step has been replaced with a $12$ step. This proves the claim.
Now define
\begin{equation}\label{eq:q_i}
q_i(e)=\exp(2\pi\boldsymbol{i} \tfrac{l_i(e)}{4}) |p_i(e)|^{1/2}\qquad \text{for all $e\in E_i$ and $i=1,2$.}
\end{equation}
where $\boldsymbol{i}^2=-1$. Then $(q_1,q_2)$ satisfies the required properties.
\end{proof}
\begin{Remark}
In fact, the proof shows that one can take $N$th roots of solutions as well:
\[p_i^{1/N}(e)=\exp(2\pi\boldsymbol{i} \tfrac{l_i(e)}{2N}) |p_i(e)|^{1/N}.\]
\end{Remark}
\begin{Remark}\label{rem:l}
The combinatorial interpretation of the functions $l_i:E_i\to\mathbb{N}$ is as follows. For each vertical edge $e\in E_1$, $l_1(e)$ counts the number (with multiplicity) of vertical edges in $\mathscr{L}$ lying above the straight line through $e$ of slope $n/m$. Similarly for horizontal edges and $l_2(e)$, $e\in E_2$. (The ``location'' of an edge is by convention its midpoint.) See Figure \ref{fig:52-e}.
\end{Remark}
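Lemma \ref{lem:square-roots} and its proof can be checked numerically. The configuration below is an illustrative $(2,1)$-periodic choice with $(\alpha_1,\alpha_2)=(1,-2)$, edges $\mathscr{L}_1(1/2)=\mathscr{L}_1(3/2)=1$ and $\mathscr{L}_2(0)=1$, $\mathscr{L}_2(1)=2$, $\mathscr{L}_2(2)=1$, which satisfies the current conservation rule \eqref{eq:CMTE}:

```python
import numpy as np

# Edge multiplicities on E1 = Z + 1/2 and E2 = Z (illustrative configuration)
L1 = {0.5: 1, 1.5: 1}
L2 = {0.0: 1, 1.0: 2, 2.0: 1}

def P(L, u):
    # P_i(u) = prod_e (u - e)^{L_i(e)}, cf. eq. (P-def)
    out = 1.0
    for e, mult in L.items():
        out *= (u - e) ** mult
    return out

def l(L, u):
    # l_i(u): total multiplicity of edges strictly above u, cf. eq. (li-def)
    return sum(mult for e, mult in L.items() if e > u)

def q(L, u):
    # q_i(u) = exp(2*pi*i*l_i(u)/4) * |P_i(u)|^(1/2), cf. eq. (q_i)
    return np.exp(0.5j * np.pi * l(L, u)) * abs(P(L, u)) ** 0.5

# (i) q_i(e)^2 recovers P_i(e), including the correct sign (-1)^{l_i(e)}
for u in np.arange(-4.5, 6.0, 1.0):        # u in E1 = Z + 1/2
    assert abs(q(L1, u) ** 2 - P(L1, u)) < 1e-9
for u in np.arange(-4.0, 6.0, 1.0):        # u in E2 = Z
    assert abs(q(L2, u) ** 2 - P(L2, u)) < 1e-9

# (ii) (q1, q2) still solves the MTE: with (a1, a2) = (1, -2), check
# q1(v + a2/2) q2(v + a1/2) = q1(v - a2/2) q2(v - a1/2) for v in V = Z + 1/2
for v in np.arange(-4.5, 6.0, 1.0):
    lhs = q(L1, v - 1.0) * q(L2, v + 0.5)
    rhs = q(L1, v + 1.0) * q(L2, v - 0.5)
    assert abs(lhs - rhs) < 1e-9
print("square-root MTE verified")
```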
\begin{figure}
\centering
\begin{tikzpicture}
\foreach \y in {-1,0,...,3} {
\draw[help lines] (0 cm,\y cm) -- (5 cm,\y cm); }
\foreach \x in {1,...,4} {
\draw[help lines] (\x cm,-1.5 cm) -- (\x cm,3.5 cm); }
\draw[dashed] (0,-1.5 cm) -- (0,3.5cm);
\draw[dashed] (5,-1.5 cm) -- (5,3.5cm);
\fill (0,0) circle (2pt);
\fill (5,2) circle (2pt);
\draw[thick,Blue] (0,-1) -- (0,0) -- (2,0) -- (2,1) -- (5,1) -- (5,2);
\draw[thick,Blue] (0,0) -- (0,1) -- (2,1) -- (2,2) -- (5,2) -- (5,3);
\draw[thick,Red,dotted] (0, .3) -- (5,2.3);
\draw[thick,Black] (3,1.55) node[font=\scriptsize, left] {$e$};
\draw[thick,Black] (3cm-2pt,1.5) -- (3cm+2pt,1.5);
\draw[thick,Black] (-2pt,.5) -- (0,.5) node[font=\scriptsize,left] {$e_1$} -- (2pt,.5);
\draw[thick,Black] (2cm-2pt,1.5) -- (2,1.5) node[font=\scriptsize,left] {$e_2$} -- (2cm+2pt,1.5);
\end{tikzpicture}
\caption{A fundamental domain for a $(5,2)$-periodic vertex configuration $\mathscr{L}$ (solid blue). Here the vertical edge $e$ has $l_1(e)=2$ because there are two vertical edges, $e_1$ and $e_2$, appearing in $\mathscr{L}$ above the line (dotted red) through $e$ of slope $5/2$. }
\label{fig:52-e}
\end{figure}
\subsection{Construction of the representation $\Delta_\xi$}
On the discrete space $F$ we define a complex line bundle $L_\xi$ as follows.
Fix $\xi\in\mathbb{C}^\times$. Let the abelian group $\mathbb{Z}$ act on $\mathbb{Z}^2\times \mathbb{C}$ by
\begin{equation}
1.(a,b,z)=(a+m,b+n,\xi z)
\end{equation}
and define $L_\xi = (\mathbb{Z}^2\times\mathbb{C})/\mathbb{Z}$ with bundle map $L_\xi\to F$ induced by $(x,y,z)\mapsto x\alpha+y\beta$.
Let $\Gamma(L_\xi)$ be the corresponding vector space of global sections.
Since $(x,y)\mapsto x\alpha+y\beta$ induces a bijection $\mathbb{Z}^2/\langle(m,n)\rangle\simeq F$, we make the identification
\begin{equation}
\Gamma(L_\xi)=\{f:\mathbb{Z}^2\to\mathbb{C}\mid \forall (x,y)\in\mathbb{Z}^2:\, f(x-m,y-n)=\xi\cdot f(x,y) \}.
\end{equation}
It is easy to see that $\Gamma(L_\xi)$ consists of all functions of the form
\begin{equation}
f(x,y)=\tilde{f}(x\alpha+y\beta)\exp\left(\frac{-xm-yn}{m^2+n^2}\log\xi\right)
\end{equation}
where $\tilde{f}:\mathbb{Z}\to\mathbb{C}$ is any function and $\log\xi\in\mathbb{C}$ is any choice of logarithm. Consider the subspace
\begin{equation}
\Gamma_0(L_\xi)=\big\{f\in\Gamma(L_\xi)\mid \text{$\tilde{f}$ has compact ($=$finite) support}\big\}.
\end{equation}
One checks that a $\mathbb{C}$-basis for $\Gamma_0(L_\xi)$ is given by $\{f_\lambda\mid \lambda\in F\}$ where
\begin{equation}\label{eq:fk}
f_\lambda(x,y)=\delta_{x\alpha+y\beta,\lambda}\exp\left(\frac{-xm-yn}{m^2+n^2}\log\xi\right)\qquad\text{for $\lambda\in F$}.
\end{equation}
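As a quick numerical sanity check of the equivariance $f(x-m,y-n)=\xi\, f(x,y)$ for the basis sections \eqref{eq:fk} (the values $m=2$, $n=1$, $\alpha=1$, $\beta=-2$, $\xi=e^{\boldsymbol{i}\pi/3}$, $\lambda=3$ are illustrative):

```python
import cmath

m, n = 2, 1
alpha, beta = 1.0, -2.0                  # m*alpha + n*beta = 0
xi = cmath.exp(1j * cmath.pi / 3)
log_xi = cmath.log(xi)
lam = 3.0

def f_lambda(x, y):
    # f_lambda(x, y) = delta_{x*alpha + y*beta, lambda}
    #                  * exp((-x*m - y*n)/(m^2 + n^2) * log xi), cf. eq. (fk)
    delta = 1.0 if abs(x * alpha + y * beta - lam) < 1e-12 else 0.0
    return delta * cmath.exp((-x * m - y * n) / (m ** 2 + n ** 2) * log_xi)

# Equivariance holds identically on the lattice
for x in range(-6, 7):
    for y in range(-6, 7):
        assert abs(f_lambda(x - m, y - n) - xi * f_lambda(x, y)) < 1e-12
```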
The following theorem shows that the algebra $\mathcal{A}(\mathscr{L})$ acts naturally on $\Gamma_0(L_\xi)$ by difference-multiplication operators.
\begin{Theorem}
For each $\xi\in\mathbb{C}^\times$, there exists a representation
\begin{equation}
\Delta_\xi: \mathcal{A}(\mathscr{L}) \to \End_\mathbb{C}\big(\Gamma_0(L_\xi)\big)
\end{equation}
uniquely determined by
\begin{subequations}\label{eq:Delta-def}
\begin{align}
\big(\Delta_\xi(X_1^\pm) f\big)(x,y) &= q_1(x \alpha+y\beta \mp \alpha/2) \cdot f(x\mp 1,y), \\
\big(\Delta_\xi(X_2^\pm) f\big)(x,y) &= q_2(x \alpha+y\beta \mp \beta/2) \cdot f(x,y\mp 1), \\
\big(\Delta_\xi(H)f\big)(x,y) &= (x\alpha + y\beta)\cdot f(x,y),
\end{align}
\end{subequations}
where the $q_i$ were defined in \eqref{eq:q_i}.
\end{Theorem}
\begin{proof}
For brevity, put $\tilde{X}_i^\pm =\Delta_\xi(X_i^\pm)$, $\tilde{H}=\Delta_\xi(H)$ and $p_i=P_i^\mathscr{L}$.
We have
\begin{align*}
\big(\tilde{X}_1^\pm \tilde{X}_1^\mp f\big)(x,y)
&= q_1(x\alpha+y\beta\mp\alpha/2)\cdot (\tilde{X}_1^\mp f)(x\mp 1,y) \\
&= q_1(x\alpha+y\beta\mp\alpha/2)
q_1((x\mp 1)\alpha+y\beta\pm\alpha/2) \cdot f(x,y) \\
&= p_1(x\alpha+y\beta\mp\alpha/2) \cdot f(x,y) \\
&= \big(p_1(\tilde{H}\mp\alpha/2)f\big)(x,y).
\end{align*}
This shows that $\tilde{X}_1^\pm \tilde{X}_1^\mp = p_1(\tilde{H}\mp\alpha/2)$.
Similarly one checks that $\tilde{X}_2^\pm \tilde{X}_2^\mp = p_2(\tilde{H}\mp\beta/2)$.
Next we verify that $[\tilde{H},\tilde{X}_i^\pm] = \pm \alpha_i \tilde{X}_i^\pm$ where $(\alpha_1,\alpha_2)=(\alpha,\beta)$ for brevity. We have
\begin{align*}
\big([\tilde{H},\tilde{X}_1^\pm]f\big)(x,y)
&= (x\alpha+y\beta)\cdot (\tilde{X}_1^\pm f)(x,y) - q_1(x\alpha+y\beta\mp \alpha/2) (\tilde{H}f)(x\mp 1,y) \\
&= (x\alpha+y\beta)q_1(x\alpha+y\beta\mp \alpha/2) f(x\mp 1,y) \\
& \quad - q_1(x\alpha+y\beta\mp \alpha/2) ((x\mp 1)\alpha+y\beta) f(x\mp 1,y) \\
&= \pm \alpha q_1(x\alpha+y\beta\mp \alpha/2) f(x\mp 1,y) \\
&= (\pm \alpha \tilde{X}_1^\pm f)(x,y)
\end{align*}
and similarly for $\tilde{X}_2^\pm$.
Next, the most crucial calculation is to verify that $[\tilde{X}_1^\pm,\tilde{X}_2^\mp]=0$ which is where we need that $(q_1,q_2)$ satisfy the MTE \eqref{eq:MTE}.
\begin{align*}
\big([\tilde{X}_1^+,\tilde{X}_2^-]f\big)(x,y) &=
(\tilde{X}_1^+ \tilde{X}_2^- f)(x,y) - (\tilde{X}_2^- \tilde{X}_1^+ f)(x,y) \\
&=q_1(x\alpha+y\beta -\alpha/2)(\tilde{X}_2^- f)(x- 1,y) - q_2(x\alpha+y\beta+\beta/2)(\tilde{X}_1^+f)(x,y+1)\\
&=q_1(x\alpha+y\beta-\alpha/2)q_2(x\alpha+y\beta-\alpha+\beta/2)f(x- 1, y+ 1) \\
&\quad- q_1(x\alpha+y\beta - \alpha/2 + \beta) q_2(x\alpha+y\beta+\beta/2) f(x- 1, y+ 1)\\
&=\big(q_1(v-\beta/2)q_2(v-\alpha/2)-q_1(v+\beta/2)q_2(v+\alpha/2)\big)f(x- 1,y+ 1)\\
&=0
\end{align*}
where we put $v=x\alpha+y\beta - \alpha/2 +\beta/2$. By the symmetry $1\leftrightarrow 2$, the other case also holds.
This shows that \eqref{eq:Delta-def} defines a homomorphism $\Delta_\xi:\tilde{\mathcal{A}}(\mathscr{L})\to\End_\mathbb{C}\big(\Gamma_0(L_\xi)\big)$.
It remains to show that the torsion ideal $\mathcal{I}$ in \eqref{eq:CI-def} is in the kernel of $\Delta_\xi$. Since $\mathcal{I}$ is a graded ideal with respect to the $\mathbb{Z}^2$-gradation on $\tilde{\mathcal{A}}(\mathscr{L})$ given by $\deg X_i^\pm=\pm\boldsymbol{e}_i$, $\deg H=0$, this amounts to proving that
if $d=(d_1,d_2)\in\mathbb{Z}^2$ and $a\in\tilde{\mathcal{A}}(\mathscr{L})_d$ belongs to $\mathcal{I}$, then $\Delta_\xi(a)=0$.
Since $a\in\mathcal{I}$ there exists a nonzero polynomial $g$ such that $g(H)\cdot a=0$ in $\tilde{\mathcal{A}}(\mathscr{L})$. Applying $\Delta_\xi$ we obtain
\begin{equation}\label{eq:rep-pf0}
g(\tilde{H})\cdot \Delta_\xi(a)=0.
\end{equation}
In \eqref{eq:rep-pf0}, acting on an arbitrary $f\in\Gamma_0(L_\xi)$ gives
\begin{equation}
g(x\alpha+y\beta)\cdot \big(\Delta_\xi(a)f\big)(x,y) = 0.
\end{equation}
By \eqref{eq:Delta-def} there exists a function $h:F\to\mathbb{C}$ such that
\begin{equation}\label{eq:rep-pf-h}
\big(\Delta_\xi(a)f\big)(x,y) = h(x\alpha+y\beta) f(x+d_1,y+d_2)
\end{equation}
hence
\begin{equation}
g(x\alpha+y\beta)h(x\alpha+y\beta) f(x+d_1,y+d_2) = 0.
\end{equation}
Choosing $f$ as the basis vectors $f_\lambda$ defined in \eqref{eq:fk}, we obtain
\begin{equation} \label{eq:rep-pf-gh}
g(\lambda)h(\lambda)=0 \qquad\text{for all $\lambda\in F$.}
\end{equation}
By \eqref{eq:q_i} and \eqref{eq:Delta-def}, $h(\lambda)$ given in \eqref{eq:rep-pf-h} is real analytic in a region $\lambda>N$ for $N\gg 0$ while $g$ is a non-zero polynomial, so \eqref{eq:rep-pf-gh} implies that $h$ is identically zero. This shows that $\Delta_\xi(a)=0$. This completes the proof of the existence of the homomorphism $\Delta_\xi$. The uniqueness follows from the fact that $\mathcal{A}(\mathscr{L})$ is generated by the elements $X_i^\pm$ and $H$.
\end{proof}
\section{Pseudo-unitarizability of $\Delta_\xi$} \label{sec:pseudo}
In Section \ref{sec:pseudo-general} we review the basic definitions needed for the following section, where we prove that $\Delta_\xi$ is pseudo-unitarizable when $|\xi|=1$.
\subsection{Pseudo-unitarizable modules over $\ast$-algebras} \label{sec:pseudo-general}
In this subsection let $\mathcal{A}$ denote a \emph{$\ast$-algebra}, by which we mean an associative unital algebra over $\mathbb{C}$ equipped with a conjugate-linear map $\mathcal{A}\to \mathcal{A}, a\mapsto a^\ast$ satisfying
\begin{equation}
(ab)^\ast = b^\ast a^\ast\qquad (a^\ast)^\ast = a\qquad \text{for all $a,b\in\mathcal{A}$.}
\end{equation}
\begin{Definition} \label{def:inner-product}
Let $M$ be a module over $\mathcal{A}$.
By an \emph{inner product} on $M$,
\[\langle \cdot,\cdot \rangle:M\times M\to \mathbb{C}\]
we mean a non-degenerate symmetric sesquilinear form:
\begin{enumerate}[{\rm (i)}]
\item $\langle \lambda u+\mu v, w\rangle=\lambda \langle u,w\rangle + \mu \langle v,w\rangle$ for all $u,v,w\in M$ and $\lambda,\mu\in \mathbb{C}$,
\item $\langle v,w\rangle = \overline{\langle w,v\rangle}$ for all $v,w\in M$, where the bar denotes complex conjugation,
\item if $\langle v,w\rangle=0$ for all $v\in M$ then $w=0$.
\end{enumerate}
An inner product $\langle\cdot,\cdot\rangle$ on $M$ is called ($\ast$-)\emph{invariant} if
\begin{enumerate}[{\rm (i)}]
\setcounter{enumi}{3}
\item $\langle av,w\rangle = \langle v,a^\ast w\rangle$ for all $a\in\mathcal{A}$, $v,w\in M$
\end{enumerate}
and \emph{positive definite} if
\begin{enumerate}[{\rm (i)}]
\setcounter{enumi}{4}
\item $\langle v,v\rangle>0$ for all nonzero $v\in M$.
\end{enumerate}
\end{Definition}
\begin{Definition} \label{def:unitar}
Let $M$ be an $\mathcal{A}$-module. Then $M$ is \emph{pseudo-unitarizable} if there exists an invariant inner product on $M$ and \emph{unitarizable} if there exists a positive definite invariant inner product on $M$.
\end{Definition}
\begin{Definition} \label{def:dual}
The \emph{finitistic dual} of an $\mathcal{A}$-module $M$ with a decomposition $M=\bigoplus_{\lambda\in\mathbb{C}} M_\lambda$, $\dim_\mathbb{C} M_\lambda<\infty$, is defined as
\[M^\#=\bigoplus_{\lambda\in\mathbb{C}} M^\#_\lambda,\qquad M^\#_\lambda=\{f:M_\lambda \to\mathbb{C}\mid \text{$f$ is conjugate-linear}\}\]
with $\mathcal{A}$-action
\[(af)(v)=f(a^\ast v),\quad\forall a\in \mathcal{A}, f\in M^\#_\lambda, v\in M_\lambda, \lambda\in\mathbb{C}.\]
\end{Definition}
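A short verification (standard, included only for completeness) that the $\ast$-twisted action in Definition \ref{def:dual} indeed defines a left $\mathcal{A}$-module structure on $M^\#$:

```latex
For $a,b\in\mathcal{A}$, $f\in M^\#$ and $v\in M$ we have
\[\big(a(bf)\big)(v)=(bf)(a^\ast v)=f\big(b^\ast a^\ast v\big)
 =f\big((ab)^\ast v\big)=\big((ab)f\big)(v),\]
and $1\cdot f=f$ since $1^\ast=1$. Moreover $af$ is again conjugate-linear, and
the action is $\mathbb{C}$-linear in $a$ because the two conjugations cancel:
$\big((\mu a)f\big)(v)=f(\bar\mu\, a^\ast v)=\mu f(a^\ast v)=\big(\mu(af)\big)(v)$.
```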
\begin{Theorem}\label{thm:pseudo-unitarizability}
Let $M$ be an $\mathcal{A}$-module admitting a decomposition as in Definition \ref{def:dual}.
\begin{enumerate}[{\rm (a)}]
\item $M$ is pseudo-unitarizable if and only if $M^\#\simeq M$.
\item If $M$ is indecomposable there is at most one invariant inner product on $M$, up to equivalence.
\end{enumerate}
\end{Theorem}
\begin{proof} Follows from general results in \cite{MazTur2001}.
\end{proof}
\subsection{Pseudo-unitarizability of $\Delta_\xi$}
The algebras $\mathcal{A}(\mathscr{L})$ become $\ast$-algebras by defining
\begin{equation} \label{eq:AL-star}
H^\ast = H\qquad (X_i^\pm)^\ast = X_i^\mp\qquad \text{for $i=1,2$.}
\end{equation}
In this section we give an explicit invariant inner product on the representation space $\Gamma_0(L_\xi)$.
We assume $|\xi|=1$ and write $\xi=e^{2\pi\boldsymbol{i}\kappa}$. Every $f\in\Gamma_0(L_\xi)$ has the form
\begin{equation}
f(x,y) = \tilde{f}(x\alpha+y\beta)\exp\left(2\pi\boldsymbol{i} \frac{-xm-yn}{m^2+n^2}\kappa\right)
\end{equation}
for some unique function $\tilde{f}:F\to\mathbb{C}$ of finite support.
\begin{Theorem} $\Delta_\xi$ is pseudo-unitarizable. More precisely, there exists a weight function $\mathsf{w}:F\to\{+1,-1\}$ such that
\begin{equation} \label{eq:form-def}
\langle f,g\rangle = \sum_{\lambda\in F} \tilde{f}(\lambda)\overline{\tilde{g}(\lambda)} \mathsf{w}(\lambda)
\end{equation}
is a form on $\Gamma_0(L_\xi)$ satisfying (i)--(iv) of Definition \ref{def:inner-product}, i.e.\ an invariant inner product.
\end{Theorem}
\begin{proof}
Since $\mathcal{A}(\mathscr{L})$ is generated by $X_i^\pm$ and $H$, the form is invariant if and only if $\langle \Delta_\xi(X_i^\pm)f,g\rangle=\langle f,\Delta_\xi(X_i^\mp)g\rangle$ and $\langle \Delta_\xi(H)f,g\rangle = \langle f,\Delta_\xi(H)g\rangle$ for all $f,g$; the latter identity is immediate because $\Delta_\xi(H)$ is diagonal with real eigenvalues. We have
\begin{align*}
\langle \Delta_\xi(X_1^\pm)f,g\rangle &= \sum_{\lambda\in F} q_1(\lambda\mp \tfrac{\alpha}{2})\tilde{f}(\lambda\mp\alpha)\overline{\tilde{g}(\lambda)}\mathsf{w}(\lambda) \\
&=\sum_{\lambda\in F} q_1(\lambda\pm\tfrac{\alpha}{2})\tilde{f}(\lambda)\overline{\tilde{g}(\lambda\pm\alpha)} \mathsf{w}(\lambda\pm\alpha)\\
&=\sum_{\lambda\in F} \tilde{f}(\lambda)\overline{q_1(\lambda\pm\tfrac{\alpha}{2})\tilde{g}(\lambda\pm\alpha)}
\left(\frac{q_1(\lambda\pm\tfrac{\alpha}{2})}{|q_1(\lambda\pm\tfrac{\alpha}{2})|}\right)^2 \mathsf{w}(\lambda\pm \alpha)
\end{align*}
and similarly for $X_2^\pm$ and $\beta$ which leads to the conditions
\begin{equation}\label{eq:suff-cond-w}
\left(\frac{q_i(\lambda\pm\tfrac{\alpha_i}{2})}{|q_i(\lambda\pm\tfrac{\alpha_i}{2})|}\right)^2 \mathsf{w}(\lambda\pm \alpha_i)=\mathsf{w}(\lambda)\qquad \text{for $i=1,2$.}
\end{equation}
If \eqref{eq:suff-cond-w} holds, then the form $\langle\cdot,\cdot\rangle$ defined by \eqref{eq:form-def} is invariant.
Substituting \eqref{eq:q_i} into \eqref{eq:suff-cond-w} we obtain
\begin{equation} \label{eq:w-diff-eq}
\exp\left(2\pi\boldsymbol{i} l_i(\lambda\pm\tfrac{\alpha_i}{2})/2\right) \mathsf{w}(\lambda\pm\alpha_i)=\mathsf{w}(\lambda),\qquad i=1,2.
\end{equation}
To solve \eqref{eq:w-diff-eq} it suffices to find a phase function
\begin{equation}\label{eq:w-def}
\mathsf{w}(\lambda)=\exp\left(2\pi\boldsymbol{i} \omega(\lambda)\right)
\end{equation}
The system of difference equations for $\omega(\lambda)$ can then be written
\begin{equation}
\omega(\lambda\pm\alpha_i) \equiv_\mathbb{Z} \omega(\lambda) + \frac{1}{2} l_i(\lambda\pm\tfrac{\alpha_i}{2}),\qquad i=1,2
\end{equation}
where $a\equiv_\mathbb{Z} b$ iff $a-b\in\mathbb{Z}$.
For this system to have solutions, the $l_i$ must satisfy consistency conditions, which can be written as
\begin{equation} \label{eq:l-cong}
\frac{1}{2} l_1(\lambda-\tfrac{\alpha_2}{2})+\frac{1}{2}l_2(\lambda-\tfrac{\alpha_1}{2}) \equiv_\mathbb{Z}
\frac{1}{2} l_1(\lambda+\tfrac{\alpha_2}{2})+\frac{1}{2}l_2(\lambda+\tfrac{\alpha_1}{2})
\end{equation}
which actually holds as an equality due to the current conservation \eqref{eq:li-CMTE}.
This proves that the system of difference equations is consistent, and with the boundary condition $\omega(0)=0$ we obtain the unique solution
\begin{equation} \label{eq:omega-solution}
\omega\left(\pm(\alpha_{i_1}+\alpha_{i_2}+\cdots+\alpha_{i_k})\right)=
\frac{1}{2}\sum_{r=1}^k l_{i_r}\left(\pm(\alpha_{i_1}+\alpha_{i_2}+\cdots+\alpha_{i_{r-1}}+\tfrac{\alpha_{i_r}}{2})\right)
\end{equation}
for any sequence $\underline{i}=i_1i_2\ldots i_k\in\mathsf{Seq}_2$. Since $\mathbb{Z}\alpha_1+\mathbb{Z}\alpha_2=F$ and $m\alpha_1+n\alpha_2=0$, there are non-negative integers $a,b$ such that $F=\langle a\alpha_1+b\alpha_2\rangle$ as an abelian group, which proves that the elements $\pm(\alpha_{i_1}+\cdots+\alpha_{i_k})$ run through all of $F$. By \eqref{eq:l-cong}, the value of $\omega$ modulo $\mathbb{Z}$ is independent of $\underline{i}$. This gives a unique solution $\mathsf{w}(\lambda)$ to \eqref{eq:w-diff-eq} with $\mathsf{w}(0)=1$.
That the form is symmetric, $\langle f,g\rangle=\overline{\langle g,f\rangle}$, follows from the fact that $\mathsf{w}(\lambda)$ is real-valued. In fact $\mathsf{w}(\lambda)\in\{1,-1\}$ for all $\lambda\in F$, because the $l_i(\lambda)$ are integer-valued and hence $\omega(\lambda)\in\frac{1}{2}\mathbb{Z}$.
Finally, $\langle\cdot,\cdot\rangle$ is non-degenerate: if $\langle f,g\rangle=0$ for all $g$, we may pick $g=f_\lambda$, see \eqref{eq:fk}. Then $\tilde{f_\lambda}(\mu)=\delta_{\lambda,\mu}$, where $\delta$ is the Kronecker delta, hence
\[\langle f,f_\lambda\rangle = \pm \tilde{f}(\lambda),\qquad \forall \lambda\in F,\]
so $\tilde{f}(\lambda)=0$ for all $\lambda\in F$, which implies that $f$ is identically zero.
\end{proof}
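To make the weight function concrete, here is the smallest instance of \eqref{eq:omega-solution} (an illustration only, using the notation above):

```latex
\begin{Example}
For the one-step sequence $\underline{i}=1$, formula \eqref{eq:omega-solution} gives
$\omega(\alpha_1)=\tfrac{1}{2}\,l_1(\tfrac{\alpha_1}{2})$, hence by \eqref{eq:w-def}
\[\mathsf{w}(\alpha_1)=\exp\!\big(\pi\boldsymbol{i}\,l_1(\tfrac{\alpha_1}{2})\big)
 =(-1)^{l_1(\alpha_1/2)}.\]
Thus the form is positive or negative definite on the weight space of weight
$\alpha_1$ according to the parity of $l_1(\tfrac{\alpha_1}{2})$.
\end{Example}
```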
\begin{Remark}
This means we have produced an explicit isomorphism
$\Gamma_0(L_\xi)\cong \Gamma_0(L_\xi)^\#$, namely
$f\mapsto \langle f,\cdot\rangle$.
\end{Remark}
\begin{Remark}
When $|\xi|=1$, this gives the following independent proof that $\Delta_\xi(\mathcal{I})=0$.
Since $\mathcal{I}$ is a graded ideal with respect to the $\mathbb{Z}^2$-grading on $\mathcal{A}(\mathscr{L})$ it suffices to prove that $\mathcal{I}_d\in\ker\Delta_\xi$ for each $d\in\mathbb{Z}^2$. Let $a\in\mathcal{I}_d$. Then $a^\ast a=0$ by \cite[Thm.~3.11(ii)$\Rightarrow$(i)]{Har2016}. Thus we have for any $\lambda\in F$,
\[\langle \Delta_\xi(a)f_\lambda,\Delta_\xi(a)f_\lambda\rangle =
\langle \Delta_\xi(a^\ast a)f_\lambda,f_\lambda\rangle = 0.
\]
Since the form is non-degenerate, and all weight spaces are one-dimensional and pairwise orthogonal, there are no nonzero isotropic weight vectors. This implies that $\Delta_\xi(a)=0$.
\end{Remark}
\begin{Remark}\label{rem:J}
In the language of \cite{Bog1974}, the \emph{fundamental symmetry} operator for the (indefinite) inner product space $\Gamma_0(L_\xi)$,
\[J:\Gamma_0(L_\xi)\to \Gamma_0(L_\xi)\]
is given by
\[J f_\lambda = \mathsf{w}(\lambda) f_\lambda \qquad\text{for all $\lambda\in F$,}\]
and the $J$-eigenspace decomposition of $\Gamma_0(L_\xi)$
\[\Gamma_0(L_\xi)=\Gamma_0(L_\xi)^+\oplus \Gamma_0(L_\xi)^-\]
is the \emph{fundamental decomposition}. On the $+1$ (respectively $-1$) eigenspace the form $\langle\cdot,\cdot\rangle$ is positive (respectively negative) definite.
\end{Remark}
\section{Relation to simple integral weight modules} \label{sec:simple-weight}
In this section we prove that the representations $\Delta_\xi$ are completely reducible. Moreover, every simple integral weight module occurs as a subspace in $\Gamma_0(L_\xi)$ for some $\xi$.
\subsection{Unitarity of the Casimir} \label{sec:casimir}
In \cite{Har2016}, an $\mathcal{A}(\mathscr{L})$-centralizing element of the localization $\mathcal{A}(\mathscr{L})_{\mathrm{loc}}=\mathcal{A}(\mathscr{L})\otimes_{\mathbb{C}[H]}\mathbb{C}(H)$ was given.
\begin{Theorem}[{\cite[Prp.~6.3, Thm.~C]{Har2016}}] \label{thm:C}
Consider the element $C\in \mathcal{A}(\mathscr{L})_{\mathrm{loc}}$ given by
\begin{equation}\label{eq:C}
C=X(\underline{i})\prod_{\lambda\in F} (H-\lambda)^{-\ord(\underline{i},\lambda)}
\end{equation}
where $\underline{i}=i_1i_2\ldots i_{m+n}\in\mathsf{Seq}_2(m,n)$ is a sequence of $m$ $1$'s and $n$ $2$'s in any order, $X(\underline{i})=X_{i_{m+n}}^+\cdots X_{i_2}^+X_{i_1}^+$, and $\ord(\underline{i},\lambda)$ is the number (with multiplicity) of vertical (equivalently, horizontal) edges in $\mathscr{L}$ intersected by the face lattice path $\lambda, \lambda+\alpha_{i_1}, \lambda+\alpha_{i_1}+\alpha_{i_2}, \ldots, \lambda+\alpha_{i_1}+\alpha_{i_2}+\cdots +\alpha_{i_{m+n}}=\lambda$.
\begin{enumerate}[{\rm (a)}]
\item $C$ is independent of the choice of $\underline{i}$ from $\mathsf{Seq}_2(m,n)$.
\item $C$ belongs to the center of $\mathcal{A}(\mathscr{L})_\mathrm{loc}$. In particular, $[C,\mathcal{A}(\mathscr{L})]=0$.
\item $C\in\mathcal{A}(\mathscr{L})$ iff $\mathscr{L}$ is a five-vertex configuration (every vertex has at most two incident edges with nonzero multiplicity) in which case $Z(\mathcal{A}(\mathscr{L}))=\mathbb{C}[C,C^{-1}]$. Otherwise $Z\big(\mathcal{A}(\mathscr{L})\big)=\mathbb{C}$.
\end{enumerate}
\end{Theorem}
\begin{Definition}
We call the element $C$ the \emph{Casimir element} for $\mathcal{A}(\mathscr{L})$.
\end{Definition}
In \cite{Har2016} it was observed that $C^\ast C$ is a constant ($C^\ast C$ has $\mathbb{Z}^2$-degree $(0,0)$ and is thus a rational function in $H$; since it also commutes with the $X_i^\pm$, it is a constant, a priori some rational number). In this subsection we prove that this number in fact equals $1$, i.e.\ that $C$ is unitary. This further indicates that $C$ is a canonical object.
First we prove two lemmas.
\begin{Lemma}\label{lem:L-ord}
For any $\lambda\in F$ and $\underline{i}\in\mathsf{Seq}_2(m,n)$ we have (putting $\ell=\ell(\underline{i})=m+n$),
\begin{equation}
\sum_{j=1}^\ell \mathscr{L}_{i_j}(\lambda+\alpha_{i_1}+\alpha_{i_2}+\cdots+\alpha_{i_{j-1}}+\tfrac{\alpha_{i_j}}{2})=2\ord(\underline{i},\lambda)
\end{equation}
\end{Lemma}
\begin{proof}
The left-hand side counts the total number of vertical and horizontal edges in $\mathscr{L}$ (with multiplicity) that the face path $(\underline{i},\lambda)$ intersects. Since $\underline{i}$ is a loop, the number of horizontal edges from $\mathscr{L}$ that it intersects equals the number of vertical edges it crosses, and this common number is exactly the order of the path $(\underline{i},\lambda)$, see \cite[Lem.~5.11]{Har2016}.
\end{proof}
Put
\begin{equation}\label{eq:quni}
q_{\underline{i}}(H) = q_{i_1}(H+\tfrac{\alpha_{i_1}}{2})q_{i_2}(H+\alpha_{i_1}+\tfrac{\alpha_{i_2}}{2})\cdots q_{i_{\ell}}(H+\alpha_{i_1}+\alpha_{i_2}+\cdots+\alpha_{i_{\ell-1}}+\tfrac{\alpha_{i_\ell}}{2})
\end{equation}
and similarly for $p_{\underline{i}}(H)$.
Remarkably, even though each $q_j(H)$ defined in \eqref{eq:q_i} is locally a branch of the square root of the polynomial $P_j^{\mathscr{L}}(H)$, the following lemma shows that $q_{\underline{i}}(H)$ is a polynomial, provided the sequence $\underline{i}$ consists of $m$ 1's and $n$ 2's in any order.
\begin{Lemma} \label{lem:q-ord}
For any $\underline{i}\in\mathsf{Seq}_2(m,n)$ we have, as functions on $F$,
\begin{equation}\label{eq:q-ord}
q_{\underline{i}}(H) = \prod_{\lambda\in F} (H-\lambda)^{\ord(\underline{i},\lambda)}.
\end{equation}
\end{Lemma}
\begin{proof}
Put $p_i(H)=P_i^\mathscr{L}(H)$ and $\ell=\ell(\underline{i})=m+n$. Then
\begin{equation}\label{eq:qlem-pf0}
\big(q_{\underline{i}}(H)\big)^2 =
\prod_{j=1}^\ell p_{i_j}(H+\alpha_{i_1}+\cdots+\alpha_{i_{j-1}}+\tfrac{\alpha_{i_j}}{2}).
\end{equation}
Now note that
\begin{equation}\label{eq:qlem-pf1}
p_k(H+\tfrac{\alpha_k}{2}) = \prod_{e\in E_k} (H+\tfrac{\alpha_k}{2}-e)^{\mathscr{L}_k(e)}
\end{equation}
Since $E_k=F+\alpha_k/2$, we make the substitution $\lambda=e-\alpha_k/2$ to rewrite \eqref{eq:qlem-pf1} as
\begin{equation}\label{eq:qlem-pf2}
p_k(H+\tfrac{\alpha_k}{2}) = \prod_{\lambda\in F} (H-\lambda)^{\mathscr{L}_k(\lambda+\tfrac{\alpha_k}{2})}.
\end{equation}
Applying \eqref{eq:qlem-pf2} to each factor in \eqref{eq:qlem-pf0} we get
\begin{align*}
p_{\underline{i}}(H)&=\prod_{j=1}^\ell \prod_{\lambda\in F}
(H+\alpha_{i_1}+\cdots+\alpha_{i_{j-1}}-\lambda)^{\mathscr{L}_{i_j}(\lambda+\tfrac{\alpha_{i_j}}{2})}\\
&=\prod_{\lambda\in F} (H-\lambda)^{\sum_{j=1}^\ell \mathscr{L}_{i_j}(\lambda+\alpha_{i_1}+\cdots+\alpha_{i_{j-1}}+\tfrac{\alpha_{i_j}}{2})}\\
&=\prod_{\lambda\in F} (H-\lambda)^{2\ord(\underline{i},\lambda)}
\end{align*}
where we used Lemma \ref{lem:L-ord} in the last step.
It remains to show that both sides have the same sign. When evaluated at $H=\mu\in F$, both sides of \eqref{eq:q-ord} have the same zero set, namely the set of all $\mu\in F$ such that $\ord(\underline{i},\mu)>0$. So it suffices to show that both sides have the same sign when they are nonzero. In fact we will show that both sides are nonnegative at all $\mu\in F$.
In the left hand side we have by \eqref{eq:quni} and \eqref{eq:q_i},
\begin{align*}
q_{\underline{i}}(H) &=\prod_{j=1}^\ell q_{i_j}(H+\alpha_{i_1}+\cdots+\alpha_{i_{j-1}}+\tfrac{\alpha_{i_j}}{2}) \\
&=\exp\left(2\pi\boldsymbol{i}\sum_{j=1}^\ell l_{i_j}(H+\alpha_{i_1}+\cdots+\alpha_{i_{j-1}}+\tfrac{\alpha_{i_j}}{2})/4\right) \\
&\quad\cdot \prod_{j=1}^\ell |p_{i_j}(H+\alpha_{i_1}+\cdots+\alpha_{i_{j-1}}+\tfrac{\alpha_{i_j}}{2})|^{1/2}
\end{align*}
Setting $H=\mu$ in the exponent and dividing by $2\pi\boldsymbol{i}/2$, we get
\[\frac{1}{2}\sum_{j=1}^\ell l_{i_j}\left(\mu+\alpha_{i_1}+\cdots+\alpha_{i_{j-1}}+\tfrac{\alpha_{i_j}}{2}\right).\]
If $\mu=\alpha_{k_1}+\cdots+\alpha_{k_r}$ for some $\underline{k}=k_1k_2\cdots k_r\in\mathsf{Seq}_2$, then using \eqref{eq:omega-solution} this equals
\[\omega(\mu+\alpha_{i_1}+\cdots+\alpha_{i_\ell})-\omega(\mu)=0\]
since $\sum_{j=1}^\ell \alpha_{i_j}=m\alpha+n\beta=0$.
A similar argument can be made in the case of $\mu=-(\alpha_{k_1}+\cdots+\alpha_{k_r})$. This proves that
\begin{equation}
q_{\underline{i}}(\mu)\ge 0\qquad \text{for all $\mu\in F$.}
\end{equation}
Next we prove that the same is true for the product in the right hand side of \eqref{eq:q-ord}. Assuming $\mu$ is not a zero we have
\begin{equation}\label{eq:sgn-prod}
\sgn \left( \prod_{\lambda\in F} (\mu-\lambda)^{\ord(\underline{i},\lambda)}\right)=
\sgn\left(
\prod_{\lambda\in F, \lambda>\mu} (\mu-\lambda)^{\ord(\underline{i},\lambda)}\right)
=(-1)^{\sum_{\lambda\in F, \lambda>\mu} \ord(\underline{i},\lambda)}.
\end{equation}
Using Lemma \ref{lem:L-ord} we get
\[
\sum_{\substack{\lambda\in F \\ \lambda>\mu}} \ord(\underline{i},\lambda)
= \sum_{\substack{\lambda\in F \\ \lambda>\mu}} \frac{1}{2}\sum_{j=1}^\ell \mathscr{L}_{i_j}\left(\lambda+\alpha_{i_1}+\cdots+\alpha_{i_{j-1}}+\tfrac{\alpha_{i_j}}{2}\right).
\]
Interchanging the order of summation and making the change of variables $e=\lambda+\alpha_{i_1}+\cdots+\alpha_{i_{j-1}}+\tfrac{\alpha_{i_j}}{2}\in E_{i_j}$ we obtain
\[\frac{1}{2}\sum_{j=1}^\ell \sum_{\substack{e\in E_{i_j}\\ e>\mu+\alpha_{i_1}+\cdots+\alpha_{i_{j-1}}+\tfrac{\alpha_{i_j}}{2}}} \mathscr{L}_{i_j}(e).\]
Now use the definition \eqref{eq:li-def} of $l_j$ to get
\[\frac{1}{2}\sum_{j=1}^\ell l_{i_j}\left(\mu+\alpha_{i_1}+\cdots+\alpha_{i_{j-1}}+\tfrac{\alpha_{i_j}}{2}\right)\]
which as shown above equals zero. This proves that for any $\underline{i}\in\mathsf{Seq}_2(m,n)$,
\begin{equation}
\sum_{\substack{\lambda\in F\\ \lambda>\mu}} \ord(\underline{i},\lambda) = 0 \qquad\text{for all $\mu\in F$ such that $\ord(\underline{i},\mu)=0$}
\end{equation}
and hence by \eqref{eq:sgn-prod},
\begin{equation}
\prod_{\lambda\in F}(\mu-\lambda)^{\ord(\underline{i},\lambda)}\ge 0\qquad\text{for all $\mu\in F$.}
\end{equation}
This finishes the proof of the identity \eqref{eq:q-ord}.
\end{proof}
We now prove that $C$ given in \eqref{eq:C} is unitary.
\begin{Proposition} \label{prp:C-is-unitary}
The Casimir element for $\mathcal{A}(\mathscr{L})$ is unitary with respect to $\ast$. That is:
\begin{equation}\label{eq:C-is-unitary}
C^\ast \cdot C = 1 = C\cdot C^\ast
\end{equation}
\end{Proposition}
\begin{proof} Since $H^\ast=H$ and $F\subseteq\mathbb{R}$, we have
\[C^\ast C = \prod_{\lambda\in F}(H-\lambda)^{-\ord(\underline{i},\lambda)} X(\underline{i})^\ast X(\underline{i}) \prod_{\lambda\in F} (H-\lambda)^{-\ord(\underline{i},\lambda)}
=\prod_{\lambda\in F} (H-\lambda)^{-2\ord(\underline{i},\lambda)} X(\underline{i})^\ast X(\underline{i})\]
By a straightforward calculation (see e.g. proof of \cite[Lem.~5.5]{Har2016})
\[ X(\underline{i})^\ast X(\underline{i}) = p_{\underline{i}}(H) \]
where $p_{\underline{i}}(H)$ is as in \eqref{eq:quni} with $p_i=P_i^{\mathscr{L}}$.
By Lemma \ref{lem:q-ord},
\[p_{\underline{i}}(H)=(q_{\underline{i}}(H))^2 = \prod_{\lambda\in F}(H-\lambda)^{2\ord(\underline{i},\lambda)}.\]
This finishes the proof.
\end{proof}
\subsection{Pseudo-unitarizability of simple integral weight modules} \label{sec:pseudo-simple}
We recall the classification of simple integral weight $\mathcal{A}(\mathscr{L})$-modules from \cite{Har2016}.
Consider the space $\mathbb{T}_{m,n}=\mathbb{R}^2/\langle (m,n)\rangle$ equipped with the quotient topology. This space is homeomorphic to a doubly infinite cylinder. Let $\overline{\mathscr{L}}\subseteq \mathbb{T}_{m,n}$ be the configuration $\mathscr{L}$ regarded as a union of closed line segments. Let $\overline{\mathsf{F}}\subseteq \mathbb{T}_{m,n}$ be the image of $\mathbb{Z}^2$ under the canonical projection $\mathbb{R}^2\to\mathbb{T}_{m,n}$.
\begin{Theorem}[{\cite[Thm.~B]{Har2016}}] \label{thm:weight}
\begin{enumerate}[{\rm (a)}]
\item There is a bijective correspondence between the set of isoclasses of simple integral weight $\mathcal{A}(\mathscr{L})$-modules, and the set of pairs $(D,\xi)$ where $D$ is a connected component of $\mathbb{T}_{m,n}\setminus\overline{\mathscr{L}}$ and $\xi\in\mathbb{C}$ with $\xi=0$ iff $D$ is contractible.
\item Let $M(D,\xi)$ be the module corresponding to $(D,\xi)$. Each nonzero weight space $M(D,\xi)_\lambda$ is one-dimensional and
\[\Supp\big(M(D,\xi)\big)=\{x_1\alpha_1+x_2\alpha_2\mid (x_1,x_2)+\langle(m,n)\rangle\in \overline{\mathsf{F}}\cap D\}.\]
\item For any incontractible $D$ and $\xi\in\mathbb{C}^\times$ the action of the Casimir element $C$ for $\mathcal{A}(\mathscr{L})$ from \eqref{eq:C} is well-defined on $M(D,\xi)$ and $C|_{M(D,\xi)}=\xi\Id_{M(D,\xi)}$.
\end{enumerate}
\end{Theorem}
The following lemma is immediate: the support of an integral weight module is contained in $F$, which is a subset of $\mathbb{R}$, so for $f\in M^\#_\lambda$ we have $(Hf)(v)=f(H^\ast v)=f(Hv)=\bar\lambda f(v)=\lambda f(v)$, and $M^\#_\lambda$ is again a weight space of weight $\lambda$.
\begin{Lemma} \label{lem:support-of-finitistic-dual}
If $M$ is a simple integral weight $\mathcal{A}(\mathscr{L})$-module then $\Supp(M)=\Supp(M^\#)$.
\end{Lemma}
Using the unitarity of the Casimir $C$ from Proposition \ref{prp:C-is-unitary}, we obtain the following description of the pseudo-unitarizable simple integral weight $\mathcal{A}(\mathscr{L})$-modules.
\begin{Theorem} \label{thm:M-pseudo}
Let $D$ be a connected component of $\mathbb{T}_{m,n}\setminus\overline{\mathscr{L}}$.
\begin{enumerate}[{\rm (i)}]
\item If $D$ is contractible, then $M(D,0)$ is pseudo-unitarizable.
\item If $D$ is incontractible then $M(D,\xi)$ is pseudo-unitarizable if and only if $|\xi|=1$.
\end{enumerate}
\end{Theorem}
\begin{proof}
By Theorem \ref{thm:pseudo-unitarizability}, a simple weight module $M$ is pseudo-unitarizable if and only if $M^\#\simeq M$.
(i) Put $M=M(D,0)$. By Theorem \ref{thm:weight} and Lemma \ref{lem:support-of-finitistic-dual}, $\Supp(M^\#)=\Supp(M)$ corresponds to the contractible component $D$, so $M^\#\simeq M(D,0)=M$ and hence $M$ is pseudo-unitarizable.
(ii) By Theorem \ref{thm:weight} and Lemma \ref{lem:support-of-finitistic-dual}, for any $\xi\in\mathbb{C}^\times$ there exists $\xi^\#\in\mathbb{C}^\times$ such that $M(D,\xi)^\#\simeq M(D,\xi^\#)$. Recall that $\xi$ has the interpretation as being the eigenvalue of $C$. By Proposition \ref{prp:C-is-unitary}, $C^\ast=C^{-1}$ and thus for any $f\in M^\#$ and $v\in M$,
\[ (C f)(v)=f(C^\ast v) = f(\xi^{-1}v)=(\bar\xi^{-1}f)(v).\]
This proves that $\xi^\#=\bar\xi^{-1}$. By the classification theorem again, $M(D,\xi)\simeq M(D,\bar\xi^{-1})$ if and only if $\xi=\bar\xi^{-1}$ or equivalently, $|\xi|=1$.
\end{proof}
\subsection{Decomposition of $\Gamma_0(L_\xi)$ into irreducibles} \label{sec:semi}
Put
\begin{equation}
M_0=\bigoplus_D M(D,0)\qquad M_\xi = \bigoplus_{D'} M(D',\xi)
\end{equation}
where $D$ (respectively $D'$) runs over the set of contractible (respectively incontractible) connected components of $\mathbb{T}_{m,n}\setminus\overline{\mathscr{L}}$, and $\xi\in\mathbb{C}^\times$ is fixed.
\begin{Proposition} \label{prp:semi}
For any $\xi\in\mathbb{C}^\times$ there is an isomorphism of $\mathcal{A}(\mathscr{L})$-modules
\begin{equation}
\Gamma_0(L_{\xi}) = M_0\oplus M_\xi.
\end{equation}
\end{Proposition}
\begin{proof}
Each $H$-weight space of $\Gamma_0(L_\xi)$ is one-dimensional, spanned by $f_k$, $k\in\mathbb{Z}$.
For each connected component $D$ of $\mathbb{T}_{m,n}\setminus\overline{\mathscr{L}}$, there is a submodule of $\Gamma_0(L_\xi)$ whose support is exactly $D$.
By the characterizing properties of the simple integral weight modules from Theorem \ref{thm:weight}, it remains to prove that if $D$ is an incontractible component and $f_k$ is one of the basis vectors where $k\in F(D)$,
then $\Delta_\xi(C) f_k = \xi f_k$.
Let $\underline{i}\in\mathsf{Seq}_2(m,n)$. We have
\begin{multline*}
\big(\Delta_\xi(X(\underline{i}))f\big)(x,y) =
\big(\Delta_\xi (X_{i_\ell}^+\cdots X_{i_1}^+)f\big)(x,y) \\
\shoveleft{
=q_{i_\ell}(x\alpha+y\beta-\tfrac{\alpha_{i_\ell}}{2})q_{i_{\ell-1}}(x\alpha+y\beta-\alpha_{i_\ell}-\tfrac{\alpha_{i_{\ell-1}}}{2})\cdots }\\
\shoveright{
\cdots
q_{i_1}(x\alpha+y\beta-\alpha_{i_\ell}-\alpha_{i_{\ell-1}}-\cdots-\alpha_{i_2}-\tfrac{\alpha_{i_1}}{2})\cdot
f((x,y)-\boldsymbol{e}_{i_\ell}-\cdots-\boldsymbol{e}_{i_1}) }\\
=\big(q_{\underline{i}}(\tilde{H}) f\big)(x-m,y-n)
=\xi q_{\underline{i}}(\tilde{H}) \cdot f(x,y)
\end{multline*}
We have shown that as operators on $\Gamma_0(L_\xi)$ we have
\begin{equation} \label{eq:delta-on-Xi}
\Delta_\xi(X(\underline{i})) = \xi q_{\underline{i}}(\tilde{H})
\end{equation}
where $\tilde{H}=\Delta_\xi(H)$.
Consider the centralizing element $C\in\mathcal{A}(\mathscr{L})_{\mathrm{loc}}$ given by \eqref{eq:C}. We have
\begin{gather}
\Delta_\xi(C) = \Delta_\xi\left(X(\underline{i})\prod_{\lambda\in F}(H-\lambda)^{-\ord(\underline{i},\lambda)}\right) =
\xi q_{\underline{i}}(\tilde H) \prod_{\lambda\in F} (\tilde{H}-\lambda)^{-\ord(\underline{i},\lambda)} = \xi
\end{gather}
where we used Lemma \ref{lem:q-ord} in the last step.
We abused notation by applying $\Delta_\xi$ to an element of the localization, but the resulting operator is well-defined on any $f_\lambda$ for $\lambda$ in an incontractible component.
This finishes the proof.
\end{proof}
\section{On the signature of the invariant inner product on $M(D,\xi)$ and internal eight-vertex configurations} \label{sec:signature}
\subsection{Unitarizable simple integral weight $\mathcal{A}(\mathscr{L})$-modules}
By Theorem \ref{thm:weight} each simple integral weight $\mathcal{A}(\mathscr{L})$-module $M$ is isomorphic to $M(D,0)$ with $D$ contractible or $M(D,\xi)$ with $D$ incontractible and $\xi\in\mathbb{C}^\times$.
As we saw in Theorem \ref{thm:M-pseudo}, the former are always pseudo-unitarizable and the latter iff $|\xi|=1$.
In this case there is a unique up to nonzero real multiple admissible form $\langle \cdot,\cdot\rangle$ on $M$ by Theorem \ref{thm:pseudo-unitarizability}. Thus, identifying $M$ with a submodule of $\Gamma_0(L_\xi)$ as in Proposition \ref{prp:semi}, we may define the \emph{signature of $M$} to be
\begin{equation}
\sigma(M) = \{\dim M^+, \dim M^-\}
\end{equation}
where $M^\pm$ are the $\pm 1$ eigenspaces of the fundamental symmetry $J$ from Remark \ref{rem:J}. Concretely, this amounts to the formula
\begin{equation}\label{eq:s-def}
\sigma(M)=\{s_+,s_-\}\qquad
s_\pm = \#\{\lambda\in \Supp(M)\mid \text{$\pm\langle v,v\rangle> 0$ for all nonzero $v\in M_\lambda$}\}
\end{equation}
for some choice of invariant inner product $\langle\cdot,\cdot\rangle$. Changing the form to $-\langle\cdot,\cdot\rangle$ does not change $\sigma(M)$ as a set. Thus $\sigma(M)$ depends only on $M$ and not on the choice of invariant inner product on $M$.
We say that $M$ is \emph{definite} if $0\in \sigma(M)$. Thus $M$ is definite if and only if $M$ is unitarizable.
\begin{Lemma} \label{lem:signature}
The signs of the quadratic form $v\mapsto \langle v,v\rangle$ on two adjacent weight spaces, of weights $\lambda$ and $\lambda+\alpha_i$, agree if and only if $P_i^{\mathscr{L}}(e)>0$, where $e=\lambda+\alpha_i/2$ is the (midpoint of the) edge separating the weight spaces.
\end{Lemma}
\begin{figure}
\centering
\begin{tikzpicture}
\node[font=\scriptsize, below] at (0,0) {$\lambda$};
\draw (-2pt,-2pt) -- ( 2pt, 2pt);
\draw ( 2pt,-2pt) -- (-2pt, 2pt);
\node[font=\scriptsize, below] at (1,0) {$\lambda+\alpha_1$};
\draw (1cm-2pt,-2pt) -- (1cm+2pt, 2pt);
\draw (1cm+2pt,-2pt) -- (1cm-2pt, 2pt);
\draw[Red] (.5,.5) -- (.5,-.5);
\end{tikzpicture}
\caption{Adjacent weight spaces in the support of a simple integral weight module.}
\label{fig:Psign}
\end{figure}
\begin{proof}
Let $v_\lambda$ be a nonzero vector of weight $\lambda$. Then
\begin{equation}
\label{eq:P-norm-ratio}
\langle X_i^+ v_\lambda,\, X_i^+ v_\lambda\rangle
= \langle X_i^- X_i^+ v_\lambda,\, v_\lambda\rangle
= \langle P_i^\mathscr{L}\big(H+ \frac{\alpha_i}{2}\big)v_\lambda,\, v_\lambda\rangle
= P_i^\mathscr{L}\big(\lambda+\frac{\alpha_i}{2}\big)\langle v_\lambda, v_\lambda\rangle.
\end{equation}
Since the weight spaces are one-dimensional, $X_i^+v_\lambda$ spans the weight space of weight $\lambda+\alpha_i$ whenever it is nonzero, and $\langle v_\lambda,v_\lambda\rangle\neq 0$; hence by \eqref{eq:P-norm-ratio} the signs agree precisely when $P_i^\mathscr{L}(\lambda+\frac{\alpha_i}{2})>0$.
\end{proof}
Recall the cylinder $\mathbb{T}_{m,n}=\mathbb{R}^2/\langle(m,n)\rangle$. For any subset $D\subseteq \mathbb{T}_{m,n}$ we put
\begin{equation}
E_i(D) = \{x\alpha+y\beta\in E_i \mid (x,y)\in (\mathbb{Z}^2+\tfrac{1}{2}\boldsymbol{e}_i)\cap D\}
\end{equation}
where $\boldsymbol{e}_1=(1,0)$ and $\boldsymbol{e}_2=(0,1)$. Thus $E_1(D)$ (respectively $E_2(D)$) is the set of vertical (respectively horizontal) edges that when drawn in a fundamental domain in $\mathbb{R}^2$ have their midpoint inside $D$. Similarly we put
\begin{gather*}
F(D)=\{x\alpha+y\beta\in F\mid (x,y)\in \mathbb{Z}^2\cap D\}, \\
V(D)=\{x\alpha+y\beta\in V\mid (x,y)\in \big(\mathbb{Z}^2+\tfrac{1}{2}(\boldsymbol{e}_1+\boldsymbol{e}_2)\big)\cap D\}.
\end{gather*}
We call elements of these sets \emph{internal} edges, faces and vertices in $D$.
The following gives a characterization of unitarizable simple integral weight $\mathcal{A}(\mathscr{L})$-modules.
\begin{Proposition}
Let $M(D,\xi)$ be a pseudo-unitarizable simple integral weight $\mathcal{A}(\mathscr{L})$-module. Then the following statements are equivalent.
\begin{enumerate}[{\rm (i)}]
\item $M(D,\xi)$ is unitarizable.
\item $\mathsf{w}|_{F(D)}$ is constant where $\mathsf{w}$ was defined in \eqref{eq:w-def},\eqref{eq:omega-solution}.
\item $P_i^{\mathscr{L}}(e_i)>0$ for all $e_i\in E_i(D)$ and $i\in\{1,2\}$.
\item $l_i(e_i)\in 2\mathbb{N}$ for all $e_i\in E_i(D)$ and $i\in\{1,2\}$, where $l_i$ was defined in \eqref{eq:li-def}.
\item At each internal vertical (respectively horizontal) edge $e$ in $D$, there is an even number, counted with multiplicity, of vertical (respectively horizontal) edges of $\mathscr{L}$ whose midpoints lie above the line of slope $n/m$ through the midpoint of $e$.
\end{enumerate}
\end{Proposition}
\begin{proof}
(i)$\Leftrightarrow$(ii) was noted above. (ii)$\Leftrightarrow$(iii) follows from Lemma \ref{lem:signature}. (iii)$\Leftrightarrow$(iv) is immediate by \eqref{eq:PvsL}. Lastly (iv)$\Leftrightarrow$(v) follows from Remark \ref{rem:l}.
\end{proof}
\subsection{Internal eight-vertex configurations and the signature of $M(D,\xi)$} \label{sec:eight}
We turn to the final problem of calculating the signature of $M(D,\xi)$ as defined in the previous subsection.
Define the \emph{internal vertex configuration in $D$} to be $\mathscr{L}^D=(\mathscr{L}^D_1,\mathscr{L}^D_2)$ where $\mathscr{L}^D_i: E_i(D)\to \{1,-1\}$ is given by
\begin{equation}
\mathscr{L}_i^D(e) = \sgn P_i^\mathscr{L}(e)\qquad\text{for $e\in E_i(D)$ and $i=1,2$.}
\end{equation}
Note that $P_i^{\mathscr{L}}(e)\neq 0$ at every $e\in E_i(D)$. We interpret the value $+1$ as the edge being absent in the configuration $\mathscr{L}^D$, and $-1$ as an edge present of multiplicity one. In figures edges in $\mathscr{L}^D$ will be drawn dashed in red. We make the following observation.
\begin{Lemma}
Let $D$ be any connected component of $\mathbb{T}_{m,n}\setminus \overline{\mathscr{L}}$. Then $\mathscr{L}^D$ is an eight-vertex configuration in $D$. That is, at each internal vertex $v\in V(D)$, there are exactly eight possible local configurations, see Figure \ref{fig:eight-vertex}.
\end{Lemma}
\begin{proof}
At an internal vertex $v$, both sides of
\[
P_1^\mathscr{L}(v+\alpha_2/2)
P_2^\mathscr{L}(v+\alpha_1/2)=
P_1^\mathscr{L}(v-\alpha_2/2)
P_2^\mathscr{L}(v-\alpha_1/2)
\]
are nonzero. Taking signs on both sides, the number of incident edges at $v$ belonging to $\mathscr{L}^D$ (i.e.\ those with sign $-1$) must be even, which leaves exactly the eight local configurations of Figure \ref{fig:eight-vertex}.
\end{proof}
\begin{figure}[h]
\centering
\begin{tikzpicture}
\begin{scope}[color=Red,style=dashed,thick]
\draw ( 0,-4) -- ( 0,-2);
\draw (-1,-3) -- ( 1,-3);
\draw ( 2, 0) -- ( 4, 0);
\draw ( 3,-4) -- ( 3,-2);
\draw ( 6,-1) -- ( 6, 0) -- ( 7, 0);
\draw ( 5,-3) -- ( 6,-3) -- ( 6,-2);
\draw ( 9, 1) -- ( 9, 0) -- (10, 0);
\draw ( 8,-3) -- ( 9,-3) -- ( 9,-4);
\end{scope}
\fill ( 0, 0) circle (2pt);
\fill ( 0,-3) circle (2pt);
\fill ( 3 ,0) circle (2pt);
\fill ( 3,-3) circle (2pt);
\fill ( 6, 0) circle (2pt);
\fill ( 6,-3) circle (2pt);
\fill ( 9, 0) circle (2pt);
\fill ( 9,-3) circle (2pt);
\end{tikzpicture}
\caption{Local eight-vertex configurations.}
\label{fig:eight-vertex}
\end{figure}
We now describe an algorithm for determining the signature of a simple integral weight module $M(D,\xi)$ over a noncommutative Kleinian fiber product $\mathcal{A}(\mathscr{L})$.
Consider internal edges in $D$. For each vertical edge $e\in E_1(D)$, draw a straight line through the midpoint of the edge such that the line has slope $n/m$. Then count the number (with multiplicity) of vertical edges in $\mathscr{L}$ whose midpoint is above that line. If that number is odd, color the edge $e$ red. Otherwise leave it transparent. Repeat that for each vertical edge in $D$. Then carry out the analogous procedure for horizontal edges in $D$. After all internal edges of $D$ have been either colored red or left transparent, the red edges will form the eight-vertex configuration $\mathscr{L}^D$.
Removing the union $\overline{\mathscr{L}^D}$ of the line segments corresponding to red edges then breaks $D$ further into subcomponents $D^{(j)}$:
\begin{equation}\label{eq:subcomp}
D\setminus \overline{\mathscr{L}^D}=\bigsqcup_j D^{(j)}
\end{equation}
where $\sqcup$ means disjoint union. On each sum of weight spaces
\[M(D,\xi)^{(j)}=\bigoplus_{\lambda\in F(D^{(j)})} M(D,\xi)_\lambda\]
the invariant inner product is positive or negative definite, by Lemma \ref{lem:signature}. Moreover, if two connected components $D^{(j)}$ and $D^{(j')}$ are adjacent, then the invariant inner product is positive definite on one of them and negative definite on the other. Thus, two-coloring the decomposition \eqref{eq:subcomp} of $D$ and counting the number $c_i$ of connected components $D^{(j)}$ of each color $i\in\{+,-\}$ gives the signature of $M(D,\xi)$:
\begin{equation}
\sigma\big(M(D,\xi)\big)=\{c_+,\,c_-\}.
\end{equation}
In particular the signature is independent of $\xi$ when $|\xi|=1$.
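The counting step can be sketched similarly. Here cells of $D$ are encoded as unit squares $(i,j)$ and red edges as unordered pairs of adjacent cells; both encodings, as well as the convention of starting each piece of the component-adjacency graph with color $+$, are assumptions of this sketch:

```python
def signature(cells, red_edges):
    """Cut D along red edges, two-color the subcomponents, and return
    the pair of component counts (c_plus, c_minus).

    cells: set of unit squares (i, j) making up D.
    red_edges: set of frozensets {c, c'} of adjacent cells separated by
    a red edge.
    """
    def nbrs(c):
        i, j = c
        return [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]

    # Subcomponents: cells connected through transparent internal edges.
    comp, ncomp = {}, 0
    for start in sorted(cells):
        if start in comp:
            continue
        comp[start], stack = ncomp, [start]
        while stack:
            c = stack.pop()
            for d in nbrs(c):
                if (d in cells and d not in comp
                        and frozenset((c, d)) not in red_edges):
                    comp[d] = ncomp
                    stack.append(d)
        ncomp += 1

    # Adjacent subcomponents (sharing a red edge) get opposite signs.
    adj = {k: set() for k in range(ncomp)}
    for e in red_edges:
        c, d = tuple(e)
        if c in cells and d in cells:
            adj[comp[c]].add(comp[d])
            adj[comp[d]].add(comp[c])
    color = {}
    for k in range(ncomp):
        if k in color:
            continue
        color[k], stack = 1, [k]
        while stack:
            a = stack.pop()
            for b in adj[a]:
                if b not in color:
                    color[b] = -color[a]
                    stack.append(b)
    c_plus = sum(1 for k in range(ncomp) if color[k] == 1)
    return (c_plus, ncomp - c_plus)
```

For the strip of Figure \ref{fig:11-2} with $d=4$ cells and all three internal edges red, this returns $(2,2)$, in agreement with the formula of the first example in Section \ref{sec:examples}.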
\section{Examples} \label{sec:examples}
\begin{Example}
\begin{figure}[h]
\centering
\begin{tikzpicture}
\foreach \y in {0,1,...,6} {
\draw[help lines] (0 cm,\y cm) -- (1 cm,\y cm); }
\draw[dashed] (0,-.5 cm) -- (0,6.5cm);
\draw[dashed] (1,-.5 cm) -- (1,6.5cm);
\fill (0,1) circle (2pt);
\fill (1,2) circle (2pt);
\fill (0cm-2pt,4cm-2pt) rectangle (0cm+2pt,4cm+2pt);
\fill (1cm-2pt,5cm-2pt) rectangle (1cm+2pt,5cm+2pt);
\draw[thick,Blue] (0,0) -- (0,1) -- (1,1) -- (1,2);
\draw[thick,Blue] (0,4) -- (0,5) -- (1,5) -- (1,6);
\fill[pattern=north east lines, pattern color=Blue] (0,1) rectangle (1,2);
\fill[pattern=north east lines, pattern color=Blue] (0,3) rectangle (1,4);
\fill[pattern=north west lines, pattern color=Green] (0,2) rectangle (1,3);
\fill[pattern=north west lines, pattern color=Green] (0,4) rectangle (1,5);
\draw[thick,Red,dashed] (0,1) -- (0,4);
\draw[thick,Red,dashed] (1,2) -- (1,5);
\draw[thick,Red,dashed] (0,2) -- (1,2);
\draw[thick,Red,dashed] (0,3) -- (1,3);
\draw[thick,Red,dashed] (0,4) -- (1,4);
\node[font=\scriptsize, below right] at (0,5) {$D$};
\end{tikzpicture}
\caption{A $(1,1)$-periodic six-vertex configuration with two paths.}
\label{fig:11-2}
\end{figure}
Figure \ref{fig:11-2} shows a fundamental domain for $\mathbb{T}_{1,1}=\mathbb{R}^2/\langle(1,1)\rangle$ with the blue solid edges constituting the $d=4$ case of the $(1,1)$-periodic configuration $\mathscr{L}$ with
\[P_1^{\mathscr{L}}(u)=P_2^{\mathscr{L}}(u)=\big(u-\frac{1}{2}\big)\big(u-\frac{1}{2}-d\big)\]
where $d$ is a positive integer. It was shown in \cite{Har2016} that the corresponding noncommutative Kleinian fiber product $\mathcal{A}(\mathscr{L})$ is related to $d$-dimensional evaluation modules for the affine Lie algebra $A_1^{(1)}$ and to a finite W-algebra associated to $\mathfrak{sl}_4$. The complement $\mathbb{T}_{1,1}\setminus\overline{\mathscr{L}}$ consists of three connected components, one of which is finite and denoted $D$. Since $D$ has the homotopy type of a circle, there is a one-parameter family of $d$-dimensional simple integral weight $\mathcal{A}(\mathscr{L})$-modules $M(D,\xi)$. Coloring the internal edges as in the algorithm in Section \ref{sec:eight}, we find that all internal edges are red.
Thus if $|\xi|=1$ then $M(D,\xi)$ is pseudo-unitarizable with signature
\[
\sigma(M(D,\xi))=\begin{cases}\{d/2,d/2\}& \text{$d$ even}\\
\{(d-1)/2, (d+1)/2\}& \text{$d$ odd}
\end{cases}
\]
If, instead of $\ast$, we consider the ``Chevalley'' involution $\dagger$ on $\mathcal{A}(\mathscr{L})$ given by $(X_i^\pm)^\dagger = -X_i^\mp$, $H^\dagger=H$, then this is equivalent to changing the signs of the polynomials $P_i^\mathscr{L}(u)$; hence $\mathscr{L}^D$ changes into $-\mathscr{L}^D$, meaning that all internal edges in $D$ are now transparent. This recovers the well-known unitarizability of these loop modules regarded as modules over the affine Lie algebra $A_1^{(1)}$.
\end{Example}
\begin{Example}
\begin{figure}[h]
\centering
\begin{tikzpicture}
\foreach \y in {-1,0,...,2} {
\draw[help lines] (0 cm,\y cm) -- (5 cm,\y cm); }
\foreach \x in {1,...,4} {
\draw[help lines] (\x cm,-1.5 cm) -- (\x cm,2.5 cm); }
\draw[dashed] (0,-1.5 cm) -- (0,2.5cm);
\draw[dashed] (5,-1.5 cm) -- (5,2.5cm);
\fill (0,0) circle (2pt);
\fill (5,2) circle (2pt);
\draw[thick,Blue] (0,-1) -- (0,-.5pt) -- (2,-.5pt) -- (2,1) -- (5,1) -- (5,2);
\draw[thick,Blue] (0,.5pt) -- (1,.5pt) -- (1,1) -- (2,1) -- (2,2) -- (5,2);
\fill[pattern=north east lines, pattern color=Blue] (3,1) rectangle (5,2);
\fill[pattern=north west lines, pattern color=Green] (2,1) rectangle (3,2);
\draw[thick,Red,dashed] (3,2) -- (3,1);
\node[font=\scriptsize, below right] at (1,1) {$D_1$};
\node[font=\scriptsize, below right] at (2,2) {$D_2$};
\end{tikzpicture}
\caption{A fundamental domain for a $(5,2)$-periodic configuration $\mathscr{L}$ consisting of two vertex paths. $M(D_1,0)$ is unitarizable, while $M(D_2,0)$ has signature $\{1,2\}$.}
\label{fig:52-2}
\end{figure}
Consider the $(5,2)$-periodic higher spin vertex configuration $\mathscr{L}$ in Figure \ref{fig:52-2} consisting of the two vertex lattice paths $1121112$ and $1212111$ ($1$ being a step right and $2$ being a step up) with the same starting point. Removing these line segments from the doubly infinite cylinder $\mathbb{T}_{5,2}=\mathbb{R}^2/\langle(5,2)\rangle$ leaves four connected components, two of which are finite, $D_1$ of area $1$ and $D_2$ of area $3$. Thus the noncommutative Kleinian fiber product $\mathcal{A}(\mathscr{L})$ has exactly two finite-dimensional simple modules, the one-dimensional $M(D_1,0)$ and the three-dimensional $M(D_2,0)$. The former is unitarizable (since there are no internal edges to check) while the latter has one red internal vertical edge, meaning an edge where $P_1(e)<0$. By the algorithm in Section \ref{sec:eight} this implies that the signature of $M(D_2,0)$ is $\{1,2\}$.
\end{Example}
\begin{Example}
\begin{figure}[h]
\centering
\begin{tikzpicture}
\foreach \y in {-1,0,...,5} {
\draw[help lines] (0 cm,\y cm) -- (5 cm,\y cm); }
\foreach \x in {1,...,4} {
\draw[help lines] (\x cm,-1.5 cm) -- (\x cm,5.5 cm); }
\draw[dashed] (0,-1.5 cm) -- (0,5.5cm);
\draw[dashed] (5,-1.5 cm) -- (5,5.5cm);
\fill (0,0) circle (2pt);
\fill (5,2) circle (2pt);
\fill (0cm-2pt,3cm-2pt) rectangle (0cm+2pt,3cm+2pt);
\fill (5cm-2pt,5cm-2pt) rectangle (5cm+2pt,5cm+2pt);
\draw[thick,Blue] (0,-1) -- (0,0) -- (3,0) -- (3,1) -- (5,1) -- (5,2);
\draw[thick,Blue] (0,0) -- (0,1) -- (2,1) -- (2,2) -- (5,2) -- (5,3);
\draw[thick,Blue] (0,3) -- (1,3) -- (1,4) -- (2,4) -- (2,5) -- (5,5);
\fill[pattern=north east lines, pattern color=Blue] (0,1) rectangle (2,2);
\fill[pattern=north east lines, pattern color=Blue] (1,3) rectangle (5,4);
\fill[pattern=north east lines, pattern color=Blue] (2,4) rectangle (3,5);
\fill[pattern=north west lines, pattern color=Green] (0,2) rectangle (5,3);
\fill[pattern=north west lines, pattern color=Green] (3,4) rectangle (5,5);
\draw[thick,Red,dashed] (0,2) -- (2,2);
\draw[thick,Red,dashed] (1,3) -- (5,3);
\draw[thick,Red,dashed] (3,5) -- (3,4) -- (5,4);
\node[font=\scriptsize, below right] at (0,1) {$D_1$};
\node[font=\scriptsize, below right] at (0,3) {$D_2$};
\end{tikzpicture}
\caption{A $(5,2)$-periodic example with $\mathscr{L}$ consisting of three vertex paths. $M(D_1,0)$ is definite, while $M(D_2,\xi)$ has signature $\{7,7\}$ for each $\xi\in\mathbb{C}^\times$, $|\xi|=1$.}
\label{fig:52-3}
\end{figure}
In Figure \ref{fig:52-3}, $\mathcal{A}(\mathscr{L})$ has one $6$-dimensional simple module $M(D_1,0)$ and a one-parameter family of $14$-dimensional simple modules $M(D_2,\xi)$, $\xi\in\mathbb{C}^\times$. Using the algorithm in Section \ref{sec:eight} one checks that the module $M(D_1,0)$ is unitarizable. For $|\xi|=1$ the module $M(D_2,\xi)$ is pseudo-unitarizable of signature $\{7,7\}$.
\end{Example}
\begin{Example}
\begin{figure}[h]
\centering
\begin{tikzpicture}
\foreach \y in {0,1,...,5} {
\draw[help lines] (0 cm,\y cm) -- (7 cm,\y cm); }
\foreach \x in {1,...,6} {
\draw[help lines] (\x cm,-.5 cm) -- (\x cm,5.5 cm); }
\draw[dashed] (0,-.5 cm) -- (0,5.5cm);
\draw[dashed] (7,-.5 cm) -- (7,5.5cm);
\fill (0,0) circle (2pt);
\fill (7,3) circle (2pt);
\fill (-2pt,2cm-2pt) rectangle (2pt,2cm+2pt);
\fill (7cm-2pt,5cm-2pt) rectangle (7cm+2pt,5cm+2pt);
\fill[pattern=north west lines, pattern color=Green] (0,1) rectangle (4,2);
\fill[pattern=north west lines, pattern color=Green] (6,4) rectangle (7,5);
\fill[pattern=north east lines, pattern color=Blue] (0,0) rectangle (2,1);
\fill[pattern=north east lines, pattern color=Blue] (4,3) rectangle (7,4);
\fill[pattern=north east lines, pattern color=Blue] (5,4) rectangle (6,5);
\draw[thick,Blue] (0,0) -- (2,0) -- (2,1) -- (4cm+.5pt,1) -- (4cm+.5pt,3) -- (7,3);
\draw[thick,Blue] (0,2) -- (4cm-.5pt,2) -- (4cm-.5pt,4) -- (5,4) -- (5,5) -- (7,5);
\draw[thick,Red,dashed] (0,1) -- (2,1);
\draw[thick,Red,dashed] (6,5) -- (6,4) -- (7,4);
\node[font=\scriptsize, below right] at (0,2) {$D$};
\end{tikzpicture}
\caption{A $(7,3)$-periodic configuration consisting of two paths. The signature of $M(D,0)$ is $\{5,6\}$.}
\label{fig:73-2}
\end{figure}
Figure \ref{fig:73-2} shows a $(7,3)$-periodic configuration $\mathscr{L}$ such that the algebra $\mathcal{A}(\mathscr{L})$ has a unique finite-dimensional simple module $M(D,0)$. This module has dimension $11$ and signature $\{5,6\}$.
\end{Example}
\bibliographystyle{siam}
\section{Introduction}
Glitches are sudden spin-ups observed in the otherwise decreasing rotational frequency of a pulsar \citep{lyn00}. Their origin is still debated: the giant spin-ups observed in the twenty known Vela-like glitchers \citep{esp11} could indicate the presence of bulk superfluidity inside these stars. In this scenario, giant glitches would represent the natural macroscopic outcome of the interaction between quantized neutron vortex lines, which carry the angular momentum of the rotating chargeless superfluid, and the Coulomb lattice of neutron-rich nuclear clusters, which coexists with the neutron superfluid in the inner crust \citep{nv73}. Indeed, this interaction can pin vortices to the normal component of the star, thus freezing the superfluid vorticity and storing its angular momentum. Only when the hydrodynamical lift on a vortex (Magnus force), which increases as the pulsar slows down, equals the pinning force on a line is the vortex unbound from the lattice: then, free to move under the action of drag forces, it can transfer its angular momentum to the normal component of the star. According to \citet{ai75}, giant glitches are due to the sudden and simultaneous depinning of a large number of accumulated vortices, followed by the rapid transfer of their angular momentum to the observable {\em normal} crust (which consists of the outer crust plus all the other charged components in the star, electrons, protons, and nuclear clusters, strongly coupled together by the pulsar magnetic field). Such a storage and trigger mechanism would have a natural periodicity, as indeed observed in Vela \citep{dod07}.
The vortex scenario for glitches was roughly compared to existing observations in a simple but instructive toy-model, which assumed a cylindrical, uniform-density star with a cylindrical pinning shell (corresponding to the inner crust) close to the surface \citep{pin80, alp81, and82}. Although the naive treatment of the vortex-nucleus and vortex-lattice interactions gave pinning forces three orders of magnitude larger than required to explain the average interval between glitches observed in Vela ($\Delta t_{\rm gl}\approx3$ years), the model predicted the correct orders of magnitude for the typical glitch parameters known at the time, namely the jump in angular velocity, $\Delta\Omega_{\rm gl}\approx10^{-6}\Omega$, and the jump in angular acceleration, $\Delta\dot{\Omega}_{\rm gl}\approx10^{-2}\dot{\Omega}$ (the pre-glitch, steady-state parameters for Vela being $\Omega_{\rm Vela}=70$ Hz and $\dot{\Omega}_{\rm Vela}=-9.8\times10^{-11}$ Hz s$^{-1}$). In spite of these positive preliminary results, \citet{pin80} carefully pointed out that the effects of some crucial corrections had to be taken into account before drawing any conclusion: the spherical geometry of the star, the radial density profile required by gravitational equilibrium, the density dependence of the pinning interaction, and the presence of different superfluid phases along the star profile. To date, however, this has not been done in any coherent and consistent model; thus, the explanation of giant glitches in terms of vortices is not yet tested against observations, leaving the origin of these spin-ups an open question.
The post-glitch recovery of pulsars, on the other hand, has been successfully interpreted in terms of vortex motion under drag forces. Early on, the phenomenological model of \citet{bay69} explained the slow relaxation to steady-state following a glitch as due to the weak interaction between a normal and a superfluid component, each rotating rigidly. Following a glitch, the response of the model is linear in the initial perturbation, and relaxes back to steady-state conditions exponentially, with a relaxation time which is inversely proportional to the strength of the interaction between the two components. The simple two-component model was then reformulated in terms of vortex motion, to allow for differential rotation of the superfluid \citep{pin80}. Eventually, two scenarios were developed to describe vortex dynamics between glitches: thermally-activated creep of strongly pinned vortices \citep{alp84a,alp84b,lin93} or corotation of unpinned vortices under weak drag forces \citep{jon90, jon91,jon93}. The vortex creep model was motivated by the large pinning forces obtained in early calculations \citep{alp77,eb88}. Later on, however, the microscopic vortex-nucleus interaction was shown to be one order of magnitude smaller than previously found \citep{dpI,dpII,dpIII}. Moreover, \citet{jon92} argued that the mesoscopic vortex-lattice interaction, necessary to calculate the macroscopic pinning force on a vortex line, is likely to be a factor $\alpha_1\sim10^{-2}$ smaller than naively assumed in early calculations, due to the random orientation of the macro-crystals forming the inner crust, as well as to the rigidity of vortex lines on distances of order $10^2-10^3R_{\rm ws}$ (with $R_{\rm ws}$ the radius of the Wigner-Seitz cells describing the nuclear lattice).
Significantly smaller pinning forces favor the corotation model, where unpinned vortex lines are weakly coupled to the normal crust by small drag forces; thence, in steady-state they (nearly) corotate with the superfluid, while the response to perturbations is linear. Finally, the Christmas 1988 Vela glitch \citep{fl90} showed that the creep model can fit observations only in the linear response regime \citep{alp89}, in which case it is equivalent to the corotation model, but with a different temperature-dependence of the drag parameters. Observationally, the post-glitch recovery of Vela is well described by a sum of exponential terms with different amplitudes and relaxation times \citep{dod02}, and this can be explained in terms of linear response of regions of the superfluid characterized by different drag parameters. The dissipative force has been evaluated for several densities of interest, and the corresponding drag parameters yield relaxation times and glitch rise-times compatible with observations \citep{jon90,jon92,eb92}.
Three aspects of current post-glitch models are relevant here: \\
{\em i)} although simulations successfully reproduce the observed recovery of Vela \citep{lar02}, the glitch itself is always introduced by hand, as an ad hoc initial condition. \\
{\em ii)} core and crust vortices are taken as physically disconnected, namely a layer of normal matter is {\em assumed} between the S-wave neutron superfluid found at subnuclear densities and the P-wave one found above nuclear saturation \citep{jon90}. Recent microscopic calculations, however, do not show any discontinuity of this kind \citep{zgap04}, thus indicating continuous vortex lines throughout the star. \\
{\em iii)} in the core, the superconducting state of protons determines in part the mutual friction on neutron vortex lines. A type II superconductor corresponds to the strong-drag limit, with vortices entangled in a dense array of magnetic flux tubes \citep{lin03}; for a type I superconductor, instead, both the weak- and strong-drag limits have been suggested \citep{sed05,jon06}. To date, the microscopic nature of the proton superconductor is far from settled; therefore, both scenarios of weak and strong mutual friction in the core should be taken into account in the study of glitches. Post-glitch models, however, are not affected by this theoretical uncertainty, since they involve mostly vortices in the equatorial regions, lying entirely in the inner crust.
A typical neutron star has total moment of inertia $I_{\rm tot}\approx10^{45}$ g cm$^2$, while its inner crust has $I_{\rm ic}\approx10^{-2}I_{\rm tot}$. The accidental coincidence of the ratio $I_{\rm ic}/I_{\rm tot}$ with the early observations of $\Delta\dot{\Omega}_{\rm gl}\approx10^{-2}\dot{\Omega}$ and with the fact that about $1.7\%$ of the Vela spin-down is reversed during a glitch, led to the view that glitches are related to crust vorticity alone, and thence to the assumption of disconnected neutron superfluids. This scenario, however, has direct implications for the glitch energetics. Indeed, since vortex lines in the core are strongly coupled to the normal component, being magnetized by entrainment effects \citep{alp84c}, the normal crust comprises most of the star and $I_{\rm c}\approx I_{\rm tot}$. This implies that the angular momentum transferred during the glitch is $\Delta L_{\rm gl}=I_{\rm c}\Delta\Omega_{\rm gl}\approx10^{41}$ erg s, and the corresponding glitch energy is $\Delta E_{\rm gl}=\Delta L_{\rm gl}\Omega\approx10^{43}$ erg; both values appear too large. On the one hand, $10^{41}$ erg s corresponds to the difference in angular momentum of the entire inner crust between glitches, thus requiring some unlikely mechanism that freezes the vorticity {\em everywhere} in the crust for about 3 years, and then releases it simultaneously. On the other hand, observations of the pulsar wind nebula surrounding Vela indicate an upper limit of $\sim10^{42}$ erg to the glitch energy \citep{hel01}.
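As a quick order-of-magnitude check of this estimate (a sketch using only the round numbers quoted above):

```python
# Round numbers quoted above for a Vela-like glitch in the
# crust-only scenario (CGS units, Omega in the paper's convention).
I_c = 1e45           # g cm^2, normal component ~ whole star
dOmega_gl = 1.2e-4   # Hz, typical Vela glitch jump
Omega = 70.0         # Hz, Vela spin rate

dL_gl = I_c * dOmega_gl   # angular momentum transferred at the glitch
dE_gl = dL_gl * Omega     # corresponding glitch energy
```

This gives $\Delta L_{\rm gl}\approx1.2\times10^{41}$ erg s and $\Delta E_{\rm gl}\approx8\times10^{42}$ erg, at the limits discussed above.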
In this letter we present the first realistic (with several approximations, but still preserving the essential physics) and consistent model to determine where in the star the vorticity is pinned, how much of it, and for how long. The model has been tested against observations using realistic equations of state (EoS) for dense matter and implementing general relativistic hydrostatic equilibrium (Pizzochero, Seveso \& Haskell, in preparation). Moreover, initial dynamical simulations based on the multifluid formalism of \citet{ac06} confirm the main assumptions and predictions of the model \citep{hps11}. Here, however, we will discuss a fully analytical, Newtonian version of the model: it yields the correct orders of magnitude for all relevant variables, but provides deeper insight than any numerical treatment.
\section{The model}
We now outline the main assumptions of the model and present the resulting equations; details and calculations, together with a parameter study of the solutions, will be given in a longer article (Pizzochero, in preparation; from now on, Paper I).
We describe the core and inner crust as an $n=1$ polytrope, of mass $M$ and radius $R_s$. The actual radius of the star will be larger, because of the overlying outer crust; its presence, however, can be ignored here, since it contributes negligibly to the mass and moment of inertia of the normal component. The polytropic relation $P\propto \rho^2$ is a very soft EoS for dense matter; realistic soft EoSs yield $R_s\approx10$ km for $M=1.4 M_{\odot}$. The density profile is $u=\sin(\pi \xi)/(\pi \xi)$, with the dimensionless radius $\xi=r/R_s$ and the density $u=\rho/\lambda$, normalized to its central value $\lambda=\pi M/(4R_s^3)$. The radius of the core, $R_c$, corresponds to the density $\rho_c=0.6\rho_0$ (in units of nuclear saturation density, $\rho_0=2.8\times10^{14}$ g cm$^{-3}$), where nuclei merge into nuclear matter. The inner crust has $\xi>x_c$, where $x_c=R_c/R_s>0.9$; for $\xi>0.9$, the approximation $u=1-\xi$ is sufficiently accurate (Figure \ref{fig1}, left). The total moment of inertia is
\begin{equation}
I_{\rm tot}=\frac{2(\pi^2-6)}{3\pi^2}MR_s^2=0.26MR_s^2, \label{eq1}
\end{equation}
while the inner crust has $M_{\rm ic}=4.9(1-x_c)^2M$ and $I_{\rm ic}=12.6(1-x_c)^2I_{\rm tot}$.
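These coefficients can be checked by direct quadrature over the polytropic profile; the sketch below (pure Python, midpoint rule; the inner-crust coefficients are evaluated at leading order in $1-x_c$ using the surface approximation $u=1-\xi$) is illustrative only:

```python
import math

def coeff_I(n=100000):
    """I_tot/(M R_s^2) for the n=1 polytrope, by midpoint-rule quadrature:
    I = (8 pi/3) lam R_s^5 * int_0^1 u(x) x^4 dx, with u = sin(pi x)/(pi x)
    and lam = pi M/(4 R_s^3), in units M = R_s = 1."""
    s = 0.0
    for k in range(n):
        x = (k + 0.5) / n
        s += math.sin(math.pi * x) / (math.pi * x) * x**4 / n
    return (8 * math.pi / 3) * (math.pi / 4) * s

c_I = coeff_I()                 # -> 2*(pi^2 - 6)/(3*pi^2) = 0.2614
# Leading order in (1 - x_c), with u = 1 - x near the surface:
c_Mic = math.pi**2 / 2          # M_ic  ~ 4.93 (1 - x_c)^2 M
c_Iic = (math.pi**2 / 3) / c_I  # I_ic ~ 12.6  (1 - x_c)^2 I_tot
```

The quadrature reproduces the prefactor $0.26$ of equation (\ref{eq1}), and the leading-order coefficients match the inner-crust scalings quoted above.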
The standard {\em superfluid fraction}, $Q$, is introduced to describe the protons and nuclei of the normal crust, $I_c=(1-Q)I_{\rm tot}$, and the neutron superfluid component, $I_s=QI_{\rm tot}$, of the star. Although the neutron fraction varies with density, a typical average value is $Q\approx0.95$ \citep{zxp04}. Regarding proton superconductivity in the core, here we choose the weak-drag limit. The model, however, gives reasonable results for the jump parameters also in the strong-drag limit; the decoupling of the core vorticity, pinned by flux-tubes, reduces both $\Delta L_{\rm gl}$ and the moment of inertia responding at the glitch (cf. Paper I).
The density dependence of $f_{\rm pin}(\rho)$, the (mesoscopic) pinning force per unit length, is taken as in Figure \ref{fig1}(right). This is a reasonable first approximation for a parameter study in terms of the maximum value, $f_m$; indeed, pinning goes to zero at $\rho_c$ (no more nuclei) and at neutron drip $\rho_d=0.0015\rho_0$ (no more neutrons), while it is expected to be maximum around densities where the pairing gap peaks. Moreover, we have performed a numerical simulation to evaluate $f_{\rm pin}(\rho)$ in a bcc lattice, with random crystal orientations and proper vortex rigidity. We obtain profiles compatible with Figure \ref{fig1}, with maximum values of order $f_m\approx10^{15}$ dyn cm$^{-1}$ at densities $\rho_m\approx0.2\rho_0$. We also find, as already noted by \citet{lin09}, that attractive and repulsive vortex-nucleus interactions are equivalent for pinning vortices to the lattice (Grill \& Pizzochero, in preparation).
The geometry of the model is shown in Figure \ref{fig2}. The star spins around the $z$-axis, and continuous vortex lines are assumed through the core. Following the results of \citet{rud74}, we can reduce the problem to axial symmetry by integrating the density-dependent quantities along the vortex lines; these quantities will then depend only on the {\em cylindrical} radius $x=R/R_s$, with $R$ the distance from the rotational axis.
We distinguish two cylindrical zones, separated by $x_c=R_c/R_s$: the \lq{crust}\rq{} ($x>x_c$), with vortices lying entirely in the inner crust, and the \lq{core}\rq{} ($x<x_c$), with vortices crossing the star core. In particular, we can integrate the pinning and Magnus forces to obtain an estimate of their total values on a vortex. If $\omega(x)=\Omega_s(x)-\Omega$ indicates the lag between the local superfluid angular velocity and that of the rigid normal crust, the critical lag for depinning, $\omega_{\rm cr}(x)$, is obtained by equating these two forces
\begin{eqnarray}
\int_v{\rm d}z\,f_{\rm pin}[\rho(x,z)]=x\omega_{\rm cr}(x)\kappa\int_v{\rm d}z\,\rho(x,z),
\end{eqnarray}
where $\kappa=\pi\hbar/m_N$.
In Figure \ref{fig3}, we show the resulting profile: $\omega_{\rm cr}(x)$ presents a sharp peak in the \lq{crust}\rq{}, with maximum $\omega_{\rm max}$ located very close to $x_m=1-u_m=1-\rho_m/\lambda$; we will take $\rho_m=0.2\rho_0$. In most of the \lq{core}\rq{}, instead, $\omega_{\rm cr}(x)$ has a roughly uniform value $\omega_{\rm min}\approx10^{-2}\omega_{\rm max}$ (as $x\rightarrow0$ it diverges; similarly to the outer crust, however, this region can be neglected). We find (cf. Paper I)
\begin{eqnarray}
\omega_{\rm max}=\omega_{\rm cr}(x_m)=\frac{4}{\kappa}\frac{R_s^2}{M}\frac{g_{\rm pin}(x_m)}{g_{\rm mag}(x_m)}f_m, \label{eq2}
\end{eqnarray}
where
\begin{mathletters}
\begin{eqnarray}
g_{\rm pin}(x)&=&\frac{1}{2(1-x)}\left[\sqrt{1-x^2}-x^2\ln\left(\frac{1+\sqrt{1-x^2}}{x}\right)\right] \\
g_{\rm mag}(x)&=&\pi x\left[\ln\left(\frac{1+\sqrt{1-x^2}}{x}\right)-\sqrt{1-x^2}\right].
\end{eqnarray}
\end{mathletters}
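Anticipating the numerical values adopted in Section 3, equation (\ref{eq2}) is straightforward to evaluate; the sketch below (CGS units; the inputs $f_m=1.1\times10^{15}$ dyn cm$^{-1}$ and $\rho_m=0.2\rho_0$ are those quoted in the text) reproduces $\omega_{\rm max}\approx0.01$ Hz:

```python
import math

# CGS constants and the standard star of Section 3 (assumed inputs)
hbar, m_N, M_sun = 1.0546e-27, 1.6749e-24, 1.989e33
M, R_s = 1.4 * M_sun, 1.0e6
kappa = math.pi * hbar / m_N            # vorticity quantum

lam = math.pi * M / (4 * R_s**3)        # central density of the polytrope
rho_m = 0.2 * 2.8e14                    # density of the pinning peak
x_m = 1.0 - rho_m / lam                 # cylindrical radius of the peak

def g_pin(x):
    s = math.sqrt(1 - x**2)
    return (s - x**2 * math.log((1 + s) / x)) / (2 * (1 - x))

def g_mag(x):
    s = math.sqrt(1 - x**2)
    return math.pi * x * (math.log((1 + s) / x) - s)

f_m = 1.1e15                            # dyn/cm, maximum pinning force
omega_max = (4 / kappa) * (R_s**2 / M) * (g_pin(x_m) / g_mag(x_m)) * f_m
```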
In the \lq{crust}\rq{}, this estimate of $\omega_{\rm cr}(x)$ should be reasonable, since pinning is continuous along the vortices \citep{rud74,jon90}. In the \lq{core}\rq{}, instead, pinning is discontinuous: vortex lines are attached to the lattice only at their extremities, while most of their length lies in a pinning-free region (having selected the weak-drag limit). We can expect individual string-like excitations of the pinned vortices, which could detach them from the lattice well before $\omega_{\rm min}$ is reached. Indeed, the collective rigidity of vortex bundles in coherent motion, which explains the axial symmetry predicted by the Taylor-Proudman theorem \citep{rud74}, is actually not observed in laboratory experiments with superfluid vortices attached to the rotating vessel only at their ends \citep{ada85}.
Although this issue requires and deserves further study, the crucial point is that vortices in the \lq{core}\rq{} are pinned very weakly. On the other hand, drag forces due to magnetization correspond to very short relaxation times and thence very small steady-state lags \citep{alp84c}. Moreover, \citet{lin09} has shown that vortex repinning is dynamically possible if the lag falls below a critical value (smaller than the critical lag for depinning). These considerations and the profile in Figure \ref{fig3} naturally suggest the following scenario: as the star slows down, vortices in the \lq{core}\rq{} are continuously depinned and then rapidly repinned; this {\em dynamical creep} allows a steady removal of the excess vorticity on short effective timescales $\tau_c$. Although the value of $\tau_c$ is not relevant here, dynamical simulations of Vela glitches suggest $\tau_c\sim10^0-10^1$ s \citep{hps11}, compatible with mutual friction dominated by vortex magnetization. On timescales $\Delta t\lesssim\tau_c$ (e.g., during a glitch), the \lq{core}\rq{} vorticity is only partially coupled to the normal crust (the rest being pinned or responding on longer timescales), and only a detailed study of the dynamics can provide a direct estimate of the coupled fraction. On longer timescales $\Delta t\gg\tau_c$, however, the dynamical creep ensures full effective coupling of the two components with (average) lag of order $|\dot{\Omega}|\tau_c$; in steady-state, this scenario is then equivalent to the corotation model.
The excess \lq{core}\rq{} vorticity will be repinned in the \lq{crust}\rq{}, where pinning increases rapidly by orders of magnitude. At any time $t$ after a glitch, the lag $\omega(t)=|\dot{\Omega}_{\infty}|t$ defines a radial distance $x(t)$ as in Figure \ref{fig3}; here $\dot{\Omega}_{\infty}$ indicates the steady-state (pre-glitch) angular acceleration. We now assume that the excess vorticity, corresponding to the entire region $x<x(t)$ and to the lag $\omega(t)$, is accumulated in a thin vortex sheet at $x(t)$; as the star slows down and $\omega(t)$ increases, the sheet is pushed outwards by the increasing Magnus force, and moves with $x(t)$. When $\omega(t)$ reaches the value $\omega_{\rm max}$, the sheet is at the pinning peak, $x_m$, and the vorticity accumulated in the sheet is finally released simultaneously, causing the glitch. This picture is reminiscent of a snowplow, pushing accumulated snow up an incline and eventually reaching its top edge. The interval between glitches is
\begin{eqnarray}
\Delta t_{\rm gl}=\frac{\omega_{\rm max}}{|\dot{\Omega}_{\infty}|}. \label{eq4}
\end{eqnarray}
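With Vela's steady-state spin-down rate, this immediately reproduces the observed inter-glitch interval; a minimal check (taking $\omega_{\rm max}=0.01$ Hz, the value obtained in Section 3):

```python
omega_max = 0.01        # Hz, critical lag at the pinning peak (Section 3)
Omega_dot = 9.8e-11     # Hz/s, |Omega_dot| for Vela
year = 3.156e7          # s

dt_gl_yr = omega_max / Omega_dot / year   # inter-glitch interval in years
```

which gives $\Delta t_{\rm gl}\approx3.2$ years.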
This scenario is compatible with post-glitch relaxation; indeed, after a glitch, the unpinned vortices in the \lq{crust}\rq{} are under the same conditions as those considered in current post-glitch models.
Although the \lq{snowplow}\rq{} model is quite schematic, it contains a plausible mechanism for storing and releasing vorticity, as actually confirmed by parallel dynamical simulations \citep{hps11}. In particular, the model makes it possible to calculate {\em directly} the angular momentum $L_v(x)$ of the vortex sheet at $x$. In Figure \ref{fig4} we show the reduction of angular momentum, $\ell_v(x)=L_v(x)/L_v(0)$, when uniformly distributed vorticity contained within $x$ is accumulated in a sheet at $x$; at the peak, $x_m$, the reduction is of order $10^{-3}$. For comparison, we also show the significantly different results for a uniform-density, cylindrical or spherical star; we see how spherical symmetry and a realistic density profile are {\em both} crucial to obtain the correct order of magnitude of $\ell_v(x)$.
The angular momentum stored during $\Delta t_{\rm gl}$ and released at the glitch, $\Delta L_{\rm gl}$, can be calculated from the number of vortices removed from the interior and accumulated at $x_m$, namely $\Delta N_v(x_m)=2\pi R_s^2x_m^2\omega_{\rm max}/\kappa$. We find (cf. Paper I)
\begin{eqnarray}
\Delta L_{\rm gl}=I_{v}(x_m)\omega_{\rm max}, \label{eq5}
\end{eqnarray}
with an effective moment of inertia
\begin{eqnarray}
I_{v}(x)=\frac{3\pi^4}{2(\pi^2-6)}g_{v}(x)QI_{\rm tot}=\pi^2g_{v}(x)QMR_s^2, \label{eq6}
\end{eqnarray}
where
\begin{eqnarray}
g_{v}(x)=\frac{x^2}{6}\left[\sqrt{1-x^2}\left(1+2x^2\right)-3x^2\ln\left(\frac{1+\sqrt{1-x^2}}{x}\right)\right]. \label{eq7}
\end{eqnarray}
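Equations (\ref{eq5})--(\ref{eq7}) can be evaluated in the same way; the sketch below (CGS units; standard star with $Q=0.95$, and $\omega_{\rm max}=0.01$ Hz as quoted in Section 3) reproduces $\Delta L_{\rm gl}\approx10^{40}$ erg s:

```python
import math

# Standard star of Section 3 (assumed inputs), CGS units
M_sun = 1.989e33
M, R_s, Q = 1.4 * M_sun, 1.0e6, 0.95
lam = math.pi * M / (4 * R_s**3)
x_m = 1.0 - 0.2 * 2.8e14 / lam          # position of the pinning peak

def g_v(x):
    s = math.sqrt(1 - x**2)
    return (x**2 / 6) * (s * (1 + 2 * x**2)
                         - 3 * x**2 * math.log((1 + s) / x))

omega_max = 0.01                         # Hz, critical lag at the peak
I_v = math.pi**2 * g_v(x_m) * Q * M * R_s**2   # effective moment of inertia
dL_gl = I_v * omega_max                  # erg s released at the glitch
```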
The glitch rise-time is very short, $\tau_{\rm gl}<40$ s \citep{dod02}; we introduce a new parameter, $Y_{\rm gl}$, which globally describes the {\em fraction} of vorticity {\em coupled} to the normal crust on timescales of order $\tau_{\rm gl}$ (the steady-state coupled fraction, corresponding to long timescales and to pre-glitch conditions, is $Y_{\infty}=1$). The value of $Y_{\rm gl}$ depends on the detailed short-time dynamics of the \lq{core}\rq{} vorticity; in order to get an estimate of the observables, only this quantity is needed. From angular momentum conservation and variation of the crust equation of motion we find the glitch jump parameters
\begin{mathletters}
\begin{eqnarray}
\Delta\Omega_{\rm gl}&=&\frac{\Delta L_{\rm gl}}{I_{\rm tot}[1-Q(1-Y_{\rm gl})]} \\
\frac{\Delta\dot{\Omega}_{\rm gl}}{\dot{\Omega}_{\infty}}&=&\frac{Q(1-Y_{\rm gl})}{1-Q(1-Y_{\rm gl})}. \label{eq9}
\end{eqnarray}
\end{mathletters}
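Inserting the numbers quoted in Section 3 ($Q=0.95$, $Y_{\rm gl}=0.05$, $I_{\rm tot}\simeq0.26MR_s^2$, $\Delta L_{\rm gl}\simeq9.5\times10^{39}$ erg s), a minimal check of these jump parameters:

```python
M_sun = 1.989e33
M, R_s = 1.4 * M_sun, 1.0e6     # CGS, standard star of Section 3
Q, Y_gl = 0.95, 0.05
I_tot = 0.26 * M * R_s**2       # total moment of inertia, equation (1)
dL_gl = 9.5e39                  # erg s, value quoted in Section 3

denom = 1 - Q * (1 - Y_gl)              # = 0.0975
dOmega_gl = dL_gl / (I_tot * denom)     # jump in angular velocity
ratio = Q * (1 - Y_gl) / denom          # jump in angular acceleration,
                                        # relative to Omega_dot_infinity
```

which gives $\Delta\Omega_{\rm gl}\approx1.3\times10^{-4}$ Hz and $\Delta\dot{\Omega}_{\rm gl}/\dot{\Omega}_{\infty}\approx9.3$.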
\section{Results and observations}
After fixing the basic stellar parameters $M$, $R_s$ and $Q$ (more generally, $M$ and an EoS), the model has two free parameters, $f_m$ and $Y_{\rm gl}$. It must predict three observables: the interval between glitches, and the jumps in angular velocity and acceleration during a glitch. In the case of Vela, the average observed values are $\Delta t_{\rm gl}\approx3$ years and $\Delta\Omega_{\rm gl}=1.2\times10^{-4}$ Hz \citep{lyn00}; we already mentioned that early observations gave $\Delta\dot{\Omega}_{\rm gl}/\dot{\Omega}_{\infty}\approx10^{-2}$. More recent data, however, indicate much larger values; in particular, the year 2000 glitch \citep{dod02} added to the already known short-, middle-, and long-time relaxation components ($\tau_i\approx10^4,10^5,10^6$ s, with $\Delta\dot{\Omega}_i/\dot{\Omega}_{\infty}\approx0.44,0.044,0.009$ for $i=1,2,3$), a fourth and {\em very} short one, with $\tau_4=1.2\pm0.2$ minutes and $\Delta\dot{\Omega}_4/\dot{\Omega}_{\infty}=18\pm6$ (one sigma errors). In the 2004 glitch, however, such a component was observed only barely above noise and no firm conclusion could be drawn from the weak data \citep{dod07}. Pending future observations, there is nonetheless evidence that {\em right after} a glitch $\Delta\dot{\Omega}_{\rm gl}/\dot{\Omega}_{\infty}$ is larger than unity.
In order to test the model against observations, we consider a standard neutron star with $M=1.4 M_{\odot}, R_s=10$ km, and $Q=0.95$. If we take $f_m=1.1\times10^{15}$ dyn cm$^{-1}$, from equations \ref{eq2}$-$\ref{eq7} we find that $\omega_{\rm max}=0.01$ Hz, and thence $\Delta t_{\rm gl}=3.1$ years and $\Delta L_{\rm gl}=9.5\times10^{39}$ erg s (also, $\Delta E_{\rm gl}=6.7\times10^{41}$ erg). If we then take $Y_{\rm gl}=0.05$, from equation \ref{eq9} we obtain $\Delta\Omega_{\rm gl}=1.3\times10^{-4}$ Hz and $\Delta\dot{\Omega}_{\rm gl}/\dot{\Omega}_{\infty}=9.3$, in good general agreement with observations. In Paper I we analyze the parameter dependence of these results; we find that the model is quite robust under physically meaningful variations of all the basic parameters ($M,R_s,Q,\rho_c,\rho_m,\rho_d$).
In conclusion, assuming continuous vortices throughout the star, we find that maximum pinning forces of order $f_m\approx10^{15}$ dyn cm$^{-1}$ can accumulate $\approx10^{13}$ vortices in the inner crust of a standard neutron star, and release them every $\approx3$ years, transferring an angular momentum $\Delta L_{\rm gl}\approx10^{40}$ erg s. This is one order of magnitude smaller than what is inferred from the (microscopically inconsistent) assumption of disconnected vortices. Yet, it yields the observed glitch parameters, provided one assumes a small coupled fraction $Y_{\rm gl}<10\%$. The model is compatible with post-glitch recovery and with the presently known microphysics; the numerical results follow from implementing both spherical geometry and a realistic density profile, and they are robust.
\acknowledgments
This work was supported by CompStar, a Research Networking Programme of the European Science Foundation (\url{http://www.compstar-esf.org/}).
\section{Introduction}
Let $K$ be a field of characteristic zero.
Let $A$ be an abelian variety over $K$
and $Z$ $(\neq A)$ a closed subvariety of $A$.
A celebrated result of Raynaud \cite{Raynaud}
implies that the intersection of $Z$ with
torsion points $A_{\tor}$ on $A$ is finite,
if $Z$ is a curve of genus at least two,
or if $A$ is absolutely simple.
However, it is usually not easy to determine this finite set
$Z \cap A_{\tor}$ explicitly for given $A$ and $Z$.
Now let us assume $A=J$ is the Jacobian variety of
a smooth projective geometrically
connected curve $X$ of genus $g \geq 2$.
Of particular interest is
the case where $Z=X$
is the Abel-Jacobi embedded image of $X$
with respect to some base point.
Since Coleman \cite{Coleman1}
started to study this problem,
many works have been done in this direction.
See \cite{Tzermias} for a lucid survey on this subject.
Anderson \cite{Anderson} considered
the case where $Z=\Theta$ is the theta divisor of $J$.
He proved that torsion points of
certain prime orders are not on $\Theta$
when $X$ is a cyclic quotient of a Fermat curve of prime degree.
For details of this result and
its generalization by Grant \cite{Grant},
see Remark \ref{rem:anderson} (2) below.
In order to prove his result,
Anderson developed a $p$-adic analogue of the theory of
{\it tau function},
which was originally introduced by Sato
\cites{Sato, Sato-Sato} (see also \cite{Segal-Wilson})
in his study of soliton equations
(in the complex analytic setting).
In this paper, we apply Anderson's theory to other curves
and prove analogous results.
\subsection{Setting}\label{sect:setting}
To state our main result, we introduce notations.
Fix an integer $g \geq 2$.
Let $K$ be a field of characteristic zero
that contains a primitive $4g$-th root $\zeta$ of unity.
We consider a hyperelliptic curve $X$ of genus $g$ over $K$
defined by the equation
\begin{equation}\label{eqn:hyperelliptic}
y^2=x^{2g+1}+x.
\end{equation}
Let $\infty$ be the $K$-rational point
at which the functions $x$ and $y$ have poles.
There is an automorphism $r$ of $X$ of order $4g$
defined by $r(x,y)=(\zeta^2x,-\zeta y)$.
Let $G := \langle r \rangle$ be the subgroup of $\Aut(X)$
generated by $r$.
The Jacobian variety $J$ of $X$
will be considered as a ${\mathbb Z}[G]$-module
by the induced action of $G$.
(We will see in \S \ref{sect:otsubo}
that $J$ is absolutely simple when $g>45$.)
We define the theta divisor
$\Theta$ to be the set of $\mathscr{L}\in J$
such that $H^0(X, \mathscr{L}((g-1)\infty)) \not= \{ 0 \}$.
Note that $r(\infty)=\infty$ so that $\Theta$ is stable under the action of $r^*$.
For any $n \in {\mathbb Z}_{>0}$, we write
$J[n]$ for the $n$-torsion subgroup of $J$.
\subsection{Main results}
Let $p$ be a prime number such that
$p \equiv 1\mod 4g,$
and choose a prime ideal $\wp \subset {\mathbb Z}[\zeta]$ lying above $p$.
We write $\chi$ for the composition of
\[ G \to {\mathbb Z}[\zeta]^*
\twoheadrightarrow ({\mathbb Z}[\zeta]/\wp)^* = {\mathbb F}_p^*
\]
where the first map is defined by $r \mapsto \zeta$.
We will show in Lemma \ref{lem:decomp} below
that we have
\[ \dim_{{\mathbb F}_p} J[p]^{\chi} = 1
\]
where $J[p]^{\chi}=\{ \mathscr{L} \in J[p] ~|~ r^*\mathscr{L} = \chi(r) \mathscr{L}\}.$
Our main results are the following:
\begin{thm}\label{thm:maimtheorem}
We have
\begin{equation*}
(J[p]^{\chi} + J[2] ) \cap \Theta \subseteq J[2].
\end{equation*}
\end{thm}
\begin{thm}\label{thm:maimtheorem2}
Assume that $K$ is a finite extension of ${\mathbb Q}_p$.
Let $Q\in X(K)$ and put $\mathscr{L}_Q := \mathscr{O}_X(Q-\infty)$.
Assume that the coordinates
$x(Q)$ and $y(Q)$ of $Q$ belong to the integer ring of $K$.
Then we have
\begin{equation*}
(J[p]^{\chi} + \mathscr{L}_Q ) \cap \Theta = \{\mathscr{L}_Q\}.
\end{equation*}
\end{thm}
\begin{remk}\label{rem:anderson}
\begin{enumerate}
\item
The set $\Theta \cap J_{\tor}$ is explicitly determined
when $g=2$ by Boxall-Grant \cite{Boxall-Grant}.
It consists of twenty-two points
(over an algebraically closed field).
\item
For the sake of comparison,
we recall Anderson's result \cite{Anderson}.
Fix an odd prime number $l$, integers $a \geq b>1$ such that $l+1=a+b$,
and a primitive $l$-th root $\zeta_l$ of unity.
Let $X$ be the smooth projective curve defined by
$y^l=x^a(1-x)^{b}$,
and define $J$ and $\Theta$ similarly as above.
(By Koblitz-Rohrlich \cite{Kob-Roh},
$J$ is absolutely simple.)
There is an automorphism $\gamma$ of $X$
defined by $\gamma(x,y)=(x,\zeta_l y)$,
which induces a ${\mathbb Z}[\zeta_l]$-module structure on $J$
such that $\zeta_l$ acts by $\gamma^*$.
For an ideal $\frak{a}$ of ${\mathbb Z}[\zeta_l]$,
we write $J[\frak{a}]$ for the $\frak{a}$-torsion subgroup of $J$.
Let $p$ be a prime number such that $p\equiv1\mod l$
and take a prime ideal $\wp\subset{\mathbb Z}[\zeta_l]$ over $p$.
Anderson's result \cite[Theorem 1]{Anderson} is the following:
$$
(J[\wp] + J[(1-\zeta_l)])\cap\Theta \subseteq J[(1-\zeta_l)].
$$
Grant \cite{Grant} improved Anderson's result by showing
for all $n\ge1$
$$
(J[\wp^n] + J[(1-\zeta_l)])\cap\Theta \subseteq J[(1-\zeta_l)]
$$
under the assumption that $X$ is hyperelliptic
(which happens if and only if $a \in \{(l+1)/2, l-1\}$).
\item
In our setting, $X, \infty, J$ and $\Theta$ are all defined over ${\mathbb Q}$,
and the choice of $\wp$ is arbitrary.
By taking different choices of $\wp$,
one can replace $J[p]^{\chi}$
by $J[p]^{\chi^i} := \{ \mathscr{L} \in J[p] ~|~ r^* \mathscr{L} = \chi(r)^i \mathscr{L} \}$
for any $i \in ({\mathbb Z}/4g{\mathbb Z})^*$
in Theorem \ref{thm:maimtheorem}.
(In our proof, though, the value of $s$ appearing
after \eqref{eq:lastentry} will be changed.
Note also that
a similar statement does not hold for Theorem \ref{thm:maimtheorem2}
because $Q$ may not be defined over ${\mathbb Q}$.)
It is an open problem to extend this result
to $i$ which is not prime to $4g$.
Another open problem is to replace
$J[p]$ by $J[p^n]$ with $n>1$ in
Theorems \ref{thm:maimtheorem}, \ref{thm:maimtheorem2}
(compare Grant's result recalled in (2) above).
\item
The crucial step in our proof
where we need to assume $X$ to be a special curve
\eqref{eqn:hyperelliptic}
is in \S \ref{sect:auxlemma}.
It might be possible to apply our method to other curves.
See Remark \ref{rem:addedcomment} for more discussion
of the possibilities and difficulties involved.
\end{enumerate}
\end{remk}
This paper is organized as follows.
In \S 2
we recall some results from Anderson \cite{Anderson}.
In \S 3 we study geometry
of the hyperelliptic curve \eqref{eqn:hyperelliptic}.
The proof of Theorems \ref{thm:maimtheorem} and \ref{thm:maimtheorem2}
is completed in \S 4.
The last section \S 5 is devoted to an illustration of
Anderson's results recalled in \S2.
\section{Review of Anderson's theory}\label{sect:anderson}
In this section,
we recall (bare minimum of)
results of Anderson \cite[\S 2, 3]{Anderson}.
We formulate all results
without any use of Sato Grassmannian
(which is actually central in Anderson's theory).
All results in this section are merely reformulations of loc. cit.,
but for the sake of completeness
we include some explanation
using Sato Grassmannian in \S \ref{sect:app}.
\subsection{Krichever pairs}\label{sect:kri}
Let $X$ be a smooth projective geometrically irreducible curve
over a field $K$
equipped with a $K$-rational point $\infty$.
We fix an isomorphism $N_0 : \hat{\mathscr{O}}_{X,\infty} \cong K[[\tt]]$,
and write $N$ for the composition map
$\Spec K((\tt)) \to \Spec K[[\tt]] \overset{N_0}{\to} X$.
(Here $K[[\tt]]$ is the ring of power series
in $\tt$ with coefficients in $K$,
and $K((\tt))$ is its fraction field.)
An {\it $N$-trivialization} of a line bundle $\mathscr{L}$ on $X$
is an isomorphism ${\sigma} : N^*\mathscr{L}\cong K((\tt))$
of $K((\tt))$-vector spaces
induced by an isomorphism
${\sigma}_0 : N_0^*\mathscr{L}\cong K[[\tt]]$ of $K[[\tt]]$-modules.
A pair $(\mathscr{L}, {\sigma})$ of
a line bundle $\mathscr{L}$ on $X$ and an $N$-trivialization ${\sigma}$ of $\mathscr{L}$
is called a {\it Krichever pair}.
Two Krichever pairs are said to be isomorphic
if there exists an isomorphism of line bundles
compatible with $N$-trivializations.
We write $\Kr(X, N)$ for the set of isomorphism
classes of Krichever pairs.
We have a canonical surjective map
\begin{equation*}
[ \cdot ] : \Kr(X, N) \to \Pic(X), \qquad
[(\mathscr{L}, {\sigma})] = \mathscr{L}.
\end{equation*}
For each $n \in {\mathbb Z}$ we define
$\Kr^n(X, N) := \{ (\mathscr{L}, {\sigma}) \in \Kr(X, N) ~|~ \deg(\mathscr{L})=n \}$
to be the inverse image of $\Pic^n(X)$ by $[ \cdot ]$.
\subsection{A Krichever pair associated to a Weil divisor}\label{sect:weildiv}
Let $D = \sum_{P \in X} n_P P$
be a Weil divisor on $X$.
The associated line bundle $\mathscr{O}_X(D)$ admits an $N$-trivialization
$\sigma(D)$ induced by the composition
$\mathscr{O}_X(D) \hookrightarrow K(X) \overset{N}{\to} K((\tt))
\overset{T^{-n_{\infty}}}{\to} K((\tt))$.
(Here $n_{\infty}$ is the coefficient of $\infty$ in $D$.)
Thus we obtain a Krichever pair $(\mathscr{O}_X(D), \sigma(D))$.
\subsection{Vector space associated to a Krichever pair}\label{sect:vector}
For $(\mathscr{L},{\sigma}) \in \Kr(X, N)$,
we define a $K$-subspace $W(\mathscr{L}, {\sigma})$ of $K((\tt))$ by
\begin{equation*}\label{eqn:Krichever_pair}
W(\mathscr{L}, {\sigma}) :=
\{{\sigma} N^* f\in K((\tt))\ |\ f\in H^0(X\setminus\{\infty\}, \mathscr{L})\}.
\end{equation*}
Note that $A := W(\mathscr{O}_X, N)$ is a $K$-subalgebra of $K((\tt))$
such that $\Spec A \cong X \setminus \{ \infty \}$,
and that $W(\mathscr{L}, {\sigma})$ is an $A$-submodule of $K((\tt))$
for any $(\mathscr{L}, {\sigma}) \in \Kr(X, N)$.
The following fact is fundamental to us.
(See Proposition \ref{prop:corresp} for details.)
\begin{prop}\label{prop:fundamental}
Let $(\mathscr{L}, {\sigma}) ,(\mathscr{L}', {\sigma}') \in \Kr(X, N)$.
If $W(\mathscr{L}, {\sigma})=W(\mathscr{L}', {\sigma}')$, then
we have $(\mathscr{L}, {\sigma})=(\mathscr{L}', {\sigma}')$.
\end{prop}
\subsection{Admissible basis}\label{sect:kr-w}
Let $(\mathscr{L}, {\sigma}) \in \Kr(X, N)$.
Put $W=W(\mathscr{L}, {\sigma})$ and $i_0 := \deg(\mathscr{L})+1-g$.
It follows from the Riemann-Roch theorem that
there is a $K$-basis $\{w_i\}_{i=1}^\infty$ of $W$ such that
\begin{itemize}
\item[(1)] $\{\deg w_i\}_{i=1}^\infty$ is a strictly increasing sequence,
\item[(2)] $w_i$ is monic for all $i$, and
\item[(3)] $\deg (w_i-T^{i-i_0})$ is a bounded function of $i$.
\end{itemize}
(Here $\deg : K((\tt))^* \to {\mathbb Z}$ is
the sign inversion of the normalized valuation,
and $w \in K((\tt))$ is called {\it monic} iff
$\deg(w - T^{\deg w}) < \deg(w)$.)
Such a $K$-basis $\{w_i\}_{i=1}^\infty$ of $W$ will be called {\it admissible}.
We call $i(W):=i_0$ the {\it index} of $W$.
(The integer $i_0$ can be read off from $W$,
as it is the only integer that satisfies the property (3) above.)
The {\it partition} $\kappa=(\kappa_i)_{i=1}^\infty$ of $W$
is a non-increasing sequence of non-negative integers defined by
$$
\kappa_i := i- i(W) - \deg(w_i),
$$
which satisfies $\kappa_i=0$ for sufficiently large $i$.
The partition $\kappa$
does not depend on a choice of an admissible basis.
(Actually, it depends only on $\mathscr{L}$.)
The integer $\ell(\kappa):=\max\{i\ |\ \kappa_i\neq0\}$ will be called the {\it length} of
the partition $\kappa$.
(See also comments after \eqref{eq:index}.)
\subsection{Group structure}\label{sect:groupstructure}
We regard $\Kr(X, N)$ as an abelian group by the tensor product,
so that the identity element is given by $(\mathscr{O}_X, N)$.
Note that $[ \cdot ] : \Kr(X, N) \to \Pic(X)$ is a group homomorphism.
Take $(\mathscr{L}, {\sigma}), (\mathscr{L}', {\sigma}') \in \Kr(X, N)$
and let $(\mathscr{L}'', {\sigma}'')=(\mathscr{L} \otimes \mathscr{L}', {\sigma} \otimes {\sigma}')$
be their product.
Then
$W(\mathscr{L}'', {\sigma}'')$ coincides with the $K$-subspace of $K((\tt))$
spanned by
$\{ ww' \in K((\tt)) ~|~ w \in W(\mathscr{L}, {\sigma}), w' \in W(\mathscr{L}', {\sigma}') \}$.
\subsection{Theta divisor}
Let us write $J:=\Pic^0(X)$ for the {\it Jacobian variety} of $X$.
Let us also write
$\Theta\subset J$ for the {\it theta divisor},
which is defined to be
the set of $\mathscr{L} \in J$
such that $H^0(X, \mathscr{L}((g-1)\infty)) \not= \{ 0 \}$.
Observe that $(\mathscr{L},{\sigma}) \in \Kr^0(X, N)$
satisfies $\mathscr{L} \in \Theta$ if and only if
$$
W(\mathscr{L}, {\sigma}) \cap T^{g-1}K[[\tt]] \neq\{0\},
$$
because there is an isomorphism
$
W(\mathscr{L}, {\sigma}) \cap T^{g-1}K[[\tt]] \cong H^0(X, \mathscr{L}((g-1)\infty)).
$
(This is a key property which enables one
to interpret $\Theta$ as the `zero-locus' of the tau function.)
\subsection{Automorphism of a curve}\label{subsec:auto}
Suppose we are given
two endomorphisms $r$ and $\bar{r}$ of $K$-schemes
which fit in the commutative diagram
\begin{equation*}\label{eq:action}
\xymatrix{
\Spec K((\tt)) \ar[r]^(0.68){N} \ar[d]_{\bar{r}} & X\ar[d]^{r} \\
\Spec K((\tt)) \ar[r]^(0.68){N} & X.
}
\end{equation*}
In particular, it holds $r(\infty)=\infty$.
Then, for $(\mathscr{L}, {\sigma}) \in \Kr(X, N)$,
the composition
\[(r, \bar{r})^* {\sigma} : N^* r^* \mathscr{L} \cong \bar{r}^* N^* \mathscr{L}
\overset{\bar{r}^* \sigma}{\cong}
\bar{r}^* K((\tt)) = K((\tt))
\]
is an $N$-trivialization of $r^* \mathscr{L}$.
(Here the last equality holds since
$\bar{r}$ induces
an isomorphism $\bar{r}^* : K((\tt)) \to K((\tt))$ ).
Therefore we get an induced homomorphism
\[
\Kr(X, N) \to \Kr(X, N), \qquad
(\mathscr{L}, {\sigma}) \mapsto (r^* \mathscr{L}, (r, \bar{r})^* {\sigma}),
\]
which, by abuse of notation, will be denoted by $r^*$.
This homomorphism is compatible with $[ \cdot ]$
in the sense that $[r^*(\mathscr{L}, {\sigma})]=r^* \mathscr{L}$.
\subsection{The $p$-adic analytic part of Krichever pairs}\label{sect:analy}
From now on, we assume
$p$ is a prime number and
$K$ is a finite extension of the field ${\mathbb Q}_p$ of $p$-adic numbers.
Let $|\cdot|$ be the absolute value on $K$ such that $|p|=p^{-1}$.
Let $H(K)$ be the ring defined by
\begin{equation*}
H(K):=\left\{ \sum_{i=-\infty}^{\infty}a_iT^i \ \biggm|\
a_i\in K,\
\sup_{i=-\infty}^{\infty}|a_i|<\infty,\
\lim_{i\to\infty}|a_i|=0 \right\}.
\end{equation*}
Note that $H(K)$ is equipped with the norm
\begin{equation*}
\left\|\sum_i a_i T^i\right\| := \sup_{i}|a_i|,
\end{equation*}
and $(H(K), \| \cdot \|)$ is a $p$-adic Banach algebra over $K$.
We write $\Kr_{\an}(X, N)$ for the subset of $\Kr(X, N)$
consisting of all Krichever pairs $(\mathscr{L}, {\sigma})$
such that $W(\mathscr{L}, {\sigma})$
admits an admissible basis $\{w_i\}$ satisfying
\begin{enumerate}
\item
$w_i \in H(K)$ for all $i$, and
\item
$\|w_i\|=1$ for almost all $i$.
\end{enumerate}
For each $n \in {\mathbb Z}$, we put
$\Kr_{\an}^n(X, N) = \Kr_{\an}(X, N) \cap \Kr^n(X, N)$.
For $(\mathscr{L}, {\sigma}) \in \Kr_{\an}(X, N)$,
we write $\bar{W}(\mathscr{L}, {\sigma})$ for the closure of
$W(\mathscr{L}, {\sigma})$ in $H(K)$.
One recovers $W(\mathscr{L}, {\sigma})$ from $\bar{W}(\mathscr{L}, {\sigma})$
by $W(\mathscr{L}, {\sigma}) = \bar{W}(\mathscr{L}, {\sigma}) \cap K((\tt))$.
(Here we regard both $H(K)$ and $K((\tt))$ as
$K$-vector subspaces of $\prod_{i \in {\mathbb Z}} K T^i$.)
Hence the following proposition is a consequence of
Proposition \ref{prop:fundamental}.
\begin{prop}\label{prop:fundamental2}
Let $(\mathscr{L}, {\sigma}) ,(\mathscr{L}', {\sigma}') \in \Kr_{\an}(X, N)$.
If $\bar{W}(\mathscr{L}, {\sigma})=\bar{W}(\mathscr{L}', {\sigma}')$, then
we have $(\mathscr{L}, {\sigma})=(\mathscr{L}', {\sigma}')$.
\end{prop}
\subsection{The $p$-adic loop group}
We define
the {\it $p$-adic loop group} ${\Gamma}(K)$
to be the subgroup of $H(K)^\times$
consisting of all
$\sum_i h_iT^i \in H(K)^\times$ such that
$|h_0|=1$,
$|h_i|\le1$ for all $i\le0$, and
there exists a real number $0<\rho<1$ such that
\begin{equation*}
|h_i|\le \rho^i \quad \textrm{for\ all\ }i\ge1.
\end{equation*}
Define the subgroups ${\Gamma}_+(K)$ and ${\Gamma}_-(K)$ of ${\Gamma}(K)$ by
\begin{eqnarray*}
{\Gamma}_+(K)&:=&\left\{\sum_i h_iT^i\in {\Gamma}(K) \biggm| h_0=1,h_i=0 \ (i<0) \right\},\\
{\Gamma}_-(K)&:=&\left\{\sum_i h_iT^i\in {\Gamma}(K) \biggm| h_i=0 \ (i>0) \right\}.
\end{eqnarray*}
\begin{prop}[{\cite[\S 3.3]{Anderson}}; see also \S \ref{sect:final} below]
\label{prop:loopgroupaction}
There is an action of ${\Gamma}(K)$ on $\Kr_{\an}(X, N)$
characterized by the following property:
for any $h \in {\Gamma}(K)$ and $(\mathscr{L}, {\sigma}) \in \Kr_{\an}(X, N)$,
we have $\bar{W}(h(\mathscr{L}, {\sigma}))=h\bar{W}(\mathscr{L}, {\sigma})$.
(Here the right hand side means
$\{ hw ~|~ w \in \bar{W}(\mathscr{L}, {\sigma}) \}$.)
Moreover, this action satisfies the following properties:
\begin{enumerate}
\item For any
$h \in {\Gamma}(K)$ and $(\mathscr{L}, {\sigma}) \in \Kr_{\an}(X, N)$,
we have $\deg [h(\mathscr{L}, {\sigma})] = \deg [(\mathscr{L}, {\sigma})]$.
\item For any
$h \in {\Gamma}_-(K)$ and $(\mathscr{L}, {\sigma}) \in \Kr_{\an}(X, N)$,
we have $[h(\mathscr{L}, {\sigma})] = [(\mathscr{L}, {\sigma})]$.
\item Suppose $(\mathscr{O}_X, N) \in \Kr_{\an}(X, N)$.
For any $h \in \bar{W}(\mathscr{O}_X, N) \cap {\Gamma}(K)$
and $(\mathscr{L}, {\sigma}) \in \Kr_{\an}(X, N)$,
we have $[h(\mathscr{L}, {\sigma})] = [(\mathscr{L}, {\sigma})]$.
\end{enumerate}
\end{prop}
\subsection{Dwork loops and Anderson's theorem}
In his study of the $p$-adic properties of
zeta functions of hypersurfaces over finite fields
(see, for example, \cite{Dwork}),
Dwork constructed a special element of ${\Gamma}(K)$
(which we call a Dwork loop).
We shall exploit his construction.
Assume that $K$ contains a $(p-1)$-st root $\pi$ of $-p$.
Let $u$ be a unit of the integer ring of $K$.
A {\it Dwork loop} is defined by
\begin{equation*}\label{eqn:Dwork}
h(T):=\exp(\pi((uT)-(uT)^p)).
\end{equation*}
For all $i\ge0$, we have (see, for example, \cite[Chapter I]{Koblitz})
$$
|h_i| \le | p |^{i(p-1)/p^2},
$$
where $h(T)=\sum_i h_iT^i$.
Therefore $h(T)\in {\Gamma}_+(K)$.
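Since this coefficient bound drives everything that follows, it is worth checking numerically. The sketch below (an independent check, not part of the argument) computes the exact $p$-adic order of the coefficients of $\exp(\pi(T-T^p))$ for the sample prime $p=7$ and $u=1$, working in ${\mathbb Q}(\pi)$ with $\pi^{p-1}=-p$; the powers $\pi^r$ with $0\le r<p-1$ have distinct valuations modulo ${\mathbb Z}$, so no cancellation occurs between them.

```python
from fractions import Fraction
from math import factorial

p = 7  # sample prime (the unit u is taken to be 1)

def vp(fr):
    """p-adic valuation of a nonzero Fraction."""
    v, n, d = 0, fr.numerator, fr.denominator
    while n % p == 0:
        n //= p
        v += 1
    while d % p == 0:
        d //= p
        v -= 1
    return v

def ord_h(i):
    """Exact p-adic order of the i-th coefficient of exp(pi(T - T^p)),
    computed in Q(pi) with pi^(p-1) = -p."""
    coeff = {}  # pi-exponent r in [0, p-1)  ->  rational coefficient
    for a in range(i % p, i + 1, p):  # terms of exp(pi T) exp(-pi T^p) with a + p*b = i
        b = (i - a) // p
        q, r = divmod(a + b, p - 1)   # pi^(a+b) = (-p)^q * pi^r
        c = Fraction((-1) ** b * (-p) ** q, factorial(a) * factorial(b))
        coeff[r] = coeff.get(r, Fraction(0)) + c
    vals = [vp(c) + Fraction(r, p - 1) for r, c in coeff.items() if c != 0]
    return min(vals) if vals else None  # None means h_i = 0

# Dwork's bound |h_i| <= |p|^{i(p-1)/p^2}, i.e. ord(h_i) >= i(p-1)/p^2
for i in range(1, 80):
    o = ord_h(i)
    assert o is None or o >= Fraction(i * (p - 1), p * p)
```

Note that the term-by-term minimum would not suffice here: for $i=p$ the two contributing terms both have order $1/(p-1)$, and the bound only holds after their $p$-adic cancellation.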
The following theorem,
which is a consequence of a delicate analysis of
Anderson's $p$-adic tau-function,
is technically crucial in \cite{Anderson}.
(See also \S \ref{sect:final}.)
\begin{thm}[{\cite[Lemma 3.5.1]{Anderson}}]
\label{thm:Dwork_loop}
Assume that $p\ge7$.
Let $h$ be a Dwork loop
and $(\mathscr{L}, \sigma) \in \Kr_{\an}^0(X, N)$.
We write $\kappa=(\kappa_i)_{i=1}^\infty$ and $\ell(\kappa)$
for the partition of $W(\mathscr{L}, \sigma)$
and the length of $\kappa$.
Assume further $W(\mathscr{L}, \sigma)$ satisfies that
\begin{itemize}
\item[(A1)] there exists an admissible basis
$\{w_i\}_{i=1}^\infty$ such that
$w_i \in H(K)$ and
$\|w_i\|=1$ for all $i\ge1$,
\item[(A2)] the partition
$\kappa$ satisfies $\max\{\kappa_1,\ell(\kappa)\}<p/4$.
\end{itemize}
Then, we have
$W(h(\mathscr{L}, \sigma))\cap T^{g-1}K[[\tt]]=\{0\}$.
Equivalently, we have
$$
[h(\mathscr{L}, {\sigma})] \not\in \Theta.
$$
\end{thm}
\section{Geometry of a hyperelliptic curve}\label{sect:geometry}
In this section,
we use the notations introduced in \S \ref{sect:setting}.
\subsection{Singular homology}
In this subsection we assume $K$ is a subfield of ${\mathbb C}$.
The singular homology $H_1(X({\mathbb C}), {\mathbb Z})$
is a free ${\mathbb Z}$-module of rank $2g$
on which $G$ acts linearly.
Let $\rho : G \to \Aut(H_1(X({\mathbb C}), {\mathbb Z}))$
be the corresponding representation.
Let $\chi : G \to \mu_{4g}$ be the character
given by $\chi(r)=\zeta$.
\begin{lem}\label{lem:singularhom}
The representation $\rho \otimes {\mathbb C}$ is equivalent to
$\oplus_{i=1, 3, \cdots, 4g-1} \chi^i$.
In particular,
the minimal polynomial of
$\rho(r)$ is $F(X) := X^{2g}+1$.
\end{lem}
\begin{proof}
We consider a
${\mathbb C}[G]$-module
$V=H^0(X, \Omega_{X/{\mathbb C}}^1)
= \langle w_i =x^{i-1}dx/y ~|~ i=1, \cdots, g\rangle_{{\mathbb C}}$.
A direct computation shows $r^*(w_i)=-\zeta^{2i-1}w_i$.
The lemma follows from an isomorphism
\[ H_1(X({\mathbb C}), {\mathbb Z}) \otimes {\mathbb C} \cong V \oplus \Hom(V, {\mathbb C}) \]
of ${\mathbb C}[G]$-modules.
\end{proof}
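The combinatorial content of this proof can be verified directly: the eigenvalue of $r^*$ on $w_i$ is $-\zeta^{2i-1}=\zeta^{2g+2i-1}$, and these exponents together with their negatives (coming from $\Hom(V,{\mathbb C})$) should exhaust the odd residues modulo $4g$. A minimal check:

```python
# Exponents of the eigenvalues of rho(r) on H_1 (x) C = V (+) Hom(V, C)
def eigenvalue_exponents(g):
    n = 4 * g
    # on V: r^* w_i = -zeta^(2i-1) w_i = zeta^(2g+2i-1) w_i, since zeta^(2g) = -1
    on_V = [(2 * g + 2 * i - 1) % n for i in range(1, g + 1)]
    # on Hom(V, C): the inverse eigenvalues
    on_dual = [(-e) % n for e in on_V]
    return sorted(on_V + on_dual)

# they are exactly the odd residues mod 4g, as the lemma asserts
for g in range(2, 30):
    assert eigenvalue_exponents(g) == list(range(1, 4 * g, 2))
```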
\subsection{Good trivialization}
The following is an easy consequence of Hensel's lemma:
\begin{lem}\label{lem:local_parameter}
There exists a unique element $u(T) \in 1+\tt{\mathbb Z}[[\tt]]$ such that
\[ u(T)^{2g} - u(T)^{2g-1} + (\tt)^{4g}=0. \]
\end{lem}
We define two elements $x(T), y(T) \in {\mathbb Z}[[\tt]][T]$ by
$$
x(T) := T^2 u(T), \qquad
y(T) := -Tx(T)^g.
$$
Note that
$x(T) \equiv T^2 \mod T{\mathbb Z}[[\tt]]$
and
$y(T) \equiv -T^{2g+1} \mod T^{2g}{\mathbb Z}[[\tt]]$.
It follows from Lemma \ref{lem:local_parameter} that
$(T^{-2} x(T))^{2g} - (T^{-2} x(T))^{2g-1} + (\tt)^{4g}=0$.
By multiplying $T^{4g}x(T)$, we get
$$
y(T)^2 = x(T)^{2g+1} + x(T).
$$
Therefore we can define an injection
$K(X) \hookrightarrow K((\tt))$
of $K$-algebras
by associating $x$ and $y$ with $x(T)$ and $y(T)$ respectively.
This induces an isomorphism
$N_0 : \hat{\mathscr{O}}_{X, \infty} \cong K[[\tt]]$,
and we can apply the results of \S \ref{sect:anderson}.
Note that $A := W(\mathscr{O}_X, N)$ is the $K$-subalgebra of $K((\tt))$
generated by $x(T)$ and $y(T)$.
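Both Lemma \ref{lem:local_parameter} and the identity $y(T)^2=x(T)^{2g+1}+x(T)$ can be verified to any finite order in $t=T^{-1}$: rewriting the defining equation as the fixed-point equation $u=1-t^{4g}u^{1-2g}$, each iteration gains $4g$ orders of accuracy, and all operations stay in ${\mathbb Z}[[t]]$ because only series with constant term $1$ are inverted. A sketch (the truncation order $N$ and the genus $g$ are sample values):

```python
N = 50  # truncation order in t = 1/T
g = 3   # sample genus

def mul(a, b):  # product of truncated power series in t
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j in range(min(len(b), N - i)):
                c[i + j] += ai * b[j]
    return c

def power(a, n):
    r = [1] + [0] * (N - 1)
    for _ in range(n):
        r = mul(r, a)
    return r

def inverse(a):  # inverse of a series with constant term 1 (stays integral)
    inv = [1] + [0] * (N - 1)
    for k in range(1, N):
        inv[k] = -sum(a[j] * inv[k - j] for j in range(1, k + 1))
    return inv

def shift(a, k):  # multiplication by t^k
    return ([0] * k + a)[:N]

# fixed-point iteration for u^(2g) - u^(2g-1) + t^(4g) = 0 with u = 1 + O(t)
u = [1] + [0] * (N - 1)
for _ in range(N // (4 * g) + 1):
    w = shift(inverse(power(u, 2 * g - 1)), 4 * g)
    u = [(1 if i == 0 else 0) - w[i] for i in range(N)]

# the defining equation holds to order t^N ...
p2g, p2gm1 = power(u, 2 * g), power(u, 2 * g - 1)
residual = [p2g[i] - p2gm1[i] + (1 if i == 4 * g else 0) for i in range(N)]
assert all(c == 0 for c in residual)

# ... and y^2 = x^(2g+1) + x is equivalent to u^(2g) = u^(2g+1) + t^(4g) u
assert p2g == [a + b for a, b in zip(power(u, 2 * g + 1), shift(u, 4 * g))]
```

The first coefficients are $u = 1 - t^{4g} - (2g-1)t^{8g} - \cdots$, with all coefficients integral as the lemma asserts.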
\subsection{Admissible basis of $A$}
\label{remk:A-basis}
We construct a $K$-basis $\{ w_i \}_{i=1}^\infty$ of $A$ such that
\begin{enumerate}
\item $w_i \in {\mathbb Z}[[T^{-1}]][T]$ for all $i$,
\item $w_i - T^{2i-2} \in T^{2i-3}{\mathbb Z}[[T^{-1}]]$ for all $i \leq g+1$, and
\item $w_i - T^{i-1+g} \in T^{2g}{\mathbb Z}[[T^{-1}]]$ for all $i \geq g+2$.
\end{enumerate}
In particular,
$\{ w_i \}$ is admissible in the sense of \S \ref{sect:kr-w}.
First we put
$$
u_{i}=
\left\{
\begin{array}{ll}
x(T)^{i-1} & (1 \le i \le g), \\
x(T)^{g+(i-g-1)/2} & (i >g, ~i \not\equiv g \mod 2), \\
-y(T)x(T)^{(i-g-2)/2} & (i>g, ~i \equiv g \mod 2 ). \\
\end{array}
\right.
$$
Note that $u_i \in {\mathbb Z}[[T^{-1}]][T]$ for all $i$
and $\{ u_i \}$ is a $K$-basis of $A$.
We set $w_i = u_i$ for $i \leq g+1$.
Suppose we have constructed $w_1, \cdots, w_{i-1}$ for some $i \geq g+2$.
There exists $\delta \in \langle w_1, \cdots, w_{i-1} \rangle_{{\mathbb Z}}$
such that $u_i - T^{i-1+g} - \delta \in T^{2g}{\mathbb Z}[[T^{-1}]]$.
We then set $w_i := u_i - \delta$.
Note that the partition of $A$ is
$$
(g, g-1,\cdots , 2,1,0,0,\cdots),
$$
and its length is $g$.
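The partition can be recomputed mechanically from the degrees $\deg w_i = \deg u_i$ listed above, via $\kappa_i = i - i(W) - \deg(w_i)$ with index $i(W) = 1-g$:

```python
def deg_w(i, g):
    # degrees from the case analysis above: x^(i-1) has degree 2i-2,
    # and both families for i > g have degree i + g - 1
    return 2 * i - 2 if i <= g else i + g - 1

def partition(g, terms=40):
    i0 = 1 - g  # index of A: deg(O_X) + 1 - g
    return [i - i0 - deg_w(i, g) for i in range(1, terms + 1)]

# the partition of A is (g, g-1, ..., 2, 1, 0, 0, ...)
for g in range(2, 10):
    kappa = partition(g)
    assert kappa == list(range(g, 0, -1)) + [0] * (len(kappa) - g)
```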
\subsection{Two-torsion points}\label{sect:twotorsion}
For any $\mathscr{L} \in J[2]$,
we shall construct an $N$-trivialization ${\sigma}$ of $\mathscr{L}$
such that $W(\mathscr{L}, {\sigma})$ admits an admissible basis
$\{w_i \}$ satisfying
$w_i \in {\mathbb Z}[\zeta][[T^{-1}]][T]$ for all $i$.
Recall that the Weierstrass points on $X$ are
$$
\infty,\ P_0=(0,0),\ \textrm{and} \ P_i=(\zeta^{2i-1},0)\quad (1\le i\le 2g).
$$
It is proved in \cite[Chapter III, \S2]{MumfordII} that
the two-torsion subgroup $J[2]$ of $J$
consists of line bundles associated to Weil divisors
$$
D_I :=\sum_{i\in I}(P_i-\infty),\quad I\subset\{0,1,\cdots,2g\},\ |I|\le g.
$$
For a subset $I \subset\{0,1,\cdots,2g\}$ such that $s:=|I|\le g$,
we get a Krichever pair
$(\mathscr{L}_I, {\sigma}_I) := (\mathscr{O}_X(D_I), {\sigma}(D_I))$
by the construction in \S \ref{sect:weildiv}.
We further set $L_I := W(\mathscr{L}_I, {\sigma}_I)$.
We construct a basis $\{w_{I,i} \}_{i=1}^\infty$ of $L_I$ as follows:
define an element $f_I$ of $H^0(X\setminus\{ \infty \}, \mathscr{L}_I) \subset K(x,y)$ by
$$
f_I:=y\prod_{j\in I}(x-x(P_j))^{-1}.
$$
Note that the divisor of $f_I$ satisfies
$$\div(f_I)=
\sum_{j\not\in I}P_j -\sum_{j\in I}P_j - (2g-2s+1)\infty.
$$
Now we define
for $1\le i\le g-s$,
$$
u_{I,i} := T^s x(T)^{i-1}
$$
and for $1\le i$,
$$
u_{I, g-s+i}=
\left\{
\begin{array}{ll}
T^sx(T)^{g-s+(i-1)/2} & (i:\text{odd}) \\
T^sf_I(T)x(T)^{(i-2)/2} & (i:\text{even}), \\
\end{array}
\right.
$$
where
$f_I(T)$ is the image of $f_I$ by the embedding $N^*: K(x,y) \hookrightarrow K((\tt))$.
One sees that
$$
\deg(u_{I, i})=
\left\{
\begin{array}{ll}
2i-2+s& (1\le i \le g-s) \\
i+g-1 & (g-s <i ). \\
\end{array}
\right.
$$
Therefore $\{u_{I,i} \}_{i=1}^\infty$ is a $K$-basis of $L_I$
such that $u_{I, i} \in {\mathbb Z}[\zeta][[T^{-1}]][T]$ for all $i$.
Now we can produce an admissible basis $\{ w_{I, i} \}$ of $L_I$
with required properties
by the same procedure as \S \ref{remk:A-basis}.
Note that the partition of $L_I$ is
\begin{equation*}
(g-s, g-s-1, \cdots, 2, 1, 0 , 0, \cdots),
\end{equation*}
and the length of the partition is $g-s$.
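As in \S \ref{remk:A-basis}, the partition follows mechanically from the degrees $\deg(u_{I,i})$ displayed above:

```python
def deg_u_I(i, g, s):
    # degrees from the case analysis above
    return 2 * i - 2 + s if i <= g - s else i + g - 1

def partition_I(g, s, terms=40):
    i0 = 1 - g  # deg(L_I) = 0
    return [i - i0 - deg_u_I(i, g, s) for i in range(1, terms + 1)]

# the partition of L_I is (g-s, g-s-1, ..., 1, 0, 0, ...), of length g-s
for g in range(2, 8):
    for s in range(0, g + 1):
        kappa = partition_I(g, s)
        assert kappa == list(range(g - s, 0, -1)) + [0] * (len(kappa) - (g - s))
```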
\subsection{Points of degree one}\label{sect:degreeone}
We fix a non-Weierstrass point $Q\in X(K)$.
Let $(\mathscr{L}_Q, {\sigma}_Q)$ be the Krichever pair
associated to the Weil divisor $Q-\infty$
under the construction in \S \ref{sect:weildiv}.
We are going to construct
an admissible basis $\{w_{Q, i} \}$
of $L_Q := W(\mathscr{L}_Q, {\sigma}_Q)$
satisfying $w_{Q, i} \in {\mathbb Z}[x(Q), y(Q)][[T^{-1}]][T]$ for all $i$.
We define a function $f_Q \in
H^0(X\setminus \{ \infty \}, \mathscr{L}_Q)\subset K(x,y)$:
$$
f_Q:=l_Q\cdot(x-x(Q))^{-1}, \qquad
l_{Q}:= y-x+y(Q)+x(Q).
$$
A straightforward computation shows that
$\div(f_Q) + Q+(2g-1)\infty$
is an effective divisor of degree $2g$;
indeed, $l_Q$ vanishes at the hyperelliptic conjugate
$(x(Q), -y(Q))$ of $Q$, while $l_Q(Q)=2y(Q)\neq0$.
We construct a basis $\{u_{Q,i} \}_{i=1}^\infty$ of $L_Q$ as follows:
for $1\le i\le g$,
$$
u_{Q,i} := Tx(T)^{i-1}
$$
and for $1\le i$,
$$
u_{Q, g+i}:=
\left\{
\begin{array}{ll}
Tf_Q(T)x(T)^{(i-1)/2} & (i:\text{odd}) \\
Tx(T)^{g+(i-2)/2} & (i:\text{even} ), \\
\end{array}
\right.
$$
where
$f_Q(T)$
is the image of $f_Q$ in $K((\tt))$ by the embedding $N^*$.
Note that
$f_Q(T)$ belongs to ${\mathbb Z}[x(Q), y(Q)][[T^{-1}]][T]$,
hence so does $u_{Q, i}(T)$.
(Here we used a fact that an element
$\sum_{i=-\infty}^{n} c_i T^i \in {\mathbb Z}[x(Q), y(Q)][[T^{-1}]][T]$
with $c_n \not= 0$ is invertible
if and only if $c_n \in {\mathbb Z}[x(Q), y(Q)]^*$.)
One sees that $$
\deg(u_{Q, i})=
\left\{
\begin{array}{ll}
2i-1 & (1\le i \le g) \\
i+g-1 & (g<i ). \\
\end{array}
\right.
$$
Therefore $\{u_{Q,i} \}_{i=1}^\infty$ is a $K$-basis of $L_Q$
such that $u_{Q, i} \in {\mathbb Z}[x(Q), y(Q)][[T^{-1}]][T]$ for all $i$.
Now we can produce an admissible basis $\{ w_{Q, i} \}$ of $L_Q$
with required properties
by the same procedure as \S \ref{remk:A-basis}.
Note that the partition of $L_Q$ is
\begin{equation*}
(g-1, g-2, \cdots, 1, 0 , 0, \cdots),
\end{equation*}
and its length is $g-1$.
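Again the partition follows from the displayed degrees $\deg(u_{Q,i})$:

```python
def deg_u_Q(i, g):
    # degrees from the case analysis above
    return 2 * i - 1 if i <= g else i + g - 1

def partition_Q(g, terms=30):
    i0 = 1 - g  # deg(L_Q) = 0
    return [i - i0 - deg_u_Q(i, g) for i in range(1, terms + 1)]

# the partition of L_Q is (g-1, g-2, ..., 1, 0, 0, ...), of length g-1
for g in range(2, 10):
    kappa = partition_Q(g)
    assert kappa == list(range(g - 1, 0, -1)) + [0] * (len(kappa) - (g - 1))
```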
\subsection{Action of $G$ on $\Kr(X, N)$}\label{sect:actionofg}
We define a $K$-algebra automorphism $\bar{r}$
on $K((\tt))$ by
\begin{equation*}
\bar{r}\left(\sum_i a_iT^i\right):=\sum_i a_i(\zeta T)^i.
\end{equation*}
Then the diagram
\[
\xymatrix{
\Spec K((\tt)) \ar[r]^(0.68){N} \ar[d]_{\bar{r}} & X\ar[d]^{r} \\
\Spec K((\tt)) \ar[r]^(0.68){N} & X.
}
\]
commutes.
By \S \ref{subsec:auto},
we get an induced action of $G$ on $\Kr(X, N)$.
It holds that $W(r^*(\mathscr{L}, \sigma)) = \bar{r}(W(\mathscr{L}, {\sigma}))
(:= \{ \bar{r}(w) ~|~ w \in W(\mathscr{L}, {\sigma}) \}$).
\subsection{Remark on the simplicity of Jacobian}\label{sect:otsubo}
{\footnote{This remark was communicated to us by Noriyuki Otsubo.}}
(The result of this subsection will not be used in the sequel.)
We suppose $K$ is an algebraically closed field.
We deduce from
a result of Aoki \cite{Aoki} that the Jacobian variety of $X$
is simple as an abelian variety,
at least when $g>45$.
To see this,
let $X'$ be a smooth projective curve over $K$
defined by $s^{4g}=t(1-t)$.
Note that the curve $X'$ is a quotient of the Fermat curve of degree $4g$.
There exists a degree two map
$\pi : X' \to X$
given by $x=c^2s^2, ~y=c(2t-1)s$,
where $c=(-4)^{1/4g}$.
Aoki's result \cite{Aoki} shows that
the Jacobian variety of $X'$
has exactly two simple factors, provided $g>45$.
The existence of $\pi$ shows that
the Jacobian variety of $X$ must be one of two simple factors.
\section{Proof of main theorem}
We keep the notation and assumption in \S \ref{sect:geometry}.
Let $p$ be a prime number such that
$$
p \equiv 1 \mod 4g.
$$
Let $\wp$ be a prime ideal of ${\mathbb Z}[\zeta]$ lying above $p$.
Since the hyperelliptic curve \eqref{eqn:hyperelliptic} is defined over ${\mathbb Q}(\zeta)$,
we may assume that $K$ is a finite extension of ${\mathbb Q}_p$ containing ${\mathbb Q}(\zeta)$, embedded so that $\wp = {\mathbb Z}[\zeta] \cap p{\mathbb Z}_p$.
We further assume that $K$
contains all elements of $J[p]$ and
$(p-1)$-st roots of all rational integers.
\subsection{$p$-torsion of the Jacobian}
Note that ${\mathbb F}_p$ contains all the $4g$-th roots of unity.
Put $\bar\zeta := \zeta \mod\wp \in {\mathbb F}_p$.
Choosing an embedding $\bar{{\mathbb Q}}_p \hookrightarrow {\mathbb C}$,
we get an isomorphism $J[p] \cong H_1(X({\mathbb C}), {\mathbb Z}) \otimes {\mathbb F}_p$.
The representation $\rho_p : G \to \Aut(J[p])$
is thus equivalent to $\rho \otimes {\mathbb F}_p$.
Therefore Lemma \ref{lem:singularhom} implies the following:
\begin{lem}\label{lem:decomp}
The minimal polynomial of
$\rho_p(r)$ is
$$
F(X)\mod p\ = \prod_{i=1, 3, \cdots, 4g-1} (X-\bar{\zeta}^i).$$
Consequently, we have
\[ J[p] = \bigoplus_{i=1, 3, \cdots, 4g-1} J[p]^{\chi^i},
\quad
\dim_{{\mathbb F}_p} J[p]^{\chi^i}=1 ~~(i=1, 3, \cdots, 4g-1).
\]
Here, by abuse of notation,
we write $\chi^i$ for the composition
$G \overset{\chi^i}{\to} \mu_{4g} \hookrightarrow
{\mathbb Z}_p^* \overset{\mod p}{\twoheadrightarrow} {\mathbb F}_p^*$.
\end{lem}
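The splitting of $F(X)$ modulo $p$ is easy to confirm for small parameters: the roots of $X^{2g}+1$ in ${\mathbb F}_p$ are precisely the odd powers of an element $\bar\zeta$ of exact order $4g$. A sketch over a few sample pairs $(g,p)$ with $p\equiv1\bmod 4g$:

```python
def prime_divisors(n):
    ps, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            ps.add(d)
            n //= d
        d += 1
    if n > 1:
        ps.add(n)
    return ps

def check_split(g, p):
    assert p % (4 * g) == 1
    n = 4 * g
    # roots of F(X) = X^(2g) + 1 in F_p
    roots = sorted(x for x in range(1, p) if (pow(x, 2 * g, p) + 1) % p == 0)
    # find an element z of exact order 4g in F_p^*
    for a in range(2, p):
        z = pow(a, (p - 1) // n, p)
        if all(pow(z, n // q, p) != 1 for q in prime_divisors(n)):
            break
    odd_powers = sorted({pow(z, i, p) for i in range(1, n, 2)})
    assert roots == odd_powers and len(roots) == 2 * g

for g, p in [(2, 17), (2, 41), (3, 13), (3, 37), (5, 41)]:
    check_split(g, p)
```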
\subsection{An auxiliary lemma}\label{sect:auxlemma}
The following lemma plays an important role in our proof, where it is used to construct $p$-torsion points.
This is the crucial point
where we need to assume $X$ to be a special curve
given by the equation \eqref{eqn:hyperelliptic}.
See Remark \ref{rem:addedcomment} below.
\begin{lem}\label{lem:aux}
We have an equation
\begin{equation}\label{eqn:T^p}
T^p - e_0 T = a(T) + g(T)
\end{equation}
for some $e_0 \in {\mathbb Z}_{(p)}^*$,
$a(T)\in A \cap {\mathbb Z}[[\tt]][T]$ and $g(T)\in\tt {\mathbb Z}[[\tt]]$.
\end{lem}
\begin{proof}
Setting $p=4gp'+1$,
we write
$$
x^{2gp'} (1+x^{-2g})^{2gp'} = e_+(x) + e_0 + e_-(x)
$$
where
$e_{\pm}(x) \in x^{\pm 2g}{\mathbb Z}[x^{\pm 2g}]$, respectively, and $e_0 \in {\mathbb Z}$.
Note that
$e_0 = \binom{2gp'}{p'}$
is a $p$-adic unit
(since $2gp'<p$, this binomial coefficient is prime to $p$).
We compute
\begin{align*}
e_+(x) + e_0 + e_-(x) &= x^{2gp'} (1+x^{-2g})^{2gp'}
= (x+x^{1-2g})^{2gp'}
\\
&= \left(\frac{x^{2g+1}+x}{x^{2g}}\right)^{2gp'}
= \left(\frac{y^2}{x^{2g}}\right)^{2gp'}
= \left(\frac{-y}{x^{g}}\right)^{p-1}.
\end{align*}
Recalling $y(T)=-Tx(T)^g$,
we get an equation in $K((\tt))$
\begin{equation*}
T^p - e_0 T = a(T) + g(T)
\end{equation*}
where
$a(T) := -y(T) {e_+(x(T))}/{x(T)^g}$
and $g(T) := T e_-(x(T))$.
Observe that $a(T)$ is in the image of $A=K[x, y]$ in $K((\tt))$
(since $e_+(x) \in x^{2g}{\mathbb Z}[x]$)
and that $g(T) \in \tt {\mathbb Z}[[\tt]]$.
\end{proof}
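The two numerical claims in the proof, that $e_0$ is the $x^0$-coefficient of $(x+x^{1-2g})^{2gp'}$ and that it is prime to $p$, are easy to spot-check; the pairs $(g,p)$ below are sample values, and $p\nmid\binom{2gp'}{p'}$ holds in general because $2gp'<p$ (no carries in base $p$):

```python
from math import comb

for g, p in [(2, 17), (2, 41), (3, 13), (3, 37), (5, 41), (7, 29)]:
    assert p % (4 * g) == 1
    pp = (p - 1) // (4 * g)  # p' in the proof
    # (x + x^(1-2g))^(2gp') = sum_k C(2gp', k) x^(2gp' - 2gk); exponent 0 at k = p'
    coeffs = {2 * g * pp - 2 * g * k: comb(2 * g * pp, k)
              for k in range(2 * g * pp + 1)}
    e0 = coeffs[0]
    assert e0 == comb(2 * g * pp, pp)
    assert e0 % p != 0  # e_0 is a p-adic unit
```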
\begin{remk}\label{rem:addedcomment}
\footnote{This remark was communicated to us by Shinichi Kobayashi.}
If one does not care much about integrality of the coefficients,
the decomposition \eqref{eqn:T^p}
holds under weaker assumptions.
To see this,
using the notation in \S 2,
we consider a direct sum decomposition
\begin{equation}\label{eq:directsum}
K((\tt)) =
A \oplus K[[\tt]] \tt \oplus (\bigoplus_{i=1}^g K T^{w_i}),
\end{equation}
where $w_1=1 < w_2 < \cdots <w_g<2g$ is the
{\it Weierstrass gap sequence}.
Thus we can write
$T^p = a(T) + g(T) + \sum_{i=1}^g e_{i-1} T^{w_i}$
with $a(T) \in A, ~g(T) \in K[[\tt]] \tt$
and $e_0, \cdots, e_{g-1} \in K$.
Suppose that the automorphism $\bar{r}$ in \S 2.7
satisfies $\bar{r}(T) = \zeta T$ for a
primitive $n$-th root of unity $\zeta$
such that $p \equiv 1 \mod n$ and $n \geq 2g$.
Then,
since the decomposition \eqref{eq:directsum} is
preserved by the action of $\bar{r}$,
one has $e_1=\cdots=e_{g-1}=0$
and $T^p = a(T) + g(T) + e_0 T$.
However, in order to prove
that $e_0$ is a $p$-adic unit
(which is important for our purpose),
we had to proceed by the concrete
construction given above.
It seems to be an interesting problem to find
a general method to detect if $e_0$ is a unit.
We hope to come back to this point in future work.
(It is also important that
the coefficients of $a(T)$ and $g(T)$ are
$p$-adically integral.)
\end{remk}
\subsection{Decomposition of a Dwork loop}\label{sect:decdwork}
The result of \S \ref{remk:A-basis}
shows that $(\mathscr{O}_X, N) \in \Kr_{\an}(X, N)$.
Recall that $\bar{A}:=\bar{W}(\mathscr{O}_X, N)$
is the closure of $A=W(\mathscr{O}_X, N)$ in $H(K)$.
Let $\pi$ and ${\varepsilon}_0$ be $(p-1)$-st roots
of $-p$ and $1/e_0$ respectively,
where $e_0 \in {\mathbb Z}_{(p)}^*$ is the number appearing in Lemma \ref{lem:aux}.
(They belong to $K$ by the assumption
made at the beginning of this section.)
We define a Dwork loop
\begin{align*}
h_D(T):=&
\exp(\pi(({\varepsilon}_0T)-({\varepsilon}_0T)^p))\\
=&\exp( -\pi {\varepsilon}_0^p(T^p-e_0T)).
\end{align*}
We write $\omega : {\mathbb F}_p^* \to \mu_{p-1} \subset {\mathbb Z}_p^*$
for the Teichm\"uller character
so that $\omega(i) \equiv i \mod p$.
For $i \in {\mathbb Z}$, we set $\omega(i) = \omega(i \mod p)$.
If we replace ${\varepsilon}_0$ by $\omega(i) {\varepsilon}_0$
for some $i \in {\mathbb Z}$,
then $h_D(T)$ will be changed to another Dwork loop
$h_D(\omega(i) T) \in \Gamma_+(K)$.
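As a side remark (not needed for the proofs), the Teichm\"uller lift can be computed by iterating the Frobenius map $x \mapsto x^p$, whose iterates $i^{p^n}$ converge $p$-adically to $\omega(i)$. The following Python sketch checks the defining properties $\omega(i) \equiv i \bmod p$ and $\omega(i)^{p-1}=1$ modulo $p^k$ for the illustrative choice $p=13$, $k=6$.

```python
def teichmuller(i, p, k):
    """Teichmuller lift omega(i) modulo p^k, obtained by iterating the
    Frobenius x -> x^p; the iterates i^(p^n) converge p-adically."""
    m = p ** k
    t = i % m
    for _ in range(k + 1):      # k+1 iterations suffice modulo p^k
        t = pow(t, p, m)
    return t

p, k = 13, 6
m = p ** k
for i in range(1, p):
    w = teichmuller(i, p, k)
    assert w % p == i             # omega(i) = i mod p
    assert pow(w, p - 1, m) == 1  # omega(i) is a (p-1)-st root of unity mod p^k
```

The convergence rests on the elementary fact that $a \equiv b \bmod p^n$ implies $a^p \equiv b^p \bmod p^{n+1}$.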
\begin{prop}\label{prop:comp_loop}
\begin{enumerate}
\item
There exist
$h_A \in \bar{A}\cap {\Gamma}(K)$
and $h_- \in{\Gamma}_-(K)$
such that
$$
h_{D}(T)^p=h_A(T) h_-(T).
$$
\item Let $i \in {\mathbb Z}$.
There exist
$h_{A, i}\in \bar{A}\cap {\Gamma}(K)$
and $h_{-, i}\in{\Gamma}_-(K)$
such that
$$
h_{D}(\omega(i)T) h_{D}(T)^{-i}=h_{A, i}(T) h_{-, i}(T).
$$
\end{enumerate}
\end{prop}
\begin{proof}
From the equation \eqref{eqn:T^p}, we have
\begin{align*}
h_D(T)^p=&
\exp(-p\pi{\varepsilon}_0^p(T^p-e_0T))\\
=&\exp(-p\pi{\varepsilon}_0^pa(T))\cdot\exp(-p\pi{\varepsilon}_0^p g(T)).
\end{align*}
Since $a(T)\in A \cap {\mathbb Z}[[\tt]][T]$ and $g(T)\in\tt {\mathbb Z}[[\tt]]$, we have
$$\begin{array}{l}
h_A(T):=\exp(-p\pi{\varepsilon}_0^pa(T))\in \bar{A}\cap{\Gamma}(K)\\
h_-(T):=\exp(-p\pi{\varepsilon}_0^p g(T))\in{\Gamma}_-(K),
\end{array}
$$
because the radius of convergence of $\exp(T)$ is $|p|^{1/(p-1)}=|\pi|$.
The first claim is proved.
Using the equation \eqref{eqn:T^p}
and $\omega(i)^p=\omega(i)$,
we compute
\begin{align*}
h_D(\omega(i) T)h_D(T)^{-i}=&\exp(-(\omega(i)-i)\pi {\varepsilon}_0^p(T^p-e_0T))\\
=&\exp(-(\omega(i)-i)\pi{\varepsilon}_0^pa(T))\cdot\exp(-(\omega(i)-i)\pi{\varepsilon}_0^p g(T)).
\end{align*}
Since $\omega(i)-i\equiv0\mod\wp$, we have
$$
\begin{array}{l}
h_{A, i}(T):=\exp(-(\omega(i)-i)\pi{\varepsilon}_0^pa(T))\in \bar{A}\cap{\Gamma}(K)\\
h_{-, i}(T):=\exp(-(\omega(i)-i)\pi{\varepsilon}_0^p g(T))\in {\Gamma}_-(K),
\end{array}
$$
and we are done.
\end{proof}
\subsection{Construction of $p$-torsion elements}\label{sect:torsion}
Recall that we have constructed a Dwork loop $h_D(T) \in \Gamma_+(K)$
in \S \ref{sect:decdwork}.
Recall also that we have defined an automorphism
$\bar{r}$ of $H(K)$ in \S \ref{sect:actionofg}
by $\bar{r}(h(T))=h(\zeta T)$.
\begin{prop}\label{prop:p-tor_Gr}
\begin{enumerate}
\item We have
$
[h_D(T) (\mathscr{O}_X, N)] \in J \setminus \Theta.
$
\item We have
$\{ [h_D(\xi T) (\mathscr{O}_X, N)] ~|~ \xi \in \mu_{p-1} \}
= J[p]^{\chi} \setminus \{ 0 \}.$
\end{enumerate}
\end{prop}
\begin{proof}
(1)
Put $(\mathscr{L}, {\sigma}) := h_D(\mathscr{O}_X, N) \in \Kr(X, N)$.
By Proposition \ref{prop:loopgroupaction} (1),
we have $\deg(\mathscr{L})=0$.
The result of \S \ref{remk:A-basis} shows that
$(\mathscr{O}_X, N) \in \Kr_{\an}^0(X, N)$
satisfies the assumptions (A1) and (A2)
of Theorem \ref{thm:Dwork_loop}.
It follows that $\mathscr{L} \not\in \Theta$.
(2)
We first show that $\mathscr{L} \in J[p] \setminus \{ 0 \}$.
Note that (1) implies that $\mathscr{L} \not= 0$.
For $K$-subspaces $V_1, \cdots, V_m$ of $H(K)$,
we write $V_1 \cdot \ldots \cdot V_m$ for the
$K$-span of $\{ \prod_{j=1}^m u_j ~|~ u_j \in V_j \}$.
When $V=V_1=\cdots=V_m$ we write
$V^m := V \cdot \ldots \cdot V$.
Let $V=\bar{W}(\mathscr{L}, {\sigma})$.
Proposition \ref{prop:loopgroupaction} shows that $V=h_D\bar{A}$.
Thus $V^p = h_D^p\bar{A}$.
By Proposition \ref{prop:fundamental2}
and \S \ref{sect:groupstructure},
we have
$(\mathscr{L}, {\sigma})^{\otimes p} = h_D^p(\mathscr{O}_X, N)$.
Propositions \ref{prop:comp_loop} (1)
and \ref{prop:loopgroupaction}
show $[h_D^p(\mathscr{O}_X, N)]=[(\mathscr{O}_X, N)]$.
We conclude $\mathscr{L}^{\otimes p}=\mathscr{O}_X$.
Similarly,
Proposition \ref{prop:comp_loop} (2)
shows that for all $i \in {\mathbb Z}$
\[
[h_D({\omega}(i)T)h_D(T)^{-i}(\mathscr{O}_X,N)]=[(\mathscr{O}_X,N)],
\]
thus we have
\begin{equation}\label{eq:lastentry}
[h_D(\omega(i) T) (\mathscr{O}_X, N)]=[h_D(T)^i(\mathscr{O}_X,N)]=\mathscr{L}^{\otimes i}.
\end{equation}
In particular,
if we take $s \in {\mathbb Z}$ such that $\omega(s)=\zeta (=\chi(r))$,
we get
\[
r^*(\mathscr{L})
=[\bar{r}^*(h_D(T))(\mathscr{O}_X, N)]
=[h_D(\zeta T)(\mathscr{O}_X, N)]
=\mathscr{L}^{\otimes s}
=\chi(r) \mathscr{L}.
\]
This shows $\mathscr{L} \in J[p]^{\chi}$
and hence $J[p]^{\chi}$ is a cyclic group of
order $p$ generated by $\mathscr{L}$.
Now \eqref{eq:lastentry} completes the proof.
\end{proof}
\subsection{Proof of Theorem \ref{thm:maimtheorem}}
We may suppose $K$ is a finite extension of ${\mathbb Q}_p$
satisfying the conditions stated
at the beginning of this section.
Take $\mathscr{L} \in J[2]$ and $\mathscr{L}' \in J[p]^{\chi} \setminus \{ 0 \}$.
We need to show $\mathscr{L} \otimes \mathscr{L}' \not\in \Theta$.
By Proposition \ref{prop:p-tor_Gr},
there exists a Dwork loop $h$ such that
$\mathscr{L}' = [h(\mathscr{O}_X, N)]$.
By \S \ref{sect:twotorsion},
there exists an $N$-trivialization ${\sigma}$ of $\mathscr{L}$
such that $W(\mathscr{L}, {\sigma})$ admits an admissible basis
$\{w_i \}$ satisfying $w_i \in {\mathbb Z}[\zeta][[T^{-1}]][T]$ for all $i$.
Hence $(\mathscr{L}, {\sigma})$ belongs to $\Kr_{\an}^0(X,N)$ and satisfies the assumptions
(A1) and (A2) of Theorem \ref{thm:Dwork_loop}.
It follows that $[h(\mathscr{L}, {\sigma})] \not\in \Theta$.
By
Propositions \ref{prop:fundamental2},
\ref{prop:loopgroupaction} and \S \ref{sect:groupstructure},
we have
$[h(\mathscr{L}, {\sigma})] = [(\mathscr{L}, {\sigma})]\otimes[h(\mathscr{O}_X, N)] = \mathscr{L} \otimes \mathscr{L}'$.
\subsection{Proof of Theorem \ref{thm:maimtheorem2}}
We may assume $Q$ is a non-Weierstrass point
by Theorem \ref{thm:maimtheorem}.
Then the same proof as the previous subsection works
if we put \S \ref{sect:degreeone}
in the place of \S \ref{sect:twotorsion}.
\section{Appendix: Sato Grassmannian}\label{sect:app}
In this section,
we explain Anderson's theory \cite{Anderson}
in a style much closer to his original framework.
It will be apparent that
the results in \S \ref{sect:anderson}
are the same results, stated in another way.
\subsection{Sato Grassmannian}
We work under the notation and assumption
in \S \ref{sect:kri}.
The {\it Sato Grassmannian} $\Gr^\textit{alg}(K)$ is the set of
all $K$-subspaces $V\subset K((\tt))$ such that
the $K$-dimensions of the kernel and cokernel of the map
\begin{equation*}
f_V : V\to K((\tt))/K[[\tt]]\ ;\ v\mapsto v+ K[[\tt]]
\end{equation*}
are finite.
The {\it index} of $V\in\Gr^\textit{alg}(K)$ is defined by
\begin{equation}\label{eq:index}
i(V):=\dim_K \Ker(f_V)- \dim_K \Coker(f_V).
\end{equation}
(The fibers of the map $i : \Gr^{\textit{alg}}(K) \to {\mathbb Z}$
are considered as `connected components' of $\Gr^{\textit{alg}}(K)$,
and each connected component
admits a {\it Schubert cell decomposition}
indexed by the set of all partitions,
but we do not need these facts.)
Recall that $A := W(\mathscr{O}_X, N)$ is a $K$-subalgebra of $K((\tt))$.
For $V \in \Gr^{\textit{alg}}(K)$,
we set $A_V := \{ f \in K((\tt)) ~|~ fV \subset V \}$,
which is a $K$-subalgebra of $K((\tt))$.
We define
\[ \Gr_A^\textit{alg}(K) := \{ V \in \Gr^{\textit{alg}}(K) ~|~ A_V = A \}. \]
For $V, V' \in \Gr_A^{\textit{alg}}(K)$, we define their product to be
$V \cdot V' = \langle w w' ~|~ w \in V, w' \in V' \rangle_K$,
under which $\Gr_A^{\textit{alg}}(K)$ becomes an abelian group.
\begin{prop}[{\cite[\S 2.3]{Anderson}; see also \cite{Mumford2}}]
\label{prop:corresp}
The construction of \S \ref{sect:vector}
defines an isomorphism of abelian groups
\[ W : \Kr(X, N) \to \Gr_A^{\textit{alg}}(K);
\qquad (\mathscr{L}, {\sigma}) \mapsto W(\mathscr{L}, {\sigma})
\]
which satisfies the following properties:
\begin{enumerate}
\item
We have $i(W(\mathscr{L}, {\sigma}))=\deg(\mathscr{L})+1-g$
for any $(\mathscr{L}, {\sigma}) \in \Kr(X, N)$.
\item
For $V, V' \in \Gr^\textit{alg}_A(K)$,
one has $[W^{-1}(V)]=[W^{-1}(V')]$
if and only if $V=uV'$ for some $u \in K[[\tt]]^*$.
\end{enumerate}
\end{prop}
All results in \S \ref{sect:kri}-\ref{sect:groupstructure}
are explained by this proposition.
\subsection{$p$-adic Sato Grassmannian}
Now we use the assumption and notation of \S \ref{sect:analy}.
Let $H_+(K)$ and $H_{-}(K)$ be the closed $K$-subspaces of $H(K)$ defined by
\begin{align*}
H_+(K)&:=\left\{\sum_i a_i T^i\in H(K) \ \biggm|\ a_i=0\ (\textrm{for\ all\ }i\le0)\right\},\\
H_{-}(K)&:=\left\{\sum_i a_i T^i\in H(K) \ \biggm|\ a_i=0\ (\textrm{for\ all\ }i>0)\right\}.
\end{align*}
The {\it $p$-adic Grassmannian } $\Gr^{\an}(K)$ is the set of
all $K$-subspaces $\bar V\subset H(K)$
such that
$\bar V$ is the image of a $K$-linear injective map $w : H_+(K) \to H(K)$
satisfying the following conditions:
there exist $i_0 \in {\mathbb Z}$,
a $K$-linear operator $v : H_+(K) \to H_-(K)$ with $\|v \|\le1$,
and
a $K$-linear endomorphism $u$ on $H_+(K)$ with $\|u\|\le1$
that is a uniform limit of bounded $K$-linear operators of finite rank
(i.e. {\it completely continuous}),
such that
the map $T^{i_0}w$ has the form
\begin{equation*}
T^{i_0}w=
\left[
\begin{array}{c}
1+u \\
v \\
\end{array}
\right]
: H_+(K) \to
\left[
\begin{array}{c}
H_+(K) \\
H_-(K)\\
\end{array}
\right].
\end{equation*}
The {\it index} of $\bar V\in\Gr^{\an}(K)$,
denoted by $i(\bar V)$, is defined as the difference of the dimensions of
the kernel and cokernel of the projection map $\bar V\to H_+(K)$.
\begin{prop}[{\cite[\S 3.2]{Anderson}}]\label{prop:W^alg}
There is an injective map
\begin{equation*}
\Gr^{\an}(K) \hookrightarrow \Gr^\textit{alg}(K), \qquad
\bar V \mapsto \bar V^\textit{alg}:=\bar V \cap K((\tt)).
\end{equation*}
For any $\bar V \in \Gr^{\an}(K)$, one has $i(\bar V)=i(\bar V^{\textit{alg}})$.
For $V\in\Gr^\textit{alg}(K)$,
there exists $\bar V\in\Gr^{\an}(K)$ such that $\bar V^\textit{alg}=V$
if and only if
$V$ has an admissible basis $\{w_i\}$ such that
$w_i \in H(K)$ for all $i$ and $\|w_i\|=1$ for almost all $i$.
\end{prop}
By this proposition,
we regard $\Gr^{\an}(K)$ as a subset of $\Gr^{\textit{alg}}(K)$.
It follows that
$\Kr_{\an}(X, N) = \{ (\mathscr{L}, {\sigma}) \in \Kr(X, N) ~|~
W(\mathscr{L}, {\sigma}) \in \Gr^{\an}(K) \}$.
\subsection{Action of $p$-adic loop group and Anderson's theorem}
\label{sect:final}
In \cite[\S 3.3]{Anderson},
the action
\[ {\Gamma}(K) \times \Gr^{\an}(K) \to \Gr^{\an}(K), \qquad
(h, \bar V) \mapsto h \bar V := \{ hv ~|~ v \in \bar V \}
\]
of ${\Gamma}(K)$ on $\Gr^{\an}(K)$
is defined.
Proposition \ref{prop:loopgroupaction}
is also proved in loc. cit.
Finally, Theorem \ref{thm:Dwork_loop}
is a reformulation of \cite[Lemma 3.5.1]{Anderson}.
Anderson proved this extraordinary result
by introducing a $p$-adic version of the {\it Sato tau-function},
which plays a central role in
Sato's theory of the KP hierarchy
(see \cite{Sato, Sato-Sato, Segal-Wilson}).
Anderson's proof of Theorem \ref{thm:Dwork_loop} is based on
a careful estimate of the tau function.
\vspace{5mm}
\noindent
{\it Acknowledgement.}
We would like to express our gratitude to Noriyuki Otsubo
and Shinichi Kobayashi
for their insightful comments.
In particular, the remarks in \S \ref{sect:otsubo}
and \ref{rem:addedcomment}
were suggested by them.
We are also deeply grateful to Takeshi Ikeda
for stimulating discussion.
We learned the importance of
the equation of the form \eqref{eqn:T^p} from him.
\bibliographystyle{plain}
\begin{bibdiv}
\begin{biblist}
\bib{Anderson}{article}{
author={Anderson, G.~W.},
title={Torsion points on {J}acobians of quotients of {F}ermat curves and
{$p$}-adic soliton theory},
date={1994},
journal={Invent. Math.},
volume={118},
number={3},
pages={475\ndash 492},
}
\bib{Aoki}{article}{
author={Aoki, N.},
title={Simple factors of the {J}acobian of a {F}ermat curve and the
{P}icard number of a product of {F}ermat curves},
date={1991},
journal={Amer. J. Math.},
volume={113},
number={5},
pages={779\ndash 833},
}
\bib{Boxall-Grant}{article}{
author={Boxall, J.},
author={Grant, D.},
title={Examples of torsion points on genus two curves},
date={2000},
journal={Trans. Amer. Math. Soc.},
volume={352},
number={10},
pages={4533\ndash 4555},
}
\bib{Coleman1}{article}{
author={Coleman, R.~F.},
title={Torsion points on curves and {$p$}-adic abelian integrals},
date={1985},
journal={Ann. of Math. (2)},
volume={121},
number={1},
pages={111\ndash 168},
}
\bib{Dwork}{article}{
author={Dwork, B.},
title={{On the zeta function of a hypersurface}},
date={1962},
journal={Publications Math\'{e}matiques de l'IH\'{E}S},
volume={12},
number={1},
pages={5\ndash 68},
}
\bib{Grant}{article}{
author={Grant, D.},
title={Torsion on theta divisors of hyperelliptic {F}ermat {J}acobians},
date={2004},
journal={Compos. Math.},
volume={140},
number={6},
pages={1432\ndash 1438},
}
\bib{Kob-Roh}{article}{
author={Koblitz, N.},
author={Rohrlich, D.},
title={Simple factors in the {J}acobian of a {F}ermat curve},
date={1978},
journal={Canad. J. Math.},
volume={30},
number={6},
pages={1183\ndash 1205},
}
\bib{Koblitz}{book}{
author={Koblitz, N.},
title={{$p$}-adic analysis: a short course on recent work},
series={London Mathematical Society Lecture Note Series},
volume={46},
publisher={Cambridge University Press},
date={1980},
}
\bib{Mumford2}{inproceedings}{
author={Mumford, D.},
title={An algebro-geometric construction of commuting operators and of
solutions to the {T}oda lattice equation, {K}orteweg de{V}ries equation and
related nonlinear equation},
date={1978},
booktitle={Proceedings of the {I}nternational {S}ymposium on {A}lgebraic
{G}eometry ({K}yoto {U}niv., {K}yoto, 1977)},
publisher={Kinokuniya Book Store},
address={Tokyo},
pages={115\ndash 153},
}
\bib{MumfordII}{book}{
author={Mumford, D.},
title={Tata lectures on theta. {II}},
series={Modern Birkh\"auser Classics},
publisher={Birkh\"auser Boston Inc.},
address={Boston, MA},
date={2007},
note={Jacobian theta functions and differential equations, With the
collaboration of C. Musili, M. Nori, E. Previato, M. Stillman and H. Umemura,
Reprint of the 1984 original},
}
\bib{Raynaud}{incollection}{
author={Raynaud, M.},
title={Sous-vari\'et\'es d'une vari\'et\'e ab\'elienne et points de
torsion},
date={1983},
booktitle={Arithmetic and geometry, {V}ol. {I}},
series={Progr. Math.},
volume={35},
publisher={Birkh\"auser Boston},
address={Boston, MA},
pages={327\ndash 352},
}
\bib{Sato-Sato}{incollection}{
author={Sato, M.},
author={Sato, Y.},
title={Soliton equations as dynamical systems on infinite-dimensional
{G}rassmann manifold},
date={1983},
booktitle={Nonlinear partial differential equations in applied science
({T}okyo, 1982)},
series={North-Holland Math. Stud.},
volume={81},
publisher={North-Holland},
address={Amsterdam},
pages={259\ndash 271},
}
\bib{Sato}{incollection}{
author={Sato, M.},
title={The {KP} hierarchy and infinite-dimensional {G}rassmann
manifolds},
date={1989},
booktitle={Theta functions---{B}owdoin 1987, {P}art 1 ({B}runswick, {ME},
1987)},
series={Proc. Sympos. Pure Math.},
volume={49},
publisher={Amer. Math. Soc.},
address={Providence, RI},
pages={51\ndash 66},
}
\bib{Segal-Wilson}{article}{
author={Segal, G.},
author={Wilson, G.},
title={Loop groups and equations of {K}d{V} type},
date={1985},
journal={Inst. Hautes \'Etudes Sci. Publ. Math.},
number={61},
pages={5\ndash 65},
}
\bib{Tzermias}{article}{
author={Tzermias, P.},
title={The {M}anin-{M}umford conjecture: a brief survey},
date={2000},
journal={Bull. London Math. Soc.},
volume={32},
number={6},
pages={641\ndash 652},
}
\end{biblist}
\end{bibdiv}
\end{document}
This paper is concerned with the theoretical and numerical study of several inverse problems in structured population models. These models describe the evolution of a population in which individuals are distinguished by a quantitative trait, such as their size. The evolution of the number $n=n(t,x)$ of individuals with trait $x$ is assumed to be governed by two effects: interaction among individuals and interaction with their environment. In general, interactions between individuals are due to competition (e.g., for a common food source) or to random mutations. Here we consider the case where an individual's offspring has the same trait as its parent, thus neglecting the effect of mutations. This leads to a model of the form
\begin{align}\label{eq:model}
\partial_t n(t,x) &= s[n]n,\quad x \in \mathbb{R},\, t \in [0,T],\\
\label{eq:init}
n(0,x) &= n_0(x).
\end{align}
The selection rate (or selective pressure) $s[n]$ introduces coupling with respect to the $x$ variable.
The dynamics of such equations have been studied extensively by many authors; see, e.g., \cite{Desvillettes2008,JG11,LP14}. Besides existence and uniqueness of solutions, their long-time behavior is analyzed. Depending on the particular form of $s$, it is expected that only a few traits survive for large times, i.e., that the solution converges to a finite sum of Dirac measures. We refer to \cite{Desvillettes2008,Lorz2011,Lorz2015} for more details. This is strongly related to the notion of an evolutionarily stable strategy (ESS), and we refer the reader to \cite{Maynard1973}. Note also that similar models can be derived from stochastic models with finite populations, cf. \cite{Champagnat2006,ChampagnatStochastic2008,DieckmannStochastic1996}.
The dynamics of \eqref{eq:model}--\eqref{eq:init} are determined by the structure of $s[n]$, and knowledge of $s[n]$ allows for prediction of the evolution of the population at future times.
In this work we are interested in identifying the model parameter $s[n]$ from observational data of the solution to \eqref{eq:model}--\eqref{eq:init} in the class of logistic type selection rates, i.e.,
\begin{align}\label{eq:selection}
s[n] = p(x) - d(x)\rho(t).
\end{align}
Here the parameters to be identified are the reproduction rate $p$ and the trait-dependent weight function $d$ of the death rate $d\rho$,
where
\begin{align}\label{eq:defrho}
\rho(t) &= \int n(t,x)\;dx
\end{align}
denotes the total mass of the population at time $t$. Selection rates of form \eqref{eq:selection} are frequently used in the literature, see for example \cite{Roughgarden1979theory,Barabas2009}, yet sometimes with $\rho$ defined as a weighted integral over $n$. In our case, since $\rho(t)$ is simply the total mass, all individuals are in competition with one another, independent of their particular trait.
Typical data that we consider consist of the total population size $\rho(t)$, $0\leq t\leq T$, or of tuples $(\bar x,t)$ recording the locations $\bar x$ of critical points of $n(t,\cdot)$.
More precisely, we address the following inversion problems:
\begin{enumerate}
\item[(P1)] Given measurements of $\rho(t)$ on $[0,T]$, determine either the function $p(x)$, $d(x)$ or $n_0(x)$.
\item[(P2)] Given measurements of critical points of $n(t,\cdot)$, $t\in [0,T]$, determine either $p(x)$, $d(x)$ or $n_0(x)$.
\end{enumerate}
As will be elaborated below, there exist a number of transformations that can be applied to the parameters $p$ and $d$ yet leave the quantities $\rho$ and/or the critical points of $n$ unchanged. In these situations one cannot expect any positive identification result, which is directly reflected in the assumptions we have to make in our uniqueness theorems. More precisely, for (P1), we are able to give a positive identification result under suitable monotonicity assumptions on the parameters and present explicit counterexamples when these assumptions are violated. In situations when uniqueness is guaranteed, we present numerical reconstructions using Tikhonov regularization, and we verify convergence under a standard smoothness assumption.
For (P2), we derive explicit formulas for the derivatives $p'$, $d'$ and $n_0'$, which imply uniqueness and stability with respect to perturbation of the measured data. The latter is demonstrated by numerical examples. Finally, we also comment on the simultaneous identification problem
\begin{enumerate}
\item[(P3)] Given measurements of $\rho(t)$ as well as the position of critical points, determine both $p(x)$ and $n_0(x)$.
\end{enumerate}
In this case we cannot give a definite answer, which is mainly due to the fact that it seems very delicate to combine the nonlocal information contained in $\rho(t)$ with the knowledge of critical points, which is purely local. Finally, note that our setup is quite different from more common parameter identification problems for partial differential equations, see e.g. \cite{Isakov06}, since we have neither a differential operator acting in space nor measurements on the boundary.
\smallskip\\
This paper is organized as follows: In Section \ref{sec:properties}, we study the population model and show existence and uniqueness of solutions. In Section \ref{sec:inverse_population} we address (P1), give counterexamples to the identification problem for general parameters, and exhibit classes of parameter functions for which the inverse problems in (P1) can be solved uniquely. In Section~\ref{sec:inverse_critical}, we consider (P2) and present reconstruction formulas for the derivatives of the parameter functions evaluated at critical points of the population density; this is followed by a discussion of (P3). We present extensive numerical results for the actual reconstruction of the unknown parameters, including different ways to treat the (nonlinear) problem as well as convergence rates, in Section \ref{sec:numerics}. Finally, in Section~\ref{sec:outlook}, we give an outlook on a population model with mutation.
\section{Existence of solutions}\label{sec:properties}
Equations \eqref{eq:model}--\eqref{eq:defrho} can be understood as a system of ordinary differential equations (one for every point $x\in \mathbb{R}$) coupled via $\rho(t)$, which motivates rewriting the solution in the following implicit form
\begin{align}\label{eq:explicit}
n(t,x) = n_0(x)e^{tp(x) - d(x)\int_0^t \rho(s)\;ds}.
\end{align}
Integrating expression \eqref{eq:explicit} with respect to space yields the following nonlinear fixed-point equation for the total population
\begin{align}\label{eq:rho_fix}
\rho(t)= \int_\mathbb{R} n_0(x)e^{tp(x) - d(x)\int_0^t \rho(s)\;ds}dx,
\end{align}
which is an ordinary differential equation for $R(t)=\int_0^t\rho(s)\,ds$ with initial datum $R(0)=0$. For the convenience of the reader and for later reference, we provide a proof of existence and uniqueness of solutions to \eqref{eq:model}--\eqref{eq:defrho}. We also refer to \cite[Thm 2.1]{Desvillettes2008} for a similar strategy, yet in different function spaces.
\begin{thm}\label{thm:existence}
Let $p,d \in L^\infty(\mathbb{R})$ and $n_0\in L^1(\mathbb{R})$ be non-negative. Then there exists a unique solution $n\in C^{\infty}([0,T];L^1(\mathbb{R}))$, $\rho\in C^\infty([0,T])$ of \eqref{eq:model}--\eqref{eq:defrho}.
\end{thm}
\begin{proof}
The proof relies on Banach's fixed point theorem. For
$$
M=\{\rho\in L^\infty(0,T):\rho\geq 0\}
$$
define the map $\Lambda:M\to M$ as
\begin{align}\label{eq:def_lambda}
(\Lambda(\rho))(t)=\int_\mathbb{R} n_0(x) e^{tp(x)-d(x)\int_0^t\rho(s)ds}dx.
\end{align}
By construction, fixed points of $\Lambda$ are solutions to \eqref{eq:rho_fix}. We endow the space $L^\infty(0,T)$ with the norm
\begin{align*}
\|\rho\|_{\infty,a}=\sup_{0<t<T} |\rho(t)|e^{-at}
\end{align*}
and choose $a=2\|n_0\|_{L^1} \|d\|_\infty e^{T\|p\|_\infty}$. If either $n_0\equiv 0$ or $d\equiv0$, then $a=0$ and the assertion holds trivially; let now $a>0$. Obviously, $\Lambda$ is a self-mapping. In order to show that $\Lambda$ is a contraction, we observe that
\begin{align*}
|e^{-d z}-e^{-dz_0}|\leq d |z-z_0|
\end{align*}
for all $z_0,z\geq 0$. Hence, we obtain for $\rho_1,\rho_2\in M$
\begin{align*}
|\Lambda(\rho_1)-\Lambda(\rho_2)|(t)&\leq \int_\mathbb{R} n_0(x) e^{tp(x)} |e^{-d(x)\int_0^t\rho_1(s)ds}-e^{-d(x)\int_0^t\rho_2(s)ds}|dx\\
&\leq \|n_0\|_{L^1} e^{T\|p\|_\infty} \|d\|_\infty\int_0^t|\rho_1(s)-\rho_2(s)|ds\\
&\leq\|n_0\|_{L^1} e^{T\|p\|_\infty} \|d\|_\infty\|\rho_1-\rho_2\|_{\infty,a} \frac{e^{at}}{a}.
\end{align*}
By the choice of $a$, we thus obtain
\begin{align*}
\|\Lambda(\rho_1)-\Lambda(\rho_2)\|_{\infty,a}\leq \frac{1}{2}\|\rho_1-\rho_2\|_{\infty,a},
\end{align*}
which shows that $\Lambda$ is a contraction. Banach's fixed point theorem implies the existence and uniqueness of $\rho\in M$ such that $\rho=\Lambda(\rho)$. Defining $n(t,x)$ via \eqref{eq:explicit} yields the unique solution to \eqref{eq:model}--\eqref{eq:init}. In addition, since $t\mapsto \int_0^t \rho(s)ds \in W^{1,\infty}(0,T)$, we infer that $n(t,x)\in W^{1,\infty}(0,T)$ a.e. $x$. The regularity assumptions on $p$, $d$ and $n_0$ yield that $n\in W^{1,\infty}(0,T;L^1(\mathbb{R}))$. Using \eqref{eq:defrho}, we then obtain $\rho\in W^{1,\infty}(0,T)$. Repeating these arguments, we obtain higher order differentiability in time of $\rho$ and $n$.
\end{proof}
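The contraction map $\Lambda$ from the proof also suggests a simple numerical scheme: discretize $x$ and $t$, evaluate $\int_0^t\rho(s)\,ds$ by the trapezoidal rule, and iterate \eqref{eq:def_lambda}. The following Python sketch illustrates this; all parameter functions are illustrative choices, not taken from the text.

```python
import numpy as np

# Picard iteration for the fixed-point equation of rho on a grid.
x = np.linspace(-3.0, 3.0, 601)
t = np.linspace(0.0, 1.0, 201)
dt, dx = t[1] - t[0], x[1] - x[0]

n0 = np.exp(-x**2)             # initial density (illustrative)
p = 1.0 + 0.2 * np.sin(x)      # reproduction rate (illustrative)
d = 0.5 * np.ones_like(x)      # death-rate weight (illustrative)

def Lambda(rho):
    """One application of the contraction map from the proof."""
    # R(t_j) = int_0^{t_j} rho(s) ds via the trapezoidal rule
    R = np.concatenate([[0.0], np.cumsum(0.5 * (rho[1:] + rho[:-1]) * dt)])
    n = n0[None, :] * np.exp(t[:, None] * p[None, :] - d[None, :] * R[:, None])
    # trapezoidal rule in x for each time level
    return (n.sum(axis=1) - 0.5 * (n[:, 0] + n[:, -1])) * dx

rho = np.zeros_like(t)
for _ in range(50):
    rho_new = Lambda(rho)
    err = np.max(np.abs(rho_new - rho))
    rho = rho_new
print(rho[0], err)   # rho(0) = int n0 ~ sqrt(pi); err is tiny
```

As for Volterra equations, the Picard iterates converge factorially fast on a bounded time interval, so a moderate number of iterations reaches machine precision.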
\section{Identification from knowledge of the total population size}\label{sec:inverse_population}
In the following we address inverse problem (P1).
In general, the coefficient $p$ is not uniquely determined by measurements of the total population $\rho$, as shown by the following examples.
\begin{itemize}
\item[(i)] Translational invariance: Let $n_0(x) = 1$ for $x\in \mathbb{R}$, $d=0$ and let $c>0$ be arbitrary. In addition, choose a compactly supported function $p(x)$ and define the function $\bar p(x) := p(x+c)$. Solving \eqref{eq:model}--\eqref{eq:init} with parameters $p$ and $\bar p$, respectively, yields the same function $\rho(t)$.
\item[(ii)] Symmetry: Let $d(x)=d(-x)$, $n_0(x)=n_0(-x)$, and let $p_1(x)$ be arbitrary. If we define $p_2(x)=p_1(-x)$, then $n_2(t,x)=n_1(t,-x)$, and hence $\rho_1(t)=\rho_2(t)$ for $t\geq 0$.
\end{itemize}
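The symmetry obstruction (ii) is easy to observe numerically: solving the fixed-point equation for $\rho$ with the mirrored rate $p_2(x)=p_1(-x)$ reproduces $\rho$ to machine precision. A minimal Python sketch, with illustrative parameter choices:

```python
import numpy as np

# With even n0 and even d, the mirrored rate p2(x) = p1(-x) yields the
# identical total population rho; rho cannot tell p1 from p2.
x = np.linspace(-3.0, 3.0, 601)     # symmetric grid, so p1(-x) = p1[::-1]
t = np.linspace(0.0, 1.0, 101)
dt, dx = t[1] - t[0], x[1] - x[0]

def solve_rho(n0, p, d, iters=60):
    """Picard iteration for the fixed-point equation of rho."""
    rho = np.zeros_like(t)
    for _ in range(iters):
        R = np.concatenate([[0.0], np.cumsum(0.5 * (rho[1:] + rho[:-1]) * dt)])
        n = n0[None, :] * np.exp(t[:, None] * p[None, :] - d[None, :] * R[:, None])
        rho = (n.sum(axis=1) - 0.5 * (n[:, 0] + n[:, -1])) * dx
    return rho

n0 = np.exp(-x**2)                  # even initial density (illustrative)
d = 0.3 + 0.1 * x**2                # even death-rate weight (illustrative)
p1 = 1.0 + 0.5 * np.tanh(x)         # deliberately non-symmetric rate
p2 = p1[::-1]                       # p2(x) = p1(-x)

rho1, rho2 = solve_rho(n0, p1, d), solve_rho(n0, p2, d)
print(np.max(np.abs(rho1 - rho2)))  # ~ machine precision
```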
These examples suggest considering the class of strictly monotone coefficient functions $p$.
\begin{thm}\label{thm:ident_p}
Let $n_0\in C^0(\mathbb{R})$ be nonnegative with compact and connected support. Assume that $d(x)=d>0$ is constant.
Denote by $p_1$ and $p_2$ continuously differentiable and strictly monotone functions on the support of $n_0$ such that $p_1'p_2'>0$, and let $n_1$ and $n_2$ denote the solutions to \eqref{eq:model}--\eqref{eq:init} with $p$ replaced by $p_1$ and $p_2$, respectively. Then, with $\rho_1$ and $\rho_2$ denoting the respective population sizes, we have
\begin{align*}
\rho_1 = \rho_2\text{ on } [0,T]\text{ implies } p_1 = p_2\text{ on } \mathrm{supp}(n_0).
\end{align*}
\end{thm}
\begin{proof}
By assumption $\rho=\rho_1=\rho_2$, and it follows from \eqref{eq:defrho} that
\begin{align}\label{eq:identity}
\int_{\mathbb{R}} n_0 e^{tp_1} e^{-d\int_0^t \rho(s)ds}\;dx = \int_\mathbb{R} n_0 e^{tp_2} e^{-d\int_0^t \rho(s)ds}\;dx.
\end{align}
Since $d$ is constant, this implies
\begin{align*}
\int_{\mathbb{R}} n_0 e^{tp_1} \;dx = \int_\mathbb{R} n_0 e^{tp_2} \;dx.
\end{align*}
Using monotonicity of $p_1$ and $p_2$, we can transform each of the integrals, using either $y=p_1(x)$ or $y=p_2(x)$ as new variables, respectively, to obtain
\begin{align*}
\int_{\mathbb{R}} \left( \frac{n_0(p^{-1}_1(y))}{p_1'(p_1^{-1}(y))}\chi_{\mathcal{P}_1}(y) - \frac{n_0(p^{-1}_2(y))}{p_2'(p_2^{-1}(y))}\chi_{\mathcal{P}_2}(y)\right) e^{ty}\;dy = 0,
\end{align*}
where we also used that $p_1'p_2'>0$. Here, $\mathcal{P}_i=p_i(\mathcal{S})$ for $\mathcal{S}={\rm supp}(n_0)$, $i=1,2$, and $\chi_{\mathcal{P}_i}$ denotes the indicator function of the set $\mathcal{P}_i$. Since $\mathcal{S}$ is a compact interval and $p_i \in C^0(\mathcal{S})$, $\mathcal{P}_i$ are compact intervals.
Differentiating with respect to $t$ and evaluating the result at $t=0$ then yields, for every $k\ge 0$,
\begin{align}\label{eq:N0dx}
\int_{\mathcal{P}_1\cup \mathcal{P}_2} \left( \frac{n_0(p^{-1}_1(y))}{p_1'(p_1^{-1}(y))}\chi_{\mathcal{P}_1}(y) - \frac{n_0(p^{-1}_2(y))}{p_2'(p_2^{-1}(y))}\chi_{\mathcal{P}_2}(y)\right) y^k\;dy = 0.
\end{align}
The term in brackets is continuous as a function of $y$ due to the construction of $\mathcal{P}_i$, $i=1,2$.
Since $\mathcal{P}_1\cup \mathcal{P}_2$ is compact, a density argument yields
\begin{align}\label{eq:diff_N}
\frac{n_0(p^{-1}_1(y))}{p_1'(p_1^{-1}(y))}\chi_{\mathcal{P}_1}(y) - \frac{n_0(p^{-1}_2(y))}{p_2'(p_2^{-1}(y))}\chi_{\mathcal{P}_2}(y) =0
\end{align}
for all $y\in \mathcal{P}_1\cup \mathcal{P}_2$. This readily implies ${\mathcal{P}_1}\setminus {\mathcal{P}_2}=\emptyset$ and ${\mathcal{P}_2}\setminus {\mathcal{P}_1}=\emptyset$, and hence $\mathcal{P}_1\cup \mathcal{P}_2=\mathcal{P}_1\cap \mathcal{P}_2$, i.e., $\mathcal{P}_1= \mathcal{P}_2$.
Introducing the primitive of $n_0$, i.e.,
\begin{align*}
N_0(x)=\int_{x_0}^x n_0(z) dz,
\end{align*}
where $x_0 = \min\mathcal{S}$, we see that \eqref{eq:diff_N} is equivalent to
\begin{align*}
\frac{d}{dy}\left(N_0(p_1^{-1}(y)) - N_0(p_2^{-1}(y))\right) = 0
\end{align*}
for all $y\in \mathcal{P}:=\mathcal{P}_1=\mathcal{P}_2$. The assumption $p_1'p_2'>0$ then implies $p_1(x_0)=p_2(x_0)$, and hence
$N_0(p_1^{-1}(y)) = N_0(p_2^{-1}(y))$ for all $y\in\mathcal{P}$.
Using the definition of $N_0$ we thus obtain
\begin{align*}
\int_{p_2^{-1}(y)}^{p_1^{-1}(y)}n_0(z)dz=0
\end{align*}
for $y\in \mathcal{P}$. Since $p_i^{-1}(\mathcal{P})=\mathcal{S}$, $i=1,2$, and $n_0$ is positive in the interior of $\mathcal{S}$, we deduce that $p_2^{-1}(y)=p_1^{-1}(y)$ for all $y\in\mathcal{P}$, i.e., $p_1(x)=p_2(x)$ for all $x\in\mathcal{S}$.
\end{proof}
\begin{rem}[Identification of $d$ and $n_0$]
Interchanging the roles of $d$ and $p$ in the above examples shows that, in general, uniqueness of $d$ cannot be expected from knowledge of $\rho$ only.
With similar arguments as in the proof of Theorem~\ref{thm:ident_p}, one can, however, prove uniqueness of $d$ in the class of strictly monotone functions (either increasing or decreasing) given measurements of $\rho(t)$ and knowledge of $n_0$ and constant $p$.
Moreover, one can show that for $p$ and $n_0$ arbitrary, knowledge of $\rho(t)$, $t\geq 0$, uniquely determines constant parameters $d$.
The transformation $y=p(x)$ in the proof of Theorem~\ref{thm:ident_p} can also be used to identify compactly supported initial data $n_0$ if $p$ is strictly monotone and $d$ is constant. We leave the details to the reader.
\end{rem}
\section{Identification in critical points of the population}\label{sec:inverse_critical}
Above we have shown that, under appropriate assumptions, the total population size contains sufficient information for the determination of some of the parameters of the problem. These results, however, do not provide an explicit reconstruction formula. In this section, we show that knowledge of the critical points of the population density can be used to directly compute derivatives of the unknown parameters.\\
Before we state the results, we discuss properties of the critical points of $n$ in some detail.
\subsection{The critical points of $n$}\label{sec:critical}
We call a point $\bar x\in {\rm supp}(n_0)$ critical for $n$ if there exists a $t\geq 0$ such that $\partial_x n(t,\bar x)=0$.
\begin{lemma}\label{lem:formula_diff_log}
Denote by $n$ the solution to \eqref{eq:model}--\eqref{eq:init} for differentiable parameter functions $d$ and $p$.
Then, any critical point $\bar x$ of $n$ is characterized by
\begin{align}\label{eq:rec_from_max}
(\ln (n_0(x)))'_{\mid x=\bar x} = d'(\bar x) \int_0^t\rho(s)ds-t p'(\bar x) .
\end{align}
\end{lemma}
\begin{proof}
Using the chain rule, we see that $\bar x$ is also a critical point of $\ln n$, i.e.,
\begin{align*}
\partial_x (\ln(n(t,x)))_{\mid x=\bar x}=0.
\end{align*}
On the other hand, from the solution formula \eqref{eq:explicit}, we deduce that
\begin{align*}
\ln(n(t,x)) = \ln (n_0(x)) + t p(x) - d(x) \int_0^t\rho(s)ds,
\end{align*}
so we obtain the result by differentiation with respect to $x$ and evaluation at $x=\bar x$.
\end{proof}
Assuming that $d$ is constant, the critical points of $n$ are, therefore, those $\bar x\in {\rm supp}(n_0)$ for which $t\geq 0$ exists with
\begin{align}\label{eq:cond_crit}
\frac{n_0'(\bar x)}{n_0(\bar x)}+tp'(\bar x)=0.
\end{align}
We distinguish three cases:
\begin{itemize}
\item[(i)] For $n_0'(x)p'(x)>0$, the point $x$ is never a critical point for $n(t,\cdot)$.
\item[(ii)] For $n_0'(x)p'(x)<0$, there exists a unique $t=-n_0'(x)/(n_0(x)p'(x))$ for which $x$ is a critical point of $n(t,\cdot)$.
\item[(iii)] For $n_0'(x)p'(x)=0$, if $p'(x)=0$, then \eqref{eq:cond_crit} implies $n_0'(x)=0$, and $x$ is a critical point of $n(t,\cdot)$ for all $t\geq 0$. Otherwise, if $p'(x)\neq 0$, then $x$ is a critical point of $n(t,\cdot)$ only for $t=0$.
\end{itemize}
A similar discussion applies for $p$ constant and $d$ variable; or $n_0$ constant and $p$ and $d$ variable.
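The case distinction above is easy to evaluate in practice. The following sketch (with the hypothetical data $n_0(x)=\cos(\pi x/2)$, $p(x)=1+\sin(x)^2$ and constant $d$, which reappear in Section~\ref{sec:numerics}) computes the unique critical time of case (ii):

```python
import numpy as np

# Case (ii) above: if n0'(x) p'(x) < 0, the point x is critical for n(t, .) at the
# unique time t = -n0'(x) / (n0(x) p'(x)). Hypothetical data as in the numerical
# section: n0(x) = cos(pi x / 2), p(x) = 1 + sin(x)^2, d constant.
def critical_time(x):
    n0 = np.cos(np.pi * x / 2)
    n0_prime = -np.pi / 2 * np.sin(np.pi * x / 2)
    p_prime = np.sin(2 * x)               # derivative of p(x) = 1 + sin(x)^2
    return -n0_prime / (n0 * p_prime)

# for x in (0, 1) we have n0' < 0 and p' > 0, so each such x becomes a critical
# point exactly once, at a positive finite time
t_star = critical_time(0.5)
```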
\subsection{Identification of a single parameter}
As a direct consequence of Lemma~\ref{lem:formula_diff_log} we obtain the following reconstruction formulas for the derivatives of the parameters.
\begin{thm}\label{thm:recon_p_d_prime}
Let $T>0$, and denote by $n$ the solution to \eqref{eq:model}--\eqref{eq:init} for differentiable parameter functions $d$ and $p$ and a differentiable initial datum. Furthermore, let $\bar x$ be a critical point of $n(t,\cdot)$ for some $t>0$.
(i) If $d$ is constant, then $p'(\bar x)$ is uniquely determined by $n_0$, i.e.,
\begin{align}\label{eq:cond_max_p}
p'(\bar x)=-\frac{n_0'(\bar x)}{t n_0(\bar x)}.
\end{align}
(ii) If $p$ is constant, then $d'(\bar x)$ is uniquely determined by $n_0$ and $\int_0^t\rho(s) ds$, i.e.,
\begin{align}\label{eq:cond_max_d}
d'(\bar x)=\frac{n_0'(\bar x)}{ n_0(\bar x)\int_0^t \rho(s)ds}.
\end{align}
(iii) $(\ln(n_0(x)))'_{\mid x=\bar x}$ is uniquely determined by $p'(\bar x)$, $d'(\bar x)$ and $\int_0^t\rho(s)ds$ via \eqref{eq:rec_from_max}.
\end{thm}
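To illustrate formula \eqref{eq:cond_max_p}, the following sketch locates a critical point numerically and recovers $p'$ there. It is only a toy check with hypothetical data ($n_0(x)=\cos(\pi x/2)$, $p(x)=1+\sin(x)^2$, as in Section~\ref{sec:numerics}); for constant $d$, the critical points of $n(t,\cdot)$ coincide with those of $n_0(x)e^{tp(x)}$, so no knowledge of $\rho$ is needed to generate them:

```python
import numpy as np

# For constant d, n(t, x) = n0(x) exp(t p(x)) times an x-independent factor, so
# critical points of n(t, .) can be located without knowing rho.
x = np.linspace(-1.0, 1.0, 200001)
n0 = np.cos(np.pi * x / 2)
p = 1.0 + np.sin(x) ** 2

t = 2.0
profile = n0 * np.exp(t * p)              # x-dependence of n(t, .)
xbar = x[np.argmax(profile)]              # an interior critical point of n(t, .)

# reconstruction formula (i): p'(xbar) = -n0'(xbar) / (t n0(xbar))
n0_prime = -np.pi / 2 * np.sin(np.pi * xbar / 2)
p_prime_rec = -n0_prime / (t * np.cos(np.pi * xbar / 2))
p_prime_true = np.sin(2 * xbar)           # derivative of the hypothetical p
```

Up to the grid resolution used to locate $\bar x$, the reconstructed value agrees with the exact derivative.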
\begin{rem}
It can easily be seen from the solution formula \eqref{eq:explicit} that the functions $n(t,x)$ and $n_c(t,x)$, which are solutions to \eqref{eq:model}--\eqref{eq:defrho} for parameters $(p,d)$ and $(p+c,d)$ with constants $c,d\in\mathbb{R}$, respectively, share the same critical points. In this sense, the previous theorem cannot be improved without further assumptions. A similar conclusion holds true for parameter pairs $(p,d)$ and $(p,d+c)$.
\end{rem}
\begin{rem}
In the situation of Theorem~\ref{thm:recon_p_d_prime},
if the closure of the set of critical points coincides with the support of $n_0$, then $p$ is determined up to an additive constant. If in addition $\rho(t)$ is known for some $t>0$, then this additive constant is fixed, i.e., $p$ is unique.
\end{rem}
\subsection{Remarks on simultaneous identification}
Simultaneous identification of multiple parameters or their derivatives is difficult.
Counting dimensions, it is to be expected that measurements of the one-dimensional function $\rho(t)$ are not sufficient to simultaneously recover two of the parameter functions, as the following examples show:
\begin{itemize}
\item[(i)] Let $n_0$ be any compactly supported function with $\int n_0 dx=a>0$. Let $d_1(x)$ and $d_2(x)$ be arbitrary functions, and define $p_i(x)=a d_i(x)$, $i=1,2$. Then $n_i(t,x)=n_0(x)$ solves \eqref{eq:model}--\eqref{eq:init} with $\rho_i(t)=\rho(0)=a$. Hence, knowledge of $\rho$ does not allow one to identify $p$ and $d$ simultaneously.
\item[(ii)] Let $n_0$ be any function supported on $[0,1]$, $d=0$, and let $p_i:[0,1]\to[0,1]$, $i=1,2$, be two invertible functions that satisfy $p_i(0)=0$ and $p_i(1)=1$. We define the initial datum as $n_0^i(x)=n_0(p_i(x))p'_i(x)$, and denote by $n_i$ the corresponding solutions to \eqref{eq:model}--\eqref{eq:init}. Using the substitution $y=p_i(x)$, we obtain that
\begin{align*}
\rho_i(t)&=\int_0^1 n_0^i(x) e^{p_i(x)t}dx=\int_0^1 n_0(y) e^{yt}dy,
\end{align*}
i.e., $\rho_1(t)=\rho_2(t)$.
Hence, it is not possible to determine $n_0$ and $p$ from $\rho$. This argument can be extended to $d>0$.
\end{itemize}
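Example (ii) can also be verified numerically. In the sketch below (hypothetical choices: base datum $n_0(y)=y(1-y)$ on $[0,1]$, maps $p_1(x)=x$ and $p_2(x)=x^2$, $d=0$), both pairs produce the same values of $\rho(t)$ up to quadrature error:

```python
import numpy as np

# Two pairs (n0^i, p_i) with n0^i(x) = n0(p_i(x)) p_i'(x) give identical rho(t)
# for d = 0. Hypothetical choices: n0(y) = y (1 - y), p1(x) = x, p2(x) = x^2.
x = np.linspace(0.0, 1.0, 100001)
dx = x[1] - x[0]
base_n0 = lambda y: y * (1.0 - y)
times = (0.5, 1.0, 2.0)

rho = []
for p, p_prime in [(x, np.ones_like(x)), (x ** 2, 2 * x)]:
    n0_i = base_n0(p) * p_prime           # transformed initial datum n0(p_i) p_i'
    rho.append([float(np.sum(n0_i * np.exp(t * p)) * dx) for t in times])
```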
In Section~\ref{sec:inverse_critical}, we have seen that measuring the critical points allows for reconstruction of derivatives of one of the parameters.
The discussion in Section~\ref{sec:critical} shows that if $x$ is a critical point of $n$ for two distinct times, say $t_1, t_2\geq 0$, then $n_0'(x)=0$ and $p'(x)=0$ are uniquely determined, given that $d\in\mathbb{R}$ is constant. Similarly, $n_0'(x)=0$ and $d'(x)=0$ if $p\in\mathbb{R}$ is constant. Using \eqref{eq:rec_from_max}, this reasoning can be extended to non-constant $p$ and $d$ in order to obtain formulas for $d'(\bar x)$ and $p'(\bar x)$ given $n_0'(\bar x)/n_0(\bar x)$ and $\rho(t)$, namely
\begin{align*}
\begin{pmatrix} \int_0^{t_1}\rho(s)ds & -t_1\\\int_0^{t_2}\rho(s)ds & -t_2\end{pmatrix}\begin{pmatrix} d'(\bar x)\\ p'(\bar x)\end{pmatrix}=\frac{n_0'(\bar x)}{n_0(\bar x)}\begin{pmatrix}1\\1\end{pmatrix}.
\end{align*}
We note that, in general, the matrix in the above linear system might be singular, thereby allowing for multiple solutions or none.
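For illustration, the two-time system can be solved directly whenever the matrix is regular; note that for constant $\rho$ one has $\int_0^{t_i}\rho\,ds=\rho_0 t_i$, so the rows are proportional and the matrix is singular. A sketch with hypothetical values:

```python
import numpy as np

# Solve the 2x2 system from two critical times t1, t2 for (d'(xbar), p'(xbar)).
# R_i = int_0^{t_i} rho(s) ds and g = (ln n0)'(xbar); all values hypothetical.
def derivatives_from_two_times(t1, t2, R1, R2, g):
    A = np.array([[R1, -t1], [R2, -t2]])
    if abs(np.linalg.det(A)) < 1e-12:
        raise ValueError("singular system, e.g. for constant rho (R_i = rho0 t_i)")
    return np.linalg.solve(A, np.array([g, g]))

# consistent hypothetical data: rho(s) = s, so R1 = t1^2/2 = 0.5, R2 = t2^2/2 = 2;
# choosing d' = 2 and p' = 3 gives g = d' R1 - t1 p' = d' R2 - t2 p' = -2
d_prime, p_prime = derivatives_from_two_times(1.0, 2.0, 0.5, 2.0, -2.0)
```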
We note that identifying two of the parameter functions from knowledge of $\rho$ and $x(t)$, where $x(t)$ denotes a curve of critical points with $x'(t)\neq 0$, remains an open problem.
\section{Reconstructions}\label{sec:numerics}
\subsection{Reconstructions from the total population size}
In this section we assume knowledge of the total population size $\{\rho(t):0\leq t\leq T\}$ in order to determine the parameter function $p(x)$.
Theorem~\ref{thm:ident_p} shows that measuring the total population size is sufficient in order to uniquely reconstruct the parameter $p$ as long as $d$ is a constant and $p$ is either strictly increasing or strictly decreasing.
Contrary to the situation of Theorem~\ref{thm:recon_p_d_prime}, there are, however, no explicit reconstruction formulas available.
We thus propose to use a variational regularization technique to numerically reconstruct $p$ from measurements of the (noisy) total population size $\rho^\delta(t)$, where $\delta > 0$ denotes the noise level.
In the following two subsections we discuss two approaches to define suitable Tikhonov regularizations in Hilbert spaces.
\subsubsection{Fully nonlinear forward operator}
We begin with the obvious definition of the nonlinear forward operator
\begin{align*}
F: X=H^1(\mathcal{S})\to Y=L^2(0,T),\quad p\mapsto \rho\quad\text{where}\quad \rho=\Lambda_p(\rho).
\end{align*}
Here, the subscript $p$ should emphasize the dependence on $p$ of the map $\Lambda$ as defined in \eqref{eq:def_lambda}. The choice of $X=H^1(\mathcal{S})$ is motivated by the continuity of the embedding $H^1(\mathcal{S})\hookrightarrow L^\infty(\mathcal{S})$, which implies that $F$ is well-defined by Theorem~\ref{thm:existence}.
Denoting by $p_0\in H^1(\mathcal{S})$ some a-priori knowledge, such as a monotonically increasing function, we construct stable approximations to the exact solution $p^\dagger$, which satisfies $F(p^\dagger)=\rho$, by minimizing the Tikhonov functional
\begin{align}\label{eq:Tik}
\frac{1}{2} \|F(p)-\rho^\delta\|_{Y}^2 + \frac{\alpha}{2}\|p-p_0\|_{X}^2,
\end{align}
over the space $H^1(\mathcal{S})$. Here and in the following we make the assumption that the data perturbation can be estimated as follows
\begin{align}
\|\rho-\rho^\delta\|_{L^2(0,T)}\leq \delta.
\end{align}
Standard theory of inverse problems can be used to prove existence of minimizers $p_\alpha^\delta$ and stable dependence on the data as long as $\alpha>0$, see e.g. \cite{EHN96}.
Widely used algorithms to minimize the Tikhonov functional employ the gradient of $F$.
Without proof (which amounts to a lengthy calculation using \eqref{eq:explicit}), we note that $F$ depends smoothly on $p$ and the Fr\'echet derivative is
\begin{align*}
F'(p):h\mapsto D\quad\text{where}\quad D(t)=\int_\mathbb{R} \left[th(x)-d(x)\int_0^t D(s)ds \right] n_0(x)e^{pt-d\int_0^t\rho ds}dx,
\end{align*}
for $p,h\in H^1(\mathcal{S})$.
We observe that the definition of $F'(p)h$ constitutes an ordinary differential equation for $\int_0^t D(s)ds$, which yields the explicit formula
\begin{align*}
(F'(p)h)(t)=D(t)=\int_\mathbb{R} h(x) n_0(x)\int_0^t \left(\int_0^s e^{pr}dr\right) e^{-d\int_0^s\rho(r)dr} dsdx.
\end{align*}
Using this formula, it is straightforward to obtain a formula for the adjoint operator $F'(p)^*\psi$, $\psi\in L^2(0,T)$, which is defined as the solution to
\begin{align*}
-\Delta w + w &= n_0(x) \int_0^T \psi(t) \int_0^t \left(\int_0^s e^{p(x)r}dr\right) e^{-d\int_0^s\rho dr}ds dt\qquad \text{in }\mathcal{S},\\
\partial_n w&=0\quad\text{on } \partial\mathcal{S}.
\end{align*}
It is easy to verify that for all $h,p\in H^1(\mathcal{S})$ and $\psi\in L^2(0,T)$
\begin{align*}
(F'(p)h,\psi)_{L^2(0,T)} = (h, F'(p)^*\psi)_{H^1(\mathcal{S})}.
\end{align*}
Convergence rates for the error $\|p_\alpha^\delta- p^\dagger\|_{H^1(\mathcal{S})}$ follow from assuming a source condition \cite{EHN96}
\begin{align}\label{eq:source_condition}
p^\dagger-p_0 = F'(p^\dagger)^*w
\end{align}
with sufficiently small $w\in L^2(0,T)$.
In order to approximate minimizers of the Tikhonov functional, we use the iteratively regularized Gauss-Newton (IRGN) method
\begin{align*}
p_{k+1}=p_k + (F'(p_k)^*F'(p_k)+\alpha_k I)^{-1}\big(F'(p_k)^*(\rho^\delta-F(p_k))+\alpha_k (p_0-p_k)\big),
\end{align*}
where $\alpha_k=\max\{\alpha,1/2^k\}$; see \cite{BakKok04} for a convergence analysis if $\alpha=0$ and $\delta=0$. Let us refer to \cite{ES15} for a discussion on the use of the IRGN method to minimize \eqref{eq:Tik} with $\alpha>0$.
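For a discretized problem the IRGN update takes only a few lines. The following toy sketch replaces the forward operator by the componentwise square $F(p)=p^2$ (an illustrative stand-in, not the operator of this paper) and applies the update with $\alpha_k=\max\{\alpha,1/2^k\}$:

```python
import numpy as np

# Generic IRGN iteration on a discretized problem; F and jac are placeholders
# for the forward operator and its Jacobian (here a toy componentwise square).
def irgn(F, jac, y_delta, p0, alpha, iters):
    p = p0.copy()
    for k in range(iters):
        a_k = max(alpha, 0.5 ** k)
        J = jac(p)
        lhs = J.T @ J + a_k * np.eye(p.size)
        rhs = J.T @ (y_delta - F(p)) + a_k * (p0 - p)
        p = p + np.linalg.solve(lhs, rhs)
    return p

F = lambda p: p ** 2                      # toy forward operator
jac = lambda p: np.diag(2.0 * p)          # its Jacobian
p_true = np.array([1.0, 2.0, 3.0])
p_rec = irgn(F, jac, F(p_true), p0=np.array([0.8, 1.8, 2.8]), alpha=1e-12, iters=40)
```

For noise-free data and vanishing $\alpha_k$, the iteration behaves like a damped Gauss-Newton method and recovers the toy parameters.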
\paragraph{Numerical example}
We illustrate the performance of the IRGN method with the example $n_0(x)=\cos(\pi x/2)$, for $x\in\mathcal{S}=(-1,1)$, $p^\dagger(x)=e^x$, and $d(x)=1$. The final time is chosen as $T=1$. We choose a spatial grid with spacing $10^{-3}$ and a temporal grid with spacing $10^{-2}$.
The initial guess $p_0$ is chosen such that it satisfies \eqref{eq:source_condition} with $w(t)=e^{-t}$. A reconstruction is shown in Figure~\ref{fig:rec_p_nonlinear} together with the convergence rate of the error $\|p_\alpha^\delta-p^\dagger\|_{H^1(\mathcal{S})}$, which exhibits the rate $O(\sqrt{\delta})$ that is expected for Tikhonov regularization.
The good convergence behavior of the IRGN method can also be seen in Table~\ref{tab:rec_p_nonlinear}.
\begin{figure}
\includegraphics[width=.48\textwidth]{rec_p_nonlinear}
\includegraphics[width=.48\textwidth]{rec_p_nonl_H1err}
\caption{\label{fig:rec_p_nonlinear} Left: $p^\dagger$ (solid line) and corresponding reconstruction $p_\alpha^\delta$ for $\alpha=\delta=1.24\times 10^{-2}$ after $6$ IRGN iterations for minimizing \eqref{eq:Tik}. Right: A plot of the error $\|p_\alpha^\delta-p^\dagger\|_{H^1(\mathcal{S})}$ (dotted) and the curve $\sqrt{\delta}$ (solid) for different values of $\delta$.}
\end{figure}
\begin{table}
\caption{\label{tab:rec_p_nonlinear} Convergence behavior of the IRGN method for the minimization of \eqref{eq:Tik} for different noise levels $\delta$. The error converges as $O(\sqrt{\delta})$, cf. Figure~\ref{fig:rec_p_nonlinear}.}
\centering
\begin{tabular}{c c c c}
\toprule
$\delta$ & $\|p_\alpha^\delta-p^\dagger\|_{H^1(\mathcal{S})}$ & $\|\rho^\delta-F(p_\alpha^\delta)\|_{L^2(0,T)}$ & \# iterations\\
\midrule
$1.2\times 10^{-1}$ & $1.7\times 10^{-1}$ & $1.4\times 10^{-1}$ & 1 \\
$1.2\times 10^{-2}$ & $5.9\times 10^{-2}$ & $2.4\times 10^{-2}$ & 6\\
$1.2\times 10^{-3}$ & $8.1\times 10^{-3}$ & $2.2\times 10^{-3}$ & 10\\
$1.2\times 10^{-4}$ & $6.0\times 10^{-3}$ & $2.2\times 10^{-4}$ & 14\\
$1.2\times 10^{-5}$ & $3.9\times 10^{-4}$ & $2.1\times 10^{-5}$ & 17\\
$1.2\times 10^{-6}$ & $1.5\times 10^{-4}$ & $1.7\times 10^{-6}$ & 21\\
$1.2\times 10^{-7}$ & $1.5\times 10^{-4}$ & $2.2\times 10^{-7}$ & 24\\
$1.2\times 10^{-8}$ & $1.3\times 10^{-5}$ & $2.4\times 10^{-8}$ & 27\\
$1.2\times 10^{-9}$ & $3.1\times 10^{-6}$ & $1.7\times 10^{-9}$& 31\\
$1.2\times 10^{-10}$ & $3.2\times 10^{-6}$ & $2.1\times 10^{-10}$ & 34 \\
\bottomrule
\end{tabular}
\end{table}
\subsubsection{Perturbed forward operator}\label{sec:perturbed}
In order to reduce the nonlinearity of the inverse problem, let us present a second choice of forward operator.
Inserting the data $\rho^\delta$ into the right-hand side of \eqref{eq:rho_fix}, we define a perturbed forward operator
\begin{align*}
F^\delta(p) = \int_\mathbb{R} n_0(x) e^{tp(x)-d(x)\int_0^t\rho^\delta(s) ds}dx.
\end{align*}
Similarly to the proof of Theorem~\ref{thm:existence}, we obtain the following error estimate:
\begin{align*}
\|F^\delta(p)- F(p) \|_{L^2(0,T)} \leq \|n_0\|_{L^1}e^{T\|p\|_\infty} \|d\|_\infty T \|\rho^\delta-\rho\|_{L^2(0,T)}.
\end{align*}
As above, we assume that $n_0$ is compactly supported with support $\mathcal{S}$.
Thus, in view of standard results from the analysis of Tikhonov regularization \cite{EHN96}, we can obtain stable approximations by minimizing the following Tikhonov functional with perturbed forward operator
\begin{align}\label{eq:Tik2}
\frac{1}{2} \|F^\delta(p)-\rho^\delta\|_{Y}^2 + \frac{\alpha}{2}\|p-p_0\|_{X}^2,
\end{align}
with $Y=L^2(0,T)$ and $X=H^1(\mathcal{S})$, $\mathcal{S}={\rm supp}(n_0)$ and $p_0\in X$.
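A discrete evaluation of $F^\delta$ only requires quadrature, which is what makes this variant attractive in implementations. A minimal sketch with hypothetical grids and data (trapezoidal rule throughout):

```python
import numpy as np

def cumulative_trapezoid(f, t):
    """Approximations of int_0^{t_i} f(s) ds on the grid t."""
    inc = 0.5 * (f[1:] + f[:-1]) * np.diff(t)
    return np.concatenate([[0.0], np.cumsum(inc)])

def F_delta(p_vals, x, n0_vals, d_vals, t, rho_delta):
    """Evaluate t_i -> int n0 exp(t_i p - d R_i) dx with R_i = int_0^{t_i} rho^delta."""
    R = cumulative_trapezoid(rho_delta, t)
    out = np.empty_like(t)
    for i, t_i in enumerate(t):
        integrand = n0_vals * np.exp(t_i * p_vals - d_vals * R[i])
        out[i] = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x))
    return out

# sanity-check data (hypothetical): n0(x) = cos(pi x / 2) on (-1, 1)
x = np.linspace(-1.0, 1.0, 2001)
n0_vals = np.cos(np.pi * x / 2)
t = np.linspace(0.0, 1.0, 11)
rho_delta = np.ones_like(t)
vals = F_delta(np.zeros_like(x), x, n0_vals, np.zeros_like(x), t, rho_delta)
# with p = 0 and d = 0 every value equals int n0 dx = 4 / pi
```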
For completeness, we provide the following result, which is a slight generalization of \cite[Thm 10.3]{EHN96}, see also \cite{Egger2015} for a corresponding result for linear problems.
\begin{lemma}
Let $F:X\to Y$ be a continuous and weakly lower semi-continuous operator between Hilbert spaces $X$ and $Y$. Let $\delta>0$ and let $F^\delta:X\to Y$ be continuous and weakly lower semi-continuous such that $\|F^\delta(p)-F(p)\|_Y\leq C(\|p\|_X)\delta$ for all $p\in X$ with a constant $C(\|p\|_{X})$ that depends continuously on $\|p\|_X$. Then, for $\rho,\rho^\delta\in Y$ with $\rho\in R(F)$ and $\|\rho-\rho^\delta\|_Y\leq \delta$, the minimizers $\{p_\alpha^\delta\}$ of \eqref{eq:Tik2} converge along subsequences to a $p_0$-minimum-norm solution of $F(p)=\rho$ as $\delta\to 0$, provided that $\alpha\to 0$ and $\delta^2/\alpha\to 0$. If the $p_0$-minimum-norm solution is unique, then the whole sequence converges to it.
\end{lemma}
\begin{proof}
The proof is similar to \cite[Thm. 10.3]{EHN96}, and we give only the steps that are different. Let $p^\dagger\in X$ be a $p_0$-minimum-norm solution. Since $\{p_\alpha^\delta\}$ minimize \eqref{eq:Tik2}, we have that
\begin{align*}
\frac{1}{2} \|F^\delta(p_\alpha^\delta)-\rho^\delta\|_{Y}^2 + \frac{\alpha}{2}\|p_\alpha^\delta-p_0\|_{X}^2
&\leq \frac{1}{2} \|F^\delta(p^\dagger)-\rho^\delta\|_{Y}^2 + \frac{\alpha}{2}\|p^\dagger-p_0\|_{X}^2\\
&\leq 2 C(\|p^\dagger\|_X)^2\delta^2+ \frac{\alpha}{2}\|p^\dagger-p_0\|_{X}^2,
\end{align*}
which implies boundedness of $\{p_\alpha^\delta\}$ and weak convergence of a subsequence $\{p_{\alpha_k}^{\delta_k}\}$ to some $p\in X$. Moreover, we have that
\begin{align*}
\|F^\delta(p_\alpha^\delta)-\rho^\delta\|_Y^2\leq 4 C(\|p^\dagger\|_X)^2\delta^2+\alpha\|p^\dagger-p_0\|^2_X.
\end{align*}
By weak lower-semicontinuity of $F$ and using the latter inequality, we obtain that
\begin{align*}
\|F(p)-\rho\|_Y&\leq \limsup_k \|F(p_{\alpha_k}^{\delta_k})-\rho^{\delta_k}\|_Y\leq \limsup_k \left(\|F^{\delta_k}(p_{\alpha_k}^{\delta_k})-F(p_{\alpha_k}^{\delta_k})\|_Y + \|F^{\delta_k}(p_{\alpha_k}^{\delta_k})-\rho^{\delta_k}\|_Y\right)\\
&\leq \limsup_k \left(C(\|p_{\alpha_k}^{\delta_k}\|_X)\delta_k + 2C(\|p^\dagger\|_X)\delta_k+\sqrt{\alpha_k}\,\|p^\dagger-p_0\|_X\right)=0,
\end{align*}
where we used continuity of the constant $C(\|p_{\alpha_k}^{\delta_k}\|_X)$ and boundedness of $\{p_{\alpha_k}^{\delta_k}\}$. Thus, $F(p)=\rho$. Proceeding as in the proof of \cite[Thm. 10.3]{EHN96}, we hence obtain the assertion.
\end{proof}
As before, $F^\delta$ is Fr\'echet differentiable with derivative
\begin{align*}
dF^\delta(p)h = \int_\mathcal{S} h(x) n_0(x) t e^{pt-d\int_0^t\rho^\delta ds} dx,\quad h\in H^1(\mathcal{S}),
\end{align*}
and the adjoint $dF^\delta(p)^*\psi$, $\psi\in L^2(0,T)$, is defined as the solution to
\begin{align*}
-\Delta w + w &= n_0(x) \int_0^T t\psi(t) e^{pt-d\int_0^t\rho^\delta ds}dt\qquad \text{in }\mathcal{S},\\
\partial_n w&=0\quad\text{on } \partial\mathcal{S}.
\end{align*}
The Tikhonov functional \eqref{eq:Tik2} can then be minimized as above by the IRGN method, which we consider next.
\paragraph{Numerical Example}
We consider the same example and setup as in the previous section. We observe that using the perturbed forward operator yields essentially the same results as using the fully nonlinear forward operator. However, the numerical implementation of the perturbed forward operator is simpler. Figure~\ref{fig:rec_p} shows an exemplary reconstruction together with the exact solution and the convergence behaviour of the error $\|p_\alpha^\delta-p^\dagger\|_{H^1(\mathcal{S})}$ for different values of $\delta$. Table~\ref{tab:rec_p} shows, in addition, the convergence of the residuals for different values of $\delta$ and the required number of IRGN iterations to obtain a suitable reconstruction.
\begin{figure}
\includegraphics[width=.48\textwidth]{rec_p}
\includegraphics[width=.48\textwidth]{rec_p_H1err}
\caption{\label{fig:rec_p} Left: $p^\dagger$ (solid line) and corresponding reconstruction $p_\alpha^\delta$ for $\alpha=\delta=1.24\times 10^{-2}$ after $5$ IRGN iterations for minimizing \eqref{eq:Tik2}. Right: A plot of the corresponding errors $\|p_\alpha^\delta-p^\dagger\|_{H^1(\mathcal{S})}$ (dotted) and the curve $\sqrt{\delta}$ (solid) for different values of $\delta$.}
\end{figure}
\begin{table}
\caption{\label{tab:rec_p} Convergence behavior of the IRGN method for the minimization of \eqref{eq:Tik2} for different noise levels $\delta$. The error converges as $O(\sqrt{\delta})$, cf. Figure~\ref{fig:rec_p}.}
\centering
\begin{tabular}{c c c c}
\toprule
$\delta$ & $\|p_\alpha^\delta-p^\dagger\|_{H^1(\mathcal{S})}$ & $\|\rho^\delta-F(p_\alpha^\delta)\|_{L^2(0,T)}$ & \# iterations\\
\midrule
$2.5\times 10^{-1}$ & $2.3\times 10^{-1}$ & $2.9\times 10^{-1}$ & 1 \\
$2.5\times 10^{-2}$ & $4.2\times 10^{-2}$ & $4.0\times 10^{-2}$ & 5\\
$2.5\times 10^{-3}$ & $4.9\times 10^{-3}$ & $3.4\times 10^{-3}$ & 9\\
$2.5\times 10^{-4}$ & $3.7\times 10^{-3}$ & $4.3\times 10^{-4}$ & 12\\
$2.5\times 10^{-5}$ & $3.4\times 10^{-4}$ & $4.7\times 10^{-5}$ & 15\\
$2.5\times 10^{-6}$ & $1.1\times 10^{-4}$ & $3.4\times 10^{-6}$ & 19\\
$2.5\times 10^{-7}$ & $2.2\times 10^{-5}$ & $3.8\times 10^{-7}$ & 22\\
$2.5\times 10^{-8}$ & $3.6\times 10^{-6}$ & $4.4\times 10^{-8}$ & 25\\
$2.5\times 10^{-9}$ & $2.1\times 10^{-6}$ & $3.4\times 10^{-9}$& 29\\
$2.5\times 10^{-10}$ & $2.1\times 10^{-6}$ & $4.2\times 10^{-10}$ & 32 \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Reconstructions using critical points of the population density}
We illustrate the reconstruction formulas given in Theorem~\ref{thm:recon_p_d_prime} by numerical examples. Contrary to Theorem~\ref{thm:ident_p}, Theorem~\ref{thm:recon_p_d_prime} does not require monotonicity of the parameter functions.
\paragraph{Reconstruction of $p'$ from critical points of $n$}
As an initial datum we choose $n_0(x)=\cos(\pi x/2)$, $d(x)=1$ and $p(x)=1+\sin(x)^2$ and we let $x\in (-1,1)$ and $t\in [0,10]$.
For our numerical computations, we discretize $x$ equidistantly with grid spacing $10^{-4}$. Similarly, we discretize time with time step size $10^{-2}$. In our numerical algorithms, given an approximation of $n(t,x)$ we thus compute $\rho(t)$ and $\int_0^t\rho(s)ds$ approximately using quadrature rules. Using these approximations, we compute an approximation of $n$ at the next time instance using \eqref{eq:explicit} with $\int_0^t\rho(s)ds$ replaced by its numerical approximation.
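In condensed form (and on coarser grids than in our experiments), the time stepping just described reads as follows; the update of $\int_0^t\rho(s)ds$ uses the last computed value of $\rho$ as a predictor:

```python
import numpy as np

# March the explicit solution formula in time, updating int_0^t rho(s) ds by
# quadrature. Coarse hypothetical grids; n0, p and d as in this example.
x = np.linspace(-1.0, 1.0, 2001)
dx = x[1] - x[0]
n0 = np.cos(np.pi * x / 2)
p = 1.0 + np.sin(x) ** 2
d = 1.0
dt, n_steps = 1e-2, 200

rho = [np.sum(n0) * dx]                   # rho(0); n0 vanishes at the boundary
R = 0.0                                   # running approximation of int_0^t rho ds
for k in range(1, n_steps + 1):
    t = k * dt
    R_pred = R + dt * rho[-1]             # predictor using the last known rho
    n = n0 * np.exp(t * p - d * R_pred)   # explicit solution formula
    rho.append(np.sum(n) * dx)
    R += 0.5 * dt * (rho[-2] + rho[-1])   # trapezoidal update of the integral
```

Critical points of the stored profiles $n$ can then be collected over time exactly as in the experiments described above.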
To apply Theorem~\ref{thm:recon_p_d_prime}, we collect the minima and maxima of the approximate population density over time as our data $\{(t_i,\bar x_i)\}$; cf. Figure~\ref{fig:p_prime_data} for snapshots of the approximation of $n(t,x)$ for $t\in \{2,6,9\}$. Since $n_0'(x)p'(x)\leq0$ for all $x\in (-1,1)$, all $x\in (-1,1)$ will eventually be critical points. The point $x=0$ is a critical point for all times, while each $x\neq 0$ is a critical point of $n(t,\cdot)$ exactly for one $t>0$, see Section~\ref{sec:critical}. In Figure~\ref{fig:p_prime_rec} the corresponding reconstruction $p'_r$ of $p'$ is shown.
As predicted by Theorem~\ref{thm:recon_p_d_prime}, we observe excellent agreement of the reconstruction with $p'$, which is to be expected for a highly resolved approximation.
If we add $2.5\%$ of uniformly distributed noise to the location of the critical points, i.e., the data is changed to $\{(t_i,\bar x_i(1+\delta\eta))\}$ with $\eta\sim U(-1/2,1/2)$ and $\delta=0.05$, the reconstructions deteriorate, but only in a minor fashion, see Figure~\ref{fig:p_prime_rec}. In fact, employing the smoothness of the initial datum, the influence of noise can be quantified by Taylor expansion. For sufficiently small noise, we obtain a linear rate of convergence in $\delta$ of the reconstruction error
\begin{align*}
\sup_{i} |p'(\bar x_i)-p_{r}'(\bar x_i (1+\delta\eta))|,
\end{align*}
showing well-posedness of the reconstruction problem if the initial data and its derivative are available.
The saturation for small noise is due to the errors in the numerical approximation, and it can be overcome by using a finer discretization to generate the simulated data.
\begin{figure}\centering
\includegraphics[width=.32\textwidth]{p_prime_data_2}
\includegraphics[width=.32\textwidth]{p_prime_data_6}
\includegraphics[width=.32\textwidth]{p_prime_data_9}
\caption{\label{fig:p_prime_data} Snapshots of the numerical approximation of $n(t,x)$ for $t\in\{2,6,9\}$ for the reconstruction of $p'$ (from left to right). The markers denote the corresponding critical points that are used in the reconstruction formula.}
\end{figure}
\begin{figure}\centering
\includegraphics[width=.32\textwidth]{p_prime_rec}
\includegraphics[width=.32\textwidth]{p_prime_rec_noise}
\includegraphics[width=.32\textwidth]{p_prime_rates}
\caption{\label{fig:p_prime_rec} Numerical reconstructions of $p'$ (red crosses) and the exact (unknown) function $p'$ (solid blue line) are shown. Left for critical points that are located within the accuracy of the numerical scheme; middle critical points with $2.5\%$ of uniform random noise. Right: Convergence rates for different noise levels $\delta=1/2^i$ for $i=4,\ldots, 15$ (crosses), the solid curve is proportional to $\delta$.}
\end{figure}
\paragraph{Reconstruction of $d'$ from critical points of $n$}
The setting is similar to the previous example. The difference is that we choose $p(x)=1$, $d(x)=1-x^2$, and simulate until $T=3$. A similar discussion as for the previous example applies. In particular, since $d'(x)n'_0(x)>0$, all $x\in (-1,1)$ will eventually be critical points, see Section~\ref{sec:critical}. Recording the critical values of the population density and the total population allows for the reconstruction of the derivative of the unknown parameter $d$ if the initial datum is given.
Adding relative noise to the critical points will deteriorate the reconstruction only slightly; again showing well-posedness of the reconstruction problem.
\begin{figure}\centering
\includegraphics[width=.32\textwidth]{d_prime_data_1}
\includegraphics[width=.32\textwidth]{d_prime_data_2}
\includegraphics[width=.32\textwidth]{d_prime_data_3}
\caption{\label{fig:d_prime_data} Snapshots of the numerical approximation of $n(t,x)$ for $t\in\{1,2,3\}$ for the reconstruction of $d'$ (from left to right). The markers denote the corresponding critical points that are used in the reconstruction formula.}
\end{figure}
\begin{figure}\centering
\includegraphics[width=.32\textwidth]{d_prime_rec}
\includegraphics[width=.32\textwidth]{d_prime_rec_noise}
\includegraphics[width=.32\textwidth]{d_prime_rates}
\caption{\label{fig:d_prime_rec} Numerical reconstructions of $d'$ (red crosses) and the exact (unknown) function $d'$ (solid blue line) are shown. Left for critical points that are located within the accuracy of the numerical scheme; middle critical points with $2.5\%$ of uniform random noise. Right: Convergence rates for different noise levels $\delta=1/2^i$ for $i=4,\ldots, 15$ (crosses), the solid curve is proportional to $\delta$.}
\end{figure}
\section{Conclusions and outlook}\label{sec:outlook}
We considered several inverse problems for a nonlinear structured population model, whose dynamics is governed by a nonlocal averaging process.
More precisely, we investigated the reconstruction of model parameters given either the total population size or the critical points of the population density. We demonstrated that in both cases the model possesses several symmetries that leave the measurements invariant, showing the limited information content of the total population size or the critical points as the only measurements. Ruling out these situations by appropriate assumptions on the unknown quantities, we were, however, able to obtain uniqueness results and, in some cases, explicit reconstruction formulas as well.\\
In order to model local interactions due to (small) mutations, the following generalization in the form of a parabolic system has been derived in \cite{Champagnat2006}:
\begin{align*}
\partial_t n(t,x) - \Delta n(t,x) &= [p(x) - \int d(x,y)n(t,y)\;dy]n(t,x),\\
n(0,x) &= n_0(x),
\end{align*}
where $d(x,y)$ allows one to model more general competition behaviour. In this case, we are dealing with a second-order parabolic equation, and the explicit formula \eqref{eq:explicit} is no longer available. Thus, different methods have to be applied; yet we expect that some of our results can be extended to this case, e.g., by using the heat kernel to obtain a fixed-point equation for $\rho$. In particular, in such a setup, using a perturbed forward operator as in Section~\ref{sec:perturbed} will yield a significant speed-up in numerical computations.
The investigation of such a model is, however, beyond the scope of this paper and is left for future study.
\section*{Acknowledgements}
JFP acknowledges support by the German Science Foundation DFG via EXC 1003 Cells in Motion Cluster of Excellence, M{\"u}nster. The authors would like to thank Barbara Kaltenbacher (Klagenfurt) for stimulating discussions.
\bibliographystyle{plain}
\section{Introduction}
The region of the QCD phase diagram
with low temperature and high chemical potential is still not well understood.
From the theoretical point of view, there are no accurate first principles
predictions
for the properties of QCD matter at high baryon densities. The numerical
lattice
simulation techniques that have been successfully applied to the study of the hot
quark gluon plasma (QGP) fail in the cold, baryon rich conditions due to the
sign problem. In spite
of the difficulties, significant progress has been accomplished both in the
theoretical description of moderate density nuclear matter \cite{eosnuc,tkhs}
and ultrahigh-density matter
\cite{gorda21,vuku20} but no reliable results exist in the crucial regime
between
approximately one and ten nuclear saturation densities. Several model
calculations
suggest that there is a low temperature deconfined phase of quarks and gluons,
the cold QGP, also called quark matter (QM). This phase might exist in
the core of dense stars, an idea that has been around already for some
decades \cite{decs,alf}. It is even possible that a whole star, not only its
core, be made of quark matter \cite{witten84}. This possibility was
explored in several works
\cite{qstars,jorge20,fran12,jorge11,pagliara2011} and will be further
explored in this work, which is an update of \cite{fran12}.
An early analysis of the existing observational data
presented in \cite{ozel2006} concluded that most of the QM
EOSs were too soft and therefore unable to support the existence of
neutron stars with a quark phase. Since then it was shown in several
works that a self-bound star, composed entirely of quark matter, could
explain a massive neutron star. In order to obtain a stiff enough quark matter
equation of state, several groups introduced repulsive interactions among
the quarks, mediated by the exchange of vector particles
\cite{fran12,vbag,baym19,deb20,oert20,pisa21,sylhz}, which can be
``effective massive gluons'' or ``effective vector mesons''. Interestingly,
most of these developments make use of a mean field approximation for the
vector field and arrive at a similar result, which is a
quadratic term in the baryon density present both in the pressure and energy density.
From the experimental side, during the last decade we have witnessed
remarkable advances in the
observation of neutron stars:
the discovery of extremely massive neutron stars \cite{demo10,antoniadis};
qualitative improvements in X-ray radius measurements
\cite{gui13,oz16,freire16,stein17,stein16,miller17,bogda16};
and the famous LIGO/Virgo detection of gravitational waves (GWs)
originating from the NS-NS merger GW170817 \cite{ligo17}.
Increasingly stringent constraints have been placed on the EOS of
NS matter.
An accurate measurement of a compact object using Shapiro
delay \cite{croma}
yielded $ 2.14^{+0.1}_{-0.09} \,\, M_{\odot}$ for the J0740+6620 pulsar.
It has been argued that a handful of compact stars may achieve masses greater
than that of PSR J0740+6620.
The events denoted as GW190814 \cite{ligo20} and GW190425 \cite{ligo20a}
suggest that the NS mass can be larger than
$ 2.5 \,\, M_{\odot}$.
Recent data (including a reliable determination of the radius) about the
pulsar PSR J0030+0451 were published in Refs.
\cite{nicer1,nicer2,nicer3,nicer4,nicer5}. Finally, very recently \cite{nicer6}
the NICER and XMM-Newton Collaborations presented a determination of the
mass and radius of the massive pulsar PSR J0740+6620.
Differences between candidate EOSs can have a significant
effect on the tidal interactions of neutron stars.
Recently, new constraints on the tidal deformability appeared \cite{ligo18}.
It has been realized \cite{annala} that
the two-solar-mass constraint forces the EOS to be relatively
stiff at low densities. At the same time, the constraint on
$\Lambda(1.4 \, M_{\odot})$ sets an upper limit for the stiffness,
constraining the EOS band in a complementary direction.
In this paper we will update the study presented in \cite{fran12} and check
whether the EOS introduced in \cite{davi} remains a viable option, satisfying
the most recent experimental constraints.
This text is organized as follows. In Sec. II we briefly review the EOS for
the cold QGP. In Sec. III we introduce the
stability conditions and discuss their consequences. In Sec. IV we present the
Tolman-Oppenheimer-Volkoff (TOV) equations for stellar
structure calculations and their numerical solutions. In Sec. V we discuss the
tidal deformability and in Sec. VI we present some comments and conclusions.
\section{The equation of state}
Following \cite{fran12},
we consider a quark star consisting of {\it u}, {\it d} and {\it s} quarks
with masses $m_{u}=$ $5$ MeV, $m_{d}=$ $7$ MeV, and $m_{s}=$ $100$ MeV.
The derivation of the EOS \cite{davi} used here starts with the assumption
that the gluon field can be
decomposed into low (``soft'') and high
(``hard'') momentum components. The expectation values of the soft fields were
identified with the gluon condensates of dimension two and of dimension
four. The former generates a dynamical mass, $m_G$, for the hard
gluons,
and the latter yields an analogue of the ``bag constant'' term
in the energy density and pressure. Given the large number of quark sources,
even in
the weak coupling regime, the hard gluon fields
are strong, the occupation numbers are large, and therefore these fields can be
approximated by classical color fields.
The effect of the condensates is to soften the EOS whereas the hard gluons
significantly
stiffen it, by increasing both the energy density and pressure. With these
approximations it was possible to derive \cite{davi} an analytical expression
for the EOS, called MFTQCD (Mean Field Theory of QCD). When adapting this
equation of state to the stellar medium, we assume, as usual, that quarks and
electrons are in chemical equilibrium maintained by the weak
processes \cite{farhi}.
Neutrinos are assumed to escape and do not contribute to the pressure and
energy density.
Moreover, we impose charge neutrality and baryon number conservation.
These requirements yield a set of four algebraic equations for the Fermi
momenta of each quark flavor ($u$, $d$ and $s$) and of the
electrons ($e$):
$$
{k_{u}}^{3}+{k_{d}}^{3}+{k_{s}}^{3}=3\pi^{2}\rho_{B},
$$
\begin{equation}
2{k_{u}}^{3}={k_{d}}^{3}+{k_{s}}^{3}+{k_{e}}^{3},
\label{densidadee}
\end{equation}
$$
{k_{d}}^{2}+{m_{d}}^{2}={k_{s}}^{2}+{m_{s}}^{2},
$$
$$
\sqrt{{k_{u}}^{2}+{m_{u}}^{2}}+\sqrt{{k_{e}}^{2}+{m_{e}}^{2}}
=\sqrt{{k_{s}}^{2}+{m_{s}}^{2}},
$$
for a fixed baryon density $\rho_{B}$. The energy density is given
by \cite{davi}
$$
\epsilon=\bigg({\frac{27g^{2}}{2{m_{G}}^{2}}}\bigg) \ {\rho_{B}}^{2} +
\mathcal{B}_{QCD}
$$
$$
+\sum_{i=u,d,s}3{\frac{\gamma_{Q}}{2{\pi}^{2}}} \Bigg\lbrace
{\frac{{k_{i}}^{3}\sqrt{{k_{i}}^{2}+{m_{i}}^{2}}}{4}} +
{\frac{{m_{i}}^{2}{k_{i}}\sqrt{{k_{i}}^{2}+{m_{i}}^{2}}}{8}} -
{\frac{{m_{i}}^{4}}{8}}\ln\Big[{k_{i}}+\sqrt{{k_{i}}^{2}+{m_{i}}^{2}} \ \Big]
+ {\frac{{m_{i}}^{4}}{16}}\ln({m_{i}}^{2}) \Bigg\rbrace
$$
\begin{equation}
+{\frac{\gamma_{e}}{2{\pi}^{2}}} \Bigg\lbrace
{\frac{{k_{e}}^{3}\sqrt{{k_{e}}^{2}+{m_{e}}^{2}}}{4}} +
{\frac{{m_{e}}^{2}{k_{e}}\sqrt{{k_{e}}^{2}+{m_{e}}^{2}}}{8}}
- {\frac{{m_{e}}^{4}}{8}}\ln\Big[{k_{e}}+\sqrt{{k_{e}}^{2}+{m_{e}}^{2}}
\ \Big] + {\frac{{m_{e}}^{4}}{16}}\ln({m_{e}}^{2}) \Bigg\rbrace,
\label{epsib}
\end{equation}
and the pressure is
$$
p=\bigg({\frac{27g^{2}}{2{m_{G}}^{2}}}\bigg) \ {\rho_{B}}^{2}
- \mathcal{B}_{QCD}
$$
$$
+\sum_{i=u,d,s}{\frac{\gamma_{Q}}{2{\pi}^{2}}} \Bigg\lbrace
{\frac{{k_{i}}^{3}\sqrt{{k_{i}}^{2}+{m_{i}}^{2}}}{4}} -
{\frac{3{m_{i}}^{2}{k_{i}}\sqrt{{k_{i}}^{2}+{m_{i}}^{2}}}{8}}
+ {\frac{3{m_{i}}^{4}}{8}}\ln\Big[
{k_{i}}+\sqrt{{k_{i}}^{2}+{m_{i}}^{2}} \ \Big]-
{\frac{3{m_{i}}^{4}}{16}}\ln({m_{i}}^{2}) \Bigg\rbrace
$$
\begin{equation}
+{\frac{\gamma_{e}}{6{\pi}^{2}}} \Bigg\lbrace
{\frac{{k_{e}}^{3}\sqrt{{k_{e}}^{2}+{m_{e}}^{2}}}{4}}
- {\frac{3{m_{e}}^{2}{k_{e}}\sqrt{{k_{e}}^{2}+{m_{e}}^{2}}}{8}}
+ {\frac{3{m_{e}}^{4}}{8}}\ln\Big[
{k_{e}}+\sqrt{{k_{e}}^{2}+{m_{e}}^{2}} \ \Big] -
{\frac{3{m_{e}}^{4}}{16}}\ln({m_{e}}^{2}) \Bigg\rbrace ,
\label{pressb}
\end{equation}
where $m_{e}=$ $0.5$ MeV is the electron mass, $m_{G}$ is the dynamical
gluon mass, and $g$ is the coupling constant $(\alpha_{s}=g^{2}/4\pi)$ in QCD.
Our analogue of the bag constant, called here $\mathcal{B}_{QCD}$, is given by
\begin{equation}
\mathcal{B}_{QCD}= \frac{9}{128} \, \phi_{0}^4 =
\langle \frac{1}{4} F^{a\mu\nu}F^{a}_{\mu\nu} \rangle,
\label{bag}
\end{equation}
where $\phi_0$ is an energy scale associated with the energy density of the
vacuum and with the gluon condensate \cite{davi}.
In (\ref{epsib}) and (\ref{pressb}) the summation over quark colors has
already been performed. Throughout this work we employ the natural
units $G=1$, $\hbar=1$, $c=1$. Comparing Eqs. (\ref{epsib}) and (\ref{pressb})
with the equivalent definitions of energy and pressure in the modified
bag model with postulated repulsive vector interactions (see Eqs. (14)
and (16) of Ref. \cite{bla20}), we observe a similarity. Both
EOSs have a term proportional to $\rho_B^2$. In \cite{davi} it was derived
from QCD whereas in \cite{bla20} it was postulated.
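For a fixed $\rho_B$, the system (\ref{densidadee}) is easily solved by elementary root finding: its third and fourth equations are equivalent to $\mu_d=\mu_s$ and $\mu_u+\mu_e=\mu_s$, with $\mu_i=\sqrt{k_i^2+m_i^2}$, so a nested bisection on $\mu_d$ and $\mu_e$ suffices. The sketch below is a minimal illustration (not the code used in this work); the bisection brackets are assumptions adequate for the densities considered here.

```python
import math

# Quark and electron masses (MeV), as quoted in the text.
M_U, M_D, M_S, M_E = 5.0, 7.0, 100.0, 0.5

def fermi_k(mu, m):
    """Fermi momentum for chemical potential mu; zero below threshold."""
    return math.sqrt(mu * mu - m * m) if mu > m else 0.0

def charge_excess(mu_d, mu_e):
    """Left minus right side of 2k_u^3 = k_d^3 + k_s^3 + k_e^3.
    Beta equilibrium fixes mu_s = mu_d and mu_u = mu_d - mu_e."""
    ku = fermi_k(mu_d - mu_e, M_U)
    kd = fermi_k(mu_d, M_D)
    ks = fermi_k(mu_d, M_S)
    ke = fermi_k(mu_e, M_E)
    return 2 * ku**3 - kd**3 - ks**3 - ke**3

def solve_mu_e(mu_d):
    """Bisect the electron chemical potential enforcing charge neutrality."""
    lo, hi = 0.0, mu_d
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if charge_excess(mu_d, mid) > 0.0:
            lo = mid          # matter still positively charged: raise mu_e
        else:
            hi = mid
    return 0.5 * (lo + hi)

def fermi_momenta(rho_b):
    """Return (k_u, k_d, k_s, k_e) in MeV for a baryon density rho_b in MeV^3."""
    def n_b(mu_d):
        mu_e = solve_mu_e(mu_d)
        return (fermi_k(mu_d - mu_e, M_U)**3 + fermi_k(mu_d, M_D)**3
                + fermi_k(mu_d, M_S)**3) / (3.0 * math.pi**2)
    lo, hi = M_S, 2000.0      # assumed bracket for mu_d (MeV)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if n_b(mid) < rho_b:
            lo = mid
        else:
            hi = mid
    mu_d = 0.5 * (lo + hi)
    mu_e = solve_mu_e(mu_d)
    return (fermi_k(mu_d - mu_e, M_U), fermi_k(mu_d, M_D),
            fermi_k(mu_d, M_S), fermi_k(mu_e, M_E))
```

With the momenta in hand, Eqs. (\ref{epsib}) and (\ref{pressb}) can be evaluated directly.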
\section{Stability conditions}
In this section we discuss the two stability conditions, which have to be
satisfied by stable strange quark matter. The first one is that the energy
per baryon of the deconfined phase
(for $P=0$ and $T=0$) is lower than the nonstrange infinite baryonic
matter defined in \cite{farhi,pagliara2011}. Following these works we
impose that:
\begin{equation}
E_{A} \equiv \frac{\epsilon}{\rho_{B}} \leq 934 \,\,\, \mbox{MeV}.
\label{estabilidade}
\end{equation}
This condition must hold at the zero-pressure point, and hence we can, from
(\ref{epsib}) and (\ref{pressb}), numerically
derive a relation between the bag constant $\mathcal{B}_{QCD}$ and the ratio
$\xi=g/m_{G}$. We solve $p=0$ in (\ref{pressb}), obtaining
$\rho_B=\rho_B (\mathcal{B}_{QCD}, \xi)$, which is then inserted into (\ref{epsib}).
The resulting expression is used to write the
condition $\epsilon (\mathcal{B}_{QCD}, \xi)/ \rho_B (\mathcal{B}_{QCD}, \xi) = 934$ MeV,
which defines one ``stability frontier''.
This last equation can be rewritten as $\xi = \xi (\mathcal{B}_{QCD})$, which is plotted in
Fig. 1 (solid line) and denoted the 3-flavor line.
Points in the
$(\mathcal{B}_{QCD},\xi)$ plane located on the right of the solid
line are discarded since they do not satisfy
(\ref{estabilidade}). The solid line, corresponding to the maximal value
of $E_{A} = 934$ MeV, determines the maximum value of
$\mathcal{B}_{QCD}$. The minimum value of
$\mathcal{B}_{QCD}$ is determined by the
second stability condition, which requires nonstrange
quark matter in the bulk to have an energy per baryon higher than the one
of nonstrange infinite baryonic matter. By imposing that
\begin{equation}
E_{A} \equiv \frac{\epsilon}{\rho_{B}} \geq 934 \,\,\, \mbox{MeV},
\label{estabilidade2}
\end{equation}
for two-flavor quark matter in its ground state, we ensure that atomic
nuclei do not dissolve into their constituent
quarks. The constraint (\ref{estabilidade2}) defines the dashed line in
the $(\mathcal{B}_{QCD},\xi)$ plane, denoted by the
2-flavor line in Fig. 1. Points located on the left of this line are
excluded because they do not satisfy (\ref{estabilidade2}).
The region between the two lines in Fig. 1 defines our stability window.
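The procedure can be illustrated numerically. The sketch below (not the calculation behind Fig. 1, which uses the full Eqs. (\ref{epsib}) and (\ref{pressb})) evaluates $E_A$ at the zero-pressure point in the simplifying limit of three massless quark flavors and no electrons, where $k_u=k_d=k_s=(\pi^{2}\rho_B)^{1/3}$ and the interaction sector reduces to the repulsive term $27\xi^{2}\rho_B^{2}/2$ plus $\mathcal{B}_{QCD}$:

```python
import math

HC3 = 197.327**3  # (hbar c)^3 in MeV^3 fm^3, converts MeV/fm^3 -> MeV^4

def eos_massless(rho, xi, b_mev4):
    """(energy density, pressure) in MeV^4 for three massless flavors,
    no electrons (a simplifying assumption, not the full MFTQCD EOS)."""
    a = 27.0 * xi**2 / 2.0              # repulsive term 27 g^2 / (2 m_G^2)
    k4 = (math.pi**2 * rho)**(4.0 / 3.0)  # common Fermi momentum to the 4th
    eps = a * rho**2 + b_mev4 + 9.0 * k4 / (4.0 * math.pi**2)
    p = a * rho**2 - b_mev4 + 3.0 * k4 / (4.0 * math.pi**2)
    return eps, p

def energy_per_baryon(xi, b_fm):
    """E/A (MeV) at the zero-pressure point, for xi in MeV^-1 and the
    bag constant in MeV/fm^3."""
    b = b_fm * HC3
    lo, hi = 1.0, 1.0e8   # bracket for rho_B in MeV^3
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if eos_massless(mid, xi, b)[1] < 0.0:
            lo = mid
        else:
            hi = mid
    rho0 = 0.5 * (lo + hi)
    return eos_massless(rho0, xi, b)[0] / rho0
```

For Set I of Table I this simplified estimate gives $E_A$ of order $900$ MeV at $p=0$, inside the window delimited by Eqs. (\ref{estabilidade}) and (\ref{estabilidade2}), and $E_A$ grows with $\mathcal{B}_{QCD}$, which is the origin of the two frontiers.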
\begin{figure}[h]
\begin{center}
\epsfig{file=star21xi_vs_B.pdf,width=116mm}
\caption{Values of $\xi = g/m_{G}$ as a function of $\mathcal{B}_{QCD}$
for different values of the energy per baryon. The two lines define the
stability region.}
\end{center}
\label{fig1}
\end{figure}
Having fixed the $\mathcal{B}_{QCD}$ and $\xi$ parameters, we go back
to (\ref{epsib}) and (\ref{pressb}) and, obtaining $\epsilon$ and $p$
for successive values of $\rho_B$,
we construct the EOS in the form $p = p(\epsilon)$, plotted in Fig. 2a.
In the figure, the different lines correspond to the three parameter sets
listed in Table I.
In this type of plot the slope is the squared speed of sound,
$c^2_s = dp/d\epsilon$, which, due to causality, cannot exceed unity. This
limit is shown by the full lines in the figure. In Fig. 2b we show the
corresponding values of the speed of sound. As can be seen, our model yields
a much stiffer EOS, with a squared speed of sound well above the conformal
value $c^2_s = 1/3$.
The dot-dashed line shows the EOS obtained from a recently
updated version of the MIT bag model \cite{zzl,german}, which reads
\begin{equation}
p(\epsilon) = \frac{(\epsilon - B_{eff})}{3} -
\frac{a_2^2}{12 \pi^2 a_4}
\left[1 + \sqrt{1 + \frac{16 \pi^2 a_4}{a_2^2} (\epsilon - B_{eff})} \right],
\label{mit20}
\end{equation}
where $B_{eff}^{1/4} = 142.52 $ MeV,
$a_2^{1/2} = 100 $ MeV and $a_4 = 0.535$. As can be seen, the MFTQCD EOS
generates a stronger pressure for larger values of the parameter
$\xi = g/m_{G}$. This combination of parameters
appears in the first term of (\ref{pressb}), which comes from the repulsive
interactions \cite{davi}.
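For comparison purposes, Eq. (\ref{mit20}) is easy to evaluate directly. The sketch below (in MeV$^4$ units, with the parameter values quoted above) also estimates its squared speed of sound $dp/d\epsilon$ by finite differences, which stays below the conformal value $1/3$, in contrast with the MFTQCD sets of Fig. 2b:

```python
import math

B_EFF = 142.52**4   # effective bag constant in MeV^4
A2 = 100.0**2       # a2 = (100 MeV)^2, in MeV^2
A4 = 0.535

def p_mit(eps):
    """Updated MIT-bag pressure (MeV^4) as a function of energy density."""
    x = eps - B_EFF
    return x / 3.0 - (A2**2 / (12.0 * math.pi**2 * A4)) * (
        1.0 + math.sqrt(1.0 + 16.0 * math.pi**2 * A4 * x / A2**2))

def cs2_mit(eps, h=1.0e4):
    """Squared speed of sound dp/d(eps) by central difference."""
    return (p_mit(eps + h) - p_mit(eps - h)) / (2.0 * h)
```

At the zero-pressure surface this EOS gives $c_s^2 = 1/5$, and $c_s^2 \to 1/3$ from below at large $\epsilon$.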
\begin{figure}[h]
\begin{tabular}{ccc}
\includegraphics[width=.50\linewidth]{star21p_vs_eps.pdf}& \,\,\, &
\includegraphics[width=.50\linewidth]{star21cs2_vs_eps.pdf} \\
(a) & \,\,\, & (b)
\end{tabular}
\caption{a) Equation of state obtained with MFTQCD. Set I, II and III
correspond to
the parameter combinations shown in Table I.
For comparison,
the dot-dashed line shows the MIT Bag Model EOS used by Parisi et al.
in Ref.~\cite{german}. b) Speed of sound for the same parameter choices.}
\label{fig2}
\end{figure}
\section{TOV equation, mass and radius}
In order to describe the structure of a static, non-rotating compact star, we
use the Tolman-Oppenheimer-Volkoff (TOV) equation for the pressure
$p(r)$ \cite{glend}:
\begin{equation}
\frac{dp}{dr}=-\frac{\epsilon(r) M (r)}{r^2} \left[ 1 +
\frac{p(r)}{\epsilon(r)} \right] \left[ 1 + \frac{4\pi r^3 p(r)}{M(r)}
\right] \times
\left[ 1 - \frac{2M(r)}{r} \right]^{-1}.
\label{tov}
\end{equation}
The enclosed mass $M(r)$ of
the compact star is given by the mass continuity equation:
\begin{equation}
\frac{dM(r)}{dr}=4\pi r^2\epsilon(r).
\label{mass}
\end{equation}
Equations (\ref{tov}) and (\ref{mass}) express the balance between the
gravitational force and the internal pressure acting on a shell of mass
$dM(r)$ and thickness $dr$.
We solve numerically (\ref{tov}) and (\ref{mass}) for $p(r)$ and $M(r)$,
to obtain the mass-radius diagram.
The pressure and the energy density in (\ref{tov}) and (\ref{mass}) are given
by the MFTQCD expressions (\ref{pressb}) and (\ref{epsib}), respectively.
We take the central energy density to be $\epsilon(r = 0)=\epsilon_{c}$ and
then integrate (\ref{tov}) and (\ref{mass}) from $r=0$ up to $r=R$,
where the pressure at the surface is zero: $p(r=R)=0$. In Fig. 3 we show the
mass-radius diagram for several values of $\mathcal{B}_{QCD}$ and $\xi$
respecting the stability conditions.
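The integration itself is standard. Below is a self-contained sketch (not the code used for Fig. 3) in geometrized units ($G=c=1$, lengths in km); to keep it short it uses the simple linear EOS $p=(\epsilon-4B)/3$ of the original massless MIT bag model, with an assumed illustrative value $B^{1/4}=145$ MeV, instead of the full MFTQCD expressions (\ref{epsib}) and (\ref{pressb}):

```python
import math

MEV_FM3_TO_KM2 = 1.3234e-6   # 1 MeV/fm^3 in km^-2 (G = c = 1)
MSUN_KM = 1.4766             # solar mass in km

B = 145.0**4 / 197.327**3    # illustrative bag constant in MeV/fm^3 (~57.5)

def eps_of_p(p):
    """Inversion of the massless MIT-bag EOS p = (eps - 4B)/3, in km^-2."""
    return 3.0 * p + 4.0 * B * MEV_FM3_TO_KM2

def tov_star(eps_c_mevfm3, dr=1.0e-3):
    """Integrate the TOV equations outward; return (M/Msun, R in km)."""
    def derivs(r, p, m):
        e = eps_of_p(p)
        # dp/dr = -(e + p)(m + 4 pi r^3 p) / (r (r - 2m)), dm/dr = 4 pi r^2 e
        dpdr = -(e + p) * (m + 4.0 * math.pi * r**3 * p) / (r * (r - 2.0 * m))
        return dpdr, 4.0 * math.pi * r**2 * e
    eps_c = eps_c_mevfm3 * MEV_FM3_TO_KM2
    p = (eps_c - 4.0 * B * MEV_FM3_TO_KM2) / 3.0
    r = dr
    m = 4.0 / 3.0 * math.pi * r**3 * eps_c
    while p > 0.0:
        # 4th-order Runge-Kutta step for the pair (p, m)
        k1p, k1m = derivs(r, p, m)
        k2p, k2m = derivs(r + dr/2, p + dr/2*k1p, m + dr/2*k1m)
        k3p, k3m = derivs(r + dr/2, p + dr/2*k2p, m + dr/2*k2m)
        k4p, k4m = derivs(r + dr, p + dr*k3p, m + dr*k3m)
        p += dr/6 * (k1p + 2*k2p + 2*k3p + k4p)
        m += dr/6 * (k1m + 2*k2m + 2*k3m + k4m)
        r += dr
    return m / MSUN_KM, r
```

The function returns the gravitational mass in solar masses and the radius in km; for the curves of Fig. 3 the linear EOS above must be replaced by the tabulated MFTQCD $p(\epsilon)$.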
\begin{figure}[h]
\vskip2mm
\begin{center}
\epsfig{file=star21m_vs_r.pdf,width=160mm}
\end{center}
\caption{Mass-radius diagram for combinations of $\mathcal{B}_{QCD}$ and
$\xi$ allowed by the stability conditions. Set I, II and III correspond to
the parameter combinations shown in Table I. The points represent the region
favored by the measurements reported by the NICER and XMM-Newton
Collaborations
\cite{nicer1,nicer2,nicer3,nicer4,nicer5,nicer6}. The horizontal line
shows the mass of the compact object observed in the event GW190814.}
\label{fig3}
\end{figure}
In the diagram, the points represent the region
favored by the measurements reported in Refs.
\cite{nicer1,nicer2,nicer3,nicer4,nicer5,nicer6}.
We can see that, with the parameters chosen in the indicated range, our EOS
is able to satisfy all the constraints shown in the mass-radius diagram.
\begin{table}[!htbp]
\caption{Parameter sets used in the figures.}
\vspace{0.3cm}
\centering
\begin{tabular}{ccc}
\hline
Set & $\mathcal{B}_{QCD}$ (MeV/fm$^{3}$) & $\xi$ (MeV$^{-1}$)
\\ [0.8ex]
\hline
\hline
I & 70 & 0.0011 \\
\hline
II & 60 & 0.0016 \\
\hline
III & 50 & 0.0022 \\
\hline
\end{tabular}
\label{quarstar}
\end{table}
\section{Tidal deformability}
An object that experiences the tidal force of another object will deform.
The susceptibility to deformation is often measured using dimensionless
quantities called Love numbers. The Love number is an interesting
quantity because it can be used to probe the dense-matter EOS using data from
double-neutron-star-merger events. In a binary system, the two
objects lose energy to gravitational waves, so their orbits are not stable:
they inevitably approach each other until they finally merge.
Close to the merger, the generated gravitational-wave signal is strong
enough to be detected by terrestrial instruments.
The tidal deformability parameter is given by \cite{tanja,tanja2,sabatucci}
\begin{equation}
\label{eq:tidal}
\Lambda = \frac{2}{3} k_2 C^{-5},
\end{equation}
where $C \equiv M/R$ is the compactness of the star and $k_2$ is the tidal
Love number, which is given by \cite{tanja,tanja2,sabatucci}
\begin{eqnarray}
k_{2} & = & \frac{8C^{5}}{5} \left(1-2C\right)^{2} \left[2+2C
\left(y-1\right)-y\right] \left\{ 2C \left[6-3y+3C
\left(5y-8\right)\right]\right. \nonumber\\
& &\quad +4C^{3}\left[13-11y+C\left(3y-2\right)+2C^{2}
\left(1+y\right)\right] \nonumber\\
& &\quad \left.+3\left(1-2C\right)^{2}\left[2-y+2C
\left(y-1\right)\right]
\ln\left(1-2C\right)\right\}^{-1}, \label{eq:love}
\end{eqnarray}
where
\begin{equation}\label{eq:y}
y = \frac{R\, \beta(R)}{H(R)} - \frac{4\pi R^3 \epsilon_{sup}}{M},
\quad \beta(r) = \frac{dH(r)}{dr}.
\end{equation}
In the above equation, the second term is a correction due to the fact that
in our model the energy density at the surface of the star,
$\epsilon_{sup} \equiv \epsilon(P = 0)$ is not zero \cite{tanja2}. The functions
$H$ and $\beta$ can be obtained by solving the following system of differential
equations:
\begin{eqnarray}
H'(r) & = & \beta(r), \\
\frac{d\beta}{dr} & = & 2
\left(1 - \frac{2M(r)}{r} \right)^{-1} H(r)
\left\{
-2\pi
\left[
5\epsilon(r) +9P(r)
+\frac{\epsilon(r)+P(r)}{dP/d\epsilon}
\right]
\right. \nonumber \\
&&\left. \quad
+\frac{3}{r^2} + 2\left(1 - \frac{2M(r)}{r} \right)^{-1}
\left( \frac{M(r)}{r^2} +4\pi r P(r) \right)^2
\right\} \nonumber \\
&&\quad+\frac{2\beta(r)}{r}
\left(1 - \frac{2M(r)}{r} \right)^{-1}
\left[
-1 + \frac{M(r)}{r} +2\pi r^2
\left( \epsilon(r) - P(r) \right)
\right].\label{eq:beta}
\end{eqnarray}
The Love number $k_2$ measures how easily the bulk
of the matter in a star is deformed.
The Love number also encodes information about the star’s degree of central
condensation.
Stars that are more centrally condensed will have a smaller response to a tidal
field, resulting in a smaller Love number.
The Love number decreases with increasing compactness, and from
Eq.~(\ref{eq:love}) it can be seen that $k_2$ vanishes at the compactness
of a black hole ($M/R = 0.5$) regardless of the EOS dependent quantity $y$.
The tidal Love numbers of strange quark matter stars
are qualitatively different from those of hadronic matter
stars \cite{tanja2,latt10,under20}. The latter decrease strongly for
small values of the compactness.
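Once $C$ and $y$ are known, Eqs. (\ref{eq:tidal}) and (\ref{eq:love}) are purely algebraic. A direct transcription (a sketch in which $y$ is supplied by hand instead of being obtained from Eqs. (\ref{eq:y})-(\ref{eq:beta})):

```python
import math

def love_k2(c, y):
    """Tidal Love number k2(C, y), transcribing Eq. (love) of the text."""
    num = (8.0 * c**5 / 5.0) * (1.0 - 2.0*c)**2 * (2.0 + 2.0*c*(y - 1.0) - y)
    den = (2.0*c*(6.0 - 3.0*y + 3.0*c*(5.0*y - 8.0))
           + 4.0*c**3*(13.0 - 11.0*y + c*(3.0*y - 2.0) + 2.0*c**2*(1.0 + y))
           + 3.0*(1.0 - 2.0*c)**2*(2.0 - y + 2.0*c*(y - 1.0))
             * math.log(1.0 - 2.0*c))
    return num / den

def tidal_lambda(c, y):
    """Dimensionless tidal deformability, Eq. (tidal)."""
    return (2.0 / 3.0) * love_k2(c, y) / c**5
```

In the Newtonian limit $C \to 0$ the expression reduces to $k_2 \to (2-y)/[2(3+y)]$, and $k_2$ vanishes at the black-hole compactness $C=0.5$, as stated above.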
In Fig.~4a we show the Love number $k_2$ as a function of the compactness
$C$.
We expect that, for any EOS, a very compact star is harder to deform than
a less compact one. This is what we see in the figures.
It is interesting to observe that the same variation of $\mathcal{B}_{QCD}$
and $\xi$ which produces visible effects in the equation of state and in the
mass-radius diagram does not lead to appreciable differences in the
$k_2$-$C$ plot. The curves shown in Fig.~4a are practically identical
to the curves
in the analogous plots shown in Refs.~\cite{tanja2}, \cite{latt10} and
\cite{under20}, which were
obtained with strange quark matter equations of state. This suggests that a wide
variety of quark matter EOSs lead to the same values of $k_2$. We also note
that our curves are close to the one obtained with the
ultrarelativistic EOS with the speed of sound $c^2_s =1/3$ \cite{under20}.
For completeness, in Fig.~4b we show the Love number $k_2$ as a function of
the variable $y$.
\begin{figure}[h]
\begin{tabular}{ccc}
\includegraphics[width=.50\linewidth]{star21k2_vs_c.pdf}& \,\,\, &
\includegraphics[width=.50\linewidth]{star21k2_vs_y.pdf} \\
(a) & \,\,\, & (b)
\end{tabular}
\caption{a) Tidal Love number $k_2$ as a function of the compactness. b) $k_2$
as a function of $y$. The different lines correspond to the three
parameter sets listed in Table I.}
\label{fig4}
\end{figure}
As pointed out in \cite{tanja2}, in contrast to the Love number,
the tidal deformability has a wide range of values, spanning roughly
an order of magnitude over the observed mass range of neutron stars
in binary systems. The updated version of the tidal deformability
estimate for a $1.4 \, M_{\odot}$
neutron star based on the gravitational-wave event GW170817 \cite{ligo18}
implies that
\begin{equation}
70 < \Lambda_{1.4} < 580.
\label{lambdamax}
\end{equation}
In Fig. 5 we show our results
for $\Lambda$ as a function of the star mass $M$.
As can be seen, the constraint (\ref{lambdamax}) can be satisfied.
We note, however, the visible tension between this constraint and those
shown in the mass-radius plot. The larger values of the radius required
to fit the NICER points seem to be somewhat difficult to reconcile with the
$\Lambda$ values required by the GW170817 estimates. Other calculations
performed with quark matter stars \cite{laura,zzl,wsz,zm,lz} or hybrid
stars \cite{sylhz,consta,juwu,theo,kmt,han18} arrive at similar results.
On the other hand,
calculations of the tidal deformability with purely hadronic equations of
state \cite{latt18,tan,latt21} seem to reproduce the experimental data
more easily.
\begin{figure}[h]
\begin{center}
\epsfig{file=star21lambda_vs_m.pdf,width=160mm}
\caption{The tidal deformability parameter $\Lambda$ as a function of the
star mass. The different lines correspond to the three parameter sets
listed in Table I.
The vertical bar is the empirical tidal deformability at
$M = 1.4 M_{\odot}$ inferred from the Bayesian analysis of the GW170817
data at the 90\% confidence level \cite{ligo18}.}
\end{center}
\label{fig5}
\end{figure}
\section{Conclusion}
In \cite{davi} a new equation of state for cold quark matter was presented.
It was soon applied to the study of neutron stars, treated as
self-bound strange quark stars. In this paper, almost ten years later,
we have updated the calculations published in \cite{fran12} and checked
whether that EOS can still account for the most recent astrophysical data.
We find that MFTQCD is still a viable option. However, we observe that the
parameter window is closing. A confirmation of the existing data and the
reduction of the error bars in the tidal deformability and in the NICER
neutron star radii data will be crucial to rule out strange quark star
models and reduce the freedom in the choice of the equation of state.
\begin{acknowledgments}
We are deeply grateful to J.\ Horvath and to G.\ Lugones for fruitful discussions.
This work was partially financed by the Brazilian funding agencies CAPES
and CNPq.
\end{acknowledgments}
\section{Examples}
In this section, we show several examples of our reduction scheme
to MIL.
Note that we use
$g_{\bf w}$, $\mathcal{G}$, $\widehat{\mathcal{G}}$, $\ell_{\mathrm{b}}$, $\ell_{\mathrm{sb}}$ defined in Section~\ref{sec:mil}.
\label{sec:applications}
\subsection{Top-1 Ranking Learning (TRL) problem}
\label{subsec:trl}
Learning to rank is a fundamental problem.
We consider the following situation.
A recommender (learner) has a set containing several items and wants to
recommend one of them to a target user.
Assume that we have a sample, that is, a sequence of pairs consisting of an item set and the selected item.
The goal is to learn a function which takes a set of items as input and outputs
the item that the target user will choose.
\paragraph{Problem setting:}
Let $\mathcal{X} \subseteq \Real^d$ be an instance space,
and $f: \mathcal{X} \rightarrow \Real$ be a target scoring function.
A set $A$ is a finite set of instances chosen from $\mathcal{X}$.
The learner receives a sequence of the sets of items
$S=(A_1, {\bf x}^*_1), \ldots, (A_n, {\bf x}^*_n)$, where each ${\bf x}^*_i \in A_i$ is the highest-valued item
determined by the target function $f$.
Each set in the sample is i.i.d drawn according to some unknown
distribution $\mathcal{D}$ over $2^{\mathcal{X}}$.
Suppose that the learner choose an item from an item set by a hypothesis
$h_{\bf w}: A \mapsto \arg\max_{{\bf x} \in A}\langle {\bf w}, {\bf x} \rangle$.\footnote{In this paper, we consider an $\arg\max$ function with a fixed tie-breaking rule.}
Let $\mathcal{H}^{\mathrm{TR}} = \{h_{\bf w} \mid \|{\bf w}\| \leq \Lambda\}$ be a hypothesis class.
Let $\ell: (y, \hat{y}) \mapsto I(y \neq \hat{y})$ be a zero-one loss function.
The goal of the learner is to find $h_{\bf w} \in \mathcal{H}^{\mathrm{TR}}$ with small expected misranking (or misidentification) risk
with respect to the target $f$. The generalization risk and empirical risk are formulated as:
\[
R^{\mathrm{TR}}_\mathcal{D}(h_{\bf w}) = \mathbb{E}_{A \sim \mathcal{D}}
\left[
\ell \left({\bf x}^*, h_{\bf w}(A) \right)
\right],
\quad \quad
\widehat{R}^{\mathrm{TR}}_{S}(h_{\bf w})= \frac{1}{n} \sum_{i=1}^n \ell \left({\bf x}_i^*, h_{\bf w}(A_i) \right),
\]
where ${\bf x}^* = \arg\max_{{\bf x} \in A}f({\bf x})$.
This TRL problem setting is similar to the MIL setting in that
each example is given as a set.
However, the task of top-1 ranking is to identify the target item,
and thus it is different from the classification task in MIL.
Below, we show that we can use our reduction scheme for the ERM in TRL.
\paragraph{Reduction:}
Let us define $B_{(A, {\bf x}^*)} = \{{\bf x} - {\bf x}^* \mid {\bf x} \in A \backslash {\bf x}^*\}$.
Here we define the following $\alpha$:
\[
\alpha(A, {\bf x}^*) = (B_{(A, {\bf x}^*)}, -1),
\]
and define $\beta: g_{\bf w} \mapsto h_{\bf w}$ such that ${\bf w}$ in $g_{\bf w}$ is equal to ${\bf w}$ in $h_{\bf w}$ (i.e., $\beta$ is onto).
Then, $\alpha$ and $\beta$ satisfy the condition~(\ref{align:reduce_condition}) as follows:
\begin{align*}
\ell \left({\bf x}^*, \beta(g_{\bf w})(A) \right)
&= I\left(\arg\max_{{\bf x} \in A}\langle {\bf w}, {\bf x} \rangle \neq {\bf x}^* \right)
=I\left(\langle {\bf w}, {\bf x}^* \rangle - \max_{{\bf x} \in A\backslash {\bf x}^*}\langle {\bf w}, {\bf x} \rangle \leq 0 \right)\\
&=I\left(-1 \times \left(\max_{{\bf x} \in A\backslash {\bf x}^*}\langle {\bf w}, ({\bf x} -{\bf x}^*) \rangle \right) \leq 0 \right)
= \ell_{\mathrm{b}}(-1, g_{\bf w}(B_{(A, {\bf x}^*)}))
\end{align*}
Therefore, $(\mathcal{H}^{\mathrm{TR}}, \ell)$ is ERM-reducible to $(\mathcal{G}, \ell_{\mathrm{b}})$.
\paragraph{ERM algorithm:}
By Proposition~\ref{prop:main} (i'),
ERM in TRL for $S$ can be reduced to ERM in MIL for the training bag sample
$S' = ((B_{(A_1, {\bf x}_1^*)}, -1), \ldots, (B_{(A_n, {\bf x}_n^*)}, -1))$.
If we use hinge loss in the reduced problem,
it can be solved by one-class MI-SVM.
Thus, as mentioned in Section~\ref{subsec:misvm},
we can obtain a global optimum in polynomial time.
\paragraph{Generalization bound:}
By Proposition~\ref{prop:main} (ii') and Theorem~\ref{theo:rdm_mil},
we obtain the following result.
\begin{coro}
The following bound holds with probability at least $1-\delta$ for all $g_{\bf w} \in \mathcal{G}$:
\[
R^{\mathrm{TR}}_{D}(h_{\bf w}) \leq \widehat{R}^{\mathrm{MI}}_{S', \ell_{\mathrm{sb}}}(g_{\bf w}) + 2\mathfrak{R}_{S'}(\widehat{\mathcal{G}})+ 3\sqrt{\frac{1/\delta}{2n}},
\]
where $h_{\bf w} = \beta(g_{\bf w})$ and
\[
\mathfrak{R}_{S'}(\widehat{\mathcal{G}}) = \min\left\{
O\left( \frac{2 LC\Lambda\log_2(4L^2 r^2\Lambda^2 n\sum_{i=1}^n| A_i | ) \ln L^2 n}{\sqrt{n}} \right),
O\left( \frac{2 LC \Lambda\sqrt{\eta \ln (|\bigcup_{i=1}^n A_i |)}}{\sqrt{n}} \right)
\right\}.
\]
\end{coro}
Note that the norm of the instances in the training bag sample is bounded as:
$\|{\bf x}' - {\bf x}\| \leq \|{\bf x}'\| + \|{\bf x}\| \leq 2C$.
\subsection{Multi-Class Learning (MCL) problem}
\label{subsec:mcl}
\paragraph{Problem setting:}
Let $\mathcal{X} \subseteq \Real^d$ be an instance space, and $\mathcal{Y}=\{1, \ldots, k\}$ be an output space.
The learner receives a sequence of labeled instances
$S = (({\bf x}_1,y_1) \ldots, ({\bf x}_n, y_n)) \in (\mathcal{X} \times \mathcal{Y})^n$,
where each instance is drawn i.i.d according to some unknown distribution $\mathcal{D}$.
The learner predicts the label of ${\bf x}$ by a hypothesis
$h_{{\mathbf W}} \in \mathcal{H}^{\mathrm{MC}} = \{{\bf x} \mapsto \arg\max_{y \in \mathcal{Y}} \langle {\bf w}_y, {\bf x} \rangle \mid \|{\mathbf W}\| \leq \Lambda\}$, where ${\mathbf W} = ({\bf w}_1, \ldots, {\bf w}_k)^\top$ and $\|{\mathbf W}\| = \sqrt{\sum_{j=1}^k\|{\bf w}_j\|^2}$.
Let $\ell: (y, \hat{y}) \mapsto I(y \neq \hat{y})$ be a zero-one loss function.
The generalization risk and the empirical risk are defined as:
\[
R^{\mathrm{MC}}_\mathcal{D}(h_{{\mathbf W}}) = \mathbb{E}_{({\bf x},y)\sim \mathcal{D}} \ell \left(y, h_{\mathbf W}({\bf x}) \right), \quad \widehat{R}^{\mathrm{MC}}_S(h_{{\mathbf W}}) = \frac{1}{n}\sum_{i=1}^n \ell \left(y_i, h_{\mathbf W}({\bf x}_i) \right).
\]
\paragraph{Reduction:}
Let us define the following $dk$-dimensional vector:
\begin{align}
\label{align:dk_z}
\boldsymbol{z}_{({\bf x},y)} = ({\bf 0} \cdots \underbrace{{\bf x}}_{y\mathrm{th~block}} \cdots {\bf 0}),
\end{align}
where ${\bf 0}$ is a $d$-dimensional zero vector.
Let $B_{\boldsymbol{z}_{({\bf x}, y)}} = \{\boldsymbol{z}_{({\bf x}, y')} - \boldsymbol{z}_{({\bf x}, y)}| \forall y' \neq y\}$.
Here we define the following $\alpha$:
\[
\alpha({\bf x}, y)= (B_{\boldsymbol{z}_{({\bf x}, y)}}, -1).
\]
Let ${\boldsymbol{\omega}}$ denote the $dk$-dimensional vector obtained by concatenating the vectors ${\bf w}'_1 \cdots {\bf w}'_k$.
We define $\beta: (g_{\boldsymbol{\omega}}) \mapsto h_{\mathbf W}$ such that ${\bf w}'_j$ in ${\boldsymbol{\omega}}$ corresponds to ${\bf w}_j$ in ${\mathbf W}$ (i.e., $\beta$ is onto).
Then, we can show that $\alpha$ and $\beta$ satisfy the condition~(\ref{align:reduce_condition}) as follows:
\begin{align*}
\ell(y, \beta(g_{\boldsymbol{\omega}})({\bf x}))
&= I \left(\langle {\bf w}_y, {\bf x} \rangle - \max_{y' \neq y}\langle {\bf w}_{y'}, {\bf x} \rangle \leq 0 \right)
= I \left(-1 \times \left(\max_{y' \neq y}\langle ({\bf w}_{y'} - {\bf w}_y), {\bf x}\rangle\right) \leq 0 \right)\\
&=I \left(-1 \times \left(\max_{y' \neq y}\langle {\boldsymbol{\omega}}, (\boldsymbol{z}_{({\bf x}, y')} - \boldsymbol{z}_{({\bf x}, y)})\rangle\right) \leq 0 \right)
= \ell_{\mathrm{b}}(-1, g_{\boldsymbol{\omega}}(B_{\boldsymbol{z}_{({\bf x}, y)}}))
\end{align*}
Therefore, $(\mathcal{H}^{\mathrm{MC}}, \ell)$ is ERM-reducible to $(\mathcal{G}, \ell_{\mathrm{b}})$.
\paragraph{ERM algorithm:}
By Proposition~\ref{prop:main} (i'),
ERM of MCL for $S$ can be reduced to MIL for the
bag training sample $S' = ((B_{\boldsymbol{z}_{({\bf x}_1, y_1)}}, -1), \ldots, (B_{\boldsymbol{z}_{({\bf x}_n, y_n)}}, -1))$.
Therefore, similarly to TRL,
if we use the hinge loss in the reduced problem,
we can obtain a global solution of ERM in MCL by one-class
MI-SVM in polynomial time.
\paragraph{Generalization bound:}
By Proposition~\ref{prop:main} (ii') and Theorem~\ref{theo:rdm_mil},
we have the following result.
\begin{coro}
The following bound holds with probability at least $1-\delta$ for all $g_{\boldsymbol{\omega}} \in \mathcal{G}$:
\[
R^{\mathrm{MC}}_{D}(h_{\bf w}) \leq \widehat{R}^{\mathrm{MI}}_{S', \ell_{\mathrm{sb}}}(g_{\boldsymbol{\omega}}) + 2\mathfrak{R}_{S'}(\widehat{\mathcal{G}})+ 3\sqrt{\frac{1/\delta}{2n}},
\]
where $h_{\mathbf W} = \beta(g_{\boldsymbol{\omega}})$ and
\[
\mathfrak{R}_{S'}(\widehat{\mathcal{G}}) = \min\left\{
O\left( \frac{\sqrt{2} LC\Lambda\log_2(2L^2 C^2\Lambda^2 n^2(k-1)) \ln L^2 n}{\sqrt{n}} \right),
O\left( \frac{\sqrt{2} LC \Lambda\sqrt{\eta \ln n(k-1)}}{\sqrt{n}} \right)
\right\}.
\]
\end{coro}
We used the fact that the number of instances in a bag $B_i$ is $(k-1)$,
and thus $|\bigcup_{i=1}^n B_i| \leq \sum_{i=1}^n|B_i| = n(k-1)$. Moreover, we used the fact $\|\boldsymbol{z}_{({\bf x}, y')} - \boldsymbol{z}_{({\bf x}, y)}\| \leq \sqrt{2}C$.
\subsection{Labeled and Complementarily labeled Learning (LCL) problem}
\label{subsec:lcl}
LCL is originally proposed by Ishida et al.~\cite{ishida2017learning}.
In the problem, some training instances are complementarily labeled (e.g., instance $x_i$ is NOT $y_i$).
We basically follow the problem setting and some assumptions provided by~\cite{ishida2017learning}.
\paragraph{Problem setting:}
Let $\mathcal{X} \subseteq \Real^d$ be an instance space, and $\mathcal{Y}=\{1, \ldots, k\}$ be an output space.
Let $\mathcal{D}$ be an unknown distribution over $\mathcal{X} \times \mathcal{Y}$.
We assume that the learner receives a sample $S$ drawn i.i.d.\ according to a distribution $\mathcal{D}'$
which gives the true label with unknown probability $\theta$ and a complementary label with probability $1-\theta$.
Moreover, we assume that the complementary label is chosen uniformly at random
(i.e., each complementary label is chosen with probability $\frac{1}{k-1}$).~\footnote{This assumption is proposed by Ishida et al.~\cite{ishida2017learning} as a reasonable situation in some practical tasks.}
More formally, we assume that the sample is given as
$S = (({\bf x}_1,y_1, \gamma_1), \ldots, ({\bf x}_n, y_n, \gamma_n))$,
drawn i.i.d.\ according to the distribution $\mathcal{D}'$ over $\mathcal{X} \times \mathcal{Y} \times \{\mathrm{False}, \mathrm{True}\}$,
where $\gamma_i=\mathrm{True}$ means that $y_i$ is a true label
and $\gamma_i=\mathrm{False}$ means that $y_i$ is a complementary label (i.e., it indicates that the label of ${\bf x}_i$ is NOT $y_i$).
For any $({\bf x}, y) \sim \mathcal{D}$, we have $\mathcal{D}'({\bf x}, y, \mathrm{True}) = \theta$ and $\mathcal{D}'({\bf x}, \bar{y}, \mathrm{False}) = \frac{1-\theta}{k-1}$ for any $\bar{y} \neq y$.
Other basic settings are the same as MCL problem.
The learner predicts the label of ${\bf x}$ by a hypothesis
$h_{\mathbf W} \in \mathcal{H}^{\mathrm{LC}} = \{{\bf x} \mapsto \arg\max_{y \in \mathcal{Y}} \langle {\bf w}_y, {\bf x} \rangle \mid \|{\mathbf W}\| \leq \Lambda\}$,
where we use the same ${\mathbf W}$ in the MCL case.
The final goal of the learner is to find $h_{{\mathbf W}} \in \mathcal{H}^{\mathrm{LC}}$ with small expected risk:
\[
R^{\mathrm{MC}}_\mathcal{D}(h_{\mathbf W}) = \mathbb{E}_{({\bf x},y)\sim \mathcal{D}} I \left(y \neq h_{\mathbf W}({\bf x}) \right)
\]
\par
However, it is difficult to directly minimize the corresponding empirical risk
using the complementarily labeled data.
Therefore, we consider the following expected risk.\footnote{Ishida et al.~\cite{ishida2017learning} used a surrogate risk different from ours. However, note that both works share the same final goal, minimizing $R^{\mathrm{MC}}_{\mathcal{D}}(h_{{\mathbf W}})$.}
\[
\label{align:|cl_origin}
R^{\mathrm{LC}}_{\mathcal{D}'}(h_{{\mathbf W}})=
\mathbb{E}_{({\bf x}, y, \gamma)\sim \mathcal{D}'}
\left[I\left( \gamma = (y \neq h_{{\mathbf W}}({\bf x})) \right) \right].
\]
This risk means the following: when $\gamma = \mathrm{True}$,
the learner does not incur risk if it predicts the true label.
When $\gamma = \mathrm{False}$, the learner does not incur risk as long as it
does not predict the assigned complementary label.
Thus, the risk measure is defined by a pair $(y, \gamma) \in (\mathcal{Y} \times \{\mathrm{False}, \mathrm{True}\})$.
We can show that this risk has a strong relationship with the original MCL risk as below:
\begin{lemm}
\label{lemm:comp_gen}
For any $h_{{\mathbf W}} \in \mathcal{H}^{\mathrm{LC}}$, $R^{\mathrm{MC}}_{\mathcal{D}}(h_{{\mathbf W}}) = \frac{k-1}{\theta(k -2)+1}R^{\mathrm{LC}}_{\mathcal{D}'}(h_{{\mathbf W}})$ holds.
\end{lemm}
The proof is in the supplementary materials.
Now, let $\ell((y,\gamma), \hat{y}) = I(\gamma=(y \neq \hat{y}))$. We can define the empirical risk as:
\[
\label{align:er_lcl_origin}
\widehat{R}^{\mathrm{LC}}_{S}(h_{{\mathbf W}}) = \frac{1}{n}\sum_{i=1}^n
\ell \left((y_i,\gamma_i), h_{\mathbf W}({\bf x}_i) \right).
\]
\paragraph{Reduction:}
Let us use $\boldsymbol{z}_{({\bf x}, y)}$ defined as Eq. (\ref{align:dk_z}).
We define the following $\alpha$:
\[
\alpha({\bf x}, (y,\gamma))= (B_{\boldsymbol{z}_{({\bf x}, y)}}, v_\gamma).
\]
where $v_\gamma = -1$ if $\gamma=\mathrm{True}$, and $v_\gamma=+1$ otherwise, so that, as in the MCL reduction, an ordinarily labeled example yields a bag labeled $-1$.
As in the MCL case,
let ${\boldsymbol{\omega}}$ denote the $dk$-dimensional vector obtained by concatenating the vectors ${\bf w}'_1, \ldots, {\bf w}'_k$,
and define $\beta: g_{\boldsymbol{\omega}} \mapsto h_{\mathbf W}$ such that ${\bf w}'_j$ in ${\boldsymbol{\omega}}$ corresponds to ${\bf w}_j$ in ${\mathbf W}$ (i.e., $\beta$ is onto).
Then, $\alpha$ and $\beta$ satisfy the condition~(\ref{align:reduce_condition}) as follows:
\begin{align*}
\ell((y, \gamma), \beta(g_{{\boldsymbol{\omega}}})({\bf x}))
&= I \left(v_\gamma \left(\max_{y' \neq y}\langle ({\bf w}_{y'} - {\bf w}_y), {\bf x}\rangle\right) \leq 0 \right)\\
&=I \left(v_\gamma \left(\max_{y' \neq y}\langle {\boldsymbol{\omega}}, (\boldsymbol{z}_{({\bf x}, y')} - \boldsymbol{z}_{({\bf x}, y)})\rangle\right) \leq 0 \right)
= \ell_{\mathrm{b}}( v_\gamma, g_{\boldsymbol{\omega}}(B_{\boldsymbol{z}_{({\bf x}, y)}}))
\end{align*}
Therefore, $(\mathcal{H}^{\mathrm{LC}}, \ell)$ is ERM-reducible to $(\mathcal{G}, \ell_{\mathrm{b}})$.
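The bag construction above can be checked numerically. The sketch below is our own illustration; since Eq.~(\ref{align:dk_z}) is not restated here, we assume the standard block embedding for $\boldsymbol{z}_{({\bf x}, y)}$, i.e., a $dk$-dimensional vector with ${\bf x}$ placed in the $y$-th block, so that $\langle {\boldsymbol{\omega}}, \boldsymbol{z}_{({\bf x}, y)}\rangle = \langle {\bf w}_y, {\bf x}\rangle$.

```python
import numpy as np

# Sketch of the bag construction used in the reduction (our own toy data).
# Assumed block embedding: z_{(x,y)} puts x in the y-th block of a dk-vector.
rng = np.random.default_rng(1)
d, k = 4, 5
W = rng.normal(size=(k, d))          # rows w_1 ... w_k
omega = W.reshape(-1)                # concatenation of w_1 ... w_k

def z(x, y):
    v = np.zeros(d * k)
    v[y * d:(y + 1) * d] = x
    return v

def bag(x, y):
    # B_{z_{(x,y)}} = { z_{(x,y')} - z_{(x,y)} : y' != y }
    return [z(x, yp) - z(x, y) for yp in range(k) if yp != y]

def g(omega, B):                     # MIL hypothesis: max over the bag
    return max(omega @ b for b in B)

x, y = rng.normal(size=d), 2
# <omega, z_{(x,y')} - z_{(x,y)}> = <w_{y'} - w_y, x> for every bag instance
for yp in range(k):
    if yp != y:
        assert np.isclose(omega @ (z(x, yp) - z(x, y)), (W[yp] - W[y]) @ x)
# g_omega(B) <= 0 exactly when w_y attains the maximum score, i.e. when the
# multi-class predictor h_W outputs y (ties have probability zero here)
assert (g(omega, bag(x, y)) <= 0) == (int(np.argmax(W @ x)) == y)
```

Because the identity holds pointwise for every bag instance, the bag score $g_{\boldsymbol{\omega}}(B_{\boldsymbol{z}_{({\bf x}, y)}})$ carries exactly the multi-class margin information used in the chain of equalities above.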
\paragraph{ERM algorithm:}
By Proposition~\ref{prop:main} (i'),
ERM of LCL for $S$ can be reduced to MIL for the
bag training sample $S' = ((B_{\boldsymbol{z}_{({\bf x}_1, y_1)}}, v_{\gamma_1}), \ldots, (B_{\boldsymbol{z}_{({\bf x}_n, y_n)}}, v_{\gamma_n}))$.
Therefore,
if we use the hinge loss as a convex upper bound of the binary zero-one loss,
ERM of LCL can be solved by MI-SVM.
Note that, unlike TRL and MCL, the problem here is a
binary-classification MIL problem (not a one-class one).
Therefore, the optimization problem is not convex.
However, as aforementioned in Section~\ref{subsec:misvm},
it is known that we can obtain an $\epsilon$-approximate local optimum of the problem
efficiently in practice.
It is important to note that, in the special case that $v_i=-1$ for all $i$
(i.e., the sample contains only complementarily labeled data),
the problem is solved by one-class MI-SVM, and thus we can obtain a global optimum in polynomial time.
\paragraph{Generalization bound:}
By Proposition~\ref{prop:main} (ii') and Theorem~\ref{theo:rdm_mil},
we have the following result.
\begin{coro}
The following bound holds with probability at least $1-\delta$ for all $g_{\boldsymbol{\omega}} \in \mathcal{G}$:
\[
R^{\mathrm{MC}}_{D}(h_{\mathbf W}) \leq \frac{k-1}{\theta(k -2)+1}
\left(
\widehat{R}^{\mathrm{MI}}_{S', \ell_{\mathrm{sb}}}(g_{\boldsymbol{\omega}})
+ 2\mathfrak{R}_{S'}(\widehat{\mathcal{G}}) + 3\sqrt{\frac{\ln(2/\delta)}{2n}}
\right),
\]
where $h_{\mathbf W} = \beta(g_{\boldsymbol{\omega}})$ and
\[
\mathfrak{R}_{S'}(\widehat{\mathcal{G}}) \leq \min\left\{
O\left( \frac{\sqrt{2} LC \Lambda\log_2\left(2 L^2C^2\Lambda^2 n^2(k-1)\right) \ln (\Lambda n)}{\sqrt{n}} \right),
O\left( \frac{\sqrt{2} LC \Lambda\sqrt{\eta \log \left(n(k-1)\right)}}{\sqrt{n}} \right)
\]
\end{coro}
The above follows from the same argument as in the MCL case.
\section{Related works}
\label{sec:comparison}
\paragraph{Other reduction techniques:}
There are various machine learning reduction schemes
(see, e.g., \cite{beygelzimer2015learning}).
General reduction schemes have been proposed in, e.g.,~\cite{pitt1990prediction,beygelzimer2005error}.
The important difference from the existing schemes is that
we focus on the reduction of ERM.
There are various applications of machine learning reductions, for example,
MCL with binary classification~\cite{james1998error, ramaswamy2014consistency},
cost-sensitive MCL with binary classification~\cite{beygelzimer2005weighted, beygelzimer2005error, langford2005sensitive},
and ranking with binary classification~\cite{balcan2008robust,ailon2010preference,agarwal2014surrogate}.
To the best of our knowledge, the reduction to MIL has not yet been discussed.
We revealed that MIL can be connected with various learning problems
via our reduction scheme.
\paragraph{Top-1 ranking learning (TRL):}
There are many kinds of problem setting for ranking learning tasks.
As in our problem setting,
various measures for ranking at the top have been proposed~\cite{rudin:colt06,agarwal11-infinite-push,NIPS2014_5222,menon2016bipartite,NIPS2012_4635}.
A top-1 ranking measure has already been discussed in~\cite{hidasi2018recurrent}.
However, their basic problem setting differs from our TRL problem:
they assume that the recommender receives i.i.d.-drawn positive and negative items as a sample,
and the loss is defined for each positive item together with a mini-batch-sampled set of negative items.
Moreover, they did not give a general form of the problem or a theoretical guarantee.
We are the first to give a general form of the problem setting together with
its theoretical analysis.
\paragraph{Multi-Class Learning (MCL):}
A basic result on generalization performance is that the
generalization error can be
upper bounded by a term that depends linearly
on the class size $k$~\cite{mohri2018foundations}.
Recently, Lei et al. showed an improved generalization
error bound
that depends only logarithmically on the class size $k$~\cite{lei19}.
Our generalization bound is competitive with this existing bound.
However, our theoretical result can be derived from a
generalization bound of MIL shown in 2012.
In other words, the best result could have been shown in 2012
if our reduction scheme had been used.
Thus, we can say that our idea is important for accelerating theoretical
analysis of machine learning problems.
Interestingly, the optimization problem of
MI-SVM for the sample $S'$ with the hinge loss is equivalent to
the optimization problem of a standard multi-class SVM~(see, e.g.,~\cite{mohri2018foundations}).
\paragraph{Labeled and Complementarily-labeled Learning (LCL):}
LCL was originally proposed by
Ishida et al.~\cite{ishida2017learning}.
They also provided the generalization risk bound in the case that the training sample
contains only complementarily labeled instances (i.e., in our case $\theta=0$).
They showed that, for a linear hypothesis class,
the following bound holds with probability at least $1-\delta$:
$R^{\mathrm{MC}}_{D}(h) \leq \widehat{R}(h) + k(k-1)L\sqrt{\nicefrac{R\Lambda}{n}} + (k-1)\sqrt{\nicefrac{8 \ln (2/\delta)}{n}}$.
Note that their empirical risk $\widehat{R}(h)$ for complementarily labeled instances
differs from the risk that we defined (see details in~\cite{ishida2017learning}).
Because of this difference, our generalization bound is not directly comparable with the
existing bound.
However, when the achieved empirical risk is
close to zero,
our risk bound
is $k$ times tighter than the existing bound.
Due to the definition of their empirical risk $\widehat{R}(h)$,
Ishida et al. carefully chose non-convex loss functions (symmetric losses such as the ramp loss)
and optimized them by a gradient-based algorithm in practice.
Ishida et al. also provided another LCL framework~\cite{ishida19a} with
arbitrary loss functions and arbitrary prediction models.
They provided another gradient-based optimization algorithm that
performs well in practice.
However, the generalization performance has not been discussed.
They mentioned that their algorithm suffers from overfitting
and showed a practical way to avoid overfitting.
Note that, to the best of our knowledge, there is no polynomial-time algorithm
even in the special case that the training sample contains only complementarily-labeled data.
Our theoretical result can be extended to any loss function that provides a
convex upper bound on the binary zero-one loss.
Moreover, in the case that the sample contains only complementarily labeled data,
if we use the hinge loss,
the derived algorithm (i.e., one-class MI-SVM) is the first polynomial-time algorithm.
\paragraph{Multiple-Instance Learning:}
Since Dietterich et al. first proposed MIL in~\cite{Dietterich:1997},
many researchers introduced various theories and applications
of
MIL~\cite{Gartner02multi-instancekernels,NIPS2002misvm,Sabato:2012:MLA,pmlr-v28-zhang13a,Doran:2014,CARBONNEAU2018329}.
\cite{suehiro2020multiple} revealed that a local-feature-based time-series classification
problem can be reduced to a MIL problem.
In the image processing area, it has long been known in practice that
such local-feature-based learning can be reduced to a MIL problem~\cite{CARBONNEAU2018329}.
Our results show that various learning problems can be reduced to MIL.
\section{Conclusion}
\label{sec:conclusion}
In this paper, we proposed a simple reduction scheme for empirical risk minimization (ERM)
that preserves empirical Rademacher complexity.
The reduction allows us to transfer known generalization bounds and algorithms for ERM to the target learning problems.
As an application of the reduction scheme, we showed that various learning problems can be
reduced to MIL.
The transferred ERM algorithms and generalization bounds
are novel theoretical results.
We gave a general form of the reduction scheme; however, some cases were not covered by our applications,
for example, the case that $\beta$ is not onto.
Moreover, as an extension, we can consider a relaxed ERM-reducibility condition,
such as $\ell(h(x), y) \leq \ell'(h'(x'), y')$ up to an $\epsilon$-gap, instead of (\ref{align:reduce_condition}).
These uncovered settings may be useful when an original target problem
is NP-hard. We leave them for future work.
\section{General form of the reduction scheme for ERM}
\label{sec:general_form}
Consider two problems, an original problem and the base problem.
The original problem has a hypothesis class $\mathcal{H} \subseteq \{h: \mathcal{X} \rightarrow \mathcal{Y}\}$,
and a loss function $\ell: \mathcal{Y} \times \mathcal{Y} \rightarrow \Real$.
The base problem has a hypothesis class $\mathcal{H}' \subseteq \{h': \mathcal{X}' \rightarrow \mathcal{Y}'\}$,
and a loss function $\ell': \mathcal{Y}' \times \mathcal{Y}' \rightarrow \Real$.
Since ERM for a fixed sample is parameterized by a pair of a hypothesis class and a loss function,
we give the following definition.
\begin{defi}[{\bf ERM-reducible}]
$(\mathcal{H}, \ell)$ is \emph{ERM-reducible} to $(\mathcal{H}', \ell')$
if there exist polynomial-time computable functions
$\alpha: \mathcal{X} \times \mathcal{Y} \rightarrow \mathcal{X}' \times \mathcal{Y}'$
and $\beta: \mathcal{H}' \rightarrow \mathcal{H}$
such that for any $(x, y) \in \mathcal{X} \times \mathcal{Y}$ and for any $h' \in \mathcal{H}'$,
\begin{align}
\label{align:reduce_condition}
\ell(y, h(x)) = \ell'(y', h'(x')),
\end{align}
where $(x', y') = \alpha(x, y)$ and $h=\beta(h')$.
\end{defi}
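As a minimal numeric illustration of condition~(\ref{align:reduce_condition}) (the toy pair of problems below is our own, not part of the formal development): take the original problem to be regression under squared loss, and the base problem to be the same regression with every label shifted by a constant $c$.

```python
import numpy as np

# Toy instantiation of ERM-reducibility (hypothetical example): the base
# problem is the original regression problem with labels shifted by c.
c = 3.0

def alpha(x, y):
    # instance-transformation: map (x, y) to the base problem
    return x, y - c

def beta(h_prime):
    # hypothesis-transformation: undo the shift
    return lambda x: h_prime(x) + c

def loss(y, y_hat):        # original squared loss
    return (y - y_hat) ** 2

def loss_prime(y, y_hat):  # base-problem loss (identical form here)
    return (y - y_hat) ** 2

rng = np.random.default_rng(0)
h_prime = lambda x: 2.0 * x      # an arbitrary base hypothesis
h = beta(h_prime)
for _ in range(100):
    x, y = rng.normal(), rng.normal()
    xp, yp = alpha(x, y)
    # the reducibility condition: loss(y, h(x)) == loss'(y', h'(x'))
    assert np.isclose(loss(y, h(x)), loss_prime(yp, h_prime(xp)))
```

Because the condition holds pointwise, the two empirical risks coincide term by term for any sample.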
\begin{prop}[{\bf Transferable results}]
\label{prop:main}
Suppose that $(\mathcal{H}, \ell)$ is ERM-reducible to $(\mathcal{H}', \ell')$ with $\alpha$ and $\beta$.
For any sample $S=(x_1, y_1), \ldots, (x_n, y_n) \in (\mathcal{X} \times \mathcal{Y})^n$,
the following (i) and (ii) hold:
\begin{enumerate}[(i)]
\item (In)equality of the ERMs:
\begin{align}
\label{align:erm_reduction}
\min_{h \in \mathcal{H}} \sum_{i=1}^n \ell(y_i, h(x_i))
\leq \min_{h \in \mathcal{H}_\beta} \sum_{i=1}^n \ell(y_i, h(x_i))
= \min_{h' \in \mathcal{H}'} \sum_{i=1}^n \ell'(y_i', h'(x_i')),
\end{align}
where $\mathcal{H}_\beta = \{\beta(h') \mid h' \in \mathcal{H}'\}$,
$(x_i', y_i') = \alpha(x_i, y_i)$ for $i = 1,\ldots,n$.
Moreover, for any minimizer $h' \in \arg\min_{h' \in \mathcal{H}'} \sum_{i=1}^n \ell'(y_i', h'(x_i'))$, we have $\beta(h') \in \arg\min_{h \in \mathcal{H}_\beta}\sum_{i=1}^n\ell(y_i, h(x_i))$.\\
\item Equality of the empirical Rademacher complexity:\\
Let $\widehat{\mathcal{H}}_\beta = \{(x, y) \mapsto \ell(y, h(x)) \mid h \in \mathcal{H}_\beta\}$,
and let $\widehat{\mathcal{H}}' = \{(x', y') \mapsto \ell'(y', h'(x')) \mid h' \in \mathcal{H}'\}$. We have
\begin{align}
\mathfrak{R}_S(\widehat{\mathcal{H}}_\beta) = \mathfrak{R}_{S'}(\widehat{\mathcal{H}}'),
\end{align}
where $S' = (x'_1, y'_1), \ldots, (x_n', y_n')$.
\end{enumerate}
For both (i) and (ii), in a special case that $\beta$ is onto
(i.e., $\mathcal{H} = \{\beta(h') \mid h' \in \mathcal{H}'\}$),
we additionally have the following:
\begin{enumerate}[(i')]
\item Equality of the ERMs:
\begin{align}
\min_{h \in \mathcal{H}} \sum_{i=1}^n \ell(y_i, h(x_i))
= \min_{h' \in \mathcal{H}'} \sum_{i=1}^n \ell'(y_i', h'(x_i')).
\end{align}
\item Equality of the empirical Rademacher complexity:\\
Let $\widehat{\mathcal{H}} = \{(x, y) \mapsto \ell(y, h(x)) \mid h \in \mathcal{H}\}$. We have
\begin{align}
\mathfrak{R}_S(\widehat{\mathcal{H}}) = \mathfrak{R}_{S'}(\widehat{\mathcal{H}}').
\end{align}
\end{enumerate}
\end{prop}
This proposition follows easily from the definition of ERM-reducibility.
Note that, in the proposed reduction scheme, we do not need to care about the generalization risk of the reduced (base) problem.
If we find an instance-transformation $\alpha$ and
a hypothesis-transformation $\beta$ satisfying the simple condition~(\ref{align:reduce_condition}),
then we only have to find an $h' \in \mathcal{H}'$ that minimizes the empirical risk of the reduced problem.
We can guarantee the generalization bound of the original problem
because the empirical Rademacher complexity is preserved.
Moreover, we can restore the original hypothesis $h$ as $\beta(h')$.
\section{Introduction}
Reduction allows us to transfer known results (e.g., algorithms, generalization bounds) from a
reduced problem that has already been analyzed to an original problem.
In previous research, it has been shown that various learning
problems can be reduced to other learning problems by various reduction schemes (e.g.,~\cite{james1998error,beygelzimer2005weighted,langford2005sensitive,balcan2008robust,langford2012predicting}).
We propose a general reduction scheme for empirical risk minimization (ERM) tasks.
In contrast to typical machine learning reductions,
we do not reduce a learning problem itself but reduce the ERM of the learning problem.
More precisely, our reduction scheme does not care about the generalization error of the hypothesis
obtained in the reduced problem.
However, the reduction scheme preserves the empirical Rademacher complexity~\cite{Bartlett:2003:RGC},
and thus our reduction allows us to transfer the generalization risk bounds to the original learning problem
in a straightforward way.
We define the simple reducible condition and show the transferable results from the original problem to the reducible problem.
Whereas a good definition of the instance-transformation is key in a lot of reductions (see, e.g., Example 2.9 of~\cite{mohri2018foundations}),
the key to our reduction scheme is that we define not only instance-transformation function $\alpha$ but also hypothesis-transformation $\beta$.
Thanks to the existence of $\beta$,
the obtained hypothesis on the reduced ERM can be restored to the original hypothesis.
As an application, we introduce the reduction to multiple-instance learning (MIL) problem.
Examples of problems reducible to the MIL problem include
the top-1 ranking learning problem (TRL; we are the first to design a general formulation of this problem),
the multi-class learning problem (MCL; see, e.g.,~\cite{mohri2018foundations}),
and the labeled and complementarily labeled learning problem (LCL)~\cite{ishida2017learning}.
Although MCL and MIL are classical machine learning tasks,
interestingly, the connection between them has not yet been discussed.
We show that our reduction allows us to easily transfer some theoretical results from MIL to the examples.
Thanks to the existing theoretical results on MIL (e.g.,~\cite{Sabato:2012:MLA,suehiro2020multiple}),
some of the generalization bounds derived are incomparable or competitive with the
state-of-the-art results.
Moreover, in a special case of LCL,
the algorithm derived is the first polynomial-time algorithm.
The impact of our results is emphasized
by the simplicity of the scheme and the derivation process.
The contributions of the paper are as follows:
\begin{itemize}
\item We propose a simple reduction scheme for the ERM problem.
A reduction specialized to ERM is a new perspective, different from typical machine learning reductions.
\item Using the reduction scheme, we reveal that various learning problems can be reduced to the MIL problem; these connections had not been discussed before.
\item Despite the simplicity of the derivation, we obtain novel theoretical results
for several of the reducible learning problems. Our reduction scheme has enormous potential for accelerating theoretical analysis in the machine learning field.
\item Remarkably, in a special case of LCL, the algorithm derived is the first polynomial-time algorithm.
\end{itemize}
The paper outline is as follows. In Section~\ref{sec:prelim}, we introduce some preliminary definitions.
We give a general form of our reduction scheme in Section~\ref{sec:general_form}.
We introduce a problem setting of MIL and the known theoretical results in Section~\ref{sec:mil}.
In Section~\ref{sec:applications}, we introduce various examples of the reducible problems.
The related works and the comparison with our results are stated in Section~\ref{sec:comparison}.
We summarize and conclude the paper in Section~\ref{sec:conclusion}.
\section{Multiple-Instance Learning (MIL) problem}
\label{sec:mil}
In this section, we introduce the Multiple-Instance Learning problem,
which can serve as a base problem for the several original
problems considered in this paper.
\subsection{Problem setting}
Let $\mathcal{X} \subseteq \Real^d$ be an instance space.
A bag $B$ is a finite set of instances chosen from $\mathcal{X}$.
The learner receives a sequence of (binary) labeled bags
$S = ((B_1, y_1), \ldots, (B_n, y_n)) \in (2^{\mathcal{X}} \times \{-1, 1\})^n$ called a training bag sample, where
each labeled bag is independently drawn according to some unknown
distribution $\mathcal{D}$ over $2^{\mathcal{X}} \times \{-1, 1\}$.
In the MIL problem,
the following hypothesis class is commonly used in practice:
\[
\mathcal{G} = \{g_{\bf w}: B \mapsto \max_{{\bf x} \in B} \langle {\bf w}, {\bf x} \rangle : \|{\bf w}\| \leq \Lambda\},
\]
where $\|\cdot\|$ denotes the 2-norm.
Let $\ell_{\mathrm{b}}: (y, \hat{y}) \mapsto I(y\hat{y} \leq 0)$ be a zero-one loss function for binary classification.
The goal of the learner is to find a hypothesis $g \in \mathcal{G}$
with small generalization risk. The generalization risk and the empirical risk are formulated as:
\[
R^{\mathrm{MI}}_{\mathcal{D}}(g_{\bf w}) = \mathbb{E}_{(B,y)\sim \mathcal{D}}\left[\ell_{\mathrm{b}} \left(y, g_{\bf w}(B) \right)\right],
\quad \quad
\widehat{R}^{\mathrm{MI}}_{S}(g_{\bf w}) = \frac{1}{n}\sum_{i=1}^n
\ell_{\mathrm{b}} \left(y_i, g_{\bf w}(B_i) \right).
\]
In practice, a convex surrogate loss (e.g., hinge loss) is usually used for ERM.
We denote by $\ell_{\mathrm{sb}}$ the surrogate loss of $\ell_{\mathrm{b}}$,
and by
$\widehat{R}^{\mathrm{MI}}_{S, \ell_{\mathrm{sb}}}(g_{\bf w}) = \frac{1}{n}\sum_{i=1}^n \ell_{\mathrm{sb}} \left(y_i, g_{\bf w}(B_i) \right)$
the empirical risk defined using $\ell_{\mathrm{sb}}$.
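The quantities above can be sketched in a few lines (the data and names here are our own toy choices, not from the paper); the final assertion reflects that the hinge surrogate upper bounds the zero-one loss pointwise.

```python
import numpy as np

# Minimal sketch of the MIL risks above (toy data and names are our own).
rng = np.random.default_rng(0)
Lambda = 1.0
w = rng.normal(size=3)
w *= Lambda / np.linalg.norm(w)            # enforce ||w|| <= Lambda

def g(w, B):                               # g_w(B) = max_{x in B} <w, x>
    return max(w @ x for x in B)

def zero_one(y, yhat):                     # ell_b(y, yhat) = I(y*yhat <= 0)
    return float(y * yhat <= 0)

def hinge(y, yhat):                        # a convex surrogate ell_sb
    return max(0.0, 1.0 - y * yhat)

bags = [rng.normal(size=(int(rng.integers(2, 6)), 3)) for _ in range(50)]
ys = rng.choice([-1, 1], size=50)

emp_risk = np.mean([zero_one(y, g(w, B)) for B, y in zip(bags, ys)])
emp_surrogate = np.mean([hinge(y, g(w, B)) for B, y in zip(bags, ys)])
assert emp_risk <= emp_surrogate           # hinge upper bounds zero-one
```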
\subsection{ERM algorithm in MIL}
\label{subsec:misvm}
Multiple-Instance SVM (MI-SVM)~\cite{NIPS2002misvm}
is a popular algorithm for the MIL problem.
MI-SVM employs $\mathcal{G}$ and usually uses the hinge loss as $\ell_{\mathrm{sb}}$.
The ERM problem with MI-SVM
is formulated as a non-convex optimization problem (see supplementary materials).
There are various algorithms that solve this non-convex optimization problem in practice (e.g., ~\cite{cheung2006regularization,astorino2019svm}).
For example, it is known that an $\epsilon$-approximate local optimum can be obtained
efficiently by Difference of Convex (DC) programming~\cite{astorino2019svm}.
Note that, in a special case that the given labels are all negative (i.e., one-class situation),
the optimization problem becomes convex.
That is, the optimization problem of one-class MI-SVM can be solved in polynomial time
by a QP solver.
\subsection{Generalization bound for MIL}
\label{subsec:gen_mil}
For $\mathcal{G}$, two incomparable Rademacher complexity bounds
have been provided by~\cite{Sabato:2012:MLA,suehiro2020multiple}.
\begin{theo}[Rademacher complexity bounds of $\mathcal{G}$ with $L$-Lipschitz loss~\cite{Sabato:2012:MLA},\cite{suehiro2020multiple}]
\label{theo:rdm_mil}
Let $\mathcal{G} = \{ B \mapsto \max_{{\bf x} \in B} \langle {\bf w}, {\bf x} \rangle\ : \|{\bf w}\| \leq \Lambda \}$ be a hypothesis class,
and suppose that $\|{\bf x}\| \leq C$ for any ${\bf x} \in \mathcal{X}$.
Let $\ell_{\mathrm{sb}}$ be an $L$-Lipschitz loss function which provides
the convex upper bound on the binary zero-one loss.
Let $\widehat{\mathcal{G}} = \{(B, y) \mapsto \ell_{\mathrm{sb}}(y, g_{\bf w}(B)) \mid g_{\bf w} \in \mathcal{G}\}$.
The following bounds hold:
\[
\mathfrak{R}_{S}(\widehat{\mathcal{G}}) \leq
\min\left\{
O \left(\frac{LC\Lambda \log_{2}
\left(4L^2C^2\Lambda^2 n\sum_{i=1}^n|B_i| \right)\ln(L^2 n)}{\sqrt{n}} \right),
O\left(\frac{LC\Lambda \sqrt{\eta \ln |\bigcup_{i=1}^n B_i|}}{\sqrt{n}} \right)
\right\},
\]
where $\eta$ is a VC-dimension-based parameter depending on the
distribution of the instances appearing in $S$.
\end{theo}
\section{Preliminary}
\label{sec:prelim}
We denote by $I(s)$ the indicator function of the event $s$.
We introduce the Rademacher complexity,
which is used to bound the generalization risk.
\begin{defi}
{\rm [The Rademacher complexity~\cite{Bartlett:2003:RGC}]}\\
Given a sample $S=(x_1,\dots,x_n) \in \mathcal{X}^n$,
the empirical Rademacher complexity $\mathfrak{R}_S(\mathcal{G})$ of a class $\mathcal{G} \subset
\{g: \mathcal{X} \to \Real\}$ w.r.t.~$S$
is defined as
$\mathfrak{R}_S(\mathcal{G})=\frac{1}{n}\mathbb{E}_{\boldsymbol{\sigma}}\left[
\sup_{g \in \mathcal{G}}\sum_{i=1}^n \sigma_i g(x_i)
\right]$,
where $\boldsymbol{\sigma} \in \{-1,1\}^n$ and each $\sigma_i$ is an independent
uniform random variable in $\{-1,1\}$.
\end{defi}
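For a finite class, the definition can be estimated directly by Monte Carlo sampling of $\boldsymbol{\sigma}$; the sketch below (a toy class of threshold functions of our own choosing) only illustrates the definition:

```python
import numpy as np

# Monte Carlo estimate of the empirical Rademacher complexity of a small
# finite class (toy threshold functions of our own choosing).
rng = np.random.default_rng(0)
n = 200
xs = rng.normal(size=n)
G = [lambda x, t=t: np.sign(x - t) for t in (-1.0, 0.0, 1.0)]

def rademacher_estimate(G, xs, trials=2000):
    vals = np.stack([g(xs) for g in G])      # |G| x n matrix of g(x_i)
    total = 0.0
    for _ in range(trials):
        sigma = rng.choice([-1.0, 1.0], size=len(xs))  # Rademacher variables
        total += np.max(vals @ sigma)        # sup over the (finite) class
    return total / (trials * len(xs))

est = rademacher_estimate(G, xs)
assert 0.0 <= est <= 1.0                     # bounded (+-1) functions
```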
\paragraph{Generalization error bound~\cite{mohri2018foundations}}
Let $\mathcal{H}$ be a set of real-valued functions and $S$ be a training sample of size $n$
which is independently drawn according to some unknown distribution $\mathcal{D}$.
The following bound holds with probability at least $1-\delta$
for all $h \in \mathcal{H}$:
\[
\label{align:genbound}
R_{D}(h) \leq \widehat{R}_{S}(h) + 2\mathfrak{R}_S(\mathcal{H}) + 3\sqrt{\frac{\ln(2/\delta)}{2n}},
\]
where we denote the generalization risk of $h$ by $R_{D}(h)$,
and denote the empirical risk of $h$ for sample $S$ by $\widehat{R}_{S}(h)$.
\section{Multiple-Instance SVM (MI-SVM)}
The optimization problem of MI-SVM is formulated as:
\begin{align*}
\min_{{\bf w},\boldsymbol{\xi}}
&\frac{1}{2}\|{\bf w}\|_2^2 + C\sum_{i=1}^n\xi_i \\
\text{subject to:~} &\forall i \in [n],~ y_i \max_{{\bf x} \in B_i} \langle {\bf w}, {\bf x} \rangle \geq 1 -\xi_{i},~ \xi_i \geq 0,
\end{align*}
where $C$ is a constant hyper-parameter.
Note that, in the one-class setting (i.e., $y_i=-1$ for all $i$),
the optimization problem of MI-SVM becomes a convex programming problem.
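In the one-class case the hinge-loss objective is the convex function $\frac{1}{2}\|{\bf w}\|_2^2 + C\sum_{i=1}^n \max(0,\, 1 + \max_{{\bf x} \in B_i}\langle {\bf w}, {\bf x}\rangle)$. The sketch below (toy data of our own) minimizes it by plain subgradient descent rather than by handing the equivalent QP to a solver:

```python
import numpy as np

# One-class MI-SVM sketch (toy data of our own): all bag labels are -1, so the
# hinge-loss objective
#   0.5*||w||^2 + C * sum_i max(0, 1 + max_{x in B_i} <w, x>)
# is convex.  Plain subgradient descent suffices for an illustration.
rng = np.random.default_rng(0)
d, C = 3, 1.0
bags = [rng.normal(loc=-2.0, size=(4, d)) for _ in range(30)]

def objective(w):
    return 0.5 * w @ w + C * sum(
        max(0.0, 1.0 + max(x @ w for x in B)) for B in bags)

w = np.zeros(d)
best = objective(w)
for t in range(1, 2001):
    grad = w.copy()                      # gradient of the regularizer
    for B in bags:
        scores = B @ w
        j = int(np.argmax(scores))       # instance attaining the bag max
        if 1.0 + scores[j] > 0.0:
            grad += C * B[j]             # subgradient of the active hinge
    w -= grad / (np.sqrt(t) * 10.0)      # diminishing step size
    best = min(best, objective(w))
assert best < objective(np.zeros(d))     # strictly better than w = 0
```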
\section{Proof of Lemma~\ref{lemm:comp_gen}}
\begin{proof}
By the assumption on $\mathcal{D}'$, the expected risk $R^{\mathrm{LC}}_{\mathcal{D}'}(h_{{\mathbf W}})$ can be represented using $\mathcal{D}$, $p$, and $\theta$
as follows:
\begin{align}
\label{align:rlcd}
R^{\mathrm{LC}}_{\mathcal{D}'}(h_{{\mathbf W}}) = \mathbb{E}_{({\bf x}, y)\sim \mathcal{D}}
\left[\theta I\left(y \neq h_{{\mathbf W}}({\bf x}) \right) +
(1- \theta)\sum_{\bar{y}\neq y}\frac{1}{k-1}
I\left(\bar{y}=h_{{\mathbf W}}({\bf x}) \right)
\right]
\end{align}
Let us denote by $\rho_1=I \left(y \neq h_{\mathbf W}({\bf x}) \right)$ the integrand in $R^{\mathrm{MC}}_{\mathcal{D}}(h_{{\mathbf W}})$ and by
$\rho_2=\theta I\left(y \neq h_{{\mathbf W}}({\bf x}) \right) + (1- \theta)\sum_{\bar{y}\neq y}\frac{1}{k-1} I\left(\bar{y}=h_{{\mathbf W}}({\bf x}) \right)$ the integrand in $R^{\mathrm{LC}}_{\mathcal{D}'}(h_{{\mathbf W}})$.
For any $h_{{\mathbf W}} \in \mathcal{H}$ and a fixed $({\bf x}, y)$, we consider the following two cases:
(i) If $h_{{\mathbf W}}({\bf x}) = y$: $\rho_1=0$ and $\rho_2=0$, so there is no gap.
(ii) If $h_{{\mathbf W}}({\bf x}) \neq y$: the first term of $\rho_2$
is $\theta$ and the second term equals $(1-\theta)/(k-1)$,
because there exists a unique $\hat{y} \neq y$ which satisfies $\hat{y} = h_{{\mathbf W}}({\bf x})$.
Therefore, $\rho_2 = \theta + \frac{1-\theta}{k-1} = \frac{\theta(k-2)+1}{k-1}$, while $\rho_1 = 1$.
In both cases $\rho_1 = \frac{k-1}{\theta(k -2)+1}\rho_2$, and taking expectations over $({\bf x}, y) \sim \mathcal{D}$ yields $R^{\mathrm{MC}}_{\mathcal{D}}(h_{{\mathbf W}}) = \frac{k-1}{\theta(k -2)+1}R^{\mathrm{LC}}_{\mathcal{D}'}(h_{{\mathbf W}})$.
\end{proof}
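The case analysis in the proof can be checked numerically: for each fixed $({\bf x}, y)$, the two integrands satisfy $\rho_1 = \frac{k-1}{\theta(k-2)+1}\rho_2$. A small sketch (the test values are our own):

```python
import numpy as np

# Pointwise check of the proof's case analysis (toy values are ours):
#   rho1 = I(y != h(x))
#   rho2 = theta*I(y != h(x)) + (1-theta)/(k-1) * sum_{ybar != y} I(ybar = h(x))
# and rho1 = (k-1)/(theta*(k-2)+1) * rho2 in both cases.
for k in (3, 5, 10):
    for theta in (0.0, 0.3, 1.0):
        c = (k - 1) / (theta * (k - 2) + 1)
        for y, h in [(0, 0), (0, 1)]:   # case (i): correct, case (ii): wrong
            rho1 = float(y != h)
            rho2 = theta * float(y != h) + (1 - theta) / (k - 1) * sum(
                float(ybar == h) for ybar in range(k) if ybar != y)
            assert np.isclose(rho1, c * rho2)
```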
\label{section.introduction}
\begin{figure*}
\centering
\includegraphics[width=0.49\textwidth]{AA14861f01a.ps}
\includegraphics[width=0.49\textwidth]{AA14861f01b.ps}
\caption{HRC-I/{\em Chandra} images centred on $\sigma$~Ori~AB.
Approximate sizes are 30$\times$30\,arcmin$^2$ ({\em left}; note the borders of
the field of view in the corners) and 4$\times$4\,arcmin$^2$ ({\em right};
see also Fig.~4 in Caballero 2007b).
North is up, east is left.}
\label{xfig_hrc-i}
\end{figure*}
The Trapezium-like system \object{$\sigma$~Ori}, the fourth brightest ``star''
in the Orion Belt, illuminates the \object{Horsehead Nebula} and injects energy
into its homonymous cluster, \object{$\sigma$~Orionis} (Garrison 1967; Wolk
1996; B\'ejar et~al. 1999).
Its age ($\tau \sim$ 3\,Ma -- Zapatero Osorio et~al. 2002; Sherry et~al. 2004),
relative closeness ($d \sim$ 385\,pc -- Caballero 2008b; Mayne \& Naylor 2008),
low extinction (0.04\,mag $< E(B-V) <$ 0.09\,mag -- B\'ejar et~al. 2004;
Sherry et~al. 2008), and high spatial density (Caballero 2008a) make the cluster
an ideal site to look for and characterise substellar objects (Zapatero Osorio
et~al. 2000; B\'ejar et~al. 2001; Caballero et~al. 2007; Bihain et~al. 2009).
The cluster is also investigated, for example, to study circumstellar discs
based on optical spectroscopy (Kenyon et~al. 2005; Sacco et~al. 2008; Gatti
et~al. 2008) or mid-infrared photometry (Oliveira et~al. 2006; Caballero
2007a; Zapatero Osorio et~al. 2008; Luhman et~al. 2008) and young X-ray emitter
stars (Sanz-Forcada et~al. 2004; Franciosini et~al. 2006; Skinner
et~al. 2008; L\'opez-Santiago \& Caballero 2008 and references therein).
X-ray observations in young open clusters, such as $\sigma$~Orionis, provide
information on winds of early-type stars, high-temperature coronae of late-type
stars, absorption by circumstellar discs, magnetic activity associated to fast
rotation, the cluster X-ray luminosity function, and, in general, the evolution
of young (pre-)main-sequence stars.
Except for the {\em ROSAT} variability analysis in Caballero et~al. (2009),
the latest X-ray studies in $\sigma$~Orionis have been carried out using
instruments onboard the {\em XMM-Newton} and {\em Chandra} space missions.
In this work, we analyse in detail observations of a large portion of the
cluster accomplished with the {\em Chandra} High Resolution Camera (HRC).
The lower sentivity of HRC with respect to EPIC/{\em XMM-Newton} (European
Photon Imaging Cameras) used by Franciosini et~al. (2006) was compensated by the
better spatial resolution and a longer exposure time, of almost 100\,ks.
Besides, the HRC observations in $\sigma$~Orionis were more sensitive and
covered a larger field of view than those performed with ACIS/{\em Chandra}
(Advanced CCD Imaging Spectrometer) by Skinner et~al.
(2008)\footnote{Skinner et~al. (2008) also used the High Energy Transmission
Grating, HETG, for the brightest sources.}.
HRC observations provide, however, no spectral information.
Some preliminary results based on the HRC/{\em Chandra} dataset, which is
publicly available from the {\em Chandra} Data Archive\footnote{\tt
http://cxc.harvard.edu/cda/} since 2003, have been advanced by Adams et~al.
(2002, 2003, 2004, 2005) and Caballero (2005, 2007b).
Here, we detect X-ray sources on the deep HRC image, cross-identify them with
optical, near-infrared, and previously-known X-ray sources, classify them into
young and field stars and galaxies using state-of-the-art spectro-, astro-, and
photometric data, compare them with previous X-ray observations, and study the X-ray
luminosity function in the cluster, the frequency of X-ray emitters, and its
relation to spatial location, disc occurrence, and stellar mass.
\section{Analysis and results}
\label{section.analysis}
\subsection{Data retrieval}
\label{section.dataretrieval}
HRC, held in the {\em Chandra} focal plane array together with ACIS, is a double
CsI-coated microchannel plate detector similar to the High Resolution Imaging
photon-counting detectors onboard the {\em Einstein Observatory} and {\em
ROSAT}.
However, HRC has substantially increased capability compared with them in X-ray
quantum efficiency (in the energy range 0.08--10.0\,keV), detector size
(90$\times$90\,mm$^2$ or 16\,Mpx, which translates into a field of view of
31$\times$31\,arcmin$^2$), internal background rate, and, especially, spatial
resolution (down to 0.016\,arcsec).
Using the web version of ChaSeR at the {\em Chandra} Data Archive, we searched
and retrieved the package of primary data products associated with the
observation with identification number 2560 (sequence number 200168, principal
investigator S.~Wolk).
Observations were carried out on 2002 Nov 21--22 and took a total exposure time
of 97.6\,ks.
The field of view was approximately centred on $\sigma$~Ori~D (Mayrit~13084), a
B2V star located 13\,arcsec from the massive binary (possibly triple) star
$\sigma$~Ori~AB at the bottom of the gravitational well in the centre of the
$\sigma$~Orionis cluster.
\subsection{Reduction}
\label{section.reduction}
Data reduction, starting with the level-1 event list provided by the
processing pipeline at the {\em Chandra} X-ray Center, was performed using
the {\em Chandra} Interactive Analysis of Observations software
CIAO~3.4\footnote{\tt http://cxc.harvard.edu/ciao3.4/} and the
{\em Chandra} Calibration Database CALDB~3.4.1\footnote{\tt
http://cxc.harvard.edu/caldb3/}.
We produced a level-2 event file using the CIAO task {\tt hrc\_process\_events}.
The data were filtered to remove events that did not have a good event ``grade''
or that had one or more of the ``status bits'' set to unity (see the
definitions of ``grade'' and ``status bits'' at the {\em Chandra}/CIAO
dictionary\footnote{\tt http://chandra.ledas.ac.uk/ciao/dictionary/}).
Intervals of solar background flaring were searched for, but none were found
(see, however, Section~\ref{section.xraylightcurves}).
As a result, we assumed a constant background and did not apply
time filtering.
An exposure map, needed by the source detection algorithm and to re-normalise
source count rates, was calculated with the CIAO tool {\tt mkexpmap} assuming a
monochromatic spectrum ($k_{\rm B}T$ = 1.0\,keV).
See further details in Albacete-Colombo et~al. (2008), where an alike reduction
process was performed.
\subsection{Source detection}
\label{section.sourcedetection}
Source detection was accomplished with the Palermo Wavelet Detection code
PWDetect\footnote{\tt http://www.astropa.unipa.it/progetti\_ricerca/PWDetect/}
version 1.3.2 (Damiani et~al. 1997a) on the level-2 event list restricted to the
0.5--10\,keV energy band and specifically compiled to run for a maximum of
7\,10$^6$ events.
PWDetect analyses the data at different spatial scales, from 0.25 to 16\,arcsec,
allowing the detection of both point-like and moderately extended sources and
the efficient resolution of close source pairs.
The most important input parameter required by the code is the final threshold
significance for detection, $S_{\rm min}$ (in equivalent Gaussian $\sigma$s),
which depends on the background level, detector, and desired number of spurious
detections per field due to Poisson noise, as determined from extensive
simulations of source-free fields (cf. Damiani et~al. 1997a).
We determined the total number of background counts detected during the
entire exposure over the full HRC-I detector to be 4.5\,10$^6$ photons with a
proprietary IDL script.
This background level translated into a final detection threshold of $S_{\rm
min}$ = 5.1$\sigma$ if we impose only {\em one spurious} detection in the field
of view.
A total of 109 HRC-I sources with $S >$ 5.1$\sigma$ were found with PWDetect.
We visually inspected each X-ray source and identified two ``double
detections'', corresponding to the stars Mayrit~3020~AB (No.~25) and
Mayrit~156353 (No.~11).
In detail, for each optical counterpart, PWDetect revealed two X-ray
sources, one bright and one faint and slightly decentred, separated by a few
tenths of arcsecond.
This separation is smaller than the sizes of the point spread functions of the
X-ray sources.
The double detections may arise from a poor background estimate adopted near
bright X-ray sources (Damiani et~al. 1997a, 1997b).
We discarded the faint X-ray sources in the two cases\footnote{We thank
I.~Pillitteri for helpful guidance in this subject.} and kept the remaining 107
sources as reliable X-ray detections.
Their coordinates, significances of detection ($S$), angular separations to the
centre of field of view (offaxis), count rate, and associated uncertainties are
listed in Table~\ref{table.xraydetections}.
The sources are sorted by decreasing significance of detection.
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{AA14861f02.eps}
\caption{Relative cumulative number of the HRC-I/{\em Chandra} X-ray
sources as a function of apparent flux.
The vertical [red] dashed line at 0.4\,10$^{-17}$\,W\,m$^{-2}$ indicates the
approximate completeness flux of our survey.}
\label{xfig_relN_flux}
\end{figure}
In addition, we estimated the apparent X-ray flux\footnote{Throughout this
work, we use the word `flux' for denoting the quantity $\lambda F_\lambda$.
For transforming between the Syst\`eme international d'unit\'es and the
centimetre-gram-second system, use the conversion factor
10$^{-14}$\,erg\,cm$^{-2}$\,s$^{-1}$ $\equiv$ 10$^{-17}$\,W\,m$^{-2}$.
Using $d$ = 385\,pc to the $\sigma$~Orionis cluster, a flux ${\mathcal F}$
= 10$^{-17}$\,W\,m$^{-2}$ translates into a {\em cgs} luminosity $\log{L_X}
\approx$ 29.25.} for each source.
We integrated the counts over a circular area three times wider than the one
used by PWDetect, which is in turn smaller than the size of the local point
spread function.
More than 97\,\% of the photons of a source fall within the circular area.
A mean background level was subtracted after integrating the counts over an area
of the same radius (but free of X-ray emission) in the vicinity of each source.
Finally, for the conversion between counts and energy, we used the factor
$\overline{E_\gamma}$ = 1.2\,keV (mean energy per X-ray photon), which is
representative of late-type young stars in $\sigma$~Orionis.
This value was obtained by determining a weighted mean of the coronal
temperatures of the stars in Table~3 in L\'opez-Santiago \& Caballero (2008).
The completeness flux limit, which marks an inflection point in the cumulative
number of X-ray sources as a function of apparent flux, was
0.4\,10$^{-17}$\,W\,m$^{-2}$ (Fig.~\ref{xfig_relN_flux}).
The actual completeness limit varies with the offaxis separation
(Section~\ref{section.spatial}).
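The flux and luminosity scales used above can be sketched numerically. The
snippet below is a minimal illustration, assuming only the conversions stated
in the text and in the footnote (1\,erg\,cm$^{-2}$\,s$^{-1}$ $\equiv$
10$^{-3}$\,W\,m$^{-2}$, $d$ = 385\,pc); the function name is ours.

```python
import math

PC_TO_CM = 3.0857e18  # 1 pc in cm

def log_lx_cgs(flux_w_m2, distance_pc=385.0):
    """log10 of the X-ray luminosity L_X (in erg/s) implied by an
    apparent flux in W m^-2, via L_X = 4*pi*d^2*F.
    Conversion: 1 W m^-2 = 1e3 erg cm^-2 s^-1."""
    flux_cgs = flux_w_m2 * 1.0e3      # erg cm^-2 s^-1
    d_cm = distance_pc * PC_TO_CM     # cm
    return math.log10(4.0 * math.pi * d_cm**2 * flux_cgs)

# Footnote check: F = 1e-17 W m^-2 at d = 385 pc gives log L_X ~ 29.25;
# the completeness flux 0.4e-17 W m^-2 gives log L_X ~ 28.85.
```

The completeness flux of the survey thus corresponds to a luminosity of
roughly $\log{L_X} \approx$ 28.85 ({\em cgs}) at the cluster distance.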
\subsection{Cross-identification}
\label{subsection.cross}
\begin{table*}
\caption[]{X-ray stars not tabulated in the Mayrit catalogue
(Caballero~2008c)$^{a}$.}
\label{table.nonmayrit}
$$
\begin{tabular}{lcl cc cccc l}
\hline
\hline
\noalign{\smallskip}
No. & & Name & $\alpha$ & $\delta$ & $i$ & $J$ & $H$ & $K_{\rm s}$ & References$^c$ \\
& & & (J2000) & (J2000) & [mag] & [mag] & [mag] & [mag] & \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
25 & * & Mayrit 3020 AB & 05 38 44.84 & --02 35 57.1 & .... & 10.4$\pm$0.2 & 10.70$\pm$0.07 & 10.480$\pm$0.010 & vLO03, Ca05, Bo09 \\
31 & * & [W96] 4771--1056 & 05 39 00.52 & --02 39 39.0 & 12.371$\pm$0.06 & 11.665$\pm$0.028 & 11.221$\pm$0.024 & 11.110$\pm$0.022 & Wo96, Sk08 \\
39 & * & Mayrit 168291 AB & 05 38 34.31 & --02 35 00.0 & 12.272$\pm$0.03 & 11.216$\pm$0.031 & 10.565$\pm$0.033 & 10.354$\pm$0.030 & He07, Ga08, Sk08 \\
46 & & Mayrit 1093033 & 05 39 24.56 & --02 20 44.1 & 12.727$\pm$0.16 & 11.371$\pm$0.026 & 10.778$\pm$0.024 & 10.554$\pm$0.023 & He07 \\
47 & * & Mayrit 68229 & 05 38 41.35 & --02 36 44.4 & 14.404$\pm$0.03 & 12.988$\pm$0.026 & 12.330$\pm$0.024 & 12.084$\pm$0.025 & Wo96, Ca07b, Sk08 \\
48 & & Mayrit 172264 & 05 38 33.35 & --02 36 17.6 & 13.393$\pm$0.02 & 12.052$\pm$0.027 & 12.295$\pm$0.023 & 11.107$\pm$0.027 & Sh04, Sk08 \\
51 & & [SWW2004] 166 & 05 38 53.06 & --02 38 53.6 & 12.717$\pm$0.02 & 11.625$\pm$0.026 & 11.034$\pm$0.026 & 10.828$\pm$0.025 & Sh04, Ol06, Sk08 \\
57 & * & Mayrit 492211 & 05 38 27.74 & --02 43 00.9 & 13.636$\pm$0.03 & 11.189$\pm$0.030 & 11.447$\pm$0.024 & 10.287$\pm$0.024 & Sh04, Sa08 \\
58 & * & Mayrit 21023 & 05 38 45.31 & --02 35 41.3 & .... & 13.41$\pm$0.09 & 12.98$\pm$0.06 & 12.73$\pm$0.09 & Ca07b, Bo09 \\
85 & & Mayrit 270196 & 05 38 39.72 & --02 40 19.7 & 15.489$\pm$0.05 & 13.746$\pm$0.031 & 13.099$\pm$0.026 & 12.883$\pm$0.028 & Sk08 \\
95$^b$ & & Mayrit 605079 & 05 39 24.35 & --02 34 01.3 & 14.501$\pm$0.19 & 12.978$\pm$0.030 & 12.272$\pm$0.026 & 12.058$\pm$0.021 & Sh04, Sa07, Sa08 \\
98 & & Mayrit 1178039 & 05 39 33.78 & --02 20 39.8 & 13.783$\pm$0.15 & 12.367$\pm$0.026 & 11.598$\pm$0.023 & 11.429$\pm$0.023 & Sh04, Ol06 \\
99 & & Mayrit 957055 & 05 39 37.29 & --02 26 56.7 & 13.000$\pm$0.03 & 11.698$\pm$0.026 & 10.974$\pm$0.024 & 10.773$\pm$0.021 & Sh04 \\
\noalign{\smallskip}
\hline
\end{tabular}
$$
\begin{list}{}{}
\item[$^{a}$] Stars marked with an asterisk, `*', are commented in
Section~\ref{section.notes.table1}.
\item[$^{b}$] See Section~\ref{section.clustermembers} for a discussion on
Mayrit~605079.
\item[$^{c}$] Reference abbreviations --
Wo96: Wolk (1996);
vLO03: van~Loon \& Oliveira (2003);
Sh04: Sherry et~al. (2004);
Ca05: Caballero (2005);
Ol06: Oliveira et~al. (2006);
Ca07b: Caballero (2007b);
Sa07: Sacco et~al. (2007);
He07: Hern\'andez et~al. (2007);
Ga08: Gatti et~al. (2008);
Sa08: Sacco et~al. (2008);
Sk08: Skinner et~al. (2008);
Bo09: Bouy et~al. (2009).
\end{list}
\end{table*}
We cross-matched the 107 X-ray sources in Table~\ref{table.xraydetections} with
optical and near-infrared catalogues.
First, we searched for their optical/near-infrared counterparts in the Mayrit
catalogue of young stars and brown dwarfs in the $\sigma$~Orionis cluster
(Caballero 2008c).
He tabulated coordinates, $iJHK_{\rm s}$ magnitudes (from the DENIS and 2MASS
catalogues -- Epchtein et~al. 1997; Skrutskie et~al. 2006), and youth features
of a large number of confirmed and candidate cluster members.
He also tabulated foreground field dwarfs and background galaxies.
Of the 107 X-ray sources in our work, 77 were in the Mayrit catalogue.
Secondly, we found the optical/near-infrared counterparts of another 13 X-ray
sources not tabulated in the Mayrit catalogue; they are listed in
Table~\ref{table.nonmayrit}.
Caballero (2008c) did not record them because they had no 2MASS counterpart
(Nos.~25 and~58) or known youth features at that time and were located bluewards
of his conservative selection criterion in the $i$ vs. $i-K_{\rm s}$ diagram
(the remaining 11 stars).
However, most of the 11 ``blue'' X-ray stars are ``red'' enough to have been
considered in previous photometric searches in the cluster (see references in
footnote to Table~\ref{table.nonmayrit}).
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{AA14861f03.eps}
\caption{Separation between the 90 correlated HRC-I sources and their 2MASS
counterparts as a function of separation to the cluster centre
($\rho$ vs. $r$ diagram).
The horizontal (red) dashed line marks $\rho$ = 0\,arcsec (all the data points
are located above this line).}
\label{xfig_rho_r}
\end{figure}
In total, we found the optical/near-infrared counterparts of 90 X-ray sources.
The separations between the coordinates of the 2MASS and our X-ray sources are
plotted against the separation to the centre of the field of view in
Fig.~\ref{xfig_rho_r}.
None of them deviates from zero by more than 1$\sigma$ (accounting for the
errors in the determination of the photo-centroids of the HRC-I and 2MASS
sources).
Average separations are $\Delta \alpha$ = 0.2$\pm$1.0\,arcsec and $\Delta
\delta$ = 0.0$\pm$0.7\,arcsec.
Root-mean-square separations in the innermost 3\,arcmin, where the HRC-I point
spread functions are sharper, fall below 0.1\,arcsec.
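As a sanity check of this kind of cross-identification, the angular offsets can
be computed with a flat-sky approximation. The sketch below is illustrative
only (the function name and the approximation are ours, not the actual matching
code used in this work).

```python
import math

def ang_sep_arcsec(ra1_deg, dec1_deg, ra2_deg, dec2_deg):
    """Angular separation in arcsec between two equatorial positions
    given in degrees, using the flat-sky approximation with the
    cos(dec) correction -- adequate for the sub-arcminute offsets
    relevant to X-ray/2MASS cross-matching."""
    dra = (ra1_deg - ra2_deg) * math.cos(math.radians(0.5 * (dec1_deg + dec2_deg)))
    ddec = dec1_deg - dec2_deg
    return math.hypot(dra, ddec) * 3600.0
```

Mean offsets such as $\Delta \alpha$ = 0.2$\pm$1.0\,arcsec then follow from
averaging the per-pair right-ascension and declination differences.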
The remaining 17 non-cross-matched X-ray sources and their closest 2MASS sources
are listed in Table~\ref{table.noncounterpart}.
Following L\'opez-Santiago \& Caballero (2008), we also looked for the optical
photographic counterparts in the USNO-B1 catalogue (Monet et~al. 2003).
We had no success with the cross-matching.
In all cases, the separations between the coordinates of the HRC-I and 2MASS
sources are larger than 2$\sigma$ and get larger than 6$\sigma$ in 13 cases.
These 13 HRC-I sources must have counterparts fainter than the USNO-B1, DENIS,
and 2MASS limiting magnitudes at $B_J \sim$ 21.0\,mag, $R_F \sim$ 20.0\,mag, $i
\sim$ 18.0\,mag, $J \sim$ 17.1\,mag, $H \sim$ 16.4\,mag, and $K_{\rm s} \sim$
14.3\,mag, respectively.
We are less confident about the non-cross-matching of the other four X-ray
sources, which are separated from their closest 2MASS sources by less than
3$\sigma$.
In two cases, Nos.~62 and~96, nearby galaxies undetected by USNO-B1,
DENIS, or 2MASS are visible in public images (see footnotes to
Table~\ref{table.noncounterpart}).
Finally, in the other two cases, Nos.~97 and~107, the errors in coordinates of
X-ray sources could be underestimated and the 2MASS sources, which are cluster
member candidates (Burningham et~al. 2005; Caballero 2007b), may be the actual
optical counterparts (note the small angular separation of No.~97).
\begin{table}
\caption[]{The closest 2MASS sources to X-ray galaxy candidates without
optical/near-infrared counterpart listed in
Table~\ref{table.xraydetections}$^{a}$.}
\label{table.noncounterpart}
$$
\begin{tabular}{lc cc c l}
\hline
\hline
\noalign{\smallskip}
No. & & $\alpha$ & $\delta$ & $\rho$ & Name \\
& & (J2000) & (J2000) & [arcsec] & \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
24 & & 05 38 35.10 & --02 34 55.9 & 18.1 & ... \\
56 & & 05 38 39.65 & --02 30 21.0 & 13.9 & [SWW2004] 79 \\
62 & * & 05 38 14.22 & --02 35 07.3 & 6.31 & [W96] rJ053814--0235 \\
68 & & 05 38 59.65 & --02 38 15.6 & 41.2 & ... \\
76 & & 05 38 53.37 & --02 33 22.9 & 24.4 & Mayrit 203039 \\
83 & & 05 38 59.22 & --02 33 31.6 & 28.2 & .... \\
91 & & 05 38 41.37 & --02 28 31.8 & 12.3 & ... \\
93 & * & 05 38 44.70 & --02 43 22.3 & 38.9 & ... \\
96 & * & 05 39 06.64 & --02 38 08.1 & 3.22 & ... \\
97 & * & 05 38 43.86 & --02 37 06.8 & 0.766 & Mayrit 68191 \\
100 & & 05 38 24.49 & --02 29 22.8 & 27.9 & ... \\
101 & & 05 38 50.42 & --02 36 43.1 & 11.5 & ... \\
102 & & 05 38 42.00 & --02 39 23.2 & 15.7 & ... \\
104 & & 05 38 36.77 & --02 42 54.6 & 13.7 & ... \\
105 & & 05 38 34.79 & --02 34 15.8 & 16.2 & Mayrit 182305 \\
106 & & 05 39 10.21 & --02 37 09.8 & 25.9 & ... \\
107 & * & 05 38 28.25 & --02 32 27.4 & 4.16 & [BNL2005] 1.02 156 \\
\noalign{\smallskip}
\hline
\end{tabular}
$$
\begin{list}{}{}
\item[$^{a}$] Sources marked with an asterisk, `*', are commented in
Section~\ref{section.notes.table2}.
\end{list}
\end{table}
\subsection{Source classification}
\label{section.classification}
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{AA14861f04a.eps}
\includegraphics[width=0.49\textwidth]{AA14861f04b.eps}
\caption{Colour-magnitude and colour-colour diagrams.
The different symbols represent:
cluster star and brown dwarf members and candidates (--red-- filled stars);
field stars (--blue-- crosses), and galaxies (--blue-- pluses).
In the $i$ vs. $i-K_{\rm s}$ diagram in the top, the dotted (blue) lines are the
approximate completeness and detection limits of the combined DENIS-2MASS
cross-correlation.
The solid (black) line is the criterion for selecting cluster stars and brown
dwarfs without known features of youth in $\sigma$~Orionis used by Caballero
(2008c).
The dashed (black) line is the criterion shifted bluewards by 0.25\,mag.
The reddest sources in the $J-K_{\rm s}$ vs. $i-J$ diagram in the bottom, with
colours $J-K_{\rm s} >$ 1.5\,mag, are the galaxies UCM0536--0239 and 2E~1456 and
the T~Tauri star Mayrit~609206 (V505~Ori).}
\label{xfig_iJKs}
\end{figure}
On the one hand, we have classified the 90 HRC-I sources with near-infrared
counterpart into 84 young cluster members and candidates, four X-ray field
stars, and two X-ray galaxies (Table~\ref{table.xraycounterparts}).
Details on this classification are given next.
On the other hand, the 13 HRC-I sources without optical or near-infrared
counterparts at separations larger than 6$\sigma$ are galaxies (possibly active
galactic nuclei; L\'opez-Santiago \& Caballero 2008).
The remaining four sources without (or with questionable) counterpart seem to be
two galaxies as well (Nos.~62 and~96; see above) and two cluster member
candidates (Nos.~97 and~107).
Given the uncertainty in the actual nature of these last four sources, we
cautiously discarded them from the next steps of the analysis.
Colour-magnitude and colour-colour diagrams in Fig.~\ref{xfig_iJKs} illustrate
the source classification.
\subsubsection{Cluster members and candidates}
\label{section.clustermembers}
Of the 84 young cluster members and candidates, 72 (86\,\%) have
incontrovertible features of youth:
OB spectral type, intense Li~{\sc i}~$\lambda$6707.8\,{\AA} resonant doublet in
absorption, mid-infrared flux excess due to a circumstellar disc, strong (broad,
asymmetrical) H$\alpha$ emission due to accretion, and/or weak alkali absorption
lines due to low gravity (Caballero 2008c and references therein;
Gonz\'alez-Hern\'andez et~al. 2008; Sacco et~al. 2008).
Two of them are fainter than the star-brown dwarf boundary at $J \approx$
14.5\,mag (Caballero et~al. 2007) and are, therefore, {\em bona~fide} X-ray
``young brown dwarfs'' (Section~\ref{section.browndwarfs}).
The other 70 cluster members are classified in
Table~\ref{table.xraycounterparts} as ``young stars''.
There remain 12 stars that follow the photometric sequence defined by the
confirmed cluster stars in Fig.~\ref{xfig_iJKs} and that we classify as ``young
star candidates''.
All of them have been classified in the
same way in other photometric (Wolk 1996; Sherry et~al. 2004; Scholz \&
Eisl\"offel 2004; Caballero 2007b; Hern\'andez et~al. 2007; Bouy et~al. 2009)
and X-ray (Franciosini et~al. 2006; Skinner et~al. 2008) searches in the
cluster.
Spectroscopic information is available for only one of the young star candidates.
\object{Mayrit~605079} (No.~95, [SWW2004]~127), a photometric member candidate
in Sherry et~al. (2004), was spectroscopically followed up by Sacco et~al. (2007,
2008).
They measured a radial velocity consistent with cluster membership, a faint
H$\alpha$ (chromospheric) emission, and a peculiar under-abundance of lithium.
They derived nuclear and isochronal ages about 10\,Ma older than expected for
$\sigma$~Orionis stars.
Mayrit~605079 might belong to a differentiated young stellar population in the
Orion Belt (Jeffries et~al. 2006; Caballero 2007a; Maxted et~al. 2008) or be
instead an active field M-dwarf interloper with CN contamination around
the Li~{\sc i} line (Caballero~2010).
\subsubsection{Field stars}
\label{subsection.fieldstars}
Caballero (2006) took high-resolution spectra of the two stars associated with
the HRC-I sources Nos.~42 and~69, and found no trace of Li~{\sc i} in absorption
(except for H$\alpha$ when it is in emission, the Li~{\sc i} line is the most
obvious spectroscopic feature in young $\sigma$~Orionis stars of the same
magnitude as Nos.~42 and~69).
The two of them were classified as non-cluster members by Caballero (2008c).
The star associated with the HRC-I source No.~51 was a photometric cluster
member candidate in Sherry et~al. (2004), but it displays no lithium absorption,
radial velocity, or H$\alpha$ emission consistent with membership in
$\sigma$~Orionis according to Sacco et~al. (2008).
A fourth star, associated with the HRC-I source No.~31, was discovered and
spectroscopically investigated by Wolk (1996).
Its X-ray emission has been measured with {\em ROSAT} (Wolk 1996), {\em
XMM-Newton} (Franciosini et~al. 2006), and {\em Chandra} (Skinner et~al. 2008).
Given its location in the colour-magnitude diagram in Fig.~\ref{xfig_iJKs},
close to the confirmed field stars investigated by Caballero (2006) and its
unclear spectroscopic information (see footnote to Table~\ref{table.nonmayrit}),
we classify it as a ``possible field star''.
\subsubsection{Galaxies}
\label{subsection.galaxies}
There are two galaxies among the 90 HRC-I sources with 2MASS counterpart.
One is the very bright X-ray galaxy \object{2E~1456} (No.~9), which is extended
in optical and near-infrared images.
Besides, it has blue colours in the optical and red ones in the near infrared
(Caballero 2008c), an X-ray spectral energy distribution typical of an active
galactic nucleus (L\'opez-Santiago \& Caballero 2008), and irregular X-ray
variability (Caballero et~al. 2009).
Bright X-ray galaxies towards the $\sigma$~Orionis cluster are not uncommon (see
also 2E~1448 in L\'opez-Santiago \& Caballero 2008, which is out of the HRC-I
field of view).
The other cross-matched galaxy is \object{UCM0536--0239} (No.~64).
It is a Type~1 obscured quasi-stellar object at a spectroscopic redshift
$z_{sp}$ = 0.2362$\pm$0.0005 (Caballero et~al. 2008 and references therein).
The two galaxies have peculiar colours if compared to stars without thick discs
(Fig.~\ref{xfig_iJKs}).
\subsection{X-ray light curves}
\label{section.xraylightcurves}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{AA14861f05.ps}
\caption{A median HRC-I background light curve.
Note the high, decreasing background level at the beginning of the
observation.}
\label{xfig_background}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{AA14861f06a.eps}
\includegraphics[width=0.49\textwidth]{AA14861f06b.eps}
\caption{{\em Top:} $\chi^2$ as a function of the mean count rate ($\chi^2_j$
vs. $\overline{CR_j}$ diagram) for 10$^3$ of the 10$^5$ X-ray simulated series.
{\em Bottom:} same as top window, but for the 107 X-ray real series. X-ray
sources above the dashed line have probabilities larger than 99.5\,\% of being
actual variables.
Light curves with mean count rates lower than 5\,ks$^{-1}$ were not used in the
statistical analysis.
Compare this figure with Fig.~6 in Caballero et~al. (2009).}
\label{xfig_chi2_meancr}
\end{figure}
We built 107 X-ray light curves to look for flares and rotational modulation in
young stars.
For each X-ray source, we integrated the numbers of HRC-I counts in two
circular areas of the same radius, one centred on the source itself and the
other one in a region free of X-ray sources for subtracting the background
level.
The integration radii varied between 7 and 30\,arcsec depending on the offaxis
distance (i.e., the size of the point spread function).
The bin size was fixed to 1200\,s.
We discarded the first 5\,ks of each light curve because they were
affected by a relatively high background (this effect was only appreciable in
the faintest sources; Fig.~\ref{xfig_background}).
Next, we followed the same Poisson-$\chi^2$ analysis as in Caballero et~al.
(2009) on the 107 X-ray light curves to identify variable sources
(Fig.~\ref{xfig_chi2_meancr}).
This analysis provides results similar to those obtained by applying
Kolmogorov-Smirnov tests or by visually inspecting the light curves.
We used the parameters $A$ = 76, $B$ = 0.40\,ks$^{-2}$, and $s$ = 2 in the
sigmoid relation between the number of events and the mean count rate, and the
expression $\delta {\rm CR_i} = 0.91287 {\rm CR_i}^{1/2}$ in the relation
between the individual count rates and their errors.
In the case of the {\em Chandra} data, the above relations had much lower
uncertainties than for the {\em ROSAT} data in Caballero et~al. (2009).
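For reference, a minimal version of a $\chi^2$ statistic against the
constant-source hypothesis could look as follows. The error coefficient
0.91287 is taken from the relation quoted above; the $(N-1)$ normalisation
and the function name are our assumptions, not necessarily the exact choices
of Caballero et~al. (2009).

```python
import math

def reduced_chi2(count_rates, err_coeff=0.91287):
    """Reduced chi-square of a binned light curve about its mean,
    with per-bin uncertainties delta_CR_i = err_coeff * CR_i**0.5
    (the relation quoted in the text). A strictly constant series
    gives zero; Poisson scatter about a constant mean gives values
    near unity; flaring sources give much larger values."""
    rates = [c for c in count_rates if c > 0.0]  # skip empty bins
    n = len(rates)
    mean = sum(rates) / n
    chi2 = sum((c - mean) ** 2 / (err_coeff * math.sqrt(c)) ** 2 for c in rates)
    return chi2 / (n - 1)
```

A threshold on this statistic (as a function of the mean count rate) then
separates probable variables from sources consistent with a constant emission.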
Nine X-ray sources had probabilities of variability larger than a conservative
value of $p_{\rm var}$ = 99.5\,\% (Table~\ref{table.variablestars} and
Fig.~\ref{xfig_n0}).
The nine of them are $\sigma$~Orionis stars with signposts of youth.
Three stars (Nos.~7, 8, and~13) displayed apparent flares with
peak-to-quiescence ratios of about six and durations longer than 20\,ks.
Besides, we detected in star No.~4 the long-lasting decay of a flare with an
expected peak-to-quiescence ratio larger than six.
Three other stars (Nos.~11, 27, and~30) also displayed flares during the
observations.
In contrast to the other two stars, the flare observed in star No.~30
was relatively faint and short (a ``spike'' flare following the
nomenclature of Wolk et~al. 2005).
\begin{figure*}
\centering
\includegraphics[width=0.32\textwidth]{AA14861f07a.ps}
\includegraphics[width=0.32\textwidth]{AA14861f07b.ps}
\includegraphics[width=0.32\textwidth]{AA14861f07c.ps}
\includegraphics[width=0.32\textwidth]{AA14861f07d.ps}
\includegraphics[width=0.32\textwidth]{AA14861f07e.ps}
\includegraphics[width=0.32\textwidth]{AA14861f07f.ps}
\includegraphics[width=0.32\textwidth]{AA14861f07g.ps}
\includegraphics[width=0.32\textwidth]{AA14861f07h.ps}
\includegraphics[width=0.32\textwidth]{AA14861f07i.ps}
\caption{HRC-I/{\em Chandra} light curves of the nine X-ray variable stars in
Table~\ref{table.variablestars}.
The grey areas between 1 and 5\,ks indicate portions of all the light curves
affected by high background.}
\label{xfig_n0}
\end{figure*}
\begin{table}
\caption[]{Sources with a probability of X-ray variability in the HRC-I
data larger than $p_{\rm var}$ = 99.5\,\%.}
\label{table.variablestars}
$$
\begin{tabular}{l l cc l}
\hline
\hline
\noalign{\smallskip}
No. & Name & $\overline{CR}$ & $\chi^2$ & Variability \\
& & [ks$^{-1}$] & & type \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
4 & \object{Mayrit 348349} & 29.1 & 12.5 & Flare decay \\
7 & \object{Mayrit 789281} & 54.9 & 24.8 & Flare \\
8 & \object{Mayrit 863116} AB & 55.6 & 26.3 & Flare with structure \\
11 & \object{Mayrit 156353} & 13.0 & 3.6 & Flare \\
13 & \object{Mayrit 180277} & 11.8 & 4.1 & Flare \\
18 & \object{Mayrit 403090} & 11.0 & 4.5 & Rot. modulation? \\
27 & \object{Mayrit 489165} & 8.0 & 3.2 & Flare \\
28 & \object{Mayrit 489196} & 8.0 & 4.5 & Rot. modulation? \\
30 & \object{Mayrit 397060} & 5.4 & 4.1 & Flare \\
\noalign{\smallskip}
\hline
\end{tabular}
$$
\end{table}
\begin{figure*}
\centering
\includegraphics[width=0.33\textwidth]{AA14861f08a.ps}
\includegraphics[width=0.33\textwidth]{AA14861f08b.ps}
\includegraphics[width=0.33\textwidth]{AA14861f08c.ps}
\caption{Same as Fig.~\ref{xfig_n0}, but for the three brightest X-ray stars:
Mayrit~AB ($\sigma$~Ori~AB, No.~1), Mayrit~114305~AB ([W96]~4771--1147~AB,
No.~2), and Mayrit~42062~AB ($\sigma$~Ori~E, No.~3).}
\label{xfig_n001to3}
\end{figure*}
The two remaining stars, Nos.~18 and~28, showed variations not clearly
attributable to ``usual'' flares.
The light curve of the source No.~28 is similar to that observed for
$\sigma$~Ori~E, a star with rotationally-modulated X-ray emission (see below).
The case of the source No.~18 is more complex.
The count-rate enhancement at about 40\,ks from the beginning of the
observation could be related to a persistent flare, although the occultation of
part of the corona by a companion, or of an active region by stellar rotation,
cannot be ruled out.
Nevertheless, since HRC-I does not provide spectral energy information, we could
not perform an analysis of the time-resolved spectra to corroborate the
hypothesis of rotational modulation in the light curves of the stars Nos.~18
and~28.
To date, there have been few incontestable cases of X-ray rotational modulation
in the $\sigma$~Orionis cluster (e.g., Franciosini et~al. 2006).
The most documented case is that of the bright B2Vpe star $\sigma$~Ori~E
(Mayrit~42062~AB, No.~3), which was found to have an X-ray emission modulated
with a period consistent with the stellar rotation, $P \sim$ 1.19\,d
($P \sim$ 103\,ks; Townsend et~al. 2010 and references therein), by Skinner
et~al. (2008).
Our Poisson-$\chi^2$ analysis gave $\sigma$~Ori~E a low probability of
variability.
However, Caballero et~al. (2009) noticed that the methodology was sensitive to
flaring activity, but not to low-amplitude modulation.
We visually inspected the X-ray light curve of $\sigma$~Ori~E and detected a
modulation with a sinusoidal-like variation of the HRC-I count rate between 20
and 50\,ks$^{-1}$ and an estimated period slightly longer than the duration of
the observations ($>$97.6\,ks), which is also consistent with the rotational
period.
In contrast to Groote \& Schmitt (2004), Sanz-Forcada et~al. (2004), and
Caballero et~al. (2009), who reported strong X-ray flares in the light curves of
$\sigma$~Ori~E, we found none (the flares originate in its low-mass
companion; Caballero et~al. 2009).
The light curve of $\sigma$~Ori~E is displayed in the right panel of
Fig.~\ref{xfig_n001to3} in comparison with the two brightest X-ray sources in
our HRC-I observations.
The supposedly stable light curve of $\sigma$~Ori~AB (Mayrit~AB, No.~1), whose
X-ray emission likely originates in a strong wind (in particular for
$\sigma$~Ori~AB: Sanz-Forcada et~al. 2004; Skinner et~al. 2008 -- in general for
OB stars: Lucy \& White 1980; Owocki \& Cohen 1999; Kudritzki \& Puls 2000;
G\"udel \& Naz\'e 2009), had a $\chi^2$ value slightly below the limit $p_{\rm
var}$ that we adopted for variability, just as occurred during the
ACIS-S observations by Skinner et~al. (2008).
The light curve of the classical T~Tauri star Mayrit~114305~AB
([W96]~4771--1147~AB, No.~2) had a lower $\chi^2$ value of about 1.2, but
showed a hint of rotational modulation.
\subsection{Beyond the completeness}
\label{section.beyond}
\begin{table*}
\caption[]{Previously-known sources in the 10-spurious search and not in
Table~\ref{table.xraydetections}.}
\label{table.beyond}
$$
\begin{tabular}{l cc ccc c ll l}
\hline
\hline
\noalign{\smallskip}
Name & $\alpha$ & $\delta$ & $\Delta \alpha$, $\Delta \delta$ & $S$ & Offaxis & CR & NX & CXO & Remarks \\
& (J2000) & (J2000) & [arcsec] & ($\sigma$) & [arcmin] & [ks$^{-1}$] & (Fr06)& (Sk08)& \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
Mayrit~734047 & 05 39 20.44 & --02 27 36.8 & 5.54 & 4.92 & 12.0 & 2.1$\pm$0.6 & 159 & ... & Li~{\sc i}, H$\alpha$ \\
Mayrit~468096 & 05 39 15.83 & --02 36 50.7 & 2.09 & 4.83 & 7.60 & 0.6$\pm$0.2 & 151 & ... & Li~{\sc i}, Class II \\
Mayrit~441103 & 05 39 13.47 & --02 37 39.1 & 1.58 & 4.62 & 7.13 & 0.42$\pm$0.14 & 148 & ... & ... \\
{[FPS2006]~NX~120} & 05 38 59.51 & --02 35 28.6 & 0.83 & 4.67 & 3.47 & 0.23$\pm$0.08 & 120 & 40 & Galaxy \\
Mayrit~270181 & 05 38 44.49 & --02 40 30.5 & 0.90 & 4.71 & 4.60 & 0.22$\pm$0.09 & ... & ... & Li~{\sc i}, low~$g$ \\
\noalign{\smallskip}
\hline
\end{tabular}
$$
\end{table*}
We performed a new search for X-ray sources in our HRC-I data by imposing a
less restrictive identification criterion.
In Section~\ref{section.sourcedetection}, we allowed for only one spurious
X-ray source among the 107 (actually 109) detections.
In this case, we eased the identification of very faint sources close to the
noise limit by setting the maximum number of spurious X-ray sources in
PWDetect to ten.
The corresponding background level translated into a final threshold
of significance of detection $S_{\rm min}$ = 4.6$\sigma$ (it was $S_{\rm min}$ =
5.1$\sigma$ for a maximum of one spurious X-ray source).
The less-restrictive choice resulted in the detection of 142 sources (i.e., we
gained about 24 new reliable sources by accepting nine extra spurious
detections).
However, the gain was not considerable because of the large contamination by
extragalactic sources at low X-ray count rates.
Of the 33 newly identified sources, we list five in Table~\ref{table.beyond}.
Four of them were identified in the X-ray observations by Franciosini et~al.
(2006).
One of the four sources was also identified by Skinner et~al. (2008), which
supports the reliability of our X-ray detections beyond the completeness limit.
There is an optical/near-infrared counterpart for all HRC-I sources in
Table~\ref{table.beyond} except for [FPS2006]~NX~120 ([SSC2008]~40), which
is probably a galaxy (Franciosini et~al. 2006)\footnote{At less than
6\,arcsec from [FPS2006]~NX~120 lie 2MASS J05385930--0235282, a fore- or
background source based on $iJHK_{\rm s}$ colours, and {[BZR99]~S\,Ori~72}, a
young L/T-transition cluster member candidate or active galactic nucleus (Bihain
et~al. 2009).}.
The four cross-matched X-ray sources are $\sigma$~Orionis cluster members and
candidates with faint X-ray emission (Caballero 2008c).
Of them, only Mayrit~441103 has no known feature of youth.
We followed the criterion in L\'opez-Santiago \& Caballero (2008) to discard the
remaining 29 X-ray sources without 2MASS counterpart (including [SSC2008]~40) as
stellar/substellar candidates, and classified them as objects of extragalactic
nature.
\section{Discussion}
\label{section.discussion}
\subsection{Short-term X-ray variability: HRC-I light curves}
\label{section.shortterm}
The nine X-ray variable sources in Table~\ref{table.variablestars} are young
stars in the $\sigma$~Orionis cluster.
This yields a minimum frequency of X-ray variability of 11\,\% (9/84), which
increases to 12\,\% if we also take into account $\sigma$~Ori~E.
The reader should compare this value with the 36 and 39\,\% reported by
Franciosini et~al. (2006) and Caballero et~al. (2009), respectively, in the same
cluster, but using different sampling and datasets (in practice, 43\,ks of
continuous observations with {\em XMM-Newton} --Franciosini et~al. 2006-- and
one {\em ROSAT} visit per day over 34 days --Caballero et~al. 2009--).
Although Skinner et~al. (2008) did not provide a frequency, we estimated a rough
value at 25\,\% from their data (see below).
We ascribe the low frequency that we derived to our conservative variability
criterion, rather than to the different completeness depths of the surveys.
Our value of 11\,\% is a lower limit to the X-ray variability frequency because
some probably variable young stars did not pass our filter.
For example, stars Nos.~16 (Mayrit~97212) and~17 (Mayrit~157155), which were not
listed in Table~\ref{table.variablestars}, displayed hints of rotational
modulation and flaring activity, respectively, after a visual inspection.
We also compared our derived flare rate with other measurements in the
literature.
With seven flares detected during our observation among 84 young stars and
candidates, we derived $1/1180$ flares per star per kilosecond.
This rate increased slightly, to about $1/1070$ flares per star per kilosecond,
when we discarded the early-type (OB) cluster stars
(Section~\ref{section.earlytype}).
Both corrected and uncorrected values are consistent with previous
determinations of flare rates, although we did not consider the completeness
for flare detection.
For example, with different instruments, sensitivities, flare definitions and
energies, data biases, extragalactic contaminations, and stellar spectral-type
intervals, Wolk et~al. (2005), Albacete-Colombo et~al. (2007), and Stelzer
et~al. (2007) reported flare rates of $1/1150$, $1/610$, and $1/1320$ flares per
star per kilosecond, respectively, in star-forming regions slightly younger than
$\sigma$~Orionis ({Orion Nebula Cluster}, {Cyg~OB2}, and
{Taurus}; $\tau \sim$ 1--2\,Ma).
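The flare-rate normalisation quoted above amounts to a simple division,
sketched here. The $\sim$98\,ks of useful exposure is our assumption (the
observation lasted slightly more than 97.6\,ks), as is the exact number of
low-mass stars after removing the OB members.

```python
def flare_rate(n_flares, n_stars, exposure_ks):
    """Mean flare rate in flares per star per kilosecond, assuming
    every star was monitored over the full exposure."""
    return n_flares / (n_stars * exposure_ks)

# Numbers from the text: 7 flares among 84 young stars over ~98 ks
# (assumed useful exposure) give roughly 1/1180 flares per star per
# kilosecond; with only the low-mass members (~76 stars, an assumed
# count), the rate rises to roughly 1/1070.
```

The comparison with the rates of Wolk et~al. (2005), Albacete-Colombo et~al.
(2007), and Stelzer et~al. (2007) uses the same normalisation.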
Several stars in Table~\ref{table.variablestars} have been previously reported
to display X-ray variability.
By applying Kolmogorov-Smirnov tests on the unbinned photon arrival times,
Franciosini et~al. (2006) found that roughly half of the (weak-line and
classical) T~Tauri stars in $\sigma$~Orionis were variable at the 99\,\%
confidence level.
Eight cluster members with signposts of youth and two candidate members showed
clear flares during their {\em XMM-Newton} observations.
Of them, we were able to detect the X-ray emission in the HRC-I image of five
stars, of which only one displayed variability during our observations (No.~28,
Mayrit~489196, [FPS2006]~NX~61), but of rotational-modulation type.
However, the two X-ray light curves obtained with {\em XMM-Newton} and {\em
Chandra} resemble each other, so we may face the same variability type (e.g., a
low-amplitude, long-lasting flaring activity).
Franciosini et~al. (2006) also reported five young stars showing significant
variability not clearly attributable to flares.
We detected all five of them and found that one, No.~11 (Mayrit~156353,
[FPS2006]~NX~76), displayed a flare during our observations.
In contrast, during the entire {\em XMM-Newton} observations, the star showed a
steady decay by a factor of $\sim$2, which we attribute to the decay of a
long-lasting flare.
The frequencies of X-ray rotational modulation reported by us and by Franciosini
et~al. (2006) are consistent, both lying in the approximate interval 1--3\,\%.
The list of ten variables in Skinner et~al. (2008) included Mayrit~42062~AB
($\sigma$~Ori~E; see above), the unseen galaxy associated with No.~24 (a slow,
low-amplitude X-ray variable with unusual hardness and no
optical/near-infrared counterpart), and some young stars with slow decline
(No.~5, Mayrit~203039) or increase (No.~25, Mayrit~3020~AB) in count rate.
Besides, X-ray flares were visible in Mayrit~105249 (No.~12; variable in
Franciosini et~al. 2006) and, possibly, Mayrit~92149~AB (No.~29).
If we do not take into account $\sigma$~Ori~E, there are no stars in common in
the lists of variable X-ray sources in Skinner et~al. (2008) and our work.
Besides, two of the five most variable stars in the study by Caballero et~al.
(2009; Section~\ref{section.hrirosat}) also appear in
Table~\ref{table.variablestars}.
They are Mayrit~863116~AB (No.~8) and Mayrit~156353 (No.~11).
Interestingly, the HRC-I light curve of the bright star Mayrit~863116~AB showed
a flare with structure.
The double hump may originate in a series of two flares of different shapes
or in a single flare that was occulted by stellar rotation, a companion, or a
disc (Mayrit~863116~AB seems to be a spectroscopic binary with a warm
circumstellar disc; Caballero et~al. 2009).
There are only a few stars that have been repeatedly found to display the same
X-ray variability type, such as the bright early-type $\sigma$~Ori~E and
T~Tauri Mayrit~863116~AB stars.
To sum up, some young X-ray stars that displayed variability at other epochs did
not do so during our observations, and vice~versa.
This result was expected from the relatively low flare rate measured above of
one flare per star every one or two weeks.
As a result, the variability frequencies given above depend on several
factors including the sensitivity, length, and energy bandpass of the
observation and can only be taken as lower limits.
\subsection{Long-term X-ray variability: comparison to previous X-ray surveys in
$\sigma$~Orionis}
\label{section.longterm}
\begin{figure*}
\centering
\includegraphics[width=0.49\textwidth]{AA14861f09a.eps}
\includegraphics[width=0.49\textwidth]{AA14861f09b.eps}
\includegraphics[width=0.49\textwidth]{AA14861f09c.eps}
\includegraphics[width=0.49\textwidth]{AA14861f09d.eps}
\caption{Count rates of IPC/{\em Einstein} ({\em top left}), HRI/{\em ROSAT}
({\em top right}), ACIS-S+HETG/{\em Chandra} ({\em bottom left}), and EPIC/{\em
XMM-Newton} ({\em bottom right}) as a function of count rates of
HRC-I/{\em Chandra}.
The dashed lines indicate, from top to bottom, IPC--HRC-I count-rate ratios of
4.00, 0.40, and 0.04; HRI--HRC-I ratios of 4.50, 0.45, and 0.045;
ACIS-S+HETG--HRC-I ratios of 2.50, 0.25, and 0.025; and EPIC--HRC-I ratios of
15.0, 1.50, and 0.15.
The OB-type binary star $\sigma$~Ori~AB has {\em not} been used as a
reference in the ACIS-S+HETG--HRC-I comparison.}
\label{xfig_othermission_crhrc-i}
\end{figure*}
\begin{table}
\caption[]{Energy bands, spatial resolutions, and field of view of some
X-ray instruments onboard space missions$^{a}$.}
\label{table.spacemissions}
$$
\begin{tabular}{ll ccc}
\hline
\hline
\noalign{\smallskip}
Space & Instrument & Energy & Resolution & FoV \\
mission & & [keV] & [arcsec] & [arcmin]\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
{\em Einstein} & HRI & 0.2--3.0 & 4 & 25 \\
& IPC & 0.3--3.5 & 60 & 75 \\
{\em ROSAT} & HRI & 0.1--2.4 & 5 & 20\,$\times$\,20\\
& PSPC & 0.1--2.4 & 15 & 114 \\
{\em Chandra} & HRC-I & 0.08--10 & 0.4 & 31\,$\times$\,31\\
& ACIS-S & 0.2--10 & 1.2 & 16\,$\times$\,16\\
{\em XMM-Newton}& PN & 0.2--15 & 6 & 30 \\
& MOS & 0.2--12 & 6 & 30 \\
\noalign{\smallskip}
\hline
\end{tabular}
$$
\begin{list}{}{}
\item[$^{a}$] See an exhaustive compilation of parameters of X-ray detectors at
{\tt http://space.mit.edu/$\sim$jonathan/xray\_detect.html}.
\end{list}
\end{table}
All the large space missions able to observe low- to mid-energy X-rays, i.e.
{\em Einstein Observatory} (HEAO-2), {\em ROSAT} (R\"ontgensatellit), {\em
XMM-Newton}, and {\em Chandra}, have observed the $\sigma$~Orionis region in
detail (Section~\ref{section.comparison}).
Besides, the Advanced Satellite for Cosmology and Astrophysics ({\em ASCA})
observed nearby areas close to the Horsehead Nebula and {Alnitak}
($\zeta$~Ori).
In principle, the different pointing centres and exposure times of the
observations, the singular apertures, fields of view, spatial resolutions and,
especially, detector responses of the instrument/telescope systems
(Table~\ref{table.spacemissions}), and the ``colours'' and intrinsic variability
of the X-ray sources prevent a direct comparison between previous results and
ours.
In spite of these differences, we expected to find a correlation between
the count rates measured by HRC-I and by the other instruments used, and to
identify X-ray sources that deviate from the general trends.
See, e.g., the {\em Einstein}-{\em ROSAT} comparison in the Pleiades by Stauffer
et~al. (1994).
\begin{table}
\caption[]{Long-term X-ray variable stars.}
\label{table.variable}
$$
\begin{tabular}{ll c l}
\hline
\hline
\noalign{\smallskip}
No. & Name & Variability & Instrument \\
& & factor$^{b}$ & \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
3 & \object{Mayrit 42062} AB & 4.5 & EPIC \\
4 & Mayrit 348349 & 0.22 & EPIC \\
16 & \object{Mayrit 97212} & 7.2 & EPIC \\
20 & \object{Mayrit 344337} AB & 7.3 & EPIC \\
37 & \object{Mayrit 102101} AB & 8.3 & HRI \\
80 & \object{Mayrit 497054} & 4.6 & EPIC \\
84$^{a}$& \object{Mayrit 433123} & 0.21 & EPIC \\
97 & No.~97 & 5.9 & ACIS-S\\
99 & \object{Mayrit 957055} & 5.2 & IPC \\
... & \object{Mayrit 631045}$^{f}$ & $\gtrsim$4.8 & EPIC \\
... & \object{Mayrit 662301}$^{f}$ & $\gtrsim$6.3 & EPIC \\
... & \object{Mayrit 841079}$^{f}$ & $\gtrsim$4.4 & EPIC \\
\noalign{\smallskip}
\hline
\end{tabular}
$$
\begin{list}{}{}
\item[$^{a}$] See Section~\ref{section.browndwarfs} for a discussion on
the brown dwarf No.~84/Mayrit~433123.
\item[$^{b}$] Quotient of the measured count-rate ratio CR$_{1}$/CR$_{2}$ and
the average count-rate ratio (CR$_{1}$/CR$_{2}$)$_{0}$, where 1 denotes the
space-mission instrument listed in the last column and 2 denotes
HRC-I/{\em Chandra}.
The values of (CR$_{1}$/CR$_{2}$)$_{0}$ are 0.40 (IPC/{\em Einstein}), 0.45
(HRI/{\em ROSAT}), 0.25 (ACIS-S/{\em Chandra}), and 1.50
(EPIC/{\em XMM-Newton}).
\end{list}
\end{table}
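The variability factor defined in note~$b$ of the table is straightforward to evaluate; a minimal sketch (the count rates below are hypothetical, chosen only so that the result reproduces the tabulated factor of 4.5 for No.~3/Mayrit~42062~AB):

```python
# Long-term variability factor as in note (b): quotient of the measured
# count-rate ratio CR1/CR2 and the average ratio (CR1/CR2)_0, where
# instrument 2 is HRC-I/Chandra. Reference ratios are those of the note.

REF_RATIO = {"IPC": 0.40, "HRI": 0.45, "ACIS-S": 0.25, "EPIC": 1.50}

def variability_factor(cr1, cr_hrc, instrument):
    """Return (CR1/CR2) / (CR1/CR2)_0 for the given instrument."""
    return (cr1 / cr_hrc) / REF_RATIO[instrument]

# Hypothetical count rates (counts per ks), for illustration only:
f = variability_factor(cr1=13.5, cr_hrc=2.0, instrument="EPIC")
print(f)                   # 4.5, matching the tabulated factor of No. 3
print(f > 4 or f < 0.25)   # True -> flagged as a long-term variable
```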
The ``long-term variability'' found in our comparison and summarised in
Table~\ref{table.variable} and Fig.~\ref{xfig_othermission_crhrc-i} may
actually be the result of observing an X-ray source with short- or mid-term
variability (in scales of hours or a few days; e.g., flares) at two separated
epochs.
In particular, nine $\sigma$~Orionis stars and one galaxy displayed
quotients of the measured and average count-rate ratios larger than 4 or smaller
than 1/4.
Some of them showed variations of a factor of~7 or more, or were found to
vary in different comparisons:
\begin{itemize}
\item No.~3/Mayrit~42062~AB underwent flaring-like activity during the EPIC
observations.
\item No.~4/Mayrit~348349 showed an apparent flare decay during our HRC-I
observations and another strong flare during the HRI/{\em ROSAT} ones.
\item No.~16/Mayrit~97212 and No.~20/Mayrit~344337~AB showed significant
variability not clearly attributable to flares during the EPIC observations.
\item No.~37/Mayrit~102101~AB underwent a strong flare during HRI observations.
\item The stars Mayrit~631045, Mayrit~662301, and Mayrit~841079, with
designations NX~149, NX~7, and NX~174, respectively, in Franciosini et~al.
(2006; Section~\ref{section.epicxmmnewton}) displayed flares and were bright
enough during EPIC observations to be fitted to one-temperature models.
Mayrit~841079 (V603~Ori) is the source of the Herbig-Haro object \object{HH~445}
(Reipurth et~al. 1998; Andrews et~al. 2004).
\end{itemize}
\subsection{The cluster X-ray luminosity function}
\label{section.luminosityfunction}
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{AA14861f10a.eps}
\includegraphics[width=0.49\textwidth]{AA14861f10b.eps}
\caption{{\em Top panel:} same as Fig.~\ref{xfig_relN_flux}, but only for young
stars, young star candidates, and possible young stars in $\sigma$~Orionis (as
classified in Table~\ref{table.xraydetections}).
The dotted line indicates the relative cumulative number of Franciosini
et~al. (2006) EPIC X-ray sources as a function of apparent flux.
Except for a $4 \pi d^2$ factor, the two curves delineate the cumulative
X-ray luminosity function of the cluster.
{\em Bottom panel:} same as the top panel, but as a histogram.}
\label{xfig_xlf}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{AA14861f11a.eps}
\includegraphics[width=0.49\textwidth]{AA14861f11b.eps}
\caption{X-ray flux ({\em top}) and X-ray-to-$J$-band luminosity ratio ({\em
bottom}) as a function of the $i-J$ colour.
Error bars account for the uncertainty in count rate and offaxis separation.}
\label{xfig_fluxLXLJ_i-J}
\end{figure}
The X-ray luminosity functions (XLFs) of young star clusters have been
extensively studied during the last three decades.
The {\em ROSAT} XLFs of the {Pleiades}, {Hyades}, or
{$\alpha$~Persei} ($\tau \sim$ 90--600\,Ma, $d \sim$ 45--190\,pc --
Stauffer et~al. 1994; Stern et~al. 1995; Randich et~al. 1996) represented a
cornerstone until the advent of {\em Chandra} and {\em XMM-Newton}.
By taking advantage of the improved spatial resolution of these space missions
currently in operation, clusters at larger heliocentric distances but with
much younger ages than the three clusters above have since been studied in
detail, such as the {Orion Nebula Cluster}, {IC~348},
{NGC~1333}, {NGC~2264}, or {M~17} ($\tau \sim$ 1--10\,Ma,
$d \sim$ 260--1600\,pc -- Feigelson et~al. 2002; Preibisch \& Zinnecker 2002;
Getman et~al. 2002; Flaccomio et~al. 2006; Broos et~al. 2007).
In spite of the low number of X-ray emitters investigated in $\sigma$~Orionis
with respect to the star-forming regions listed above, it still has a number of
advantages, e.g., nearness, very low visual extinction, and a wide knowledge of
its stellar and substellar populations (Section~\ref{section.introduction}).
Franciosini et~al. (2006) already investigated the XLF of $\sigma$~Orionis.
We illustrate the classical approach with Fig.~\ref{xfig_xlf}.
The HRC-I median flux of all the cluster members and candidates, regardless of
spectral type, is 6.2\,10$^{-17}$\,W\,m$^{-2}$.
We transformed back the X-ray luminosities tabulated by Franciosini et~al.
(2006) to fluxes (see below).
For seven $\sigma$~Orionis stars detected by them but without luminosity
determination, we used their EPIC count rates and count-rate-to-flux
conversion factor.
Except for slight differences that can be ascribed to the different spectral
sensitivities of HRC-I and EPIC and the method of flux estimation, the
Franciosini et~al. (2006) XLF and ours are quite similar.
Because of the long-lasting debate on the actual cluster distance and the
absence of spectral-type determination for all the $\sigma$~Orionis
members and candidates, we preferred instead the diagrams in
Fig.~\ref{xfig_fluxLXLJ_i-J} for our XLF discussion.
Both the apparent X-ray flux (top panel) and the X-ray-to-$J$-band luminosity
ratio (bottom panel) are independent of the actual distance, while there are
accurate $i-J$ measurements for all the X-ray stars and brown dwarfs in
$\sigma$~Orionis, mostly taken from Caballero (2008c).
The optical/near-infrared colour $i-J$ is a suitable indicator of effective
temperature (i.e., of spectral type).
The use of other colours involving bluer optical and redder near-infrared bands
(e.g., $V-J$, $i-K_{\rm s}$) is currently impractical because of no data
availability (all the faintest cluster members lack $B$-, $V$-, and $R$-band
measurements) or flux excesses at wavelengths longer than 1.2\,$\mu$m in cluster
members with circum(sub)stellar material.
The X-ray-to-$J$-band luminosity ratio, $L_X / L_J$, is defined by:
\begin{equation}
\frac{L_X}{L_J} = \frac{4 \pi d^2 {\mathcal F}_X}{4 \pi d^2 {\mathcal F}_J}
= \frac{{\mathcal F}_X}{{\mathcal F}_J},
\end{equation}
\noindent where ${\mathcal F} \equiv \lambda F_\lambda$ is the apparent flux,
in watts per square meter, and the apparent $J$-band flux ${\mathcal F}_J$ is
approximately proportional to the apparent bolometric flux ${\mathcal F}_{\rm
bol}$.
The spectral energy distributions of late-K- and M-type stars peak at the $J$
band, which is, besides, the band least affected by photometric variability and
the presence of discs.
The $L_X / L_J$ ratio is thus a proxy for $L_X / L_{\rm bol}$.
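Because the $4 \pi d^2$ factors cancel, the ratio can be computed directly from apparent fluxes; a minimal sketch with hypothetical flux values (only ${\mathcal F}_X$ echoes the order of magnitude of the HRC-I median flux quoted above):

```python
# L_X / L_J from apparent fluxes: the 4*pi*d^2 factors cancel, so the
# ratio is distance-independent. Flux values are hypothetical.

import math

def luminosity_ratio(flux_x, flux_j):
    """L_X/L_J = (4 pi d^2 F_X) / (4 pi d^2 F_J) = F_X / F_J."""
    return flux_x / flux_j

f_x = 6.2e-17   # W m^-2 (order of the HRC-I median flux quoted above)
f_j = 6.2e-14   # W m^-2 (hypothetical J-band apparent flux)
print(luminosity_ratio(f_x, f_j))   # ~1e-3, independent of d

# The same ratio results at any assumed distance d (in metres):
d = 385 * 3.0857e16                 # ~385 pc
lx = 4 * math.pi * d**2 * f_x
lj = 4 * math.pi * d**2 * f_j
assert abs(lx / lj - luminosity_ratio(f_x, f_j)) < 1e-12
```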
Diagrams showing X-ray-to-$J$-band luminosity ratio as a function of
colour/effective temperature/spectral type, as in the bottom panel in
Fig.~\ref{xfig_fluxLXLJ_i-J}, have been shown by, e.g., Micela et~al. (1999),
Reid (2003), and Daemgen et~al. (2007).
In our diagram, three different regions can be separated: massive early-type
stars (mostly OB), intermediate- and low-mass stars (GKM), and brown dwarfs
(with spectral types later than about M5.5 in $\sigma$~Orionis).
\subsubsection{Early-type stars}
\label{section.earlytype}
With HRC-I/{\em Chandra}, we identified eight $\sigma$~Orionis stars with
spectral types earlier than F0, listed in Table~\ref{table.earlytype}.
The list includes three stars in the eponymous $\sigma$~Ori Trapezium-like
system with spectral types B2 or earlier.
In Fig.~\ref{xfig_fluxLXLJ_i-J}, all eight of them have colours $i-J
\lesssim$ 0.2\,mag and display a wide range of $L_X / L_J$ ratios.
The spectral types in Table~\ref{table.earlytype} were borrowed from the
bright-star compilation in Caballero (2007a), except for the secondaries in the
binary systems Nos.~3 and~10 (a colon, ``:'', after a spectral type denotes
uncertainty; the letters ``p'' and ``e'' indicate peculiarity and emission,
respectively).
We estimated a K--M: spectral type for Mayrit~42062~B, the companion at $\rho
\approx$ 0.33\,arcsec to $\sigma$~Ori~E, based on its approximate $K_{\rm s}$
magnitude as evaluated by Bouy et~al. (2009).
The estimation of the late B-early A spectral type for Mayrit~306125~B, the
companion at $\rho \approx$ 0.47\,arcsec to Mayrit~306125~A (HD~37525), was
taken from Caballero et~al. (2009).
The brightest star in the cluster, No.~1/$\sigma$~Ori~AB + ``F'', seems to be
actually a close triple system of OB stars (Frost \& Adams 1904; Bolton 1974;
Caballero 2008a; S. Sim\'on-D\'{\i}az et~al., in~prep.).
Only two stars, No.~53/Mayrit~524060 and No.~88/Mayrit~960106, are not known to
form part of a multiple system.
Of the eight early-type stars, three (Nos. 1, 3, and~10) were bright enough in
X-rays for HRI/{\em ROSAT} to be analysed by Caballero et~al. (2009).
Another three stars (Nos. 34, 53, and~74) were detected with EPIC/{\em
XMM-Newton} by Franciosini et~al. (2006).
In practice, they could not resolve the X-ray emission coming from the system
HD~294272 (No.~34/Mayrit~189303 and No.~74/Mayrit~182303).
The pair was first resolved in X-rays by Caballero (2007a) using our HRC-I/{\em
Chandra} dataset.
Of the other two stars, No.~88/Mayrit~960106 was detected with PSPC/{\em ROSAT}
by White et~al. (2000) but escaped other X-ray surveys.
The presence of the last star, No.~70/Mayrit~13084 ($\sigma$~Ori~D), in the
current HRC-I data was already noticed by Sanz-Forcada et~al. (2004), Caballero
(2007b), and Skinner et~al. (2008), but it had never been analysed.
The B2V star was not detected either with HRI-PSPC/{\em ROSAT}, EPIC/{\em
XMM-Newton}, or ACIS-S/{\em Chandra}.
The early-type stars with the lowest $L_X / L_J$ ratios were No.~70/Mayrit~13084
and No.~74/Mayrit~182303, which explains their previous non-detections, while
the star with the highest $L_X / L_J$ ratio was No.~88/Mayrit~960106.
This is the B9-type giant V1147~Ori, an $\alpha^2$~CVn-type variable with
peculiar silicon abundance (Joncas \& Borra 1981; North 1984; Catalano \& Renson
1998).
Its non-detection in previous surveys with HRI/{\em ROSAT}, EPIC/{\em
XMM-Newton}, and ACIS-S/{\em Chandra} may simply be due to its location in
$\sigma$~Orionis, at about 16\,arcmin to the east of the cluster centre.
Only a few $\sigma$~Orionis stars more massive than 2.5\,$M_\odot$ (Caballero
2007a) have not been detected with HRC-I/{\em Chandra}.
They are {Mayrit~208324} (HD~294271, B5V),
{Mayrit~1116300}\footnote{L\'opez-Santiago \& Caballero (2008) provided a
restrictive upper limit of the EPIC/{\em XMM-Newton} apparent flux of
Mayrit~1116300.} (HD~37333, A1Va -- but see Naylor 2009), and
{Mayrit~11238} ($\sigma$~Ori~C, A2V).
The star {HD~37699}, a young B5V star with an envelope at 25.8\,arcmin to
the cluster centre, seems to be associated to the stellar population near the
Horsehead Nebula (Caballero \& Dinis 2008).
In summary, with HRC-I/{\em Chandra} we detected all the $\sigma$~Orionis stars
more massive than 5\,$M_\odot$ ($\sigma$~Ori~AB, D, E) and roughly two thirds of
the stars with masses in the interval 2.5 to 5\,$M_\odot$.
Stars in multiple systems or with spectral peculiarities tend to be among the
stars with detected X-ray emission.
\begin{table}
\caption[]{Early-type stars in $\sigma$~Orionis detected with HRC-I/{\em
Chandra}.}
\label{table.earlytype}
$$
\begin{tabular}{lll l}
\hline
\hline
\noalign{\smallskip}
No. & Name & Alternative & Spectral \\
& & name & type \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
1 & Mayrit AB & $\sigma$~Ori~AB + ``F''& O9.5V + B0.5V + ? \\
3 & Mayrit 42062 AB & $\sigma$~Ori~E & B2Vpe + K--M: \\
10 & \object{Mayrit 306125} AB & HD 37525 AB & B5Vp + B--A: \\
34 & \object{Mayrit 189303} & HD 294272 B & B8V \\
53 & \object{Mayrit 524060} & HD 37564 & A8V: \\
70 & \object{Mayrit 13084} & $\sigma$~Ori~D & B2V \\
74 & \object{Mayrit 182305} & HD 294272 A & B9.5III \\
88 & \object{Mayrit 960106} & V1147 Ori & B9IIIp \\
\noalign{\smallskip}
\hline
\end{tabular}
$$
\end{table}
\subsubsection{Brown dwarfs}
\label{section.browndwarfs}
\begin{table*}
\caption[]{Intermediate- and low-mass X-ray stars in $\sigma$~Orionis with
colours $J-K_{\rm s} >$ 1.15\,mag$^{a}$.}
\label{table.ctts}
$$
\begin{tabular}{lll c ccccc}
\hline
\hline
\noalign{\smallskip}
No. & Name & Alternative & $J-K_{\rm s}$ & Sp. & pEW(Li {\sc i}) & pEW(H$\alpha$) & SED & Phot. \\
& & name & [mag] & type & [m\AA] & [\AA] & class & variable \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
29 & Mayrit~92149~AB & [W96] rJ053847--0237 & 1.24$\pm$0.06 & M1.0: & 481$\pm$8 & --20.9$\pm$1.2 & II & no \\
36 & Mayrit~203283 & [W96] rJ053831--0235 & 1.16$\pm$0.04 & M0.0: & 479$\pm$6 & --10.2$\pm$0.9 & II & no \\
45 & Mayrit~609206 & V505 Ori & 2.01$\pm$0.04 & K7.0 & 431$\pm$11 & --25.1$\pm$0.7 & II & yes \\
61 & Mayrit~30241 & [HHM2007] 687 & 1.28$\pm$0.05 & ... & ... & ... & II & no \\
72 & Mayrit~521199 & TX Ori & 1.46$\pm$0.04 & K4 & ... & --16.6 & II & yes \\
75 & Mayrit~622103 & BG Ori & 1.30$\pm$0.04 & M0.5: & 480$\pm$7 & --40$\pm$3 & II & yes \\
79 & Mayrit~203260 & Haro 5--11 & 1.19$\pm$0.04 & M2.0: & 342$\pm$2 & --198$\pm$12 & II & no \\
80 & Mayrit~497054 & V509 Ori & 1.26$\pm$0.03 & M0.5: & 263$\pm$4 & --25.8$\pm$0.8 & II & yes \\
\noalign{\smallskip}
\hline
\end{tabular}
$$
\begin{list}{}{}
\item[$^{a}$] Spectral types and Li~{\sc i} and H$\alpha$ pseudo-equivalent
widths are from Zapatero Osorio et~al. (2002) and Sacco et~al. (2008).
The colon after the spectral type denotes an estimate based on photometry.
\end{list}
\end{table*}
Two red cluster members with high $L_X / L_J$ ratios stand out in the upper
right corner of the bottom panel in Fig.~\ref{xfig_fluxLXLJ_i-J}, with
colours $i-J \sim$ 2.4--2.7\,mag.
They are two of the only three X-ray brown dwarfs detected in $\sigma$~Orionis
with EPIC/{\em XMM-Newton} by Franciosini et~al. (2006):
No.~84/Mayrit~433123 (S\,Ori~25 -- B\'ejar et~al. 1999; Muzerolle et~al.
2003; Barrado y Navascu\'es et~al. 2003; Caballero et~al. 2004, 2007) and
No.~82/\object{Mayrit~396273} (S\,Ori~J053818.2--023539 -- B\'ejar et~al. 2004;
Kenyon et~al. 2005; Maxted et~al. 2008).
The third X-ray cluster brown dwarf, unidentified in our dataset, is
\object{Mayrit~487350} ([SE2004]~70, NX~67), which underwent a flare during the EPIC
observations and is located at a relatively short projected physical separation
to the planetary-mass object {\em candidate} {S\,Ori~68} (Scholz \&
Eisl\"offel 2004; Caballero et~al. 2006).
For Mayrit~396273, L\'opez-Santiago \& Caballero (2008) imposed a maximum
X-ray flux of 2.9\,10$^{-17}$\,W\,m$^{-2}$ from their EPIC/{\em XMM-Newton}
observations to the west of $\sigma$~Orionis, consistent with the flux reported
here (1.0$\pm$0.3\,10$^{-17}$\,W\,m$^{-2}$) and the flux estimated from the
Franciosini et~al. (2006) count rate ($\sim$0.6\,10$^{-17}$\,W\,m$^{-2}$).
The brown dwarf may have a high quiescent X-ray level or may have undergone
flares during both the Franciosini et~al. (2006) observations and ours.
Mayrit~396273 has the highest $L_X/L_J$ ratio in $\sigma$~Orionis after the two
young star candidates No.~94/Mayrit~887313 and No.~98/Mayrit~1178039 (which are
located at large offaxis separations).
The other brown dwarf, Mayrit~433123, is a photometric variable,
emission-line, accreting, substellar object of only about 0.058\,$M_\odot$, well
below the hydrogen burning mass limit (Caballero et~al. 2007).
From the long-term X-ray variability analysis in Section~\ref{section.longterm},
Mayrit~433123 was about five times brighter at the HRC-I/{\em Chandra} epoch
than at the EPIC/{\em XMM-Newton} one, which indicates that the brown dwarf
could flare during our observations.
Unfortunately, we could not perform a spectral analysis of the two
substellar objects, and the low statistics prevented us from drawing conclusions
on the origin of the X-ray emission from their light curves.
One of the scenarios that could explain the X-ray emission in brown dwarfs
is accretion from a circumsubstellar disc, since the high electrical
resistivities in the neutral atmospheres of ultracool dwarfs are expected
to prevent significant dynamo action (Mohanty et~al. 2002; Stelzer et~al.
2010).
In fact, Mayrit~433123, with an M6.5 spectral type and pEW(H$\alpha$) $\approx$
--44\,\AA, satisfies the empirical criterion of Barrado y Navascu\'es \&
Mart\'{\i}n (2003) for classifying accreting T~Tauri stars and substellar
analogues with low-resolution optical spectroscopy.
Besides, it seems to be rotationally locked to an imperceptible disc inclined at
$i \approx$ 46\,deg with respect to us (Caballero et~al. 2004, 2007; Luhman
et~al. 2008).
However, if a brown dwarf is young enough, it could still retain a
(non-self-sustained) primordial field.
Furthermore, Stelzer et~al. (2006) found that accreting brown dwarfs have lower
X-ray luminosity than non-accreting ones and suggested that substellar activity
is subject to the same mechanisms that suppress X-ray emission in
pre-main-sequence stars during the T~Tauri phase.
The object statistics (only two or three X-ray brown dwarfs) are still too poor
to conclude whether X-rays from brown dwarfs originate via the same processes as
from low-mass stars.
Using the same HRC-I/{\em Chandra} dataset, but with a coarse
identification process, Caballero (2007b) listed two additional faint X-ray
sources that were not identified by us, even during the 10-spurious search
(Section~\ref{section.beyond}).
They could be related to the young very low-mass star {Mayrit~50279}
(Sacco et~al. 2008) and the X-ray source {[FPS2006]~NX~77}.
Caballero (2007b) associated the latter to an infrared source with $J \sim$
19.0\,mag and $J-K_{\rm s} \sim$ 1.8\,mag (tentatively called Mayrit~72345).
If it belonged to $\sigma$~Orionis, it would be an L-type, planetary-mass object
with an estimated mass of 7\,$M_{\rm Jup}$.
Bouy et~al. (2009) agreed with this classification.
However, it would have an extraordinary luminosity ratio larger than
$L_X/L_{\rm bol} \sim 10^{-1}$ and, thus, we consider it instead an active
background galaxy candidate with very red infrared colours.
\subsubsection{Intermediate- and low-mass stars}
\label{section.ctts}
There are a few remarkable X-ray stars among the remaining cluster members and
candidates that are neither early-type stars nor young brown dwarfs.
One of them is No.~63/\object{Mayrit~591158} ([W96]~4771--0026), which has a
relatively blue colour $i-K_{\rm s} \approx$ 0.46\,mag and lies in the $L_X/L_J$
vs. $i-J$ diagram halfway between OB and active KM $\sigma$~Orionis stars.
Mayrit~591158 has cosmic lithium abundance, an effective temperature of about
6000\,K, a high rotational velocity of $v \sin{i}$ = 60$\pm$5\,km\,s$^{-1}$, a
partially-filled H$\alpha$ absorption line, and [S~{\sc ii}] and [N~{\sc ii}]
lines in emission (Caballero 2006; Gonz\'alez-Hern\'andez et~al. 2008).
This star is significantly warmer than the other six X-ray stars in the diagram
with colours 0.5\,mag $\lesssim i-J \lesssim$ 1.0\,mag, all of which have strong
lithium absorption lines and spectral type (or effective temperature)
determinations between late-G--K0 and K7.
As a result, Mayrit~591158 is the only X-ray emitter in $\sigma$~Orionis with a
spectral type between F and mid-G\footnote{Furthermore, Mayrit~591158 and
Mayrit~524060 (A8V:) are the only X-ray emitters in $\sigma$~Orionis with
spectral types between early-A and mid-G.}.
This fact is probably associated with the high reported rotational velocity,
which may favour an enhancement of the magnetic activity.
Another remarkable X-ray source is the young low-mass star candidate
No.~103/\object{Mayrit~578123} ([FPS2006]~NX~153), which is the third faintest
X-ray source in our sample and has a high $L_X/L_J$ ratio.
We estimated a mass of about 0.08--0.09\,$M_\odot$ from its $J$-band magnitude
as in Caballero et~al. (2007).
No spectroscopy of Mayrit~578123 is available to confirm its membership in
$\sigma$~Orionis.
It has been widely discussed in the literature whether classical
(accreting) T~Tauri stars have a lower frequency and intensity of X-ray emission
than weak-line (non-accreting) T~Tauri stars (e.g., Feigelson et~al. 1993;
Neuh\"auser et~al. 1995; Preibisch \& Zinnecker 2002; Telleschi et~al. 2007
-- see also Stelzer et~al. 2006 for a discussion on X-ray emission from
T~Tauri-like brown dwarfs).
In the $\sigma$~Orionis cluster, Franciosini et~al. (2006), Caballero (2007b),
and L\'opez-Santiago \& Caballero (2008) confirmed the real deficiency in
classical T~Tauri stars in the XLF.
Some hypotheses have been presented to explain this deficiency, such as cooling
of active regions by accretion or absorption of X-rays by dust in a
circumstellar disc.
In the second picture, the geometry of the star-disc system with respect to us
plays a crucial r\^ole (i.e., edge-on discs occult the central object, while
face-on ones do not).
Since the inclination angles of circumstellar discs are randomly distributed, we
expect no relation between the strength of the X-ray emission and the
near-infrared flux excess.
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{AA14861f12a.eps}
\includegraphics[width=0.49\textwidth]{AA14861f12b.eps}
\includegraphics[width=0.49\textwidth]{AA14861f12c.eps}
\caption{{\em Top panel:} spatial location diagram.
The different symbols represent:
cluster star and brown dwarf members and candidates (--red-- filled stars),
field stars (--blue-- crosses), galaxies with optical/near-infrared counterpart
(--blue-- pluses), and galaxies without counterpart (--blue-- open circles).
Size is 40 $\times$ 40\,arcmin$^2$, with centre on $\sigma$~Ori~AB.
{\em Middle panel:} count rate of $\sigma$~Orionis stars as a function of the
angular separation to the cluster centre.
The dashed line sketches the approximate lower limit for detection of the
HRC-I/{\em Chandra} observations.
{\em Bottom panel:} relative cumulative number of X-ray $\sigma$~Orionis
star and brown dwarf members and candidates as a function of angular separation
to the cluster centre, $\rho$.
The dashed line indicates the expected values if the X-ray stars followed a
volume-density law proportional to $\rho^{-2}$.}
\label{xfig_delta_alpha}
\end{figure}
Following this discussion, we investigated the reddest KM-type X-ray stars in
$\sigma$~Orionis, which we expected to be classical T~Tauri stars with discs.
The eight X-ray stars with colours $J-K_{\rm s} >$ 1.15\,mag listed in
Table~\ref{table.ctts} have spectral energy distributions (from the
optical to 8.0--24\,$\mu$m) typical of disc-harbouring objects according to
Hern\'andez et~al. (2007).
Except for No.~61/Mayrit~30241, which lacks spectroscopy, all the stars satisfy
the H$\alpha$-accretion criterion of Barrado y Navascu\'es \& Mart\'{\i}n
(2003).
Of them, only two stars, No.~72/\object{Mayrit~521199} (TX~Ori) and, especially,
No.~45/\object{Mayrit~609206} (V505~Ori, with $J-K_{\rm s}$ =
2.01$\pm$0.04\,mag) have colours redder than 1.4\,mag, while in $\sigma$~Orionis
there are about a dozen KM-type stars redder than this value (Caballero 2008c).
For example, none of the stellar sources of the four Herbig-Haro objects in
$\sigma$~Orionis (Reipurth et~al. 1998), which also have very red $J-K_{\rm s}$
colours, were detected with HRC-I (but the source of HH~445 was detected by
Franciosini et~al. 2006 -- Section~\ref{section.epicxmmnewton}).
Likewise, only six of the roughly thirty KM-type $\sigma$~Orionis stars redder
than $J-K_{\rm s}$ = 1.2\,mag were detected with HRC-I.
A detailed analysis of the frequency of X-ray emitters as a function of mass,
disc presence, and degree of accretion remains to be done, but the values above
hint at a lower frequency and intensity of X-ray emission in classical
(accreting) T~Tauri stars in $\sigma$~Orionis than in weak-line (non-accreting)
T~Tauri stars.
\subsection{Spatial distribution of X-ray sources}
\label{section.spatial}
As a final analysis of the HRC-I data, we investigated the spatial distribution
of X-ray stars in $\sigma$~Orionis.
From the top panel in Fig.~\ref{xfig_delta_alpha}, the cluster stars are
concentrated towards the centre, defined by the eponymous $\sigma$~Ori~AB
system, which coincides with the centre of the field of view to within a small
offset of 13\,arcsec (Section~\ref{section.dataretrieval}).
The apparent concentration of galaxies without optical/near-infrared counterpart
and field stars in the innermost 10\,arcmin is due to the combined effect of
their faintness and the decreasing sensitivity of the HRC-I detector at large
offaxis separations.
Only relatively bright X-ray fore- and background sources, such as the field
star No.~69/[W96]~rJ053829--0223 or, especially, the galaxy 2E~1456, could be
detected at more than 10\,arcmin from the pointing centre.
The middle panel in Fig.~\ref{xfig_delta_alpha} illustrates the effect of
the degradation of the sensitivity towards the HRC-I borders: while roughly all
the X-ray sources with count rates $CR >$ 0.1\,ks$^{-1}$ were detected in the
central area, the lower limit for detection increased up to about 1\,ks$^{-1}$
at 10\,arcmin and about 4\,ks$^{-1}$ at 20\,arcmin.
According to Caballero (2008a), the radial distribution of $\sigma$~Orionis
stars (regardless of their X-ray emission) follows a power law
proportional to the angular separation to the cluster centre, $\rho^{+1}$, valid
only for $\rho \lesssim$ 20\,arcmin.
This distribution corresponds to a volume density proportional to $\rho^{-2}$,
which is expected from the collapse of an isothermal spherical molecular cloud.
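As a consistency check of this projection argument, independent of the cluster data, the following numerical sketch (with hypothetical normalisation constants, not values fitted to $\sigma$~Orionis) integrates a volume density proportional to $\rho^{-2}$ inside a sphere along the line of sight and over projected annuli; the cumulative number of objects within a projected radius $\rho$ then indeed grows as $\rho^{+1}$ in the inner region:

```python
import math

# Illustration only: hypothetical normalisation (k = 1, cloud radius R = 1),
# not values fitted to the sigma Orionis data.

def surface_density(rho, R=1.0, k=1.0):
    # Project the volume density n(r) = k / r^2 along the line of sight:
    # Sigma(rho) = 2 * int_0^{sqrt(R^2 - rho^2)} k / (rho^2 + z^2) dz,
    # which has the closed form (2k / rho) * atan(sqrt(R^2 - rho^2) / rho).
    return (2.0 * k / rho) * math.atan(math.sqrt(R * R - rho * rho) / rho)

def cumulative_counts(rho_max, steps=20000):
    # N(<rho) = int_0^{rho} Sigma(r) * 2 * pi * r dr  (midpoint rule)
    h = rho_max / steps
    return sum(surface_density((i + 0.5) * h) * 2.0 * math.pi * (i + 0.5) * h
               for i in range(steps)) * h

ratio = cumulative_counts(0.10) / cumulative_counts(0.05)
print(ratio)  # close to 2, i.e. N(<rho) grows like rho^{+1} in the inner region
```

Doubling the projected radius doubles the cumulative counts to within a few per cent; the small deficit comes from the outer truncation of the sphere.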
From the bottom panel in Fig.~\ref{xfig_delta_alpha}, the X-ray stars in
$\sigma$~Orionis follow the power law $\rho^{+1}$ only in the innermost
4\,arcmin.
Apart from the limited field of view of the detector, the degradation of the
sensitivity towards the HRC-I borders becomes important at large off-axis
separations, and many X-ray $\sigma$~Orionis stars were missed during the
observations.
We estimated that about 30 and more than 100 young stars and brown dwarfs
were missed in the 4--10 and 10--20\,arcmin annuli, respectively.
The sensitivity degradation must be taken into account when frequencies of X-ray
emitters are computed.
\section{Summary}
\label{section.summary}
We carried out a detailed analysis of the X-ray emission of young stars in the
$\sigma$~Orionis cluster ($\tau \sim$ 3\,Ma, $d \sim$ 385\,pc).
We analysed public HRC-I/{\em Chandra} observations obtained in November 2002.
The wide field of view, long exposure time of 97.6\,ks, and the superb spatial
resolution of HRC-I/{\em Chandra} allowed us to detect 107 X-ray sources, many
of which had not been identified in previous searches with IPC/{\em Einstein},
HRI/{\em ROSAT}, ACIS-S/{\em Chandra}, or EPIC/{\em XMM-Newton}.
After cross-matching with optical and near-infrared catalogues, we classified
the X-ray sources into 84 young cluster members and candidates, four active
field stars, and 19 galaxies, of which only two have known optical and
near-infrared counterparts.
Among the cluster members and candidates, two are {\em bona fide} brown dwarfs
with signposts of youth.
A robust Poisson-$\chi^2$ analysis to search for X-ray variability showed that
at least seven young stars displayed flares during the HRC-I observations, while
two (or three, if we include the B2Vpe star No.~2/Mayrit~42062~AB --
$\sigma$~Ori~E) may display rotational modulation.
Some of the observed flares were intense, with peak-to-quiescence ratios of
about six and durations longer than 20\,ks (and longer than our observations in
one~case).
We compared the count rates and variability status of our HRC-I sources with the
results of previous observations with {\em Einstein}, {\em ROSAT}, {\em
Chandra}, and {\em XMM-Newton}, and found that eleven stars displayed
significant X-ray flux variations between our observations and others, mostly
ascribed to flaring activity.
Interestingly, during the HRC-I observations, the brown dwarf
No.~84/Mayrit~433123 (S\,Ori~25) underwent an X-ray brightening by a factor five
with respect to the EPIC/{\em XMM-Newton} epoch.
Besides, we revisited old {\em ROSAT} data and found new flaring activity
in the $\sigma$~Orionis star No.~37/Mayrit~102101~AB.
To facilitate further studies, we also compiled the {\em ROSAT} sources
presented by Wolk (1996).
From this compilation, we noticed that he tabulated X-ray emission from the
brown dwarf Mayrit~433123, but he was not able to classify it as one of the
first discovered substellar objects.
The X-ray luminosity function that we presented here ranges from spectral type
O9.5V, which corresponds to a mass of about 18\,$M_\odot$, to M6.5, below the
hydrogen burning mass limit at 0.07\,$M_\odot$.
We found a tendency of early-type stars in multiple systems or with spectral
peculiarities to display X-ray emission.
On the other side of the luminosity function, the two detected brown dwarfs and
the least massive young star candidate are among the $\sigma$ Orionis members
with the highest values of $L_X/L_J$ luminosity ratios.
We found X-ray emission from only two stars in the spectral type interval from
early A to intermediate-late G.
We noticed that most of the $\sigma$~Orionis T~Tauri stars with the largest
infrared excesses have not been detected in X-ray surveys in the area, which
supports the scenario of a lower frequency and intensity of X-ray emission of
classical (accreting) T~Tauri stars than weak-line (non-accreting) T~Tauri
stars.
The only very red ($J-K_{\rm s} >$ 1.5\,mag) young star detected with
HRC-I/{\em Chandra} was No.~45/Mayrit~609206, which is a classical
T~Tauri star with a strong H$\alpha$ emission for its spectral type
(K7.0), photometric variability, and a spectral energy distribution typical of
Class~II objects.
Finally, we investigated the spatial distribution of the X-ray cluster members,
which is strongly affected by the degradation of the sensitivity towards the
borders of the HRC-I detector.
While roughly all the X-ray sources with count rates $CR >$ 0.1\,ks$^{-1}$ at
less than 4\,arcmin to the cluster centre were detected, the estimated numbers
of missed X-ray cluster members in the 4--10 and 10--20\,arcmin annuli are 30
and 100, respectively.
Since the core of $\sigma$~Orionis extends up to 20\,arcmin from the
centre, defined by the Trapezium-like $\sigma$~Ori system, additional de-centred
pointings with HRC-I/{\em Chandra}, EPIC/{\em XMM-Newton}, or the future Wide
Field Imager + Hard X-ray Imager instruments onboard the ESA-NASA-JAXA space
mission {\em International X-ray Observatory} are necessary to investigate the
full X-ray luminosity function of the cluster.
To conclude, a few shallow pointings around the cluster centre will probably be
more efficient to detect and characterise new X-ray young brown dwarfs in
$\sigma$~Orionis than a single deep pointing centred on the Trapezium-like
system.
\begin{acknowledgements}
We are indebted to the anonymous referee for his/her quick, polite, and greatly
valuable report.
JAC is an {\em investigador Ram\'on y Cajal} at the CAB,
JFAC is a researcher of the Consejo Nacional de Investigaciones
Cient\'{\i}ficas y Tecnol\'ogicas (CONICET) at the UNComa, and
JLS is an AstroCAM post-doctoral fellow at the UCM.
This research made use of the SIMBAD, operated at Centre de Donn\'ees
astronomiques de Strasbourg, France, and NASA's Astrophysics Data System.
PWDetect has been developed by scientists at Osservatorio Astronomico di
Palermo.
Financial support was provided by the Universidad Complutense de Madrid, the
Comunidad Aut\'onoma de Madrid, the Spanish Ministerio de Ciencia e
Innovaci\'on, the Secretar\'{\i}a de Ciencia y Tecnolog\'{\i}a de la Universidad
Central de C\'ordoba, and the Argentinian CONICET under grants
AyA2008-06423-C03-03,
AyA2008-00695,
PRICIT S-2009/ESP-1496, and
PICT 2007-02177.
\end{acknowledgements}
\section{\@startsection {section}{1}{\z@}%
{-3.5ex \@plus -1ex \@minus -.2ex}%
{2.3ex \@plus .2ex}%
{\reset@font\bfseries}}
\renewcommand\subsection{\@startsection {subsection}{1}{\z@}%
{-3.5ex \@plus -1ex \@minus -.2ex}%
{2.3ex \@plus .2ex}%
{\reset@font\it \bfseries}}
\renewcommand\subsubsection{\@startsection {subsubsection}{1}{\z@}%
{-3.5ex \@plus -1ex \@minus -.2ex}%
{2.3ex \@plus .2ex}%
{\reset@font\it \bfseries \underline}}
\thispagestyle{empty}
\makeatother
\usepackage[a4paper,tmargin=2.5cm,bmargin=3.0cm,rmargin=2.5cm,lmargin=2.5cm]{geometry}
\setlength{\parindent}{1cm}
\renewcommand{\baselinestretch}{1.0}
\usepackage{caption}
\renewcommand{\captionfont}{\it \fontsize{10}{10}\selectfont}
\usepackage{fancyhdr}
\pagestyle{fancy}
\pagestyle{headings}
\begin{document}
\let\cleardoublepage\clearpage
\renewcommand\contentsname{\normalsize{\hspace{12 pt}Table of Contents:}}
\tableofcontents
\thispagestyle{empty}
\let\cleardoublepage\clearpage
\newpage
\pagenumbering{arabic}
\newcommand{\numchapter}{11}
\newcommand{\firstpage}{1}
\setcounter{page}{\firstpage}
\setcounter{chapter}{\numchapter}
\cfoot{}
\begin{center}
{\fontsize{20}{20}\selectfont \textbf{
\hfill Chapter \numchapter\\[5mm]
Homogenization Techniques for Periodic Structures\\[10mm]
}}
{\fontsize{14}{14}\selectfont Sebastien Guenneau$^{(1)}$, Richard Craster$^{(2)}$, Tryfon Antonakakis$^{(2,3)}$,
Kirill Cherednichenko$^{(4)}$ and Shane Cooper$^{(4)}$\\[5mm] }
{\fontsize{10}{10}\selectfont \textit{
$^{(1)}$CNRS, Aix-Marseille Universit{\'e}, \'Ecole Centrale Marseille, Institut Fresnel,\\
13397 Marseille Cedex 20, France, {\color{blue}{\underline{sebastien.guenneau@fresnel.fr}}}\\
$^{(2)}$ Department of Mathematics, Imperial College London, United Kingdom,
{\color{blue}{\underline{r.craster@imperial.ac.uk}}},\\
$^{(3)}$ CERN, Geneva, Switzerland, {\color{blue}{\underline{tryfon.antonakakis09@imperial.ac.uk}}},\\
$^{(4)}$ Cardiff School of Mathematics, Cardiff University, United Kingdom, \\
{\color{blue}{\underline{cherednichenko@cardiff.ac.uk}}},
{\color{blue}{\underline{coopersa@cf.ac.uk}}}.
}}
\end{center}
\thispagestyle{empty}
\pagestyle{fancyplain}
\renewcommand{\headrulewidth}{0.0pt}
\lhead[\itshape{\small{\thechapter .\thepage}}]{\itshape{\small{S. Guenneau et al.: Homogenization Techniques for Periodic Structures}}}
\rhead[\itshape{\small{Gratings: Theory and Numeric Applications, 2012}}]{\itshape{\small{\thechapter .\thepage}}}
\cfoot{}
\section{Introduction}
In this chapter we describe a selection of mathematical techniques and results that suggest interesting links between the theory of gratings and the theory of homogenization, including a brief introduction to the latter. By no means do we purport to imply that homogenization theory is an exclusive method for studying gratings, nor do we aim to be exhaustive in our choice of topics within the subject of homogenization. Our preferences here are motivated most of all by our own latest research, and by our outlook on the future interactions between these two subjects. We have also attempted, in what follows, to contrast the ``classical'' homogenization (Section \ref{clashom1}), which is well suited for the description of composites as we have known them since their advent until about a decade ago, with the ``non-standard'' approaches,
high-frequency homogenization (Section \ref{hfh}) and high-contrast homogenization (Section \ref{kc}),
which have been developing in close relation to the study of photonic crystals and metamaterials, which exhibit properties unseen in conventional composite media, such as negative refraction, allowing for super-lensing through a flat heterogeneous lens,
and cloaking, which considerably reduces the scattering by finite-size objects (invisibility) in a certain frequency range. These novel
electromagnetic paradigms have renewed the interest of physicists and applied mathematicians alike in the theory of gratings
\cite{bookjaune}.
\subsection{Historical survey on homogenization theory}
The development of theoretical physics and continuum mechanics in the second half of the 19th and first half
of the 20th century has
motivated the question of justifying the macroscopic view of physical phenomena (at the scales visible to the
human eye) by ``upscaling'' the implied microscopic rules for particle interaction at the atomic level through
the phenomena at the intermediate, ``mesoscopic'', level (from tenths to hundreds of microns). This ambition
has led to an extensive worldwide programme of research, which is still far from being complete as of now.
Trying to give a very crude, but more or less universally applicable, approximation of the aim of this extensive activity, one could say that it has to do with developing approaches to averaging out in some way material properties at one level, with the aim of getting a less detailed, but almost equally precise, description of the material response. Almost every word in the last sentence needs to be clarified already, and this is essentially the point where one could start giving an overview of the activities that took place in the years that followed the great physics advances of a century ago. Here we focus on the research that has been generally referred to as the theory of homogenization, starting from the early 1970s. Of course, even at that point it was not, strictly speaking, the beginning of the
subject, but we will use this period as a kind of reference point in this survey.
The question that a mathematician may pose in relation to the perceived concept of ``averaging out'' the detailed features of a heterogeneous structure in order to get a more homogeneous description of its behaviour is the following: suppose that we have the simplest possible linear elliptic partial differential equation (PDE) with periodic coefficients of period
$\eta>0.$ What is the asymptotic behaviour of the solutions to this PDE as $\eta\to0$? Can a boundary-value problem be written that is satisfied by the leading term in the asymptotics, no matter what
the data unrelated to material properties are? Several research groups became engaged in addressing this question about four decades ago, most notably those led by N. S. Bakhvalov, E. De Giorgi, J.-L. Lions, V. A. Marchenko, see \cite{Bakhvalov}, \cite{DeGiorgi}, \cite{Bensoussan78a}, \cite{MarchenkoKhruslov} for some of the key contributions of
that period. The work of these groups has immediately led to a number of different perspectives on the apparently basic question asked above, which in part was due to the different contexts that these research groups had
had exposure to prior to dealing with the issue of averaging. Among these are the method of multiscale
asymptotic expansions (also discussed later in this chapter), the ideas of compensated compactness
(where the contribution by L. Tartar and F. Murat \cite{Tartar}, \cite{Murat} has to be mentioned specifically),
the variational method (also known as ``$\Gamma$-convergence''). These approaches were subsequently
applied to various contexts, both across a range of mathematical setups (minimisation problems, hyperbolic equations, problems with singular boundaries) and across a number of physical contexts (elasticity, electromagnetism, heat conduction). Some new approaches to homogenization appeared later on, too, such as
the method of two-scale convergence by G. Nguetseng \cite{Nguetseng} and the periodic
unfolding technique by D. Cioranescu, A. Damlamian and G. Griso \cite{CDG}.
Established textbooks that summarise these developments in different time periods, include, in
addition to the already cited book \cite{Bensoussan78a}, the monographs \cite{BP}, \cite{jikov94a}, \cite{SanchezPalencia},
and more recently \cite{ChPSh}. The area that is perhaps worth a separate mention is that of stochastic homogenization, where some pioneering contributions were made by S. M. Kozlov \cite{Kozlov}, G. C. Papanicolaou and S. R. S. Varadhan \cite{PV}, and which has in recent years been approached
with renewed interest.
A specific area of interest within the subject of homogenization that has been rapidly developing during the last decade or so is the study of the behaviour of ``non-classical'' periodic structures, which we understand here as
those for which compactness of bounded-energy solution sequences fails to hold as $\eta\to0.$ The related mathematical research has been strongly linked to, and indeed influenced by, the parallel development of the
area of metamaterials and their application in physics, in particular for electromagnetic phenomena.
Metamaterials can be roughly defined as those whose properties at the macroscale are affected by
higher-order behaviour as $\eta\to0.$ For example, in classical homogenization for elliptic second-order PDE one requires the leading (``homogenised solution'') and the first-order (``corrector'') terms in the $\eta$-power-series expansion of the solution in order to determine the macroscopic properties, which results in a limit of the same type as the original problem, where the solution flux (``stress'' in elasticity, ``induction'' in electromagnetics, ``current'' in electric conductivity,
``heat flux'' in heat conduction) depends on the solution gradient only (``strain'' in elasticity,
``field'' in electromagnetics, ``voltage'' in electric conductivity, ``temperature gradient'' in heat conduction).
If, however, one decides for some reason, or is forced by the specific problem setup, to include higher-order terms as well, one is likely to have to deal with an asymptotic limit of a different type for small $\eta,$ which may, say, include second gradients of the solution in its constitutive law. One possible reason for the need to include such unusual effects is the non-uniform (in $\eta$) ellipticity of the original problems or, using the language of materials science, the high contrast in the material properties of the given periodic structure. Perhaps the earliest mathematical example of such degeneration is the so-called ``double-porosity model'', which was first considered by G. Allaire \cite{Allaire} and T. Arbogast, J. Douglas, U. Hornung \cite{Arbogastetal} in the early 1990s. A detailed analysis of the properties of double-porosity models, including their striking spectral behaviour, did not appear until the work \cite{Zhikov2000} by V. V. Zhikov. We discuss the double-porosity model and its properties in more detail in Section \ref{kc}.
Before moving on to the next section, it is important to mention one line of research within the homogenization area that has had a significant r\^{o}le in terms of application of mathematical analysis to materials, namely the subject of periodic singular structures (or ``multi-structures'', see \cite{KMM}). While this subject is clearly linked to the general analysis of differential operators on singular domains (see \cite{MNP}), there has been a series of works that develop specifically homogenization techniques for periodic structures of this kind (also referred to as ``thin structures'' in this context), {\it e.g.} \cite{Zhikovstructures}, \cite{ZhikovPastukhova}. It turns out that overall properties of such materials are similar to those of materials with high contrast. In the same vein, it is not difficult to see that compactness of bounded-energy sequences for problems on periodic thin structures does not hold (unless the sequence in question is suitably rescaled), which leads to the need for non-classical, higher-order, techniques in their analysis.
\subsection{Multiple scale method: Homogenization of microstructured fibers}\label{clashom1}
\begin{figure}[h]\centerline{\scalebox{0.7}{\includegraphics{fighomo1.pdf}}}
\vspace{-2cm}\caption{A diagram of the homogenization process:
when the parameter $\eta$ gets smaller ($\eta<\eta'$), the number of cells inside the fixed domain $\Omega_f$ becomes
larger. When $\eta\ll 1$, $\Omega_f$ is filled with a large number of small cells, and can thus be considered as an
effective (or homogenized) medium. Such a medium is usually described by anisotropic parameters depending upon
the resolution of auxiliary (``unit cell'') problems set on the rescaled microcopic cell $Y$ which typically contains one inclusion $D$.}
\label{fig1}
\end{figure}
Let us consider a doubly periodic grating of pitch $\eta$ and finite extent, such as shown in Fig.~\ref{fig1}. An interesting problem to look at is that of transverse electric (TE) modes---when the magnetic field has the form $(0,0,H)$---propagating within a micro-structured fiber with infinitely conducting walls. Such an eigenvalue problem is known to have
a discrete spectrum: we look for eigenfrequencies $\omega$ and associated eigenfields $H$ such that:
$$
({\cal P}_\eta): \left\{
\begin{array}{ll}
\displaystyle{-\sum_{i,j=1}^2\frac{\partial}{\partial x_i}\left( \varepsilon_{ij}^{-1}(\frac{{\bf x}}{\eta})
\frac{\partial H({\bf x})}{\partial x_j}\right)} = \omega^2 \mu_0\varepsilon_0 H({\bf x}) \; & \hbox{in $\Omega_f$} \; , \\
\displaystyle{\varepsilon_{ij}^{-1}(\frac{{\bf x}}{\eta})
\frac{\partial H({\bf x})}{\partial x_i}n_j}=0 \; & \hbox{on $\partial\Omega_f$} \; ,
\end{array}
\right.
$$
where we use the convention ${\bf x}=(x_1,x_2)$, $\partial\Omega_f$ denotes the boundary of $\Omega_f$,
and ${\bf n}=(n_1, n_2)$ is the unit outward normal to the boundary.
Here, $\varepsilon_0\mu_0=c^{-2}$ where $c$ is the speed of light in vacuum
and we assume that matrix coefficients of relative permittivity $\varepsilon_{ij}({\bf y})$,
with $i,j=1,2$, are real, symmetric (with
the convention ${\bf y}=(y_1,y_2)$), of period $1$ (in $y_1$ and $y_2$) and satisfy:
\begin{equation}
M{\mid{\bm \xi}\mid}^2\geq\varepsilon_{ij}({\bf y})\xi_i\xi_j\geq m{\mid{\bm \xi}\mid}^2 \; , \; \forall {\bm \xi}\in{\rm I\!R}^2 \; , \; \forall {\bf y}\in Y={[0,1]}^2 \; ,
\label{ineqhom}
\end{equation}
where ${\mid{\bm \xi}\mid}^2=(\xi_1^2+\xi_2^2)$, for given strictly positive constants $M$ and $m$.
This condition is met for all conventional dielectric media\footnote{When the periodic medium is assumed to be isotropic,
$\varepsilon_{ij}(y)=\varepsilon({\bf y})\delta_{ij}$, with the Kronecker symbol $\delta_{ij}=1$ if $i=j$
and $0$ otherwise.
For instance, (\ref{ineqhom}) typically has the bounds $M=13$ and $m=1$ in optics.
One class of problems where this condition (\ref{ineqhom}) is violated
(the bound below, to be more precise)
is considered in Section \ref{kc} on high-contrast homogenization.}.
\noindent We can recast $({\cal P}_\eta)$ as follows:
$$-\frac{\partial}{\partial x_i}\sigma^i(H({\bf x}))=\frac{\omega^2}{c^2}H({\bf x})$$
with
$$\sigma^i(H({\bf x}))=\varepsilon_{ij}^{-1}\left(\frac{{\bf x}}{\eta}\right)
\frac{\partial H({\bf x})}{\partial x_j} \; .$$
\noindent The multiscale method relies upon the following ansatz:
\begin{equation}
H=H_0({\bf x})+\eta H_1({\bf x},{\bf y})+\eta^2 H_2({\bf x},{\bf y}) + ...
\label{sebeq01}
\end{equation}
where the $H_i({\bf x},{\bf y})$, $i=1,2,...$, are periodic functions of period $Y$ in ${\bf y}$.
\noindent In order to proceed with the asymptotic algorithm, one needs to rescale
the differential operator as follows
\begin{equation}
\frac{\partial H}{\partial x_i}=\left( \frac{\partial H_0}{\partial z_i}+\frac{\partial H_1}{\partial y_i}\right)
+\eta \left( \frac{\partial H_1}{\partial z_i}+\frac{\partial H_2}{\partial y_i}\right)+...
\label{sebeq02}
\end{equation}
where $\partial/\partial z_i$ stands for the partial derivative with respect to
the $i$th component of the macroscopic variable ${\bf x}$.
\noindent It is useful to set
$$\sigma^i(H)=\sigma^i_0+\eta\sigma^i_1+\eta^2\sigma^i_2+...$$
which makes (\ref{sebeq02}) more compact.
\noindent Collecting coefficients sitting in front of the same powers of $\eta$, we obtain:
$$\sigma^i_0(H)=\varepsilon_{ij}^{-1}({\bf y})\left( \frac{\partial H_0}{\partial z_j}+\frac{\partial H_1}{\partial y_j}\right)$$
$$\sigma^i_1(H)=\varepsilon_{ij}^{-1}({\bf y})\left( \frac{\partial H_1}{\partial z_j}+\frac{\partial H_2}{\partial y_j}\right)$$
and so forth, all terms being periodic in ${\bf y}$ of period $1$.
\noindent Upon inspection of problem $({\cal P}_\eta)$, we gather that
$$-\left(\frac{1}{\eta}\frac{\partial}{\partial y_i}+\frac{\partial}{\partial z_i}\right)\left(\sigma^i_0+\eta\sigma^i_1+...\right)=\frac{\omega^2}{c^2}H({\bf x})+...$$
so that at order $\eta^{-1}$
$$({\cal A}):-\frac{\partial}{\partial y_i}\sigma^i_0=0 \; ,$$
and at order $\eta^0$
$$({\cal H}):-\frac{\partial}{\partial z_i}\sigma^i_0-\frac{\partial}{\partial y_i}\sigma^i_1=\frac{\omega^2}{c^2} H_0 \; .$$
(the equations corresponding to higher orders in $\eta$ will not be used here).
\noindent Let us show that $({\cal H})$ provides us with an equation (known as the homogenized equation) associated with the macroscopic
behaviour of the microstructured fiber. Its coefficients will be obtained thanks to $({\cal A})$ which is an auxiliary problem related to
the microscopic scale. We will therefore be able to compute $H_0$ and $H_1$, and thus, in particular, the first terms
of $H$ and $\sigma^i$.
\noindent In order to do so, let us introduce the mean over $Y$, denoted $<.>$, an operator acting on
a function $g$ of the variable ${\bf y}$:
$$<g>=\frac{1}{\mid Y\mid}\int\int_Y g(y_1,y_2) dy_1dy_2 \; ,$$
where $\mid Y \mid$ is the area of the cell $Y$.
\noindent Applying the mean to both sides of $({\cal H})$, we obtain:
$$<({\cal H})>:-\frac{\partial}{\partial z_i} <\sigma^i_0>
-<\frac{\partial}{\partial y_i}\sigma^i_1>=\frac{\omega^2}{c^2} H_0 <1> \; ,$$
where we have used the fact that $<.>$ commutes with $\partial/\partial z_i$.
\noindent Moreover, invoking the divergence theorem, we observe that
$$<\frac{\partial}{\partial y_i}\sigma^i_1>=\frac{1}{\mid Y\mid}\int\int_Y \frac{\partial}{\partial y_i}\sigma^i_1({\bf y}) d{\bf y}
=\frac{1}{\mid Y\mid}\int_{\partial Y} \sigma^i_1({\bf y})n_i ds \; ,$$
where ${\bf n}=(n_1,n_2)$ is the unit outward normal to the boundary $\partial Y$ of $Y$.
This normal takes opposite values on opposite sides of $Y$, while the periodic function $\sigma^i_1$ takes equal values there, hence the integral over $\partial Y$ vanishes.
\noindent Altogether, we obtain:
$$<({\cal H})>:-\frac{\partial}{\partial z_i} <\sigma^i_0> =\frac{\omega^2}{c^2} H_0 \; ,$$
which only involves the macroscopic variable ${\bf x}$ and partial derivatives
$\partial /\partial z_i$ with respect to the macroscopic variable.
We now want to find a relation between
$<\sigma^i_0>$ and the gradient in ${\bf x}$ of $H_0$. Indeed, we have seen that
$$\sigma^i_0(H)
=\varepsilon_{ij}^{-1}({\bf y})\left( \frac{\partial H_0}{\partial z_j}+\frac{\partial H_1}{\partial y_j}\right) \; ,$$
\noindent which from $({\cal A})$ leads to
$$({\cal A}1):-\frac{\partial}{\partial y_i}\left(\varepsilon_{ij}^{-1}({\bf y})\frac{\partial H_1}{\partial y_j}\right)
=\left( \frac{\partial H_0}{\partial z_j}\right) \left(\frac{\partial}{\partial y_i}\varepsilon_{ij}^{-1}({\bf y})\right) \; .$$
\noindent We can look at $({\cal A}1)$ as an equation for the unknown $H_1({\bf x},{\bf y})$, periodic of period $Y$ in ${\bf y}$
and parametrized by ${\bf x}$. Such an equation determines its solution up to an additive constant.
In addition to that, the parameter ${\bf x}$ is only involved via the factor
$\partial H_0/\partial z_j$. Hence, by linearity, we can write the solution $H_1({\bf x},{\bf y})$ as follows:
$$H_1({\bf x},{\bf y})= \frac{\partial H_0({\bf x})}{\partial z_j} w^j({\bf y}) \; ,$$
where the two functions $w^j({\bf y})$, $j=1,2$, are the solutions to $({\cal A}1)$ obtained by setting one of the
$\partial H_0/\partial z_j({\bf x})$, $j=1,2$, equal to unity and the other to zero, that is, solutions to:
$$({\cal A}2):-\frac{\partial}{\partial y_i}\left(\varepsilon_{ij}^{-1}({\bf y})\frac{\partial w^k}{\partial y_j}\right)
=\delta_{jk} \left(\frac{\partial}{\partial y_i}\varepsilon_{ij}^{-1}({\bf y})\right) \; ,$$
with $w^k({\bf y})$, $k=1,2$ periodic functions in ${\bf y}$ of period $Y$
\footnote{We note that $({\cal A}2)$ are two equations which merely depend upon $\varepsilon_{ij}^{-1}({\bf y})$, that is
on the microscopic properties of the periodic medium. The two functions $w^k$ (defined up to an
additive constant) can be computed once and for all, independently of $\Omega_f$.}.
\noindent Since the functions $w^k({\bf y})$ are known, we note that
$$\sigma^i_0({\bf x},{\bf y})=\varepsilon_{ij}^{-1}({\bf y})\left( \frac{\partial H_0}{\partial z_j}+\frac{\partial H_1}{\partial y_j}\right)
=\varepsilon_{ij}^{-1}({\bf y})\left( \frac{\partial H_0}{\partial z_j}
+\frac{\partial H_0}{\partial z_k}\frac{\partial w^k({\bf y})}{\partial y_j}\right) \; ,$$
which can be written as
$$\sigma^i_0({\bf x},{\bf y})=\left( \varepsilon_{ik}^{-1}({\bf y})
+\varepsilon_{ij}^{-1}({\bf y})\frac{\partial w^k({\bf y})}{\partial y_j}\right)\frac{\partial H_0({\bf x})}{\partial z_k} \; .$$
\noindent Let us now apply the mean to both sides of this equation. We obtain:
$$<\sigma^i_0>({\bf x})=\varepsilon_{{\rm{hom}},ik}^{-1} \frac{\partial H_0({\bf x})}{\partial z_k} \; ,$$
which can be recast as the following homogenized problem:
$$
({\cal P}_0): \left\{\begin{array}{ll}
\displaystyle{-\sum_{i,k=1}^2\frac{\partial}{\partial x_i}\left( \varepsilon_{{\rm{hom}},ik}^{-1}
\frac{\partial H_0({\bf x})}{\partial x_k}\right)} = \omega^2 \mu_0\varepsilon_0 H_0({\bf x}) \; & \hbox{in $\Omega_f$} \; , \\
\displaystyle{\varepsilon_{{\rm{hom}},ik}^{-1}
\frac{\partial H_0({\bf x})}{\partial x_i}n_k}=0 \; & \hbox{on $\partial\Omega_f$} \; ,
\end{array}
\right.
$$
where $\varepsilon_{{\rm{hom}},ik}^{-1}$ denote the coefficients of the homogenized matrix of permittivity
given by:
\begin{equation}
\varepsilon_{{\rm{hom}},ik}^{-1}=\frac{1}{\mid Y \mid}\int\int_Y\left( \varepsilon_{ik}^{-1}({\bf y})
+\varepsilon_{ij}^{-1}({\bf y})\frac{\partial w^k({\bf y})}{\partial y_j}\right) \, d{\bf y} \; .
\label{sebeq03}
\end{equation}
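Although the derivation above concerns the TE eigenvalue problem, the two-scale limit is easy to visualise on a simpler model. The sketch below (our own illustration, not part of the waveguide example that follows) solves the one-dimensional Dirichlet source problem $-\left(a(x/\eta)u'\right)'=1$ on $(0,1)$, with $a=\varepsilon^{-1}$ piecewise constant, by finite differences for decreasing $\eta$, and compares with the homogenized solution $u_0(x)=x(1-x)<\varepsilon>/2$, whose effective coefficient is the harmonic mean of $a$, that is ${<\varepsilon>}^{-1}$:

```python
# A minimal 1D illustration (not the TE eigenproblem above): solve the source
# problem -(a(x/eta) u')' = 1 on (0,1), u(0) = u(1) = 0, with a(y) = 1/eps(y)
# and eps piecewise constant (4 on half of the periodicity cell, 1 on the
# other), and compare with the homogenised solution u0(x) = x(1-x)*<eps>/2.

def eps(y):
    # two-phase periodic permittivity, period 1, equal filling fractions
    return 4.0 if (y % 1.0) < 0.5 else 1.0

def solve(eta, n=20000):
    # second-order finite differences with the coefficient a = 1/eps sampled
    # at midpoints; tridiagonal system solved by the Thomas algorithm
    h = 1.0 / n
    am = [1.0 / eps(((i + 0.5) * h) / eta) for i in range(n)]  # a_{i+1/2}
    # interior unknowns u_1 .. u_{n-1}
    lower = [-am[i] for i in range(1, n - 1)]          # sub-diagonal
    diag = [am[i] + am[i + 1] for i in range(n - 1)]   # main diagonal
    upper = [-am[i + 1] for i in range(n - 2)]         # super-diagonal
    rhs = [h * h for _ in range(n - 1)]
    # forward elimination
    for i in range(1, n - 1):
        w = lower[i - 1] / diag[i - 1]
        diag[i] -= w * upper[i - 1]
        rhs[i] -= w * rhs[i - 1]
    # back substitution
    u = [0.0] * (n - 1)
    u[-1] = rhs[-1] / diag[-1]
    for i in range(n - 3, -1, -1):
        u[i] = (rhs[i] - upper[i] * u[i + 1]) / diag[i]
    return u, h

def sup_error(eta):
    # sup-norm distance between the oscillatory solution and the homogenised
    # one, u0(x) = x(1-x) * <eps> / 2 with <eps> = (4 + 1)/2 = 2.5
    u, h = solve(eta)
    eps_mean = 2.5
    return max(abs(u[i] - ((i + 1) * h) * (1 - (i + 1) * h) * eps_mean / 2.0)
               for i in range(len(u)))

print(sup_error(1 / 8), sup_error(1 / 32))  # error shrinks roughly like eta
```

For $\eta = 1/8$ and $\eta = 1/32$ the sup-norm discrepancy drops roughly by the expected factor of four, consistent with the $O(\eta)$ corrector term in the ansatz (\ref{sebeq01}).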
\noindent As an illustrative example of this homogenized problem, we consider a
microstructured waveguide consisting of a medium of relative
permittivity $\varepsilon=1.25$ with elliptic inclusions
(of minor and major axes $0.3$~cm and $0.4$~cm, respectively) with center-to-center spacing
$d=0.1$~cm, and with an infinitely conducting boundary, {\it i.e.} Neumann boundary
conditions in the TE polarization.
We use the COMSOL MULTIPHYSICS finite element package to solve the auxiliary
problem, and we find that ${[\varepsilon_{\rm hom}]}$ from (\ref{sebeq03}) reads \cite{zolgue1}
$$
\left(
\begin{array}{cc}
1.9296204& -1.0533083 \times 10^{-16} \\
-4.4417444 \times 10^{-17} & 2.1127643
\end{array}
\right) \; ,
$$
with $<\varepsilon>_{Y}= 2.2867255$. The off-diagonal
terms can be neglected.
If we assume that the transverse propagating modes in the metallic waveguide
have a small propagation constant $\gamma\ll 1$, the above mathematical
model describes the physics accurately. We show in Fig.~\ref{anisotropy}
a comparison between two TE modes of the microstructured waveguide
and its associated anisotropic homogenized counterpart. Both
eigenfrequencies and eigenfields match well (note that we use the
waveguide terminology wavenumber $k=\sqrt{\omega^2/c^2-\gamma^2}$).
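The quoted cell average can be recomputed directly. The following sketch assumes a unit cell of unit area, takes the quoted $0.3$ and $0.4$ as the semi-axes of the elliptic inclusion, and uses an inclusion permittivity of $4.0$ (the real part quoted in the caption of Fig.~\ref{potential}); these identifications are our own reading of the text, not stated explicitly above.

```python
import math

# Cross-check of <eps>_Y quoted above. Assumptions (ours, not stated in the
# text): unit cell of area 1, background (silica) permittivity 1.25,
# inclusion permittivity 4.0, and 0.3 / 0.4 read as the semi-axes.
eps_background = 1.25
eps_inclusion = 4.0
a, b = 0.3, 0.4                      # semi-axes of the elliptic inclusion
f = math.pi * a * b                  # filling fraction in a unit cell of area 1
eps_mean = (1 - f) * eps_background + f * eps_inclusion
print(eps_mean)  # about 2.2867, matching <eps>_Y above
```

The result reproduces the value $2.2867255$ quoted above to seven significant digits, which supports this reading of the geometry.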
\begin{figure}
\begin{minipage}[b]{0.5 \textwidth}
\centering
\begin{picture}(0,100)
\put(-100,0){\includegraphics[angle=0,width=0.8 \textwidth,
draft=false]{potentielVx.pdf}}
\end{picture}
\end{minipage}%
\begin{minipage}[b]{0.5 \textwidth}
\centering \includegraphics[angle=0,width=0.8\textwidth,
draft=false]{potentielVy.pdf}
\end{minipage}
\caption{Potentials $V_x$ (left) and $V_y$ (right):
the unit cell contains an elliptic inclusion of relative
permittivity ($\varepsilon=4.0+3i$) with minor
and major axes $a=0.3$ and $b=0.4$ in silica
$(\varepsilon=1.25)$.}
\label{potential}
\end{figure}
\begin{figure}[h!]
\centerline{\resizebox{14cm}{!}{
\rotatebox{-90}{\includegraphics{anis.pdf} } } }
\caption[anisotropy]{Comparison between transverse electric fields
$TE_{21}$ and $TE_{31}$ of a microstructured metallic waveguide
for a propagation constant $\gamma=0.1$~cm$^{-1}$
(wavenumbers $k=0.7707$~cm$^{-1}$ and $k=0.5478$~cm$^{-1}$,
respectively), see left panel, with the $TE_{21}$ and $TE_{31}$ modes of the corresponding
homogenized anisotropic metallic waveguide for $\gamma = 0.1$~cm$^{-1}$
($k=0.7607$~cm$^{-1}$ and $k = 0.5201$~cm$^{-1}$, where
$k=\sqrt{\omega^2/c^2-\gamma^2}=\sqrt{\omega^2\varepsilon_0\mu_0-\gamma^2}$
were obtained from the computation of the eigenvalues $\omega$
of the homogenized problem $({\cal P}_0)$), see right panel.}
\label{anisotropy}
\end{figure}
\subsection{The case of one-dimensional gratings: Application to invisibility cloaks}
There is a case of particular importance for applications in grating theory: that of a periodic
multilayered structure. Let us assume that the permittivity of this medium is
$\varepsilon=\alpha$ in the white layers and $\beta$ in the yellow layers,
as shown in Fig.~\ref{fig2}.
\begin{figure}[h]\centerline{\scalebox{0.5}{\includegraphics{fighomo2.pdf}}}
\vspace{0cm}\caption{Schematic of homogenization process for a one-dimensional
grating with homogeneous dielectric layers of permittivity $\alpha$ and $\beta$
in white and yellow regions. When $\eta$ tends to zero the number of layers
tends to infinity, and their thicknesses vanish, in such a way that
the width of the overall stack remains constant.}
\label{fig2}
\end{figure}
\noindent Equation $({\cal A}2)$ takes the form:
$$({\cal A}3):-\frac{d}{dy}\left(\varepsilon^{-1}({y})\frac{d w}{d y}\right)
=\left(\frac{d}{dy}\varepsilon^{-1}({y})\right) \; ,$$
with $w({y})$, periodic function in $y$ of period $1$.
\noindent We deduce that
$$-\frac{d w}{d y}=1+C\varepsilon({y}) \; .$$
\noindent Noting that $\displaystyle{\int_Y \frac{d w}{dy}\, dy}=w(1)-w(0)=0$, this leads to
$$\int_Y \left( 1+C\varepsilon({y}) \right) dy = 0 \; .$$
\noindent Since $\mid Y \mid=1$, we conclude that
$$C=-{<\varepsilon>}^{-1} \; .$$
\noindent The homogenized permittivity takes the form:
\begin{equation}
\begin{array}{lll}
\varepsilon_{{\rm hom}}^{-1}&=\displaystyle{\frac{1}{\mid Y \mid}\int_Y\left( \varepsilon^{-1}({y})
+\varepsilon^{-1}({y})\frac{dw({y})}{dy}\right) \, dy} \nonumber \\
&= <\varepsilon^{-1}({y})> -<\varepsilon^{-1}({y})+C> \nonumber \\
&= <\varepsilon^{-1}({y})> -<\varepsilon^{-1}({y})> - C = {<\varepsilon({y})>}^{-1} \; .
\end{array}
\end{equation}
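This chain of equalities can be checked numerically: with the corrector slope $dw/dy=-(1+C\varepsilon)$ and $C=-{<\varepsilon>}^{-1}$, the cell average $<\varepsilon^{-1}(1+dw/dy)>$ collapses to ${<\varepsilon>}^{-1}$ for any positive profile. A minimal Python sketch (the two-phase profile below is an arbitrary choice, not one of the structures studied here):

```python
# Check eps_hom^{-1} = <eps^{-1}(1 + dw/dy)> = <eps>^{-1} with the
# corrector dw/dy = -(1 + C*eps), C = -<eps>^{-1}, on the unit cell Y = [0, 1].
N = 100_000
ys = [(i + 0.5) / N for i in range(N)]          # midpoint quadrature on Y

def eps(y):                                      # arbitrary two-phase test profile
    return 2.0 if y < 0.4 else 5.0

mean_eps = sum(eps(y) for y in ys) / N           # <eps> = 0.4*2 + 0.6*5 = 3.8
C = -1.0 / mean_eps

dw = [-(1.0 + C * eps(y)) for y in ys]           # corrector slope dw/dy
assert abs(sum(dw) / N) < 1e-9                   # periodicity: <dw/dy> = 0

inv_eps_hom = sum((1.0 + d) / eps(y) for y, d in zip(ys, dw)) / N
assert abs(inv_eps_hom - 1.0 / mean_eps) < 1e-9  # eps_hom = <eps> = 3.8

# the other average <eps^{-1}>^{-1} differs (here 3.125 versus 3.8),
# which is the origin of the artificial anisotropy of the full tensor
```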
\noindent We note that if we now consider the full operator, i.e. include partial derivatives
in $y_1$ and $y_2$, the anisotropic homogenized permittivity takes the form:
$$
\varepsilon_{{\rm hom}}^{-1}=
\left(
\begin{array}{cc}
{<\varepsilon^{-1}({y})>} & 0 \\
0 & {<\varepsilon({y})>}^{-1}
\end{array}
\right) \; ,
$$
as the only contribution for $\varepsilon_{{\rm hom},11}^{-1}$ is $1/\mid Y\mid \int_Y \varepsilon^{-1}(y) \, dy$.
\noindent As an illustrative example of what artificial anisotropy can achieve, we
propose the design of an invisibility cloak. For this, let us assume that we have a
multilayered grating with periodicity along the radial axis.
In the coordinate system $(r,\theta)$, the homogenized permittivity clearly has the same
form as above. If we want to design an invisibility cloak with an alternation of two
homogeneous isotropic layers of thicknesses $d_A$ and $d_B$ and
permittivities $\alpha$, $\beta$, we then need to use the formula
\begin{equation}
\begin{array}{lll}
&\displaystyle{\frac{1}{\varepsilon_r}}=\displaystyle{\frac{1}{1+\eta}\left(\frac{1}{\alpha}+\frac{\eta}{\beta}\right)},
&\varepsilon_\theta=\displaystyle{\frac{\alpha+\eta \beta}{1+\eta}} \; , \;
\nonumber
\end{array}
\label{effective}
\end{equation}
where $\eta=d_B/d_A$ is the ratio of thicknesses for layers $A$ and
$B$ and $d_A+d_B=1$.
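Formula (\ref{effective}) is nothing but the thickness-weighted harmonic mean of the permittivity in the radial direction and the thickness-weighted arithmetic mean in the azimuthal direction; a quick numerical sanity check with arbitrarily chosen $\alpha$, $\beta$ and $\eta$:

```python
# eq. (effective): with d_A + d_B = 1 and eta = d_B/d_A, the effective
# medium is the thickness-weighted harmonic mean of eps radially and the
# thickness-weighted arithmetic mean azimuthally.
alpha, beta, eta = 2.0, 8.0, 0.5         # arbitrary illustrative values
d_A = 1.0 / (1.0 + eta)                  # layer thicknesses, d_A + d_B = 1
d_B = eta / (1.0 + eta)

inv_eps_r = (1.0 / alpha + eta / beta) / (1.0 + eta)
eps_theta = (alpha + eta * beta) / (1.0 + eta)

# compare with direct thickness-weighted cell averages
assert abs(inv_eps_r - (d_A / alpha + d_B / beta)) < 1e-12
assert abs(eps_theta - (d_A * alpha + d_B * beta)) < 1e-12
```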
We now note that the coordinate transformation
$r'=R_1+r\frac{R_2-R_1}{R_2}$ can compress a disc $r<R_2$ into
a shell $R_1<r<R_2$, provided that the shell is described by the
following anisotropic heterogeneous permittivity \cite{pendry}
$\underline{\underline{\varepsilon}}^{{\rm cloak}}$
(written in its diagonal basis):
\begin{equation}
\begin{array}{lll}
\varepsilon_r^{{\rm cloak}} &=\displaystyle{{\left(\frac{R_2}{R_2-R_1}\right)}^2}{\left(\frac{r'-R_1}{r'}\right)}^2 \; , \;
& \varepsilon_{\theta}^{{\rm cloak}} =\displaystyle{{\left(\frac{R_2}{R_2-R_1}\right)}^2} \; ,
\end{array}
\label{rhort1}
\end{equation}
where $R_1$ and $R_2$ are the interior and the exterior radii of the cloak.
Such a metamaterial can be approximated using the formula
(\ref{effective}), as first proposed in \cite{huang2007},
which leads to the multilayered cloak
shown in Fig. \ref{lhf_fig1}.
\begin{figure}[h]
\hspace{-0.5cm}
\resizebox*{14cm}{!}{\includegraphics{lhf_fig1.pdf}}
\vspace{0.0cm}\mbox{}
\caption{Propagation of a plane wave of wavelength $7 \times 10^{-7}$~m
(red in the visible spectrum)
from the left on a multilayered cloak of inner radius $R_1=1.5 \times 10^{-8}$~m
and outer radius $R_2=3 \times 10^{-8}$~m,
consisting of 20 homogeneous layers of equal thickness and of respective
relative permittivities
$1680.70,0.25,80.75,0.25,29.39,0.25,16.37,0.25,10.99,0.25,8.18,0.25,6.50,0.25,5.40$,
$0.25,4.63,0.25,4.06,0.25$
in vacuum. Importantly, one layer in two has the same permittivity.
}
\label{lhf_fig1}
\end{figure}
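To sketch how such a discrete layering can be generated (in the spirit of \cite{huang2007}), assume equal layer thicknesses, i.e. $\eta=1$ in (\ref{effective}); the pair $(\alpha,\beta)$ at each radius then follows from matching the arithmetic mean to $\varepsilon_\theta^{{\rm cloak}}$ and the harmonic mean to $\varepsilon_r^{{\rm cloak}}$, so that $\alpha$ and $\beta$ are the roots of $t^2-2\varepsilon_\theta t+\varepsilon_r\varepsilon_\theta=0$. The values produced below are illustrative and do not reproduce the particular list quoted in the caption of Fig. \ref{lhf_fig1}, which follows a different convention (one layer in two fixed at $0.25$):

```python
import math

R1, R2 = 1.5e-8, 3.0e-8                       # cloak radii as in Fig. lhf_fig1
eps_theta = (R2 / (R2 - R1)) ** 2             # = 4, independent of radius

def layer_pair(r):
    """Equal-thickness bilayer (alpha, beta) matching the ideal cloak at r:
    alpha + beta = 2*eps_theta  and  2*alpha*beta/(alpha + beta) = eps_r,
    i.e. alpha, beta are the roots of t^2 - 2*eps_theta*t + eps_r*eps_theta."""
    eps_r = eps_theta * ((r - R1) / r) ** 2
    disc = math.sqrt(eps_theta ** 2 - eps_r * eps_theta)   # real since eps_r < eps_theta
    return eps_theta + disc, eps_theta - disc

# 10 bilayers = 20 layers, sampled at the mid-radius of each bilayer
pairs = [layer_pair(R1 + (k + 0.5) * (R2 - R1) / 10) for k in range(10)]

# spot-check: the harmonic mean of one pair reproduces eps_r at that radius
r = R1 + 0.55 * (R2 - R1)
alpha, beta = layer_pair(r)
assert abs(2 * alpha * beta / (alpha + beta) - eps_theta * ((r - R1) / r) ** 2) < 1e-12
```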
\section{High-frequency homogenization}\label{hfh}
Many of the features of interest in photonic crystals \cite{yablo,john}, or other
periodic structures, such as all-angle negative refraction \cite{zengerle87a,gralak2000,notomi2000,luo02a}
or ultrarefraction \cite{dowling,enoch2002} occur at high frequencies
where the wavelength and microstructure dimension are of similar orders.
Therefore the conventional low-frequency classical homogenisation clearly fails to
capture the essential physics and a different approach to distill the physics into an
effective model is required. Fortunately a high frequency homogenisation (HFH)
theory as developed in \cite{craster10a} is capable of capturing features such
as AANR and ultra-refraction \cite{josa2011} for some model structures.
Somewhat tangentially, there is an existing literature in the analysis community on
Bloch homogenisation \cite{allaire1998,allaire2005,birman,hoefer},
that is related to what we call
high frequency homogenisation. There is also a flourishing literature on developing
homogenised elastic media, with frequency dependent effective parameters, based
upon periodic media \cite{willis2011}. There is therefore considerable
interest in creating effective continuum models of microstructured media that break
free from the conventional low frequency homogenisation limitations.
\subsection{High Frequency Homogenization for Scalar Waves}
\label{sec:TMWaves}
Waves propagating through photonic crystals and metamaterials exhibit markedly different effects depending on their frequency, so the homogenization of a periodic material is not unique.
The effective
properties of a periodic medium change depending on the vibration
modes within its cells. The dispersion diagram can be regarded as the identity card of such a material: it provides the essential information on
group velocities, band gaps of disallowed propagation frequencies, Dirac cones and many other interesting effects.
The goal of a homogenization theory is to provide an effective
homogeneous medium that is equivalent, in the long scale, to the
initial non-homogeneous medium composed of a short-scale periodic, or other
microscale, structure. This was achieved initially using the classical
theory of homogenization
\cite{Bensoussan78a,craster12a,jikov94a,mei96a,milton02a} and yields
an intuitively obvious result that the effective medium's properties
consist of simple averages of the original medium's
properties. This is valid so long as the wavelength is very large
compared to the size of the cells (here we focus on periodic media
created by repeating cells). For shorter wavelengths of the order of a
cell's length a more general theory has been developed
\cite{craster10a} that also recovers the results of the classical
homogenization theory. For clarity we present high frequency
homogenization (HFH) by means of an illustrative example and consider
a two-dimensional lattice geometry for TE or TM polarised
electromagnetic waves. With harmonic time dependence, $\exp(-i\Omega
t)$ (assumed understood and henceforth suppressed), the governing equation is the scalar Helmholtz equation,
\begin{equation}
\nabla^2 u+\Omega^2u=0,
\label{eq:Helmotz}
\end{equation}
where $u$ represents $E_Z$ or $H_Z$, for TM and TE polarised
electromagnetic waves respectively, and
$\Omega^2={n^2}\omega^2/{c^2}$. In our example the cells are square
and each square cell of length $2l$ contains a circular hole and the
filled part of the cell has constant non-dimensionalized
properties. The boundary conditions on the hole's surface, namely the boundary $\partial S_2$, depend on the polarisation and are taken to be either of Dirichlet or Neumann type. This approach assumes infinitely conducting boundaries, which is a good approximation at microwave frequencies.
We adopt a multiscale approach where $l$ is the small length scale and
$L$ is a large length scale and we set $\eta = l/L\ll 1$ to be the
ratio of these scales. The two length scales let us introduce the
following two independent spatial variables, $\xi_i = x_i/l$ and $X_i = x_i/L$. The cell's reference coordinate system is then $-1<\xi<1$. By introducing the new variables in equation (\ref{eq:Helmotz}) we obtain,
\begin{equation}
u({\bf X},{\bm \xi}),_{\xi_i \xi_i}+\Omega^2 u({\bf X},{\bm \xi})+ 2\eta u({\bf X},{\bm \xi}),_{\xi_i X_i}+\eta^2u({\bf X},{\bm \xi}),_{X_iX_i}=0.
\label{eq:NewEquation}
\end{equation}
We now pose an ansatz for the field and the frequency,
\begin{align}
u({\bf X},{\bm \xi})=u_0({\bf X},{\bm \xi})+\eta u_1({\bf X},{\bm \xi})+\eta^2 u_2({\bf X},{\bm \xi})+\ldots ,
\nonumber\\
\Omega^2=\Omega_0^2+\eta \Omega_1^2+\eta^2 \Omega_2^2+\ldots
\label{eq:expansion2D}
\end{align}
In this expansion we set $\Omega_0$ to be the frequency of standing
waves that occur in the perfectly periodic setting.
By substituting equations (\ref{eq:expansion2D}) into equation
(\ref{eq:NewEquation}) and grouping equal powers of $\eta$ through
to second order, we obtain a hierarchy of three ordered equations:
\begin{equation}
u_{0,\xi_i\xi_i} + \Omega_0^2u_0=0,
\label{eq:leadingOrder}
\end{equation}
\begin{equation}
u_{1,\xi_i\xi_i} + \Omega_0^2u_1=
-2u_{0,\xi_iX_i}
-\Omega_1^2 u_0,
\label{eq:firstOrder}
\end{equation}
\begin{equation}
u_{2,\xi_i\xi_i} + \Omega_0^2 u_2 =-u_{0,X_iX_i}
-2u_{1,\xi_iX_i}
-\Omega_1^2 u_1 -\Omega_2^2 u_0.
\label{eq:secondOrder}
\end{equation}
These equations are solved as in \cite{antonakakis13a,craster10a} and
hence the description is brief.
\begin{figure}
\begin{center}
\includegraphics[scale=0.5]{schematic.pdf}
\includegraphics[scale=0.9]{brillouin3.pdf}
\end{center}
\caption{Panel (a) An infinite square array of split ring resonators with the
elementary cell shown as the dashed line inner square. Panel (b)
shows the irreducible Brillouin zone, in wavenumber space, used for
square arrays in perfectly periodic
media based around the elementary cell shown of length $2l$ ($l=1$ in (b)).
Figure reproduced from Proceedings of the Royal Society \cite{antonakakis13a}.}
\label{fig:Brillouin}
\end{figure}
The asymptotic expansions are taken about the standing wave
frequencies that occur at the corners of the irreducible Brillouin
zone depicted in Fig. \ref{fig:Brillouin}. It should be noted that not
all structured cells will have the usual symmetries of a square, as in
Fig. \ref{fig:Brillouin}(a) where there is no reflection symmetry
about the diagonals. As a consequence the usual triangular region $\Gamma XM$
does not always represent the irreducible Brillouin zone and the
square region $\Gamma MXN$ should be used instead. Also paths that
cross the irreducible Brillouin zone have proven to yield interesting
effects namely along the path $MX'$ for large circular holes
\cite{craster12b}.
The subsequent asymptotic development considers small perturbations about the points $\Gamma$, $X$ and $M$ so that the boundary conditions of $u$ on the outer boundaries of the cell, namely $\partial S_1$, read,
\begin{equation}
u|_{\xi_{i}=1}=\pm u|_{\xi_{i}=-1} \quad \text{and} \quad u_{,\xi_i}|_{\xi_{i}=1}=\pm u_{,\xi_i}|_{\xi_{i}=-1},
\label{eq:periodicBC}
\end{equation}
where the $+,-$ stand for periodic and anti-periodic conditions
respectively: the standing waves occur when these conditions are met.
The conditions on $\partial S_2$ are either of Dirichlet or Neumann type. The theory that follows is similar for both cases, but the latter is illustrated herein. The Neumann boundary condition on the hole's surface, or equivalently TE-polarized electromagnetic waves, yields
\begin{equation}
\frac {\partial u} {\partial {\bf n}} =u_{,x_i} n_i|_{\partial S_2}=0,
\label{eq:neumann}
\end{equation}
which in terms of the two scales and $u_i({\bf X}, {\bm \xi})$ becomes
\begin{equation}
U_{0,\xi_i}n_i = 0,\quad
\label{eq:1rstNeumann}
(U_0f_{0,X_i} + u_{1,\xi_i})n_i=0,\quad
( u_{1,X_i} + u_{2,\xi_i}) n_i=0.
\end{equation}
\begin{comment}
For the sake of simplicity let us consider a Dirichlet type boundary on the wall of the holes such that $u|_{\partial S_2}=0$ implies
$u_i|_{\partial S_2}=0$ for $i=0,1,2$.
\end{comment}
The leading order equation is solved by introducing the
separation of variables $u_0=f_0({\bf X})U_0({\bm \xi};\Omega_0)$. The
function $f_0({\bf X})$, which represents the behaviour of the solution
in the long scale, is not set
by the leading order equation; the resulting eigenvalue problem is
solved on the short scale for $\Omega_0$ and $U_0$, representing the
standing wave frequencies and the associated cell vibration modes
respectively.
To solve the first order equation (\ref{eq:firstOrder}) we take the
integral over the cell of the product of equation
(\ref{eq:firstOrder}) with $U_0$ minus the product of equation
(\ref{eq:leadingOrder}) with $u_1/f_0$ and this yields
$\Omega_1=0$. One then solves for $u_1({\bf
X},{\bm \xi})=f_{0,X_i}({\bf X})U_{1_i}({\bm \xi})$, where the vector ${\bf
U}_1$ is found as in \cite{antonakakis13a}. By invoking a similar
solvability condition for the second order equation we obtain a second
order PDE for $f_0({\bf X})$,
\begin{align}
T_{ij}f_{0,X_iX_j}+\Omega_2^2f_0=0, \quad{\rm where}
\nonumber\\
\quad T_{ij}=\frac{t_{ij}}{\int\int_S U_0^2dS}\quad{\rm for} \quad i,j=1,2
\label{eq:f_0}
\end{align}
entirely on the long scale; the coefficients $T_{ij}$ contain all
the information of the cell's dynamical response, and
the tensor $t_{ij}$ represents dynamical averages of the properties of the medium. For Neumann boundary conditions on $\partial S_2$ it reads,
\begin{equation}
t_{ii}=\int\int_SU_0^2dS+\int\int_S(U_{1_i,\xi_i}U_0-U_{1_i}U_{0,\xi_i})dS
\quad {\rm for} \quad i=1\ {\rm or}\ 2,
\label{eq:t11}
\end{equation}
\begin{equation}
t_{ij}=\int\int_S(U_{1_j,\xi_i}U_0-U_{1_j}U_{0,\xi_i})dS \quad {\rm for} \quad i\neq j.
\label{eq:tij}
\end{equation}
Note that there is no summation over repeated indexes for $t_{ii}$.
The tensor depends on the boundary conditions of the holes and has a
different form if Dirichlet type conditions are applied on $\partial
S_2$.
The PDE for $f_0$ has several uses, and can be verified by re-creating
asymptotically the dispersion curves for a perfect lattice system.
One important result of equation (\ref{eq:f_0}) is its use in the expansion of $\Omega$ namely in equation (\ref{eq:expansion2D}). In order to obtain $\Omega_2$ as a function of the Bloch wavenumbers we use the Bloch boundary conditions on the cell to solve for $f_0({\bf X})=\exp(i\kappa_jX_j/\eta)$, where $\kappa_j=K_j-d_j$ with $d_j=0,\pi/2,-\pi/2$ depending on the location in the Brillouin zone. The asymptotic dispersion relation now reads,
\begin{equation}
\Omega\sim\Omega_0+ \frac{T_{ij}}{2\Omega_0}\kappa_i \kappa_j.
\label{eq:asymptoticDispersion}
\end{equation}
Equation (\ref{eq:asymptoticDispersion}) yields the behaviour of the dispersion curves asymptotically around the standing wave frequencies that are naturally located at the edge points of the Brillouin zone. Fig. \ref{fig:2D_Neumann} illustrates the asymptotic dispersion curves for the first six dispersion bands of a square cell geometry with circular holes.
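Evaluating (\ref{eq:asymptoticDispersion}) is then immediate; the sketch below uses a hypothetical standing-wave frequency $\Omega_0$ together with the coefficients $T_{11}=-5.53$ and $T_{22}=0.2946$ quoted below for the eight-hole SRR, purely to illustrate how opposite signs of $T_{11}$ and $T_{22}$ bend the local band in opposite directions along the two wavenumber axes:

```python
# Local band shape Omega ~ Omega0 + T_ij kappa_i kappa_j / (2 Omega0)
# near a standing-wave frequency; Omega0 here is hypothetical.
Omega0 = 2.0
T = ((-5.53, 0.0), (0.0, 0.2946))      # illustrative anisotropic tensor T_ij

def omega(k1, k2):
    """Asymptotic dispersion, eq. (asymptoticDispersion)."""
    k = (k1, k2)
    quad = sum(T[i][j] * k[i] * k[j] for i in range(2) for j in range(2))
    return Omega0 + quad / (2.0 * Omega0)

assert omega(0.0, 0.0) == Omega0       # curves emanate from the standing wave
assert omega(0.1, 0.0) < Omega0        # T_11 < 0: band curves down along kappa_1
assert omega(0.0, 0.1) > Omega0        # T_22 > 0: band curves up along kappa_2
```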
\begin{figure}
\begin{center}
\includegraphics[scale=0.8]{circle_04_combined.pdf}
\end{center}
\vspace{-1cm}
\caption{The dispersion diagram for a doubly periodic array of square cells with circular inclusions, of radius $0.4$, free at their inner boundaries shown for the irreducible Brillouin zone of
Fig. \ref{fig:Brillouin}. The dispersion curves are shown in solid lines and the asymptotic solutions from the high frequency homogenization theory are shown in dashed lines.
Figure reproduced from Proceedings of the Royal Society \cite{antonakakis13a}.}
\label{fig:2D_Dirichlet}
\end{figure}
An assumption in the development of equation
(\ref{eq:asymptoticDispersion}) is that the standing wave frequencies
are isolated. But one can clearly see in Fig. \ref{fig:2D_Dirichlet}
that this is not the case for the third standing wave frequency at point
$\Gamma$ as well as for the second standing wave frequency at point
$X$. A small alteration to the theory \cite{antonakakis13a} enables the computation of the dispersion curves at such points by setting,
\begin{equation}
u_0=f_0^{(l)}({\bf X})U_0^{(l)}({\bm \xi};\Omega_0)
\label{eq:uzeroRep}
\end{equation}
where we sum over the repeated superscripts $(l)$.
Proceeding as before, we multiply equation (\ref{eq:firstOrder}) by $U_0^{(m)}$, subtract $u_1(U^{(m)}_{0,\xi_i\xi_i}+\Omega_0^2 U_0^{(m)})$ and then integrate over the cell to obtain,
\begin{equation}
\left({\bf A}_{jml}\frac{\partial}{\partial X_j}+\Omega_1^2{\bf B}_{ml}\right) f_0^{(l)}=0,\quad{\rm for}\quad m=1,2,\ldots,p
\label{eq:PreSystem}
\end{equation}
where $\Omega_1$ is not necessarily zero, and
\begin{equation}
{\bf
A}_{jml}=\int\int_S(U_0^{(m)}U_{0,\xi_j}^{(l)}-U_{0,\xi_j}^{(m)}U_0^{(l)})dS, \quad
{\bf B}_{ml}=\int\int_S U_0^{(l)}U_0^{(m)}dS.
\label{eq:Bmatrix}
\end{equation}
There is now a system of coupled partial differential equations for
the $f_0^{(l)}$ and, provided $\Omega_1\neq 0$, the leading order
behaviour of the dispersion curves near $\Omega_0$ is now linear
(these branches then form Dirac cones).
For the perfect lattice, we set $f_0^{(l)}={\hat
f}_0^{(l)}\exp(i\kappa_jX_j/\eta)$ and
obtain the following system,
\begin{equation}
\left(i\frac{\kappa_j}{\eta}{\bf A}_{jml}+\Omega_1^2{\bf B}_{ml}\right){\hat f}_0^{(l)}=0, \quad {\rm for} \quad m=1,2,...,p.
\label{eq:PreSystem2}
\end{equation}
The system of equation (\ref{eq:PreSystem2}) can be written simply as,
\begin{equation}
{\bf C}{\hat {\bf F}}_0=0,
\label{eq:System}
\end{equation}
with ${\bf C}_{ll}=\Omega_1^2{\bf B}_{ll}$ and ${\bf
C}_{ml}=i\kappa_j{\bf A}_{jml}/\eta$ for $l\neq m$. One must then solve
for $\Omega_1^2=\pm\sqrt{\alpha_{ij}\kappa_i\kappa_j}/\eta$ when the determinant of ${\bf C}$ vanishes and insert the result in,
\begin{equation}
\Omega\sim\Omega_0\pm\frac{1}{2\Omega_0}\sqrt{\alpha_{ij}\kappa_i\kappa_j}.
\label{eq:asymptoticExpansionLinear}
\end{equation}
If the $\Omega_1$ are zero one must go to the next order.
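For a pair of repeated modes ($p=2$), the system (\ref{eq:System}) can be worked through explicitly: with real coupling integrals and ${\bf A}_{j21}=-{\bf A}_{j12}$ by antisymmetry, the vanishing determinant gives $\Omega_1^2=\pm|\kappa_j{\bf A}_{j12}|/(\eta\sqrt{{\bf B}_{11}{\bf B}_{22}})$, linear in the Bloch wavenumber, hence the conical local behaviour of (\ref{eq:asymptoticExpansionLinear}). A numerical sketch with hypothetical integrals:

```python
# Two-mode (p = 2) version of eq. (System): C_ll = Omega1^2 B_ll and
# C_ml = i kappa_j A_jml / eta for m != l, with A_j21 = -A_j12.
eta = 0.01
B = (1.3, 0.8)                  # hypothetical overlap integrals B_11, B_22
A12 = (0.4, -0.7)               # hypothetical coupling integrals A_{j,1,2}
kappa = (0.003, 0.001)

coup = sum(k * a for k, a in zip(kappa, A12)) / eta        # kappa_j A_j12 / eta
Omega1_sq = abs(coup) / (B[0] * B[1]) ** 0.5               # the "+" root

# verify that det(C) vanishes for this value of Omega1^2
C = ((Omega1_sq * B[0], 1j * coup),
     (-1j * coup, Omega1_sq * B[1]))
det = C[0][0] * C[1][1] - C[0][1] * C[1][0]
assert abs(det) < 1e-12

# Omega1^2 is proportional to |kappa|, so Omega ~ Omega0 + eta*Omega1^2/(2*Omega0)
# varies linearly with the Bloch wavenumber: a locally conical band.
```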
\subsubsection{Repeated eigenvalues: quadratic asymptotics}
\label{sec:RepEigenQuad}
If $\Omega_1$ is zero, $u_1=f_{0,X_k}^{(l)}U_{1_k}^{(l)}$ (we again
sum over all repeated $(l)$ superscripts) and
we advance to second order using
(\ref{eq:secondOrder}). Taking the difference between
the product of equation (\ref{eq:secondOrder}) with $U_0^{(m)}$ and
$u_2(U^{(m)}_{0,\xi_i\xi_i} + \Omega_0^2 U_0^{(m)})$ and then
integrating
over the elementary cell gives
\begin{eqnarray}
&f_{0,X_iX_i}^{(l)}\int\int_SU_0^{(m)}U_0^{(l)}dS+
f_{0,X_kX_j}^{(l)}\int\int_S(U_0^{(m)}U_{1_k,\xi_j}^{(l)}-U_{0,\xi_j}^{(m)}U_{1_k}^{(l)})dS
\nonumber\\
&\quad +\Omega_2^2 f_0^{(l)}\int\int_S U_0^{(m)}U_0^{(l)}dS=0, \quad
{\rm for} \quad m=1,2,...,p
\label{eq:preLam2}
\end{eqnarray}
as a system of coupled PDEs.
The above equation is presented more neatly as
\begin{equation}
f_{0,X_iX_i}^{(l)}{\bf A}_{ml}+f_{0,X_kX_j}^{(l)}{\bf
D}_{kjml}+\Omega_2^2 f_0^{(l)}{\bf B}_{ml}=0, \quad {\rm for} \quad m=1,2,...,p.
\label{eq:Lam2}
\end{equation}
For the Bloch wave setting, using $f_0^{(l)}({\bf X})={\hat f}_0^{(l)}\exp(i\kappa_jX_j/\eta)$ we obtain the following system,
\begin{equation}
\left(-\frac{\kappa_i\kappa_i}{\eta^2}{\bf A}_{ml}-\frac{\kappa_k\kappa_j}{\eta^2}{\bf
D}_{kjml}+\Omega_2^2{\bf B}_{ml}\right){\hat f}_0^{(l)}=0, \quad {\rm for} \quad m=1,2,...,p
\label{eq:sys3}
\end{equation}
and this determines the asymptotic dispersion curves.
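In the same two-mode spirit, (\ref{eq:sys3}) is a small generalized eigenvalue problem for $\Omega_2^2$; a sketch with hypothetical cell integrals (the contraction $\kappa_k\kappa_j{\bf D}_{kjml}$ is supplied directly as a $2\times2$ matrix):

```python
# Two-mode version of eq. (sys3): find Omega2^2 = x such that
# det(M - x B) = 0, with M_ml = (kappa_i kappa_i A_ml + kappa_k kappa_j D_kjml)/eta^2.
eta = 0.01
kappa = (0.002, 0.001)
k2 = kappa[0] ** 2 + kappa[1] ** 2

A = ((1.0, 0.1), (0.1, 0.9))                     # hypothetical A_ml
B = ((1.2, 0.0), (0.0, 0.7))                     # hypothetical B_ml (diagonal)
# kappa_k kappa_j D_kjml, already contracted, given as a 2x2 matrix:
Dk = ((0.5 * kappa[0] ** 2, 0.0), (0.0, 0.3 * kappa[1] ** 2))

M = [[(k2 * A[m][l] + Dk[m][l]) / eta ** 2 for l in range(2)] for m in range(2)]

# with diagonal B, det(M - x B) = 0 is the quadratic a*x^2 + b*x + c = 0
a = B[0][0] * B[1][1]
b = -(M[0][0] * B[1][1] + M[1][1] * B[0][0])
c = M[0][0] * M[1][1] - M[0][1] * M[1][0]
x = (-b + (b * b - 4 * a * c) ** 0.5) / (2 * a)  # one of the two branches

# residual check: the determinant vanishes at x = Omega2^2
det = (M[0][0] - x * B[0][0]) * (M[1][1] - x * B[1][1]) - M[0][1] * M[1][0]
assert abs(det) < 1e-9
```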
\subsubsection{The classical long wave zero frequency limit}
\label{sec:classical}
The current theory simplifies if one enters the classical long wave,
low frequency limit where $\Omega^2\sim O(\eta^2)$,
as $U_0$ becomes uniform over the elementary cell and, without loss of
generality, is set to unity. The final equation is again
(\ref{eq:f_0}) where the tensor $t_{ij}$ simplifies to
\begin{equation}
t_{ii}=\int\int_SdS+\int\int_SU_{1_i,\xi_i}dS,\quad
t_{ij}=\int\int_SU_{1_j,\xi_i}dS \quad {\rm for} \quad i\neq j
\label{eq:tij0}
\end{equation}
(with no summation over repeated suffices in this equation)
and $T_{ij}=t_{ij}/\int\int_S dS$.
\subsection{Illustrations for Transverse Electric Polarized Waves}
\label{sec:EMWaves}
\begin{figure}
\begin{center}
\includegraphics[scale=0.7]{Holes.pdf}
\end{center}
\vspace{-1.0cm}
\caption{The dispersion diagrams for a doubly periodic array of square cells with split ring inclusions, free at their inner boundaries shown for the irreducible Brillouin zone of
Fig. \ref{fig:Brillouin}. The dispersion curves are shown in solid lines and the asymptotic solutions from the high frequency homogenization theory are shown in dashed lines.
Figure reproduced from Proceedings of the Royal Society \cite{antonakakis13a}.}
\label{fig:2D_Neumann}
\end{figure}
Let us now turn to some illustrative examples. We present in
Fig. \ref{fig:2D_Neumann} the TE polarized waves for three types
of SRRs (split ring resonators). Equation (\ref{eq:f_0}) represents
the wave propagation in the effective medium. It is noticeable that
the $T_{ij}$ coefficients depend on the standing wave frequency and
that $T_{11}$ is not necessarily equal to $T_{22}$, so the effective
medium can be anisotropic at each separate frequency. Near
some of the standing wave frequencies the anisotropy effects are very
pronounced and well explained by equation
(\ref{eq:f_0}), which is then no longer elliptic.
In the above equations $U_{1_i}$ is a solution of,
\begin{equation}
U_{1_j,\xi_i\xi_i} =0,
\label{eq:firstOrderHomo}
\end{equation}
with boundary conditions $(f_{0,X_i} + u_{1,\xi_i})n_i=0$ on
the hole boundary. If the medium is homogeneous as it is in the illustrative examples herein, equation (\ref{eq:firstOrderHomo}) is the same as
that for $U_0$, but with different
boundary conditions. The specific boundary conditions for $U_{1_j}$ are
\begin{equation}
U_{1_j,\xi_i}n_i=-n_j\quad\text{for}\quad j=1,2,
\label{eq:NeumannU1}
\end{equation}
where $n_i$ represent the normal vector components to the hole's surface. The role of ${\bf
U_1}$ is to ensure Neumann boundary conditions hold and the tensor
contains simple averages of inverse permittivity and permeability supplemented by
the correction term which takes into account the boundary conditions at $\partial S_2$.
Equation
(\ref{eq:tij0}) is the classical expression for the homogenised coefficient
in a scalar wave equation with constant material properties; (\ref{eq:firstOrderHomo})
is the well-known annex problem of electrostatic type set on a
periodic cell, see
\cite{Bensoussan78a,jikov94a}, and also holds
for the homogenised vector Maxwell's system, where ${\bf U_1}$ now has three components
and $i,j=1,2,3$ \cite{zolla00a,wellander,zolla07a}.
\subsubsection{Cloaking in metamaterials}
\begin{figure}
\vspace{-0.5cm}
\begin{center}
\includegraphics[scale=0.15]{hfh_fig4.pdf}
\includegraphics[scale=0.6]{holes_4_zoom.pdf}
\end{center}
\vspace{-1.4cm}
\caption{Cloaking in square arrays of SRRs with four holes:
A source at frequency $\Omega=2.8$, located in the center of a square metamaterial
consisting of 64 SRRs shaped as in Fig. \ref{fig:2D_Neumann}(b) produces a wave pattern
reminiscent of (a) concentric spherical field, (b) cloaking of a rectangular inclusion inside a slab of a metamaterial consisting of 38 SRRs
and (c) scattering of a plane wave from the same rectangular hole as the previous panel. (d) Zoom in dispersion diagram of
Fig. \ref{fig:2D_Neumann}(b). Panels (e), (f) and (g) present isofrequency plots of the lower, middle and upper modes of the Dirac point, respectively. Figure reproduced from Proceedings of the Royal Society \cite{antonakakis13a}.}
\label{fig:unionjack}
\end{figure}
SRRs with 4 holes are now used and the dispersion diagrams are in
Fig. \ref{fig:2D_Neumann} (b). The flat band along the $M\Gamma$ path
is interesting for the fifth mode and we choose to illustrate cloaking
effects that occur here. In Fig. \ref{fig:unionjack}(a), we set a harmonic source at the
corresponding frequency $\Omega=2.8$ in an $8\times 8$ array of SRRs and
observe a wave pattern of concentric spherical modes. As can be seen in Figs. \ref{fig:unionjack}(b) and \ref{fig:unionjack}(c) a plane wave propagating at frequency $\Omega=2.8$ demonstrates perfect transmission through a slab composed of 38 SRRs but also cloaking of a rectangular inclusion where no scattering is seen before or after the metamaterial slab. Panel (d) of Fig. \ref{fig:unionjack} shows the location in the band structure that is responsible for this effect. Note that the frequency of excitation is just below the Dirac cone point located at $\Omega=2.835$ where the group velocity is negative but also constant near that location of the Brillouin zone illustrated through an isofrequency plot of lower mode of the Dirac point in Fig. \ref{fig:unionjack}(e).
In contrast with the isotropic features of panel (e), those of panels (f) and (g) show
ultra-flattened isofrequency contours that relate to
ultra-refraction, a regime more prone to omni-directivity than
cloaking. The asymptotic system of equations (\ref{eq:PreSystem}) describing the effective medium at the Dirac point can be uncoupled to yield one same equation for all $f_0^{(j)}$'s,
\begin{equation}
f_{0,X_iX_i}^{(j)}+0.7191\Omega_1^4 f_0^{(3)}=0
\label{eq:f03}
\end{equation}
After some further analysis, the PDE for $f_0^{(2)}$ is responsible for the effects at the frequency chosen $\Omega=2.8$.
\subsubsection{Lensing via AANR and St Andrew's cross in metamaterials}
We observe all-angle-negative-refraction effect in
metamaterials with SRRs with 8 holes. The dispersion curves in
Fig. \ref{fig:2D_Neumann}(c) are interesting, as the second curve
displays the hallmark of an optical band for a photonic crystal (it
has a negative group velocity around the $\Gamma$ point). However,
this band is the upper edge of a low frequency stop band induced by the
resonance of a SRR, whereas the optical band of a PC results from
multiple scattering, which thus arises at higher frequencies. We are
therefore in presence of a periodic structure behaving somewhat as a
composite intermediate between a metamaterial and a photonic crystal.
One of the most topical subjects in photonics is the so-called
all-angle-negative-refraction (AANR), which was first described in \cite{zengerle87a}.
AANR allows one to focus light emitted by a point, onto an image, even through a
flat lens, provided that certain conditions for AANR are met, such
as convex isofrequency contours shrinking with frequency
about a point in the Brillouin zone \cite{luo02a}. In Fig. \ref{fig:aanrbis}, we show
such an effect for a perfectly conducting photonic crystal (PC)
in Fig. \ref{fig:aanrbis}(a).
In order to achieve AANR, we choose a frequency on
the first dispersion curve (acoustic band) in Fig. \ref{fig:2D_Neumann}(c),
and we take its intersection with the light line $\Omega= \mid\kappa\mid$
along the $X\Gamma$ path. This means that we achieve negative group
velocity for waves propagating along the
$X\Gamma$ direction of the array, hence the rotation by an angle $\pi/4$
of every cell within the PC in panel (b) of Fig. \ref{fig:aanrbis}. This
is a standard trick in optics that has the effect of moving the origin of the
light-line dispersion to $X$ as, relative to the PC, the Bloch
wavenumber is along $X\Gamma$. This then creates optical effects due
to the interaction of the light-line with the acoustic branch; this
would be absent if $\Gamma$ were the light-line origin.
The anisotropy of the effective material is reflected in the
coefficients $T_{11}=-5.53$ and $T_{22}=0.2946$. The same frequency
of the first band is reachable at point $N$ of the Brillouin zone. By
symmetry of the crystal, we would have $T_{11}=0.2946$ and
$T_{22}=-5.53$. The resultant propagating waves would come from the
superposition of the two effective media described
above. Fig. \ref{fig:aanrbis}(b) illustrates this anisotropy as the
source wave only propagates at the prescribed directions.
\begin{figure}
\vspace{-0.5cm}
\begin{center}
\includegraphics[scale=0.15]{hfh_fig5.pdf}
\includegraphics[scale=0.6]{fig14_zoom.pdf}
\end{center}
\vspace{-1cm}
\caption{Lensing via AANR and St Andrew's cross in square arrays of SRRs with eight holes:
(a) A line source at frequency $\Omega=1.1375$ located above a rectangular
metamaterial consisting of 90 SRRs as in Fig. \ref{fig:2D_Neumann}(c) displays an image underneath
(lensing);
(b) A line source at frequency $\Omega=1.25$ located inside a square
metamaterial consisting of 49 SRRs as in Fig. \ref{fig:2D_Neumann}(c) displays the dynamically induced anisotropy of the effective medium;
(c) Zoom in dispersion diagram of Fig. \ref{fig:2D_Neumann}(c).
Note that each cell in the arrays in (a) and (b) has been rotated through an angle $\pi/4$.
Figure reproduced from Proceedings of the Royal Society \cite{antonakakis13a}.}
\label{fig:aanrbis}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.8]{brillouin.pdf}
\end{center}
\vspace{-8cm}
\caption{For the two dimensional example we show the geometry of the
doubly periodic simply supported plate (the dots represent the
simple supports) in panel (a) with the
elementary cell shown by the dotted lines and in (b) the irreducible Brillouin
zone with the lettering for wavenumber positions shown.
Figure reproduced from Proceedings of the Royal Society \cite{antonakakis12a}.
}
\label{fig:2dgeom}
\end{figure}
\subsection{Kirchhoff-Love Plates}
\label{sec:Plate}
HFH is by no means limited to the Helmholtz operator. HFH is here
applied to flexural waves in two dimensions \cite{antonakakis12a} for
which the governing equation is a fourth order equation
\begin{equation}
\nabla^4 u -\Omega^2 u = 0;
\end{equation}
assuming constant material parameters. Such a thin plate can be
subject to point, or line, constraints and these are commonplace in
structural engineering.
In two dimensions, only a few examples of constrained plates are available in the
literature: a grillage of line constraints as in \cite{mace81a} that is
effectively two coupled one dimensional problems, a periodic line array
of point supports \cite{evans07a} raises
the possibility of Rayleigh-Bloch modes and for doubly periodic point
supports there are exact solutions by \cite{mace96a} (simply
supported points) and by \cite{movchan07c} (clamped points); the simply
supported case is accessible via Fourier series and we choose this as
an illustrative example that is of interest in its own right; it is
shown in figure \ref{fig:2dgeom}(a). In
particular the simply supported plate has a zero-frequency stop-band
and a non-trivial dispersion diagram. It is worth noting that
classical homogenization is of no use in this setting with a zero
frequency stop band. Naturally waves passing through periodically
constrained plates have many similarities with those of photonics in optics.
We consider a
doubly periodic array of points at $x_1=2n_1$, $x_2=2n_2$ where $u=0$
(with the first and second derivatives continuous) and so
the elementary cell is one in $\vert x_1\vert<1, \vert x_2\vert<1$ with
$u=0$ at the origin (see Figure \ref{fig:2dgeom}); Floquet-Bloch conditions are applied at the
edges of the cell.
\begin{figure}
\begin{center}
\includegraphics[scale=0.7]{twoDpicture.pdf}
\end{center}
\vspace{-0.4cm}
\caption{The dispersion diagram for a doubly periodic array of point
simple supports shown for the irreducible Brillouin zone of
Fig. \ref{fig:2dgeom}. The dispersion curves are shown as solid lines and the
asymptotic solutions from the high frequency homogenization theory
as dashed lines. Figure reproduced from Proceedings of the Royal Society \cite{antonakakis12a}.}
\label{fig:2Dplate}
\end{figure}
Applying Bloch's theorem and Fourier series the displacement is
readily found \cite{mace96a} as
\begin{equation}
u({\bf x})= \exp(i{\bm \kappa}\cdot{\bf x})\sum_{n_1,n_2}
\frac{\exp(-i\pi{\bf N}\cdot{\bf x})}{[(\kappa_1-\pi
n_1)^2+(\kappa_2-\pi n_2)^2]^2-\Omega^2},
\label{eq:2du}
\end{equation}
where ${\bf N}=(n_1,n_2)$, and enforcing the condition at the origin gives the dispersion relation
\begin{equation}
D(\kappa_1,\kappa_2,\Omega)=\sum_{n_1,n_2}\frac{1}{[(\pi
n_1-\kappa_1)^2 +(\pi n_2 -\kappa_2)^2]^2-\Omega^2}=0.
\label{eq:2d_dispersion}
\end{equation}
In this two dimensional example a
Bloch wavenumber vector ${\bm\kappa}=(\kappa_1,\kappa_2)$ is used
and the dispersion relation can be characterised completely by
considering the irreducible Brillouin zone $\Gamma XM$ shown in figure
\ref{fig:2dgeom}.
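To make the root structure of (\ref{eq:2d_dispersion}) concrete, the lattice sum can be truncated and scanned in $\Omega$; the following sketch (the truncation order, the bisection tolerances and the choice of the $\Gamma$ point $\kappa=(0,0)$ are purely illustrative) brackets the first zero of the truncated sum, consistent with the zero-frequency stop band noted above.

```python
# Truncated evaluation of the dispersion sum D(kappa1, kappa2, Omega)
# for a doubly periodic array of simple supports (illustrative sketch).
import math

def D(k1, k2, Om, N=12):
    """Truncated lattice sum; N controls the truncation |n1|, |n2| <= N."""
    s = 0.0
    for n1 in range(-N, N + 1):
        for n2 in range(-N, N + 1):
            q = (math.pi * n1 - k1) ** 2 + (math.pi * n2 - k2) ** 2
            s += 1.0 / (q * q - Om ** 2)
    return s

def first_root(k1, k2, lo, hi, tol=1e-10):
    """Bisection for D = 0 in (lo, hi), assuming a sign change there."""
    flo = D(k1, k2, lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if D(k1, k2, mid) * flo > 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# At the Gamma point, D < 0 for small Omega (zero-frequency stop band) and
# D -> +infinity as Omega approaches pi^2 (the |n| = 1 singularity), so the
# first branch of the truncated relation lies between the two.
Om1 = first_root(0.0, 0.0, 0.1, 9.8)
```

The quartic decay of the summand makes the truncated sum converge rapidly, so modest values of $N$ already locate the branch reliably.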
The dispersion diagram is shown in figure \ref{fig:2Dplate}. The singularities of the summand in equation (\ref{eq:2d_dispersion}) correspond to solutions within the cell
satisfying the Bloch conditions at the edges; in some cases these
singular solutions also satisfy the conditions at the support and are
therefore true solutions of the problem. A similar situation occurs
in the clamped case considered using multipoles in
\cite{movchan07c}. Solid lines in figure \ref{fig:2Dplate} label curves
that are branches of the dispersion relation; notable features are
the zero-frequency stop-band and the crossings of branches at the
edges of the Brillouin zone. Branches of the dispersion relation that
touch the edges of the Brillouin zone fall into two
categories: those with multiple modes emerging at the same standing wave frequency (such as
the lowest branch touching the left-hand side of the figure at M) and
those that are isolated (such as the second lowest branch on
the left at M).
The HFH theory can again be employed to find an effective PDE, posed entirely
on the long scale, that describes the behaviour local to the standing
wave frequencies; the details are in \cite{antonakakis12a}, and the
asymptotics from the effective PDE are shown in Fig. \ref{fig:2Dplate}
as the dashed lines.
\section{High-contrast homogenization}\label{kc}
Periodic media offer a convenient tool in achieving control of electromagnetic waves, due to
their relative simplicity from the point of view of the manufacturing process, and due to the possibility of using
the Floquet-Bloch decomposition for the analysis of the spectrum of the wave equation in such media. The latter issue has received a considerable amount of interest in the mathematical community, in particular from the perspective of the inverse problem: how to achieve a given spectrum and/or density of states for the
wave operator with periodic coefficients by designing an appropriate periodic structure? While the Floquet-Bloch decomposition provides a transparent procedure for answering the direct question, it does not yield a straightforward way of addressing the inverse question posed above.
One possibility for circumventing the difficulties associated with the inverse problem is by viewing the given periodic structure as a high-contrast one, if this is possible under the values of the material parameters used. The idea of considering high-contrast composites within the context of homogenization appeared first in the work by Allaire \cite{Allaire}, which discussed the application of the two-scale convergence technique (Nguetseng \cite{Nguetseng}) to classical homogenization. A more detailed analysis of high-contrast composites,
along with the derivation of an explicit formula for the related spectrum, was carried out in a major study by Zhikov \cite{Zhikov2000}. One of the obvious advantages of using high-contrast composites, or of viewing a given composite as a high-contrast one, is the mere existence of such a formula for the spectrum. In the present section we focus on the results of the analysis of Zhikov, and on some more recent results for one-dimensional, layered, high-contrast periodic structures.
In order to take the shortest possible route to the high-contrast theory, we consider the equation of electromagnetic wave propagation in the transverse electric (TE) polarisation, when the magnetic field has the form
$(0,0,H),$ in the presence of sources with spatial density $f({\bf x}):$
\begin{equation}
-{\rm div}(\varepsilon^\eta)^{-1}\left({\bf x}/\eta\right)\nabla H({\bf x})
=\omega^2 H({\bf x})+f({\bf x}), \ \ \ \ {\bf x}\in\Omega\subset{\mathbb R}^2,
\label{TMeq}
\end{equation}
where we normalise the speed of light $c$ to 1 for simplicity, which amounts to taking
$\varepsilon_0\mu_0=1$ in section \ref{clashom}, and
where the magnetic permeability is assumed to be equal to unity throughout the medium
({\it i.e.} $\mu=\mu_0$), and the function
$f({\bf x})$ is assumed to vanish outside some set that has positive distance to the boundary of $\Omega.$
The inverse dielectric permittivity tensor $(\varepsilon^\eta)^{-1}({\bf y})$ is assumed in this section, for simplicity,
to be a scalar, taking values $\eta^\gamma I$ and $I,$ respectively, on $[0,1]^2$-periodic open sets $F_0$
and $F_1,$ such that $\overline{F_0}\cup\overline{F_1}=
{\mathbb R}^2.$ Here $\gamma$ is a
positive exponent representing a ``contrast'' between material properties of the two components of the
structure that occupy the regions $F_0$ and $F_1.$ In what follows we also assume that $F_0\cap[0,1]^2$ has a finite distance to the boundary of
the unit cell $[0,1]^2,$ so that the ``soft'' component $F_0$ consists of disjoint ``inclusions'', spaced
$[0,1]^2$-periodically from each other, while the ``stiff'' component $F_1$ is a connected subset of ${\mathbb R}^2.$
The matrix $\varepsilon^\eta$ represents the dielectric permittivity of the medium at a given point, however the analysis and conclusions of this section are equally applicable to acoustic wave propagation, which is the
context we borrow the terms ``soft'' and ``stiff'' from. The assumed relation between the values of dielectric
permittivity $\varepsilon^\eta$ (in acoustics, between the ``stiffnesses'' ) on the two components of the
structure is close to the setting of what has been described as ``ARROW'' fibres (anti-resonant reflecting optical waveguides) in the physics literature on electromagnetics, see {\it e.g.}
\cite{pcfbook}.
A simple dimensional analysis shows that if $\omega\sim 1$ then the soft inclusions are in resonance with the
overall field if and only if $\gamma=2,$ which is the case we focus on henceforth.
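To make the dimensional argument explicit, note that on the soft component the flux coefficient is $\eta^\gamma$ while each fast derivative contributes a factor $\eta^{-1},$ so that within an inclusion

```latex
-{\rm div}\,\bigl(\eta^{\gamma}\nabla H\bigr)\sim
-\eta^{\gamma}\eta^{-2}\Delta_{\bf y}H=-\eta^{\gamma-2}\Delta_{\bf y}H,
\qquad {\bf y}={\bf x}/\eta,
```

and this term balances $\omega^2H$ with $\omega\sim1$ precisely when $\gamma=2.$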
The above equation (\ref{TMeq}) describes the wave profile for a TE-wave in the cylindrical domain
$\Omega\times{\mathbb R}$, and it is therefore supplied with the Neumann condition $\partial H/\partial n=0$
\footnote{Neumann boundary conditions, {\it i.e.} infinitely conducting walls, are a good model for metals at
microwave frequencies, but much less so in the visible range of frequencies,
wherein absorption by metals needs to be taken into account. Note also that in the TM polarization case,
when the electric field takes the form $(0,0,E)$, our analysis applies {\it mutatis mutandis} by
interchanging the roles of $\varepsilon$ and $\mu$, $H$ and $E$, and Neumann boundary
conditions by Dirichlet ones.}
on the
boundary of the domain and with the Sommerfeld radiation condition $\partial H/\partial\vert x\vert-{\rm i}\omega H=o(\vert x\vert^{-1})$ as $\vert x\vert\to\infty.$
In line with the previous sections, we apply the method of two-scale asymptotic expansions to the above problem, seeking the solution $H=H(x_1,x_2)=H({\bf x})$ in the form (see also (\ref{sebeq01}) in Section \ref{clashom1})
\begin{equation}
H({\bf x})=H_0({\bf x},{\bf x}/\eta)+\eta H_1({\bf x},{\bf x}/\eta)+\eta^2 H_2({\bf x},{\bf x}/\eta)+...,
\label{twoscaleexp}
\end{equation}
where the functions involved are $[0,1]^2$-periodic with respect to the ``fast'' variable $y=x/\eta.$
Substituting the expansion (\ref{twoscaleexp}) into the equation (\ref{TMeq}) and rearranging the terms in the
resulting expression in such a way that terms with equal powers of $\eta$ are grouped together,
we obtain a sequence of recurrence relations for the functions $H_k,$ $k=0,1,...,$ from which they are
obtained sequentially. The first three of these equations can be transformed to the following system of equations for the leading-order term $H_0({\bf x},{\bf y})=u({\bf x})+v({\bf x},{\bf y}),$ ${\bf x}\in\Omega,$ ${\bf y}\in[0,1]^2:$
\begin{equation}
-{\rm div}\varepsilon_{\rm hom}^{-1}\nabla u({\bf x})=\omega^2\biggl(u({\bf x})
+\int_{F_0\cap[0,1]^2}v({\bf x},{\bf y})d{\bf y}\biggr)+f({\bf x}), \ \ \ {\bf x}\in\Omega,
\label{limit1}
\end{equation}
\begin{equation}
-\Delta_{\bf y} v({\bf x},{\bf y})
=\omega^2\bigl(u({\bf x})+v({\bf x},{\bf y})\bigr)
+f({\bf x}),\ \ \ \ y\in F_0\cap[0,1]^2,\ \ \ \ v({\bf x},{\bf y})=0,\ \ \ y\in{F_1}\cap[0,1]^2.
\label{limit2}
\end{equation}
These equations are supplemented by the boundary conditions for the function $u,$ of the same kind as in the
problems with finite $\eta.$ For the sake of simplifying the analysis, we assume that those inclusions that
overlap with the boundary of $\Omega$ are substituted by the ``main'', ``stiff'' material, where
$(\varepsilon^\eta)^{-1}=I.$
In the equation (\ref{limit1}), the matrix $\varepsilon_{\rm hom}$ is the classical homogenization matrix for
the perforated medium $\varepsilon F_1,$ see Section \ref{clashom} above. However, the properties of the system
(\ref{limit1})--(\ref{limit2}) are rather different to those for the perforated-medium homogenised limit, described
by the equation
$-{\rm div}\varepsilon_{\rm hom}^{-1}\nabla u({\bf x})=\omega^2u({\bf x})+f({\bf x}).$ As we shall see next, the two-scale structure of
(\ref{limit1})--(\ref{limit2}) means that the description of the spectra of the problems (\ref{TMeq}) in the limit as
$\eta\to0$ diverges dramatically from the usual moderate-contrast scenario.
The true value of the above limiting procedure is revealed by the statement of the convergence, as $\eta\to0,$ of the spectra of the original problems to the spectrum of the limit problem described above, see \cite{Zhikov2000}, and
by observing that the spectrum of the system (\ref{limit1})--(\ref{limit2}) is evaluated easily as follows. We write an eigenfunction expansion for $v({\bf x},{\bf y})$ as a function of ${\bf y}\in{F_0}\cap[0,1]^2:$
\begin{equation}
v({\bf x},{\bf y})=\sum_{k=0}^\infty c_k({\bf x})\psi_k({\bf y}),
\label{vform}
\end{equation}
where $\psi_k$ are the (real-valued) eigenfunctions of the Dirichlet problem $-\Delta\psi_k=\lambda_k\psi_k,$
${\bf y}\in{F_0}\cap[0,1]^2,$ arranged in the order of increasing eigenvalues $\lambda_k,$ $k=0,1,...,$ and orthonormalised according to the conditions $\int_{{F_0}\cap[0,1]^2}\vert\psi_k({\bf y})\vert^2d{\bf y}=1,$ $k=0,1,...,$ and
$\int_{{F_0}\cap[0,1]^2}\psi_k({\bf y})\psi_l({\bf y})d{\bf y}=0,$ $k\neq l,$ $k,l=0,1,...$ Substituting (\ref{vform}) into (\ref{limit2}),
we find the values for the coefficients $c_k,$ which yield an explicit expression for $v({\bf x},{\bf y})$ in terms of the function $u({\bf x}):$
\[
v({\bf x},{\bf y})=\bigl(\omega^2u({\bf x})+f({\bf x})\bigr)
\sum_{k=0}^\infty\Bigl(\int_{{F_0}\cap[0,1]^2}\psi_k({\bf y})d{\bf y}\Bigr)(\lambda_k-\omega^2)^{-1}\psi_k({\bf y}).
\]
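In more detail, substituting (\ref{vform}) into (\ref{limit2}), using $-\Delta\psi_k=\lambda_k\psi_k$ and projecting onto each $\psi_k$ with the orthonormality conditions above, gives

```latex
\lambda_k c_k({\bf x})=\omega^2c_k({\bf x})
+\bigl(\omega^2u({\bf x})+f({\bf x})\bigr)\int_{{F_0}\cap[0,1]^2}\psi_k({\bf y})\,d{\bf y},
\qquad\mbox{whence}\qquad
c_k({\bf x})=\frac{\bigl(\omega^2u({\bf x})+f({\bf x})\bigr)
\displaystyle\int_{{F_0}\cap[0,1]^2}\psi_k({\bf y})\,d{\bf y}}{\lambda_k-\omega^2}.
```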
Finally, using the last expression in the first equation in (\ref{limit1}) yields an equation for the function $u$ only:
\begin{equation}
-{\rm div}\varepsilon_{\rm hom}^{-1}\nabla u({\bf x})=\beta(\omega^2)\bigl(u({\bf x})+\omega^{-2}f({\bf x})\bigr), \ \ \ {\bf x}\in\Omega,
\label{homog1}
\end{equation}
where
the function $\beta,$ which first appeared in the work \cite{Zhikov2000}, is given by
\begin{equation}
\beta(\omega^2)=\omega^2\biggl(1+\omega^2
\sum_{k=0}^\infty\Bigl(\int_{{F_0}\cap[0,1]^2}\psi_k({\bf y})d{\bf y}\Bigr)^2(\lambda_k-\omega^2)^{-1}\biggr).
\label{betaformula}
\end{equation}
\begin{center}
\begin{figure}[h]
\centering
\includegraphics[scale=0.5]{shane_figure.pdf}
\caption{The plot of the function $\beta$ describing the spectrum of the problem (\ref{limit1})--(\ref{limit2}) subject to the boundary conditions.
The stop bands for the problem in the whole space ${\mathbb R}^2$ are indicated by the red intervals of the horizontal axis. The spectra of the problems (\ref{TMeq}) considered in the whole space converge, as $\eta\to0,$ to the closure of the complement of the union of the red intervals in the positive semiaxis.}
\label{shane_figure}
\end{figure}
\end{center}
The equation (\ref{homog1}) is supplemented by appropriate boundary conditions and/or conditions at infinity,
which are inherited from the $\eta$-dependent family, {\it i.e.} the Neumann condition at the boundary points
${\bf x}\in\partial\Omega$ and the radiation condition when $\vert {\bf x}\vert\to\infty.$ Clearly, the spectrum of this limit
problem consists of those values of $\omega^2$ for which $\beta(\omega^2)$ is in the spectrum of the
operator generated by the differential expression $-{\rm div}\varepsilon_{\rm hom}^{-1}\nabla$ subject to the same boundary conditions. For example, for the problem in the whole space ${\mathbb R}^2$
(describing the behaviour of TE-waves in a 3D periodic structure that is invariant in one specified direction)
this procedure results in a band-gap spectrum shown in Fig. \ref{shane_figure}. The end points of each pass band are found by
a simple analysis of the formula (\ref{betaformula}): the right ends of each pass band are given by those
eigenvalues $\lambda_k$ of the Dirichlet Laplacian on the inclusion ${F_0}\cap[0,1]^2$ that possess at least one eigenfunction with non-zero integral over ${F_0}\cap[0,1]^2$ (otherwise the corresponding term in
(\ref{betaformula}) vanishes), while the left ends of the pass bands are given by solutions to the polynomial equation of infinite order $\beta(\omega^2)=0.$ These points have a physical interpretation as eigenvalues of the
so-called electrostatic problem on the inclusion, see \cite{Zhikov2005}.
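As a concrete (hypothetical) illustration of these endpoints, suppose the inclusion $F_0\cap[0,1]^2$ is a square of side $a$, for which the Dirichlet eigenpairs are explicit: $\lambda_{nm}=\pi^2(n^2+m^2)/a^2$ with $\bigl(\int\psi_{nm}\bigr)^2=64a^2/(\pi^4n^2m^2)$, non-zero only for odd $n,m$. The sketch below (truncation order and the value $a=1/2$ are illustrative) evaluates a truncated version of (\ref{betaformula}) and brackets by bisection the left end of the second pass band, {\it i.e.} the first zero of $\beta$ above $\lambda_{11}$.

```python
# Truncated evaluation of Zhikov's beta function for a hypothetical square
# inclusion of side a in the unit cell.
import math

def beta(w2, a=0.5, N=21):
    """beta(omega^2), truncating the eigenvalue sum at odd indices <= N."""
    s = 0.0
    for n in range(1, N + 1, 2):        # only odd (n, m) contribute
        for m in range(1, N + 1, 2):
            lam = math.pi ** 2 * (n ** 2 + m ** 2) / a ** 2
            s += 64 * a ** 2 / (math.pi ** 4 * n ** 2 * m ** 2) / (lam - w2)
    return w2 * (1.0 + w2 * s)

# First pass band: beta > 0 up to lam_11 = 2 pi^2 / a^2 (~79.0 for a = 1/2);
# beta < 0 just above lam_11, until the next zero of beta, which is the left
# end of the second pass band. The next eigenvalue with non-zero integral is
# lam_13 ~ 394.8, so a sign change is guaranteed in between.
lam11 = 2 * math.pi ** 2 / 0.25
lo, hi = lam11 + 0.5, 390.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if beta(mid) < 0:
        lo = mid
    else:
        hi = mid
z = 0.5 * (lo + hi)                     # left end of the second pass band
```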
As in the case of classical, moderate-contrast, periodic media, the fact of spectral convergence offers significant computational advantages over tackling the equations (\ref{TMeq}) directly: as $\eta\to0$ the latter becomes increasingly demanding, while the former requires a single numerical procedure that serves all $\eta$ once the homogenised matrix $\varepsilon_{\rm hom}$ and several eigenvalues $\lambda_k$ are calculated. A significant new feature, however, as compared to the classical case,
is the fact of an infinite set of stop bands opening in the limit as $\eta\to0,$ which are easily controlled by the explicit description of the band endpoints. This immediately yields a host of applications of the above results for the design
of band-gap devices with prescribed behaviour in the frequency interval of interest.
The theorem on spectral convergence for problems described by the equation (\ref{TMeq}) is proved in \cite{Zhikov2000} under the assumption of connectedness
of the domain $F_1$ occupied by the ``stiff'' component, via a variant of the extension procedure from $F_1$ to the whole of ${\mathbb R}^2$ for function sequences whose energy scales as $\eta^{-2}$ (or, equivalently, finite-energy
sequences for the operator prior to the rescaling ${\bf x}/\eta={\bf y}$). In the more recent works \cite{CCG}, \cite{CC}, this assumption is dropped in a theorem about spectral convergence for a general class of high-contrast operators, via a version of the two-scale asymptotic analysis akin to
(\ref{twoscaleexp}), for the Floquet-Bloch components of the resolvent of the original family of operators following the re-scaling
${\bf x}/\eta={\bf y}.$
In particular, in \cite{CCG} a one-dimensional high-contrast model is analysed, which in 3D corresponds to a
stack of dielectric layers aligned perpendicular to the direction of the magnetic field. Here the procedure described above for the 2D grating fails to yield a satisfactory
limit description as $\eta\to0,$ {\it i.e.} a description where the spectra of problems for finite $\eta$ converge to the spectrum of the limit problem described by the system (\ref{limit1})--(\ref{limit2}) as $\eta\to0.$ A more refined analysis of the structure of the related $\eta$-dependent family results in a statement of convergence to the set described by
the inequalities
\begin{equation}
-1\le\frac{1}{2}(\alpha-\beta+1)\sqrt{\lambda}\sin\Bigl(\sqrt{\lambda}(\alpha-\beta)\Bigr)
+\cos\Bigl(\sqrt{\lambda}(\alpha-\beta)\Bigr)\le1,
\label{limitspectrumformula}
\end{equation}
where $\alpha$ and $\beta$ denote the end-points of the inclusion in the unit cell, {\it i.e.}
$F_0\cap[0,1]^2=(\alpha,\beta)\times[0,1].$
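The band structure implied by (\ref{limitspectrumformula}) is easy to examine numerically. The sketch below uses the illustrative values $\alpha=1/4,$ $\beta=3/4$ of Fig. \ref{limitspectrum}, for which the left-hand side reduces to $f(\omega)=\cos(\omega/2)-\omega\sin(\omega/2)/4$ with $\omega=\sqrt{\lambda}$; the grid resolution is chosen for illustration only.

```python
# The 1D high-contrast limit spectrum: omega = sqrt(lambda) belongs to the
# spectrum iff |f(omega)| <= 1. Illustrative inclusion (alpha, beta) = (1/4, 3/4).
import math

def f(om, alpha=0.25, beta=0.75):
    d = alpha - beta                      # = -1/2 for the illustrative values
    return 0.5 * (d + 1.0) * om * math.sin(om * d) + math.cos(om * d)

def in_spectrum(om):
    return abs(f(om)) <= 1.0

# Scan a frequency grid and collect the pass bands as intervals of omega.
grid = [i * 1e-3 for i in range(1, 20000)]
bands, start = [], None
for om in grid:
    if in_spectrum(om) and start is None:
        start = om
    elif not in_spectrum(om) and start is not None:
        bands.append((start, om))
        start = None
```

The collected intervals reproduce the alternation of pass and stop bands indicated by the bold segments in Fig. \ref{limitspectrum}.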
Similarly to the spectrum of the 2D high-contrast problem, described by the function $\beta,$ the limit spectrum of the 1D problem has a band-gap structure, shown in Fig. \ref{limitspectrum}, however the description of the location of the bands is different in
that it is no longer obtained from the inequality $\beta>0,$ where $\beta$ is the 1D analogue of (\ref{betaformula}).
Importantly, the asymptotic behaviour of the density of states function as $\eta\to0$ is also very different
in the two cases. One can show that the family of resolvents for the problems (\ref{TMeq}) converges, up to a
suitable unitary transformation, to the resolvent of a certain operator whose spectrum is given exactly by
(\ref{limitspectrumformula}), see \cite{CC}. The rate of convergence is rigorously shown to be $O(\eta),$ as is anticipated by the expansion (\ref{twoscaleexp}).
\begin{center}
\begin{figure}[h]
\centering
\includegraphics[scale=0.5]{SIAMfigure.pdf}
\caption{The square root of the limit spectrum for a 1D high-contrast periodic stack, in TE polarisation.
The oscillating solid line is the graph of the function
$f(\omega)=\cos(\omega/2)-\omega\sin(\omega/2)/4$ in (\ref{limitspectrumformula}) with $\alpha=1/4,$
$\beta=3/4.$ The square root of the spectrum is the union of the intervals indicated by bold lines. }
\label{limitspectrum}
\end{figure}
\end{center}
The above 1D result is generalised to the case of an oblique incidence of an electromagnetic wave on the same 3D layered structure. Suppose that $x_2$ is the coordinate across the stack. Then, assuming for simplicity that the wave vector $(\varkappa,0,0)$
is parallel to the direction $x_1,$ it can be shown that all three components of the magnetic field are non-vanishing, with the magnetic component $H=H_3$ satisfying the equation
\[
-\Bigl((\varepsilon^\eta)^{-1}(x/\eta)H'(x)\Bigr)'=\Bigl(\omega^2-(\varepsilon^\eta)^{-1}(x/\eta)\varkappa^2\Bigr)H(x),
\]
subject to the same boundary conditions as before.
The modified limit spectrum for this family is given by those $\omega^2$ for which
({\it cf.} (\ref{limitspectrumformula}))
\begin{equation}
-1\le\frac{1}{2}(\alpha-\beta+1)\biggl(\omega-\frac{\varkappa^2}{\omega}\biggr)\sin\Bigl(\sqrt{\lambda}(\alpha-\beta)\Bigr)
+\cos\Bigl(\sqrt{\lambda}(\alpha-\beta)\Bigr)\le1,\ \ \ \ \omega>0,
\label{limitspectrumformulamoodified}
\end{equation}
where, as before, $\alpha$ and $\beta$ describe the ``soft" inclusion layer in the unit cell, see \cite{CCG}.
The set of $\omega$ described by the inequalities (\ref{limitspectrumformulamoodified})
is similar to that shown in Figure \ref{limitspectrum}, the only significant difference between
the two cases being a low-frequency gap opening near $\omega=0$ for
(\ref{limitspectrumformulamoodified}).
\section{Conclusion and further applications of grating theory}
To conclude this chapter, we would like to stress that advances in homogenization theory over the past
forty years have been fuelled by research in composites \cite{milton02a}.
The philosophy of the necessity for rigour expressed by Lord Rayleigh in 1892 concerning the Lorentz-Lorenz
equations (also known as Maxwell-Garnett formulae) can be viewed as the foundation act of homogenization:
`In the application of our results to the electric theory of light we contemplate a medium interrupted by spherical, or
cylindrical, obstacles, whose inductive capacity is different from that of the undisturbed medium. On the other hand,
the magnetic constant is supposed to retain its value unbroken. This being so, the kinetic energy of the electric
currents for the same total flux is the same as if there were no obstacles, at least if we regard the wavelength as infinitely great.'
In this paper, John William Strutt, the third Lord Rayleigh \cite{rayleigh}, was able to solve Laplace's equation
in two dimensions for rectangular arrays of cylinders, and in three-dimensions for cubic lattices of spheres.
The original proof of Lord Rayleigh suffered from a conditionally convergent sum in order to compute the
dipolar field in the array. Many authors in the theoretical physics and applied mathematics communities
proposed extensions of Rayleigh's method to avoid this drawback. Another limitation of Rayleigh's
algorithm is that it ceases to hold when
the volume fraction of inclusions increases. So-called multipole methods have
been developed in conjunction with lattice sums
in order to overcome such obstacles, see {\it e.g.} \cite{mmp}
for a comprehensive review of these methods.
In parallel to these developments, the quasi-static
limit for gratings has been the subject
of intensive research, one might
cite \cite{ross1} and \cite{petit}
for important contributions in
the 1980s, and \cite{popov}
for a comprehensive review
of the modern theory of
gratings, including a close
inspection of the homogenization
limit.
\begin{figure}
\begin{center}
\scalebox{0.2}{\includegraphics{figlast.pdf}}
\caption{Superlens application of grating:
(a) A time harmonic source at frequency $0.473$ displays an image through a square array of
square inclusions; (b) Effective magnetism versus frequency
using (\ref{effmag}) for square inclusions of relative permittivity $100$ with sidelength
$a=0.5d$ in matrix of relative permittivity $1$ (grating pitch $d=0.1$);
Negative values of the effective magnetism occur in the frequency region $[0.432,0.534]$.
}
\label{figlast}
\end{center}
\end{figure}
Interestingly, in the pure mathematics community, Zhikov's work on high-contrast homogenization
\cite{Zhikov2000} has had important applications
in metamaterials, with the interpretation of his homogenized
equations in terms of effective magnetism first
put forward by O'Brien and Pendry \cite{obrienpendry},
and then by Bouchitt\'e and Felbacq
\cite{bouchitte}, although these
authors did not seem to be
aware at that time of
Zhikov's seminal
paper \cite{Zhikov2000}.
In order to grasp the
physical importance
of (\ref{homog1})-(\ref{betaformula}),
we consider the case of square inclusions of sidelength $a=d/2$, where
$d$ is the pitch of a bi-periodic grating. The eigenfunctions in (\ref{betaformula}) are
$\psi_{nm}({\bf y})=2\sin(n\pi y_1)\sin(m\pi y_2)$
and the corresponding eigenvalues
are $k^2_{nm}=\pi^2(n^2+m^2)$.
The right-hand side in the
homogenized equation (\ref{homog1})
can then be interpreted
in terms of effective magnetism:
\begin{equation}
\mu_{hom}(k)=1+\frac{64a^2}{\pi^4}\sum_{(n,m)\,{\rm odd}} \frac{k^2}{n^2m^2(k^2_{nm}/a^2-k^2)} \; .
\label{effmag}
\end{equation}
This function can be computed numerically for instance with Matlab
and demonstrates that negative values can be achieved for
$\mu_{hom}$ near resonances, see Fig. \ref{figlast}(b).
This allows for superlensing via negative refraction,
as shown in Fig. \ref{figlast}(a).
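A minimal numerical sketch of (\ref{effmag}) is given below; the truncation order and the sample wavenumbers are illustrative, and the inclusion side $a$ is measured in units of the pitch. It confirms that $\mu_{hom}$ is a small positive correction to unity well below the first resonance and dips to large negative values just above it.

```python
# Truncated evaluation of the effective magnetism mu_hom(k) for square
# inclusions of side a (in units of the pitch); k^2_nm = pi^2 (n^2 + m^2).
import math

def mu_hom(k, a=0.5, N=21):
    s = 0.0
    for n in range(1, N + 1, 2):        # only odd (n, m) contribute
        for m in range(1, N + 1, 2):
            k2nm = math.pi ** 2 * (n ** 2 + m ** 2)
            s += k ** 2 / (n ** 2 * m ** 2 * (k2nm / a ** 2 - k ** 2))
    return 1.0 + 64 * a ** 2 / math.pi ** 4 * s

# The first resonance sits at k = sqrt(k^2_11) / a = 2*sqrt(2)*pi / (2a);
# sampling just above it exhibits the negative-mu region.
```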
Finally, we would like to point out that high-order homogenization
techniques \cite{kirill2004} suggest that most gratings display
some artificial magnetism and chirality when the wavelength
is no longer much larger than the periodicity \cite{boriseb}.
We hope we have convinced the reader that there is a
whole new range of physical effects in gratings which
classical, high-frequency and high-contrast
homogenization theories can capture.
\let\cleardoublepage\clearpage
\renewcommand\bibname{\normalsize{\hspace{12 pt}References:}}
\section{Introduction}
\label{intro}
Complex plasma is characterized by the presence of micron-sized charged dust particles in a normal electron-ion plasma. Systems of this type have attracted great attention in the plasma physics community due to their natural occurrence in different places in our universe, i.e. in planetary rings, cometary tails, white dwarf matter and interstellar clouds etc. They also exist in man-made systems like plasma processing and plasma etching equipment in industry, fusion devices, rocket exhausts etc., and because of such wide occurrence it is important to characterize the different properties and novel features of dusty plasma.
Since the macroscopic dust particles can be visualized and tracked at the particle level, dusty plasma has been treated as a good experimental medium to study phase transitions\cite{morf}, transport properties\cite{ratn,liu}, crystal formation\cite{manish,htho} and other collective phenomena\cite{piep,kmp}. Neutral dust particles, when inserted into a laboratory plasma, become highly charged due to different charging mechanisms like plasma currents, photoelectric effects, secondary emission etc. Due to the higher mobility of the electrons compared with the ions, the dust particles acquire a high negative charge. In dusty plasma experiments, charged micro-particles levitate in the sheath at the lower electrode in a gas discharge and form Yukawa systems, where the interaction potential between the particles is of the form $\phi(r)\propto Q $exp$(-r/\lambda_D)/r$ ($r$ is the separation between two particles and $Q$ is the charge of each particle), which expresses a Coulomb repulsion that is exponentially suppressed with screening length $\lambda_D$. When the temperature exceeds the melting temperature, a Yukawa system shows the liquid state of matter, and the generation of shear flow in such a configuration has enabled the measurement of the shear viscosity. In such a system Nosenko and Goree\cite{nose} measured shear viscosity by using two parallel but counter-propagating laser beams to generate a shear flow in a planar Couette configuration. Using molecular dynamics simulation in a 2D Yukawa liquid, Liu and Goree\cite{binl} have reported the dependence of shear viscosity on the temperature of random thermal motion of dust particles through the Coulomb coupling parameter $\Gamma$. In this context, we should mention the simulation works which have predicted the signature of non-Newtonian behaviour\cite{dnk}.
Investigations of shear flows in a complex plasma fluid by Ivlev et al.\cite{ivle} have revealed the signature of non-Newtonian behaviour, with the viscosity coefficient varying with the velocity gradient. The experiment was done with a gas-induced shear flow for different discharge currents and also by applying laser beams of different power. This has enabled measurement of the shear viscosity and confirmation of the non-Newtonian property over a considerable range of shear rates. Gavrikov et al. have also reported\cite{jpps} this phenomenon in a dusty plasma liquid.
In fluid systems, it is well known that an inhomogeneous bounded shear flow can drive the KH instability, which has been widely studied in experiments and also investigated theoretically with the famous Orr-Sommerfeld equation in the twentieth century \cite{kundu,rpnt}. In an inviscid parallel flow without any point of inflection in the velocity profile (like a parabolic flow), the disturbance field cannot extract energy from the basic shear flow, resulting in a stable flow; the onset of viscosity, however, makes it possible to draw energy, and hence viscosity can destabilize such a flow. The non-Newtonian property of complex plasma shows both shear thinning and shear thickening behaviour depending upon the values of the shear rate and the parameter regime, so it is expected that these properties may have opposite effects on the KH instability of an inhomogeneous parabolic-type profile. Motivated by these ideas, we have studied the effect of the velocity-shear-rate dependent viscosity on the growth rates and their dispersion by using the standard matrix eigenvalue technique.
This paper is organized in the following manner: Section (II) states the system and its equilibrium. The equilibrium momentum equation is solved numerically to get the equilibrium velocity profile and the corresponding non-Newtonian viscosity, which is also a function of space through its dependence on the velocity shear rate. Section (III) contains the derivation of the associated linear equations. Section (IV) contains the description of the nonlocal analysis, including the numerical procedure for obtaining the eigenvalues. The results showing the effect of the shear thinning and thickening property on the KH instability are presented in this section. Finally a conclusion is drawn in Section (V).
\section{System and its Equilibrium} \label{sec:bas}
In a discharge plasma, the dust particles forming the dust cloud levitate vertically (z-direction) in the presence of an external vertical electric field which balances the gravity of the dust particles. A bounded equilibrium flow is generated along the axis of the cylindrical vessel (y-direction) with variation in the perpendicular x-direction. In our analysis, we consider the flow region as a slab ($-L < x < L$) with the maximum flow speed in the middle of the discharge tube ($x = 0$), and the velocity vanishes along the boundary. In such an inhomogeneous charged dust flow, a small wavy disturbance can be unstable, which leads to the well known KH instability. Due to the non-Newtonian property of the dusty plasma, the unperturbed flow deviates from the parabolic shape. The non-Newtonian viscosity has a specific functional dependence on the equilibrium shear flow rate, and for analytical purposes a proper mathematical model is required in this context. In the case of complex plasma, the experimentally verified model for the kinematic viscosity $\nu(\gamma)$ with shear rate $\gamma$, given in Ref.\cite{ivle}, can be written as
\bee
\nu(\gamma)=\frac{2(1+\epsilon)}{\sqrt{1+4\gamma^2-4\epsilon \gamma^4}+1-2\epsilon\gamma^2}\bar{\nu},
\label{model}
\ene
where $\bar{\nu}$ is the value of the Newtonian viscosity and $\gamma$ is the equilibrium velocity shear rate, defined as $\gamma=dv_0/dx$ and normalized by $({\beta v^2_{T_0}/\bar{\nu}})^{1/2}$. Here $\beta $ is the friction rate and $v^2_{T_0}$ is the thermal velocity. The
other parameter $\epsilon$, which characterizes the non-Newtonian property, is given as
$\epsilon = ({\cal A}/{\cal B}) \left(T_0/T_m\right)^{\alpha+\tau}$ with $\alpha=\tau=1$ as indicated in \cite{ivle}. Here, $T_0$ is the temperature at zero shear rate, $T_m$ is the melting temperature and $\cal A, \cal B$ are weak functions of density,
as given in the above mentioned reference. Since we are interested in studying the fluid properties of complex plasma, in our analysis $T_0>T_m$ \cite{saigo}. In the limit $\gamma,\epsilon \rightarrow 0$, the model converges to the Newtonian viscosity limit $\nu \rightarrow \bar{\nu}$.
A weakly coupled unmagnetized dusty plasma is completely described by the three basic equations (the continuity equation obeying mass conservation, the Navier-Stokes equation showing the momentum balance and the Poisson equation which connects the potential fluctuation with the density variation), which are the following:
\bee
\fpar{n}{t} + \nabla \cdot (n {\bf v}) = 0,
\label{continuity}
\ene
\bee
\rho \left(\fpar{}{t} + {\bf v}\cdot\ \nabla \right) {\bf v} +n e Z {\bf E} + c_{d}^{2} \nabla \rho = \fpar{\sigma_{ij}}{x_j},
\label{momentum}
\ene
\bee
\nabla \cdot {\bf E} = 4 \pi e(n_e + Z n -n_i),
\label{poission}
\ene
where $c_{d}=\sqrt{T_d \mu_d \gamma_d/m}$, $T_d$ is the dust temperature due to random thermal motion, $\mu_d$ and $\gamma_d$ are respectively the compressibility factor and the adiabatic index \cite{pkaw}, and $m$ is the dust mass. The electric field is denoted as ${\bf E}$, $n$ is the dust number density (the mass density is $\rho = n m$), ${\bf v}$ is the dust fluid velocity and $Z$ denotes the number of electronic charges on a dust particle.
Here, we consider a low neutral gas pressure so that dust-neutral collisions become less effective and may be neglected. Usually, by colliding with neutrals, dust particles lose the free energy essential for the instability, and hence this leads to collisional damping.
We are interested in studying low frequency waves ($\omega \ll kv_{Te}, kv_{Ti}$, where
$v_{Te}, v_{Ti}$ are the thermal velocities of electrons and ions respectively), hence the electron and ion dynamics are considered to obey the Boltzmann relation. For the electrostatic mode (${\bf E}=-\nabla \varphi$), the electron density ($n_e$) and the ion density ($n_i$) are connected with the electrostatic potential ($\varphi$) as,
\bee
n_e = n_{e0} \exp(e\varphi/T_e), ~~~~~~~~~~~~~~~~~ n_i = n_{i0} \exp(-e\varphi/T_i),
\label{poiss}
\ene
where $T_e$ and $T_i$ represent the electron and ion temperatures measured in Boltzmann units, and $n_{e0}$ and $n_{i0}$ are the electron and ion densities at zero potential.
The viscous stress tensor is expressed as \cite{stei},
\[
\sigma_{ij} = \eta(\gamma)\left[ \left( \fpar{v_i}{x_j} + \fpar{v_j}{x_i}\right) - \frac{2}{3} \delta_{ij} \left( \nabla \cdot {\bf v}\right) \right],
\]
where $\eta(\gamma)$ is a non-Newtonian viscosity coefficient and the effect of bulk viscosity is not considered. In the ($x-y$) plane $\sigma$ is a $2 \times 2$ matrix with elements $\sigma_{xx}$, $\sigma_{xy}$, $\sigma_{yx}$ and $\sigma_{yy}$.
The coefficient of shear viscosity $\eta$ is constant for a Newtonian fluid, but for a non-Newtonian fluid it takes the form $\eta(\gamma)$, where $\gamma = \sqrt{II/2}$. The second scalar invariant of the rate-of-strain tensor \cite{emit} is of the form
\bee
II = \sum_i \sum_j \left(\frac{\partial v_{i}}{\partial x_{j}} + \frac{\partial v_{j}}{\partial x_{i}}\right)\left(\frac{\partial v_{i}}{\partial x_{j}} + \frac{\partial v_{j}}{\partial x_{i}}\right)
\nonumber
\ene
where the suffixes $i,j$ run over $x,y$.
In equilibrium, the density and temperature are assumed to be constant and a constant electric field ($E_0$) is directed along the y-direction. Dust particles drift along the y-direction with a bounded equilibrium velocity profile that is inhomogeneous along the x-direction (perpendicular to the electric field). For homogeneous density and pure shear flow, the continuity equation is automatically satisfied, and Poisson's equation leads to quasi-neutrality of the system, i.e., $n_{e0} + Z n_0 -n_{i0}=0$. From Eq.~(\ref{momentum}), the y-component of the equilibrium momentum equation can be written as
\bee
\fdar{}{x}\left[ \eta_0(\gamma) \gamma\right] = e Z n_0 E_0,
\label{eqlb}
\ene
where $\gamma = dv_0/dx$. In the above equation the subscript `$0$' indicates equilibrium quantities. In experiments \cite{ivle}, a gas-induced flow is used to generate the equilibrium shear flow of dust particles. To take this experimental condition into account, following Ref.~\cite{ivle} we have included an additional space-dependent term $A x^2$ on the right-hand side of Eq.~(\ref{eqlb}), where $A$ is the gas drag coefficient.
The right-hand side of Eq.~(\ref{eqlb}) then becomes $F_0(x)=A x^2 + e Z n_0 E_0$.
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\resizebox{80mm}{!}{\includegraphics{fig1}} &
\resizebox{85mm}{!}{\includegraphics{fig2}} \\
\end{tabular}
\caption{(Color online) In the left figure, equilibrium flow profiles of the dust are plotted for different $\epsilon$. The solid red curve shows the same in the Newtonian limit ($\epsilon,\gamma \rightarrow 0$). In the right figure, the non-Newtonian viscosity is plotted against the unperturbed velocity shear rate. For $\epsilon = 0.1$, the shear thinning property persists until $\gamma = 0.7$, after which shear thickening begins. As $\epsilon$ increases, the property changes from shear thinning to shear thickening, and for $\epsilon = 0.8$ the shear thinning property almost ceases.}
\label{Fvis}
\end{center}
\end{figure}
For Newtonian viscosity, the solution of Eq.~(\ref{eqlb}) gives a parabolic velocity profile, i.e., $v_{0y}(x) = \bar{v}(1-(x/L)^2)$, where $L$ is the half width of the shear layer. In our analysis, the equilibrium equation (\ref{eqlb}) is solved with the non-Newtonian viscosity model (\ref{model}) for the equilibrium force term $F_0(x)$. Here, the \texttt{fzero} function of MATLAB is used to solve Eq.~(\ref{eqlb}) numerically for $\gamma$ at each discrete point of the space variable $x$ in the range $[-1,1]$. The array of $\gamma$ values is then integrated to obtain the equilibrium velocity profile, keeping in mind the boundary conditions $v_0 = 0$ for $x = \pm 1$ and $dv_0/dx|_{x=0} = 0$. With the numerical values of $\gamma$, the non-Newtonian viscosity can be calculated from the model (\ref{model}). In Fig.~\ref{Fvis}, both the velocity and the corresponding viscosity are plotted for different values of $\epsilon$. In the non-Newtonian regime, the flow profiles deviate from the parabolic flow of the Newtonian limit. The viscosity is plotted against the shear rate for different values of $\epsilon$, which clearly shows that the fluid property changes from shear thinning to shear thickening with increasing $\epsilon$.
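The equilibrium solve described above can be sketched in a few lines. The snippet below is a minimal stand-in (in Python rather than MATLAB): a hand-rolled bisection plays the role of \texttt{fzero}, the viscosity law $\eta(\gamma)=\bar{\nu}(1+\epsilon\gamma^2)$ is a hypothetical placeholder for the paper's model (\ref{model}), and the constants \texttt{A} and \texttt{F\_E} (standing for $e Z n_0 E_0$) are purely illustrative.

```python
def eta(gamma, nu_bar=1.0, eps=0.1):
    """Illustrative shear-dependent viscosity; eps -> 0 recovers nu_bar.
    This is a stand-in, not the paper's actual viscosity model."""
    return nu_bar * (1.0 + eps * gamma**2)

def solve_gamma(G, lo=-50.0, hi=50.0, tol=1e-12):
    """Bisection for eta(g)*g = G (monotone in g), mimicking MATLAB's fzero."""
    f = lambda g: eta(g) * g - G
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

def equilibrium_profile(n=201, A=-0.5, F_E=-1.0):
    """Solve d/dx[eta(gamma)*gamma] = F0(x) = A x^2 + F_E on [-1, 1].
    Integrating once with gamma(0) = 0 gives eta(gamma)*gamma = G(x), the
    antiderivative of F0; gamma is root-found pointwise, then integrated by
    the trapezoidal rule so that v0(-1) = 0.  Signs of A, F_E are chosen so
    the profile is positive with its maximum at x = 0, like the parabolic
    Newtonian limit."""
    xs = [-1.0 + 2.0 * i / (n - 1) for i in range(n)]
    gammas = [solve_gamma(A * x**3 / 3.0 + F_E * x) for x in xs]
    v0, prof = 0.0, [0.0]
    for i in range(1, n):
        v0 += 0.5 * (gammas[i] + gammas[i - 1]) * (xs[i] - xs[i - 1])
        prof.append(v0)
    return xs, gammas, prof
```

Because $F_0(x)$ is even, $\gamma(x)$ is odd and the second boundary condition $v_0(1)=0$ is satisfied automatically by the integration, matching the symmetric profiles of Fig.~\ref{Fvis}.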
\section{Linear Equations}\label{sec:ana}
We carry out a linear stability analysis for small-amplitude waves, so that higher-order terms in the perturbation can be ignored under the assumption $|v_x|,|v_y| \ll |v_0|$, where $v_x$ and $v_y$ are the components of the small disturbance in the dust flow. The total flow is the sum of the equilibrium flow and a small perturbation:
\[
{\bf v}(x,y,t) = [v_0(x) +v_{y}(x,y,t)] \hat e_{y}+ v_{x}(x,y,t)\hat e_{x}.
\]
Space is normalized by the Debye length $\lambda_{D} = \sqrt{ T_i/4 \pi Z n_0 e^2}$ and time by the inverse dust plasma frequency $\omega_{pd}^{-1}$, where $\omega_{pd}=\sqrt{4 \pi n_0 Z^{2} e^2/m}$. All densities, i.e., electron, ion, and dust, are normalized by $n_{0}$, and the electrostatic potential $\varphi$ by $T_i/e$ ($\phi \equiv e \varphi/T_i$). For small potential fluctuations ($\phi \ll 1$), the normalized Boltzmann relations (\ref{poiss}) can be expressed as,
\[
n_e = \frac{n_{eo}}{n_{0}}\left( 1 + \phi \frac{T_i}{T_e} \right), ~~~~~~~~~ n_i = \frac{n_{io}}{n_{0}}\left( 1 - \phi \right).
\]
The linearized dimensionless Poisson's equation is written as,
\bee
\nabla^2 \phi = n + \alpha \phi,
\label{lin_pois}
\ene
where $\alpha = \left(n_{e0} T_i + n_{i0} T_e\right)/(n_0 Z T_e)$.
The linear continuity equation in normalized variables can be written as,
\bee
\left(\fpar{}{t} +v_0 \fpar{}{y} \right)n+ \fpar{v_x}{x} + \fpar{v_y}{y} = 0.
\label{lin_cont}
\ene
The $x$ and $y$ components of the linearized dimensionless momentum equation of the dust fluid are respectively given by,
\bee
\left(\fpar{}{t} + v_0 \fpar{}{y}\right)v_x - \fpar{\phi}{x} + c_{d}^2 \fpar{n}{x} =\eta_0 \nabla^2 v_x + \left(\frac{\eta_0}{3} \fpar{}{x} - \frac{2}{3} \eta'_0 v''_0 \right) \left( \nabla \cdot {\bf v} \right)
+ \eta'_0 v'_0 \fpar{}{y} \left( \fpar{v_x}{y}+\fpar{v_y}{x} \right) + 2 \eta'_{0} v''_{0} \fpar{v_x}{x}
\label{vx}
\ene
and
\bee
\left(\fpar{}{t} + v_0 \fpar{}{y}\right)v_y+v_x\fdar{v_0}{x}-\fpar{\phi}{y} + c_{d}^2\fpar{n}{y} = \eta_{0} \nabla^2 v_y + \frac{\eta_0}{3} \fpar{}{y}\left(\nabla \cdot {\bf v}\right)
+ \left\{ 2 \eta'_0 v''_{0} + \eta''_0 v''_0 v'_0 + \eta'_{0} v'_{0} \fpar{}{x} \right\} \left( \fpar{v_x}{y}+\fpar{v_y}{x} \right)
\label{vy}
\ene
where $\eta_0$ is normalized by $\omega_{pd} \lambda_{D}^{2} m n_0$. Here $\eta'_0$ and $\eta''_0$ denote the first and second derivatives of $\eta_0$ with respect to $v'_0$, where $v'_0 = dv_0/dx$.
\section{Eigenvalue Analysis}\label{sec:eig}
It is not possible to carry out a Fourier analysis along the direction of inhomogeneity. Thus the perturbed variables $v_x$, $v_y$, $\phi$ and $n$ take the form $ n(x,y,t) = n(x) e^{i(k y -\omega t)} $. The linearized equations (\ref{lin_pois}-\ref{vy}) can then be expressed as the following four normalized ordinary differential equations in $x$:
\beea
k v_0(x) n + k v_y - i \fdar{v_x}{x} = \omega n,
\label{pert_con}
\enea
\beea
n + \left( \alpha - \nder{2}{}{x} + k^2 \right) \phi = 0,
\label{pert_pois}
\enea
\beea
-ic_{d}^2 \fdar{n}{x} + i \fdar{\phi}{x}
+ \left[ k v_0 + i \eta_0 \left( \nder{2}{}{x} - k^2\right) - i\eta'_0 v'_0 k^2 + 2 i \eta'_0 v''_0 \fdar{}{x} + i
\left(\frac{\eta_0}{3} \fdar{}{x} - \frac{2}{3}\eta'_0 v''_0\right) \fdar{}{x}\right]v_x\nonumber \\
+ \left[ -\eta'_0 v'_0 \fdar{}{x} - \left(\frac{\eta_0}{3}\fdar{}{x}
- \frac{2}{3}\eta'_0 v''_0\right) \right] k v_y= \omega v_x.
\label{pert_vx}
\enea
\beea
k c_{d}^2 n - k \phi +\left[ -i\frac{ v'_0}{k} -\left(2 \eta'_0 v''_0 + \eta''_0 v''_0 v'_0 + \eta'_0v'_0\fdar{}{x}\right)-\frac{\eta_0}{3} \fdar{}{x}\right] kv_x + \nonumber\\
\left[ k v_0 + i \eta_0 \left( \nder{2}{}{x} - k^2 \right)-\frac{i}{3} k^2 \eta_0
+i\left(2 \eta'_0 v''_0 + \eta''_0 v''_0 v'_0 + \eta'_0v'_0\fdar{}{x}\right)\fdar{}{x} \right] v_y = \omega v_y,
\label{pert_vy}
\enea
\begin{figure}
\centering
\includegraphics[width=4in,height=3in]{fig3}
\caption{(Color online) The growth rate of the instability is plotted against the wave number for different values of the parameter $\epsilon$ in the incompressible limit. The solid red curve shows the Newtonian limit. For $\epsilon = 0.2$, the growth rate is close to that of the Newtonian limit. }
\label{incom}
\end{figure}
We have solved these four coupled linear eigenvalue equations and investigated the growth rate of the KH instability with the variation of different parameters such as the Mach number ($M = |v_0|/c_d$), the non-Newtonian parameter ($\epsilon$), and the wave number ($k$).
First we investigate the incompressible limit ($c_d \gg |v_0|$), where the density and potential fluctuations are negligibly small, so that Eq.~(\ref{pert_pois}) becomes trivial and the continuity equation (\ref{pert_con}) reduces to $k v_y - i d v_x/d x = 0$. We have carried out the matrix eigenvalue analysis using the standard eigenvalue subroutine (\texttt{eig}) in MATLAB after proper discretization of the above equations. The following central difference scheme has been used for the discretization
\begin{center}
\beea
\nder{2}{\phi}{x} = \frac{\phi_{i+1}-2\phi_{i}+\phi_{i-1}}{\Delta^2},~~~~~~~~~~~~~~~~~~~\nonumber\\
\nder{}{\phi}{x} = \frac{\phi_{i+1}-\phi_{i-1}}{2\Delta},~~~~~~~~~~~~~~~~~~~~~~~~~~\nonumber
\enea
\end{center}
where $\Delta$ is the grid spacing. In Fig.~\ref{incom}, the growth rate is plotted against the wave number ($k$) for different values of $\epsilon$. The solid red curve indicates the Newtonian regime and the other curves correspond to $\epsilon = 0.05, 0.1, 0.2, 0.3$, and $0.8$. The kinematic viscosity in the Newtonian limit is taken as $1.538\times10^{-4}\ m^2/s$. Different values of $\epsilon$ correspond to different functional dependences of the viscosity on the flow shear rate. In Fig.~\ref{Fvis}, we have shown how the viscosity coefficient changes its character from shear thinning to shear thickening with increasing plasma temperature $T_0$. As the value of $\epsilon$ is increased to $0.3$, the growth rate falls below that of the Newtonian case and the shear thickening property overpowers the effect of shear thinning. Hence, we can summarize that the shear thinning property enhances the instability whereas the shear thickening property plays a stabilizing role in the KH instability. For $\epsilon = 0.8$, the shear thickening effect stabilizes the medium: here the variation of viscosity with shear rate acts to suppress the KH instability.
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\resizebox{90mm}{!}{\includegraphics{fig4}} &
\resizebox{90mm}{!}{\includegraphics{fig5}} \\
\end{tabular}
\caption{(Color online) In the left figure, two sets of curves are shown for two different Mach numbers ($M$), including and excluding the dispersion term in Poisson's equation, for $\epsilon=0.1$. In each set, the dotted line represents the curve without the dispersion effect and the solid line the curve with the dispersion term. In the right figure, compressibility is introduced by increasing the Mach number, which indicates that the growth rate diminishes as compressibility strengthens in the medium for $\epsilon = 0.3$. Here, the $M = 0$ curve shows the incompressible limit for comparison.}
\label{com}
\end{center}
\end{figure}
\begin{table*}[h]
\begin{center}{\footnotesize
\begin{tabular}{| c | c | c | l |}
\hline
~~~ kL ~~~ & ~~~~~~~~$\epsilon$ ~~~~~~~&~~~~~~$M$ ~~~~~~~ &~~~ Growth rate($s^{-1}$) \\
\hline
$ $ & $ $ & $ 0.5 $ & $~~~~~~~~~ 0.3258 $ \\
$ $ & $ $ & $ 0.7 $ & $~~~~~~~~~ 0.3215 $ \\
$ $ & $ $ & $ 0.9 $ & $~~~~~~~~~ 0.3146 $ \\
$1.2 $ & $0.1 $ & $ 1.2 $ & $~~~~~~~~~ 0.3032 $ \\
$ $ & $ $ & $ 1.4 $ & $~~~~~~~~~ 0.2902 $ \\
$ $ & $ $ & $ 2.0 $ & $~~~~~~~~~ 0.2542 $ \\
$ $ & $ $ & $ 2.5 $ & $~~~~~~~~~ 0.2179 $ \\
\hline
$ $ & $ $ & $ 0.5 $ & $~~~~~~~~~ 0.1208 $ \\
$ $ & $ $ & $ 0.7 $ & $~~~~~~~~~ 0.1178 $ \\
$ $ & $ $ & $ 0.9 $ & $~~~~~~~~~ 0.1137 $ \\
$1.2$ & $0.2 $ & $ 1.2 $ & $~~~~~~~~~ 0.1056 $ \\
$ $ & $ $ & $ 1.4 $ & $~~~~~~~~~ 0.0989 $ \\
$ $ & $ $ & $ 2.0 $ & $~~~~~~~~~ 0.0623 $ \\
$ $ & $ $ & $ 2.5 $ & $~~~~~~~~~ 0.0368 $ \\
\hline
\end{tabular}}
\end{center}
\caption{ Comparison of growth rates for different parameters $M$, $k$ and $\epsilon$.}
\label{table:cmpre}
\end{table*}
Now, we include the effects of compressibility in our system to study the role of density fluctuations in the instability. Figure (\ref{com}) shows that the growth rate decreases as we increase the Mach number, i.e., compressibility weakens the instability. Including compressibility allows some of the available energy to be dissipated in driving longitudinal waves. For small Mach number, the compressibility effect is too weak to stabilize, but here the shear thickening property can play that role. In plasma, quasi-neutrality is a widely accepted approximation for wavelengths larger than the Debye length ($\lambda_{D}$), where the dispersion term of Poisson's equation has a negligible contribution. In Fig.~\ref{com}, for two different values of $M = 1.4, 0.9$, the growth rate is plotted with and without the dispersion term in Poisson's equation. The dispersion is much more prominent for higher compressibility ($M = 1.4$); so, in the regime of higher Mach number, quasi-neutrality is not a correct approximation. In Fig.~\ref{3d}, a contour plot of the growth rate is drawn in the 2D plane of wave number and Mach number; as the growth rate decreases from $0.3$ to $0.21$, the unstable region broadens. A surface plot of the growth rate vs wave number and $\epsilon$ is shown for $M = 2.4$. The flat area marks the stability region in the $(\epsilon-k)$ plane and the hill indicates the unstable portion. For higher temperature (large values of $\epsilon$), the unstable region shrinks and the flat area widens, which depicts the stabilizing effect of the shear thickening property.
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\resizebox{80mm}{!}{\includegraphics{fig6}} &
\resizebox{95mm}{!}{\includegraphics{fig7}} \\
\end{tabular}
\caption{(Color online) The left figure shows a contour plot of the growth rate in the plane of Mach number ($M$) and wave number for $\epsilon = 0.1$. In the right figure a surface plot of the growth rate is drawn in the parametric space of $\epsilon$ and $k$ for Mach number $M=2.4$. }
\label{3d}
\end{center}
\end{figure}
\section{Summary}\label{sec:sum}
To summarize, in this article we have investigated the stability of an inhomogeneous bounded flow in a non-Newtonian dusty plasma. With an experimentally justified mathematical model of the variation of viscosity with the equilibrium velocity shear rate, the linear stability analysis is carried out numerically using a standard matrix eigenvalue technique. In Fig.~\ref{Fvis}, the viscosity coefficient and the corresponding equilibrium velocity profiles are shown, which are solutions of the equilibrium momentum balance equation (\ref{eqlb}). Depending on the parameter $\epsilon$ (which depends on the ratio of the equilibrium plasma temperature to the dust crystal melting temperature), the non-Newtonian property changes from the shear thinning to the shear thickening regime. Here, we have shown that the shear thinning property is more favorable for the Kelvin-Helmholtz instability, so that small values of $\epsilon$ increase the growth rate above that of the Newtonian limit. In the shear thickening regime (large values of $\epsilon$), the growth rate diminishes; hence shear thickening acts against the instability and stabilizes the medium. For the KH instability, the incompressible limit shows the maximum growth; as we include finite density fluctuations (compressibility), a part of the energy available for the instability is spent on longitudinal fluctuations in the system, and thus the instability weakens.
Viscosity has a dissipative effect in a fluid, but it also diffuses momentum. In a bounded flow, strong velocity shear exists in the boundary layers and is diffused outwards by viscosity, which leads to instability \cite{dzre}. In our case, the equilibrium shear rate near the boundary increases with decreasing $\epsilon$, and accordingly for lower values of $\epsilon$ the growth rate is much greater than in the Newtonian limit.
Before concluding, we comment on the effects of dust charge fluctuation and of strong coupling between dust particles on the KH instability. Since charge is a dynamical variable in dusty plasmas, a complete analysis should involve dust charge fluctuations. However, such effects are known to produce the usual damping of the dynamics of dust particles \cite{srb,jana}. Dust charge fluctuations result from perturbations in the electron and ion densities, so their effect does not manifest itself in the incompressible limit. For a compressible dusty plasma, however, dust charge fluctuation gives rise to some damping of the KH instability.
In our present analysis, the strong coupling effect between dust particles is not included. However, in our earlier publication \cite{baner} it has been shown that the strong coupling introduces elastic property in dusty plasma which usually enhances the KH instability. So, study of the KH instability in strongly coupled non-Newtonian dusty plasma would be interesting and is therefore left for future work.
As the amplitude of the small velocity fluctuation grows due to the instability, the system enters a nonlinear state and vortex formation would begin due to the convective nonlinearity of the momentum equation. In a non-Newtonian system, the viscous stress tensor drives another type of nonlinear effect through the dependence of viscosity on the shear flow rate, which could lead to many interesting phenomena such as recurrence \cite{dbnl}. It would be interesting to study these nonlinear effects on the KH instability in the nonlinear regime.
\section{Introduction}
Kaon photoproduction on the nucleon provides an important tool
for understanding the dynamics of hyperon-nucleon systems.
Accurate information on the elementary amplitude is vital for calculating
the cross sections of the hypernuclear photoproduction, since
the amplitude serves as the basic input, which determines the
accuracy of predictions \cite{Motoba,ProdH}. At present, these
calculations can be compared with high resolution spectroscopy
data of the hypernuclei, which are available from the experiments
performed at the Jefferson Laboratory \cite{Hashimoto}.
Since the hypernucleus production cross section is sensitive
to the elementary amplitude, especially at forward kaon angles,
a precise description of the elementary process at this
kinematics is obviously desired.
The two ample, good-quality experimental data sets provided
recently by the CLAS (CL05) \cite{CL05} and SAPHIR (SP03) \cite{SP03}
collaborations were expected
to help us learn more about the process; however, they reveal a lack
of consistency at forward and backward kaon angles \cite{CL05}
(see also Ref.~\cite{HYP03} in which results of the first analysis
of the CLAS data \cite{oldCLAS} were used).
The previous SAPHIR data by Tran {\it et al.} (SP98) \cite{SP98}
also display different behavior at small kaon angles compared to
that observed in the old pre-1972 data, e.g.,
from Bleckmann {\it et al.} \cite{Bleckmann}
(hereafter referred to as OLD).
The uncertainty in the experimental information causes a wide
range of model predictions at forward kaon angles. The situation is
illustrated in Fig.~\ref{crs_comp}, where the CL05, SP03, SP98 and
OLD data (as listed in Ref.~\cite{AS90}) are compared
with predictions of different phenomenological models. Obviously,
the data and the models, which were fitted to various data sets,
differ significantly for $\theta_K < 45^{\circ}$, which leads to a
large input uncertainty in the hypernuclear calculations \cite{ProdH}.
\begin{figure}[htb]
\includegraphics[width=.55\textwidth,angle=-90]{fig1_bydzovskyIII_PRC.ps}
\caption{Comparison of various data sets with predictions of
different phenomenological models,
Saclay-Lyon A (SLA) \cite{SLA98}, Kaon-Maid (KM) \cite{Ben99},
M2, H2 \cite{HYP03}, Williams-Ji-Cotanch (WJC) \cite{WJC92}, and
Adelseck-Saghai (AS1) \cite{AS90}. Data are adopted from
Refs.~\cite{CL05}(CL05), \cite{SP03}(SP03),
\cite{SP98}(SP98), and \cite{AS90}(OLD).
Total error bars are indicated in the plot.}
\label{crs_comp}
\end{figure}
At present, there are two large data sets, the latest CLAS and
SAPHIR ones, with comparable statistical significance, but
they diverge in some kinematic regions. Measurements of the
differential cross sections at small kaon angles from
LEPS \cite{LEPS} provide another good quality data set for
energies from 1.5 to 2.4 GeV. These data are consistent with
the CLAS but not the SAPHIR data. The older data, SP98 and OLD,
are scarce; and for $\theta_K < 45^\circ$, they also reveal some
discrepancies, as shown by open squares \cite{SP98} and open
circles \cite{Bleckmann} in Fig.\,\ref{crs_comp}. This situation
clearly indicates that before a reliable determination of
the parameters of a model for the elementary process can be
performed, we have to decide which data sets
are consistent with each other and which can thus
be used in fitting the models.
The purpose of this work is to analyze
the mutual consistency and similarities of the data sets by using
selected isobaric models. The analysis will enable a better
determination of the elementary amplitude, especially at forward
angles. We also discuss certain problems of the isobaric models
with the description of the data at forward directions.
This paper is organized as follows:
In Sec.~\ref{analysis}, the basic formalism and definitions of the
kinematic regions used in this analysis are given.
The experimental data and the utilized models are briefly discussed
in Secs. II A and II B, respectively. In Sec.~\ref{Results_and_discussion},
results are presented and discussed. Conclusions are given in
Sec.~\ref{Conclusions}.
\section{Analysis}
\label{analysis}
Although there are some kinematic overlaps among the considered data
sets, an interpolation using an analytical formula would still be necessary
for a direct comparison. To avoid this, we compare
the observed cross sections with predictions of theoretical models.
the observed cross sections with predictions of theoretical models.
For this purpose, we calculate the relative deviation for each data
point, as done in the analysis of the OLD data~\cite{AS90},
\begin{equation}
R_i=\frac{\sigma_i^{\rm exp}-\sigma^{\rm th}(E_i,\theta_i)}
{\Delta\sigma_i^{\rm stat}},
\label{deviation}
\end{equation}
where $\sigma_i^{\rm exp}$ and $\Delta\sigma_i^{\rm stat}$ are
the measured value and its statistical uncertainty, respectively,
at the kinematics given by the photon laboratory energy $E_i$
and the kaon center of mass angle $\theta_i$. The theoretical
value $\sigma^{\rm th}(E_i,\theta_i)$ is calculated within
a particular isobaric model at the appropriate kinematic point.
If the theoretical values correctly describe the reality and
the experimental values are randomly scattered around them
with the variance given by $\Delta\sigma_i^{\rm stat}$, then
the variable $R_i$ possesses a normal distribution with the mean
$\mu$=0 and the variance $\sigma^2$=1. We are, however, far from
this ideal case. The distribution of $R_i$, calculated
for a particular model and experimental data set, which clearly
depends on the chosen model, thus, characterizes a consistency of
the model with the data set. To this end, we also calculate
the required parameters of the distribution, i.e., the mean value
\begin{equation}
\langle R\rangle = \frac{1}{N}\sum_{i=1}^{N}R_i,
\end{equation}
the second algebraic moment
\begin{equation}
\langle R^2\rangle = \frac{1}{N}\sum_{i=1}^{N}R_i^2 = \chi^2 / N,
\label{chi2}
\end{equation}
the standard deviation
\begin{equation}
s^2 = \frac{N}{N-1}\langle(\Delta R)^2\rangle =
\frac{N}{N-1}(\langle R^2\rangle -\langle R\rangle ^2),
\label{stdev}
\end{equation}
and the number of data points with $R_i$ in the interval of
($\langle R\rangle-2$, $\langle R\rangle+2$) relative to the number
of data $N$, which is denoted by $N_2$ (in \%). The summations run
over the data points included in the sample. The agreement between
model predictions and experimental data is expressed by $\chi^2/N$
which includes also information on the data dispersion. The mean
value $\langle R\rangle$ shows a coherent shift of the data with
respect to the model predictions. The condition $\langle R\rangle=0$
is necessary for the model and data to describe simultaneously the
reality (a population).
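The statistics defined above are straightforward to compute; the sketch below gathers them in one helper (relative deviations $R_i$, the mean $\langle R\rangle$, the second moment $\chi^2/N$, the standard deviation $s$, and $N_2$, the percentage of points falling in the open interval $(\langle R\rangle-2, \langle R\rangle+2)$). The input values are synthetic, not the actual CLAS or SAPHIR data.

```python
import math

def deviation_stats(sigma_exp, dsigma_stat, sigma_th):
    """Consistency statistics of a data set against model predictions:
    R_i = (sigma_exp_i - sigma_th_i) / dsigma_stat_i, then
    <R>, <R^2> = chi^2/N, s = sqrt(N/(N-1) (<R^2> - <R>^2)), and
    N2 = percentage of points with R_i in (<R>-2, <R>+2)."""
    R = [(e - t) / d for e, d, t in zip(sigma_exp, dsigma_stat, sigma_th)]
    N = len(R)
    mean = sum(R) / N                          # <R>
    chi2_over_N = sum(r * r for r in R) / N    # <R^2> = chi^2 / N
    s = math.sqrt(N / (N - 1) * (chi2_over_N - mean ** 2))
    N2 = 100.0 * sum(1 for r in R if mean - 2 < r < mean + 2) / N
    return mean, chi2_over_N, s, N2
```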
Provided that the data are randomly scattered around the theoretical
values $\sigma^{\rm th}(E_i,\theta_i)$ with the variance
$\Delta\sigma_i^{\rm stat}$, i.e., $\{R_i,\ i=1,\dots,N\}$ is a random
sample with a normal distribution, the hypothesis that the true value of
the mean $\langle R\rangle$ equals zero (the null hypothesis) can
be tested by calculating the statistical parameter (Student's
t-variable) \cite{StatMan}:
\begin{equation}
z_1 = \sqrt{N-1}\frac{\langle R\rangle}{\sqrt{\langle(\Delta R)^2\rangle}} .
\label{z_1stat}
\end{equation}
Here, the variance of the normal distribution of $R_i$ is supposed
to be known and can be approximated by the standard deviation (\ref{stdev}),
since $N$ is sufficiently large ($>30$) for the assumed data sets.
The hypothesis will be rejected with a confidence level of $\alpha$
if $|z_1|>z_{\alpha/2}$, where the critical value $z_{\alpha/2}$ = 1.96
and 2.58 for the confidence level of 5\% and 1\%,
respectively \cite{StatMan}.
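The hypothesis test of Eq.~(\ref{z_1stat}) can be sketched as follows: $z_1$ is computed from the sample of deviations and compared with the normal critical value $z_{\alpha/2}$ (1.96 at the 5\% level, 2.58 at 1\%). The samples used here are synthetic.

```python
import math

def z1_statistic(R):
    """Student's t-variable z1 = sqrt(N-1) <R> / sqrt(<(Delta R)^2>),
    with <(Delta R)^2> the (population) variance of the sample R."""
    N = len(R)
    mean = sum(R) / N
    var = sum((r - mean) ** 2 for r in R) / N   # <(Delta R)^2>
    return math.sqrt(N - 1) * mean / math.sqrt(var)

def reject_null(R, z_crit=1.96):
    """True if the null hypothesis <R> = 0 is rejected, i.e. |z1| > z_crit
    (z_crit = 1.96 for the 5% confidence level, 2.58 for 1%)."""
    return abs(z1_statistic(R)) > z_crit
```

A sample centered on zero gives $z_1 \approx 0$ and the null hypothesis survives, while a coherently shifted sample is rejected at either confidence level.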
In this analysis, we define two types of data samples taken from
each of the experimental data sets with different kinematics, i.e.:
\begin{itemize}
\item sample A: $0.91$ GeV$ < E_i < 2.6$ GeV
and $0^\circ < \theta_i < 180^\circ$,
\item sample B: $0.91$ GeV$ < E_i < 2.6$ GeV
and $0^\circ < \theta_i < 60^\circ$.
\end{itemize}
The statistics of sample B are more sensitive to the differences
between the data and model predictions at forward angles, where
the largest discrepancies among the data sets and models exist
(see Fig.~\ref{crs_comp}). Polarization and total cross section data
are not considered in our analysis.
\subsection {Experimental data}
\label{expdata}
The following experimental data sets consisting of differential cross
sections have been used in calculating $R_i$:
\begin{itemize}
\item the CLAS data \cite{CL05}, labeled by CL05 in the figures and tables,
\item the latest SAPHIR data \cite{SP03} (SP03),
\item the LEPS data \cite{LEPS} (LEPS), and
\item the set of pre-1972 data (OLD), used in the analysis of Adelseck
and Saghai~\cite{AS90}.
\end{itemize}
Note that the last set is listed in Table IX of Ref.~\cite{AS90},
except for the data by Decamp {\it et al.} (Orsay data). In the CL05
data set, we only consider the data points from threshold up to
$E_{\gamma}^{\rm lab}$ = 2.6 GeV ($W=2.4$ GeV, see samples A
and B), in order to make an overlap with the SP03 data set and to
maintain a reasonable description of the cross sections provided
by isobaric models.
The statistical uncertainties of the cross sections were used in the
analysis and in the fits of the new models (see the next subsection).
The systematic uncertainty of CL05 was estimated to be 8\%
except for the forward-most angle bins, where the uncertainty amounts
to 11\% \cite{CL05}. For the SP03 \cite{SP03} and OLD \cite{AS90}
data the systematic error bars were reported for each data point.
The overall systematic uncertainty of the LEPS data was estimated
to be 7\% \cite{LEPS}.
It was shown that the LEPS data are in good agreement with the CLAS data
within the total uncertainty and are systematically higher than the SP03
data at all angles ($\theta_{\rm K}^{\rm c.m.} < 41^\circ$) \cite{LEPS}.
The SP03 data are systematically smaller than the CL05 ones for $W>1.75$~GeV.
We note that an energy-independent scale factor of about 3/4 between the
CL05 and SP03 results was suggested in Ref.~\cite{CL05}.
\subsection{Models used in the analysis}
\label{models}
Theoretical values of the cross sections in Eq.~(\ref{deviation})
were calculated within the isobaric models for the photoproduction
of $K^+$ on the proton. In these models the amplitude is
constructed by using the Feynman diagrammatic technique, assuming
only contributions of the tree-level diagrams. The effective
Lagrangian is written in terms of resonant states and asymptotic
particles. Because of the absence of a dominant resonance, as in the
case of pion and $\eta$ photoproductions, various nucleon and hyperon
resonances are considered, which results in a copious number of
models \cite{Byd03}. Hadrons were
supposed to be pointlike particles in the strong vertices in some
models \cite{SL96,WJC92,AS90,SLA98} but, in the newest
ones \cite{Ben99,HYP03,Jan01}, the hadron structure is considered
by means of hadronic form factors. The effective coupling constants
in the models were determined by fitting the appropriate observables
to experimental data.
In our analysis, the Saclay-Lyon (SL) \cite{SL96} and Kaon-Maid
(KM) \cite{Ben99} models were adopted. Common to these models is that,
besides the extended Born diagrams, they also include kaon resonances
$K^*(890)$ and $K_1(1270)$. In Ref. \cite{WJC92}, it was shown that
these $t$-channel resonant terms together with the nucleon ($s$-channel)
and hyperon ($u$-channel) resonances can improve the agreement with
the experimental data in the intermediate energy region. The models
differ in the choice of the particular $s$- and $u$-channel
resonances in the intermediate state, in the treatment of the hadron
structure, and in the set of experimental data to which the free
parameters were adjusted. However, the two main coupling constants,
$g_{KN\Lambda}$ and $g_{KN\Sigma}$, fulfill the limits of 20\% broken
SU(3) symmetry \cite{SL96} in both models.
In the SL model, four hyperon and three nucleon resonances with
the spin up to 5/2 are included and their coupling constants were
fitted to the OLD data set \cite{AS90} and the first results of SAPHIR
by Bockhorst {\it et al.} \cite{SAPHIR94}. In the KM model, four
nucleon but no hyperon resonances were assumed and the parameters
of the model were fitted to the OLD and SP98 \cite{SP98} data sets.
The SL and KM models were expected to provide reasonable results for
photon energies below 2.2 GeV. In our analysis, however, we consider
the results of these models for energies up to 2.6 GeV.
In the SL model, hadrons are treated as pointlike objects, in contrast
to the KM model in which hadronic form factors (h.f.f.) are inserted
in the hadronic vertices \cite{Ben99}. The inclusion of h.f.f. in the
isobaric model substantially improves the agreement with the higher
energy data. However, it appears to be the source of the significant
suppression of the cross sections at very small kaon angles and higher
energies ($E_{\gamma} > 1.7$ GeV, see Fig.~4a in Ref.~\cite{ProdH} and
Fig.~\ref{crs_comp} for M2 and H2 models, which include h.f.f. and
were fitted to the results of the first analysis of the CLAS
data \cite{HYP03}).
In addition to the KM and SL models, we have also included two new
models, which are referred to as fit 1 and fit 2. Fit 1 includes,
besides the Born terms and kaon resonances $K^*$ and $K_1$, the same
$s$-channel resonances as in the KM model: $S_{11}(1650), P_{11}(1710),
P_{13}(1720)$ and $D_{13}(1895)$. The latter is known as
the ``missing'' resonance, a resonance predicted by the quark model but
not yet listed in the Particle Data Book \cite{Ben99}.
Its presence in the model of this type is, however, important for
the description of the resonant structure seen in the SAPHIR and
CLAS data \cite{Ben99,HYP03}. The background part of the amplitude
is improved by assuming the $u$-channel resonances as suggested
by Janssen {\it et al.} \cite{Jan01}. Particularly,
$S_{01}(1670)$ and $P_{11}(1660)$ hyperon states were chosen in
fit 1, as they give the best agreement with the data. The hadron
structure in the strong vertices is modeled by the dipole-type
form factors introduced by a certain gauge-invariant
technique \cite{DW01}. The cutoff parameters in the form factors
of the Born and resonant contributions are independent. The free
parameters of fit 1, i.e., the coupling constants and cutoffs, were
determined by fitting the differential cross sections to all
CLAS data in the energy region of $E_{\gamma}^{\rm lab} < 2.6$ GeV
(see the definition of sample A in Section~\ref{analysis}).
The model fit 1 exhibits a strong suppression of the cross sections at
small kaon angles for $E_{\gamma} >1.5$~GeV as discussed above in
connection with h.f.f. This pattern, being connected with a strong
suppression of the Born terms, particularly the electric part of the
proton exchange, causes large deviations of the model predictions
from the data at small angles, which precludes analysis of the data
at forward angles.
To have a more realistic description of the forward-angle data we
assume also a model \underline{without h.f.f.}, fit 2.
The resonance content of fit 2 was motivated by the SL model, which
shows a better agreement with the data in the forward hemisphere
than the KM model, especially for energies $E_{\gamma} > 1.7$~GeV
($W > 2$~GeV, see the next section). Therefore, the following
resonances were included in fit 2:
the $t$ channel, $K^*$ and $K_1$; $s$ channel, $P_{13}(1720)$,
$D_{15}(1675)$, and $D_{13}(1895)$; and $u$ channel, $S_{01}(1405)$,
$S_{01}(1670)$, and $P_{01}(1810)$.
The nucleon $P_{11}(1440)$ resonance, which was included in SL but
whose coupling constant is very small \cite{SL96}, was replaced by
$D_{13}(1895)$ to better describe the resonance behavior of the data.
The presence of the hyperon $P_{11}(1660)$ resonance, which was also included
in the SL model, appears to be irrelevant in the forward-angle region.
On the other hand, the higher-spin (5/2) $s$-channel resonance
$D_{15}(1675)$ appears to be very important for the reduction of the cross
sections at energies $W>1.8$~GeV and forward angles. Its coupling constants
appear to be much larger than those of the other $s$-channel
resonances in fit 2. The parameters of fit 2 were fitted to the CL05 data
for energies up to 2.6 GeV but only for $\theta_{\rm K}^{\rm c.m.} < 90^\circ$.
Note that the kaon angles were limited in order to avoid problems
of these models at backward regions \cite{Byd03} (see also the next section)
and to achieve a good agreement with the data at forward angles.
In both fits, statistical uncertainties of experimental data (see
Sect.~\ref{expdata}) were taken into account and the two main
coupling constants were forced to keep the limits of 20\% broken SU(3)
symmetry:
$-4.4 \leq g_{KN\Lambda}/\sqrt{4\pi} \leq -3.0$ and
$0.8 \leq g_{KN\Sigma}/\sqrt{4\pi} \leq 1.3$. The values of the
cutoff parameters were also confined in the range of
$0.6$ GeV $\leq \Lambda \leq 2.0$ GeV. The best values of
$\chi^2/{\rm n.d.f.}$ for fit 1 and fit 2 are 3.46 and 1.80, respectively.
\section{Results and discussion}
\label{Results_and_discussion}
The statistical parameters of the distributions defined in
Section~\ref{analysis} for samples A and B are listed in
Tables~\ref{sampleA} and \ref{sampleB}, respectively, while
the relative deviations of the experimental values from
theoretical predictions, $R_i$, are displayed in
Figs.~\ref{anal_2_E}-\ref{anal_1_F}. The corresponding mean
values, $\langle R\rangle$, are indicated by the dashed lines
in each panel of the figures. Panels in a row correspond to
the particular model, whereas panels in a column use
the same experimental data set (see Section~\ref{expdata} for the
definitions of the data labels and Section~\ref{models} for the
definitions of the model labels). In Figs.~\ref{anal_2_E} and
\ref{anal_1_E}, the deviations of each data point for all kaon
angles (sample A) are plotted as functions of the kaon
c.m. angle and the total c.m. energy, respectively.
Figure~\ref{anal_1_F} shows results for the forward angles,
from $0^\circ$ up to $60^\circ$ (sample B).
\renewcommand{\baselinestretch}{1}
\begin{table}[t]
\caption{Statistical parameters of sample A. \label{sampleA}}
\begin{center}
\begin{ruledtabular}
\begin{tabular}{lrrrrrr}
\multicolumn{1}{c}{data set} & \multicolumn{1}{c}{$N$} &
\multicolumn{1}{c}{$\langle R\rangle$} &
\multicolumn{1}{c}{$\chi^2/N$} &
\multicolumn{1}{c}{$\langle (\Delta R)^2\rangle$} &
\multicolumn{1}{c}{$z_1$} & \multicolumn{1}{c}{$N_2$} (\%)\\
\hline
\multicolumn{7}{c}{model KM}\\
CL05 & 1109 & -0.22 & 25.7 & 25.7\ \ \ \ & -1.41 & 37.1\\
SP03 & 701 & -1.04 & 6.69 & 5.60\ \ \ \ & -11.7 & 68.9\\
LEPS & 60 & 0.08 & 45.4 & 45.4\ \ \ \ & 0.09 & 26.7\\
OLD & 91 & 1.00 & 3.82 & 2.82\ \ \ \ & 5.66 & 74.7\\
\hline
\multicolumn{7}{c}{model SL}\\
CL05 & 1109 & -17.7 & 2145 & 1832\ \ \ \ & -13.7 & 1.5\\
SP03 & 701 & -6.59 & 198 & 155\ \ \ \ & -14.0 & 7.0\\
LEPS & 60 & -0.60 & 10.1 & 9.70\ \ \ \ & -1.47 & 51.7\\
OLD & 91 & -0.09 & 5.72 & 5.71\ \ \ \ & -0.35 & 68.1\\
\hline
\multicolumn{7}{c}{fit 1 }\\
CL05 & 1109 & 0.15 & 3.42 & 3.39\ \ \ \ & 2.66 & 72.8\\
SP03 & 701 & -1.24 & 5.89 & 4.36\ \ \ \ & -15.7 & 70.6\\
LEPS & 60 & 2.96 & 31.5 & 22.7\ \ \ \ & 4.76 & 20.0\\
OLD & 91 & -0.04 & 11.6 & 11.6\ \ \ \ & -0.10 & 47.3\\
\hline
\multicolumn{7}{c}{fit 2}\\
CL05 & 1109 & -6.81 & 544 & 498\ \ \ \ & -10.2 & 2.5\\
SP03 & 701 & -3.61 & 66.8 & 53.8\ \ \ \ & -13.0 & 30.1\\
LEPS & 60 & 0.26 & 6.76 & 6.69\ \ \ \ & 0.77 & 70.0\\
OLD & 91 & -0.32 & 4.83 & 4.72\ \ \ \ & -1.41 & 59.3\\
\end{tabular}
\end{ruledtabular}
\end{center}
\end{table}
\begin{table}[t]
\caption{Statistical parameters of sample B.\label{sampleB}}
\begin{center}
\begin{ruledtabular}
\begin{tabular}{lrrrrrr}
\multicolumn{1}{c}{data set} & \multicolumn{1}{c}{$N$} &
\multicolumn{1}{c}{$\langle R\rangle$} &
\multicolumn{1}{c}{$\chi^2/N$} &
\multicolumn{1}{c}{$\langle (\Delta R)^2\rangle$} &
\multicolumn{1}{c}{$z_1$} & \multicolumn{1}{c}{$N_2$} (\%)\\
\hline
\multicolumn{7}{c}{model KM}\\
CL05 & 252 & -2.06 & 37.2 & 33.0\ \ \ \ & -5.67 & 17.1\\
SP03 & 178 & -1.82 & 10.0 & 6.75\ \ \ \ & -9.30 & 53.4\\
LEPS & 60 & 0.08 & 45.4 & 45.4\ \ \ \ & 0.09 & 26.7\\
OLD & 46 & 1.35 & 5.43 & 3.60\ \ \ \ & 4.78 & 73.9\\
\hline
\multicolumn{7}{c}{model SL}\\
CL05 & 252 & -0.05 & 3.69 & 3.68\ \ \ \ & -0.37 & 70.2\\
SP03 & 178 & -1.84 & 8.87 & 5.48\ \ \ \ & -10.5 & 65.2\\
LEPS & 60 & -0.60 & 10.1 & 9.70\ \ \ \ & -1.47 & 51.7\\
OLD & 46 & -0.59 & 4.60 & 4.25\ \ \ \ & -1.91 & 73.9\\
\hline
\multicolumn{7}{c}{fit 1}\\
CL05 & 252 & 0.22 & 4.91 & 4.86\ \ \ \ & 1.60 & 65.1\\
SP03 & 178 & -1.00 & 4.49 & 3.50\ \ \ \ & -7.08 & 75.8\\
LEPS & 60 & 2.96 & 31.5 & 22.7\ \ \ \ & 4.76 & 20.0\\
OLD & 46 & 1.12 & 12.7 & 11.4\ \ \ \ & 2.23 & 52.2\\
\hline
\multicolumn{7}{c}{fit 2}\\
CL05 & 252 & 0.11 & 1.98 & 1.97\ \ \ \ & 1.25 & 84.5\\
SP03 & 178 & -1.70 & 7.37 & 4.47\ \ \ \ & -10.7 & 67.4\\
LEPS & 60 & 0.26 & 6.76 & 6.69\ \ \ \ & 0.77 & 70.0\\
OLD & 46 & 0.37 & 3.23 & 3.09\ \ \ \ & 1.42 & 78.3\\
\end{tabular}
\end{ruledtabular}
\end{center}
\end{table}
\begin{figure}[!ht]
\includegraphics[width=.85\textwidth]{fig2_bydzovskyIII_PRC.ps}
\caption{Deviations of the experimental data points from the
predictions of the models as a function of cosine of the kaon c.m.
angle. The data for the photon laboratory energy below 2.6 GeV
are assumed. The mean values of $R$ are represented by dashed
lines. Panels in one row correspond to the same theoretical
model, whereas panels in the same column use the same
experimental data set.}
\label{anal_2_E}
\end{figure}
\begin{figure}[!ht]
\includegraphics[width=.85\textwidth]{fig3_bydzovskyIII_PRC.ps}
\caption{As in Fig.~\ref{anal_2_E}, but the deviations are
a function of the total c.m. energy. The data cover the full range
of the kaon c.m. angle (sample A).}
\label{anal_1_E}
\end{figure}
\begin{figure}[!ht]
\includegraphics[width=.85\textwidth]{fig4_bydzovskyIII_PRC.ps}
\caption{As in Fig.~\ref{anal_1_E}, but for experimental data with
$0^\circ <\theta_{\rm K}^{\rm c.m.} < 60^\circ$ (sample B).}
\label{anal_1_F}
\end{figure}
Table~\ref{sampleA} reveals that the $\chi^2/N$ values of the model KM are
much larger for the CL05 and LEPS data sets than for the SP03 and OLD ones.
The SP03 data also seem to be scattered closer to the model predictions
than the CL05 and LEPS as indicated by the values of $N_2$. However, for
sample A the average relative statistical uncertainties,
$\Delta \sigma^{\rm stat}/\sigma^{\rm exp}$,
are smaller for the CL05 (10\%) and LEPS (6\%) than for the SP03 (38\%) data,
which makes the values of $R_i$, and therefore $\chi^2/N$, much
smaller for the SP03. The statistics $|z_1|$ in Table~\ref{sampleA},
which is not sensitive to this effect, shows that the KM model provides
a good description of the CLAS ($|z_1|=1.41$) and LEPS ($|z_1|=0.09)$
data sets (in this case, if we reject the null hypothesis, there is
a large probability that we are wrong). On the contrary, the KM model does
not seem to be consistent with the SP03, i.e., $|z_1|=11.7 \gg 2.58$ for
the confidence level of 1\% (the null hypothesis can be safely rejected).
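The logic of this test can be sketched as follows. We assume here that $z_1$ is the standardized sample mean of the deviations, $z_1 = \langle R\rangle\sqrt{N}/\sqrt{\langle(\Delta R)^2\rangle}$; this assumption reproduces the tabulated values up to rounding of the inputs, and the function names below are ours:

```python
import math

def z1_from_summary(mean_R, var_R, N):
    """z_1 from the summary statistics listed in the tables.

    Assumption: z_1 is the standardized sample mean of the relative
    deviations R_i, i.e. z_1 = <R> * sqrt(N) / sqrt(<(Delta R)^2>).
    """
    return mean_R * math.sqrt(N) / math.sqrt(var_R)

def reject_null(z1, z_crit=2.58):
    """Reject the null hypothesis at the 1% confidence level
    (z_crit = 2.58); use z_crit = 1.96 for the 5% level."""
    return abs(z1) > z_crit
```

With the sample-A entries for the KM model, \texttt{z1\_from\_summary(-0.22, 25.7, 1109)} gives about $-1.45$ and \texttt{z1\_from\_summary(-1.04, 5.60, 701)} about $-11.6$, consistent with the tabulated $-1.41$ and $-11.7$.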
The very large values of $\chi^2/N$ and $|\langle R\rangle|$ for
the model SL with CL05 and SP03 data in comparison with those for
the KM model are mainly
due to the deficiency of the SL model in describing the data at backward
angles ($\theta_{\rm K}^{\rm c.m.} > 100^\circ$) for $W>2$ GeV, as can be
clearly seen in Figs.~\ref{anal_2_E} and \ref{anal_1_E}.
However, at forward angles, the SL model gives a better agreement with
the CL05 and OLD data than the KM model [see the statistics
$\langle R\rangle$, $\chi^2/N$,
$z_1$, and $N_2$ in Table~\ref{sampleB} (sample B) and
Fig.~\ref{anal_1_F}]. This indicates that the SL model (without
h.f.f.) is more suited for the description of the forward-angle data
than KM. The SL model also agrees well with the OLD data at backward
angles (Table~\ref{sampleA}), since these data are limited to
the photon energies up to 1.5 GeV, and, moreover, they were used
to fit the parameters of the model.
The new model, fit 1, which was fitted to the CL05 data for all angle bins
(sample A), gives small $\langle R\rangle$ (0.15) but quite large
$z_1$ (2.66) for CL05 (Table~\ref{sampleA}), which suggests that
the model describes the data with a confidence level smaller than 1\%.
The largest deviations $R_i$ are found, however, for the data at small angles
and in the energy range of 1.8 -- 2 GeV (see Fig.~\ref{anal_1_F}).
Comparison of the $\langle R\rangle$ and $\chi^2/N$ for fit 1 with CL05
in Tables~\ref{sampleA} and \ref{sampleB} also indicates that the
model systematically underpredicts the data for small angles.
The underprediction of the most-forward-angle cross sections by fit 1
is also apparent for the LEPS data, as clearly shown in
Figs.~\ref{anal_2_E}--\ref{anal_1_F}.
\begin{figure}[!ht]
\includegraphics[width=.5\textwidth,angle=270]{fig5_bydzovskyIII_PRC.ps}
\caption{Cross sections at $\theta_{\rm K}^{\rm c.m.}=3^\circ$ as
predicted by several isobaric models (see text for details).}
\label{theta3}
\end{figure}
In Fig.~\ref{theta3} we demonstrate the behavior of the forward-angle
($\theta_{\rm K}^{\rm c.m.}=3^\circ$) cross sections as a function of
energy for the assumed models. The cross section suppression
predicted by the models with h.f.f. (KM and fit 1) is clearly seen
for $W>1.8$ GeV. These results suggest that models with h.f.f.
introduced in a certain way \cite{DW01} cannot provide a realistic
description of the forward-angle data. Therefore, the concept of
h.f.f. \cite{Ben99,DW01} should be further investigated to correct
the too strong damping of the cross sections at forward angles and
larger energies, which is not observed in the existing data.
As expected,
the results of fit 2 (fitted only to the forward-hemisphere data)
with CL05 at forward angles (sample B) are very good (see
Table~\ref{sampleB} for the statistics). However, at backward angles,
fit 2 reveals the same deficiency as seen with the SL model (see
Figs.~\ref{anal_2_E} and \ref{anal_1_E} and Table~\ref{sampleA}),
although in general fit 2 is much better.
The fit 2 model also provides better statistics at forward angles
for the LEPS and OLD data than for SP03 (see Table~\ref{sampleB}),
which quantitatively demonstrates that at forward angles the CL05,
LEPS, and OLD data can be described simultaneously by
an isobaric model without h.f.f. Most of the data are scattered
near the model predictions as shown by the large values of $N_2$
(defined with the statistical uncertainty) and
$|\langle R\rangle|\approx 0$.
The values of $|z_1|$ are small in comparison with the critical value
for the 5\% confidence level (1.96), which means that if we reject
the null hypothesis, there is a greater than 5\% probability that
we are wrong. On the contrary, the value $|z_1|=10.7$ for the SP03 data
shows very bad agreement of the SAPHIR data with the model fit 2.
Therefore, the hypothesis that fit 2 describes the SP03 data can be
ruled out with a very high confidence.
To estimate the relative global scaling factor between the CL05
and SP03 data, we calculated the quantity
\begin{equation}
\chi_0^2 = \sum_i \left(\frac{a\;\sigma_i^{\rm exp}-\sigma^{\rm th}
(E_i,\theta_i)}{\Delta\sigma_i^{\rm stat}}\right)^2 ,
\label{globalchi}
\end{equation}
using the SP03 data. The parameter $a$ was chosen to minimize
$\chi_0^2$. For fit 1 and the full data set (sample A), $a=1.13$
and $\chi_0^2/N=4.80$. These values show that shifting the SP03
data up by 13\% improves the agreement with the fit 1 model.
For fit 2 and the forward-angle data (sample B), $a=1.15$ and
$\chi_0^2/N=5.29$, which indicates 15\% scaling. These results
are in good agreement with the estimated systematic uncertainties
of the CL05 (8\%) and SP03 data. They are, however, smaller than
the suggested scaling factor of $\approx 4/3$ \cite{CL05}.
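For reference, the minimization of Eq.~(\ref{globalchi}) over the scaling parameter $a$ has a closed-form solution, $a = \sum_i \sigma_i^{\rm exp}\sigma_i^{\rm th}/(\Delta\sigma_i^{\rm stat})^2 \big/ \sum_i (\sigma_i^{\rm exp})^2/(\Delta\sigma_i^{\rm stat})^2$, obtained from $\mathrm{d}\chi_0^2/\mathrm{d}a=0$. A minimal sketch (illustrative variable names):

```python
def best_scale(sig_exp, sig_th, d_stat):
    """Closed-form minimizer of
    chi_0^2 = sum_i ((a*sig_exp_i - sig_th_i) / d_i)^2.
    Setting d(chi_0^2)/da = 0 gives
    a = sum(exp*th/d^2) / sum(exp^2/d^2)."""
    num = sum(x * y / d ** 2 for x, y, d in zip(sig_exp, sig_th, d_stat))
    den = sum(x * x / d ** 2 for x, d in zip(sig_exp, d_stat))
    return num / den

def chi0_per_point(a, sig_exp, sig_th, d_stat):
    """chi_0^2 / N at a given scale a."""
    N = len(sig_exp)
    return sum(((a * x - y) / d) ** 2
               for x, y, d in zip(sig_exp, sig_th, d_stat)) / N
```

If the theoretical values were exactly $1.13$ times the experimental ones, \texttt{best\_scale} would return $1.13$ and the rescaled $\chi_0^2$ would vanish.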
The coherent shift of the SP03 data with respect to the CL05, LEPS,
and OLD ones is also apparent from the comparison
of the appropriate values of $\langle R\rangle$ for fit 1
(Table~\ref{sampleA}) and fit 2 (Table~\ref{sampleB}). Therefore,
this analysis quantitatively shows that a combination of the CL05
and SP03 data should not be considered in fixing the parameters
of models, especially at forward angles. Instead, the use of
the CL05, LEPS, and OLD data sets is the more preferred choice.
Refitting the fit 2 model parameters using the CL05,
LEPS, and OLD data in the forward hemisphere
($\theta_{\rm K}^{\rm c.m.} < 90^\circ$) yields
$\chi^2 /{\rm n.d.f.} = 2.33$ and small changes in coupling constants.
The largest changes appear for the coupling constants of the
$s$-channel $D_{15}(1675)$ and $u$-channel $P_{01}(1810)$ resonances.
We note that
the former is important for a proper description of the
forward-angle and high-energy cross sections, which is necessary
for fitting especially the LEPS data.
Finally, let us discuss the physics consequences of the discrepancy
between the CL05 and SP03 data for the fitted resonance parameters.
As shown by the recent multipoles approach~\cite{Mart:2006dk}, the use
of these data sets individually or simultaneously leads to quite different
parameters of resonances which, therefore, could lead to different
conclusions about ``missing resonances''.
Fitting to the SP03 data, e.g., indicates that the $S_{11}(1650)$,
$P_{13}(1720)$, $D_{13}(1700)$, $D_{13}(2080)$, $F_{15}(1680)$, and
$F_{15}(2000)$ resonances are required, while fitting to the CL05
data leads alternatively to the $P_{13}(1900)$, $D_{13}(2080)$,
$D_{15}(1675)$, $F_{15}(1680)$, and $F_{17}(1990)$ resonances.
Nevertheless, both CL05 and SP03 support the existence of the
missing $D_{13}(2080)$ resonance previously found in the Kaon-Maid
model by using the SP98 data \cite{SP98} and denoted as
$D_{13}(1895)$ (see Section~\ref{models}). It was found that the
extracted mass of this resonance would be 1936 (1915) MeV
if the SP03 (CL05) data were used. We have refitted the original
Kaon-Maid model to investigate this phenomenon. The result is shown
in Table~\ref{tab:missing}. Obviously, the extracted values
corroborate the finding of Ref.~\cite{Mart:2006dk}. The reason that
the mass is slightly shifted to a higher value (as well as the broader
width $\Gamma$ in the case of CL05) is obvious from the total cross
section data (see the second peak of the total cross section
shown in Fig.~9 of Ref.~\cite{Mart:2006dk}).
\begin{table}[t]
\centering
\caption{Extracted values of mass $M$ and width $\Gamma$ of
the missing $D_{13}$ resonance in Kaon-Maid using three different
experimental data sets.}
\label{tab:missing}
\begin{ruledtabular}
\begin{tabular}[c]{lccc}
& Original (SP98)~\cite{SP98} & SP03~\cite{SP03} & CL05~\cite{CL05}\\
\hline
$M$ (GeV) & $1.895\pm 0.004$ & $1.938\pm 0.004$ & $1.927\pm 0.003$\\
$\Gamma$ (GeV) & $0.372\pm 0.029$ & $0.233\pm 0.008$ & $0.570\pm 0.019$\\
\end{tabular}
\end{ruledtabular}
\end{table}
\section{Conclusions}
\label{Conclusions}
We have analyzed the old (pre-1972) and new (CLAS 2005, SAPHIR 2003,
and LEPS) experimental data by comparing them with several existing
isobaric models, along with two new models fitted to the CLAS data.
Special attention was given to the forward-angle data, i.e., data
with $\theta_K\leq 60^\circ$. For the isobaric models with hadronic
form factors, we observed a suppression of the cross sections at
forward angles.
At forward angles, the CLAS 2005, LEPS, and pre-1972 data can be
described reasonably well within the isobaric model without hadronic
form factors. The SAPHIR 2003 data are systematically shifted below
the model predictions which requires a global scaling factor of 15\%
to remove the discrepancy. The model without hadronic form factors,
however, cannot describe the data in the backward hemisphere and
at energies $W > 2$~GeV.
The isobaric models with hadronic form factors were shown to give
too strong damping of the cross sections at small kaon angles and
energies $W > 1.9$~GeV, which results in a disagreement with
existing experimental data. In their present forms, these models
are therefore not suited for the description of photoproduction
in this kinematic region, which is important, e.g., in the
calculation of hypernuclear photoproduction. Needless to say,
more precise experimental data at very small kaon c.m. angles
($0^\circ - 15^\circ$) would help solve this problem.
The Saclay-Lyon and Kaon-Maid models do not describe the data
satisfactorily as indicated by the statistics $|z_1|$ for testing
hypotheses. The former model is more consistent with the pre-1972 and
LEPS data sets than with the CLAS 2005 and SAPHIR 2003 ones.
At forward angles, the Saclay-Lyon model agrees quite well with
the CLAS data. The Kaon-Maid model provides a better description
of the CLAS 2005, LEPS, and pre-1972 data than the SAPHIR
2003 ones.
The relative global scaling factor between the SAPHIR and CLAS
data is estimated to be 1.13, which is in agreement with the given
systematic uncertainties. This discrepancy was shown to affect
the parameters of the ``missing'' resonance $D_{13}(1895)$ in the
Kaon-Maid model. The extracted values of the mass and width of the
resonance differ by 11 and 337 MeV, respectively, when the SAPHIR
and CLAS data are individually used in fitting the parameters.
This finding agrees with the conclusion of a similar
analysis that used the multipoles approach~\cite{Mart:2006dk}.
\section{Acknowledgment}
The authors are grateful to O. Dragoun for useful discussions and
interest in this work. P.B. acknowledges support provided by the
Grant Agency of the Czech Republic, Grant No.\ 202/05/2142, and the
Institutional Research Plan AVOZ10480505. T.M. acknowledges the
support from the Faculty of Mathematics and Sciences, UI, as well
as from the Hibah Pascasarjana grant.
\renewcommand{\baselinestretch}{1.5}
\subsection*{Abstract}
Large-scale key-value storage systems sacrifice consistency in the
interest of dependability (i.e., partition-tolerance and
availability), as well as performance (i.e., latency). Such systems
provide eventual consistency, which---to this point---has been difficult to quantify in real systems.
Given the many implementations and deployments of
eventually-consistent systems (e.g., NoSQL systems), attempts have
been made to measure this consistency empirically, but they suffer
from important drawbacks. For example, state-of-the art consistency
benchmarks exercise the system only in restricted ways and disrupt the
workload, which limits their accuracy.
In this paper, we take the position that a consistency benchmark
should paint a comprehensive picture of the relationship between the
storage system under consideration, the workload, the pattern of
failures, and the consistency observed by clients. To illustrate our
point, we first survey prior efforts to quantify eventual consistency.
We then present a benchmarking technique that overcomes the shortcomings of
existing techniques to measure the consistency observed by clients as
they execute the workload under consideration. This method is
versatile and minimally disruptive to the system under test. As a
proof of concept, we demonstrate this tool on Cassandra.
\section{Experimental evaluation} \label{sec:exp}
To demonstrate our benchmarking methodology, we integrated our consistency measurement
technique into YCSB \cite{ycsb},
and used the modified YCSB to measure consistency in Cassandra \cite{cassie},
a widely adopted key-value storage system.
Our experiments use YCSB v.\ 0.1.4 and Cassandra v.\ 1.1.0.
The experimental hardware platform is a cluster of ten commodity
dual-socket 6-core Xeon servers equipped with 1GigE network interface
cards and 96GB DRAM. Each server ran a 32-thread YCSB client on one
socket, and a Cassandra node on the other socket, configured with
default options except as follows: keys were hashed uniformly across
all nodes and 3-way replicated using the ``simple'' replica placement
strategy \cite{cassiewiki}. By default, the Cassandra connector in
YCSB used consistency level ``ONE'' for both reads and writes. This
consistency level requires that a write be applied to the commit log
and memory table of at least one replica node before returning to the
client, and allows a read to return the value obtained from the first
replica that responds.
We instrumented the YCSB source code to log timing information for
each operation using a millisecond-precision clock. We pre-loaded
Cassandra with 1000 keys and applied a read-heavy (80\% get, 20\%
put) workload for 60 seconds. The keys were drawn from YCSB's ``hot
spot'' distribution, with 80\% of the operations going to a subset of
hot keys comprising 20\% of the key space.
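The ``hot spot'' request distribution can be sketched as follows (a minimal re-implementation for illustration; this is not YCSB's actual generator code):

```python
import random

def hotspot_key(rng, num_keys=1000, hot_frac=0.2, hot_op_frac=0.8):
    """Draw a key index: hot_op_frac of the operations go to the first
    hot_frac * num_keys keys (the hot set), the rest to the cold keys."""
    hot_count = int(num_keys * hot_frac)
    if rng.random() < hot_op_frac:
        return rng.randrange(hot_count)                      # hot set
    return hot_count + rng.randrange(num_keys - hot_count)   # cold set
```

With the parameters above, roughly 80\% of sampled keys fall in the 200-key hot set, matching the workload used in our experiments.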
\remove{
Using the collected timing information, we compute staleness using the
$\Delta$ quantity defined in \cite{gls:fun}. First, we group
operations into \emph{clusters}---groups of operations that access
(i.e., read or write) the same value \cite{gktest}. Next, we determine conflicts
between pairs of clusters by evaluating a \emph{scoring function $\chi$},
which is defined formally in \cite{gls:fun}. At a high level, the
scoring function quantifies the staleness of data due to consistency
violations between operations in two clusters, and indicates the
relative staleness observed by read operations. It is measured in
units of time and has a range from zero to infinity. The maximum of
the score function for any pair of clusters is the $\Delta$-value for
the execution, which indicates the worst-case relative staleness
observed in the workload.
}
\begin{figure}[tbp]
\centering
\vspace{-10pt}
\subfigure{%
\includegraphics[width=\linewidth]{histogram.png}%
}%
\caption{Histogram of score function ($\chi$) values.}
\label{fig:delta_histogram}
\end{figure}
\remove{
We emphasize that we can use this technique to measure the scoring
function and $\Delta$ for any arbitrary workload, and not just for the
synthetic workloads used in our experiment. We can also choose any key
for our measurements, and do not need to introduce additional
operations into the workload in order to obtain
measurements. Moreover, our methods are minimally intrusive, in that
we don't need to inject load into the system and disturb the original
workload to measure the workload.
}
We computed $\chi$ and $\Delta$ from collected timing information, as described in Section~\ref{sec:pos}.
Figure~\ref{fig:delta_histogram} is a histogram of positive $\chi$
values for all keys. Each point represents the relative staleness
observed by some read operation on some key. The value of $\chi$ ranges
from 1ms to 233ms, and the margin of error due to clock skew is around 1ms.
In comparison, Wada et al.\ report much higher maximum staleness levels in their experiments using Amazon's SimpleDB
(see Figures 2 and 3 in \cite{wada:cidr}).
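As a concrete illustration, the sketch below computes a simplified staleness proxy for a single key: a read that starts at time $t$ and returns a value which had already been overwritten by a newer write finishing at time $t' < t$ is assigned the score $t - t'$. This is only a rough proxy in the spirit of the scoring function; the exact $\chi$ of Ref.~\cite{gls:fun} is more involved.

```python
def staleness_scores(ops):
    """Simplified staleness proxy for one key.

    ops: list of dicts with keys 'kind' ('read' or 'write'),
    'start' and 'finish' (timestamps in ms), and 'version' (the
    version written, or the version a read returned; versions
    increase with write order).  A read is stale if a newer version
    was fully written before the read began; its score is the time
    elapsed since the value it returned was first overwritten.
    """
    writes = [o for o in ops if o['kind'] == 'write']
    scores = []
    for r in (o for o in ops if o['kind'] == 'read'):
        overwrites = [w['finish'] for w in writes
                      if w['version'] > r['version']
                      and w['finish'] < r['start']]
        if overwrites:
            scores.append(r['start'] - min(overwrites))
    return scores  # the maximum, if any, plays the role of Delta
```

For example, a read starting at $t=50$\,ms that returns version 1 after a write of version 2 finished at $t'=10$\,ms scores $40$\,ms, while a read returning version 2 scores nothing.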
\remove{
The histogram in Figure \ref{fig:delta_histogram} depicts the
distribution of $\chi$ values for one key in one run. Each
point represents the relative staleness observed by a read operation
in a cluster (e.g., subset of Cassandra nodes). The maximum point
equals the $\Delta$ value for that combination of execution and
key. The range [1ms-4ms] is most interesting, and seems to follow
(approximately) an exponential distribution. This distribution
indicates that although a system like Cassandra only promises eventual
consistency, the \emph{observed} data staleness is less that 4ms most
of the time. The scoring function and $\Delta$ values can quantify the
consistency properties of a system at run-time, which can be
invaluable, for example, to clients monitoring the service level
agreements with the storage provider and the client.
}
\remove{We repeat the above experiment $10$ times and obtain the $\Delta$ value
for each run. The $\Delta$ values have an average of 53ms with a
standard deviation of 12ms, indicating a low degree of staleness in the worst case.
We expect to observe larger $\Delta$'s for heavier workloads, and much larger
$\Delta$'s still for executions containing failures.}
\begin{figure}[tbp]
\centering
\vspace{-10pt}
\subfigure{
\includegraphics[width=0.93\linewidth]{ts.png}%
}
\caption{Time series plot of score function ($\chi$) values.}
\label{fig:time_series}
\end{figure}
Figure~\ref{fig:time_series} shows a time series plot of the $\chi$ values for all keys.
This visualization allows us to observe how staleness varies over time, in contrast to the
distribution of staleness values captured in Figure~\ref{fig:delta_histogram}.
In Figure~\ref{fig:time_series}, the x-axis depicts the approximate time when a read returns a stale value, and the y-axis depicts the corresponding $\chi$ value.
Most of the data points are concentrated near the x-axis, as we expect based on the histogram,
and furthermore there are a few visible ``inconsistency spikes''.
Finally, we measured the overhead of the instrumentation required to compute the staleness metric: with instrumentation enabled, we observed a performance loss of less than 5\%.
\remove{
This visualization allows us to observe how staleness
varies over time (for a history of operations), instead of only the
worst-case staleness captured in Figure~\ref{fig:delta_histogram}. We
observe a number of spikes in the plot, which may indicate instances
of high contention, and failure patterns in the system. We will
explore this line of work in future.
}
\section{Conclusions and future work}
In this paper, we present a client-centric benchmarking methodology
for understanding eventual consistency in distributed key-value
storage systems. Our methodology measures observed, rather than
worst-case, consistency. It extends the popular YCSB benchmark to
measure the staleness of data returned by reads using the concept of
$\Delta$-atomicity \cite{gls:fun}. Because our technique does not
inject operations into the workload, it measures consistency in a more
faithful manner than prior benchmarks. By measuring consistency in a
system-agnostic manner, we provide a quantitative methodology for
examining the performance vs.\ consistency trade-offs across various
key-value system architectures.
Using a preliminary implementation of our methodology, we demonstrate that the staleness of data in
Cassandra exhibits a long and thin tail.
That is, the worst-case staleness is much higher than the typical staleness
of data returned by read operations. This observation has implications for a system
administrator when deciding how to configure or deploy a system like
Cassandra---depending on the desired performance and deployment size,
the choice of replication factor and quorum sizes can be guided by
our benchmark results rather than guesswork.
We are actively extending our work to consider runs with failures.
Events such as network partitions, software crashes, or
device failures may trigger special execution paths in the system and
result in different consistency behaviors.
Our goal in future work is to stage experiments involving such failures
through additional modifications to the $\Delta$-enabled YCSB suite.
\begin{comment}
We are actively pursuing several extensions to this work. First, our
measure of $\Delta$-atomicity is defined over a trace
history. However, it may be useful to provide an \emph{instantaneous}
metric for an on-line measurement. A sound definition of instantaneous
consistency must consider both when a consistency violation may have
occurred, and some score indicating the severity of the violation.
Computing such scores on-line is difficult because the definition of
these atomicity measures take into account contextual history of an
operation, and therefore depend upon how much of that history is
available at the time of the on-line measurement.
Figure~\ref{fig:time_series} represents an off-line calculation of the
scores, and is our first step toward dynamic on-line visualization.
Finally, we are interested in designing simple black-box techniques
for increasing, or ``amplifying'' consistency. For example, our
experiments with Cassandra demonstrate that simply slowing down reads by a
few milliseconds would result in a trace with very strong consistency
properties. Can this type of ``consistency amplification'' be used for
fine-grained client-side control over consistency in a key-value system?
\end{comment}
\section{Introduction}
Large-scale key-value storage systems are quickly becoming an
essential component of many IT infrastructures. From fast-growing
start-ups to large enterprises, these systems are
becoming commonplace in production use because of their ability to
scale easily and the availability of many widely-supported software
implementations. However, in order to provide performance and
dependability at scale, the common principle followed by these
key-value systems is to relax data consistency \cite{Vogels}.
As these systems find their way into a wider variety of industries, it
becomes increasingly important to understand the implications of this
relaxed consistency model: to what extent relaxation improves
system performance and to what extent it degrades data consistency.
For example, Web-based applications rely on key-value systems to
provide high-throughput and low-latency access to content. While these
applications do not strictly require serializability
for correct operation, they may require a stronger
property than eventual consistency, such as causal or ``causal+''
\cite{cops:sosp11} consistency, in order to improve the user
experience.
On the other hand, cloud-based health care applications likely value
predictable consistency over performance. Eventually consistent
updates to a patient's record may introduce mistakes along the path of
patient care. For example, stale information (e.g., due to weak
consistency) about a patient's dosage or medical history may lead to
incorrect, or---in an extreme case---harmful treatment plans.
Today, cloud customers who care about consistency have limited means
to understand or control data consistency when choosing among
available storage systems, or their configurations. For example,
decisions to tune ``knobs'' such as the replication factor or
quorum size remain ad-hoc, and may lead to excessive replication or
operational costs. More importantly, no combination of these
knob settings can ensure that the storage system is strongly consistent (e.g.,
always returns the freshest data). This shortcoming is a fundamental
limitation of such always-available, partition-tolerant systems, as
observed by Brewer \cite{brew:cap} and formalized by Lynch et
al.\ \cite{lg:cap}. Moreover, many modern systems often choose to
further sacrifice consistency for better performance \cite{abadi:cap}.
We argue that a methodology for comprehensive consistency measurement
is necessary to evaluate today's eventually consistent systems. Such
a measurement framework can identify the shortcomings of architectural
designs or implementation errors in existing systems. Moreover, it can
determine the actual consistency behavior of a particular deployment,
which may be helpful to guide configuration and deployment decisions.
Prior techniques for measuring consistency follow a methodology that
is oversimplified, and as a result suffer from important drawbacks.
For example, the act of measurement disrupts the workload by injecting
operations, causing a troublesome ``observer effect''. Moreover, the
injected operations tend to stress the system, which may elicit
worst-case behavior even for a light workload. Understanding
observed, as opposed to worst-case, consistency is important for
systems designers considering performance trade-offs, particularly if
observed consistency is vastly different from the worst-case.
Our position is simple---a consistency benchmark should
produce precise and accurate measurements of consistency with minimal
disruption to the system under evaluation. These measurements should
reflect the consistency actually observed by clients in the workload
under consideration, rather than the consistency of operations
injected artificially into the workload.
Furthermore, a benchmark must collect measurements in a system-agnostic way,
enabling comparisons not only between different implementations of the
\emph{same} consistency model (e.g., sloppy quorums \cite{vogels:dynamo}),
but also between different consistency models.
In this paper, we describe a principled approach to consistency
measurement that captures more faithfully and accurately the
actual consistency behavior of a key-value storage system
for an arbitrary workload.
Our specific contributions are:
\vspace{-2.5mm}
\begin{enumerate}\setlength{\itemsep}{-1.5mm}
\item A survey of known techniques for quantifying and benchmarking consistency, and discussion of their limitations (Section~\ref{sec:rel}).
\item An outline of a more general and precise approach to consistency measurement (Section~\ref{sec:pos}).
\item A proof-of-concept benchmarking tool, which we use to obtain consistency measurements for the Cassandra \cite{cassie} key-value store (Section~\ref{sec:exp}).
\end{enumerate}
\section{Toward a benchmarking framework} \label{sec:pos}
We focus on creating a client-centric benchmarking tool that
measures observed consistency and is
minimally disruptive to the system under evaluation. Since
consistency and fault-tolerance are intimately related in eventually
consistent systems, the tool should provide support for fault
injection. This includes crashes (individual and correlated) as well
as network partitions, and necessitates ``white-box'' access to the
infrastructure. Finally, the tool should simplify analysis of the
results by presenting useful visualizations to the user.
As a stepping stone towards building a comprehensive benchmarking
framework, we now describe a methodology for minimally disruptive
measurement of consistency in arbitrary workloads. We then suggest
how such measurements might be visualized. Since our methodology is
client-centric, it can be married with any workload generator. The
measurement entails collecting timing information at clients for an
arbitrary interleaving of operations, and calculating consistency
metrics only from this information using theoretically-sound
techniques. As a running example, we consider the calculation of
the $\Delta$ quantity described in Section~\ref{sec:rel}, and then
discuss integration with YCSB~\cite{ycsb}.
$\Delta$-atomicity is defined abstractly for arbitrary
executions, including ones containing concurrent writes to the same
key. To quantify staleness, we propose to calculate $\Delta$ for a
given execution using the procedure described in \cite{gls:fun}.
First, we group operations into \emph{clusters}---sets of operations
that access the same key and read or write the same value \cite{gktest}.
For example, in Figure~\ref{fig:delta-extended} there are three
clusters, red, blue and green, corresponding to the values 1, 2 and 3.
Next, we choose a key $k$, and for each pair of clusters for that key
we determine the staleness due to the interaction of operations in
these clusters by evaluating a \emph{scoring function $\chi$}
\cite{gls:fun}. We omit the formal details and point out only that in
Figure~\ref{fig:delta-extended}, $\chi$ is the width of the staleness ``gaps''
experienced by \textsf{read(1)} and \textsf{read(2)}. Finally, we compute
the $\Delta$ value for key $k$ by taking the maximum of $\chi$ over
all pairs of clusters for $k$. We repeat for each key and, taking the
maximum, obtain a global $\Delta$ indicating the staleness for the
entire execution. Note that since the calculation combines time
values from multiple hosts, accuracy is contingent upon synchronized
clocks.
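The clustering-and-scoring procedure above can be sketched in a few lines of Python. This is a deliberate simplification, not the full algorithm of \cite{gls:fun}: the scoring function below only measures the gap by which a read of an older value starts after a newer write has ended, and all names are illustrative.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Op:
    key: str
    value: int
    kind: str     # "read" or "write"
    start: float  # client-local start time
    end: float    # client-local end time

def clusters(ops):
    """Group operations into clusters: operations that access the same
    key and read or write the same value."""
    groups = defaultdict(list)
    for op in ops:
        groups[(op.key, op.value)].append(op)
    return groups

def chi(reads, newer_write):
    """Simplified scoring function: the widest 'gap' by which a read of
    an older value begins after a newer write has ended (0 if none)."""
    return max([0.0] + [r.start - newer_write.end for r in reads])

def global_delta(ops):
    """Delta for the whole execution: for each key, maximize chi over
    all pairs of clusters on that key, then maximize over keys."""
    best = 0.0
    groups = clusters(ops)
    for (key, value), members in groups.items():
        reads = [o for o in members if o.kind == "read"]
        for (key2, value2), members2 in groups.items():
            if key2 != key or value2 == value:
                continue
            for w in (o for o in members2 if o.kind == "write"):
                best = max(best, chi(reads, w))
    return best
```

On a trace where \textsf{write(2)} ends at time 3 and \textsf{read(1)} starts at time 5, this yields $\Delta = 2$, i.e. the width of the staleness gap.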
The quantities $\chi$ and $\Delta$ can be displayed visually in
various ways. For example, using $\Delta$'s for different keys, we
can plot a histogram that shows what proportion of the key space was
read in a consistent manner. Or, using $\chi$ values for one key, we
can plot a histogram that shows what proportion of clusters contained
reads of stale values (which, in turn, estimates what proportion of
reads returned stale values). We can also use a time series plot of
$\chi$ to visualize the \emph{instantaneous
consistency} in an execution, which indicates the staleness of
values read at different points in time. This allows us to observe
how staleness varies over time (e.g., in response to load spikes
or failures), information that is masked by $\Delta$ alone since it
quantifies consistency for the duration of an entire execution. Note
that $\chi$ and $\Delta$, as well as the corresponding visualizations,
can be obtained for a subset of the key space (e.g., chosen
through random sampling).
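As a minimal example of the first histogram, per-key $\Delta$ values can be reduced to the proportion of the key space that was read consistently (function name illustrative):

```python
def consistent_fraction(delta_by_key, tol=0.0):
    """Proportion of keys whose per-key Delta does not exceed tol,
    i.e. the share of the key space read in a consistent manner."""
    if not delta_by_key:
        return 1.0
    return sum(d <= tol for d in delta_by_key.values()) / len(delta_by_key)
```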
\section{Related work} \label{sec:rel}
Consistency in this paper refers to the notion that different clients
accessing a storage system agree in some way on the state of data. In
the literature, this is termed the \emph{client-centric}
view, as opposed to the \emph{data-centric} view, which refers to
details that are not directly observable by clients (e.g., messages in
flight, state of replicas). The client-centric view is more natural
in the context of benchmarking consistency, as it does not require
system-specific and disruptive instrumentation to collect intimate details
of the execution.
Instead, it considers only the information that clients can capture locally
as they apply \emph{get} and \emph{put} operations on keys, such
as the start and end time of each operation as well as its arguments
and response.
Client-centric definitions of consistency typically refer to agreement
on when and in which order operations take effect (e.g., see \cite{terry:baseball}).
As we discuss shortly, early attempts to benchmark consistency
focus on the ``when'', and interpret this question as meaning roughly
``How soon after a write operation returns do read operations
return the written value?'', or in other words, ``How eventual is
eventual?'' \cite{Bermbach,ycsbpp,wada:cidr}.
Formalizing and answering these questions precisely bring us a step
closer to understanding the complex relationship between
the workload applied to a storage system, the failure pattern,
the configuration parameters, and the observed client-centric consistency.
In contrast, prior work covers a narrow sub-space of this multidimensional relationship
that considers only failure-free executions,
and relies on an informal methodology that exercises the storage system
only near the limits of its ``consistency envelope''.
\vspace{-6pt}
\paragraph{Definitions of version and time-based staleness}\ \\
Staleness is a fundamental concern in data management, and can be used
to describe the quality of both the data and the system that stores
it. In this benchmark, we focus on the quality of the storage system,
and in particular the protocol synchronizing different replicas of
data. To that end, we consider staleness as a relative measure: how
long ago was the value that was read first overwritten (e.g.,
see Figure~\ref{fig:delta-extended})? In other words, data becomes
stale the first time it is overwritten by newer data.
Prior techniques for quantifying staleness in key-value storage
systems either count versions (e.g., the value read is the
second-latest value written) or measure time (e.g., the value read
is one hour older than the latest value written) \cite{aiyer, gls:fun,
ksc:qos, yv:costs, conit, zz:trading}.
These quantities are easy to state precisely under the simplifying
assumption that read and write operations are instantaneous---a
collection of unique points on a one-dimensional axis. In that case,
there is a natural total order on the operations, and moreover the
``latest value'' at any point in time is well defined. In contrast,
in real-world scalable storage systems, operational latencies due to
processing, networking and I/O are non-trivial, and so there can be multiple
operations in flight at any given time, even on a single key. Thus,
non-trivial latencies and parallelism complicate reasoning
about when a given operation takes effect, as well as the order in
which operations take effect relative to each other.
A more precise treatment of staleness devised by the theory community
includes the (client-centric) concepts of $k$-atomicity \cite{aiyer} and $\Delta$-atomicity
\cite{gls:fun}. The $k$-atomicity property is a natural formalization
of version-based staleness. An execution of operations in a key-value store is
$k$-atomic if the operations in that execution can be totally ordered
so that: (1) the total order extends the ``happens before''
partial order (i.e., if operation $A$ ended before operation $B$ began
during the execution, then $A$ precedes $B$ in the total order); and
(2) each read returns the value assigned by one of the $k$ most recent
writes preceding the read in the total order. (In the case $k=1$,
$k$-atomicity corresponds to Lamport's atomicity concept
\cite{lam:ipc}, which we discuss below.) For any given execution,
we can quantify version-based staleness by solving the following
optimization problem: find the smallest $k$ for which the execution is
$k$-atomic. We are not aware of an efficient (i.e., poly-time)
solution to this problem, although \cite{gls:fun} presents progress
toward solving the corresponding decision problem for $k=2$.
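To build intuition for this optimization problem, the smallest $k$ can be found by brute force on very small executions. The sketch below (assuming distinct operations and unique write values) enumerates every total order extending the happens-before partial order; it is exponential and purely illustrative, not the efficient algorithm whose existence is discussed above.

```python
from itertools import permutations

def smallest_k(ops):
    """Smallest k for which a tiny execution is k-atomic.
    ops: list of distinct (kind, value, start, end) tuples, with
    unique write values. Returns None if no total order extending
    happens-before explains every read."""
    best = None
    for order in permutations(ops):
        pos = {op: i for i, op in enumerate(order)}
        # (1) the total order must extend the happens-before partial
        # order: if a ended before b began, a must precede b
        if any(a[3] < b[2] and pos[a] > pos[b] for a in ops for b in ops):
            continue
        k_needed, ok = 0, True
        for i, op in enumerate(order):
            if op[0] != "read":
                continue
            writes = [w for w in order[:i] if w[0] == "write"]
            hits = [j for j, w in enumerate(writes) if w[1] == op[1]]
            if not hits:
                ok = False  # read of a value not yet written
                break
            # (2) the matching write must be among the k most recent
            # writes preceding the read in this order
            k_needed = max(k_needed, len(writes) - hits[-1])
        if ok and (best is None or k_needed < best):
            best = k_needed
    return best
```

For instance, an execution where a read returns the second-latest written value is $2$-atomic but not $1$-atomic (i.e., not atomic in Lamport's sense).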
The $\Delta$-atomicity property attempts to capture time-based
staleness by stating that read operations must return values that are
at most $\Delta$ time units staler than the latest value for a key.
More formally, if we ``stretch'' the start time of each read to a
point $\Delta$ time units earlier, then the resulting execution should
be atomic in Lamport's sense \cite{lam:ipc}. For any given
execution, it is possible to compute the smallest $\Delta \geq 0$ for
which that execution is $\Delta$-atomic using an efficient algorithm
\cite{gls:fun}.
Figure~\ref{fig:delta-extended} illustrates $\Delta$ in action. The
start and end times are shown for three writes and two reads, all
operating on the same key. We assume that each operation
takes effect between its beginning and end. For example, 2 is the latest value from
the moment \textsf{write(2)} ends, and possibly even earlier. Thus,
\textsf{read(1)} returns a value that is stale by at least the width
of the ``gap'' between it and \textsf{write(2)}. Even though
\textsf{write(3)} wrote the latest value, staleness for \textsf{read(1)} is measured from
the \emph{first} unseen update to the key: \textsf{write(2)}. Similarly,
the staleness for \textsf{read(2)} is measured from the end of
\textsf{write(3)}.
\begin{figure}[!t]
\centering
\subfigure{%
\includegraphics[width=\linewidth]{delta-extended.pdf}%
}
\caption{Example calculation of $\Delta$.}
\label{fig:delta-extended}
\vspace{-10pt}
\end{figure}
For completeness, we also briefly discuss well-studied notions of weakly
consistent shared objects from distributed computing theory
literature. Lamport proposed the notions of \emph{safe},
\emph{regular} and \emph{atomic registers} (i.e., shared objects that
support read and write operations). These specifications describe the
correct behavior of read operations when they can execute concurrently
with writes and with each other, but do not adequately capture the
possibility that non-concurrent operations may appear to take effect out of
order---a commonplace phenomenon in modern quorum-replicated systems.
Lamport's atomicity property is similar in spirit to Herlihy and Wing's
\emph{linearizability} \cite{her:lin} and Papadimitriou's \emph{strict
serializability} \cite{papa:ss} for read/write register objects.
\vspace{-6pt}
\paragraph{Measuring and bounding staleness}\ \\
Several papers attempt to measure or bound staleness in order to
characterize the spectrum of trade-offs surrounding Brewer's celebrated
CAP principle \cite{abadi:cap, brew:cap}.
Wada et al.\ \cite{wada:cidr} measure time-based staleness in cloud
storage platforms by writing timestamps to a key from one client three
times per second, reading the same key from another client fifty times
per second, and computing the difference between the reader's local
time and the timestamp read. In experiments using
Amazon's SimpleDB \cite{simpledb}, they observe staleness on
the order of seconds.
The methodology of Wada et al.\ is sufficient to obtain evidence
relevant to their central research question---whether cloud storage
systems in practice provide more consistency than they promise.
However, their technique also has several disadvantages as a
consequence of exercising the system in an artificial way. First, the
measurement is disruptive because it introduces additional write
operations to the workload. This is unsuitable in a production
environment, unless the operations are applied to a special ``dummy''
key, in which case the outcome may not predict accurately the
staleness observed by reads on the other keys. Secondly, the
technique is pessimistic because it considers a pattern of access
where read operations occur back-to-back with writes. This
measurement captures the minimum time needed for replicas of a
key-value pair to synchronize, but in a real world workload, gaps
between operations may result in clients observing far less staleness.
In particular, if the load is trivial then it is possible that all
operations (except the ones injected artificially) will be atomic. A
third drawback is the use of only a single writer. While this
certainly simplifies calculations, the measurements obtained may fail
to cover special execution paths of the storage system for dealing
with concurrent writes, which hurts accuracy further.
Bermbach et al.\ \cite{Bermbach} and Patil et al.\ \cite{ycsbpp}
measure staleness using techniques similar to Wada et al. The
latter paper presents an extension of the Yahoo Cloud Serving
Benchmark (YCSB) \cite{ycsb}, with support for basic consistency
benchmarking. Their technique relies on a middleware service,
namely ZooKeeper \cite{zoo},
to convey timing information between readers and writers.
This technique is limited in precision due to the latency
introduced by operations on ZooKeeper, and hence it produces
results with one-sided error:
reported consistency violations are true assuming synchronized
clocks, but lack of reported violations does not imply
atomic behavior.
Bailis et al.\ \cite{pbs} consider the problem of predicting the
staleness from an abstract model of the storage system, including
details such as the distribution of latencies for network links. This
work considers both version and time-based staleness, and provides an
upper bound on the probability that a client observes stale data.
This prediction, similar to the measurements of Wada et al., may be
overly pessimistic for light workloads. Predicting and measuring
staleness are complementary techniques---prediction can be used for
planning and measurement can be used in a variety of ways, such as
performance tuning, monitoring, evaluating service-level agreements,
and feedback control.
\vspace{-6pt}
\paragraph{Other work}\ \\
Shapiro et al.\ formalize eventual consistency for
shared objects that avoid conflicts by design, for example
by providing commutative operations \cite{cfrdt, ccrd}.
Conventional key-value storage systems, like Cassandra, fall
outside this category because write operations are inherently conflict-prone.
Zhu et al.\ \cite{zhu} give formal definitions of eventual
consistency for read/write storage systems,
as well as several client-centric properties:
read-your-writes, monotonic reads, writes follow reads, and monotonic
writes. This work does not provide a way to measure the difference
between a particular consistency property and the actual consistency
delivered by a storage system.
Less formal definitions of eventual consistency appear in numerous papers
(e.g., \cite{bayou,Vogels}).
\section{Introduction}
The new era of gravitational wave (GW) astronomy has the potential to drastically change our understanding of Cosmology and Fundamental Physics over a broad range of redshifts. While standard sirens \cite{Schutz:1986gp} can be used to probe the Hubble expansion up to redshifts $z\sim\mathcal{O}(10)$ and place constraints on late-time modifications to gravity \cite{Belgacem:2019pkk,Finke:2021aom}, the detection of a Stochastic Gravitational Waves Background (SGWB) is often regarded as one of the best observational windows to probe the physics operating during the early Universe. Indeed, the sensitivity of future gravitational wave interferometers (GWIs) may be enough to reconstruct the spectral shape of the SGWB \cite{Kuroyanagi:2018csn,Caprini:2019pxz,Flauger:2020qyi} making it possible to separate its cosmological and astrophysical contributions \cite{Boileau:2020rpg} and shed light on inflation, reheating, primordial black holes, phase transitions and more in general physics at energies out of the reach of future particle physics accelerators (see Refs.~\cite{Kuroyanagi:2009br,Kuroyanagi:2011fy,Kuroyanagi:2015esa,Bartolo:2016ami,Kuroyanagi:2017kfx,Sasaki:2018dmp,Fujita:2018ehq,Caldwell:2018giq,DEramo:2019tit,Caprini:2019egz,Domenech:2020kqm,Fumagalli:2020nvq,Braglia:2020taf,Fumagalli:2021cel,Calcagni:2020tvw} for an incomplete list of works).
Although most of the research on the SGWB has focused on the isotropic part of the graviton two-point function, i.e., its energy density, the characterization of other SGWB observables have been studied, such as higher point graviton correlation functions \cite{Bartolo:2018qqn} and their polarization \cite{Callister:2017ocg,Domcke:2019zls} and anisotropies \cite{Mentasti:2020yyd,Contaldi:2020rht,Banagiri:2021ovv}. Furthermore, another way to maximize the information that can be extracted by GW data is by cross-correlating them with those from other cosmological observations. The prime example of the power of Multi-messenger observations is the detection of GW170817, a neutron stars merger \cite{TheLIGOScientific:2017qsa}, that revolutionized our understanding of Cosmology \cite{Ezquiaga:2018btd} and Astrophysics \cite{GBM:2017lvd,Drout:2017ijr}.
In this paper, we explore the possibility of constraining the cosmological history of our Universe using joint observations of Cosmic Microwave Background (CMB) and SGWB anisotropies. Recently, a line-of-sight formalism to compute the SGWB anisotropies generated by the cosmological propagation of GWs through scalar and tensor inhomogeneities has been derived in Refs.~\cite{Contaldi:2016koz,Bartolo:2019oiq,Bartolo:2019yeu}. The formalism has been used to compute the spectra of the anisotropies induced by extra-relativistic degrees of freedom in Ref.~\cite{DallArmi:2020dar}, where it was shown that the early decoupling of gravitons makes them an excellent observable to constrain the physics of the early Universe. Their cross-correlation with CMB temperature anisotropies has instead first been considered in Refs.~\cite{Adshead:2020bji,Malhotra:2020ket} and used to forecast constraints on primordial non-Gaussian signals originated by scalar-tensor-tensor interactions.
Building on the intuition of Ref.~\cite{DallArmi:2020dar}, our goal is to investigate the effects of pre-recombination physics on SGWB anisotropies as well as the CMB temperature and E-mode polarization ones. In order to assess the detectability of SGWB anisotropies and the information gain resulting from the cross-correlation, we derive the noise curves for future gravitational waves interferometers (GWIs) and perform a Fisher analysis to forecast the improvement in the error on the cosmological parameters describing the pre-recombination physics. As benchmark, we consider three popular early time modifications to the standard model in the form of extra relativistic degrees of freedom (as in Ref.~\cite{DallArmi:2020dar}), a massless non-minimally coupled scalar field \cite{Rossi:2019lgt,Braglia:2020iik,Ballesteros:2020sik} and Early Dark Energy (EDE) \cite{Poulin:2018cxd,Agrawal:2019lmo}. Our findings suggest that SGWB anisotropies, and their cross-correlation with CMB ones, will be a valuable observational channel to further constrain these models.
We note that unresolved astrophysical sources of gravitational waves can also contribute to the SGWB and to its anisotropies~\cite{Regimbau:2016ike,Cusin:2017fwz,Jenkins:2018nty,Bertacca:2019fnt}.
For simplicity, in this paper, we only focus on the cosmological contribution, leaving the astrophysical one for future exploration.
Our paper is organized as follows. We review the Boltzmann formalism to compute the SGWB anisotropies induced by density perturbations in the next Section and compute the theoretical spectra of SGWB anisotropies in Section~\ref{sec:models}. In Section~\ref{sec:noise}, we compute the noise spectra adopted in our Fisher analysis, which we describe in detail in Sections~\ref{sec:Fisher}. We then present our results in Section~\ref{sec:results} and conclude in Section \ref{sec:conclusions}.
\section{ Anisotropies of the Stochastic Gravitational Wave Background}
In this Section, we review the line-of-sight formalism that we adopt to compute the SGWB anisotropies angular spectra in the next Sections. For a detailed treatment, we refer to the original papers Refs.~\cite{Bartolo:2019oiq,Bartolo:2019yeu}.
We consider the metric in the longitudinal gauge $ds^2=a^2(\eta)\left\{-(1+2\Phi)d\eta^2+\left[(1-2\Psi)\delta_{ij}+h_{ij}\right]dx^idx^j\right\}$, where $a(\eta)$ is the scale factor and $h_{ij}$ encodes the transverse and traceless tensor degrees of freedom. Following a Boltzmann approach \cite{Dodelson:2003ft}, we can define the distribution function of gravitons as $f \left( x^\mu,\,p^\mu\right)$, where $x^\mu$ and $p^\mu\equiv dx^\mu/d\lambda$ are respectively the position and momentum of the graviton and $\lambda$ is an affine parameter along its trajectory. Disregarding collisional terms for GWs (see Ref.~\cite{Bartolo:2018igk} for a discussion) and keeping only terms at first order in perturbations, the Boltzmann equation for $f$ is
\begin{equation}
\frac{\partial f}{\partial \eta}+
n^i \, \frac{\partial f}{\partial x^i} +
\left[ \frac{\partial \Psi}{\partial \eta} - n^i \, \frac{\partial \Phi}{\partial x^i} + \frac{n^i n^j}{2} \frac{\partial h_{ij} }{\partial \eta} \right] q \, \frac{\partial f}{\partial q} = 0 \,,
\end{equation}
where $n^i\equiv p^i/p$ with $p\equiv \sqrt{p_i p^i}$ is the unit vector describing the direction of motion of the GW, and $q\equiv p a$ is the comoving momentum.
We decompose the distribution function into the sum of an isotropic and homogeneous part and a perturbed one as $f \left( \eta ,\, x^i ,\, q ,\, n^i \right)
\equiv {\bar f} \left( q \right) - q \, \frac{\partial {\bar f}}{\partial q} \, \Gamma \left( \eta ,\, x^i ,\, q ,\, n^i \right)$. The first term is related to the homogeneous energy density of GWs as
\begin{equation}
\bar{\Omega}_\text{\tiny GW} \left( q \right) \equiv \frac{4 \pi}{\rho_{\rm crit,0}} \, \left( \frac{q}{a_0} \right)^4 {\bar f} \left( q \right) \;,
\end{equation}
where $0$ denotes the value today and $\rho_{\rm crit,0}=3H_0^2/(8\pi G)$ is the critical energy density with $H\equiv a'/a^2$. The Fourier transform of the perturbed term $\Gamma (\eta ,\, x^i ,\, q ,\, n^i) \equiv\int d^3k/(2\pi)^3 e^{i \vec{k}\cdot \vec{x}}\Gamma(\eta ,\, k^i ,\, q ,\, n^i)$ satisfies the following equation \cite{Contaldi:2016koz}:
\begin{equation}
\Gamma'+ i \, k \, \mu\, \Gamma = S (\eta, k^i, n^i) \, ,
\label{Boltfirstgamma1}
\end{equation}
where a prime denotes a derivative with respect to conformal time, $\mu \equiv (k^i/k) \cdot n_i$, and the source term is
\begin{equation}
S = \Psi' - i k \, \mu \, \Phi - \frac{1}{2}n^i n^j \, h_{ij}' \,.
\end{equation}
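For completeness, the angular power spectrum quoted below follows from the formal line-of-sight solution of Eq.~(\ref{Boltfirstgamma1}), which is an elementary first-order ODE in $\eta$:

```latex
\begin{equation}
\Gamma \left( \eta_0 ,\, k^i ,\, n^i \right) =
e^{- i k \mu \left( \eta_0 - \eta_{\rm in} \right)} \,
\Gamma \left( \eta_{\rm in} ,\, k^i ,\, n^i \right)
+ \int_{\eta_{\rm in}}^{\eta_0} d \eta' \,
e^{- i k \mu \left( \eta_0 - \eta' \right)} \,
S \left( \eta' ,\, k^i ,\, n^i \right) \, .
\end{equation}
```

Expanding the plane-wave factors in Legendre polynomials and spherical Bessel functions then yields the transfer function ${\cal T}_\ell^{(S)}$ entering the angular spectrum.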
The function $\Gamma$ can then be expanded in spherical harmonics as $\Gamma(n^i)=\sum_\ell\sum_m\,\Gamma_{\ell m} Y_{\ell m}(n^i)$ and the statistical distribution of the harmonic coefficients is characterized by the two-point function $\left\langle \Gamma_{\ell m} \Gamma_{\ell' m'}^* \right\rangle \equiv \delta_{\ell \ell'} \,\, \delta_{mm'} \, \left({\widetilde C}_\ell^I(q)+{\widetilde C}_\ell^S+{\widetilde C}_\ell^h\right)$, where the three terms on the right hand side, which we assume to be uncorrelated among themselves, represent the initial contribution to the total anisotropies from the SGWB generation mechanism and the anisotropies induced by the propagation of GWs through scalar and tensor perturbations, respectively \cite{Bartolo:2019oiq,Bartolo:2019yeu}. In the following, we will discard the first and third terms and set ${\widetilde C}_\ell^I \left( q \right)={\widetilde C}_\ell^h=0$, as the first one is not relevant for our purposes\footnote{Mechanisms generating non-vanishing initial anisotropies include primordial black hole formation \cite{Bartolo:2019oiq,Bartolo:2019yeu,Garcia-Bellido:2016dkw,Bartolo:2019zvb}, primordial scalar-tensor-tensor non-Gaussianities \cite{Adshead:2020bji,Malhotra:2020ket}, phase transitions in the early Universe \cite{Kumar:2021ffi} and cosmic strings \cite{Kuroyanagi:2016ugi,Jenkins:2018nty}. } and we have checked that ${\widetilde C}_\ell^h$ is negligible compared to the scalar contribution ${\widetilde C}_\ell^S$ for observationally viable values of the GW amplitude. Therefore, the only remaining contribution to the angular spectrum of the anisotropies is explicitly given by \cite{Bartolo:2019oiq,Bartolo:2019yeu}:
\begin{equation}
C_\ell^\Gamma\equiv{\widetilde C}_\ell^S = 4 \pi \int \frac{dk}{k} \, {\cal T}_\ell^{\left( S \right) \,2} \left( k ,\, \eta_0 ,\, \eta_{\rm in} \right)
\, \mathcal{P}_{\mathcal{R}} \left( k \right) \,,
\label{Cell-res}
\end{equation}
where $j_\ell (x)$ are the spherical Bessel functions, $\mathcal{P}_{\mathcal{R}}(k)$ is the primordial scalar power spectrum and the transfer function is
\begin{align}
&{\cal T}_\ell^{\left( S \right) } \left( k ,\, \eta_0 ,\, \eta_{\rm in} \right) \equiv T_\Phi \left( \eta_{\rm in} ,\, k \right) \, j_\ell \left( k \left( \eta_0 - \eta_{\rm in} \right) \right)\\
&+ \int_{\eta_{\rm in}}^{\eta_0} d \eta' \,
\frac{\partial \left[ T_\Psi \left( \eta ,\, k \right) + T_\Phi \left( \eta ,\, k \right) \right] }{\partial \eta} \,
j_\ell \left( k \left( \eta_0 - \eta \right) \right),\nonumber
\end{align}
with $T_\Phi$ and $T_\Psi$ being the transfer functions for the Newtonian potentials.
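As a toy numerical sanity check of this transfer function, assume constant potentials (so the ISW integral vanishes and only the SW term survives, with $T_\Phi = 1$) and a scale-invariant spectrum $\mathcal{P}_\mathcal{R}=A_s$. Then Eq.~(\ref{Cell-res}) reduces to $C_\ell = 4\pi A_s \int dk\, j_\ell^2(k\chi)/k = 4\pi A_s/[2\ell(\ell+1)]$, i.e. the familiar Sachs-Wolfe plateau with $\ell(\ell+1)C_\ell/2\pi = A_s$. The snippet below (illustrative only; it uses the closed form of $j_2$, so only $\ell=2$ is wired up) verifies this:

```python
import numpy as np

def j2(x):
    """Closed-form spherical Bessel function j_2(x)."""
    return (3.0 / x**3 - 1.0 / x) * np.sin(x) - 3.0 / x**2 * np.cos(x)

def cl_sw_ell2(A_s=1.0, chi=1.0, kmax=2000.0, npts=400001):
    """Sachs-Wolfe-only C_2 for a constant transfer function:
    C_ell = 4 pi A_s * integral dk/k j_ell(k chi)^2."""
    k = np.linspace(1e-3, kmax, npts)
    f = j2(k * chi) ** 2 / k
    dk = k[1] - k[0]
    integral = dk * (f.sum() - 0.5 * (f[0] + f[-1]))  # trapezoidal rule
    return 4.0 * np.pi * A_s * integral

# On the plateau, ell (ell + 1) C_ell / (2 pi) = A_s:
plateau = 2 * (2 + 1) * cl_sw_ell2() / (2.0 * np.pi)
```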
\begin{figure*}
\includegraphics[width=.49\columnwidth]{ClNeff_88mm.pdf}
\includegraphics[width=.49\columnwidth]{ClNeffTEGW_88mm.pdf}
\caption{\label{fig:ClNeff} Spectra of SGWB anisotropies in the model with extra relativistic degrees of freedom. The angular power spectra are shown for [Left] the variable $\Gamma$ and [right] their cross-correlation with the CMB T (solid) and E (dashed) spectra. The parameter $\Delta N_{\rm eff}=N_{\rm eff}-3.046$ is varied according to the legend. }
\end{figure*}
As for CMB photons, the scalar potentials induce a Sachs-Wolfe (SW) effect which dominates at large scales, i.e. small $\ell$'s, given by the first term in ${\cal T}_\ell^{\left( S \right) }$, and an integrated SW (ISW) effect, which depends on the variation of the scalar potentials integrated along the line of sight \cite{Contaldi:2016koz,Bartolo:2019oiq,Bartolo:2019yeu}. The crucial difference with the CMB, however, is the absence of the visibility function $g(\tau)=-(d\tau/dt) e^{-\tau}$, where $\tau$ is the optical depth, which effectively sets $\eta_{\rm in}\simeq\eta_*$ for CMB photons, where $\eta_*$ is the conformal time of the last scattering surface \cite{DallArmi:2020dar}.
Although the variable $\Gamma$ is the most natural to adopt in the Boltzmann approach, it is not the one most commonly used when it comes to studying the noise for the angular spectra of the SGWB anisotropies. In particular, we will compute the noise spectra with the {\tt schNell} code \cite{Alonso:2020rar}, which is optimized to compute the anisotropies of the energy density of GWs, $\Omega_{\rm GW}(f,\,n^i)$, itself. The latter is related to $\Gamma$ by \cite{Bartolo:2019oiq,Bartolo:2019yeu}
\begin{equation}
\label{eq:Omega-Gamma}
\Omega_{\rm GW}(f,\,n^i)\equiv\bar{\Omega}_{\rm GW}(f)\left[1+\left(4-\alpha\right)\Gamma\right],
\end{equation}
where, for simplicity, we restrict to a power-law monopole $\bar{\Omega}_{\rm GW}(f)$ of the form
\begin{equation}
\bar{\Omega}_{\rm GW}(f)=A_*\left(\frac{f}{f_*}\right)^\alpha\,,
\end{equation}
with $\alpha$ being the spectral index and $f_*$ being a reference frequency such that $A_*=\bar{\Omega}_{\rm GW}(f_*)$.
Using these relations and defining the relative fluctuation $\delta_{\rm GW}$ through $\Omega_{\rm GW}(f,\,n^i)\equiv\bar{\Omega}_{\rm GW}(f)\left(1+\delta_{\rm GW}(f,\,n^i)\right)$, we can relate the angular power spectrum of $\Omega_{\rm GW}$ to the one of $\Gamma$ as:
\begin{equation}
\label{eq:CellOmega}
C_\ell^\Omega(f)=A_*^2\left(\frac{f}{f_*}\right)^{2\alpha}(4-\alpha)^2C_\ell^\Gamma.
\end{equation}
Note that the $C_\ell^\Omega$s are frequency dependent. In this paper, for simplicity, we will always compute them at the reference frequency $f_*$.
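The rescaling in Eq.~\eqref{eq:CellOmega} is straightforward to apply in post-processing; a minimal sketch (the helper name is our own, not part of schNell or {\tt CLASS}):

```python
def cl_omega(cl_gamma, f, f_star, A_star, alpha):
    """Rescale C_ell^Gamma to C_ell^Omega(f) following Eq. (CellOmega)."""
    return A_star**2 * (f / f_star)**(2 * alpha) * (4 - alpha)**2 * cl_gamma
```

At the reference frequency $f=f_*$ with a scale-invariant monopole ($\alpha=0$) this reduces to $C_\ell^\Omega = 16\,A_*^2\,C_\ell^\Gamma$.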
As anticipated in the Introduction, we will also be interested in computing the cross-correlation of the SGWB anisotropies with CMB ones. We define such cross-correlation as:
\begin{equation}
C_\ell^{\Omega\,X} = 4 \pi \, \int \frac{d k}{k} \, \left[ {\cal T}_\ell^S \left( k\right) \Delta^X(k)\right]
\, \mathcal{P}_{\mathcal{R}} \left( k \right),
\end{equation}
where $X=\{T,\, E\}$ and $\Delta^X(k)$ is the transfer function for CMB photons.
In order to compute the spectra of the SGWB anisotropies, we have modified the publicly available code {\tt CLASS}\footnote{\href{https://github.com/lesgourg/class\_public}{https://github.com/lesgourg/class\_public}}
\cite{Lesgourgues:2011re,Blas:2011rf}.
\section{Selected cosmological models and theoretical spectra}
\label{sec:models}
We now present the theoretical spectra for the SGWB anisotropies and their cross-correlation with the CMB ones. For each of our benchmark models for early time modifications, we first review the main features and then numerically explore how they affect the angular spectra of the anisotropies.
\subsection{Extra relativistic degrees of freedom}
\label{sec:DeltaNeff}
We start by considering the contribution of extra relativistic species. In this context, modifications to the expansion history of the early Universe are often enclosed in the effective number of relativistic species $N_{\rm eff}$, defined as
\begin{equation}
\label{eq:Neff}
\rho_r=\left[1+\frac{7}{8}\left(\frac{4}{11}\right)^{\frac{4}{3}}N_{\rm eff}\right]\rho_\gamma,
\end{equation}
where $\rho_r$ is the total radiation energy density and $\rho_\gamma$ is the photon energy density. For the Standard Model of particle physics, there are three species of active neutrinos, corresponding to $\Delta N_{\rm eff} \equiv N_{\rm eff}-3.046 = 0$, where the small correction $N_{\rm eff}-3=0.046$ accounts for the fact that neutrino decoupling is not instantaneous and partially overlaps with $e^+\,e^-$ annihilation, see e.g. Ref.~\cite{Lesgourgues:2018ncw}.
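Eq.~\eqref{eq:Neff} translates into a simple enhancement factor for the radiation density with respect to photons; as a quick numerical check (the function name is our own):

```python
def radiation_to_photon_ratio(N_eff):
    """rho_r / rho_gamma for N_eff neutrino-like relativistic species, Eq. (Neff)."""
    return 1.0 + 7.0 / 8.0 * (4.0 / 11.0)**(4.0 / 3.0) * N_eff
```

For the standard value $N_{\rm eff}=3.046$ the total radiation density exceeds the photon one by about $69\%$.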
As mentioned in the Introduction, the imprints of relativistic degrees of freedom on the spectra of the SGWB anisotropies were first studied in Ref.~\cite{DallArmi:2020dar}, to which the reader is referred for more details. The formalism of Ref.~\cite{DallArmi:2020dar} is slightly more general than ours, as the authors analyze the response of SGWB anisotropies to a change in the fractional energy density of decoupled relativistic species at the time of graviton decoupling $\eta_i$, defined as $f_{\rm dec}(\eta_i)=g_*^{\rm dec}(T_i)/g_*(T_i)$, where $g_*$ and $g_*^{\rm dec}$ count the relativistic degrees of freedom of all species and of the decoupled ones, respectively. The latter parameterization therefore allows one to include decoupled relativistic particles at early times that are no longer relativistic today and therefore do not contribute to $N_{\rm eff}$. In our paper, for simplicity, we restrict to extra relativistic species that are still relativistic today.
As can be seen from Fig.~\ref{fig:ClNeff}, a variation in $N_{\rm eff}$ affects high multipoles, since the redshift of matter-radiation equality, or equivalently the comoving sound horizon $r_s$, is changed by the extra contribution to the Hubble rate. However, as noticed in Ref.~\cite{DallArmi:2020dar}, since gravitons decouple earlier than CMB photons, the Sachs-Wolfe plateau is also affected, unlike in the CMB spectra.
\subsection{Non-Minimally coupled scalar field}
\label{sec:CC}
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{fCC88mm.pdf}
\end{center}
\caption{\label{fig:NMC} Energy injection in the NMC model. The initial condition of the scalar field $\phi_i$ (in Planckian units) is varied according to the legend and we fix $\xi=-1/6$.}
\end{figure}
\begin{figure*}
\includegraphics[width=.49\columnwidth]{ClCC_88mm.pdf}
\includegraphics[width=.49\columnwidth]{ClCCEGW_88mm.pdf}
\caption{\label{fig:ClCC} Spectra of SGWB anisotropies in the NMC model. The angular power spectra are shown for [Left] the variable $\Gamma$ and [Right] their cross-correlation with the CMB T (solid) and E (dashed) spectra. The initial condition of the scalar field $\phi_i$ (in Planckian units) is varied according to the legend and we fix $\xi=-1/6$.}
\end{figure*}
As a second example, we consider a Non-Minimally coupled (NMC) scalar field $\phi$ that modifies gravity before and around recombination \cite{Rossi:2019lgt,Braglia:2020iik,Ballesteros:2020sik}, described by the following action:
\begin{equation}
\label{eq:modelCC}
S = \int \mathrm{d}^{4}x \sqrt{-g} \left[ \frac{F(\phi)}{2}R
- \frac{(\partial\phi)^2}{2} -\Lambda\right]+ S_m \,,
\end{equation}
where $F(\phi) = M_{\rm pl}^2+\xi\phi^2$, $R$ is the Ricci scalar, and $S_m$ is the action for the matter fields. In this model, the scalar field is frozen deep in the radiation era and starts to move around the epoch of matter-radiation equality, driven by its coupling to pressureless matter at the level of the equations of motion.
For negative values of the coupling $\xi<0$, this model modifies the expansion history of the Universe by contributing with a nearly constant energy fraction before recombination, after which it redshifts away as fast as or faster than radiation, depending on the magnitude of $\xi$ \cite{Braglia:2020iik,Ballesteros:2020sik}. We show this contribution, quantified by $f_{\rm NMC}\equiv\rho_{\rm NMC}/\rho_{\rm crit}$, where $\rho_{\rm NMC}$ is the effective energy density of the scalar field \cite{Gannouji:2006jm}, in Fig.~\ref{fig:NMC}. As can be seen, the magnitude of $f_{\rm NMC}$ increases with the initial condition of the scalar field $\phi_i$.
Although the background expansion history resembles that of the $\Delta N_{\rm eff}$ model introduced before, the behavior of the perturbations is different \cite{Rossi:2019lgt}, resulting in distinct cosmological predictions \cite{Ballesteros:2020sik}. We show the theoretical spectra for the NMC model in Fig.~\ref{fig:ClCC}, focusing for simplicity on the so-called Conformally Coupled case $\xi=-1/6$.
\subsection{Early Dark Energy}
\label{sec:EDE}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{fEDE_88mm.pdf}
\caption{\label{fig:fEDE} Energy injection in the RnR model. The parameter $\phi_i$ is varied according to the legend (in Planckian units). We plot examples for the redshift of energy injection $z_c=3160$ and $z_c=7.3\times10^5$, corresponding to $\beta=2$ and $\beta=6$, respectively.}
\end{figure}
\begin{figure*}
\includegraphics[width=.49\columnwidth]{ClEDE_88mm.pdf}
\includegraphics[width=.5\columnwidth]{ClEDETEGW_88mm.pdf}
\caption{\label{fig:ClNE} Spectra of SGWB anisotropies in the EDE model. The angular power spectra are shown for [Left] the variable $\Gamma$ and [Right] their cross-correlation with the CMB T (solid) and E (dashed) spectra. The parameter $\phi_i$ is varied according to the legend (in Planckian units). We plot spectra for $z_c=3160$ and $z_c=7.3\times10^5$, corresponding to $\beta=2$ and $\beta=6$, respectively. }
\end{figure*}
As a last example, we consider the case of Early Dark Energy (EDE). This class of models has recently become popular as a solution to the $H_0$ tension \cite{Verde:2019ivm}. Here, we consider the so-called Rock n Roll (RnR) model of Ref.~\cite{Agrawal:2019lmo} as representative of EDE (see \cite{Poulin:2018cxd,Agrawal:2019lmo,Niedermann:2019olb,Berghaus:2019cls,Sakstein:2019fmf,Lin:2019qug,Ye:2020btb,Braglia:2020iik,Braglia:2020bym,Gonzalez:2020fdy,Braglia:2020auw} for an incomplete list of EDE models). The action is that of a minimally coupled, canonical scalar field $\phi$:
\begin{equation}
\label{eq:modelEDE}
S = \int \mathrm{d}^{4}x \sqrt{-g} \left[ \frac{R}{2}
- \frac{(\partial\phi)^2}{2} -\Lambda-V(\phi)\right]+ S_m \,,
\end{equation}
where $V(\phi)=\lambda\phi^{2 n}/2n$ is a power-law potential. We take $n=2$, i.e. a quartic potential, and parametrize the dimensionless constant $\lambda$ as $\lambda=10^{2 \beta}/(3.516\times10^{109})$, where $3.516\times10^{109}$ is the \emph{numerical} value of $M_{\rm pl}^4$ in ${\rm eV}^4$.
The cosmological dynamics of the scalar field can be summarized as follows. Deep in the radiation era, the scalar field is frozen by the Hubble friction to its initial value. At these times, the kinetic energy of the scalar field is essentially zero. Therefore, the energy of the scalar field is subdominant with respect to that of radiation, and it has an equation of state $w_{\phi}=-1$, hence the name \emph{Early Dark Energy}. When the effective mass of $\phi$ becomes comparable to the Hubble parameter, i.e. $d^2 V/d\phi^2 \simeq H^2$, it quickly rolls down the potential and eventually oscillates around its minimum at $\phi=0$, with a cycle-averaged equation of state given by \cite{Turner:1983he}:
\begin{equation}
w_\phi=\frac{n-1}{n+1}.
\end{equation}
For the specific example of the quartic potential with $n=2$, we get $w_\phi=1/3$ and the scalar field energy density redshifts away as fast as radiation. This is a crucial feature of EDE models that makes them a perfect candidate to solve the $H_0$ tension, since these models sizably modify the Hubble expansion only in a narrow redshift range around recombination and leave the very early and the late dynamics of the Universe unaffected.
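The cycle-averaged equation of state above fixes how fast the oscillating field dilutes, since $\rho_\phi\propto a^{-3(1+w_\phi)}$. A small sketch of this bookkeeping (helper names are our own):

```python
def w_average(n):
    """Cycle-averaged equation of state for oscillations in V ~ phi^(2n) (Turner 1983)."""
    return (n - 1) / (n + 1)

def dilution_exponent(n):
    """Exponent p in rho_phi ~ a^(-p), with p = 3 (1 + <w_phi>)."""
    return 3 * (1 + w_average(n))
```

For $n=2$ one recovers $w_\phi=1/3$ and $\rho_\phi\propto a^{-4}$, i.e. radiation-like dilution, while $n=1$ (a quadratic potential) gives matter-like behavior.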
However, if the scalar field is massive enough that it starts to move early in the radiation era, the radiation-like behavior of the envelope of the oscillations is not necessarily negligible. These two different regimes are shown in Fig.~\ref{fig:fEDE}, where we plot the EDE fractional contribution $f_{\rm EDE}=8\pi G \rho_\phi/3 H^2$ to the total density of the Universe for different masses of the scalar field. Note that we work directly with the parameters of the Lagrangian and follow the conventions of Ref.~\cite{Braglia:2020auw}. Another possibility is to trade the model parameters for $z_c$ and $f_{\rm EDE}$, which are respectively the redshift at which the scalar field starts to move and the amount of energy injected into the cosmic fluid at $z_c$.
We show the effect of varying the fraction of energy injected into the cosmic fluid in Fig.~\ref{fig:ClNE}. First, we fix the value of $z_c=3160$, i.e., the best-fit value found in Ref.~\cite{Agrawal:2019lmo}. For this value, the model was originally found to sizably alleviate the $H_0$ tension (see Ref.~\cite{Braglia:2020auw} for an analysis with the latest data). In this case, only high multipoles are affected, and we see no observable effect on the spectra of the SGWB anisotropies at large scales.
Then, we fix $z_c$ to a much larger value $z_c=7.3\times10^5$ and plot again the variation of the spectra with $f_{\rm EDE}$, using different colors. Now the EDE component has almost completely diluted away at the redshift of recombination, and the background effect is similar to effectively adding new relativistic species for redshift smaller than $z_c$. Since gravitons decouple much earlier than photons, like for the $\Delta N_{\rm eff}$ case, their transfer functions are affected by the presence of EDE, which contributes to the Sachs-Wolfe effect. Now, however, the situation is somewhat different. First of all, the background evolution is not the same. Second, unlike extra-relativistic species, minimally coupled scalar fields such as EDE do not contribute to the anisotropic stress perturbations. As a result, the SGWB spectra and their cross-correlation with CMB ones are different from those in Fig.~\ref{fig:ClNeff}.
\section{Noise angular spectra}
\label{sec:noise}
In order to forecast the capabilities of different GWIs to constrain the cosmological models that we have just presented, the noise angular power spectra $N_\ell$ for each GWI have to be specified in the Fisher analysis of the next Section. A code for the efficient computation of $N_\ell$, ready to be used for forecasting, has been recently presented in Ref.~\cite{Alonso:2020rar} and made publicly available\footnote{\href{https://github.com/damonge/schNell/branches}{https://github.com/damonge/schNell/branches}}. The code, called schNell, was used in Ref.~\cite{Alonso:2020rar} to compute the noise angular power spectra of several GWIs, including LISA, LIGO and some combinations of ground-based interferometers. We modified it to include other experiments as well, as discussed below. Here, we only wish to present the SGWB anisotropies noise spectra, referring the interested reader to Ref.~\cite{Alonso:2020rar} for all the details regarding their computation and implementation in schNell.
\begin{figure*}
\includegraphics[width=\columnwidth]{PLI_88mm.pdf}
\caption{\label{fig:PLI} We plot the power-law integrated sensitivity curves with SNR$=5$ for the experiments mentioned in the main text. }
\end{figure*}
In this paper, we consider several planned GWIs. As for future space-based detectors, we examine LISA, BBO, and DECIGO. As for ground-based detectors, we only consider the example, presented in Ref.~\cite{Alonso:2020rar}, of a cross-correlation between the Einstein Telescope and the Cosmic Explorer, which we refer to as EC. The sensitivity of other current or planned GWIs is not enough to measure the tiny SGWB anisotropies studied in this paper. We show the power-law integrated sensitivity (PLS) curves \cite{Thrane:2013oya} in Fig.~\ref{fig:PLI}. The meaning of these curves is that every power-law $\Omega_{\rm GW}$ that crosses them will be measured with a given signal-to-noise ratio (SNR) \cite{Thrane:2013oya}, which we take to be ${\rm SNR}=5$ in Fig.~\ref{fig:PLI}.
The overlap reduction function and the noise power spectral density (PSD) for each GWI have to be specified in schNell to compute the noise spectra. For LISA and EC, their choice is specified in Ref.~\cite{Alonso:2020rar}. For BBO and DECIGO, we use the noise PSD and the overlap reduction functions given in Ref.~\cite{Kuroyanagi:2014qza} (see also \cite{Kudoh:2005as}). Note that, for DECIGO, we plot three examples. In addition to the standard DECIGO configuration, we plot its `Upgraded' configuration, whose sensitivity is improved by about a factor of 3, and the `Ultimate' configuration, which is the most optimistic version of DECIGO, with the sensitivity limited only by quantum noise. Since the PLS curves for DECIGO and Upgraded DECIGO are not significantly different from the one of BBO, in the rest of this paper we will only consider Ultimate DECIGO, which has a far better sensitivity than BBO. For space-based GWIs, the orbital motion also has to be specified. For BBO and DECIGO, whose specifications are not firmly established yet, we describe their coordinates using the default motion of LISA implemented in schNell, which is the one discussed in Ref.~\cite{Rubbo:2003ap}. Note also that, while we consider the hexagram configuration of BBO and DECIGO, the proposal of an improved version, where the main hexagram is supplemented with two additional LISA-like interferometers, has also been discussed \cite{Yagi:2011wg}.
We show the noise angular power spectra for our benchmark GWIs in Fig.~\ref{fig:Nell}. In producing the noise curves, we considered a scale-invariant SGWB with $\alpha=0$. We stress that the noise curves refer to the quantity $C_\ell^\Omega$, not $C_\ell^\Gamma$; the two quantities are related by Eq.~\eqref{eq:CellOmega}. As can be seen from Fig.~\ref{fig:Nell}, due to the symmetry properties of their overlap reduction functions, space-based GWIs are most sensitive to even multipoles. However, for BBO and DECIGO, the difference between the sensitivity to even and odd multipoles is slightly reduced due to their hexagram configuration \cite{Kudoh:2005as}.
\begin{figure}
\includegraphics[width=\columnwidth]{NellSpace_88mm.pdf}
\caption{\label{fig:Nell} We plot the noise angular power spectrum $N_\ell$ for the GWIs considered in our analysis. For LISA, BBO, and Ultimate DECIGO, we mark even multipoles with small triangles to highlight the different sensitivity to odd multipoles. }
\end{figure}
\section{Fisher methodology }
\label{sec:Fisher}
Having discussed the noise spectra, we go on to describe the details of our Fisher analysis. The matrix $\mathbf{C}_\ell$ containing the information from the anisotropies of the CMB and SGWB and their cross-correlation spectra has the following form:
\begin{eqnarray}
\label{eq:Cl}
\centering
\left( \begin{array}{ccc}C_\ell^{TT} + N_\ell^{TT} & C_\ell^{TE}& C_\ell^{T-GW} \\C_\ell^{TE}&C_\ell^{EE}+N_\ell^{EE}&C_\ell^{E-GW}\\C_\ell^{T-GW}& C_\ell^{E-GW} & C_\ell^{GW}+N_\ell^{GW} \end{array}\right).
\label{eq:cov_definition}
\end{eqnarray}
Then, the Fisher matrix is simply given by
\begin{align}
F_{ij}&=\sum_{\ell=2}^{\ell_{\rm max}^{\rm GW}}\frac{2\ell+1}{2}f_{\rm sky}{\rm Tr}\left(\mathbf{C}_\ell^{-1}\frac{\partial \mathbf{C}_\ell}{\partial p_i}\mathbf{C}_\ell^{-1}\frac{\partial \mathbf{C}_\ell}{\partial p_j}\right)\nonumber\\
&+\sum_{\ell=\ell_{\rm max}^{\rm GW}+1}^{\ell_{\rm max}}\frac{2\ell+1}{2}f_{\rm sky}{\rm Tr}\left(\mathbf{c}_\ell^{-1}\frac{\partial \mathbf{c}_\ell}{\partial p_i}\mathbf{c}_\ell^{-1}\frac{\partial \mathbf{c}_\ell}{\partial p_j}\right),
\end{align}
where $\mathbf{c}_\ell$ is the matrix obtained by removing the third row and column from Eq.~\eqref{eq:Cl} and $\ell_{\rm max}^{\rm GW}$ is the angular resolution of the GWIs. Assuming the likelihood functions for the parameters $p_j$ to be Gaussian, the most optimistic errors on $p_i$ can be estimated with the Cram\'er-Rao bound $\sigma_i^2=\left(F^{-1}\right)_{ii}$. We assume a conservative sky fraction of $f_{\rm sky}=0.7$ for both CMB and GW experiments.
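The Fisher sum above is straightforward to implement once the per-multipole covariance matrices and their parameter derivatives are tabulated. A minimal sketch for a single multipole range (the array layout and names are our own assumptions; the full analysis also switches to the reduced matrix $\mathbf{c}_\ell$ above $\ell_{\rm max}^{\rm GW}$):

```python
import numpy as np

def fisher_matrix(cov, dcov, f_sky=0.7, ell_min=2):
    """Fisher matrix F_ij = sum_ell (2l+1)/2 f_sky Tr(C^-1 dC_i C^-1 dC_j).

    cov  : (n_ell, d, d) array of covariance matrices C_ell (signal + noise).
    dcov : (n_par, n_ell, d, d) array of derivatives dC_ell/dp_i.
    """
    n_par = dcov.shape[0]
    F = np.zeros((n_par, n_par))
    for i_ell, C in enumerate(cov):
        ell = ell_min + i_ell
        Cinv = np.linalg.inv(C)
        weight = (2 * ell + 1) / 2.0 * f_sky
        for i in range(n_par):
            for j in range(n_par):
                F[i, j] += weight * np.trace(
                    Cinv @ dcov[i, i_ell] @ Cinv @ dcov[j, i_ell])
    return F
```

The marginalized error on $p_i$ then follows from the Cram\'er-Rao bound as $\sigma_i = \sqrt{(F^{-1})_{ii}}$.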
As for the noise spectra, $N_\ell^{\rm GW}$ is computed as outlined in the previous Section, whereas for the CMB, we use white noise spectra given as
\begin{equation}
\centering
N^{X X'}_\ell = s^{\,2} \exp \left(\ell(\ell+1) \frac{\theta^{2}_{\textsc{fwhm}}}{8\ln2}\right)\,,
\label{eq:beamnoise}
\end{equation}
where $s$ is the total intensity instrumental noise in $\mu$K-radians and $X X'=\{TT,\,EE\}$.
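Eq.~\eqref{eq:beamnoise} is the standard beam-deconvolved white-noise (Knox) formula; a minimal sketch (the function name is ours; $s$ in $\mu$K-rad and the beam FWHM in radians):

```python
import numpy as np

def cmb_white_noise(ells, s, theta_fwhm):
    """Beam-deconvolved white noise N_ell = s^2 exp(l(l+1) theta^2 / (8 ln 2))."""
    ells = np.asarray(ells, dtype=float)
    return s**2 * np.exp(ells * (ells + 1) * theta_fwhm**2 / (8 * np.log(2)))
```

With no beam ($\theta_{\textsc{fwhm}}=0$) the spectrum is flat at $s^2$, while a finite beam exponentially inflates the noise at high $\ell$.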
As an example of a future large-scale CMB polarization experiment we use the LiteBIRD experiment \cite{Hazumi:2019lys}, for which we adopt the noise specifications given in Table~3.1 of \cite{Hazra:2018eib} and fix $\ell_{\rm max}=1350$. Here an important comment is in order. We note that all the models we consider in this paper introduce new physics before and/or around the time of recombination and, therefore, modify the acoustic peak structure of the CMB spectra. As such, the maximum multipole $\ell_{\rm max}=1350$ mapped by LiteBIRD would not be competitive for constraining these models, compared to the large multipoles already mapped by Planck. The optimal choice would be the proposed CMB S4 experiment, which can map the CMB spectra over the multipole range $30\leq\ell\leq3000$. However, it is not possible to exploit the cross-correlation of CMB and SGWB anisotropies for this experiment, as GWIs are sensitive to small multipoles (see below). In addition, to perform a robust forecast including such large multipoles, the spectra of the CMB lensing potential would also have to be included, which goes beyond the scope of our paper. For these reasons, we choose to consider LiteBIRD, which will provide the best measurement of the CMB polarization spectra at small multipoles, and leave the task of a full Fisher analysis considering CMB S4 experiments and including lensing spectra for future work.
We forecast the capability of SGWB and CMB anisotropies cross-correlation measurements to constrain the models presented in Sections~\ref{sec:DeltaNeff}-\ref{sec:EDE} using our Fisher formalism, and compare them with the case of CMB only to show the improvement from adding GW data. For simplicity, we focus on a scale-invariant SGWB with $\alpha=0$, but our results are not significantly affected by this specific choice. For each experiment, we fix $f_*$ to the frequency where the sensitivity is maximum, vary $A_*$ among the values $A_*\in\{10^{-11},\,10^{-9},\,10^{-7}\}$, and vary the angular resolution in the range $\ell_{\rm max}^{\rm GW}\in[2,\, 30]$.
\section{Results}
\label{sec:results}
We now present the results of our Fisher analysis for each model. Since all these models have been recently proposed as solutions to the tension between local and early-time measurements of the Hubble parameter, as fiducial cosmologies we consider the best-fit parameter values that best ease the tension. We provide the fiducial parameters in each of the following subsections. In our analysis, we vary all the cosmological parameters (including the six standard ones), although, to keep the discussion simpler, we focus only on the extra parameter(s) characterizing each model.
\begin{figure*}
\includegraphics[width=\columnwidth]{ERRNeff_88mm.pdf}
\caption{\label{fig:sigmaNeff} Results for the $\Delta N_{\rm eff}$ model. We plot the relative improvement $\Delta\sigma/\sigma\equiv (\sigma_{\rm CMB + SGWB}-\sigma_{\rm CMB})/\sigma_{\rm CMB}$ of the error on $N_{\rm eff}$ as a function of the angular resolution of the GW experiments up to $\ell_{\rm max}^{\rm GW}=30$. Each panel corresponds to a different fiducial value of the GW amplitude $A_* \in \{10^{-11},\,10^{-9},\,10^{-7}\}$.}
\end{figure*}
\begin{figure*}
\includegraphics[width=\columnwidth]{ERRCC6panels_88mm.pdf}
\caption{\label{fig:sigmaNMC} Results for the NMC model. We plot the relative improvement $\Delta\sigma/\sigma\equiv (\sigma_{\rm CMB + SGWB}-\sigma_{\rm CMB})/\sigma_{\rm CMB}$ of the errors on $\phi_i$ and $\xi$.}
\end{figure*}
\subsection{Forecasts on the $\Delta N_{\rm eff}$ model}
We start by considering the $\Delta N_{\rm eff}$ model. We adopt the following fiducial cosmology \cite{Bragliatesi}:
\begin{align}
& 100\,\theta_s=1.0410,\,\,\,\, 100\,\omega_b=2.274,\,\,\,\, \omega_c=0.1246,\notag\\
& \tau_{\rm reio}=0.058,\,\,\,\, \ln 10^{10}A_s=3.063,\,\,\,\, n_s=0.9786,\notag\\
& N_{\rm eff}=3.041.\label{eq:bestNeffR19}
\end{align}
The results of our Fisher analysis are shown in Fig.~\ref{fig:sigmaNeff}, where we plot the quantity $\Delta\sigma/\sigma\equiv (\sigma_{\rm CMB + SGWB}-\sigma_{\rm CMB})/\sigma_{\rm CMB}$, which measures the fractional improvement in the constraints that we gain by adding measurements of SGWB anisotropies to CMB ones.
In addition to the results for different experiments, we also plot the relative error for the cosmic-variance-limited case (the red dashed lines). We can see that the error decreases when both $A_*$ and $\ell_{\rm max}^{\rm GW}$ increase. Indeed, increasing the monopole amplitude $A_*$ raises the SNR and, as we see in Fig.~\ref{fig:ClNeff}, the variations in the anisotropy spectra extend up to large multipoles, so larger values of $\ell_{\rm max}^{\rm GW}$ enhance the constraining power.
The only GWI that can reach the cosmic variance limit is Ultimate DECIGO in the case of a large amplitude of the monopole. Fig.~\ref{fig:sigmaNeff} shows that EC and LISA improve the constraints very little and only for a large monopole amplitude. The situation is more optimistic for BBO and Ultimate DECIGO, for which the improvement on the errors is somewhat better and can be obtained even for smaller values of $A_*$. However, for realistic values of $\ell_{\rm max}^{\rm GW}\sim15$ \cite{Contaldi:2020rht}, the improvement in the error is never larger than $\sim10\%$.
\subsection{Forecasts on the NMC model}
For the NMC model we choose the following fiducial cosmology, see Table~III of Ref.~\cite{Abadi:2020hbr}:
\begin{align}
& 100\,\theta_s=1.0420,\,\,\,\, 100\,\omega_b=2.247,\,\,\,\, \omega_c=0.1192,\notag\\
& \tau_{\rm reio}=0.060,\,\,\,\, \ln 10^{10}A_s=3.060,\,\,\,\, n_s=0.9727,\notag\\
& \xi=-1/6,\,\,\,\, \phi_i =0.297.\label{eq:paramsNMC}
\end{align}
Our results are shown in Fig.~\ref{fig:sigmaNMC}, where, as before, we show the relative improvement of the forecast error obtained by cross-correlating SGWB and CMB anisotropies with respect to the one from CMB anisotropies alone. The relative improvement in the constraints is very similar for both the coupling $\xi$ and the initial condition $\phi_i$. Unlike the $N_{\rm eff}$ model, the modifications induced by the NMC model are confined to very large scales (compare Figs.~\ref{fig:ClNeff} and \ref{fig:ClCC}), so the gain in considering larger multipoles is reduced compared to the one observed in the previous subsection. Indeed, as can be seen from the red dashed line, representing a cosmic-variance-limited GWI, $\ell_{\rm max}^{\rm GW}=15$ is already enough to gain a $\sim30\%$ improvement in the errors with respect to the CMB-only ones.
In the case of LISA and EC, we need a large monopole amplitude of $A_*=10^{-7}$ to improve the constraints by $\sim10\%$, but no further improvements are seen for $\ell_{\rm max}^{\rm GW}\geq5$, because the noise $N_\ell$ becomes too large. On the other hand, for the same amplitude, both BBO and Ultimate DECIGO will decrease the errors by $\sim20\%$. For Ultimate DECIGO, the noise is so small that this also holds for a smaller amplitude of $A_*=10^{-9}$, while the noise of BBO is larger, allowing it to map SGWB anisotropies only up to $\ell_{\rm max}^{\rm GW}=6$, so that the relative improvement shrinks to $\sim 14\%$. For an even smaller amplitude $A_*=10^{-11}$, the SNR decreases, and BBO (Ultimate DECIGO) is able to map SGWB anisotropies only up to $\ell_{\rm max}^{\rm GW}=4\, (6)$, reducing the improvement to $\sim 10\%\, (14\%)$.
\subsection{Forecasts on the EDE model}
For the EDE model we choose the following fiducial cosmology, see Table~II of Ref.~\cite{Braglia:2020auw}:
\begin{align}
& 100\,\theta_s=1.0417,\,\,\,\, 100\,\omega_b=2.286,\,\,\,\, \omega_c=0.1242,\notag\\
& \tau_{\rm reio}=0.059,\,\,\,\, \ln 10^{10}A_s=3.059,\,\,\,\, n_s=0.9813,\notag\\
& \phi_i=0.48,\,\,\,\, \beta=2.09.\label{eq:paramsEDE}
\end{align}
This set of parameters represents an energy injection of about $6\%$ of the total energy density of the Universe located at the redshift $z_c=3390$ \cite{Braglia:2020auw}. As shown in Section~\ref{sec:EDE}, the SGWB anisotropies at $\ell\leq30$ depend very weakly on variations around this fiducial choice of parameters. Therefore, as expected, we find that the inclusion of SGWB anisotropies improves the errors on the model parameters by less than $1\%$, as shown by the solid lines in Fig.~\ref{fig:sigmaRnR}, even if no noise is considered for the GWIs.
As an illustrative exercise, it is interesting to explore the case of an earlier energy injection, which increases the imprints of EDE on the SGWB spectra, as shown in Section~\ref{sec:EDE}. For the fiducial values, we choose the $\Lambda$CDM best-fit parameters for the same choice of dataset used for obtaining the values in Eq.~\eqref{eq:paramsEDE}, which are also given in Table~II of Ref.~\cite{Braglia:2020auw}:
\begin{align}
& 100\,\theta_s=1.04229,\,\,\,\, 100\,\omega_b=2.265,\,\,\,\, \omega_c=0.1178,\notag\\
& \tau_{\rm reio}=0.057,\,\,\,\, \ln 10^{10}A_s=3.047,\,\,\,\, n_s=0.9719,
\end{align}
and add a small EDE component corresponding to an energy injection of $\sim0.4\%$ at the redshift $z_c\sim1.6\times10^5$, obtained by setting $\phi_i=0.1$ and $\beta=6$.
In this case, as shown in Fig.~\ref{fig:ClNE}, since the energy injection occurs deep in the radiation era, the subsequent radiation-like redshifting of the averaged energy density of EDE contributes to the total expansion history, sizably affecting the SGWB anisotropies. For this reason, we can observe in Fig.~\ref{fig:sigmaRnR} that the error on the EDE parameters improves much more than in the previous case.
Although the scenario just discussed is already excluded by CMB data, it can be useful as an illustrative example of the capability of SGWB anisotropies to constrain the physics of the very early Universe. In particular, our findings suggest that they will be a valuable tool to constrain scenarios, such as those discussed in Refs.~\cite{Caldwell:2018giq,DEramo:2019tit,Domenech:2020kqm}, where the equation of state of the Universe differs from that of radiation at very large redshifts and for a limited amount of time, complementing the information encoded in the spectral shape of the monopole~\cite{Caldwell:2018giq,DEramo:2019tit}. We note that such scenarios do not leave any direct imprints on the CMB since the modified dynamics occur entirely before the last scattering surface.
\begin{figure}
\includegraphics[width=\columnwidth]{ERRrnr_88mm.pdf}
\caption{\label{fig:sigmaRnR} Results for the EDE model. We plot the relative improvement $\Delta\sigma/\sigma\equiv (\sigma_{\rm CMB + SGWB}-\sigma_{\rm CMB})/\sigma_{\rm CMB}$ of the errors on $\phi_i$ and $\beta$. We only consider the illustrative case of a cosmic-variance-limited GWI and show results for the two fiducial sets of parameters discussed in the text.}
\end{figure}
\section{Conclusions}
\label{sec:conclusions}
The anisotropies of the Stochastic Gravitational Wave Background (SGWB) have received increasing attention. The information they encode could be exploited to disentangle the astrophysical and cosmological contributions to the SGWB, which are expected to leave distinct signatures in the anisotropies. As a cosmological probe, the SGWB anisotropies induced by the propagation of GWs through the large-scale density perturbations \cite{Bartolo:2019oiq,Bartolo:2019yeu} have recently been shown to be sensitive to the imprints of physics operating at very large redshift during the radiation dominated era \cite{DallArmi:2020dar}. It is therefore natural to maximize the information carried by the SGWB anisotropies by cross-correlating them with those of the CMB, which are currently the best observable to constrain early Universe physics \cite{Adshead:2020bji,Malhotra:2020ket}. The advantage of considering GWs is that they decouple at the end of inflation, much earlier than CMB photons, leading to a longer time integration over the graviton's line of sight.
In this paper, we have explored the SGWB anisotropies produced by non-standard models of pre-recombination physics, and computed their cross-correlation with CMB anisotropies in temperature and polarization. To model the change in the early Universe cosmological history, we have considered three popular models that modify it by adding extra-relativistic degrees of freedom, as in \cite{DallArmi:2020dar}, a Non-Minimally coupled scalar (NMC) field, and an Early Dark Energy (EDE) component. Some of our main results are summarized in Figs.~\ref{fig:ClNeff}, \ref{fig:ClCC} and \ref{fig:ClNE} that show that the three models affect the theoretical spectra in different ways.
In order to quantify the capability of future planned Gravitational Wave Interferometers (GWIs) to constrain pre-recombination physics, we have performed a simple Fisher analysis including SGWB and CMB anisotropies and their cross-correlation. Since GWIs will only be sensitive to large angular scales due to their poor resolution, we have considered a large-scale CMB polarization experiment in our analysis, using the specifications of the LiteBIRD satellite. Figs.~\ref{fig:sigmaNeff} and \ref{fig:sigmaNMC} show the reduction of the error obtained by adding GWIs to LiteBIRD. We have shown that the exact magnitude of this reduction depends on the model considered and, more importantly, on the GWI used and the monopole amplitude of the SGWB. As expected, the maximal improvement is obtained for experiments with better sensitivity and angular resolution.
Regarding the model dependencies, we have found that adding SGWB measurements to CMB ones improves the error on the parameters describing extra-relativistic degrees of freedom or early non-minimal couplings to gravity, while EDE models affect the anisotropies spectra only at large multipoles out of the reach of future GWIs. However, for illustrative purposes, we have also shown the case of a very early energy injection within EDE models, for which the improvement in the error on the model parameters is more optimistic. Our results suggest that SGWB are a useful tool to constrain variations of the equation of state deep in the radiation era \cite{Caldwell:2018giq,DEramo:2019tit,Domenech:2020kqm}.
Our analysis can be improved in several ways. First of all, in order to be able to cross-correlate SGWB and CMB spectra, we have considered a large-scale polarization CMB experiment. However, the models that we have analyzed could impact the high multipoles of the CMB spectra and therefore are better constrained by experiments that can map those scales, such as Planck or the future planned CMB S4. Such choice will be, however, complicated by the need to include CMB lensing spectra, which have a crucial impact on the acoustic structure of the CMB spectra. We leave this for future work. Another important point is that, for definiteness, we have estimated the angular spectra of SGWB anisotropies at a fixed reference frequency $f_*$ that we took, for each experiment, as the location of its sensitivity peak. As shown in Eq.~\eqref{eq:CellOmega}, though, the angular spectra are generically frequency dependent and it would be interesting to explore better SGWB anisotropy estimators making use of the anisotropies at different frequencies.
\vspace{0.5cm}
{\bf Note added:} While this project was nearly complete, we became aware of Ref.~\cite{Ricciardone:2021kel}, where the cross-correlation between CMB temperature anisotropies and the cosmological and astrophysical SGWB was analyzed and used to produce constrained realization maps of the SGWB anisotropies out of the CMB ones.
\vspace{0.5cm}
{\bf Acknowledgements}
MB would like to thank David Alonso for help with the schNell code and Dhiraj Hazra for discussions on Fisher forecasts. We thank the authors of Ref.~\cite{Ricciardone:2021kel} for sharing the draft of their paper with us. Numerical computations for this research were done on the Sciama High Performance Compute cluster, which is supported by the ICG, SEPNet, and the University of Portsmouth. The authors are supported by the Atracción de Talento contract no. 2019-T1/TIC-13177 granted by the Comunidad de Madrid in Spain. SK is partially supported by Japan Society for the Promotion of Science (JSPS) KAKENHI Grant no. 20H01899 and 20H05853.
\vspace{0.5cm}
\noindent
|
2,877,628,090,970 | arxiv | \section{introduction}
For quotient structures $X/I$, $Y/J$ and $\Phi$ a homomorphism between them, a representation of $\Phi$ is a map $\Phi_{*}: X\rightarrow Y$ such that
\begin{center}
\begin{tikzpicture}
\matrix(m)[matrix of math nodes,
row sep=2.6em, column sep=2.8em,
text height=1.5ex, text depth=0.25ex]
{X&Y\\
X/I&Y/J\\};
\path[->,font=\scriptsize,>=angle 90]
(m-1-1) edge node[auto] {$\Phi_{*}$} (m-1-2)
edge node[auto] {$\pi_{I}$} (m-2-1)
(m-1-2) edge node[auto] {$\pi_{J}$} (m-2-2)
(m-2-1) edge node[auto] {$\Phi$} (m-2-2);
\end{tikzpicture}
\end{center}
commutes, where $\pi_{I}$ and $\pi_{J}$ denote the respective quotient maps. Note that since representation is not required to satisfy any algebraic properties, its existence follows from the axiom of choice. If $\Phi$ has a representation which is a homomorphism itself we say $\Phi$ is \emph{trivial}. The question whether automorphisms between some quotient structures are trivial, sometimes is called \emph{rigidity} question and has been studied for various structures (see for example \cite{ShStep}, \cite{Farah.rig}, \cite{JustOra}, \cite{Rudin}, \cite{PhilWeav} and \cite{FaCalkin}). It turns out that for many structures the answers to these questions highly depend on the set-theoretic axioms.
To see a brief introduction on rigidity questions for Boolean algebras the reader may refer to \cite[\S 1]{FaSh}. Also for a good reference on C*-algebras refer to \cite{Black}.
By the Gelfand-Naimark duality (see \cite{Black}, \S II.2.2) the rigidity question has an equivalent reformulation in the category of commutative C*-algebras. For non-unital C*-algebra $\mathcal{A}$ the corona of $\mathcal{A}$ is the non-commutative analogue of the \v{C}ech-Stone reminder of a non-compact, locally compact topological space.
Motivated by a question of Brown-Douglas-Fillmore \cite[1.6(ii)]{BDF} the rigidity question for the category of C*-algebras has been studied for various corona algebras. In particular, assuming the continuum hypothesis (\textbf{CH}) Phillips and Weaver \cite{PhilWeav} constructed $2^{\aleph_{1}}$ many automorphism of the Calkin algebra over a separable Hilbert space. Since there are only continuum many inner automorphisms of the Calkin algebra this implies that there are many outer automorphisms. On the other hand it was shown by I. Farah \cite{FaCalkin} that under \emph{Todorcevic's Axiom}\footnote{Todorcevic's Axiom is also known as the 'Open Coloring Axiom' and it is a well-known consequence of the 'Proper Forcing Axiom'. } (\textbf{TA}) all automorphisms of the Calkin algebra over a separable Hilbert space are inner and later he showed that the \emph{Proper Forcing Axiom} (\textbf{PFA}) implies that all automorphisms of the Calkin algebra over any Hilbert space are inner \cite{FaAll}.
In \cite{SamFarah} S. Coskey and I. Farah have conjectured the following:\\
\textbf{Conjecture 1}: The Continuum Hypothesis implies that the corona of
every separable, non-unital C*-algebra has nontrivial automorphisms.
\textbf{Conjecture 2}: Forcing axioms imply that the corona of every separable,
non-unital C*-algebra has only trivial automorphisms.\\
In conjecture 2 the notion of triviality refers to a weaker notion than the one used in this paper, and it assures that automorphisms are \emph{definable} in \textbf{ZFC} in a strong sense. In the same article it has been proved that assuming \textbf{CH} every $\sigma$-unital C*-algebra which is either simple or stable has non-trivial automorphisms (see also \cite{FaMcSc}). On the other hand \textbf{TA} and \textbf{MA} imply that all automorphisms of reduced products of UHF-algebras are trivial \cite{Paul}.
In \cite{FaCalkin} the corona algebras of the form $\prod_{n}\mathbb{M}_{k(n)}(\mathbb{C})/\bigoplus_{n}\mathbb{M}_{k(n)}(\mathbb{C})$ play a crucial role in proving that "\textbf{TA} implies all automorphisms of the Calkin algebra are inner". In the class of C*-algebras $\prod_{n}\mathbb{M}_{k(n)}(\mathbb{C})$ can be considered as a good counterpart of $P(\mathbb{N})$ in set theory and as it will be clear from next section, trivial automorphisms of the corona of these algebras give rise to trivial automorphisms of the boolean algebra $P(\mathbb{N})/\mathcal{F}in$, where $\mathcal{F}in$ is the ideal of all finite subsets of the natural numbers.
Since the corona of $\prod_{n}\mathbb{M}_{n}(\mathbb{C})$ is \emph{fully countably saturated} structure in the sense of the \emph{ model theory for metric structures} (see \cite{lfms} and \cite{FHS}) and its character density (the smallest cardinality of a dense subset) is the continuum, under \textbf{CH} it is possible to use a diagonalization argument to show that there are $2^{\aleph_{1}}$ automorphisms of each of these corona algebras. Therefore there are many non-trivial automorphisms (see \cite{FH}, \S2.4).
Given an ideal $\mathcal{J}$ on $\mathbb{N}$ and a sequence of C*-algebras $\{\mathcal{A}_{n}: n\in \mathbb{N}\}$, define the norm-closed ideal
\begin{equation}
\nonumber \bigoplus_{\mathcal{J}}\mathcal{A}_{n}=\{(a_{n})\in \prod_{n}\mathcal{A}_{n}: ~ \lim_{n\rightarrow \mathcal{J}}\|a_{n}\|=0\}
\end{equation}
of $\prod_{n} \mathcal{A}_{n}$, where $\lim_{n\rightarrow \mathcal{J}}\|a_{n}\|=0$ means that for every $\epsilon>0$ the set $\{n\in\mathbb{N}: \|a_{n}\|\geq\epsilon\}\in \mathcal{J}$. The quotient C*-algebra $\prod_{n} \mathcal{A}_{n}/\bigoplus_{\mathcal{J}}\mathcal{A}_{n}$ is usually called the \emph{reduced product} of the sequence $\{\mathcal{A}_{n}: n \in \mathbb{N}\}$ over the ideal $\mathcal{J}$. Clearly if $\mathcal{J}=\mathcal{F}in$ then $\prod_{n}\mathcal{A}_{n}/\bigoplus_{\mathcal{J}}\mathcal{A}_{n}$ is the corona of $\prod_{n}\mathcal{A}_{n}$ and operator algebraists usually call it the \emph{asymptotic sequence algebra} of the sequence $\{\mathcal{A}_{n}: n \in \mathbb{N}\}$.
If ideals $\mathcal{I}$ and $\mathcal{J}$ on $\mathbb{N}$ are Rudin-Keisler isomorphic (see Proposition \ref{main} (1) for the definition) via a bijection $\sigma : \mathbb{N}\setminus A \rightarrow \mathbb{N}\setminus B$ for $A\in\mathcal{I}$ and $B\in\mathcal{J}$, then for sequences of C*-algebras $\{\mathcal{A}_{n}\}$ and $\{\mathcal{B}_{n}\}$, an obvious isomorphism $\Phi$ between algebras $\prod_{n}\mathcal{A}_{n}/\bigoplus_{\mathcal{I}} \mathcal{A}_{n}$ and $\prod_{n}\mathcal{B}_{n}/\bigoplus_{\mathcal{J}} \mathcal{B}_{n}$ can be obtained when $\varphi_{n}:\mathcal{A}_{n}\cong \mathcal{B}_{\sigma(n)}$ for every $n\in \mathbb{N}\setminus A$, and $\Phi$ is defined by
\begin{equation}
\nonumber \Phi(\pi_{\mathcal{I}}((a_{n}))) = \pi_{\mathcal{J}}(\varphi_{n}(a_{n})),
\end{equation}
where $\pi_{\mathcal{I}}$ and $\pi_{\mathcal{J}}$ are respective canonical quotient maps. Let us call such an isomorphism \emph{strongly trivial}.
We will show that if the quotients of $\prod_{n}\mathbb{M}_{n}(\mathbb{C})$ are associated with analytic P-ideals on $\mathbb{N}$ then it is impossible to construct nontrivial isomorphisms of these algebras without appealing to some additional set-theoretic axioms. This is a consequence of our main result (Theorem \ref{1}) which implies the following corollary.
\begin{corollary}
It is relatively consistent with \textbf{ZFC} that for all analytic P-ideals $\mathcal{I}$ and $\mathcal{J}$ on $\mathbb{N}$ all isomorphisms between $\prod_{n}\mathbb{M}_{n}(\mathbb{C})/\bigoplus_{\mathcal{I}}\mathbb{M}_{n}(\mathbb{C})$ and $\prod_{n}\mathbb{M}_{n}(\mathbb{C})/\bigoplus_{\mathcal{J}}\mathbb{M}_{n}(\mathbb{C})$ are strongly trivial. In particular all automorphisms of the corona $\prod_{n}\mathbb{M}_{n}(\mathbb{C})/\bigoplus_{n}\mathbb{M}_{n}(\mathbb{C})$ are strongly trivial.
\end{corollary}
It is worth noticing that in general for sequences of separable unital C*-algebras $\mathcal{A}_{n}$ and $\mathcal{B}_{n}$ the question
of whether the algebras $\prod_{n}\mathcal{A}_{n}/\bigoplus_{n} \mathcal{A}_{n}$ and $\prod_{n}\mathcal{B}_{n}/\bigoplus_{n} \mathcal{B}_{n}$ are isomorphic under \textbf{CH} reduces to the
weaker question of whether they are elementary equivalent, when their unit balls
are considered as models for metric structures. This follows from the fact that two
$\kappa$-saturated elementary equivalent structures of character density $\kappa$ are isomorphic,
for any uncountable cardinal $\kappa$ \cite[Proposition 4.13]{FHS}.
In the main result of this paper we show that assuming there is a measurable cardinal, there is a countable support iteration of proper and $\omega^{\omega}$-bounding forcings of the form $\mathbb{P}_{\mathcal{I}}$, for a $\sigma$-ideal $\mathcal{I}$, such that in the forcing extension all (isomorphisms) automorphisms of quotients of $\prod_{n}\mathbb{M}_{k(n)}(\mathbb{C})$ over ideals generated by some Borel ideals on $\mathbb{N}$ have continuous representations and if these quotients are associated with analytic P-ideals then all such automorphisms are trivial. This generalizes the main result of \cite{FaSh} since the centers of these C*-algebras [see \S2] correspond to the Boolean algebras handled in \cite{FaSh}. The assumption that there exists a measurable cardinal is there merely to make sure that $\Pi^{1}_{2}$ sets in the generic extension have Baire-measurable uniformizations.
We use a slight modification of the Silver forcing instead of the creature forcing used in \cite{FaSh}. In section 3 we give a brief introduction to some the properties of these forcings and their countable support iterations.
As in \cite{FaSh} the results of this paper are consistent with the Calkin algebra having an outer automorphism [corollary \ref{654}].
We follow \cite{FaCalkin} and use the terminology 'FDD-algebras' (Finite Dimensional Decomposition) for spatial representations of $\prod_{n}\mathbb{M}_{k(n)}(\mathbb{C})$ on separable Hilbert spaces, but throughout this paper we usually identify FDD-algebras with $\prod_{n}\mathbb{M}_{k(n)}(\mathbb{C})$ for some sequence of natural numbers $\{k(n)\}$.\\
\textbf{ACKNOWLEDGMENTS}. I am indebted to my supervisor Ilijas Farah for illuminative suggestions and supervision. I would like to thank Marcin Sabok for pointing out that in lemma \ref{999} any large cardinal assumption can be removed. I would also like to thank the anonymous referee for making number of useful suggestions.
\section{ fdd-algebras and closed ideals associated with borel ideals}
For a separable infinite dimensional Hilbert space $H$ let $\mathcal{B}(H)$ denote the space of all bounded linear operators on $H$. For a C*-algebra $A$ we use $A_{\leq 1}$ to denote the unit ball of $A$.
\begin{definition}\label{129}
Fix a separable infinite dimensional Hilbert space $H$ with an orthonormal basis $\{e_{n}:n\in\mathbb{N}\}$.
Let $\vec{E} = (E_n)$ be a partition of $\mathbb{N}$ into finite intervals, i.e., a finite set of consecutive natural numbers, and $\mathcal{D}[\vec{E}]$ denote the von Neumann algebra of all operators in $\mathcal{B}(H)$ such that the subspace spanned by $\{e_{i} : i\in E_{n}\}$ is invariant. These algebras are called FDD-algebras.
\end{definition}
Clearly $\mathcal{D}[\vec{E}]$ is isomorphic to $\prod_{n=0}^{\infty}\mathbb{M}_{|E_{n}|}(\mathbb{C})$.
The unit ball of $\mathcal{D}[\vec{E}]$ is a Polish space when equipped with the strong operator topology and this allows us to use tools from descriptive set theory in this context.
For $M\subseteq \mathbb{N}$ let $P_{M}^{\vec{E}}$ be the projection on the closed span of $\bigcup_{n\in M} \{e_{i} : i \in \vec{E}_{n}\}$ and $\mathcal{D}_{M}[\vec{E}]$ be the closed ideal $P_{M}^{\vec{E}}\mathcal{D}_{M}[\vec{E}] P_{M}^{\vec{E}} = P_{M}^{\vec{E}}\mathcal{D}[\vec{E}]$. For a fixed $\vec{E}$ we often drop the superscript and write $P_{M}$ and $P_{n}$ instead of $P_{M}^{\vec{E}}$ and $P_{\{n\}}^{\vec{E}}$.
For a Borel ideal $\mathcal{J}$ on $\mathbb{N}$, the subspace $\mathcal{D}^{\mathcal{J}}[\vec{E}]=\overline{\bigcup_{X\in \mathcal{J}}\mathcal{D}_{X}[\vec{E}]}$
is a closed ideal of $\mathcal{D}[\vec{E}]$. Equivalently
\begin{equation}
\nonumber \mathcal{D}^{\mathcal{J}}[\vec{E}]=\{(a_{n})\in \mathcal{D}[\vec{E}]: ~ \lim_{n\rightarrow \mathcal{J}}\|a_{n}\|=0\}.
\end{equation}
Let $\mathcal{C}^{\mathcal{J}}[\vec{E}]=\mathcal{D}[\vec{E}]/\mathcal{D}^{\mathcal{J}}[\vec{E}]$ and $\pi_{\mathcal{J}}$ be the natural quotient map. For operators $a$ and $b$ in $\mathcal{D}[\vec{E}]$ we usually write $a=^{\mathcal{J}}b$ instead of $a-b\in \mathcal{D}^{\mathcal{J}}[\vec{E}]$.
An ideal $\mathcal{J}$ on $\mathbb{N}$ is a \emph{P-ideal} if for every sequence $\{A_{n}\}$ of sets in $\mathcal{J}$ there exists $A\in\mathcal{J}$ such that $A_{n}\setminus A$ is finite, for every $n$.
The following theorem is the main result of this paper.
\begin{theorem}\label{1}
Assume there is a measurable cardinal. There is a forcing extension in which for partitions $\vec{E}$ and $\vec{F}$ of the natural numbers into finite intervals, if $\mathcal{I}$ and $\mathcal{J}$ are Borel ideals on the natural numbers, then the following are true.
\begin{enumerate}
\item Any automorphism $\Phi : \mathcal{C}^{\mathcal{J}}[\vec{E}]\rightarrow \mathcal{C}^{\mathcal{J}}[\vec{E}]$ has a (strongly) continuous representation.
\item Any isomorphism $\Phi : \mathcal{C}^{\mathcal{I}}[\vec{E}]\rightarrow \mathcal{C}^{\mathcal{J}}[\vec{F}]$ has a continuous representation.\\
If $\mathcal{I}$ and $\mathcal{J}$ are analytic P-ideals then\\
\item Any automorphism $\Phi : \mathcal{C}^{\mathcal{J}}[\vec{E}]\rightarrow \mathcal{C}^{\mathcal{J}}[\vec{E}]$ has a *-homomorphism representation.
\item Any isomorphism $\Phi : \mathcal{C}^{\mathcal{I}}[\vec{E}]\rightarrow \mathcal{C}^{\mathcal{J}}[\vec{F}]$ has a *-homomorphism representation.\\
\end{enumerate}
\end{theorem}
The following corollary follows from the proof of theorem \ref{1} and does not require any large cardinal assumption. See \S5 for definition of local triviality.
\begin{corollary}\label{local}
There is a forcing extension in which if $\mathcal{I}$ and $\mathcal{J}$ are (P)-ideals on $\mathbb{N}$, any *-homomorphism $\Phi : \mathcal{C}^{\mathcal{I}}[\vec{E}]\rightarrow \mathcal{C}^{\mathcal{J}}[\vec{F}]$ has a locally (*-homomorphism) continuous representation.
\end{corollary}
In order to avoid making notations more complicated we only prove this theorem for automorphisms and it is easy to see that the same proof works for isomorphisms.
In our forcing extension every such isomorphism has a simple description as it turns out that these isomorphisms are implemented by isometries between "co-small" subspaces. For partitions $\vec{E}=(E_{n})$ and $\vec{F}=(F_{n})$ of $\mathbb{N}$ into finite intervals in the following proposition let $\mathcal{D}[\vec{E}]$ and $\mathcal{D}[\vec{F}]$ be the FDD-algebras associated with $\vec{E}$ and $\vec{F}$ with respect to fixed orthonormal basis $\{e_{n}: n\in \mathbb{N}\}$ and $\{f_{n}: n\in\mathbb{N}\}$ for Hilbert spaces $H$ and $K$ respectively. Also let
\begin{eqnarray}
\nonumber H_{n}&=&{span\{e_{i}: i\in E_{n}\}} \qquad P_{n}=Proj(H_{n})\\
\nonumber K_{n}&=&{span\{f_{i}: i\in F_{n}\}} \qquad Q_{n}=Proj(K_{n}).
\end{eqnarray}
\begin{proposition}\label{main}
Assume there is a measurable cardinal. There is a forcing extension in which the following holds. Assume $\mathcal{I}$, $\mathcal{J}$ are analytic P-ideals on $\mathbb{N}$ and $\vec{E}=(E_{n})$, $\vec{F}=(F_{n})$ are partitions of $\mathbb{N}$ into finite intervals. Then there is an isomorphism $\Phi: \mathcal{C}^{\mathcal{I}}[\vec{E}]\mapsto \mathcal{C}^{\mathcal{J}}[\vec{F}]$ if and only if
\begin{enumerate}
\item $\mathcal{I}$ and $\mathcal{J}$ are Rudin-Keisler isomorphic, i.e.,
there are sets $B\in \mathcal{I}$ and $C \in \mathcal{J}$ and a bijection $\sigma: \mathbb{N}\setminus B\mapsto \mathbb{N}\setminus C$ such that $X\in \mathcal{I}$ if and only if $\sigma[X]\in \mathcal{J}$, and
\item $|E_{n}|=|F_{\sigma(n)}|$ for every $n\in \mathbb{N}\setminus B$.
\end{enumerate}
Moreover, for every $n\in \mathbb{N}\setminus B$ there is a linear isometry $u_{n}: H_{n}\mapsto K_{\sigma(n)}$ such that if $u=\sum_{n\in\mathbb{N}\setminus B}u_{n}$ , then the map $a\mapsto uau^{*}$ is a representation of $\Phi$.
\end{proposition}
\begin{proof}
The inverse direction of the first statement is trivial. To prove the forward direction assume $\Phi: \mathcal{C}^{\mathcal{I}}[\vec{E}]\mapsto \mathcal{C}^{\mathcal{J}}[\vec{F}]$ is an isomorphism. Using theorem \ref{1} there is a forcing extension in which there is a *-homomorphism $\Psi: \mathcal{D}[\vec{E}]\mapsto \mathcal{D}[\vec{F}]$ which is a representation of $\Phi$. For every $n$ we have $\Psi(P_{n})(K)\subseteq Q_{m}(K)$ for some $m$. It is easy to see that since $\Phi$ is an isomorphism there are $B\in\mathcal{I}$, $C\in\mathcal{J}$ and a bijection $\sigma:\mathbb{N}\setminus B\mapsto \mathbb{N}\setminus C$ such that for every $n\in \mathbb{N}\setminus B$ we have $\Psi(P_{n})(K)= Q_{\sigma(n)}(K)$. The map $\sigma$ witnesses that $\mathcal{I}$ and $\mathcal{J}$ are Rudin-Keisler isomorphic. Moreover, for every one-dimensional projection $P\in \mathcal{B}(H_{n})$ the image, $\Psi(P)$, is also a one-dimensional projection in $B(K_{\sigma(n)})$. In particular $|E_{n}|=|F_{\sigma(n)}|$.
Now for every $n\in\mathbb{N}\setminus B$ assume $E_{n}=[k_{n},k_{n+1}]$ and define a unitary $a\in B(H_{n})$ by
\begin{equation*}
a(e_{k_{i}}) = \begin{cases}
e_{k_{i}+1} & k_{n}\leq i<k_{n+1}\\
e_{k_{n}} & i=k_{n+1} .
\end{cases}
\end{equation*}
Fix $\xi_{0}\in K_{\sigma(n)}$. Let $b=\Psi(a)$ and $\xi_{j}=b^{j}(\xi_{0})$ ($b^{j}$ is the $j$-th power of $b$) for each $0\leq j < |E_{n}|$. Then $\{\xi_{j}: 0\leq j< |E_{n}|\}$ forms a basis for $K_{\sigma(n)}$ and $e_{k_{j}}\mapsto \xi_{j}$ defines an isometry $u_{n}$ as required.\\
Now $u=\bigoplus_{n\in \mathbb{N}\setminus B}u_{n}$ is an isometry from $\bigoplus_{n\in \mathbb{N}\setminus B}H_{n}$ to $\bigoplus_{n\in \mathbb{N}\setminus C} K_{n}$ such that $\Psi(a)- uau^{*}\in \mathcal{D}^{\mathcal{J}}(\vec{F})$ for all $a\in \mathcal{D}[\vec{E}]$.
\end{proof}
As we mentioned in the introduction the result of Farah and Shelah \cite{FaSh} can be obtained from theorem $\ref{1}$.
\begin{corollary}
If there is measurable cardinal, there is a forcing extension in which
every isomorphism between quotient Boolean algebras $P(\mathbb{N})/\mathcal{I}$ and
$P(\mathbb{N})/\mathcal{J}$ over Borel ideals has a continuous representation.
\end{corollary}
\begin{proof}
Let $E_{n}=\{n\}$. Then $\mathcal{D}[\vec{E}]\cong \ell_{\infty}$ is the standard atomic masa (maximal abelian subalgebra) of $B(H)$ and for
\begin{equation}
\nonumber \hat{\mathcal{J}}=\{(\alpha_{n})\in \ell_{\infty} : \lim_{n\rightarrow\mathcal{J}} \alpha_{n}=0 \}
\end{equation}
clearly $\mathcal{C}^{\mathcal{J}}[\vec{E}]= \ell_{\infty}/\hat{\mathcal{J}}\cong C(st(P(\mathbb{N})/\mathcal{J}))$ where $st(P(\mathbb{N})/\mathcal{J})$ is the Stone space of $P(\mathbb{N})/\mathcal{J}$. The duality between categories implies that
every isomorphism $\Phi$ between $P(\mathbb{N})/\mathcal{I}$ and $P(\mathbb{N})/\mathcal{J}$ corresponds to an isomorphism $\tilde{\Phi}$ between $C(st(P(\mathbb{N})/\mathcal{I}))$ and $C(st(P(\mathbb{N})/\mathcal{J}))$. The continuous map witnessing the topological triviality of $\tilde{\Phi}$ corresponds to a continuous map witnessing the topological triviality of $\Phi$.
\end{proof}
For any partition $\vec{E}$ let $Z(\mathcal{C}^{\mathcal{J}}[\vec{E}])$ denote the center of $\mathcal{C}^{\mathcal{J}}[\vec{E}]$ and $U(n)$ be the compact group of all unitary $n\times n$ matrices equipped with the bi-invariant normalized Haar measure $\mu$. More generally the following are true.
\begin{lemma}\label{987}
For any ideal $\mathcal{J}$
\begin{equation}
\nonumber Z(\mathcal{C}^{\mathcal{J}}[\vec{E}])= \frac{Z(\mathcal{D}[\vec{E}])} {\mathcal{D}^{\mathcal{J}}[\vec{E}]\cap Z(\mathcal{D}[\vec{E}])}.
\end{equation}
\end{lemma}
\begin{proof}
Clearly we have $Z(\mathcal{D}[\vec{E}])/ (\mathcal{D}^{\mathcal{J}}[\vec{E}]\cap Z(\mathcal{D}[\vec{E}]))\subseteq Z(\mathcal{C}^{\mathcal{J}}[\vec{E}])$. For the other direction it is enough to show that for every $a+ \mathcal{D}^{\mathcal{J}}[\vec{E}]\in Z(\mathcal{C}^{\mathcal{J}}[\vec{E}])$ there exists a $a^{\prime}\in Z(\mathcal{D}[\vec{E}])$ such that $a-a^{\prime}\in \mathcal{D}^{\mathcal{J}}[\vec{E}] $, in other words every element of $Z(\mathcal{C}^{\mathcal{J}}[\vec{E}])$ can be lifted to an element of $Z(\mathcal{D}[\vec{E}])$. Let $a=(a_{n})$ be such that each $a_{n}$ belongs to $M_{|E_{n}|}(\mathbb{C})$ and $a+ \mathcal{D}^{\mathcal{J}}[\vec{E}]\in Z(\mathcal{C}^{\mathcal{J}}[\vec{E})]$. For every $n$ let
\begin{equation}
\nonumber a^{\prime}_{n}=\int_{u\in U(|E_{n}|)} ua_{n}u^{*} d\mu
\end{equation}
and since $\mu$ is bi-invariant, for every unitary $u\in M_{|E_{n}|}(\mathbb{C})$ we have $ua^{\prime}_{n}u^{*}=a^{\prime}_{n}$.
If $a^{\prime}=(a^{\prime}_{n})$ then $a^{\prime}\in Z(\mathcal{D}[\vec{E}])$ and $a-a^{\prime}\in \mathcal{D}^{\mathcal{J}}[\vec{E}]$.
\end{proof}
\begin{proposition}
$Z(\mathcal{C}^{\mathcal{J}}[\vec{E}])\cong C(st(P(\mathbb{N})/\mathcal{J}))$.
\end{proposition}
\begin{proof}
Clearly we have $Z(\mathcal{D}[\vec{E}])\cong \ell_{\infty}$ and $\mathcal{D}^{\mathcal{J}}[\vec{E}]\cap Z(\mathcal{D}[\vec{E}])\cong \hat{\mathcal{J}}$.
Therefore by lemma \ref{987} we have $Z(\mathcal{C}^{\mathcal{J}}[\vec{E}])\cong \ell_{\infty}/\hat{\mathcal{J}}\cong C(st(P(\mathbb{N})/\mathcal{J}))$.
\end{proof}
\section{groupwise silver forcing and forcings of the form $\mathbb{P}_{\mathcal{I}}$}
In this section we introduce the forcing used in this context and provide some preliminaries on the properties of these forcings which are used throughout this paper. To see more on forcing and these properties the reader may refer to \cite{Bart} and \cite{ShProper}.
A forcing notion $\mathbb{P}$ is called to be $Suslin$ if its underlying set is an
analytic set of reals and both $\leq$ and $\perp$ are analytic relations.
The following is similar to infinitely equal forcing EE \cite[\S 7.4.C]{Bart}.
Let $\vec{I} = (I_n)$ be a partition of $\mathbb{N}$ into non-empty finite intervals and $G_{n}$ be a finite set, for each $n\in\mathbb{N}$. We denote the set of the \emph{reals} by $\mathbb{R}=\prod_{n}G_{n}$ endowed with the product topology. For each $n$ define $F_{n}^{\vec{I}}=\prod_{i\in I_{n}}G_{i}$ and let $F^{\vec{I}}=\prod_{n\in\mathbb{N}}F^{\vec{I}}_{n}$.
Moreover for any $X\subseteq \mathbb{N}$ let $F_{X}^{\vec{I}}=\prod_{n\in X}F^{\vec{I}}_{n}$. In particular if $X$ is an interval such as $\{k, k+1,\dots, \ell\}$ we use $F_{[k,\ell]}$ to denote $F_{\{k, k+1, \dots, \ell\}}$. For a fixed partition $\vec{I}$ we sometimes drop the superscript $\vec{I}$.
Fix a partition $\vec{I}=(I_{n})$ of the natural numbers into finite intervals.
Define the groupwise Silver forcing $\mathbb{S}_{F^{\vec{I}}}$ associated with $F^{\vec{I}}$, to be the following forcing notion: A condition $p\in \mathbb{S}_{F^{\vec{I}}}$ is a function from $M\subseteq \mathbb{N}$ into $\bigcup_{n=0}^{\infty} F_{n}^{\vec{I}}$, such that $\mathbb{N} \setminus M$ is infinite and $p(n)\in F_{n}^{\vec{I}}$. A condition $p$ is stronger than $q$ if $p$ extends $q$. Each condition $p$ can be identified with $[p]$, the set of all its extensions to $\mathbb{N}$, as a compact subset of $F^{\vec{I}}$. For a generic $G$, $f = \bigcup \{p: p\in G\}$ is the generic real.
Recall that a forcing notion $\mathbb{P}$ is $\omega^{\omega}$-\emph{bounding} if for every $p\in \mathbb{P}$ and a $\mathbb{P}$-name for a function $\dot{f}: \omega\rightarrow \omega$ there are $q\leq p$ and $g\in \omega^{\omega}\cap V$ such that $q\Vdash \dot{f}(\check{n})\leq \check{g}(\check{n}) ~~\forall n$.
\begin{theorem}
$\mathbb{S}_{F^{\vec{I}}}$ is a proper and $\omega^{\omega}$-bounding forcing.
\end{theorem}
\begin{proof}
Let $\mathcal{M}\prec H_{\theta}$ for a large enough $\theta$, be a countable transitive model of \textbf{ZFC} containing $\vec{I}$ and $\mathbb{S}_{F^{\vec{I}}}$. Suppose $\{A_{n}~:~n\in\mathbb{N}\}$ is the set of all maximal antichains in $\mathcal{M}$ and $q\in \mathbb{S}_{F^{\vec{I}}}$ is given. First we claim that there exists $p\in\mathbb{S}_{F^{\vec{I}}}$ such that for every $n$ the set $\{q\in A_{n} ~:~ q~is~compatible ~with~p\}$ is finite. To see this let $p\leq_n q$ if and only if $q\subset p$ and the first $n$ elements that are not in the domain of $q$ are not in the domain of $p$.
We build a fusion sequence $ p_{0}\geq_{0} p_{1}\geq_{1} \dots p_{n}\geq_{n} p_{n+1}\geq_{n+1} \dots$ recursively. For the given $q$ let $p_{0}=q$ and suppose $p_{n}$ is chosen. Let $B= \{k_{1} \dots k_{n}\}$ be the set of first $n$ elements of $\mathbb{N} \setminus dom(p_n)$ ordered increasingly. Since $A_{n}$ is a maximal antichain, $p_{n}$ is compatible with some $s\in A_{n}$. Let $p_{n+1}= p_{n}\cup s\upharpoonright_{(k_{n},\infty)}$. Note that $p_{n+1}$ is compatible with only finitely many elements of $A_{n}$, namely, only possibly those elements $t\in A_{n}$ which $t(i)\neq s(i)$ for some $i\in [0, k_{n})$. Let $p=\bigcup_{n} p_{n}$ be the fusion of the above sequence. For every $n$ the set $C_{n}= \{q\in A_{n} ~:~ q~is~compatible ~with~p\}$ is finite and predense below $p$ for every $n$. Therefore $A_{n}\cap \mathcal{M}$ contains $C_{n}$ and is predense below $p$.
To see $\mathbb{S}_{F^{\vec{I}}}$ is $\omega^{\omega}$-bounding assume $\dot{f}$ is an $\mathbb{S}_{F^{\vec{I}}}$-name such that $q\Vdash \dot{f} : \mathbb{N} \rightarrow\mathbb{N}$. As above we build a fusion sequence $ q=p_{0}\geq_{0} p_{1}\geq_{1} \dots p_{n}\geq_{n} p_{n+1}\geq_{n+1} \dots$. Let $B$ be defined as above and $\{r_{j} : j< k\}$ be the list of all functions $r : B \rightarrow \bigcup_{i\in C} F_{i}^{\vec{I}}$ where $C$ is a finite set such that for every $1\leq j \leq n$ we have $r(k_{j})\in F_{i}^{\vec{I}}$ for some $i\in C$. Successively find $p_n = p_{n}^{0}\geq_{n} p_{n}^{1}\geq_{n} \dots \geq_{n}p_{n}^{k-1} $ such that:
\begin{equation}
\nonumber p_{n}^{j} \cup r_{j}\Vdash \dot{f}(n) = \check{a}_{n}^j.
\end{equation}
Let $p_{n+1}=\bigcup p_{n}^{j}$ and $D_n= \{a_{n}^j : j<k \}$. Now the fusion of this sequence $p$ forces that for every $n$ we have $f(n)\in D_{n}$. Define a ground model map $g : \mathbb{N} \rightarrow\mathbb{N}$ by $g(n)$ to be the largest element of $D_{n}$. Therefore $p$ forces that $g(n)\geq f(n)$ for all $n$.
\end{proof}
The following property is the main reason that we use $\mathbb{S}_{F^{\vec{I}}}$ in this context.
\begin{definition}\label{125}
We say a forcing notion $\mathbb{P}$ captures $F^{\vec{I}}$ if there exists a $\mathbb{P}$-name for a real $\dot{x}$ such that for every $p\in\mathbb{P}$ there is an infinite $M\subseteq \mathbb{N}$ such that for every $a\in F^{\vec{I}}_{M}$ there is $q_{a}\leq p$ such that $q_{a}\Vdash \dot{x}\upharpoonright_{ M} = \check{a}$.
\end{definition}
\begin{lemma}
For any partition $\vec{I}$ of $\mathbb{N}$ into finite intervals, $\mathbb{S}_{F^{\vec{I}}}$ captures $F^{\vec{I}}$.
\end{lemma}
\begin{proof}
Suppose $\dot{x}$ is the canonical $\mathbb{S}_{F^{\vec{I}}}$-name for the generic real and $p\in\mathbb{S}_{F^{\vec{I}}}$ is given. Let $M$ be an infinite subset of $\mathbb{N}\setminus dom(p)$ such that $\mathbb{N}\setminus (M\cup dom(p))$ is also infinite. For every $a\in F^{\vec{I}}_M$ let $q_{a}= p \cup a$. Since $\mathbb{N}\setminus (M\cup dom(p))$ is infinite, $q_{a}$ is a condition in $\mathbb{S}_{F^{\vec{I}}}$ and $q_{a}\Vdash \dot{x}\upharpoonright_{ M} = \check{a}$.
\end{proof}
\begin{lemma}\label{100}
For every $\mathbb{S}_{F^{\vec{I}}}$-name for a real $\dot{x}$ and $q\in \mathbb{S}_{F^{\vec{I}}}$ there are $p\leq q$ and a continuous function $f:p\rightarrow \mathbb{R}$ such that $p$ forces $f(\dot{r}_{gen})= \dot{x}$, where $\dot{r}_{gen}$ is the canonical name for the generic real.
\end{lemma}
\begin{proof}
Assume $q$ forces that $\dot{x}$ is a $\mathbb{S}_{F^{\vec{I}}}$-name for a real. By identifying each condition with the corresponding compact set we can find a fusion sequence $\{p_{s} :s\in \bigcup_{n}F_{[0,n)}^{\vec{I}}\}$ such that for each $s\in F_{[0,n)}^{\vec{I}}$ (here $s$ just would be used as an index) $p_{s}\Vdash \dot{x}\upharpoonright_{[0,n)} = u_{s}$ for some $u_{s}\in F_{[0,n)}^{\vec{I}} $. Let
\begin{equation}
\nonumber p=\bigcap_{n\in\mathbb{N}}\bigcup_{s\in F_{[0,n)}^{\vec{I}}} p_{s}
\end{equation}
be the fusion. For each $y\in p$ let $b\in F^{\vec{I}}$ be the branch such that $y\in p_{b\upharpoonright _{[0,n)}}$ for each $n$. Define $f(y)\upharpoonright_{[0,n)}= u_{b\upharpoonright _{[0,n)}}$. $f$ is a continuous map and $y\in p_{b\upharpoonright _C}$ implies $ d(f(y), \dot{x})< 2^{-n}$.
Therefore $p\Vdash f(\dot{r}_{gen})= \dot{x}$.
\end{proof}
The above lemma shows that $\mathbb{S}_{F^{\vec{I}}}$ satisfies the ,so called, \emph{continuous reading of names}. This can also be seen by noticing that
the groupwise Silver forcing can be viewed as a forcing with Borel $\mathcal{I}$-positive sets, $\mathbb{P}_{\mathcal{I}}= \mathcal{B}(\mathbb{R})/\mathcal{I}$, for a $\sigma$-ideal $\mathcal{I}$, where $\mathcal{I}$ is the $\sigma$-ideal $\sigma$-generated by partial functions with cofinite domains. These forcings are studied by J. Zapletal in \cite{ZapDes} and \cite{ZapId}. Since groupwise Silver forcings (as well as the random forcing, which will be used in our iteration) are proper and conditions are compact sets, by a theorem of Zapletal \cite[Lemma 2.2.1 and Lemma 2.2.3]{ZapDes} the continuous reading of names is equivalent to the forcing being $\omega^{\omega}$-bounding.
\begin{lemma}[J. Zapletal]\label{zap}
Let $\mathcal{I}$ be a $\sigma$-ideal on a Polish space $X$ such that $\mathbb{P}_{\mathcal{I}}$ is a proper forcing. Then the following are equivalent.
\begin{enumerate}
\item $\mathbb{P}_{\mathcal{I}}$ is $\omega^{\omega}$-bounding.
\item Compact sets are dense in $\mathbb{P}_{\mathcal{I}}$ and $\mathbb{P}_{\mathcal{I}}$ has continuous reading of names.
\end{enumerate}
\end{lemma}
Note that the density of compact conditions is essential here: the Cohen forcing is proper, of the form $\mathbb{P}_{\mathcal{I}}$, and has the continuous reading of names, yet it is not $\omega^{\omega}$-bounding.
In Zapletal's theory the countable support iteration of forcings of the form $\mathbb{P}_{\mathcal{I}}$ has been studied for reasonably definable ideals called \emph{iterable} \cite[Definition 3.1.1]{ZapDes}. The following is a generalization of the classical \emph{Fubini product} of two ideals.
\begin{definition}
[J. Zapletal] For a countable ordinal $\alpha$ and $\sigma$-ideals $\{\mathcal{I}_{\xi}: \xi\in\alpha\}$ on the reals, the Fubini product, $\prod_{\xi\in\alpha}\mathcal{I}_{\xi}$, is the ideal
on $\mathbb{R}^{\alpha}$ defined as the collection of all sets $A\subseteq \mathbb{R}^{\alpha}$ for which player I has a winning strategy in the following game $G(A)$: at stage $\beta\in \alpha$ player I plays a set $B_{\beta}\in \mathcal{I}_{\beta}$ and player II responds with a real $r_{\beta}\in \mathbb{R}\setminus B_{\beta}$. Player II wins the game $G(A)$ if the sequence $\{r_{\beta}: \beta\in\alpha\}$ belongs to the set $A$.
\end{definition}
It is easy to see that $\prod_{\xi\in\alpha}\mathcal{I}_{\xi}$ is a $\sigma$-ideal on $\mathbb{R}^{\alpha}$, since player I can always combine countably many of his winning strategies into one. In the presence of large cardinals the game $G(A)$ is always determined for iterable ideals. However, without large cardinals we need some additional definability assumptions on the ideals to guarantee that the game $G(A)$ is determined; see \cite[Section 3.3]{ZapDes}.
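For $\alpha=2$ the game definition recovers the classical Fubini product of two ideals by a direct unravelling of player I's strategies. The following display is included only as a sanity check and is not part of Zapletal's definition.

```latex
% Player I has a winning strategy in G(A) exactly when some B_0 \in I_0
% covers all x whose vertical section is I_1-positive:
\begin{align*}
A\in \mathcal{I}_{0}\otimes\mathcal{I}_{1}
\quad\Longleftrightarrow\quad
\{x\in\mathbb{R} : A_{x}\notin \mathcal{I}_{1}\}\in \mathcal{I}_{0}.
\end{align*}
% Indeed, after player II plays r_0 \notin B_0, an optimal second move for
% player I is any B_1 \in I_1 covering A_{r_0}, which exists exactly when
% A_{r_0} \in I_1.
```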
Recall that for Polish spaces $X,Y$ and $A\subseteq X\times Y$, for any $x\in X$ the vertical section of $A$ at $x$ is the set $A_{x}=\{y\in Y: (x,y)\in A\}$.
\begin{definition}
A $\sigma$-ideal $\mathcal{I}$ on a Polish space $X$ is $\Pi_{1}^{1}$ on $\Sigma_{1}^{1}$ if for every $\Sigma_{1}^{1}$ set $B\subseteq 2^{\mathbb{N}}\times X$ the set $\{x\in 2^{\mathbb{N}}: B_{x}\in \mathcal{I}\}$ is $\Pi_{1}^{1}$.
\end{definition}
Some commonly used ideals fail to be $\Pi_{1}^{1}$ on $\Sigma_{1}^{1}$; e.g., a $\sigma$-ideal $\mathcal{I}$ for which the forcing $\mathbb{P}_{\mathcal{I}}$ is proper and adds a dominating real is not $\Pi_{1}^{1}$ on $\Sigma_{1}^{1}$. However, the $\sigma$-ideals corresponding to the Silver forcing, the random forcing and many other natural forcings are $\Pi_{1}^{1}$ on $\Sigma_{1}^{1}$. In fact,
for a $\sigma$-ideal $\mathcal{I}$ on $\mathbb{R}$ if the poset $\mathbb{P}_{\mathcal{I}}$ consists of compact sets and is Suslin, proper and $\omega^{\omega}$-bounding, then $\mathcal{I}$ is $\Pi_{1}^{1}$ on $\Sigma_{1}^{1}$ (see \cite{ZapDes}, Appendix C).
The following is due to V. Kanovei and J. Zapletal.
\begin{theorem}\label{ZapKan}
Suppose $\alpha$ is a countable ordinal and $\{\mathcal{I}_{\xi}: \xi < \alpha\}$ is a sequence of $\Pi_{1}^{1}$ on $\Sigma_{1}^{1}$ $\sigma$-ideals on the reals. Then the poset $\mathcal{B}(\mathbb{R}^{\alpha})/\prod_{\xi\in\alpha}\mathcal{I}_{\xi}$ is forcing equivalent to the countable support iteration of the ground model forcings $\{\mathbb{P}_{\mathcal{I}_{\xi}} : \xi<\alpha\}$ of length $\alpha$.
\end{theorem}
\begin{proof}
The proof is similar to that of \cite[Lemma 3.3.1 and Corollary 3.3.2]{ZapDes}, where it is stated for the case $\mathcal{I}_{\xi}= \mathcal{I}$ for all $\xi<\alpha$.
\end{proof}
We will occasionally use the following property of countable support forcing iterations, defined in \cite{FaSh}, to prove our main theorem.
\begin{definition}
Assume $\mathbb{P}_{\kappa}=\{\mathbb{P}_{\xi}, \dot{\mathbb{Q}}_{\eta}:\xi\leq \kappa , \eta< \kappa\}$ is a countable support forcing iteration such that each $\dot{\mathbb{Q}}_{\eta}$ is a $\mathbb{P}_{\eta}$-name for a ground-model forcing notion which adds a generic real $\dot{g}_{\eta}$. We say $\mathbb{P}_{\kappa}$ has continuous reading of names if for every $\mathbb{P}_{\kappa}$-name $\dot{x}$ for a new real, the set of conditions $p$ for which there exist a countable $S\subset \kappa$, a compact $K\subset \mathbb{R}^{S}$, and a continuous $h: K\rightarrow \mathbb{R}$ such that
\begin{equation}
\nonumber p\Vdash \langle\dot{g}_{\xi}: \xi\in S\rangle\in K \text{ and } \dot{x}= h(\langle\dot{g}_{\xi} : \xi\in S\rangle)
\end{equation}
is dense.
\end{definition}
\begin{proposition}\label{111}
If $\mathbb{P}_{\kappa}=\{\mathbb{P}_{\xi}, \dot{\mathbb{Q}}_{\eta}:\xi\leq \kappa , \eta< \kappa\}$ is a countable support iteration of ground model forcings such that each $\dot{\mathbb{Q}}_{\eta}$ is a $\mathbb{P}_{\eta}$-name for a Suslin, proper and $\omega^{\omega}$-bounding partial order of the form $\mathbb{P}_{\mathcal{I}}$ such that compact conditions form a dense subset, then $\mathbb{P}_{\kappa}$ has the continuous reading of names.
\end{proposition}
\begin{proof}
Suppose $\mathbb{Q}_{\xi}= \mathbb{P}_{\mathcal{I}_{\xi}}$, $p\in \mathbb{P}_{\kappa}$ and $\dot{x}$ is a $\mathbb{P}_{\kappa}$-name for a real. Let $S$ be the support of $p$ and $\mathcal{I}=\prod_{\xi\in S} \mathcal{I}_{\xi}$. Letting $\mathbb{P}_{\mathcal{I}}= \mathcal{B}(\mathbb{R}^{S})/ \mathcal{I}$, by theorem \ref{ZapKan} we may assume that $\dot{x}$ is a $\mathbb{P}_{\mathcal{I}}$-name. Since $\mathbb{P}_{\mathcal{I}}$ is proper and $\omega^{\omega}$-bounding and compact conditions form a dense subset, Zapletal's characterization of the continuous reading of names for these posets, lemma \ref{zap}, yields a compact condition $q\leq p$ and a continuous function $h: q\rightarrow \mathbb{R}$ such that $q\Vdash h(\langle\dot{g}_{\xi} :~ \xi\in S\rangle)=\dot{x}$.
\end{proof}
Let $\mathcal{I}$ be a $\sigma$-ideal and $\mathcal{M}$ be an elementary submodel of some large enough structure containing $\mathcal{I}$. A real $x$ is called $\mathcal{M}$-generic if the set $\{B\in \mathbb{P}_{\mathcal{I}}\cap \mathcal{M}: x\in B\}$ is an $\mathcal{M}$-generic filter on $\mathbb{P}_{\mathcal{I}}$. The poset $\mathbb{P}_{\mathcal{I}}$ is proper if and only if for every such $\mathcal{M}$ and every $\mathcal{I}$-positive set $B\in \mathbb{P}_{\mathcal{I}}\cap\mathcal{M}$ the set $\{x\in B : \text{$x$ is $\mathcal{M}$-generic}\}$ is $\mathcal{I}$-positive \cite[Lemma 2.1.2]{ZapDes}.
The forcing used in this paper is a countable support iteration of the groupwise Silver forcings and the random forcing. Let $\mathbb{P}_{\kappa}=\{\mathbb{P}_{\xi}, \dot{\mathbb{Q}}_{\eta}:\xi\leq \kappa , \eta< \kappa\}$ be such a forcing of length $\kappa$. In lemma \ref{999} below, we will show that assuming \textbf{MA} in the ground model, any $\Sigma^{1}_{2}$ set in the generic extension by $\mathbb{P}_{\kappa}$ can be uniformized by a Baire-measurable map in the ground model. In order to prove this we first need the following lemma. It is proved in \cite{ZapId} but we include the proof here for the convenience of the reader.
\begin{lemma}\label{126}
Suppose $\mathcal{I}$ is a $\sigma$-ideal on a Polish space $X$ such that $\mathbb{P}_{\mathcal{I}}$ is proper. Let $Y$ be a Polish space and suppose $p\in \mathbb{P}_{\mathcal{I}}$ forces that $\dot{B}$ is a Borel subset of $Y$. Then there are a Borel $\mathcal{I}$-positive condition $q\leq p$ and a ground model Borel set $D\subseteq q\times Y$ such that $q\Vdash \dot{D}_{\dot{r}_{gen}}=\dot{B}$.
\end{lemma}
\begin{proof}
The proof is carried out by induction on the Borel rank of $\dot{B}$. Since the forcing $\mathbb{P}_{\mathcal{I}}$ preserves $\aleph_{1}$, by possibly strengthening the condition $p$ we may assume that the Borel rank of $\dot{B}$ is forced to be $\leq \alpha$ for a fixed countable ordinal $\alpha$.
Let $\mathcal{M}$ be a countable elementary submodel of a large enough structure.
Assume first that $\dot{B}$ is forced to be a closed set. Fix a countable base $\mathcal{O}$ for the topology of the space $Y$. Since $\mathbb{P}_{\mathcal{I}}$ is proper we can find (see \cite{ZapDes}) a Borel $\mathcal{I}$-positive set $q\leq p$ (in fact $q$ is the set of all $\mathcal{M}$-generic reals in $p$) and a ground model Borel function $f: q\rightarrow \mathcal{P}(\mathcal{O})$ such that $q\Vdash \check{f}(\dot{r}_{gen})=\{O\in \mathcal{O}: \dot{B}\cap O=\emptyset\}$. Define $D=\{(x,y)\in q\times Y : y\notin \bigcup f(x)\}$. It is easy to check that $D$ is the required Borel set. The proof for open sets is similar.
Now suppose $p$ forces that $\dot{B}=\bigcup_{n}\dot{B}_{n}$ where the $\dot{B}_{n}$ are sets of lower Borel rank. Let $q=\{x\in p : x \text{ is $\mathcal{M}$-generic}\}$. Using the inductive assumption, for each $n\in \mathbb{N}$ find a maximal antichain $A(n)\subset \mathbb{P}_{\mathcal{I}}$ below $p$ such that for every condition $s\in A(n)$ there is a Borel set $D(s,n)\subset s\times Y$ such that $s\Vdash \dot{D}(s,n)_{\dot{r}_{gen}}=\dot{B}_{n}$. For every $n\in \mathbb{N}$ let $D(n)=\bigcup\{D(s,n) : s\in \mathcal{M}\cap A(n)\}\cap (q\times Y)\subset q\times Y$. The condition $q$ forces that the generic real $\dot{r}_{gen}$ belongs to exactly one condition in the antichain $\mathcal{M}\cap A(n)$ for every $n$. Therefore $\dot{B}_{n}=\bigcup\{\check{D}(s,n) : s\in \mathcal{M}\cap A(n)\}_{\dot{r}_{gen}}= \dot{D}(n)_{\dot{r}_{gen}}$. Now the set $D=\bigcup_{n} D(n)$ is clearly a Borel subset of $q\times Y$ and $q$ forces that $\dot{B}= \dot{D}_{\dot{r}_{gen}}$.
The case of countable intersections is handled by a similar argument.
\end{proof}
The following lemma can be skipped in the proof of theorem \ref{1}, since there it follows immediately from the large cardinal assumption. Nevertheless, it shows that in order to obtain local triviality of isomorphisms, or even of *-homomorphisms, of FDD-algebras (corollary \ref{local}) no large cardinal assumption is necessary.
\begin{lemma}\label{999}
Assume \textbf{MA} holds in the ground model and $\mathbb{P}_{\kappa}$ is a countable support iteration of length $\kappa$ of proper forcings of the form $\mathbb{P}_{\mathcal{I}}$ with compact conditions. If $\dot{C}$ is a $\mathbb{P}_{\kappa}$-name for a $\Sigma_{2}^{1}$ subset of $\mathbb{R}\times \mathbb{R}$ in the extension such that every vertical section of $\dot{C}$ is forced to be non-empty, then there are $q\in\mathbb{P}_{\kappa}$ and a Baire-measurable map $h:\mathbb{R}\rightarrow \mathbb{R}$ such that for every $\mathbb{P}_{\kappa}$-name $\dot{x}$ for a real
\begin{equation}
\nonumber q\Vdash (\dot{x},\check{h}(\dot{x}))\in \dot{C}.
\end{equation}
\end{lemma}
\begin{proof}
Let $\mathcal{I}_{\xi}$ be the $\sigma$-ideal associated with $\dot{\mathbb{Q}}_{\xi}$.
Since $\Sigma_{2}^{1}$ sets are projections of $\Pi_{1}^{1}$ sets, and \textbf{MA} implies that all $\Sigma^1_{2}$ sets have the property of Baire, it is enough to uniformize $\Pi_{1}^{1}$ sets. Assume some $p\in \mathbb{P}_{\kappa}$ forces that $\dot{C}$ is a $\Pi_{1}^{1}$ subset of $\mathbb{R}\times \mathbb{R}$. There is a $\mathbb{P}_{\kappa}$-name $\dot{B}$ for a Borel subset of $\mathbb{R}^{3}$ such that $p\Vdash \mathbb{R}^{2} \setminus pr_{\{1,2\}}(\dot{B})=\dot{C}$, where $pr_{\{1,2\}}$ is the projection onto the first and second coordinates of $\mathbb{R}^{3}$. Let the countable set $S\subset \kappa$ be the support of $p$ and
\begin{equation}
\nonumber \mathbb{P}_{S}=\{\mathbb{P}_{\xi}, \dot{\mathbb{Q}}_{\eta}:\xi\in S , \eta\in S\}
\end{equation}
and let $\mathcal{I}^{S}=\prod_{\xi\in S} \mathcal{I}_{\xi}$. Since these forcings are Suslin, proper and $\omega^{\omega}$-bounding \cite[Lemma 4.3]{FaSh}, we have $p\Vdash_{\mathbb{P}_{S}} \mathbb{R}^{2} \setminus pr_{\{1,2\}}(\dot{B})=\dot{C}$.
Let $\alpha$ be the order-type of $S$. By the forcing equivalence of $\mathbb{P}_{S}$ and $\mathbb{P}_{\mathcal{I}^{S}}=\mathcal{B}(\mathbb{R}^{S})/\mathcal{I}^{S}$ we may, for simplicity, assume $p\in \mathbb{P}_{\mathcal{I}^{S}}$. Since $\mathbb{P}_{\mathcal{I}^{S}}$ is proper, by lemma \ref{126} there is a ground model Borel set $D\subseteq \mathbb{R}^{\alpha}\times \mathbb{R}^{3}$
and $q\leq p$ such that $q\Vdash \dot{B}=\dot{D}_{\dot{r}_{gen}}$ where $\dot{r}_{gen}$ is the canonical $\mathbb{P}_{\mathcal{I}^{S}}$-name for the generic real in $\mathbb{R}^{\alpha}$. Therefore
\begin{equation}
q\Vdash \mathbb{R}^{2} \setminus pr_{\{\alpha+1,\alpha+2\}}(\dot{D}_{\dot{r}_{gen}})=\dot{C}.
\end{equation}
Now since the set $E=\mathbb{R}^{\alpha+2} \setminus pr_{\{1,\dots,\alpha+2\}}(D)$ is $\Pi_{1}^{1}$, by Kond\^{o}'s uniformization theorem $E$ has a $\Pi_{1}^{1}$, and hence Baire-measurable, uniformization $g: pr_{\{1,\dots,\alpha+1\}}(E)\rightarrow \mathbb{R}$.
Let $\mathcal{M}$ be an elementary submodel of some large enough structure containing $\mathcal{I}^{S}$ and $\mathbb{P}_{\kappa}$, and also let $t=\{x\in q : \text{$x$ is $\mathcal{M}$-generic}\}$. Since $\mathbb{P}_{\mathcal{I}^{S}}$ is proper, $t$ is a condition in $\mathbb{P}_{\mathcal{I}^{S}}$.
Fix $x\in t$ and note that since the sections of $\dot{C}$ are non-empty, for every $y\in \mathbb{R}$ we have
\begin{equation}
\nonumber [pr_{\{\alpha+1,\alpha+2\}}(\dot{D}_{x})]_{y}=\dot{C}_{y}\neq \emptyset
\end{equation}
Therefore $t\times \mathbb{R}\subseteq dom(g)$. For every $x\in t$ and $y\in \mathbb{R}$ we have $(x, y, g(x,y))\in E$.
Define the function $h: \mathbb{R}\rightarrow \mathbb{R}$ by
\begin{equation}
\nonumber h(y)=g(\dot{r}_{gen},y)
\end{equation}
By the above and (1) we have $t\Vdash (\dot{y},\check{h}(\dot{y}))\in \dot{C}$.
\end{proof}
\section{topologically trivial automorphisms of analytic p-ideal quotients of fdd-algebras}
In this section we study the automorphisms of quotients of FDD-algebras over ideals associated with analytic P-ideals with Baire-measurable representations. Our result resembles the fact that for an analytic P-ideal $\mathcal{J}$ any automorphism of $P(\mathbb{N})/\mathcal{J}$ with a Baire-measurable representation has an asymptotically additive representation (see \cite{FaAn}, \S 1.5).
\begin{definition}
A map $\mu : \mathcal{P}(\mathbb{N})\rightarrow [0,\infty]$ is a submeasure supported by $\mathbb{N}$ if for $A, B\subseteq \mathbb{N}$
\begin{eqnarray}
\nonumber &\mu(\emptyset)=0\\
\nonumber &\mu(A)\leq \mu(A\cup B)\leq\mu(A)+\mu(B).
\end{eqnarray}
It is lower semicontinuous if for all $A\subseteq \mathbb{N}$ we have
\begin{equation}
\nonumber \mu(A)= \lim_{n\rightarrow\infty}\mu(A\cap [1,n]).
\end{equation}
\end{definition}
For a lower semicontinuous submeasure $\mu$ let
\begin{equation}
\nonumber Exh(\mu)=\{A\subseteq\mathbb{N}~:~\lim_{n}\mu(A\setminus [1,n])=0\}.
\end{equation}
This is an $F_{\sigma\delta}$ P-ideal on $\mathbb{N}$ (see \cite{FaAn}) and by Solecki's theorem \cite{Sol} every analytic P-ideal is of the form $Exh(\mu)$ for some lower semicontinuous submeasure $\mu$.
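Two standard instances may help orient the reader; they are well-known examples and are not used in the sequel.

```latex
% The summable ideal arises from the lower semicontinuous submeasure
% \mu_{1}(A)=\sum_{n\in A} 1/n, giving
\begin{align*}
Exh(\mu_{1})&=\Big\{A\subseteq\mathbb{N} : \sum_{n\in A}\tfrac{1}{n}<\infty\Big\},
\intertext{% while the density-zero ideal arises from
% \mu_{2}(A)=\sup_{n} |A\cap[2^{n},2^{n+1})|/2^{n}:}
Exh(\mu_{2})&=\Big\{A\subseteq\mathbb{N} : \lim_{n}\frac{|A\cap[1,n]|}{n}=0\Big\}.
\end{align*}
```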
For the rest of this section let $\mathcal{J}=Exh(\mu)$ be an analytic P-ideal on $\mathbb{N}$ for a lower semicontinuous submeasure $\mu$, containing all finite sets ($\mathcal{F}in\subseteq \mathcal{J}$). For each $a\in\mathcal{D}[\vec{E}]$ define $supp(a)\subseteq\mathbb{N}$ by
\begin{equation}\nonumber
supp(a)=\{n\in\mathbb{N}: P_{n}a\neq 0\}
\end{equation}
and, to simplify notation, let $\hat{\mu}: \mathcal{D}[\vec{E}]\rightarrow [0, \infty]$ be defined by $\hat{\mu}(a)= \mu(supp(a))$.
\begin{definition}[Approximate *-homomorphism] Assume $A$ and $B$ are unital C*-algebras. A map $\Psi: A \rightarrow B$ is an $\epsilon$-approximate unital *-homomorphism if for every
$a$ and $b$ in $A_{\leq 1}$ the following hold:
\begin{enumerate}
\item $\parallel \Psi(ab)-\Psi(a)\Psi(b)\parallel\leq \epsilon$
\item $\parallel \Psi(a+b) - \Psi(a)-\Psi(b)\parallel \leq \epsilon$
\item $\parallel \Psi(a^{*})- \Psi(a)^{*}\parallel \leq \epsilon$
\item $|\| \Psi(a)\| - \| a \||\leq \epsilon$
\item $\parallel\Psi(I) - I\parallel\leq \epsilon$
\end{enumerate}
We say $\Psi$ is $\delta$-approximated by a unital *-homomorphism $\Lambda$ if $\parallel\Psi(a) - \Lambda(a)\parallel\leq \delta$ for all $a\in A_{\leq 1}$.
\end{definition}
The next lemma is an Ulam-stability type result for finite-dimensional C*-algebras, which will be required in the proof of lemma \ref{5}. For a proof see \cite[Theorem 5.1]{FaCalkin}.
\begin{lemma}\label{21}
There is a universal constant $K < \infty$ such that for every small enough $\epsilon$, all finite-dimensional C*-algebras $A$ and $B$, and every Borel-measurable $\epsilon$-approximate unital *-homomorphism $\Psi: A \rightarrow B$, the map $\Psi$ can be $K\epsilon$-approximated by a unital *-homomorphism.
\end{lemma}
We will also use the following standard fact; for a proof see \cite[Theorem 5.8]{FaCalkin}.
\begin{lemma}\label{22}
If $0 < \epsilon< 1/8$ then in every C*-algebra $A$ the following holds.
For every $a\in A$ satisfying $\| a- a^{2}\|\leq\epsilon$ and $\| a- a^{*}\|\leq\epsilon$, there is a projection $P\in A$
such that $\| P - a\| \leq 4\epsilon$.
\end{lemma}
Assume $\Phi : \mathcal{C}^{\mathcal{J}}[\vec{E}]\rightarrow \mathcal{C}^{\mathcal{J}}[\vec{E}]$ is an automorphism and $\mathcal{D}[\vec{E}]$ is equipped with the strong operator topology. Recall that if $M\subseteq \mathbb{N}$ then $P_{M}$ denotes the projection on the closed span of $\bigcup_{n\in M} \{e_{i} : i \in \vec{E}_{n}\}$. For each $n$ fix a finite set of operators $G_{n}$
which is $2^{-n}$-dense (in norm) in the unit ball of $\mathcal{D}_{\{n\}}[\vec{E}]\cong \mathbb{M}_{|E_{n}|}(\mathbb{C})$. Let $F=\prod_{n=1}^{\infty} G_{n}$ and $F_{M}= P_{M}F$ for any $M\subseteq \mathbb{N}$.
\begin{lemma}\label{5}
If an automorphism $\Phi : \mathcal{C}^{\mathcal{J}}[\vec{E}]\rightarrow \mathcal{C}^{\mathcal{J}}[\vec{E}]$ has a Baire-measurable representation $\Phi_{*}$, then it has a *-homomorphism representation.
\end{lemma}
\begin{proof}
First we show that $\Phi$ has a (strongly) continuous representation on $F$, and then we construct a *-homomorphism representation on $\mathcal{D}[\vec{E}]$ using an argument similar to the one in Chapter 6 of \cite{FaCalkin}.
The first part is a well-known fact (see \cite{FaAn}). To see it, let $G= \bigcap_{i=1}^{\infty}U_{i}$ be a dense $G_{\delta}$ subset of $F$, with each $U_{i}$ dense open, such that the restriction of $\Phi_{*}$ to $G$ is continuous. We may assume $U_{i+1}\subseteq U_{i}$ for each $i$. Recursively choose $1=n_{1}\leq n_{2}\leq \dots$ and $s_{i}\in F_{[n_{i},n_{i+1})}$ such that for every $a\in F$, if ${P}_{[n_{i},n_{i+1})}a= s_{i}$ then $a\in U_{i}$. Now let
\begin{equation}
\nonumber t_{0}= \sum_{i} s_{2i} \quad\quad t_{1}= \sum_{i} s_{2i+1}
\end{equation}
Let $Q_{0}=\sum_{i~even}{P}_{[n_{i},n_{i+1})}$ and $Q_{1}=\sum_{i~odd}{P}_{[n_{i},n_{i+1})}$. Define $\Psi$ on $F$ by
\begin{align}
\nonumber\Psi(a)=\Psi_{0}(a) + \Psi_{1}(a)
\end{align}
where
\begin{align}
\nonumber \Psi_{0}(a)= \Phi_{*}(Q_{0}a + t_{1}) - \Phi_{*}(t_{0})\\
\nonumber \Psi_{1}(a)= \Phi_{*}(Q_{1}a + t_{0}) - \Phi_{*}(t_{1}).
\end{align}
It is easy to see that $\Psi$ is a continuous representation of $\Phi$ on $F$.
By possibly replacing $\Psi$ with the map $a\mapsto\Psi(a)\Psi(I)^{*}$ we can assume that $\Psi$ is unital.
In order to find a *-homomorphism representation of $\Phi$, first we find a representation of $\Phi$ which is \emph{stabilized} by a sequence $\{u_{n}\}$ of orthogonal elements of $F$ in the sense to be made clear below.
\textbf{Claim 1.}
For all $n$ and $\epsilon>0$ there are $k>n$ and $u\in F_{[n,k)}$ such that for every $a$ and $b$ in $F$ satisfying ${P}_{[n,\infty)}a={P}_{[n,\infty)}b$ and ${P}_{[n,k)}a={P}_{[n,k)}b =u$, there exists $c\in \mathcal{D}[\vec{E}]$ such that $\|\Psi(a)-\Psi(b)- c\|<\epsilon$ and $\hat{\mu}({P}_{[k,\infty)}c)<\epsilon$.
\begin{proof}
Suppose the claim fails for some $n$ and $\epsilon>0$. Recursively build sequences $m_{i}, u_{i}, s_{i}$ and $t_{i}$ for $i\in \mathbb{N}$ as follows:
\begin{enumerate}
\item[(a)] $n=m_{0}< m_{1}<m_{2}< \dots$,
\item[(b)] $u_{i}\in F_{[m_{i},m_{i+1})}$,
\item[(c)] $s_{i}$ and $t_{i}$ are elements of $F_{[0,n)}$,
\item[(d)] for every $c\in\mathcal{D}[\vec{E}]$ if $\|\Psi(s_{i}+u_{i})-\Psi(t_{i}+u_{i})- c\|<\epsilon$ then $\hat{\mu}({P}_{[i,\infty)}c)\geq\epsilon$.
\end{enumerate}
This can be easily done by our assumption.
Since $F_{[0,n)}$ is finite, there is a pair $\langle s,t\rangle$ which appears as $\langle s_{i},t_{i}\rangle$ infinitely often; fix such a pair.
Since $\Psi$ is a representation of an automorphism, we can find $k$ large enough and $d, h\in \mathcal{D}^{\mathcal{J}}[\vec{E}]$ such that for every $j\in\mathbb{N}$
\begin{eqnarray}
\nonumber \|\Psi(s+\sum_{i}u_{i})-\Psi(s+u_{j})-\Psi(\sum_{i\neq j}u_{i})-d\|&<&\epsilon/3\\
\nonumber \|\Psi(t+\sum_{i}u_{i})-\Psi(t+u_{j})-\Psi(\sum_{i\neq j}u_{i})-h\|&<&\epsilon/3,
\end{eqnarray}
and
\begin{equation}
\hat{\mu}({P}_{[k,\infty)}d)\leq\epsilon/3,~~~~ \qquad \qquad ~~~~ \hat{\mu}({P}_{[k,\infty)}h)\leq\epsilon/3.
\end{equation}
Both $d$ and $h$ can be chosen to be $\Psi(0)$. Also fix a $c\in\mathcal{D}[\vec{E}]$ such that
\begin{equation}
\|\Psi(s+\sum_{i}u_{i})-\Psi(t+\sum_{i}u_{i}) - c\|<\epsilon/3.
\end{equation}
We will see that with these assumptions no such $c$ could belong to $\mathcal{D}^{\mathcal{J}}[\vec{E}]$.
For infinitely many $j\geq k$ we have
\begin{eqnarray}
\nonumber \|\Psi(s+u_{j})&-&\Psi(t+u_{j})-(d+h+c)\| \\
\nonumber&\leq& \|\Psi(s+\sum_{i}u_{i})-\Psi(s+u_{j})-\Psi(\sum_{i\neq j}u_{i})-d\|\\
\nonumber &+& \|\Psi(t+\sum_{i}u_{i})-\Psi(t+u_{j})-\Psi(\sum_{i\neq j}u_{i})-h\|\\
\nonumber &+& \|\Psi(s+\sum_{i}u_{i})-\Psi(t+\sum_{i}u_{i}) - c\|\\
\nonumber &<& \epsilon/3+\epsilon/3+\epsilon/3=\epsilon.
\end{eqnarray}
Hence by condition (d) we have $\hat{\mu}({P}_{[j,\infty)}(d+h+c))\geq\epsilon$ and
\begin{equation}
\nonumber \hat{\mu}({P}_{[j,\infty)}d)+\hat{\mu}({P}_{[j,\infty)}h)+\hat{\mu}({P}_{[j,\infty)}c)\geq\hat{\mu}({P}_{[j,\infty)}(d+h+c))\geq\epsilon.
\end{equation}
Therefore by (2) we have $\hat{\mu}({P}_{[j,\infty)}c)\geq\epsilon/3$ for infinitely many $j\geq k$.
Since $c$ was arbitrary, this implies that for any $c$ satisfying (3) we have $\lim_{i\rightarrow\infty}\hat{\mu}({P}_{[i,\infty)}c)\geq\epsilon/3$ (note that $i\mapsto\hat{\mu}({P}_{[i,\infty)}c)$ is non-increasing). Hence
$\Psi(s+\sum_{i}u_{i})- \Psi(t+\sum_{i}u_{i})$ does not belong to $\mathcal{D}^{\mathcal{J}}[\vec{E}]$.
This is a contradiction since $(s+\sum_{i}u_{i}) - (t+\sum_{i}u_{i})$ is a compact operator and therefore $\Psi(s+\sum_{i}u_{i}) - \Psi(t+\sum_{i}u_{i})\in\mathcal{D}^{\mathcal{J}}[\vec{E}]$.
\end{proof}
We build two increasing sequences of natural numbers $(n_{i})$ and $(k_{i})$ with $n_{i}< k_{i}< n_{i+1}$ for every $i$, and so-called ``stabilizers'' $u_{i}\in F_{[n_{i},n_{i+1})}$, such that for all $a, b \in F$ for which $ {P}_{[n_{i}, n_{i+1})}a= {P}_{[n_{i},n_{i+1})}b = u_{i}$ the following hold:
\begin{enumerate}
\item If ${P}_{[n_{i+1},\infty)}a={P}_{[n_{i+1},\infty)}b$ then there exists $c\in \mathcal{D}[\vec{E}]$ such that $\|[\Psi(a)-\Psi(b)] {P}_{[k_{i},\infty)} -c\|<2^{-n_{i}}$ and $\hat{\mu}({P}_{[k_{i},\infty)}c)<2^{-n_{i}}$.\\
\item If $ {P}_{[0,n_{i})}a= {P}_{[0,n_{i})}b$ then $\| [\Psi(a) - \Psi(b)]{P}_{[k_{i},\infty)}\| \leq 2^{-n_{i}}$.
\end{enumerate}
Assume $ n_{i}, k_{i-1}$ and $u_{i-1}$ have been chosen. By the claim above we can find $k_{i}$ and $u_{i}^{0}\in F_{[n_{i},k_{i})}$ such that $(1)$ holds. Since $\Psi$ is strongly continuous, we can then find $n_{i+1}\geq k_{i}$ and $u_{i}\in F_{[n_{i},n_{i+1})}$ extending $u_{i}^{0}$ such that $(2)$ holds.
Let $J_{i}=[n_{i}, n_{i+1})$ and $\nu_{i} = \mathcal{D}_{J_{i}}[\vec{E}]$. Then $\mathcal{D}[\vec{E}]= \prod_i \nu_{i}$, and every $b \in \mathcal{D}[\vec{E}]$ can be written as $b=\sum_{j} b_{j} $ where $b_{j}\in \nu_{j}$. Note that $F_{J_{i}}$ is finite and $2^{-n_{i}+1}$-dense in $\nu_{i}$. Fix a linear ordering of $F_{J_{i}}$ and define $\sigma_{i}:~ \nu_{i} \longrightarrow F_{J_{i}}$ by letting $\sigma_{i}(b)$ be the least element of $F_{J_{i}}$ which is in the $2^{-n_{i}+1}$-neighborhood of $b$. For $b\in \mathcal{D}[\vec{E}]_{\leq 1}$ let $b_{even}= \sum_i \sigma_{2i}(b_{2i})$ and $b_{odd}= \sum_i \sigma_{2i+1}(b_{2i+1})$. Both of these elements belong to $F$ and $b- b_{even}- b_{odd}$ is compact.\\
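As an aside, the density estimate for $F_{J_{i}}$ used here comes from the block-diagonal structure of $\nu_{i}$:

```latex
% Elements of \nu_{i}=\mathcal{D}_{J_{i}}[\vec{E}] are block diagonal, so the
% norm of a difference is the maximum of the block norms. Approximating each
% block of b\in(\nu_{i})_{\leq 1} by an element of the 2^{-j}-dense set G_{j}
% yields an element of F_{J_{i}} at distance at most
\begin{align*}
\max_{j\in J_{i}} 2^{-j} \;=\; 2^{-n_{i}} \;<\; 2^{-n_{i}+1}
\end{align*}
% from b.
```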
Define $\Lambda _{2i+1}: \nu_{2i+1}\longrightarrow \mathcal{D}[\vec{E}]$ by
\begin{equation}
\nonumber\Lambda_{2i+1}(a) = \Psi(u_{even} + \sigma_{2i+1}(a))- \Psi(u_{even}).
\end{equation}
Here $u_{even}=\sum_{j} u_{2j}$ denotes the sum of the even-indexed stabilizers. Since $\Psi$ is continuous and $\sigma_{i}$ is Borel-measurable, $\Lambda_{2i+1}$ is Borel-measurable. Let $Q_{i}={P}_{[k_{i-1},k_{i+1})}$ with $k_{-1}=0$. Note that if $| i-j|>1$ then $Q_{i}$ and $Q_{j}$ are orthogonal.\\
Let $\Lambda~:~ \prod_{i=0}^{\infty}\nu_{2i+1}\longrightarrow \mathcal{D}[\vec{E}]$ be defined by
\begin{equation}
\nonumber\Lambda(b) = \Psi(u_{even} + b_{odd})- \Psi(u_{even}).
\end{equation}
Since $b-b_{odd}$ is compact we have $\Psi(b)-\Lambda(b)\in \mathcal{D}^{\mathcal{J}}[\vec{E}]$. Therefore $\Lambda$ is a representation of $\Phi$ on $\prod_{i=0}^{\infty}\nu_{2i+1}$.
\textbf{Claim 2.} For $b=\sum_{j} b_{2j+1}\in \prod_{j=0}^{\infty} \nu_{2j+1}$, the operator $\Psi(b)-\sum_{i=0}^{\infty} Q_{2i+1}\Lambda_{2i+1}(b_{2i+1})$
belongs to $\mathcal{D}^{\mathcal{J}}[\vec{E}]$.
Since $\Lambda$ is a representation of $\Phi$ on $\prod_{i=0}^{\infty}\nu_{2i+1}$, there exists $c\in \mathcal{D}^{\mathcal{J}}[\vec{E}]$ such that for every large enough $l$, $\|[\Psi(b)-\Lambda(b)]Q_{2l+1}-c\|<2^{-n_{2l}}$ and $\hat{\mu}({P}_{[k_{2l},\infty)}c)<2^{-n_{2l}}$.
Let $b^{l}=\sum_{j=l}^{\infty}\sigma_{2j+1}(b_{2j+1})$; applying (1) to $b^{l}$ and $b_{odd}$ yields a $c^{\prime}\in \mathcal{D}[\vec{E}]$ such that $\|[\Psi(u_{even}+ b_{odd})- \Psi(u_{even}+b^{l})]Q_{2l+1}-c^{\prime}\|<2^{-n_{2l}}$ and $\hat{\mu}({P}_{[k_{2l},\infty)}c^{\prime})<2^{-n_{2l}}$.
Therefore
\begin{align} \nonumber
&\| Q_{2l+1}[\Psi(b)-\sum_{i}Q_{2i+1}\Lambda_{2i+1}(b_{2i+1})]-(c+c^{\prime})\|\\ \nonumber
&\leq\|Q_{2l+1} [\Psi(b)- \Lambda(b)]-c\| + \| Q_{2l+1}[\Lambda(b)- \sum_{i}Q_{2i+1}\Lambda_{2i+1}(b_{2i+1})]-c^{\prime}\|\\ \nonumber
&\leq 2^{-n_{2l}}+ \| Q_{2l+1}[\Lambda(b)-\Psi(u_{even}+ b^{l})+\Psi(u_{even})]-c^{\prime}\| \\ \nonumber
& + \| Q_{2l+1}[\Psi(u_{even}+ b^{l})-\Psi(u_{even})-\Lambda_{2l+1}(b_{2l+1})]\| \qquad \text{[Apply (2)]}\\\nonumber
&\leq 3. 2^{-n_{2l}}\nonumber.
\end{align}
Now for $d={P}_{[k_{2l},\infty)}(c+c^{\prime})+[\sum_{n=0}^{l}Q_{2n+1}\Lambda_{2n+1}(b_{2n+1})-{P}_{[0,k_{2l})}\Psi(b)]$ and any large enough $l$ we have
\begin{equation}
\nonumber \|\Psi(b)-\sum_{i}Q_{2i+1}\Lambda_{2i+1}(b_{2i+1})-d\|\leq\sum_{j=l}^{\infty}2^{-n_{2j}}
\end{equation}
and $\hat{\mu}({P}_{[k_{2l},\infty)}d)<2\cdot 2^{-n_{2l}}$. This completes the proof of Claim 2.
Now let $\Lambda^{\prime}_{2i+1}: \nu_{2i+1}\rightarrow Q_{2i+1}\mathcal{D}[\vec{E}]$ be defined as
\begin{equation}
\nonumber \Lambda^{\prime}_{2i+1}(b)=Q_{2i+1}\Lambda_{2i+1}(b).
\end{equation}
Let $c_{2i+1}=\Lambda_{2i+1}^{\prime}(I_{2i+1})$, where $I_{2i+1}$ is the unit of $\nu_{2i+1}$, and $\delta_{i}= \max \{\| c_{2i+1}^{2} - c_{2i+1}\| , \| c_{2i+1}^{*} - c_{2i+1}\|\}$. We show that $\limsup_{i}\delta_{i}= 0$. Assume not, and find $\delta> 0$ and an infinite set $M\subset 2\mathbb{N}+1$ such that for all $i\in M$ we have $\max\{\| c_{i}^{2}-c_{i}\| , \| c_{i}^{*} - c_{i}\|\}>\delta$. Let $c = \sum_{i\in M} c_{i}$; by the previous claim, if $P=\sum_{i\in M} Q_{i}$ then $\Psi(P) - c$ is compact. Therefore $c - c^{2}$ and $c - c^{*}$ are compact. Since the $c_{i}$ are orthogonal we have $c^{2}= \sum_{i\in M} c_{i}^{2}$ and $c^{*}= \sum_{i\in M} c_{i}^{*}$. Thus for all large enough $i\in M$ we have $\| c_{i} - c_{i}^{2} \|= \| Q_{i}(c - c^{2})\|\leq \delta$ and $\| c_{i} - c_{i}^{*} \|= \| Q_{i}(c - c^{*})\|\leq \delta$, which is a contradiction.\\
Applying lemma \ref{22} to $c_{2i+1}$ for large enough $i$ we get projections $S_{2i+1}\leq Q_{2i+1}$ such that $\limsup_{i\rightarrow\infty} \| S_{2i+1} - \Lambda_{2i+1}^{\prime}(I_{2i+1})\| = 0$. Let
\begin{equation}
\nonumber\Lambda_{i}^{\prime\prime}(a) = S_{2i+1}\Lambda_{2i+1}^{\prime}(a)S_{2i+1}
\end{equation}
for $a\in \nu_{2i+1}$.
Now, by re-enumerating indices, we can assume each $\Lambda_{i}^{\prime\prime}$ is an $\epsilon$-approximate unital *-homomorphism for small enough $\epsilon$.
Then $\Lambda^{\prime\prime} (a)= \sum_{i} \Lambda_{i}^{\prime\prime}(a)$ is a representation of $\Phi$ on $\prod_{i}\nu_{2i+1}$.
Let
\begin{align}
&\nonumber \delta_{i}^{0} = \sup_{a,b\in (\nu_{2i+1})_{\leq 1}}\| \Lambda_{i}^{\prime\prime}(ab) - \Lambda_{i}^{\prime\prime}(a)\Lambda_{i}^{\prime\prime}(b)\| \\
&\nonumber \delta_{i}^{1} = \sup_{a,b\in (\nu_{2i+1})_{\leq 1}} \| \Lambda_{i}^{\prime\prime}(a+b) - \Lambda_{i}^{\prime\prime}(a)-\Lambda_{i}^{\prime\prime}(b)\| \\
&\nonumber \delta_{i}^{2} = \sup_{a \in (\nu_{2i+1})_{\leq 1}}\| \Lambda_{i}^{\prime\prime}(a^{*}) - \Lambda_{i}^{\prime\prime}(a)^{*}\| \\
&\nonumber \delta_{i}^{3} = \sup_{a \in (\nu_{2i+1})_{\leq 1}} \big|\| \Lambda_{i}^{\prime\prime}(a)\| - \| a\|\big|.
\end{align}
We claim that $\lim_{i} \max_{0\leq k\leq 3}\delta_{i}^{k} =0$. We only show $\lim_{i} \delta_{i}^{0} = 0$ since the others are similar. Take $a$ and $b$ in $\sum_{i} \nu_{2i+1}$ such that ${P}_{J_{i}}a = a_{i}$ and ${P}_{J_{i}}b = b_{i}$ for all $i$. Since $\Psi(ab) - \Psi(a)\Psi(b)$ is compact, by Claim 2 so is $\Lambda^{\prime\prime}(ab) -\Lambda^{\prime\prime}(a)\Lambda^{\prime\prime}(b)$, which implies $\lim_i \delta_{i}^{0}=0$.
Let $\delta_{j}= \max_{0\leq i\leq3}\delta_{j}^{i}$. Each $\Lambda_{j}^{\prime\prime}$ is a Borel-measurable $\delta_{j}$-approximate *-homomorphism. Therefore, by lemma \ref{21}, for all large enough $j$, say $j>n$, we can find a *-homomorphism $\Theta_{j}$ defined on $\nu_{2j+1}$ which is a $K\delta_{j}$-approximation of $\Lambda_{j}^{\prime\prime}$. Define $\Theta ~:~ \sum_{i>n} \nu_{2i+1}\longrightarrow \mathcal{D}[\vec{E}]$ by $\Theta = \sum_{i>n} \Theta_{i}$. Since $\lim_{j} \delta_{j} = 0$, $\Theta$ is a representation of $\Phi$ on $ \sum_{i>n} \nu_{2i+1}$. Hence $\Theta$ can be extended to a *-homomorphism representation of $\Phi$ on $ \sum_{i} \nu_{2i+1}$.
By repeating the same argument for the even intervals instead of the odd ones, one gets a *-homomorphism representation of $\Phi$ on $ \sum_{i} \nu_{2i}$. Combining these two representations yields the desired representation of $\Phi$.\\
\end{proof}
\section{automorphisms of borel quotients of fdd-algebras are topologically trivial}
This section is devoted to finding local Baire-measurable representations of $\Phi$. For this section it is enough to assume that $\mathcal{J}$ is a Borel ideal on the natural numbers containing all finite sets; we also assume that all elements of the FDD-algebra are taken from the unit ball.
We say an automorphism $\Phi:\mathcal{C}^{\mathcal{J}}[\vec{E}]\rightarrow \mathcal{C}^{\mathcal{J}}[\vec{E}]$ is trivial if it has a representation which is a *-homomorphism, and that it is $\Delta^{1}_{2}$ if the set $\{(a,b) : \Phi(\pi_{\mathcal{J}}(a)) = \pi_{\mathcal{J}}(b)\}$ is $\Delta^{1}_{2}$.
Fix a partition $\vec{I} = (I_n)$ of the natural numbers into finite intervals, and for each $n$ fix a finite set $G_{n}$ of operators
which is $2^{-n}$-dense (in norm) in the unit ball of $\mathcal{D}_{\{n\}}[\vec{E}]\cong \mathbb{M}_{|E_{n}|}(\mathbb{C})$. As before let
\begin{equation}
\nonumber F_{n}^{\vec{I}}=\prod_{i\in I_{n}}G_{i}, \qquad\qquad~~~~~~~~ F^{\vec{I}}=\prod_{n\in\mathbb{N}}F_{n}^{\vec{I}}
\end{equation}
and for $M\subset \mathbb{N}$ let
\begin{equation}
\nonumber F_{M}^{\vec{I}}=\prod_{n\in M}F_{n}^{\vec{I}}.
\end{equation}
Note that each $F_{n}^{\vec{I}}$ is $2^{-k+1}$-dense in $\mathcal{D}_{I_{n}}[\vec{E}]$, where $k$ is the smallest element of $I_{n}$.
Since each $G_{n}$ is finite, the product topology and the strong operator topology coincide on $F^{\vec{I}}$. For any $M\subseteq\mathbb{N}$ let $\hat{P}_{M}= P_{\cup_{n\in M}I_{n}}$.
\begin{lemma}
If a forcing notion $\mathbb{P}$ captures $F^{\vec{I}}$, then there is a $\mathbb{P}$-name $\dot{x}$ for a real such that for every $p\in\mathbb{P}$ there is an infinite $M\subset \mathbb{N}$ such that for every $a\in \mathcal{D}_{\cup_{n\in M}I_{n}}[\vec{E}]$ there is $q_{a}\leq p$ such that $q_{a}\Vdash \hat{P}_{M}\dot{x} =^{\mathcal{J}} \check{a}$.
\end{lemma}
\begin{proof}
Since the ideal $\mathcal{J}$ contains all finite sets and the sequence $\{F_{n}^{\vec{I}}\}$ is eventually dense in $\mathcal{D}_{\cup_{n\in M}I_{n}}[\vec{E}]$, the claim follows from definition \ref{125}.
\end{proof}
Let $\mathcal{C}_{M}[\vec{E}]=\mathcal{D}_{M}[\vec{E}]/(\mathcal{D}_{M}[\vec{E}]\cap\mathcal{D}^{\mathcal{J}}[\vec{E}])$ and define the following ideals on $\mathbb{N}$.
\begin{eqnarray}
\nonumber &Triv_{\Phi}^{0}&=\{ M\subset\mathbb{N}:~ \Phi\upharpoonright \mathcal{C}_{M}[\vec{E}] ~ \text{has a strongly continuous representation}\}\\
\nonumber &Triv_{\Phi}^{1}&= \{M \subset\mathbb{N}:~ \Phi\upharpoonright \mathcal{C}_{M}[\vec{E}] ~\text{is}~ \Delta_{2}^{1}\}.
\end{eqnarray}
We say that $\Phi$ is \emph{locally topologically trivial} if $Triv_{\Phi}^{0}$ is non-meager and it is \emph{locally} $\Delta_{2}^{1}$ if $Triv_{\Phi}^{1}$ is non-meager.\\
The following lemma is well-known and is proved in \cite[lemma 4.5]{FaSh}, where $\mathbb{P}$ is a countable support iteration of certain creature forcings and the random forcing. Since groupwise Silver forcings, as well as the random forcing, are Suslin proper, $\omega^{\omega}$-bounding, and have continuous reading of names, the same proof works for $\mathbb{P}=\{\mathbb{P}_{\xi}, \dot{\mathbb{Q}}_{\eta}~ :~ \xi\leq \kappa , \eta< \kappa\}$, a countable support iteration of forcings such that each $\dot{\mathbb{Q}}_{\eta}$ is forced to be either a groupwise Silver forcing or the random forcing.
\begin{lemma}\label{222}
Assume $\mathbb{P}=\{\mathbb{P}_{\xi}, \dot{\mathbb{Q}}_{\eta}~ :~ \xi\leq \kappa , \eta< \kappa\}$ is as above and $\dot{x}$ is a $\mathbb{P}$-name for a real. For $A\subseteq \mathbb{R}$ a Borel set and $g:\mathbb{R}^{2}\rightarrow \mathbb{R}$ a Borel function, if $p\in\mathbb{P}$ is such that $\dot{x}$ is continuously read below $p$, then the set
\begin{equation}
\nonumber \{a: ~ p\Vdash g(\check{a},\dot{x})\in A \}
\end{equation}
is $\Delta_{2}^{1}$.
\end{lemma}
Note that since $\mathbb{P}$ is $\omega^{\omega}$-bounding, we can assume that all partitions of $\mathbb{N}$ into finite intervals in the generic extension by $\mathbb{P}$ are ground model partitions. We will use lemma \ref{222} to show that if all partitions are captured by some groupwise Silver forcing in stationarily many steps of uncountable cofinality, then every automorphism $\Phi$ is forced to be locally $\Delta_{2}^{1}$ in the generic extension.
\begin{lemma}\label{4}
Assume $\mathbb{P}$ is a countable support iteration of length $\mathfrak{c}^{+}$ as above such that for every partition ${\vec{I}}$ of $\mathbb{N}$ into finite intervals the set
\begin{equation}
\nonumber \{\xi< \mathfrak{c}^{+} : ~\Vdash_{\mathbb{P}_{\xi}} \dot{\mathbb{Q}}_{\xi} ~\text{captures}~ F^{\vec{I}} ~\text{and} ~cf(\xi)\geq \aleph_{1}\}
\end{equation}
is stationary. Then every automorphism $\Phi : \mathcal{C}^{\mathcal{J}}[\vec{E}]\rightarrow \mathcal{C}^{\mathcal{J}}[\vec{E}]$ is forced to be locally $\Delta_{2}^{1}$.\\
\end{lemma}
\begin{proof}
Let $\dot{\Phi}$ be a $\mathbb{P}$-name for an automorphism in the generic extension as above and $\dot{\Phi}_{*}$ be an arbitrary representation of $\dot{\Phi}$. Let $G\subset \mathbb{P}$ be a generic filter.\\
Assume towards a contradiction that $Triv^{1}_{int_{G}(\dot{\Phi})}$ is meager in $V[G]$, with a witnessing partition $\vec{I}=(I_{n})$; i.e., for every infinite $A\subset\mathbb{N}$ the set $\bigcup_{n\in A} I_{n}$ is not in $Triv^{1}_{int_{G}(\dot{\Phi})}$. Since our forcings have cardinality $<\mathfrak{c}^{+}$,
the set of all $\xi< \mathfrak{c}^{+}$ of uncountable cofinality such that $\vec{I}$ witnesses that $Triv^{1}_{int_{G\upharpoonright \xi}(\dot{\Phi}\upharpoonright\xi)}$ is meager in $V[G\upharpoonright\xi]$ includes a club $C$ (cf. \cite{FaSh}) relative to the set $\{\xi<\mathfrak{c}^{+}: cf(\xi)>\aleph_{0}\}$.\\
By our assumption there is a stationary set $S$ of ordinals of uncountable cofinality such that for all $\xi \in S$ we have $\Vdash_{\mathbb{P}_{\xi}} " \dot{\mathbb{Q}}_{\xi} $ adds a real $\dot{x}_{\xi}$ which captures $F^{\vec{I}}"$.
Fix $\eta\in S\cap C$. Let $\dot{y}$ be a $\mathbb{P}_{[\eta,\mathfrak{c}^{+}]}$-name such that
\begin{equation}
\nonumber \Phi(\pi_{\mathcal{J}}(\dot{x}_{\eta}))= \pi_{\mathcal{J}}(\dot{y}).
\end{equation}
Note that $\dot{x}_{\eta}$ is the generic real added by $\mathbb{Q}_{\eta}$. Since $\mathbb{P}$ has the continuous reading of names, for any $p\in \mathbb{P}_{[\eta,\mathfrak{c}^{+}]}$ there are $q\leq p$, a countable set $T$ containing $\eta$, a compact set $K\subseteq \mathbb{R}^{T}$ and a continuous map $h : K\rightarrow \mathbb{R}$ such that $q$ forces that $\check{h}(\langle \dot{x}_{\xi}: \xi\in T \rangle)= \dot{y}$. Since $\dot{\mathbb{Q}}_{\eta}$ captures $F^{\vec{I}}$, there is an infinite $A\subset \mathbb{N}$ such that, letting $M=\bigcup_{n\in A}I_{n}$, for every $a\in \mathcal{D}_{M}[\vec{E}]$ there is $q_{a}\leq q$ such that $q_{a}\Vdash \check{P}_{M}\dot{x}_{\eta} =^{\mathcal{J}} \check{a}$ and therefore $ \Phi_{*}(\check{P}_{M})\dot{y}=^{\mathcal{J}} \Phi_{*}(\check{a})$. For every $a\in \mathcal{D}_{M}[\vec{E}]$ we have
\begin{equation}
\nonumber \Phi_{*}(a)=^{\mathcal{J}}b \Longleftrightarrow q_{a}\Vdash b=^{\mathcal{J}} \Phi_{*}(\check{P}_{M})\check{h}(\langle \dot{x}_{\xi}: \xi\in T \rangle),
\end{equation}
so lemma \ref{222} implies that the set $\{(a,b)\in \mathcal{D}_{M}[\vec{E}]\times \mathcal{D}[\vec{E}] : \Phi(\pi_{\mathcal{J}}(a)) = \pi_{\mathcal{J}}(b)\}$
is $\Delta^{1}_{2}$.
Therefore $M$ is in $Triv^{1}_{int_{G\upharpoonright \eta}(\dot{\Phi}\upharpoonright \eta)}$, which contradicts the assumption that $\vec{I}$ witnesses the meagerness of $Triv^{1}_{int_{G\upharpoonright \eta}(\dot{\Phi}\upharpoonright \eta)}$.
\end{proof}
The following lemma is very similar to \cite[lemma 4.9]{FaSh}.
\begin{lemma}\label{11}
Suppose $f$ and $g$ are functions such that each of them is a representation of a *-homomorphism from $\mathcal{C}^{\mathcal{J}}[\vec{E}]$ into $\mathcal{C}^{\mathcal{J}}[\vec{E}]$. Assume
\begin{equation}
\nonumber \Delta_{f,g,\mathcal{J}} = \{ a\in F^{\vec{I}}~:~ f(a)\neq^{\mathcal{J}} g(a)\}
\end{equation}
is null. Then $\Delta_{f,g,\mathcal{J}}$ is empty.
\end{lemma}
\begin{proof}
By inner regularity of the Haar measure we can find a compact set $K\subset F^{\vec{I}}$ disjoint from $\Delta_{f,g,\mathcal{J}}$ of measure $>1/2$. Fix any $a\in F^{\vec{I}}$. Since the set $K+a$ also has measure $>1/2$, we can find $b\in K$ such that $b+a$ is also in $K$. Now we have
\begin{equation}
\nonumber f(a)=^{\mathcal{J}}f(a+b)-f(b)=^{\mathcal{J}} g(a+b)-g(b)=^{\mathcal{J}}g(a).
\end{equation}
\end{proof}
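The measure argument above has a transparent finite analogue: the agreement set of two group homomorphisms is a subgroup, so two distinct homomorphisms can agree on at most half of the group, and a set of measure $>1/2$ therefore forces agreement everywhere. The toy Python check below over $(\mathbb{Z}/2\mathbb{Z})^{k}$ is only an illustration of this mechanism, not part of the proof:

```python
import numpy as np
from itertools import product

k = 6
rng = np.random.default_rng(5)
A = rng.integers(0, 2, size=(k, k))   # a GF(2)-linear map on (Z/2Z)^k
B = A.copy()
B[0, 0] ^= 1                          # a second, distinct homomorphism

# count the points of (Z/2Z)^k on which the two maps agree
agree = sum(
    np.array_equal((A @ np.array(x)) % 2, (B @ np.array(x)) % 2)
    for x in product([0, 1], repeat=k)
)
# the agreement set is a subgroup, hence has size at most 2^(k-1)
assert agree <= 2 ** (k - 1)
```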
\begin{corollary}\label{2}
Suppose $f$ and $g$ are continuous functions such that each of them is a representation of a *-homomorphism from $\mathcal{D}[\vec{E}]$ into $\mathcal{C}(A)$, and the random forcing $\mathcal{R}$ forces that $f(\dot{x})=^{\mathcal{J}} g(\dot{x})$, where $\dot{x}$ is the canonical name for the random real. Then $f(a)=^{\mathcal{J}}g(a)$ for every $a \in \mathcal{D}[\vec{E}]$.
\end{corollary}
\begin{proof}
Let $\Delta_{f,g,\mathcal{J}}$ be as defined in the previous lemma. If $\Delta_{f,g,\mathcal{J}}$ is null, then by lemma \ref{11} we are done. So assume $\Delta_{f,g,\mathcal{J}}$ has positive measure, and let $M$ be a countable model of \textbf{ZFC} containing codes for $f$, $g$ and $\mathcal{J}$. Since $\dot{x}$ is the random real, $\dot{x}\in \Delta_{f,g,\mathcal{J}}$, and therefore $f(\dot{x})\neq^{\mathcal{J}} g(\dot{x})$ in the generic extension. But our assumption $f(\dot{x})=^{\mathcal{J}} g(\dot{x})$ is a $\Delta^{1}_{1}$ statement and hence absolute, which is a contradiction.
\end{proof}
Recall that for $M\subset \mathbb{N}$, $P_{M}$ is the projection on the closed span of $\bigcup_{n\in M} \{e_{i} : i \in \vec{E}_{n}\}$.
\begin{lemma}\label{3}
Suppose $\mathcal{J}$ is a Borel ideal on $\mathbb{N}$. If $a\in \mathcal{D}[\vec{E}]\setminus \mathcal{D}^{\mathcal{J}}[\vec{E}]$ and $\mathcal{L}$ is a non-meager ideal on $\mathbb{N}$, then there exists $M\in\mathcal{L}$ such that $P_{M}a\notin \mathcal{D}^{\mathcal{J}}[\vec{E}]$.
\end{lemma}
\begin{proof}
Since $a$ does not belong to $\mathcal{D}^{\mathcal{J}}[\vec{E}]$ there is $\epsilon>0$ such that
\begin{equation}
\nonumber A=\{n\in \mathbb{N} : \|a_{n}\|>\epsilon\}\notin \mathcal{J}.
\end{equation}
Since $A\cap \mathcal{J}$ is a proper Borel ideal on $A$ there are disjoint finite sets $I_{n}$ such that $\bigcup_{n\in\mathbb{N}} I_{n}=A$ and for every infinite $X\subseteq \mathbb{N}$ the set $\bigcup_{n\in X} I_{n}\notin A\cap\mathcal{J}$. Let $\vec{J}=(J_{n})$ be a partition of $\mathbb{N}$ such that $J_{n}\cap A = I_{n}$ for every $n$. Since $\mathcal{L}$ is a non-meager ideal there exists an infinite $X\subseteq \mathbb{N}$ such that $\bigcup_{n\in X} J_{n}\in \mathcal{L}$. For $M=\bigcup_{n\in X} J_{n}$ we have $\bigcup_{n\in X} I_{n} \subseteq supp(P_{M}a)\notin \mathcal{J}$ and clearly $\|a_{n}\|\geq \epsilon$ for every $n\in \bigcup_{n\in X} I_{n}$. Hence $P_{M}a\notin \mathcal{D}^{\mathcal{J}}[\vec{E}]$.
\end{proof}
The next lemma shows that every locally topologically trivial automorphism in the extension is forced to have a ``simple'' definition.
\begin{lemma}\label{6}
Assume $\mathbb{P}=\{\mathbb{P}_{\xi}, \dot{\mathbb{Q}}_{\eta}~ :~ \xi\leq \mathfrak{c}^{+} , \eta< \mathfrak{c}^{+}\}$ is as above, where $\dot{\mathbb{Q}}_{0}$ is the poset for the random forcing, and assume $\dot{\Phi}$ is a $\mathbb{P}$-name for an automorphism which extends a locally topologically trivial ground model automorphism $\Phi : \mathcal{C}^{\mathcal{J}}[\vec{E}]\rightarrow \mathcal{C}^{\mathcal{J}}[\vec{E}]$ such that $int_{G}(\dot{\Phi})$ is itself locally topologically trivial, with the same local continuous maps witnessing the local triviality of $\Phi$. Then there exists $q\in \mathbb{P}$ such that
\begin{equation}
\nonumber q\Vdash\{(a,b) : \Phi(\pi_{\mathcal{J}}(a)) = \pi_{\mathcal{J}}(b)\} \text{ is } \Pi^{1}_{2}.
\end{equation}
\end{lemma}
\begin{proof}
Let $\dot{g}_{\xi}$ be the canonical $\mathbb{Q}_{\xi}$-name for the generic real added by $\mathbb{Q}_{\xi}$ and let $\dot{y}$ be a $\mathbb{P}$-name such that $\dot{\Phi}(\pi_{\mathcal{J}}(\dot{g}_{0}))=\pi_{\mathcal{J}}(\dot{y})$. Note that $\dot{g}_{0}$ is the canonical $\mathbb{Q}_{0}$-name for the random real. Since $\mathbb{P}$ has continuous reading of names, we can find a condition $p$ with countable support $S$ containing $0$, a compact set $K\subset (F^{\vec{I}})^{S}$ and a continuous function $h: K\rightarrow F^{\vec{I}}$ such that $p\Vdash h(\langle \dot{g}_{\xi}:~\xi\in S\rangle)=\dot{y}$.\\
Let $\mathcal{Z}$ be the set of all triples $(M,N,f)$ such that
\begin{enumerate}
\item $M,N\subseteq\mathbb{N}$.
\item $f : \mathcal{D}_{M}[\vec{E}]\rightarrow \mathcal{D}_{N}[\vec{E}]$ is a continuous representation of a *-homomorphism from $\mathcal{C}^{\mathcal{J}}_{M}[\vec{E}]$ into $\mathcal{C}^{\mathcal{J}}_{N}[\vec{E}]$.
\item $f(P_{M})=^{\mathcal{J}}P_{N}$.
\item $f(a)\in\mathcal{D}^{\mathcal{J}}[\vec{E}]$ if and only if $a\in \mathcal{D}^{\mathcal{J}}[\vec{E}]\cap \mathcal{D}_{M}[\vec{E}]$.
\item $p\Vdash f(\check{P}_{M}\dot{g}_{0})=^{\mathcal{J}} \check{P}_{N}\dot{y}$.
\end{enumerate}
It is not hard to see that conditions (1),(2),(3), and (4) are $\Pi_{1}^{1}$ and therefore by (co)analytic absoluteness still hold in the generic extension. Moreover by lemma \ref{222} condition (5) is $\Delta_{2}^{1}$. Therefore $\mathcal{Z}$ is $\Delta_{2}^{1}$. The set
\begin{equation}
\nonumber \Gamma = \{M~:~ (M,N, f)\in\mathcal{Z} \text{ for some $N$ and $f$}\}
\end{equation}
is an ideal on $\mathbb{N}$ and $ Triv_{\Phi}^{0}\subseteq\Gamma$. Since $\Phi$ is locally topologically trivial, $\Gamma$ is non-meager.
For any $M\in\Gamma$ let $f_{M}$ be such that $(M,N, f_{M})\in\mathcal{Z}$ for some $N\subseteq\mathbb{N}$. Let $\Phi_{*}$ be an arbitrary representation of the extension of $\Phi$ in the forcing extension.
\textbf{Claim 1:} For all $M\in \Gamma$ we have $f_{M}(a)=^{\mathcal{J}} \Phi_{*}(a)$ for every $a$ in $\mathcal{D}_{M}[\vec{E}]$.
This clearly holds for any finite $M$. Assume $M\in \Gamma$ is infinite.
By our assumption $p$ forces that
\begin{equation}
\nonumber f_{M}(P_{M}\dot{g}_{0})=^{\mathcal{J}} P_{N}\Phi_{*}(\dot{g}_{0}).
\end{equation}
Now by corollary \ref{2}, since $P_{M}\dot{g}_{0}$ is the random real with respect to $\prod_{n\in M}G_{n}$, for every $a$ in $\mathcal{D}_{M}[\vec{E}]$
\begin{equation}
\nonumber f_{M}(a)=^{\mathcal{J}}P_{N} \Phi_{*}(a).
\end{equation}
Let $d=(I - P_{N})\Phi_{*}(a)$. It is enough to show that $d\in \mathcal{D}^{\mathcal{J}}[\vec{E}]$. Let $c=\Phi^{-1}_{*}(d)$ and note that $(I-P_{M})c\in \mathcal{D}^{\mathcal{J}}[\vec{E}]$, since
\begin{equation}
\nonumber \Phi_{*}((I - P_{M})c)=^{\mathcal{J}} \Phi_{*}(I-P_{M})\Phi_{*}(a)(I-P_{N}) =^{\mathcal{J}} 0 .
\end{equation}
On the other hand we have
\begin{equation}
\nonumber f_{M}(P_{M}c)=^{\mathcal{J}}P_{N}\Phi_{*}(P_{M}c)=^{\mathcal{J}} 0.
\end{equation}
By condition (4) we have $P_{M}c\in\mathcal{D}^{\mathcal{J}}[\vec{E}]$. This implies that $c$, and hence $d$, belongs to $\mathcal{D}^{\mathcal{J}}[\vec{E}]$.
As a consequence of claim (1) if $M\in \Gamma$ then $f_{M}$ witnesses that $M\in Triv_{\Phi}^{0}$ and therefore $\Gamma= Triv_{\Phi}^{0}$.
\textbf{Claim 2: } The following holds in the generic extension:
\begin{equation}
\nonumber\{(a,b) ~:~ \Phi(\pi_{\mathcal{J}}(a))= \pi_{\mathcal{J}}(b)\} = \{(a,b)~:~ (\forall (M, N, f)\in\mathcal{Z}) ~f(P_{M} a)=^{\mathcal{J}} P_{N}b\}.
\end{equation}
Suppose $\Phi(\pi_{\mathcal{J}}(a))= \pi_{\mathcal{J}}(b)$. Again let $\Phi_{*}$ be an arbitrary representation of the extension of $\Phi$ in the forcing extension. For any $(M, N, f)\in\mathcal{Z}$ by claim (1) we have $f(P_{M}a)=^{\mathcal{J}} \Phi_{*}(P_{M}a)=^{\mathcal{J}} P_{N}b$.
To see the other direction take $(a,b)$ such that $\Phi(\pi_{\mathcal{J}}(a))\neq \pi_{\mathcal{J}}(b)$. Since $\Phi$ is an automorphism we can find a $\mathcal{D}^{\mathcal{J}}[\vec{E}]$-positive element $c$ such that $\Phi_{*}(c)=^{\mathcal{J}} \Phi_{*}(a)-b$. Since $\Gamma$ is a non-meager ideal by lemma \ref{3} we can find an infinite $M\in\Gamma$ such that $P_{M}c$ is $\mathcal{D}^{\mathcal{J}}[\vec{E}]$-positive. Now for $(M, N, f_{M})\in \mathcal{Z}$ we have
\begin{eqnarray}
\nonumber f_{M}(P_{M}a)-P_{N}b =^{\mathcal{J}} \Phi_{*}(P_{M}a)- \Phi_{*}(P_{M})b =^{\mathcal{J}}\Phi_{*}(P_{M})(\Phi_{*}(a)- b)=^{\mathcal{J}}\Phi_{*}(P_{M}c)
\end{eqnarray}
and therefore $(a,b)$ does not belong to the right-hand side of the equation.
This completes the proof, since the right-hand side of the equation is $\Pi_{2}^{1}$.
\end{proof}
\section{trivial automorphisms}
\textbf{Proof of theorem \ref{1}.}
Start with a countable model of \textbf{ZFC}+\textbf{MA} and consider the countable support iteration $\mathbb{P}=\{\mathbb{P}_{\xi},\dot{\mathbb{Q}}_{\eta}~:~\xi\leq \mathfrak{c}^{+}, \eta<\mathfrak{c}^{+}\}$ of forcings of the form $\mathbb{S}_{F^{\vec{I}}}$ and the random forcing such that
\begin{enumerate}
\item For every partition $\vec{I}$ of $\mathbb{N}$ into finite intervals the set $\{\xi : \mathbb{Q}_{\xi} \text{ is } \mathbb{S}_{F^{\vec{I}}} \text{ and } cf(\xi)>\aleph_{0} \}$ is stationary.
\item The set $\{\xi : \mathbb{Q}_{\xi}$ is the random forcing and $ cf(\xi)>\aleph_{0}\}$ is also stationary.
\end{enumerate}
Let $G$ be a generic filter on $\mathbb{P}$.
Fix a $\mathbb{P}$-name $\dot{\mathcal{J}}$ for a Borel ideal on $\mathbb{N}$ and a $\mathbb{P}$-name $\dot{\Phi}$ for an automorphism of $\mathcal{C}^{\mathcal{J}}[\vec{E}]$ in the extension. Since every partition is captured in stationarily many steps of uncountable cofinality, by lemma \ref{4} $\dot{\Phi}$ is forced to be a $\mathbb{P}$-name for a locally $\Delta_{2}^{1}$ automorphism. Each $\mathbb{P}_{\xi}$ is proper, hence no reals are added at stages of uncountable cofinality. For every $\eta$ with uncountable cofinality, $H(\aleph_{1})^{V[G\upharpoonright \eta]}$ is the direct limit of $H(\aleph_{1})^{V[G\upharpoonright \xi]}$ for $\xi<\eta$. By a basic model-theoretic fact, there is a club $C$ relative to $\{\xi<\mathfrak{c}^{+}: cf(\xi)\geq \aleph_{1}\}$ such that for every $\xi\in C$ and every $\mathbb{P}$-name $\dot{A}$ for a set of reals we have
\begin{equation}
\nonumber (H(\aleph_{1}), int_{G\upharpoonright \xi}(\dot{A}\upharpoonright\xi))^{V[G\upharpoonright\xi]}\preceq (H(\aleph_{1}), int_{G}(\dot{A}))^{V[G]}.
\end{equation}
Therefore for every $\xi\in C$, $\dot{\Phi}\upharpoonright\xi$ is a $\mathbb{P}_{\xi}$-name for a locally $\Delta_{2}^{1}$ automorphism and $cf(\xi)>\aleph_{0}$. Fix such a $\xi$ and, by (2), assume $\dot{\mathbb{Q}}_{\xi}$ is the name for the random forcing. By \textbf{MA} in the ground model and applying lemma \ref{999} locally, we can find Baire-measurable, and hence continuous, representations of $\dot{\Phi}$ in $V$. Therefore $\dot{\Phi}$ is a $\mathbb{P}_{[\xi, \mathfrak{c}^{+}]}$-name for a locally topologically trivial automorphism whose local triviality is witnessed by ground model continuous maps.
Therefore lemma \ref{6} implies that $int_{G}(\dot{\Phi})$ is forced to be $\Pi_{2}^{1}$ in $V[G]$.
Our assumption that there is a measurable cardinal implies that $\Pi_{2}^{1}$ sets have $\Pi_{2}^{1}$-uniformizations and that all $\Pi_{2}^{1}$ sets have the property of Baire; hence the automorphism $int_{G}(\dot{\Phi})$ has a Baire-measurable, and therefore a continuous, representation. If $\mathcal{J}$ is a Borel P-ideal, then by lemma \ref{5} we can get a representation of $int_{G}(\dot{\Phi})$ which is a *-homomorphism.
\begin{flushright}
$\square$
\end{flushright}
The following corollary is essentially proved in \cite{FaSh} where the authors show the consistency of having all automorphisms of $P(\mathbb{N})/\mathcal{I}$ trivial for a Borel ideal $\mathcal{I}$ while the Calkin algebra has an outer automorphism.
\begin{corollary}\label{654}
It is relatively consistent with \textbf{ZFC} that, for every Borel ideal (respectively, Borel P-ideal) $\mathcal{J}$ and every partition $\vec{E}$ of the natural numbers into finite intervals, all automorphisms of $\mathcal{C}^{\mathcal{J}}[\vec{E}]$ are topologically trivial (respectively, trivial), while the Calkin algebra has an outer automorphism.
\end{corollary}
\begin{proof}
Since $\mathbb{P}$ is a countable support iteration of proper $\omega^{\omega}$-bounding
forcings, it is proper and $\omega^{\omega}$-bounding \cite[$\S$ xVI.2.8(D)]{ShProper}. Hence the dominating number $\mathfrak{d}=\aleph_{1}$. This and the weak continuum hypothesis $2^{\aleph_{0}}<2^{\aleph_{1}}$ imply that the Calkin algebra has an outer automorphism (see \cite{FaCalkin}, the paragraph after the proof of Theorem 1.1). In order to get $2^{\aleph_{0}}<2^{\aleph_{1}}$, start with a model of \textbf{CH} and force with the poset consisting of all countable partial functions $f:\aleph_{3}\times \aleph_{1}\rightarrow \{0,1\}$, ordered by reverse inclusion, to add $\aleph_{3}$ so-called Cohen subsets of $\aleph_{1}$. This increases $2^{\aleph_{1}}$ to $\aleph_{3}$ while preserving \textbf{CH}. Now force with $\mathbb{P}$, the iteration of length $\aleph_{2}$ as above, to make all automorphisms of $\mathcal{C}^{\mathcal{J}}[\vec{E}]$ trivial. A simple $\Delta$-system argument shows that $\mathbb{P}$ is $\aleph_{2}$-cc and hence preserves $2^{\aleph_{1}}$.
\end{proof}
\section{concluding remarks}
We do not know whether the large cardinal assumption in theorem \ref{1} is necessary. We have partially removed the need for this assumption in our proof, but, as pointed out in \cite{FaSh}, it is likely that one can remove it completely.
The forcing $\mathbb{P}$ used in this article can in fact be written as a countable support iteration of the random forcing and a single groupwise Silver forcing, in the way described in the proof of theorem \ref{1}. To see this, notice that if two partitions $\vec{I}$ and $\vec{J}$ of the natural numbers are such that $\vec{J}$ is coarser than $\vec{I}$, then $\mathbb{S}_{F^{\vec{J}}}$ captures $F^{\vec{I}}$. Let $\vec{J}=(J_{n})$ be such that $|J_{n}|=n$. It is enough to show that for every $\vec{I}$ there exists a condition $p$ in $\mathbb{S}_{F^{\vec{J}}}$ such that the partial order $\{q\in\mathbb{S}_{F^{\vec{J}}}: q\leq p\}$ is forcing equivalent to $\mathbb{S}_{F^{\vec{I}}}$. By the remark above we can assume $|I_{n}|=k_{n}$ is increasing. Let $p$ be such that $dom(p)=\mathbb{N}\setminus \{k_{1}, k_{2}, \dots\}$. Clearly any such $p$ is a condition in $\mathbb{S}_{F^{\vec{J}}}$ since $|J_{n}|=n$. Now it is not hard to check that $\{q\in\mathbb{S}_{F^{\vec{J}}}: q\leq p\}$ and $\mathbb{S}_{F^{\vec{I}}}$ are forcing equivalent.
Note that the results of this paper and \cite{FaSh} cannot be immediately modified to work for the category of compact metric groups; for example, in \textbf{ZFC} the quotient group $\prod_{n} (\mathbb{Z}/2\mathbb{Z})/\bigoplus_{n} (\mathbb{Z}/2\mathbb{Z})$ has $2^{\mathfrak{c}}$ automorphisms and therefore has nontrivial automorphisms; see \cite[Proposition~9]{FaLift}.
We end with the following question.\\ \\
\textbf{Question 7.1.}
Are there Borel ideals $\mathcal{I}$ and $\mathcal{J}$ such that the assertion that '$\prod_{n} \mathbb{M}_{n}(\mathbb{C})/\bigoplus_{\mathcal{I}} \mathbb{M}_{n}(\mathbb{C})$ is isomorphic to $\prod_{n} \mathbb{M}_{n}(\mathbb{C})/\bigoplus_{\mathcal{J}}\mathbb{M}_{n}(\mathbb{C})$' is independent from \textbf{ZFC}?
\section{Introduction} \label{intro}
The recent advances in deep learning and neural networks have provided a paradigm shift in various scientific fields. In particular, numerous deep neural networks (DNNs) such as GoogLeNet~\cite{GooLeNet}, AlexNet~\cite{AlexNet}, Residual Network~\cite{ResNet}, and Neural
Architecture Search networks~\cite{zoph2017learning} have become prevalent standards for applications including autonomous transportation, automated manufacturing, natural language processing, intelligent warfare, and smart health~\cite{lecun2015deep, schmidhuber2015deep, collobert2008unified}. Meanwhile, open-source deep learning frameworks have enabled users to develop customized machine learning systems based on existing models. PyTorch~\cite{PyTorch}, TensorFlow~\cite{abadi2016tensorflow}, Keras~\cite{chollet2015keras}, MXNet~\cite{chen2015mxnet}, and Caffe~\cite{jia2014caffe} are examples of such tools.
The distribution of pre-trained neural networks is a growing trend that makes the utilization of DNNs easier. For instance, Caffe provides the Model Zoo, which includes built neural networks and pre-trained weights for various applications~\cite{caffe_modelZoo}. As the accessibility of models increases, a practical concern is the IP protection and Digital Rights Management (DRM) of the distributed models. On the one hand, DL models are usually trained by allocating significant computational resources to process massive amounts of training data. The built models are therefore considered the owner's IP and need to be protected to preserve the owner's competitive advantage. On the other hand, malicious attackers may take advantage of the models for illegal usage. These potential problems need to be taken into account during the design and training of DL models before the owners make them publicly available.
Previous works have identified the importance of IP protection in the DL domain and proposed watermarking methodologies for DNNs. The authors of~\cite{uchida2017embedding, nagai2018digital} present an approach for watermarking DNNs by embedding the IP information in the weights. The embedded watermark can be extracted by the owner assuming the details of the model are available to the owner (the `white-box' setting). To provide IP protection for a remote neural network where the model is exposed as a service (the `black-box' setting), the paper~\cite{merrer2017adversarial} proposes a zero-bit watermarking methodology that tweaks the decision boundary. The paper~\cite{DeepSigns} presents a generic watermarking framework for IP protection in both white-box and black-box scenarios by embedding the watermarks in the pdf of the activation sets of the target layers. To the best of our knowledge, no prior work has targeted fingerprinting for DNNs.
This paper proposes \sys{}, a novel end-to-end framework that enables coherent integration of robust digital fingerprinting in contemporary deep learning models. \sys{}, for the first time, introduces a \textit{generic} functional fingerprinting methodology for DNNs. The proposed methodology is simultaneously \textit{user and model dependent}. \sys{} works by assigning a unique binary code-vector (a.k.a., \textit{fingerprint}) to each user and embedding the fingerprint information in the probabilistic distribution of the weights while preserving accuracy. We demonstrate the robustness of our proposed framework against collusion and transformation attacks, including model compression/pruning and model fine-tuning. The explicit technical contributions of this paper are as follows:
\begin{itemize}
\item Proposing \sys{}, the first end-to-end framework for systematic deep learning IP protection and digital rights management. A novel fingerprinting methodology is introduced to encode the pdf of the DL models and effectively trace the IP ownership as well as the usage of each distributed model.
\item Introducing a comprehensive set of qualitative and quantitative metrics to assess the performance of a fingerprinting methodology for (deep) neural networks. Such metrics provide new perspectives for model designers and enable coherent comparison of current and future DL IP protection techniques.
\item Performing extensive proof-of-concept evaluations on various benchmarks, including the commonly used MNIST and CIFAR10 datasets. Our evaluations corroborate the effectiveness of \sys{} in detecting IP ownership and tracking the individual culprits/colluders who use the model for unintended purposes.
\end{itemize}
\begin{table*}[ht!]
\centering
\caption{Requirements for an effective fingerprinting methodology of deep neural networks.}
\label{tab:required}
\scalebox{0.98}{
\begin{tabular}{|l||p{14cm}|}
\hline
\multicolumn{1}{|c||}{\textbf{Requirements}} & \multicolumn{1}{|c|}{\textbf{Description}} \\ \hline \hline
Fidelity & The functionality (e.g., accuracy) of the host neural network shall not be degraded as a result of fingerprint embedding.
\\ \hline
Uniqueness & The fingerprint needs to be unique for each user, which enables the owner to trace any unintended usage of the distributed model by a specific user.
\\ \hline
Capacity & The fingerprinting methodology shall be capable of embedding a large amount of information in the host neural network. \\ \hline
Efficiency & The overhead of fingerprint embedding and extraction shall be negligible. \\ \hline
Security & The fingerprint shall leave no tangible footprint in the host neural network; thus, an unauthorized individual cannot detect the presence of a fingerprint in the model.
\\ \hline
Robustness & The fingerprinting methodology shall be resilient against model modifications such as compression/pruning, fine-tuning. Furthermore, the fingerprints shall be resistant to collusion attacks where the adversaries try to produce an unmarked neural network using multiple marked models.
\\ \hline
Reliability & The fingerprinting methodology should yield minimal false negatives, suggesting that the embedded fingerprint should be detected with high probability.
\\ \hline
Integrity & The fingerprinting methodology should yield minimal false alarm (a.k.a., false positive). This means that the probability of an innocent user being accused as a colluder should be very low.
\\ \hline
Scalability & The fingerprinting methodology should be able to support a large number of users because of the nature of the model distribution and sharing.\\ \hline
Generality& The fingerprinting methodology should be applicable to various neural network architectures and datasets. \\ \hline
\end{tabular}}
\end{table*}
\section{Problem Formulation} \label{prob_form}
Fingerprinting is defined as the task of embedding a $v$-bit binary code-vector $\mathbf{c}_{j} \in \{0,1\}^{v}$ in the weights of a host neural network. Here, $j=1,...,n$ denotes the index of each distributed user, where $n$ is the total number of users. The fingerprint information can be embedded in one or multiple layers of the DNN model. The objective of fingerprinting is two-fold: (i) claiming the ownership of a specific neural network, and (ii) tracing any unintended usage of the model by the distributed users. In the following sections, we formulate the requirements for digital fingerprinting in the context of DL and discuss possible attacks that might render the embedded fingerprints ineffective.
\subsection{Requirements} \label{reqiurements}
Table~\ref{tab:required} summarizes the requirements for effective fingerprinting in the deep neural network domain. In addition to the fidelity, efficiency, security, capacity, reliability, integrity, and robustness requirements that are shared between fingerprinting and watermarking, a successful fingerprinting methodology should also satisfy the uniqueness, scalability, and collusion-resilience criteria.
On the one hand, \textbf{uniqueness} is an intrinsic property of fingerprints. Since the model owner aims to track the usage of the model distributed to each specific user, the uniqueness of fingerprints is essential to ensure correct identification of the target user. On the other hand, as the number of participants involved in the distribution of neural networks increases, \textbf{scalability} is another key factor for performing IP protection and digital rights management in large-scale settings. In particular, the fingerprinting methodology should be able to accommodate a large number of distributed users.
Collusion attacks can result in the attenuation of the fingerprint of each colluder and have been identified as cost-effective attacks in the multi-media domain. In a traditional collusion attack, multiple users work together to produce an unmarked content using differently marked versions of the same content~\cite{wu2004collusion}. In the domain of DL, a group of users who have the same host neural network but different fingerprints may work collaboratively to construct a model in which no fingerprints can be detected by the owner. Considering the practicality of such attacks, we include \textbf{collusion resilience} in the robustness requirement for DNN fingerprinting.
\subsection{Attack Models} \label{attacks}
Corresponding to the robustness requirements listed in Table~\ref{tab:required}, we discuss three types of DL domain-specific attacks that the fingerprinting methodology should be resistant to: model fine-tuning, model compression, and collusion attacks.
\vspace{0.3em}
\noindent \textbf{Model Fine-tuning.} Fine-tuning pre-trained neural networks for transfer learning is a common practice, since training a DL model from scratch is computationally expensive~\cite{shin2016deep}. For this reason, model fine-tuning can be an unintentional model transformation conducted by honest users or an intentional attack performed by malicious users. The parameters of the model are changed during fine-tuning; therefore, the embedded fingerprints should be robust against this modification.
\vspace{0.3em}
\noindent \textbf{Model Compression.} Compressing DNN models by parameter pruning is a typical technique to reduce the computational overhead of executing a neural network~\cite{han2015learning}. Genuine users may leverage parameter pruning to compress their models, while adversaries may apply pruning to remove the fingerprints embedded by the owner. Since pruning alters the model parameters that carry the fingerprint information, an effective fingerprinting methodology shall be resistant to parameter pruning.
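To make the pruning transformation concrete, a minimal magnitude-based pruning routine is sketched below (a generic Python illustration, not \sys{}'s implementation; the name \texttt{magnitude\_prune} is ours). It zeroes out the smallest-magnitude fraction of the weights, which is exactly the kind of perturbation an embedded fingerprint must survive:

```python
import numpy as np

def magnitude_prune(w, sparsity):
    """Zero out the smallest-magnitude `sparsity` fraction of the weights."""
    k = int(sparsity * w.size)
    if k == 0:
        return w.copy()
    # threshold = k-th smallest absolute value
    thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    pruned = w.copy()
    pruned[np.abs(pruned) <= thresh] = 0.0
    return pruned

rng = np.random.default_rng(2)
w = rng.standard_normal(100)
pw = magnitude_prune(w, 0.6)
assert np.sum(pw == 0) >= 60                      # >= 60% of weights removed
assert np.max(np.abs(pw)) == np.max(np.abs(w))    # largest weights survive
```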
\vspace{0.3em}
\noindent \textbf{Collusion Attack.} Multiple attackers who have the same host neural network with different embedded fingerprints may perform collusion attacks to produce an unmarked model. We consider the fingerprints averaging attack, a common collusion attack, and demonstrate how \sys{} is robust against it.
\begin{figure*}[ht!]
\centering
\includegraphics[width=0.9\textwidth]{FP_globalFlow.png}
\caption{\label{fig:global} \sys{} Global Flow: \sys{} performs fingerprinting on DL models by embedding the designated fingerprinting information in the distribution of weights for selected layers. To enable IP protection and digital right management, \sys{} allows the model owner to extract the embedded fingerprints for user identification as well as colluder detection after distributing the models.}
\end{figure*}
\section{Fingerprint Embedding} \label{fp_embed}
The global flow of \sys{} is illustrated in Figure~\ref{fig:global}. In order to trace the models that are distributed to individual users, the owner first assigns a specific code-vector to each user. Given the code-vector and an orthonormal basis matrix, a unique fingerprint is constructed to identify each user. The designed fingerprint is then embedded in the weights distribution for each user by fine-tuning the model with an additive \textit{embedding loss}. To identify a specific user, the owner assesses the weights of the marked layers in her model and extracts the corresponding code-vector. The decoded code-vector thus uniquely identifies the inquired user. In addition, \sys{} enables the owner to detect colluders who work collaboratively and try to generate a model where no fingerprints can be detected by the owner.
There are two types of fingerprint modulation mechanisms in the multi-media domain: (i) \textbf{orthogonal modulation}, and (ii) \textbf{coded modulation}~\cite{wu2004collusion}. In the rest of this section, we discuss how \sys{} framework adopts these two fingerprinting methods to provide a generic solution for DNNs.
\subsection{Orthogonal Fingerprinting} \label{orthog_embed}
As discussed in Section~\ref{reqiurements}, uniqueness is an essential requirement for fingerprinting to track individual users. \textit{Orthogonal modulation} is a technique that uses orthogonal signals to represent different information~\cite{wu2003data}. By using mutually orthogonal watermarks as fingerprints, the separability between users can be maximized. Given an orthogonal matrix $\mathbf{U}_{v \times v} = [\mathbf{u_1}, ..., \mathbf{ u_v}]$, the unique fingerprint for user $j$ can be constructed by assigning each column to a user:
\begin{equation} \label{eq:orthog_fp}
\mathbf{ f_j}= \mathbf{u_j},
\end{equation}
where $\mathbf{u_j}$ is the $j^{th}$ column of the matrix $\mathbf{U}$, $j=1,\dots,v$. Here, $v$ orthogonal signals deliver $B=\log_2 v$ bits of information and can be recovered from $v$ correlators. The orthogonal matrix can be generated from an element-wise Gaussian distribution~\cite{wu2004collusion}.
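As a minimal sketch of this construction (NumPy; the dimension $v=8$ and the random seed are illustrative assumptions), an orthonormal basis can be obtained by QR-decomposing an element-wise Gaussian matrix, with each user assigned one column as her fingerprint:

```python
import numpy as np

rng = np.random.default_rng(0)
v = 8  # fingerprint dimension = maximum number of users (hypothetical value)

# Orthonormal basis: QR-decompose a matrix with i.i.d. Gaussian entries.
G = rng.standard_normal((v, v))
U, _ = np.linalg.qr(G)  # columns of U are mutually orthonormal

# Orthogonal fingerprinting: user j is assigned column j as her fingerprint.
fingerprints = [U[:, j] for j in range(v)]

# Sanity check: distinct fingerprints are (numerically) orthogonal.
assert np.allclose(U.T @ U, np.eye(v), atol=1e-10)
```

Because the columns are orthonormal, correlating an extracted fingerprint against every basis vector later isolates exactly one user index.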
Regularizing neural networks for security purposes has been presented in previous works~\cite{CuRTAIL, DeepSigns}. However, none of these works focuses on fingerprinting of DL models. \sys{} embeds the constructed fingerprint in the target layers of the host model by adding the following term to the loss function conventionally used for training/fine-tuning deep neural networks:
\begin{equation} \label{eq:embed_loss}
\mathcal{L} = \mathcal{L}_0 \; + \gamma~MSE(\mathbf{f_j} - \mathbf{X}\mathbf{w}).
\end{equation}
Here, $\mathcal{L}_0$ is the conventional loss function (e.g. cross-entropy loss), $MSE$ is the mean square error function, $\gamma$ is the embedding strength that controls the trade-off between the two loss terms, $\mathbf{X}$ is the secret random projection matrix generated by the owner, and $\mathbf{w}$ is the flattened, channel-averaged weight vector of the target layers for embedding the pertinent fingerprint.
As a proof-of-concept analysis, we embed the fingerprint $\mathbf{f_j}$ in a convolutional layer of the host neural network; the weight $\mathbf{W}$ is thus a 4D tensor $\mathbf{W} \in \mathbb{R}^{D \times D \times F \times H}$ where $D$ is the kernel size, $F$ is the input depth, and $H$ is the number of channels in the convolutional layer. The ordering of filter channels does not change the output of the neural network if the parameters in the consecutive layers are rearranged correspondingly~\cite{uchida2017embedding}. As such, we take the average of $\mathbf{W}$ over all channels and stretch the resulting tensor into a vector $\mathbf{w} \in \mathbb{R}^N$, where $N = D \times D \times F$. The rearranged weight vector $\mathbf{w}$ is then multiplied with a secret random matrix $\mathbf{X} \in \mathbb{R}^{v \times N}$ and compared with the fingerprint $\mathbf{f_j}$. The additional \textit{embedding loss} term $MSE(\mathbf{f_j} - \mathbf{Xw})$ inserts the fingerprint $\mathbf{f_j}$ in the distribution of the target layer weights by enforcing the model to minimize the embedding loss together with the conventional loss during the training/fine-tuning of the DNN model.
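The channel averaging, flattening, and embedding-loss computation described above can be sketched as follows (NumPy; all dimensions and the random data are illustrative assumptions, and in practice this term would be added to the training loss of the DL framework in use):

```python
import numpy as np

rng = np.random.default_rng(1)
# Kernel size, input depth, channels, fingerprint dimension (hypothetical values).
D, F, H, v = 3, 16, 32, 8

W = rng.standard_normal((D, D, F, H))  # 4D convolutional weight tensor
w = W.mean(axis=3).flatten()           # average over channels, then flatten: shape (D*D*F,)

X = rng.standard_normal((v, w.size))   # owner's secret random projection matrix
f_j = rng.standard_normal(v)           # fingerprint for user j (placeholder)

# Embedding loss term: mean square error of the residual f_j - X w.
embed_loss = np.mean((f_j - X @ w) ** 2)
```

During fine-tuning, minimizing `embed_loss` alongside the task loss drives $\mathbf{Xw}$ toward $\mathbf{f_j}$, which is what makes the fingerprint recoverable later.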
Since each user corresponds to a column vector in the orthogonal basis matrix, the maximum number of users is equal to the dimension of the fingerprint (which is also the number of orthogonal bases): $n=v$. Thus, the number of customers that the same neural network can be distributed to is limited by the fingerprint dimension. Orthogonal fingerprints are developed based on spread spectrum watermarking~\cite{cox1997secure}. The straightforward concept and simplicity of implementation make orthogonal fingerprinting attractive for identification applications where only a small group of users is involved. Although orthogonality helps to distinguish individual users, the independent nature of orthogonal fingerprints makes them vulnerable to collusion attacks~\cite{wu2004collusion}.
\subsection{Coded Fingerprinting} \label{code_embed}
To support a large group of users and improve the collusion resilience of the fingerprints, coded modulation is leveraged to introduce correlation between fingerprints~\cite{trappe2003anti,trappe2002collusion}. Similar ideas have been discussed in antipodal CDMA-type watermarking, where the correlation contributions only decrease at the locations where the watermark code-bits differ~\cite{liu2005multimedia}. Correlation not only allows the system to support a larger number of fingerprints than the dimensionality of the orthogonal basis vectors, but also alleviates the attenuation of fingerprints under collusion attacks. The challenge for coded fingerprinting is to design code-vectors such that (i) the correlations are introduced in a strategic way, and (ii) the correct identification of the users involved in a collusion attack is facilitated.
Anti-collusion codes (ACCs) were proposed in~\cite{wu2004collusion} for coded fingerprinting and have the property that the composition of any subset of $K$ or fewer code-vectors is unique. This property allows the owner to accurately identify a group of $K$ or fewer colluders. A $K$-resilient \textit{AND-ACC} is a codebook where the element-wise composition is logic-AND, allowing for the accurate identification of up to $K$ unique colluders from their composition. Previous works in the multi-media domain have shown that \textit{Balanced Incomplete Block Design} (BIBD) can be used to generate ACCs of binary values~\cite{yu2010group}. A $(v,k,\lambda)$-BIBD is a pair $(\mathcal{X}, \mathcal{A})$ where $\mathcal{A}$ is a collection of $k$-element subsets (blocks) of a $v$-dimension set $\mathcal{X}$ such that each pair of elements of $\mathcal{X}$ appears together in exactly $\lambda$ blocks~\cite{trappe2003anti, dinitz1992contemporary}. The $(v,k,\lambda)$-BIBD has $b=\sfrac{\lambda(v^2-v)}{(k^2-k)}$ blocks ($k$ is the block size) and can be represented by its corresponding incidence matrix $\mathbf{C}_{v \times b}$. The elements of the incidence matrix have binary values where:
\begin{equation*}
c_{ij} =
\begin{cases}
1, \; \text{if $i^{th}$ value occurs in $j^{th}$ block} \\
0, \; \text{otherwise}.
\end{cases}
\end{equation*}
By setting the number of concurrent occurrences to one ($\lambda=1$) and assigning the bit complements of the columns of the incidence matrix $\mathbf{C}_{v \times b}$ as the code-vectors, the resulting $(v,k,1)$-BIBD code is $(k-1)$-resilient and supports up to $n=b$ users~\cite{trappe2003anti}. The theory of BIBDs (Fisher's inequality) shows that the parameters satisfy $b \geq v$~\cite{dinitz1992contemporary}, which means the number of users (or fingerprints) is at least as large as the dimension of the orthogonal basis vectors. More specifically, the BIBD-ACC construction only requires $\mathcal{O}(\sqrt{n})$ basis vectors to accommodate $n$ users, instead of $\mathcal{O}(n)$ in the orthogonal fingerprinting scheme. Systematic approaches for constructing infinite families of BIBDs have been developed~\cite{colbourn2006handbook}, which provides a vast supply of ACCs.
Given the designed incidence matrix $\mathbf{C}_{v \times b}$, the coefficient matrix $\mathbf{B}_{v \times b}$ for fingerprints embedding can be computed from the linear mapping $b_{ij} = 2 c_{ij} - 1 $, thus $b_{ij} \in \left\{\pm 1\right \}$ corresponds to the antipodal form~\cite{proakis1994communication}. The fingerprint for the $j^{th}$ user is then constructed from the orthogonal matrix $\mathbf{U}_{v \times v}$ and the coefficient matrix $\mathbf{B}_{v \times b}$ as follows:
\begin{equation} \label{eq:coded_fp}
\mathbf{ f_j}= \sum_{i=1}^v b_{ij} \mathbf{u_i},
\end{equation}
where $\mathbf{b_j} \in \left\{ \pm 1 \right \}^v$ is the coefficient vector associated with user $j$. Finally, the designed fingerprint $\mathbf{f_j}$ is embedded in the weights of the target model by adding the embedding loss to the conventional loss as shown in Equation~\ref{eq:embed_loss}.
Comparing the orthogonal fingerprinting in Equation~\ref{eq:orthog_fp} with the coded fingerprinting in Equation~\ref{eq:coded_fp}, one can see that orthogonal fingerprinting can be implemented by coded fingerprinting if an identity matrix is used as the ACC codebook $\mathbf{C}= \mathbf{I}$. This, in turn, means that the code-vector assigned to each user has only one element equal to $1$ while all the others are zero. Therefore, orthogonal fingerprinting can be considered as a special case of coded fingerprinting.
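A numerical check of the coded construction (NumPy; the codebook size and random coefficients are illustrative assumptions): with orthonormal basis vectors, correlating each fingerprint against $\mathbf{U}$ returns the coefficient vector exactly, which is the property the extraction stage relies on.

```python
import numpy as np

rng = np.random.default_rng(2)
v, n = 7, 10  # basis dimension and number of users (hypothetical)

U, _ = np.linalg.qr(rng.standard_normal((v, v)))  # orthonormal basis
B = rng.choice([-1, 1], size=(v, n))              # antipodal coefficient matrix

# Coded fingerprints: f_j = sum_i b_ij u_i, i.e. column j of F = U B.
F = U @ B

# Correlation recovers the coefficients exactly: f_j^T u_i = b_ij,
# so row j of F^T U is the coefficient vector of user j.
recovered = F.T @ U
```

Note that here $n > v$: the coefficient matrix lets more users share the same $v$-dimensional basis, which is exactly what the BIBD construction provides in a collusion-resilient way.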
\section{Fingerprint Extraction} \label{fp_extraction}
For the purpose of fingerprints inquiry and colluder detection, the model owner assesses the weights of the marked layers, recovers the code-vector assigned to the user, and uses correlation statistics (orthogonal fingerprinting method) or the BIBD ACC codebook (coded fingerprinting method) to identify colluders. Note that in the multi-media domain, there are two types of detection schemes for spread spectrum fingerprinting: blind and non-blind detection, depending on whether the original host signal is available in the detection stage. Non-blind detection has higher detection confidence while blind detection is applicable in distributed detection settings~\cite{wu2004collusion, zhao2006fingerprint}. \sys{} leverages the blind detection scheme and does not require knowledge of the original content; thus content registration and storage resources are not needed. We discuss the workflow of extracting the code-vector from the marked weights and detecting participants in fingerprints collusion attacks for both fingerprinting methods in the following sections.
\subsection{Orthogonal Fingerprinting}\label{orthog_detect}
\vspace{0.3em}
\noindent \textbf{Code-vector extraction.} As discussed in Section~\ref{prob_form}, one objective of embedding fingerprints in the DNNs is to uniquely identify individual users. Since the fingerprint is determined by the corresponding code-vector, we formulate the problem of user identification as code-vector extraction from the marked weights in each distributed model.
The embedding methodology of orthogonal fingerprinting is described in Section~\ref{orthog_embed}. In the inquiry stage, \sys{} first acquires the weights tensor $\mathbf{\widetilde{W_j}}$ of the pertinent marked layers for the target user $j$ and computes the flattened averaged version $\mathbf{\widetilde{w_j}}$. The fingerprint is recovered from the multiplication $\mathbf{\widetilde{f_j}} = \mathbf{X \widetilde{w_j}}$ where $\mathbf{X}$ is the random projection matrix specified by the owner. For simplicity, we use orthonormal columns to construct the basis matrix $\mathbf{U}$, thus the correlation score vector (which is also the coefficient vector) can be computed as follows:
\begin{equation} \label{eq:orthog_decode}
\mathbf{\widetilde{b_j}} = \mathbf{\widetilde{f_j}}^T \mathbf{U} = [\mathbf{\widetilde{f_j}}^T \mathbf{u_1}, ..., \mathbf{\widetilde{f_j}}^T \mathbf{u_v}].
\end{equation}
Since the fingerprints are orthogonal, only the $j^{th}$ component of the correlation scores $\mathbf{\widetilde{b_j}}$ will have a large magnitude while all the other elements will be nearly zero. Finally, the code-vector $\mathbf{\widetilde{c_j}} \in \left\{0, 1\right \}^v$ assigned to the $j^{th}$ user is extracted by element-wise hard-thresholding of the correlation vector $\mathbf{\widetilde{b_j}}$.
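A sketch of this extraction step (NumPy; the small Gaussian perturbation models a slightly modified model and is an illustrative assumption, and $\tau=0.85$ follows the threshold used in the experiments):

```python
import numpy as np

rng = np.random.default_rng(3)
v, j, tau = 8, 3, 0.85  # fingerprint dimension, inquired user, hard threshold

U, _ = np.linalg.qr(rng.standard_normal((v, v)))   # orthonormal basis
f_tilde = U[:, j] + 0.01 * rng.standard_normal(v)  # recovered fingerprint, slightly noisy

b_tilde = f_tilde @ U                  # correlation against every basis vector
c_tilde = (b_tilde > tau).astype(int)  # element-wise hard-thresholding

# Only position j survives the threshold, identifying user j.
```

Because the off-diagonal correlations are on the order of the perturbation (here $\sim 0.01$) while the $j^{th}$ correlation stays near $1$, the threshold cleanly separates the inquired user from all others.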
\vspace{0.5em}
\noindent \textbf{Colluder detection.}
Recall that the second objective of the owner for leveraging fingerprinting is to trace illegal redistribution or unintended usage of the models. Here we consider a typical linear collusion attack where $K$ colluders average their fingerprints and collaboratively generate a new model in which the fingerprint is not detectable. To detect participants in the collusion attack, the owner first computes the correlation scores between the colluded fingerprint and each basis vector as shown in Equation~\ref{eq:orthog_decode}. Element-wise hard-thresholding is then performed on the correlation vector, where the positions of ``1"s correspond to the indices of the colluders. According to Equation~\ref{eq:orthog_decode}, the magnitude of each colluder's fingerprint is attenuated to $\frac{1}{K}$ of its original value when $K$ colluders participate in the attack. As shown in~\cite{wu2004collusion}, $\mathcal{O}(\sqrt{\sfrac{v}{\log v}})$ colluders are sufficient to defeat the fingerprinting system, where $v$ is the dimension of the fingerprint.
\subsection{Coded Fingerprinting} \label{coded_detect}
\vspace{0.5em}
\noindent \textbf{Code-vector extraction.} Similar to the extraction of orthogonal fingerprints, the owner acquires the weights in the marked layers $\mathbf{ \widetilde{W_j} }$, computes their averaged flattened version $\mathbf{ \widetilde{w_j}}$, and extracts the user's fingerprint $\mathbf{ \widetilde{f_j} = X \widetilde{w_j}}$. The extracted fingerprint is then multiplied with the basis matrix to compute the correlation score vector $\mathbf{\widetilde{b_j}} = \mathbf{\widetilde{f_j}}^T \mathbf{U}$. Finally, the ACC code-vector $\mathbf{ \widetilde{c_{j}} }$ assigned to the $j^{th}$ user is decoded from $\mathbf{ \widetilde{b_{j}}}$ by hard-thresholding.
To illustrate the workflow of code-vector extraction for coded fingerprinting, let us consider a $(7,3,1)$-BIBD codebook given in Equation~\ref{eq:codebook_eg}. The coefficient vector of each fingerprint is constructed by mapping each column of the codebook $\mathbf{C}$ to the antipodal form $\left\{ \pm1 \right \}$. The fingerprints for all users are shown in Equation~\ref{eq:code_fp_eg}:
\begin{equation} \label{eq:codebook_eg}
\mathbf{C} =
\begin{pmatrix}
0 &0 &0 &1 &1 &1 &1 \\
0 &1 &1 &1 &0 &1 &1 \\
1 &0 &1 &0 &1 &0 &1 \\
0 &1 &1 &1 &1 &0 &0 \\
1 &1 &0 &0 &1 &1 &0 \\
1 &0 &1 &1 &0 &1 &0 \\
1 &1 &0 &1 &0 &0 &1
\end{pmatrix},
\end{equation}
\begin{equation} \label{eq:code_fp_eg}
\begin{cases}
\mathbf{f_1} = -\mathbf{u_1} -\mathbf{u_2} + \mathbf{u_3} -\mathbf{u_4} + \mathbf{u_5} + \mathbf{u_6} + \mathbf{u_7}, \\
\cdots \\
\mathbf{f_6} = +\mathbf{u_1} +\mathbf{u_2} - \mathbf{u_3}
-\mathbf{u_4} + \mathbf{u_5} +\mathbf{u_6} -\mathbf{u_7}, \\
\mathbf{f_7} = +\mathbf{u_1} +\mathbf{u_2} +\mathbf{u_3}
-\mathbf{u_4} -\mathbf{u_5}
-\mathbf{u_6} +\mathbf{u_7}, \\
\end{cases}
\end{equation}
where $\mathbf{u_i}~(i=1,...,7)$ are orthogonal columns of the matrix $\mathbf{U}$. For user $1$, her coefficient vector can be recovered by computing the correlation scores:
\begin{equation*}
\mathbf{\widetilde{b_1}} = \mathbf{f_1}^T[\mathbf{u_1}, ..., \mathbf{u_7}] = [-1, -1, +1, -1, +1, +1, +1].
\end{equation*}
The corresponding code-vector is then extracted by the inverse linear mapping $c_{ij} = \frac{1}{2}(b_{ij}+1)$. The resulting code-vector is $\mathbf{\widetilde{c_1}} = [0, 0, 1, 0, 1, 1, 1]$, which is exactly the same as the first column of $\mathbf{C}$. The consistency shows that BIBD AND-ACC codebooks can be leveraged to identify individual users.
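The worked example above can be reproduced numerically (NumPy; the random orthonormal basis stands in for the owner's basis matrix and is an illustrative assumption):

```python
import numpy as np

# (7,3,1)-BIBD codebook from the running example.
C = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 1, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 1, 1, 0, 0],
              [1, 1, 0, 0, 1, 1, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [1, 1, 0, 1, 0, 0, 1]])
B = 2 * C - 1  # antipodal coefficients b_ij = 2 c_ij - 1

U, _ = np.linalg.qr(np.random.default_rng(4).standard_normal((7, 7)))
F = U @ B  # column j is the fingerprint of user j+1

b1 = F[:, 0] @ U                         # correlation scores for user 1
c1 = ((b1 + 1) / 2).round().astype(int)  # inverse mapping c = (b + 1) / 2
# c1 equals the first column of C: [0, 0, 1, 0, 1, 1, 1]
```

The recovered `c1` matches the first column of $\mathbf{C}$ exactly, confirming the decoding chain described above.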
\vspace{0.5em}
\noindent \textbf{Colluder detection.} Recall that in Section~\ref{code_embed}, we discussed the properties of BIBDs and their application to constructing anti-collusion codes. Here, we describe how to use the intrinsic properties of AND-ACCs for colluder detection under the fingerprints averaging attack. Assuming the positions of the marked layers are known to the colluders, they can perform an element-wise average of their weight tensors in the pertinent layers and return $\mathbf{W^{avg}}$ as the response to the owner's inquiry. The owner then computes the correlation vector $\mathbf{b^{avg}}$ as follows:
\begin{align} \label{eq:coded_avg}
\mathbf{f^{avg}} &= \mathbf{X w^{avg}}, \\
\mathbf{ b^{avg} } &= \mathbf{ (f^{avg})^T }\mathbf{U}.
\end{align}
The problem of identifying colluders based on the correlation statistics has been well addressed in conventional fingerprinting based on spread spectrum watermarking~\cite{cox1997secure, jain2000digital}. There are three main schemes: the hard-thresholding detector, the adaptive sorting detector, and the sequential detector~\cite{wu2004collusion}. The hard-thresholding detector compares each element in the correlation score vector $\mathbf{b}$ with a threshold $\tau$ to decide the corresponding bit (``0" or ``1") in the ACC code-vector. The adaptive sorting detector sorts the correlation scores in descending order and iteratively narrows down the set of suspected users until the corresponding likelihood estimate of the colluder set stops increasing. The sequential detector directly estimates the colluder set from the probability density function (pdf) of the correlation statistics without decoding the ACC code-vector. For details about each detection method, we refer readers to~\cite{wu2004collusion}.
\sys{} deploys hard-thresholding detector for colluders identification. The ACC code-vector is decoded from the correlation vector $\mathbf{b^{avg} } = [b^{avg}_1, ..., b^{avg}_v]$ by comparing each component with the threshold $\tau$:
\begin{equation} \label{eq:b_threshold}
c^{avg}_i=
\begin{cases}
1, \text{if $b^{avg}_i > \tau$}, \\
0, \text{otherwise}.
\end{cases}
\end{equation}
Given the ACC code-vector of the colluders $\mathbf{c^{avg}}$, the remaining problem is to find the subsets of columns from the codebook $\mathbf{C}$ such that their logic-AND composition is equal to $\mathbf{c^{avg}}$. For a $(v,k,1)$-BIBD-ACC, at most $(k-1)$ colluders can be uniquely identified.
As an example, we demonstrate the colluder detection scheme of \sys{} using the $(7,3,1)$-BIBD codebook given in Equation~\ref{eq:codebook_eg}. Assuming user $6$ and user $7$ collectively generate the averaged fingerprint:
\begin{align*}
\mathbf{f^{avg}} &= \frac{1}{2}(\mathbf{f_6}+\mathbf{f_7}), \\
&= \frac{1}{2} (2\mathbf{u_1}+2\mathbf{u_2}-2\mathbf{u_4}).
\end{align*}
The owner assesses the averaged fingerprint and computes the correlation scores as the following:
\begin{align*}
\mathbf{b^{avg}}&= \mathbf{(f^{avg})^T} \mathbf{U} = [1, 1, 0, -1, 0, 0, 0].
\end{align*}
The colluders' code-vector is then extracted according to decision rule in Equation~\ref{eq:b_threshold}:
\begin{equation*}
\mathbf{c^{avg}} = [1, 1, 0, 0, 0, 0, 0].
\end{equation*}
It can be observed that the logic-AND of column 6 and column 7 in the codebook $\mathbf{C}$ is exactly equal to $\mathbf{c^{avg}}$, while all the other compositions cannot produce the same result. This example shows that the two colluders can be uniquely identified using the designed BIBD AND-ACC codebook.
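This detection example can be verified by exhaustive search over the codebook (NumPy; the random basis and the threshold value are illustrative assumptions):

```python
import numpy as np
from itertools import combinations

# (7,3,1)-BIBD codebook from the running example.
C = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 1, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 1, 1, 0, 0],
              [1, 1, 0, 0, 1, 1, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [1, 1, 0, 1, 0, 0, 1]])
U, _ = np.linalg.qr(np.random.default_rng(5).standard_normal((7, 7)))
F = U @ (2 * C - 1)  # coded fingerprints

tau = 0.5  # hard threshold (illustrative choice)
f_avg = (F[:, 5] + F[:, 6]) / 2          # users 6 and 7 average their fingerprints
c_avg = ((f_avg @ U) > tau).astype(int)  # extracted code-vector: [1,1,0,0,0,0,0]

# All colluder sets of size <= 2 whose logic-AND composition matches c_avg.
matches = [s for r in (1, 2) for s in combinations(range(7), r)
           if np.array_equal(np.logical_and.reduce(C[:, s], axis=1).astype(int),
                             c_avg)]
# matches == [(5, 6)]: users 6 and 7 (0-indexed columns 5 and 6) are unique.
```

The search returns exactly one candidate set, matching the claim that the pair of colluders is uniquely identified by the $(7,3,1)$-BIBD AND-ACC codebook.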
\section{Evaluation} \label{eval}
We evaluate the performance of \sys{} on the MNIST~\cite{lecun1998mnist} and CIFAR10~\cite{krizhevsky2009learning} datasets and two different neural network architectures: convolutional neural networks and wide residual networks. The topologies of these two models are summarized in Table~\ref{tab:bench}. The fingerprints are embedded in the first convolutional layer of the underlying neural network. Since orthogonal fingerprinting can be considered as a sub-category of coded fingerprinting, we focus on the comprehensive evaluation of the latter. Both MNIST-CNN and CIFAR10-WRN benchmarks are used to assess the performance of coded fingerprinting while only the MNIST-CNN benchmark is used to demonstrate the workflow of orthogonal fingerprinting.
\begin{table*}[t]
\centering
\caption{Benchmark neural network architectures. Here, ${64C3(1)}$ indicates a convolutional layer with $64$ output channels and ${3\times3}$ filters applied with a stride of 1, ${MP2(1)}$ denotes a max-pooling layer over regions of size ${2\times2}$ with a stride of 1, and ${512FC}$ is a fully-connected layer consisting of $512$ output neurons. ReLU is used as the activation function in both benchmarks.}
\label{tab:bench}
\begin{tabular}{|c|c|c|}
\hline
Dataset & Model Type & Architecture \\ \hline
MNIST & CNN & 784-32C3(1)-32C3(1)-MP2(1)-64C3(1)-64C3(1)-512FC-10FC \\ \hline
CIFAR10 & WRN & Please refer to \cite{zagoruyko2016wide} \\ \hline
\end{tabular}
\end{table*}
\subsection{Coded Fingerprinting Evaluation} \label{coded_eval}
In the evaluations of coded fingerprinting, we use a $(31,6,1)$-BIBD AND-ACC codebook ($\mathbf{C}$) and assign each column as a code-vector for individual users. The codebook can accommodate $ n =\frac{v(v-1)}{k(k-1)} = 31$ users and is theoretically resilient to at most $(k-1)=5$ colluders. The embedding strength in Equation~\ref{eq:embed_loss} is set to $\gamma=0.1$ and the pre-trained host neural network is fine-tuned with the additional embedding loss for $20$ epochs in order to embed the fingerprints. The threshold for extracting the code-vector is set to $\tau = 0.85$ in all experiments. We perform a comprehensive examination of \sys{}'s performance in the rest of this paper.
\vspace{0.5em}
\noindent\textbf{Fidelity.} To show that the insertion of fingerprints does not impair the original task, we compare the test accuracy of the baseline (host neural network without fine-tuning), the fine-tuned model without embedded fingerprints, and the fine-tuned model with embedded fingerprints. The results are summarized in Table~\ref{tab:test_acc_comparison}. It can be observed from the table that embedding fingerprints in the (deep) neural network does not induce any accuracy drop and can even slightly improve the accuracy of the fine-tuned model. Thus, \sys{} meets the fidelity requirement listed in Table~\ref{tab:required}.
\begin{table*}[]
\centering
\caption{Fidelity requirement. The baseline accuracy is preserved after fingerprint embedding in the underlying benchmarks.}
\label{tab:test_acc_comparison}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Benchmark & \multicolumn{3}{c|}{MNIST-CNN} & \multicolumn{3}{c|}{CIFAR10-WRN} \\ \hline
Setting & Baseline & \begin{tabular}[c]{@{}c@{}}Fine-tune without \\ fingerprint\end{tabular} & \begin{tabular}[c]{@{}c@{}}Fine-tune with \\ fingerprint\end{tabular} & Baseline & \begin{tabular}[c]{@{}c@{}}Fine-tune without \\ fingerprint\end{tabular} & \begin{tabular}[c]{@{}c@{}}Fine-tune with\\ fingerprint\end{tabular} \\ \hline
Test Accuracy (\%) & 99.52 & 99.66 & 99.72 & 91.85 & 91.99 & 92.03 \\ \hline
\end{tabular}
\end{table*}
\vspace{0.5em}
\noindent{\textbf{Uniqueness.}} The uniqueness of code-modulated fingerprinting originates from the $(v,k,1)$-BIBD ACC codebook. Since the code-vector assigned to each user is the bit complement of a column of the incidence matrix~\cite{wu2004collusion}, which has no repeated columns, individual users are uniquely identified by their associated ACC code-vectors.
\vspace{0.5em}
\noindent {\textbf{Scalability.}} Due to the intrinsic requirement of model distribution and sharing, the fingerprinting methodology should be capable of accommodating a large number of users. For a $(v,k,1)$-BIBD ACC codebook, the maximum number of users is decided by the code length $v$ and the block size $k$:
\begin{equation*}
n=\frac{v(v-1)}{k(k-1)}.
\end{equation*}
Systematic approaches to design various families of BIBDs have been well studied in previous literature~\cite{colbourn2006handbook,rodger2008design}. For instance, Steiner triple systems are families of $(v,3,1)$-BIBD systems and are shown to exist if and only if $v \equiv 1$ or $3$ (mod 6)~\cite{rodger2008design}. An alternative method to design BIBDs is to use projective and affine geometry in $d$ dimensions over $Z_p$, where $p$ is a prime power. $(\frac{p^{d+1} -1}{p-1}, p+1, 1)$-BIBDs and $(p^d, p, 1)$-BIBDs can be constructed from projective and affine geometry~\cite{colbourn2006handbook, lidl1994introduction}. By choosing a large dimension of fingerprints in Steiner triple systems, or using projective geometry in a high-dimensional space, the number of users allowed in our proposed framework can be sufficiently large. Therefore, the scalability of \sys{} is guaranteed by a properly designed BIBD ACC codebook. By expanding the ACC codebook, \sys{} supports IP protection and DRM when new users join the model distribution system.
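The scalability relations above can be captured in a few lines (the helper names are our own, not part of any library):

```python
def max_users(v, k):
    """Maximum number of users n = v(v-1) / (k(k-1)) for a (v,k,1)-BIBD AND-ACC,
    with collusion resilience k-1."""
    return v * (v - 1) // (k * (k - 1))

def steiner_triple_exists(v):
    """A (v,3,1)-BIBD (Steiner triple system) exists iff v = 1 or 3 (mod 6)."""
    return v % 6 in (1, 3)

# Examples: the (7,3,1) codebook supports 7 users with resilience 2;
# a (31,6,1) codebook supports 31 users with resilience 5.
```

These checks make it easy to pick $(v,k)$ to match a desired number of users and collusion resilience level before generating the codebook.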
\vspace{0.5em}
\noindent{\textbf{Robustness, Reliability, and Integrity.}} We evaluate the robustness of \sys{} against fingerprints collusion attack and model modifications, including parameter pruning as well as model fine-tuning on MNIST and CIFAR10 benchmarks. For all attacks, we assume the fingerprinting method as well as the positions of the marked layers are known to the attackers. The code-vector extraction and colluder detection scheme are described in Section~\ref{coded_detect}.
We use a $(31,6,1)$-BIBD AND-ACC codebook and assume there are $31$ users in total. For a given number of colluders, $10,000$ random simulations are performed to generate different colluder sets from all users. When the colluder set is too large to be uniquely identified by the BIBD AND-ACC codebook, we consider all feasible colluder sets that match the extracted code-vector resulting from the fingerprints collusion and take the mean value of the detection rates as well as the false alarm rates. The average performance over the $10,000$ random tests is used as the final metric. The details of the robustness tests against the three aforementioned attacks are explained in the following sections.
\vspace{0.5em}
\noindent \textbf{(I) Fingerprints collusion.} Figure~\ref{fig:v31_detection} shows the detection rates of \sys{} when different numbers of users participate in the collusion attack. As can be seen from Figure~\ref{fig:v31_detection}, the detection rate is $100\%$ when the number of colluders is less than or equal to $5$, which means the collusion resilience level is $K_{max} = 5$ with the $(31,6,1)$-BIBD ACC codebook. When the number of colluders further increases, the detection rate starts to decrease, and finally reaches a stable value of $19.35\%$.
Along with the evaluation of detection rates, we also assess the false alarm rates of \sys{} using $10,000$ random simulations and summarize the results in Figure~\ref{fig:v31_falseAlarm}. It can be seen that the false alarm rate remains $0\%$ when the number of colluders does not exceed $5$, which is consistent with the $K_{max}=5$ found in the evaluation of detection rates. When the number of colluders increases further, the false alarm rate first rises and then settles at a stable value.
Comparing the detection performance of \sys{} on MNIST-CNN and CIFAR10-WRN benchmarks shown in Figures~\ref{fig:v31_detection}~and~\ref{fig:v31_falseAlarm}, one can observe that the detection rates and the false alarm rates are approximately the same for two benchmarks given the same number of colluders. The consistency across benchmarks derives from the correct code-vector extraction and the unique identification property of BIBD ACC codebooks. The colluder detection scheme of \sys{} can be considered as a high-level protocol which is completely independent of the underlying network architecture and the dataset. Therefore, our proposed framework meets the \textbf{generality} requirement listed in Table~\ref{tab:required}.
The high detection rates and low false alarm rates corroborate that \sys{} satisfies the \textbf{reliability} and \textbf{integrity} requirements in Table~\ref{tab:required}, respectively. Furthermore, the maximal number of colluders that the system can identify with $100\%$ detection rate and $0\%$ false alarm rate is found to be $K_{max}=5$, which is consistent with the theoretical tolerance ($k-1$) given by the BIBD AND-ACC. The consistency helps the owner to choose the proper ACC codebook based on her desired collusion resilience requirement.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.4\textwidth]{figs/v31_collusion_mnist_cifar10_ver3.png}
\caption{\label{fig:v31_detection} Detection (true positive) rates of fingerprints averaging attack. Using $(31,6,1)$-BIBD ACC codebook, up to 5 colluders can be uniquely identified with $100\%$ detection rate. }
\vspace{-0.3em}
\end{figure}
\begin{figure}[ht!]
\centering
\vspace{-0.65em}
\includegraphics[width=0.4\textwidth]{figs/v31_falseAlarm_ver2.png}
\caption{\label{fig:v31_falseAlarm} False alarm (false positive) rates of the fingerprints averaging attack. Using a $(31,6,1)$-BIBD ACC codebook, no false accusations occur if the number of colluders is less than or equal to 5. }
\end{figure}
\begin{figure}[ht!]
\centering
\begin{subfigure}[h]{0.45\columnwidth}
\centering
\includegraphics[width=0.98\columnwidth]{figs/codebook_mnist_detect.png}
\caption{{\label{fig:codebook_mnist_detect}}}
\end{subfigure}
~
\begin{subfigure}[h]{0.45\columnwidth}
\centering
\includegraphics[width=0.98\columnwidth]{figs/codebook_cifar10_detect.png}
\caption{\label{fig:codebook_cifar10_detect}}
\end{subfigure}
\caption{Detection rates of fingerprints collusion attacks on (a) MNIST-CNN and (b) CIFAR10-WRN benchmarks when different ACC codebooks are used. The codebook with larger block size $k$ has better detection capability of collusion attacks.}
\end{figure}
\begin{figure}[ht!]
\centering
\begin{subfigure}[h]{0.45\columnwidth}
\centering
\includegraphics[width=0.98\columnwidth]{figs/codebook_mnist_falseAlarm.png}
\caption{{\label{fig:codebook_mnist_falseAlarm}}}
\end{subfigure}
~
\begin{subfigure}[h]{0.45\columnwidth}
\centering
\includegraphics[width=0.98\columnwidth]{figs/codebook_cifar10_falseAlarm.png}
\caption{\label{fig:codebook_cifar10_falseAlarm}}
\end{subfigure}
\caption{False alarm rates of fingerprints collusion attacks on (a) MNIST-CNN and (b) CIFAR10-WRN benchmarks. Given the same number of colluders, the $(31,6,1)$-BIBD ACC codebook has lower false alarm rates than the $(13,4,1)$-BIBD ACC codebook.}
\end{figure}
For a comprehensive evaluation of \sys{}, we further compare the robustness of our proposed framework when the $(31,6,1)$-BIBD ACC and the $(13,4,1)$-BIBD ACC codebooks are used. The detection rates of the fingerprints collusion attacks on MNIST and CIFAR10 datasets are shown in Figures~\ref{fig:codebook_mnist_detect}~and~\ref{fig:codebook_cifar10_detect}, respectively. The false alarm rates are shown in Figures~\ref{fig:codebook_mnist_falseAlarm}~and~\ref{fig:codebook_cifar10_falseAlarm}. The comparison between the two codebooks shows how the design of BIBD-ACC codebooks affects the collusion resistance of \sys{}. Particularly, it can be observed that the $(31,6,1)$-BIBD AND-ACC codebook has a collusion resilience level of $K_{max}=5$ while the $(13,4,1)$-BIBD AND-ACC only has a resilience level of $K_{max}=3$. In addition, the $(31,6,1)$-BIBD codebook has a higher detection rate as well as a lower false alarm rate compared to the $(13,4,1)$-BIBD codebook given a specific number of colluders. The same conclusions hold for the collusion resilience against parameter pruning and model fine-tuning attacks. For simplicity, those results are not presented here.
\vspace{0.5em}
\noindent \textbf{(II) Model fine-tuning.} To evaluate the robustness against the fine-tuning attack, we retrain the fingerprinted model using only conventional cross-entropy loss as the objective function. The code-vector extraction and colluder detection scheme are the same as in the evaluation of fingerprints collusion attacks. The detection rates and false alarm rates of \sys{} on MNIST and CIFAR10 datasets are shown in Figures~\ref{fig:v31_finetune_detect}~and~\ref{fig:v31_finetune_falseAlarm}, respectively. Compared with Figures~\ref{fig:v31_detection}~and~\ref{fig:v31_falseAlarm}, where the robustness against collusion attacks is evaluated without model fine-tuning, the same trend can be observed and the collusion resistance level remains the same $K_{max}=5$, showing that \sys{} is robust against model fine-tuning attacks.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.4\textwidth]{figs/v31_finetune_detect.png}
\caption{\label{fig:v31_finetune_detect} Detection rates of fingerprint collusion with model fine-tuning. \sys{} attains high detection rate and the same resilience level $K_{max}=5$ even if the marked neural network is fine-tuned. }
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[width=0.4\textwidth]{figs/v31_finetune_falseAlarm.png}
\caption{\label{fig:v31_finetune_falseAlarm} False alarm rates of fingerprint collusion with model fine-tuning. The collusion resilience level $(K_{max}=5)$ is not affected by fine-tuning attack. }
\end{figure}
\vspace{0.5em}
\noindent \textbf{(III) Parameter pruning.} Parameter pruning alters the weights of the marked neural network. As such, we first evaluate the code-vector extraction (decoding) accuracy of \sys{} under different pruning rates. Figures~\ref{fig:v31_prune_mnist_decode_acc}~and~\ref{fig:v31_prune_cifar10_decode_acc} show the results on MNIST and CIFAR10 datasets, respectively. One can see that increasing the pruning rate leads to the drop of the test accuracy, while the code-vector can always be correctly decoded with $100\%$ accuracy. The superior decoding accuracy of the AND-ACC code-vectors under various pruning rates corroborates the robustness of our designed fingerprint code-vectors against the parameter pruning attack.
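For concreteness, the pruning attack can be sketched as magnitude-based pruning (an illustrative choice on our part; the text above reports only the pruning rates):

```python
import numpy as np

# Magnitude-based parameter pruning: zero out the fraction `rate`
# of smallest-magnitude weights (ties at the threshold included).
def prune_by_magnitude(weights, rate):
    flat = np.abs(weights).ravel()
    k = int(rate * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

w = np.arange(-50, 51, dtype=float)              # toy weight vector
print((prune_by_magnitude(w, 0.5) == 0).sum())   # -> 51 (ties included)
```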
\begin{figure}[ht!]
\centering
\includegraphics[width=0.4\textwidth]{figs/v31_prune_mnist_decode_acc.png}
\caption{\label{fig:v31_prune_mnist_decode_acc} Code-vector extraction accuracy and test accuracy under different pruning rates. The test accuracy of MNIST-CNN drops when the pruning rate is larger than $95\%$ while the ACC decoding accuracy remains at $100\%$. }
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[width=0.4\textwidth]{figs/v31_prune_cifar10_decode_acc.png}
\caption{\label{fig:v31_prune_cifar10_decode_acc} Code-vector extraction accuracy and test accuracy under different pruning rates. The ACC decoding accuracy is robust up to $99.99\%$ pruning rate while the test accuracy of CIFAR10-WRN benchmark degrades when the pruning rate is greater than $90\%$. }
\end{figure}
We further assess the robustness of \sys{} for colluder identification against parameter pruning. Figures~\ref{fig:v31_mnist_prune_detect}~and~\ref{fig:v31_cifar10_prune_detect} show the detection rates of \sys{} under three different pruning rates ($10\%, 50\%, 99\%$) using MNIST-CNN and CIFAR10-WRN benchmarks, respectively. Similar to the trend shown in Figure~\ref{fig:v31_detection}, the same collusion resilience level $K_{max}=5$ is observed in Figures~\ref{fig:v31_mnist_prune_detect}~and~\ref{fig:v31_cifar10_prune_detect}, suggesting that the \textbf{reliability} and the \textbf{robustness} criteria are satisfied for the parameter pruning attacks as well.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.4\textwidth]{figs/v31_mnist_prune_detect_ver4.png}
\caption{\label{fig:v31_mnist_prune_detect} Detection rates of fingerprint collusion with three different pruning rates on MNIST dataset. The collusion resistance level $K_{max}=5$ is robust up to $99\%$ parameter pruning. }
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[width=0.4\textwidth]{figs/v31_cifar10_prune_detect_ver4.png}
\caption{\label{fig:v31_cifar10_prune_detect} Detection rates of the fingerprints collusion attack with three different pruning rates on CIFAR10 dataset. The collusion resistance level remains at $K_{max}=5$ when the parameter pruning is mounted on fingerprints collusion attacks.}
\end{figure}
To assess the integrity of \sys{}, the false alarm rates under three different pruning rates are also evaluated on MNIST and CIFAR10 datasets. From the experimental results shown in Figures~\ref{fig:v31_mnist_prune_falseAlarm}~and~\ref{fig:v31_cifar10_prune_falseAlarm}, one can see that no false accusation will occur if the number of colluders is smaller than or equal to $K_{max}=5$, which is consistent with the evaluation of fingerprints collusion attacks. This consistency indicates that our colluder detection scheme satisfies the \textbf{integrity} criterion and is robust against parameter pruning attacks.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.4\textwidth]{figs/v31_mnist_prune_falseAlarm_ver4.png}
\caption{\label{fig:v31_mnist_prune_falseAlarm} False alarm rates of fingerprint collusion attacks on MNIST dataset where three pruning rates are tested. No false alarms will occur if the number of colluders does not exceed 5, showing the robustness and integrity of \sys{} against parameter pruning.}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[width=0.4\textwidth]{figs/v31_cifar10_prune_falseAlarm_ver4.png}
\caption{\label{fig:v31_cifar10_prune_falseAlarm} False alarm rates of collusion attacks on CIFAR10 dataset where three pruning rates are used. Innocent users will not be incorrectly accused if the number of colluders is at most 5. }
\vspace{-0.5em}
\end{figure}
In conclusion, the high detection rates and low false alarm rates under different attack scenarios corroborate that \sys{} satisfies the robustness, reliability, and integrity requirements discussed in Table~\ref{tab:required}. The consistency of detection performance across benchmarks indicates that \sys{} meets the generality requirement.
\vspace{0.5em}
\noindent{\textbf{Efficiency.}} Here, we discuss the efficiency of the fingerprinting methodology in terms of the runtime overhead for fingerprints embedding and the efficiency of the AND-ACC codebook.
\begin{itemize}
\item \textit{Fingerprints Embedding Overhead.}
Since each distributed model needs to be retrained with the fingerprint embedding loss, it is necessary that the fingerprinting methodology has low runtime overhead of generating individual fingerprints. We evaluate the fingerprints embedding efficiency of \sys{} by retraining the unmarked host neural network for 20 epochs and 5 epochs. The robustness of the resulting two marked models against fingerprints collusion attacks is compared. Figures~\ref{fig:overhead_mnist_collusion_detect}~and~\ref{fig:overhead_mnist_collusion_falseAlarm} demonstrate the detection rates and false alarm rates of these two marked models using MNIST-CNN benchmark. As can be seen from the comparison, embedding fingerprints by retraining the neural network for 5 epochs is sufficient to ensure the collusion resistance of the embedded fingerprints, suggesting that the runtime overhead induced by \sys{} is negligible. We also observe that the marked models retrained for 5 epochs and 20 epochs have the same collusion resistance level against parameter pruning and model fine-tuning attacks.
\vspace{0.5em}
\item \textit{ACC Codebook Efficiency.} In the multi-media domain, the efficiency of an AND-ACC codebook for a given collusion resistance is defined as the number of users that can be supported per basis vector: $\beta = \frac{n}{v}$. For a $(v,k,1)$-BIBD AND-ACC, the codebook efficiency is:
\begin{equation} \label{eq:code_effic}
\beta= \frac{v-1}{k(k-1)}.
\end{equation}
Thus, for a fixed resilience level $k$, the
efficiency of an AND-ACC codebook constructed from BIBDs improves as
the code length increases~\cite{trappe2003anti}. \sys{} allows the owner to design an efficient coded fingerprinting methodology by choosing appropriate parameters for the BIBD ACC codebook.
In contrast with orthogonal fingerprinting where the number of users is the same as the fingerprint dimension $n=v$ (thus $\beta=1$), it has been proven that a $(v,k,\lambda)$-BIBD has $n \geq v$~\cite{dinitz1992contemporary}, meaning that the codebook efficiency of the BIBD construction satisfies $\beta \geq 1$. Equation~\ref{eq:code_effic} shows the trade-off between the code length $v$ and the collusion resilience level $k$. When the codebook efficiency is fixed, higher resistance level requires longer fingerprinting codes.
\end{itemize}
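As a quick numerical illustration of Equation~\ref{eq:code_effic} (a side computation, not part of the reported experiments): both codebooks used above happen to have $\beta = 1$, while for a fixed resilience level $k$ the efficiency grows with the code length $v$:

```python
# Codebook efficiency beta = (v - 1) / (k * (k - 1)) of a
# (v, k, 1)-BIBD AND-ACC; the resilience level is K_max = k - 1.
def acc_efficiency(v, k):
    """Users supported per fingerprint basis vector."""
    return (v - 1) / (k * (k - 1))

# The two codebooks used in the experiments both give beta = 1:
print(acc_efficiency(13, 4), acc_efficiency(31, 6))   # -> 1.0 1.0
# For fixed k = 3, beta grows with v (Steiner triple systems):
for v in (7, 9, 13, 15):
    print(v, acc_efficiency(v, 3))
```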
\begin{figure}[]
\centering
\includegraphics[width=0.4\textwidth]{figs/overhead_mnist_collusion_detect_ver2.png}
\caption{\label{fig:overhead_mnist_collusion_detect} Detection rates of fingerprint collusion attacks on MNIST-CNN benchmark. The same detection performance can be observed when the unmarked model is retrained for two different epochs for fingerprints embedding.}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[width=0.4\textwidth]{figs/overhead_mnist_collusion_falseAlarm_ver2.png}
\caption{\label{fig:overhead_mnist_collusion_falseAlarm} False alarm rates of fingerprints collusion attacks on MNIST-CNN benchmark when two different embedding epochs are used. Retraining for 5 epochs is sufficient to ensure low false alarm rates.}
\end{figure}
\subsection{Orthogonal Fingerprinting Evaluation} \label{orthog_eval}
We evaluate the orthogonal fingerprinting methodology on MNIST dataset using a group of 30 users. As expected, orthogonal fingerprinting has a good ability to distinguish individual users, while its collusion resilience and scalability are not competitive with coded fingerprinting. In addition, orthogonal fingerprinting is essentially a special case of coded fingerprinting whose codebook is an identity matrix. For this reason, we only evaluate the uniqueness, collusion resilience, and scalability of orthogonal fingerprinting and do not repeat the comprehensive assessment here.
\vspace{0.5em}
\noindent \textbf{Uniqueness.}
Each user can be uniquely identified by computing the correlation scores using Equation~\ref{eq:orthog_decode}. An example of user identification is shown in Figure~\ref{fig:orthog_uniq_user11} where user 11 is selected as the target. The correct user can be easily found from the position of the ``spike" in correlation scores due to the orthogonality of fingerprints.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.4\textwidth]{figs/orthog_uniq_mnist_user11_ver2.png}
\caption{\label{fig:orthog_uniq_user11} Correlation scores of the fingerprint assigned to user 11 with the fingerprints of all 30 users when orthogonal fingerprinting is applied. The target user can be uniquely identified from the position of the ``spike'' in the correlation statistics. }
\end{figure}
\vspace{0.5em}
\noindent \textbf{Collusion resilience.} We evaluate the collusion resistance of orthogonal fingerprinting according to the colluder identification scheme discussed in Section~\ref{orthog_detect}. The detection results of three colluders are shown in Figure~\ref{fig:orthog_collusion_mnist_user5_10_15}, which suggests that the three participants in the collusion attack can be accurately identified by thresholding the correlation scores. However, when more users contribute to the collusion of fingerprints, the correlation scores of true colluders attenuate fast and the colluder set cannot be perfectly identified. Figure~\ref{fig:orthog_collusion_mnist_user1_5_10_15_20_25_28} shows the detection results of seven colluders where user 30 is falsely accused with the decision threshold denoted by the red dashed line. As can be seen from Figure~\ref{fig:orthog_collusion_mnist_user1_5_10_15_20_25_28}, there is no decision threshold that can ensure complete detection of all colluders and no false alarms of innocent users. Thus, orthogonal fingerprinting suffers from fingerprints attenuation and has a high chance of false positives as well as false negatives when collusion happens.
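The attenuation effect can be reproduced with a small synthetic sketch (the orthonormal fingerprints below are random stand-ins for the actual weight-embedded ones; the threshold 0.3 matches the red dashed line in Figure~\ref{fig:orthog_collusion_mnist_user5_10_15}):

```python
import numpy as np

# Correlation-based colluder detection for orthogonal fingerprinting.
# With orthonormal fingerprints, averaging collusion by K users
# attenuates each colluder's correlation score to 1/K.
rng = np.random.default_rng(0)
n_users, dim = 30, 30
q, _ = np.linalg.qr(rng.standard_normal((dim, n_users)))
fingerprints = q.T                 # rows are orthonormal fingerprints

def correlation_scores(extracted):
    """Correlation of an extracted fingerprint with every user's."""
    return fingerprints @ extracted

three = fingerprints[[4, 9, 14]].mean(axis=0)                 # K = 3 -> 1/3
seven = fingerprints[[0, 4, 9, 14, 19, 24, 27]].mean(axis=0)  # K = 7 -> 1/7
detected3 = set(np.where(correlation_scores(three) > 0.3)[0])
detected7 = set(np.where(correlation_scores(seven) > 0.3)[0])
print(sorted(map(int, detected3)), sorted(map(int, detected7)))
# -> [4, 9, 14] []
```

With three colluders the scores of $1/3$ clear the threshold; with seven colluders all true scores drop to $1/7$ and the same threshold catches nobody, which is the attenuation behavior observed in the figures.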
\begin{figure}[ht!]
\centering
\includegraphics[width=0.4\textwidth]{figs/orthog_collusion_mnist_user5_10_15_ver2.png}
\caption{\label{fig:orthog_collusion_mnist_user5_10_15} Detection results of three colluders (user 5, user 10, and user 15) participating in the collusion attack. The red dashed line is the threshold (0.3) that can catch all colluders correctly. }
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[width=0.4\textwidth]{figs/orthog_collusion_mnist_user1_5_10_15_20_25_28_ver2.png}
\caption{\label{fig:orthog_collusion_mnist_user1_5_10_15_20_25_28} Detection results of seven colluders (user 1, 5, 10, 15, 20, 25 and 28) participating in fingerprints collusion. User 30 is falsely accused if the red dashed line is used as the threshold. }
\end{figure}
\vspace{0.5em}
\noindent \textbf{Scalability.} Orthogonal fingerprinting requires the code length to be $\mathcal{O}(n)$ bits for accommodating $n$ users, which could be much larger than the code bits $\mathcal{O}(\sqrt{n})$ needed in coded fingerprinting. Thereby, the scalability of orthogonal fingerprinting is inferior to that of code modulated fingerprinting.
\section{Conclusion} \label{conclusion}
In this paper, we propose \sys{}, the first generic DL fingerprinting framework for IP protection and digital right management. Two fingerprinting methodologies, orthogonal fingerprinting and coded fingerprinting, are presented and compared. \sys{} works by embedding the fingerprints information in the probability density distribution of weights in different layers of a (deep) neural network. The performance of the proposed framework is evaluated on MNIST and CIFAR10 datasets using two network architectures. Our results demonstrate that \sys{} satisfies all criteria for an effective fingerprinting methodology, including fidelity, uniqueness, reliability, integrity, and robustness. \sys{} attains comparable accuracy to the baseline neural networks and resists potential attacks such as fingerprints collusion, parameter pruning, and model fine-tuning. The BIBD AND-ACC modulated fingerprinting of \sys{} has consistent colluders detection performance across benchmarks, suggesting that our framework is generic and applicable to various network architectures.
\bibliographystyle{IEEEtran}
In Vol. 1, p. 49 of
{\it Higher Transcendental Functions} of the Bateman Project
\cite{Erdelyi HTF}
we read
"Of all integrals which contain
gamma functions in their
integrands the most important ones are the so-called Mellin-Barnes
integrals. Such integrals were first introduced by S. Pincherle,
in 1888 \cite{Pincherle 88};
their theory
has been developed in 1910
by H. Mellin (where there are references
to earlier work) \cite{Mellin 10} and they were used for a complete
integration of the hypergeometric differential equation
by E.W. Barnes \cite{Barnes 08}."
\newpage
\noindent
In the classical treatise on Bessel functions by
Watson \cite{Watson BESSEL}, p. 190, we read
"By using integrals of a type introduced by Pincherle and Mellin,
Barnes has obtained representations of Bessel functions ...."
\vsh\pni
Salvatore Pincherle (1853 -- 1936)
was Professor of Mathematics at the
University of Bologna from 1880 to 1928.
He retired from the University just after the International
Congress of Mathematicians that he had organized in Bologna,
following the invitation received at the previous Congress
held in Toronto in 1924.
He wrote several treatises and lecture notes on Algebra, Geometry,
Real and Complex Analysis.
His main book related to his scientific activity is entitled
"Le Operazioni Distributive e loro Applicazioni all'Analisi";
it was written in collaboration
with his assistant, Dr. Ugo Amaldi, and
was published in 1901 by Zanichelli, Bologna.
Pincherle can be considered one of the most prominent
founders of the Functional Analysis, as pointed out by J. Hadamard
in his review lecture "Le d\'eveloppement et le
r\^ole scientifique du Calcul fonctionnel",
given at the Congress of Bologna (1928).
A description of Pincherle's scientific works requested from him
by Mittag-Leffler, who was the Editor of Acta Mathematica,
appeared (in French) in 1925 on this prestigious
journal \cite{Pincherle TRAVAUX}.
A collection of selected papers (38 from 247 notes plus 24
treatises) was edited by Unione Matematica Italiana (UMI)
on the occasion of the centenary of his birth,
and published by Cremonese, Roma 1954.
Note that S. Pincherle was the first President of UMI, from 1922 to 1936.
Here we point out that the 1888 paper (in Italian)
of S. Pincherle on the
{\it Generalized Hypergeometric Functions}
led him to introduce the afterwards named Mellin-Barnes integral
to represent the solution of a
generalized hypergeometric differential equation investigated
by Goursat in 1883.
Pincherle's priority was
explicitly recognized by Mellin and Barnes themselves,
as reported below.
\vsh\pni
In 1907 Barnes, see p. 63 in \cite{Barnes 07}, wrote:
"The idea of employing contour integrals involving gamma functions
of the variable in the subject of integration appears to be due
to Pincherle, whose suggestive paper was the starting point of
the investigations of Mellin (1895) though the type of contour
and its use can be traced back to Riemann."
In 1910 Mellin, see p. 326ff in \cite{Mellin 10},
devoted a section (\S 10: Proof of Theorems of Pincherle)
to revisit the original work of Pincherle; in particular, he wrote
"Before we are going to prove this theorem, which is a special case of a
more general theorem of Mr. Pincherle, we want to describe more closely
the lines $L$ over which the integration preferably is to be carried
out." [free translation from German].
\newpage
\noindent
The Mellin-Barnes integrals are the essential tools
for treating the
two classes of higher transcendental functions known as $G$ and $H$
functions, introduced by Meijer (1946) \cite{Meijer 46}
and Fox (1961) \cite{Fox 61} respectively, so
Pincherle can be considered their precursor.
For an exhaustive treatment of the Mellin-Barnes integrals we refer
to the recent monograph by Paris and Kaminski
\cite{ParisKaminski BOOK01}.
\vsh\pni
The purpose of our paper is to let know the community
of scientists interested in special functions the
pioneering 1888 work by Pincherle, that, in the author's
intention, was devoted
to compare two different generalizations of the
Gauss hypergeometric function due to Pochhammer and to
Goursat. Incidentally, for a particular case of the Goursat function,
Pincherle used an integral representation in the complex plane
that in future was adopted by Mellin and Barnes for their
treatment of the {\it generalized hypergeometric functions}
known as $\,_pF_q (z)\,. $
We also intend to show,
in the original part of our paper,
that, by extending the original arguments
by Pincherle, we are able to provide
the Mellin-Barnes integral representation of the transcendental
functions introduced by Meijer (the so-called $G$ functions).
\vsh\pni
The paper is divided as follows.
In Section 2
we report the major statements and
results of the 1888 paper by Pincherle.
In Section 3 we show how it is possible to originate from these
results the Meijer $G$ functions by a proper
generalization of Pincherle's method.
Finally, Section 4 is devoted to the conclusions.
We find it convenient to reserve an Appendix for recalling
some basic notions for the generalized hypergeometric functions
and the Meijer $G$ functions.
\subsection*{2. The Pochhammer and Goursat generalized hypergeometric
functions via Pincherle's arguments}
The 1888 paper by Pincherle is based on what he called
"duality principle",
which relates linear differential equations with
rational coefficients to linear difference equations with
rational coefficients.
Let us remind that the sentence "rational coefficients" means
that the coefficients are in general rational functions
(\ie ratio between two polynomials)
of the independent variable and, in particular, polynomials.
\vsh\pni
By using this principle with polynomial coefficients,
Pincherle showed that
two generalized hypergeometric functions
proposed and investigated respectively by Pochhammer (1870), see
\cite{Pochhammer 70}, and by Goursat (1883),
see \cite{Goursat CR83,Goursat ENS83},
can be obtained and related to each other%
\footnote{In fact, translating from Italian,
the author so writes in introducing his paper:
"It is known that to any linear differential
equation with rational coefficients one may let correspond
a linear difference equation with rational coefficients.
In other words, if the former equation is given, one can immediately
write the latter one and viceversa;
furthermore, from the integral of the one,
the integral of the latter can be easily deduced.
This {\it relationship} appears to be originated by a sort of
{\it duality principle} of which, in this note, I want to treat
an application concerning generalized hypergeometric functions."}.
\vsh\pni
The generalized hypergeometric functions introduced by Pochhammer
and Goursat considered by Pincherle are solutions
of linear differential equations of order $n$
with polynomial coefficients, that we report in Appendix.
As a matter of fact, the duality principle states the correspondence
between a linear {\it ordinary differential equation} ($ODE$)
and a linear {\it finite difference equation} ($FDE$).
The coefficients of both equations are assumed to be
{\it rational} functions, in particular {\it polynomials}.
In his analysis \cite{Pincherle 88} Pincherle considered
the correspondence between the following equations,
$$
\sum_{h=0}^m
\left( a_{h\,0} + a_{h\,1}\,\hbox{e}^{-t}
+ a_{h\,2}\,\hbox{e}^{-2t} + \dots + a_{h\,p}\,\hbox{e}^{-pt} \right)
\psi^{(h)} (t) =0, \eqno(2.1)$$
$$ \sum_{k=0}^p
\left[ a_{0\,k} + a_{1 \,k} (x +k)
+ a_{2\,k} (x +k)^2 + \dots + a_{m\,k} (x +k)^m \right]
f (x +k) =0,\eqno(2.2)$$
where $\psi(t)$ and $f(x)$ are analytic functions.
These functions are required to be related to each other
through a Laplace-type transformation
$ \psi(t) \,{\leftrightarrow} \, f(x)$ defined by the formulas
$$ \hbox{(a)} \quad
f(x) = \int_l \hbox{e}^{-x t}\, \psi (t)\, dt\,,
\qq
\hbox{(b)} \quad
\psi (t)=\rec{2\pi i}\,\int_L \hbox{e}^{+x t}\, f (x)\,dx \,,
\eqno(2.3)$$
where $l$ and $L $ are appropriate integration paths in the
complex $t$ and $x$ plane, respectively.
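The mechanism of this correspondence can be made explicit term by term. Assuming that the path $l$ is chosen so that all boundary terms vanish, $h$ successive integrations by parts give
$$ \int_l \hbox{e}^{-x t}\, \hbox{e}^{-k t}\, \psi^{(h)} (t)\, dt
= (x+k)^h \int_l \hbox{e}^{-(x +k) t}\, \psi (t)\, dt
= (x+k)^h\, f(x+k)\,, $$
so that the transformation (a) maps each term $a_{h\,k}\,\hbox{e}^{-kt}\,\psi^{(h)}(t)$ of the $ODE$ (2.1) into the corresponding term $a_{h\,k}\,(x+k)^h\, f(x+k)$ of the $FDE$ (2.2).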
\vsh\pni
The singular points of the $ODE$ are the roots of the polynomial
$$
a_{m\,0} + a_{m\,1}\,z
+ a_{m\,2}\,z^2 + \dots + a_{m\,p}\,z^p =0\,,
\eqno(2.4)$$
whereas the singular points of the $FDE$ are the roots of
the polynomial
$$
a_{0\,0} + a_{1 \,0} z
+ a_{2\,0} z^2 + \dots + a_{m\,0} z^m = 0\,.
\eqno(2.5)$$
For the details of the above correspondence
Pincherle refers to the 1885 fundamental paper by Poincar\'e%
\footnote{For an account of Poincar\'e's theorem upon which
Pincherle based his analysis the interested reader can consult
the recent book by Elaydi \cite{Elaydi DE99}, pp. 320-323.}
\cite{Poincare 85},
and his own 1886 note \cite{Pincherle 86}.
Here we limit ourselves to point out what can be easily seen
from a formal comparison between the $ODE$ (2.1) and the $FDE$ (2.2).
We recognize
that the degree $p$ of the coefficients in $\hbox{e}^{-t}$ of the $ODE$
provides the order of the $FDE$, and that the order $m$
of the $ODE$ gives the degree in $x$
of the coefficients of the $FDE$.
Vice versa, the degree $m$ of the coefficients of the $FDE$ provides
the order of the $ODE$, and the order $p$ of the $FDE$
gives the degree in $\hbox{e}^{-t}$ of the coefficients
of the $ODE$.
\vsh\pni
Pincherle's intention was to apply the above duality principle
in order to
compare the generalized
hypergeometric function introduced by Pochhammer and governed by
(A.7) with that by Goursat governed by (A.6).
Using his words, he proved that the family of the Pochhammer functions
(of arbitrary order $p$) originates
from a linear $FDE$ (of order $p$) whose coefficients are polynomials
of the first degree in $x\,,$
and that the family of the Goursat functions (of arbitrary order $m$)
originates from a linear $ODE$ (of order $m$) whose
coefficients are polynomials of the first degree in $x = \hbox{e}^{-t}\,. $
As a consequence of the duality principle there is a
mutual correspondence between the properties of the functions
belonging to one family and to the other.
\vsh\pni
For the Pochhammer function he started from
the $ODE$ of the first order
$$ \qq \qq \left( a_{0\,0} + a_{0 \,1}\, \hbox{e}^{-t}
+ a_{0 \,2}\, \hbox{e}^{-2 t}
+ \dots + a_{0\,p}\, \hbox{e} ^{-pt} \right) \,\psi(t) \qq\qq\qq\qq
\eqno(2.6)$$
$$ + \left( a_{1\,0} + a_{1 \,1}\, \hbox{e}^{-t}
+ a_{1 \,2}\, \hbox{e}^{-2 t}
+ \dots + a_{1\,p}\, \hbox{e} ^{-pt} \right) \,\psi^{(1)}(t)
= 0\,, $$
to be put in correspondence with the $FDE$
$$ \left( a_{0\,0} + a_{1 \,0} x\right) f(x)+
\left[ a_{0\,1} + a_{1 \,1} (x+1)\right] f(x+1)+
\left[ a_{0\,2} + a_{1 \,2} (x+2)\right] f(x+2) \eqno(2.7)$$
$$ + \dots + \left[ a_{0\,p} + a_{1 \,p} (x+p)\right] f(x+p) =0\,.
$$
In this case Pincherle was able to show that the solution $f(x)$
of the $FDE$ (2.7), obtained through the formula (a) in (2.3),
depends on $p$ parameters, whose logarithms are the singular
points of the $ODE$ (2.6). With respect to each of these
parameters, $f(x)$ satisfies a linear $ODE$ of the Pochhammer type
of order $p\,.$
\vsh\pni
For the Goursat function he started from a $FDE$ of the first order
$$ \qq \qq \qq \qq \left[ a_{0\,0} + a_{1 \,0}\, x + a_{2 \,0}\, x^2
+ \dots + a_{m\,0}\, x ^m \right] \,f(x )\qq\qq\qq
\eqno(2.8)$$
$$ + \left[ a_{0\,1} + a_{1 \,1}\,(x+1) + a_{2 \,1}\,(x+1)^2
+ \dots + a_{m\,1}\, (x+1) ^m \right] \,f(x+1) =0\,,$$
to be put in correspondence to the linear $ODE$ of order $m$
$$\qq \left( a_{0\,0} + a_{0 \,1} \hbox{e}^{-t}\right)\, \psi(t) +
\left( a_{1\,0} + a_{1 \,1} \hbox{e}^{-t}\right)\, \psi^{(1)}(t) +
\left( a_{2\,0} + a_{2 \,1} \hbox{e}^{-t}\right)\, \psi^{(2)}(t)
\qq \eqno(2.9)$$
$$ + \dots
+ \left( a_{m\,0} + a_{m \,1} \hbox{e}^{-t}\right)\, \psi^{(m)}(t) =0\,.
$$
Using a result of Mellin, see \cite{Mellin 86,Mellin 87},
Pincherle wrote
the solution of the $FDE$ (2.8) as
$$ f(x ) = c^x \prod_{\nu =1}^m
{\Gamma(x -\rho _\nu )\over \Gamma(x -\sigma _\nu)}\,,
\eqno(2.10)$$
where
the $\rho _\nu$'s and $\sigma _\nu$'s are respectively the roots of the
algebraic equations
$$\cases{ a_{0\,0} + a_{1 \,0}\, x
+ \dots + a_{m\,0}\, x ^m = a_{m\,0}\,
{\displaystyle \prod_{\nu =1}^m}
(x -\rho _\nu )= 0\,,\cr\cr
a_{0\,1} + a_{1 \,1}\, (x +1)
+ \dots + a_{m\,1}\, (x +1)^m =
a_{m\,1}\, {\displaystyle \prod_{\nu =1}^m}
(x -\sigma _\nu ) =0\,.
\cr}
\eqno(2.11)
$$
and $c$ is a constant. If $a_{m\,0},\,a_{m\,1}$ are both
different from zero, we can assume $c = - a_{m\,0}/ a_{m\,1}\,.$
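That (2.10) actually solves the $FDE$ (2.8) can be checked at once: the functional equation $\Gamma(x+1) = x\,\Gamma(x)$ applied to (2.10) yields
$$ {f(x+1)\over f(x)} = c\, \prod_{\nu =1}^m {x -\rho _\nu \over x -\sigma _\nu}
= -\, {a_{m\,0}\,\prod_{\nu =1}^m (x -\rho _\nu )
\over a_{m\,1}\,\prod_{\nu =1}^m (x -\sigma _\nu )}\,, $$
which, in view of the factorizations (2.11), is exactly the ratio imposed by (2.8).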
\vsh\pni
Pincherle showed that, by setting $z= c\,\hbox{e}^{t}\,,$
the $ODE$ of order $m$ (2.9) is nothing but
the Goursat differential equation (A.6).
\vsh\pni
Furthermore, in the special case $a_{m\,1} =0\,,$ he gave the following
relevant formula for the solution
$$ \psi(t) = \rec{2\pi i} \,\int_{a-i\infty} ^{a+i\infty}
{\Gamma(x-\rho_1)\,\Gamma(x-\rho_2)\dots \Gamma(x-\rho_m)
\over
\Gamma(x-\sigma _1)\,\Gamma(x-\sigma_2)\dots \Gamma(x-\sigma_{m-1})}
\, \hbox{e}^{xt} \, dx \eqno(2.12)$$
where $a > \Re \{\rho _1, \rho _2, \dots, \rho _m\}\,. $
We recognize in (2.12) the first example in the literature
of the (afterwards named) Mellin-Barnes integral.
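As a present-day illustration (not in Pincherle's paper), the simplest instance $m=1$ of (2.12) can be evaluated in closed form: shifting $x = s + \rho_1$ and using the Mellin inversion formula $\rec{2\pi i}\int_{a'-i\infty}^{a'+i\infty} \Gamma(s)\, z^{-s}\, ds = \hbox{e}^{-z}$ with $z = \hbox{e}^{-t}$ gives $\psi(t) = \hbox{e}^{\rho_1 t}\,\exp(-\hbox{e}^{-t})$. The following sketch (assuming the mpmath library) verifies this numerically along a truncated vertical contour:

```python
import mpmath as mp

# Numerical check of Pincherle's integral (2.12) for m = 1:
# psi(t) = (1/2 pi i) * int_{a-i oo}^{a+i oo} Gamma(x - rho) e^{x t} dx,
# whose closed form is e^{rho t} * exp(-e^{-t}).
mp.mp.dps = 25
rho, a, t = mp.mpf('0.5'), mp.mpf(2), mp.mpf(1)   # need a > rho

def integrand(y):
    x = a + 1j * y          # point on the vertical line Re x = a
    return mp.gamma(x - rho) * mp.exp(x * t)

# |Gamma(a + i y)| decays like e^{-pi |y| / 2}, so truncating the
# contour at |y| = 40 is far below the working precision.
psi = mp.quad(integrand, [-40, 0, 40]) / (2 * mp.pi)
closed_form = mp.exp(rho * t) * mp.exp(-mp.exp(-t))
print(psi.real, closed_form)
```

The exponential decay of the Gamma function along vertical lines, i.e.\ Pincherle's asymptotic formula quoted below, is precisely what makes this truncation legitimate.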
\vsh\pni
The convergence of the integral was proved by Pincherle
by using his asymptotic formula for $\Gamma(a + i\eta)$
as $\eta \to \pm \infty$%
\footnote{We also note the priority of Pincherle in
obtaining this asymptotic formula, as outlined by Mellin, see {\it e.g.}\
\cite{Mellin 91}, pp. 330-331, and \cite{Mellin 10}, p.309.
In his 1925 "Notices sur les travaux"
\cite{Pincherle TRAVAUX} (p. 56, \S 16) Pincherle wrote
"Une expression asymptotique de $\Gamma(x)$ pour $x \to \infty$
dans le sens imaginaire qui se trouve dans
\cite{Pincherle 88} a \'et\'e attribu\'ee \`a
d'autres auteurs, mais M. Mellin m'en a
r\'ecemment r\'evendiqu\'e la priorit\'e."
This formula is fundamental to investigate the convergence of the
Mellin-Barnes integrals, as one can recognize from the detailed
analysis by Dixon and Ferrar \cite{DixonFerrar 36}, see also
\cite{ParisKaminski BOOK01}.}.
So, for a solution of a particular case of the Goursat equation,
Pincherle provided an integral representation
that later was adopted by Mellin and Barnes for their
treatment of the generalized hypergeometric functions
$\,_p F_q (z)\,.$
Since then, the merits of Mellin and Barnes were so well recognized
that their names were attached to the integrals
of this type; on the other hand,
after the 1888 paper (written in Italian),
Pincherle did not pursue this topic further,
so his name was no longer related
to these integrals and, in this respect, his 1888 paper
was practically ignored.
\subsection*{3. The Meijer transcendental function via Pincherle's arguments}
In more recent times
other families of higher transcendental functions
have been introduced to generalize the hypergeometric function
based on their representation by Mellin-Barnes type integrals.
We especially refer to the so-called $G$ and $H$ functions,
briefly recalled in the Appendix.
\vsh\pni
In this section (the original part of our paper)
we show that by extending the original arguments by Pincherle
based on the duality principle we are able to provide
the differential equation and the
Mellin-Barnes integral representation of the $G$ functions.
However, we note that these arguments, being
based on equations with rational coefficients,
do not allow us to treat the Fox $H$ functions,
since for them an ordinary differential equation
cannot be found in the general case.
\vsh\pni
Our starting point is still the "duality principle"
that involves a {\it $FDE$ of the first order
as in Pincherle's approach for the Goursat function},
but, at variance with Eq. (2.8), we now allow
the degrees of the two
polynomial coefficients to be different.
Setting $p, q$ the degrees of these coefficients, our $FDE$
reads
$$\qq \qq \qq \qq \left[ a_{0\,0} + a_{1 \,0}\, x + a_{2 \,0}\, x^2
+ \dots + a_{p\,0}\, x ^p \right] \,f(x )\qq \qq \qq\eqno(3.1)$$
$$ + \left[ a_{0\,1} + a_{1 \,1}\,(x+1) + a_{2 \,1}\,(x+1)^2
+ \dots + a_{q\,1}\, (x+1) ^q \right] \,f(x+1) =0\,.$$
We can prove after some algebra
that the associated $ODE$
turns out to be independent of the
order relation between $p$ and $q$ and reads
$$
\sum_{h=0}^p a_{h\,0} \, \psi^{(h)} (t) +
\hbox{e}^{-t} \, \sum_{h=0}^q a_{h\,1}\,
\psi^{(h)} (t) =0\,.\eqno(3.2)$$
As we have learnt from Pincherle's analysis,
the solution $\psi(t)$ of the $ODE$ (3.2) can
be expressed in terms
of the solution $f(x)$ of the $FDE$ (3.1), according to
the integral representation (b) in Eq. (2.3).
\vsh\pni
Now, in view of Mellin's results used by Pincherle
(see also Milne-Thomson \cite{MilneThomson FDE51}, \S 11.2, p. 327),
we can write
the solution of (3.1) in terms of products and fractions
of $\Gamma$ functions.
Denoting by
$\rho _j$ ($j=1,\dots ,p$) and $\sigma_k$ ($k=1,\dots,q$)
the roots of the
algebraic equations
$$\cases{ a_{0\,0} + a_{1 \,0}\, x
+ \dots + a_{p\,0}\, x ^p = a_{p\,0}\,
{\displaystyle \prod_{j =1}^p}
(x -\rho _j )= 0\,,\cr\cr
a_{0\,1} + a_{1 \,1}\, (x +1)
+ \dots + a_{q\,1}\, (x +1)^q =
a_{q\,1}\, {\displaystyle \prod_{k =1}^q}
(x -\sigma_k ) =0\,.
\cr}
\eqno(3.3) $$
we can write the required solution as
$$ f(x) = c^x\,
{\prod_{j =1}^p \Gamma(x-\rho_j)\over
\prod_{k =1}^q \Gamma(x -\sigma_k) }\,,
\quad c = - { a_{p\,0} \over a_{q\,1}}\,.
\eqno(3.4) $$
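As a quick numerical sanity check (ours, not Pincherle's), one can verify (3.4) in the first-order case $p=q=1$, where the $FDE$ (3.1) reduces to $a_{1\,0}(x-\rho_1)f(x)+a_{1\,1}(x-\sigma_1)f(x+1)=0$. The coefficients and roots below are arbitrary illustrative values.

```python
from cmath import exp, log
from math import gamma

# First-order instance (p = q = 1) of the FDE (3.1):
#   a10*(x - rho)*f(x) + a11*(x - sigma)*f(x + 1) = 0,
# with solution (3.4): f(x) = c**x * Gamma(x - rho)/Gamma(x - sigma),
# c = -a10/a11.  The numbers below are arbitrary test values.
a10, a11 = 2.0, 5.0
rho, sigma = 0.3, -0.7
c = -a10 / a11

def cpow(base, expo):
    return exp(expo * log(base))     # principal branch of base**expo (c < 0 here)

def f(x):
    return cpow(c, x) * gamma(x - rho) / gamma(x - sigma)

# residual of the difference equation at a few sample points
residual = max(
    abs(a10 * (x - rho) * f(x) + a11 * (x - sigma) * f(x + 1))
    for x in (1.1, 2.4, 3.7)
)
```

The residual vanishes to machine precision, reflecting the ratio $f(x+1)/f(x)=c\,(x-\rho_1)/(x-\sigma_1)$ implied by (3.4).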
We note, by using the reflection formula
of the Gamma function (and recalling that the solutions
of the $FDE$ (3.1) are determined up to factors of period 1),
that Eq. (3.4) can be re-written
in the following alternative form
$$ f(x) = c^x\,
{\prod_{k =1}^q \Gamma(1+\sigma_k -x)\over
\prod_{j =1}^p \Gamma(1 +\rho_j -x) }\,,
\quad c = (-1)^{p-q+1}\, { a_{p\,0} \over a_{q\,1}}\,.
\eqno(3.5) $$
Furthermore,
introducing the integers $m,n$ such that
$ \,0\le m \le q\,,$ $\, 0\le n \le p\,, $
we can combine the previous formulas (3.4)-(3.5)
and obtain the alternative form
$$ f(x) = c^x\,
{\prod_{j =1}^n \Gamma(x-\rho_j)\,
\prod_{k =1}^m \Gamma(1+\sigma_k -x)
\over
\prod_{j =n+1}^p \Gamma(1+\rho_j -x)\,
\prod_{k =m+1}^q \Gamma(x- \sigma_k)} \,, \eqno(3.6)$$
with
$$ c = (-1)^{m+n -p+1}\, { a_{p\,0} \over a_{q\,1}}\,.
\eqno(3.7) $$
We note that Eq. (3.6) reduces to the Pincherle expression (2.10)
by setting $\{n=p=q\,,\, m=0\}$, and to Eqs. (3.4), (3.5)
by setting $\{n=p\,,\,m=0\}$ and
$\{n=0\,,\,m=q\}$, respectively.
By adopting the form (3.6)-(3.7) we have the most general
expression for $f(x)$ which in its turn allows us to arrive
at the most general solution $\psi(t)$
of the corresponding $ODE$ (3.2) in the form
$$ \psi(t)
=\rec{2\pi i}\,\int_L c^x \,
{\prod_{j =1}^n \Gamma(x-\rho_j)\,
\prod_{k =1}^m \Gamma(1+\sigma_k -x)
\over
\prod_{j =n+1}^p \Gamma(1+\rho_j -x)\,
\prod_{k =m+1}^q \Gamma(x- \sigma_k)} \,
\hbox{e}^{xt}\, dx \,,
\eqno(3.8)$$
where $L$ is an appropriate integration path in the
complex $x$ plane.
\vsh\pni
Now, starting from (3.2) and (3.7)-(3.8) it is
not difficult to arrive at the general $G$ function,
namely at its $ODE$
and at its Mellin-Barnes integral representation,
both given in the Appendix.
For this purpose we need only to carry out some algebraic manipulations
and obvious transformations of variables.
\vsh\pni
We first note that using (3.3) the $ODE$ (3.2) reads
$$ \left[ a_{p\,0}\, \prod_{j=1}^p
\left( {d\over dt} -\rho _j\right) +
a_{q\,1}\,\hbox{e}^{-t}\, \prod_{k=1}^q
\left({d\over dt} -\sigma_k -1\right)\right]\,\psi(t)=0\,. \eqno(3.9)$$
Then, putting
$$ z = c\, \hbox{e}^t\,, \quad u(z) = \psi[t(z)]\,,
\quad a_j= 1 + \rho _j\,,\quad b_k= 1+ \sigma_k\,, \eqno(3.10)$$
and using (3.7), we get from (3.9)
$$ \left[(-1)^{p-m-n}z
\prod_{j=1}^p\left(z{d \over dz}-a_j+1\right)-
\prod_{k=1}^q\left(z{d \over dz}-b_k\right)\right]
u(z)=0 \,, \eqno(3.11)$$
which is just the $ODE$ satisfied by the Meijer $G$ function
of orders $m,n,p,q$, see (A.10).
Of course, at least formally, the Mellin-Barnes
integral
representation of the $G$ function (A.8)-(A.9)
is recovered as well and reads
(setting $s = x$)
$$ u(z) = \rec{2\pi i}\,
\int_L
{
\prod_{k=1}^m \Gamma(b_k- s)\, \prod_{j=1}^n \Gamma(1-a_j + s)
\over
\prod_{k=m+1}^q \Gamma(1-b_k + s)\,\prod_{j=n+1}^p \Gamma(a_j- s)
}
\, z^s \, ds\,.\eqno(3.12)$$
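As a concrete (and purely numerical) illustration of such Mellin-Barnes representations: for $m=1$, $n=0$, $p=0$, $q=1$, $b_1=0$, the integral (3.12) reduces, after the substitution $s\to -s$, to the classical Cahen-Mellin integral, which reproduces $\hbox{e}^{-z}$. The sketch below evaluates it by brute-force quadrature along a vertical line, using a Lanczos approximation for $\Gamma$ on complex arguments; all numerical parameters are our own choices.

```python
import cmath
from math import exp, pi

# Lanczos coefficients (g = 7, n = 9) for the Gamma function on complex arguments.
_G = 7
_C = [0.99999999999980993, 676.5203681218851, -1259.1392167224028,
      771.32342877765313, -176.61502916214059, 12.507343278686905,
      -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7]

def cgamma(z):
    if z.real < 0.5:
        return pi / (cmath.sin(pi * z) * cgamma(1 - z))  # reflection formula
    z = z - 1
    x = _C[0]
    for i in range(1, _G + 2):
        x += _C[i] / (z + i)
    t = z + _G + 0.5
    return cmath.sqrt(2 * pi) * t ** (z + 0.5) * cmath.exp(-t) * x

def cahen_mellin(z, c=1.0, T=40.0, N=4000):
    """(1/(2*pi*i)) * integral of Gamma(s) z**(-s) ds over Re(s) = c > 0,
    the m=1, n=0, p=0, q=1, b_1=0 instance of (3.12) after s -> -s;
    it should reproduce exp(-z).  Trapezoidal rule on [-T, T]."""
    h = 2 * T / N
    acc = 0j
    for j in range(N + 1):
        s = complex(c, -T + j * h)
        w = 0.5 if j in (0, N) else 1.0
        acc += w * cgamma(s) * z ** (-s)
    return (acc * h / (2 * pi)).real
```

Here the vertical line $\hbox{Re}\, s = c > 0$ leaves all poles of $\Gamma(s)$ to its left, as the general theory requires.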
\newpage
\subsection*{4. Conclusions}
We have revisited the 1888 paper (in Italian) by Pincherle
on generalized hypergeometric functions,
based on the duality principle between linear differential
equations and linear difference equations with rational coefficients.
We have pointed out the pioneering contribution
of the Italian mathematician towards the afterwards named Mellin-Barnes
integral representation that he was able to provide
for a special case of a generalized hypergeometric
function introduced by Goursat in 1883.
By extending his original arguments
we have shown how to formally derive the ordinary differential
equation and
the Mellin-Barnes integral representation of the
$G$ functions introduced by Meijer in 1936-1946.
So, in principle, Pincherle could have introduced
the $G$ functions well before Meijer if he had intended to
pursue his original arguments in this direction.
Finally, we like to point out that
the so-called Mellin-Barnes integrals are an efficient
tool to deal with the higher transcendental functions.
In fact, from a pure mathematics viewpoint
they facilitate the representation
of these functions
(as formerly indicated by Pincherle),
and from an applied mathematics viewpoint
they can be successfully adopted to compute
the same functions.
In this respect we refer to the recent paper by
Mainardi, Luchko and Pagnini \cite{Mainardi LUMAPA01},
who have computed the solutions of diffusion-wave equations
of fractional order by using their Mellin-Barnes
integral representation.
\subsection*{Acknowledgements}
\noindent
Research performed under the auspices of the National Group of Mathematical
Physics (G.N.F.M. - I.N.D.A.M.) and partially supported
by the Italian Ministry
of University (M.I.U.R) through the Research Commission of the
University of Bologna and by the National Institute of Nuclear
Physics (INFN) through the Bologna branch (Theoretical Group).
The authors are grateful to Prof. R. Gorenflo for the discussions
and the helpful comments.
\subsection*{Appendix: Some generalizations of the hypergeometric functions}
The purpose of this Appendix is to provide a survey of
some higher transcendental functions that have been proposed
for generalizing the
hypergeometric function.
In particular we shall consider the functions investigated by Pochhammer
(1870) and Goursat (1883), which interested Pincherle
in his 1888 paper, and the $G$ functions introduced by Meijer
(1936-1946), since the latter are re-derived in our
present analysis by extending the
arguments by Pincherle.
Our survey is essentially based on the classical handbook
of the Bateman Project
\cite{Erdelyi HTF} and on the more
recent treatise by Kiryakova \cite{Kiryakova 94}.
\vsh\pni
Let us start by recalling the classical {\it hypergeometric
equation}.
If a homogeneous linear differential equation of the second order
has at most three singular points we may assume that these
are $0, 1, \infty\,. $ If all these singular points are ``regular'', then
the equation can be reduced to the form
$$ z(1-z) \, {d^2 u \over dz^2} +
[c-(a +b +1) z ] \, {d u \over dz} - ab \, u(z)
=0\,,
\eqno(A.1)
$$
where $a,b,c$ are arbitrary complex constants.
This is the {\it hypergeometric equation}.
Taking $c \ne 0,-1,-2,\dots\,,$
and defining the Pochhammer symbol
$$ (\alpha )_n = {\Gamma(\alpha +n)\over \Gamma(\alpha )}\,,
\; \hbox{i.e.} \;
(\alpha )_0=1\,,\;
(\alpha)_n =\alpha (\alpha+1)\dots (\alpha +n-1)\,,\; n=1,2,\dots$$
then the solution of Eq. (A.1), regular at $z=0\,, $
known as {\it Gauss hypergeometric function},
turns out to be
$$ u(z) = \sum_{n=0}^\infty
{ (a)_n\, (b)_n\over (c)_n n!}\, {\displaystyle z^n} := F(a,b;c;z)
\,.\eqno(A.2)$$
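As an elementary numerical check of the series (A.2) (ours, with an arbitrarily chosen evaluation point), its partial sums can be compared with the classical closed form $F(1,1;2;z)=-\ln(1-z)/z$.

```python
from math import log

def hyp2f1(a, b, c, z, terms=200):
    """Partial sum of the Gauss series (A.2); converges for |z| < 1."""
    term, total = 1.0, 1.0
    for n in range(terms):
        # (a)_{n+1}/(a)_n = a + n, and similarly for b, c; n! gains (n+1)
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
        total += term
    return total

z = 0.5
series_value = hyp2f1(1.0, 1.0, 2.0, z)
closed_form = -log(1 - z) / z        # classical evaluation of F(1,1;2;z)
```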
\vsh\pni
The above hypergeometric series can be generalized by introducing
$p$ parameters $a_1,\dots a_p$
(the numerator-parameters) and
$q$ parameters $b_1, \dots, b_q$
(the denominator-parameters).
The ensuing series
$$ \sum_{n=0}^\infty
{ (a_1)_n\,\dots (a_p)_n\over (b_1)_n \dots (b_q)_n}
\, {\displaystyle {z^n \over n!}}
\, :=
\; _pF_q (a_1, \dots, a_p ; b_1, \dots, b_q ;z) \, ,\eqno(A.3)$$
or, in a more compact form,
$$\sum_{n=0}^{\infty}
{\Pi_{j=1}^p (a_j)_n \over \Pi_{k=1}^q (b_k)_n} \,
{\displaystyle {z^n \over n!}} \,:=
\; _pF_q \left[ (a_j)_1^p ; (b_k)_1^q ;z \right]
\eqno(A.3')$$
is known as the {\it generalized hypergeometric series}.
In general
(excepting certain integer values of the parameters
for which the series fails to make sense or terminates
\footnote{If at least one of the denominator parameters $b_k$
($k =1,\dots, q$) is zero or a negative integer, Eq. (A.3) has no meaning
at all, since the denominator of the general term vanishes
for a sufficiently large index. If some of the numerator
parameters are zero or negative integers, then the series terminates
and turns into a {\it hypergeometric polynomial}.})
the series $\, _pF_q$ converges for all finite $z$ if $p\le q\,,$
converges for $|z|<1$ if $p=q+1\,,$ and diverges for
all $z \ne 0$ if $p>q+1\,. $
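These three regimes follow from the ratio of consecutive terms of (A.3), which behaves like $z\, n^{\,p-q-1}$ for large $n$. A small numerical illustration (the parameter values are arbitrary choices of ours):

```python
def term_ratio(num_params, den_params, n, z):
    """Ratio t_{n+1}/t_n of consecutive terms of the series (A.3)."""
    r = z / (n + 1)
    for a in num_params:
        r *= a + n
    for b in den_params:
        r /= b + n
    return r

n = 10**6
r_conv_all  = term_ratio([1.3], [0.7, 2.1], n, 100.0)       # p <= q:  ratio -> 0
r_unit_disk = term_ratio([1.3, 0.7], [2.1], n, 0.5)         # p = q+1: ratio -> z
r_diverge   = term_ratio([1.3, 0.7, 1.1], [2.1], n, 0.01)   # p = q+2: ratio grows
```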
The resulting {\it generalized hypergeometric function}
$u(z) =\, _pF_q$
will
satisfy a {\it generalized hypergeometric equation}.
If we note that Eq. (A.1) satisfied by $u(z) = F(a,b;c;z) =
\,_2F_1(a,b;c;z) $
can be written in the equivalent form
(see {\it e.g.}\ Rainville \cite{Rainville SF60}, \S 46, p. 75) :
$$ \left[z
\left(z{d \over dz}+a\right)\left(z{d \over dz} +b \right)-
z{d \over dz} \left(z{d \over dz}+c-1\right)\right]
u(z)=0 \,, \eqno(A.1')$$
we arrive at the equation of order $n=q+1$ for $u(z)
=\, _pF_q \left[ (a_j)_1^p ; (b_k)_1^q ;z \right]\,:$
$$ \left[z
\prod_{j=1}^p\left(z{d \over dz}+a_j\right)-
z{d \over dz}\prod_{k=1}^{q}\left(z{d \over dz}+b_k-1\right)\right]
u(z)=0 \,. \eqno(A.4)$$
The above equation containing the operator $z d/dz$ can be
written in a more explicit form by using $D = d/dz$,
see {\it e.g.}\ \cite{Erdelyi HTF}, \S 42, p. 184.
Distinguishing between the cases $p\le q$
and $p=q+1\,, $ we get
the following general equations
in $v= v(z)\,:$
$$ z^q D^{q+1} v +
\sum_{\nu =1}^q z^{\nu -1} (A_\nu z - B_\nu) \, D^\nu v + A_0 v = 0\,,
\quad p \le q\,, \eqno(A.5)$$
$$ z^q (1-z)D^{q+1} v +
\sum_{\nu =1}^q z^{\nu -1} (A_\nu z - B_\nu ) \, D^\nu v
+ A_0 v = 0\,,
\quad p =q+1\,, \eqno(A.6)$$
where $A_0, A_\nu, B_\nu$ are constants.
Eq. (A.5) has two singular points, $z=0, \infty$ of which
$z=0$ is of regular type, whereas
Eq. (A.6) has three singular points, $z=0,1, \infty$ of
regular type, like Eq. (A.1).
An equation of the same type as Eq. (A.6) was formerly introduced in
1883 by Goursat \cite{Goursat CR83,Goursat ENS83}
in his essay on hypergeometric
functions of higher order.
\vsh\pni
Another generalization of the Gauss hypergeometric equation
was previously proposed in 1870
by Pochhammer \cite{Pochhammer 70}.
He
investigated the most general homogeneous linear differential equation
of the order $n$ ($n>2$) of Fuchsian type, namely with only ``regular''
singular points in $\{a_1, a_2, \dots, a_n,\infty\}\,. $
The Pochhammer function thus satisfies a differential equation of
the type
$$ \phi _n(z) \, {d^n w \over dz^n} +
\dots + \phi _1(z) \, {d w \over dz} + \phi_0\, w(z) = 0\,,
\eqno(A.7)
$$
where the coefficients $\phi _\nu (z) $ ($\nu = 0, 1 \dots, n$) are
polynomials of degree $\nu \,, $ with
$\phi _n(z) = (z-a_1)(z-a_2)\dots (z-a_n)\,. $
\vsh\pni
The $\,_pF_q\,$ functions satisfying Eqs (A.5)-(A.6)
and the Pochhammer functions satisfying Eq. (A.7)
are not the only generalizations of the Gauss hypergeometric
function (A.2). In 1936 Meijer \cite{Meijer 36}
introduced a new class of transcendental
functions, the so called $G$ functions, which provide
an interpretation of the symbol $\,_pF_q$ when
$ p >q+1\,. $ Originally, the $G$ function was defined in a manner
resembling (A.2). Later \cite{Meijer 46}, this definition was replaced
by one in terms of Mellin-Barnes type integrals.
The latter definition has the advantage that it allows
a greater freedom in the relative values of $p$ and $q$.
Here, following \cite{Erdelyi HTF}, we shall complete
Meijer's definition so as to include all values of $p$ and $q$
without placing any (non-trivial) restriction on $m$ and $n$.
One defines
$$ G^{m,n}_{p,q}
\left[ z \left\vert
{a_1, \dots , a_p\atop b_1, \dots , b_q}
\right.
\right]=
G^{m,n}_{p,q}
\left[ z \left\vert
(a_j)_1^p \atop (b_j)_1^q
\right.
\right] =
\rec{2\pi i}\,
\int_L {\cal{G}}^{m,n}_{p,q} (s) \, z^s \, ds\,, \eqno(A.8)$$
where
$ L$ is a suitably chosen path, $z \neq 0\,,$
$ z^s :=\hbox{exp} \left} \def\r{\right[ s(\ln |z| + i \, \hbox{arg} \,z)\r]$
with a single valued branch of $\hbox{arg}\, z$,
and the integrand is defined as follows
$$ {\cal{G}}^{m,n}_{p,q} (s)=
{
\prod_{k=1}^m \Gamma(b_k- s)\, \prod_{j=1}^n \Gamma(1-a_j + s)
\over
\prod_{k=m+1}^q \Gamma(1-b_k + s)\,\prod_{j=n+1}^p \Gamma(a_j- s)
}\,.\eqno(A.9) $$
In (A.9) an empty product is interpreted as 1, the integers
$m,n,p,q$ (known as {\it orders of the $G$ function}) are such that
$0\le m \le q\,,$
$\,0 \le n \le p\,,$ and the parameters
$a_j$ and $b_k$ are such that
no pole of $\Gamma(b_k-s), $ $k=1,\dots, m,$
coincides with any pole of $\Gamma(1-a_j+s), $ $j=1,\dots, n.$
For the details of the integration path,
which can be of three different types,
we refer to \cite{Erdelyi HTF} (see also \cite{Kiryakova 94}
where an illustration of what these contours can be like
is found).
\vsh\pni
One can establish that the Meijer $G$ function $u(z)$
satisfies the linear ordinary differential equation of
generalized hypergeometric type,
see {\it e.g.}\ \cite{Kiryakova 94}, p. 316, Eq. (A.19),
$$ \left[(-1)^{p-m-n}z
\prod_{j=1}^p\left(z{d \over dz}-a_j+1\right)-
\prod_{k=1}^q\left(z{d \over dz}-b_k\right)\right]
u(z)=0 \,. \eqno(A.10)$$
For more details on the Meijer function and on the
singular points of the above differential equation we
refer to \cite{Kiryakova 94}. Here we limit ourselves to
show how
the generalized hypergeometric function $\,_pF_q$
can be expressed in terms of a Meijer $G$ function
and thus in terms of Mellin-Barnes integral. We have
$$ _pF_q ( (a)_p ;(b)_q ;z ) =
{\Pi_{k=1}^q \Gamma(b_k) \over
\Pi_{j=1}^p \Gamma(a_j)}
G^{1,p}_{p,q+1}
\left[- z \left\vert
{\hfill (1-a_j)_{1}^{p} \hfill \atop
\hfill 0,\, (1-b_k)_{1}^{q} \hfill}
\right. \right] \,, \eqno(A.11)$$
$$ G^{1,p}_{p,q+1} =
{1 \over 2 \pi i} \,\int_{- i \infty}^{+ i\infty}
{\Gamma(a_1+s) \cdots \Gamma(a_p+s)\Gamma(-s) \over
\Gamma(b_1+s) \cdots \Gamma(b_q+s)}
(-z)^s \, ds \,, \eqno(A.12) $$
$$ a_j \ne 0,-1,-2,\dots; \quad j=1,\dots,p; \quad
|\hbox{arg}(1-z)|<\pi \,. \eqno(A.13)$$
Here the path of integration is the imaginary axis (in the complex
$s$-plane) which can be deformed, if necessary, in order to
separate the poles of $\Gamma(a_j+s)$, $j=1,\dots,p$
from those of $\Gamma(-s)\,. $
\vsh\pni
Though the $G$ functions are quite general in nature, there still
exist examples of special functions,
like the Mittag-Leffler and the Wright functions,
which do not form their particular cases.
A more general class which includes
those functions can be achieved by introducing the
Fox $H$ functions \cite{Fox 61}, whose representation in terms of the
Mellin-Barnes integral is a straightforward generalization of that
for the $G$ functions.
For this purpose we need to add to the sets of the complex
parameters $a_j$ and $b_k$ the new sets of positive numbers $\alpha _j$
and $\beta _k$ with $j=1,\dots, p,$ $k=1,\dots,q,$
and modify in the integral of (A.8) the kernel
$ {\cal{G}}^{m,n}_{p,q} (s)$ into
$$ {\cal{H}}^{m,n}_{p,q} (s)=
{
\prod_{k=1}^m \Gamma(b_k- \beta_k s)\,
\prod_{j=1}^n \Gamma(1-a_j + \alpha _j s)
\over
\prod_{k=m+1}^q \Gamma(1-b_k + \beta_k s)\,
\prod_{j=n+1}^p \Gamma(a_j- \alpha_j s)
}\,.\eqno(A.14) $$
Then the Fox $H$ function turns out to be defined as
$$ H^{m,n}_{p,q}(z) = H^{m,n}_{p,q}
\left[z \left\vert
{\hfill (a_j ,\alpha _j )_{j=1,\dots,p} \hfill \atop
\hfill (b_k,\beta _k)_{k=1, \dots,q} \hfill}
\right. \right]=
\rec{2\pi i}\,
\int_L {\cal {H}}^{m,n}_{p,q} (s) \, z^s \, ds\,.\eqno(A.15)$$
We do not pursue our survey any further:
we refer the interested reader to the treatises
on Fox $H$ functions
by Mathai and Saxena \cite{MathaiSaxena H},
Srivastava, Gupta and Goyal \cite{Srivastava H} and
references therein.
\section{Introduction}\label{sec1}
Quantum entanglement \cite{Ent1,Ent2,Ent3} is a quintessential feature of quantum mechanics which distinguishes the quantum from the classical world and plays an important role in quantum information processing. One distinguished property of quantum entanglement without any classical
counterpart is its limited shareability in multipartite quantum systems, known as the monogamy of entanglement (MoE) \cite{BMT,JSK}.
MoE is the fundamental ingredient in many quantum information processing tasks such as the security proof in quantum cryptographic scheme \cite{BCH}
and the security analysis of quantum key distribution \cite{MP}.
For a tripartite quantum state $\rho_{ABC}$ with its reduced
density matrices $\rho_{AB}=\Tr_C {\rho_{ABC}}$ and $\rho_{AC}=\Tr_B \rho_{ABC}$, mathematically MoE can be characterized in terms of some bipartite entanglement measure $\varepsilon$ as
$\varepsilon(\rho_{A|BC})\geqslant\varepsilon(\rho_{AB})+\varepsilon(\rho_{AC})$, where $\varepsilon(\rho_{A|BC})$ denotes the entanglement between subsystem $A$ and the composite subsystem $BC$, and $\varepsilon(\rho_{AB})$ ($\varepsilon(\rho_{AC})$) is the bipartite
entanglement between $A$ and $B$ ($A$ and $C$). This inequality conveys the MoE principle that the amount of
entanglement shared between $A $ and $B$ restricts the
possible amount of entanglement between $A$ and $C$ so
that their sum does not exceed the total bipartite
entanglement between $A$ and the composite $BC$
system. Note that the monogamy inequalities provide an upper bound for the bipartite shareability of entanglement in a multipartite system. It is also known that the
assisted entanglement $\varepsilon^a$ \cite{GG12007,GG2007}, which is a quantity dual to bipartite
entanglement measures, has a dually monogamous property
in multipartite systems. This dually monogamous
property of entanglement is also characterized as a polygamy inequality, which is quantitatively displayed as $\varepsilon^a(\rho_{A|BC})\leqslant\varepsilon^a(\rho_{AB})+\varepsilon^a(\rho_{AC})$ for a tripartite system, where $\varepsilon^a(\cdot)$ is the corresponding entanglement measure of assistance associated to $\varepsilon$. Similarly, a polygamy inequality sets a lower bound for the distribution of bipartite entanglement in multipartite systems.
The first monogamy relation was proven by Coffman {\it et al.} \cite{CV} based on the squared concurrence for arbitrary three-qubit states, known as the CKW inequality. Later, it was generalized to multipartite systems \cite{OTJ,BYK}. Besides concurrence, the monogamy relations are also given by various entanglement measures for multipartite systems \cite{HT,AG,Kim2009,BYK2,ZXN,JZX,Kim2016T1,JZX1,LY,JZX2,JZX5,Kim2018}. The polygamy relation was first obtained in terms of the tangle of assistance for three-qubit systems \cite{GG12007}, and then generalized to multiqubit systems and arbitrary dimensional multipartite systems \cite{Kim2018,Kim2010,BF2009,Kim2012,JZX3,KJS20181,KJS20182,Shi1}.
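As a numerical illustration of the CKW inequality (our own sketch, not taken from the cited works), the following code draws a random three-qubit pure state, computes the one-way tangle $\mathcal{C}^2(\rho_{A|BC})=2(1-{\rm tr}\rho_A^2)$ and the Wootters concurrences of the two-qubit marginals, and checks $\mathcal{C}^2(\rho_{A|BC})\geqslant \mathcal{C}^2(\rho_{AB})+\mathcal{C}^2(\rho_{AC})$; it assumes only {\tt numpy}.

```python
import numpy as np

# Monte-Carlo check of the CKW inequality for a random three-qubit pure state.
# The seed and the Gaussian sampling are illustrative choices of ours.
rng = np.random.default_rng(7)

psi = rng.normal(size=8) + 1j * rng.normal(size=8)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2, 2, 2)  # axes: A,B,C,A',B',C'

rho_A  = np.einsum('abcxbc->ax', rho)                      # trace over B and C
rho_AB = np.einsum('abcxyc->abxy', rho).reshape(4, 4)      # trace over C
rho_AC = np.einsum('abcxby->acxy', rho).reshape(4, 4)      # trace over B

def concurrence(r):
    """Wootters concurrence of a two-qubit density matrix."""
    sy = np.array([[0, -1j], [1j, 0]])
    sy2 = np.kron(sy, sy)
    R = r @ sy2 @ r.conj() @ sy2
    lam = np.sqrt(np.abs(np.sort(np.linalg.eigvals(R).real)[::-1]))
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

tangle_A_BC = 2 * (1 - np.trace(rho_A @ rho_A).real)  # C^2(A|BC) for a pure state
ckw_gap = tangle_A_BC - concurrence(rho_AB)**2 - concurrence(rho_AC)**2
```

A nonnegative `ckw_gap` (up to numerical noise) is exactly the CKW inequality for this state.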
However, it was found that the CKW inequality is invalid for higher-dimensional systems \cite{OYC}. In \cite{LC} the authors discovered that in some higher-dimensional systems there is no nontrivial monogamy relation satisfied by any additive entanglement measures. It seems that only the squashed entanglement satisfies the monogamy relation for arbitrary dimensional systems \cite{CM}. Therefore, the MoE for high dimensional systems has attracted much attention.
In \cite{Kim2008} Kim {\it et al.} proved that the $n$-qudit generalized $W$-class (GW) states satisfy the monogamy inequality in terms of the squared concurrence. In \cite{Choi2015} Choi and Kim showed that the superposition of the generalized $W$-class states and vacuum (GWV) states satisfy the strong monogamy inequality based on the squared convex roof extended negativity. In \cite{Kim2016} Kim focused on a large class of mixed states that are in a partially coherent superposition of a generalized $W$-class state and the vacuum, and showed that those states obey the strong monogamy inequality by using the squared convex roof extended negativity. Very recently, Shi {\it et al.} presented in \cite{Shi2020} new monogamy and polygamy relations with respect to any partition for $n$-qudit GWV states by using the analytical formula of the Tsallis-$q$ entanglement (T$q$E) \cite{Kim2010}. Moreover, Liang {\it et al.} presented in \cite{Liang2020} the monogamy and polygamy relations for GWV states in terms of the R\'enyi-$\alpha$ entanglement (R$\alpha$E) \cite{Renyil,KimR2010}. Inspired by these developments, we investigate further the monogamy and polygamy relations for the GWV states in high dimensional quantum systems.
In this paper, by using the Hamming weight of the binary vector associated with the distribution of subsystems, we establish a class of monogamy and polygamy relations for the GWV states based on T$q$E and R$\alpha$E. We derive monogamy and polygamy inequalities which are tighter than those given in \cite{Shi2020,Liang2020}, thus giving rise to finer characterizations of the entanglement distributions among the high dimensional quantum subsystems for the GWV states.
\section{Tighter monogamy and polygamy relations based on T$q$E and T$q$EoA for GWV states}\label{sec3}
The $n$-qubit $W$-class states and the $n$-qudit generalized $W$-class states are defined, respectively, by
\begin{eqnarray}
\ket{\psi}_{A_1 A_2 \cdots A_n}=&
a_1 \ket{10\cdots0}+a_2 \ket{01\cdots0}+\cdots+a_n \ket{00\cdots1}
\label{qubitWstate}
\end{eqnarray}
and
\begin{eqnarray}
\left|W_n^d \right\rangle_{A_1\cdots A_n}=\sum_{i=1}^{d-1}(&a_{1i}{\ket {i0\cdots 0}} +a_{2i}{\ket {0i\cdots 0}}+\cdots +a_{ni}{\ket {00\cdots 0i}}),
\label{quditWstate}
\end{eqnarray}
where $\sum_{i=1}^{n}|a_i|^2 =1$ and $\sum_{s=1}^{n}\sum_{i=1}^{d-1}|a_{si}|^2=1$. When $d=2$, (\ref{quditWstate}) reduces to the $n$-qubit $W$-class states.
Choi and Kim introduced in \cite{Choi2015} the GWV state $\ket{\psi}_{A_1\cdots A_n}$,
\begin{equation}
\ket{\psi}_{A_1A_2\cdots A_n}=\sqrt{p}\left|W_n^d \right\rangle_{A_1\cdots A_n}+\sqrt{1-p}\ket{0\cdots 0}_{A_1\cdots A_n}
\label{GWV}
\end{equation}
for $0\leqslant p \leqslant 1$.
Let $\rho_{A_{i_1}\cdots A_{i_{m}}}$ denote the reduced density matrix with respect to $\ket{\psi}_{A_1\cdots A_n}$ in $m$-qudit subsystems $A_{i_1}\cdots A_{i_{m}}$ with $2 \leqslant m \leqslant n-1$. It has been shown that for any pure state decomposition of $\rho_{A_{i_1}\cdots A_{i_{m}}}$,
\begin{equation}
\rho_{A_{i_1}\cdots A_{i_{m}}}=\sum_{k}q_k\ket{\phi_k}_{A_{i_1}\cdots A_{i_{m}}}\bra{\phi_k},
\label{rhoa1aj1ajm-1}
\end{equation}
$\ket{\phi_k}_{A_{i_1}\cdots A_{i_{m}}}$ is a superposition of an $m$-qudit generalized $W$-class state and vacuum \cite{Choi2015}. Moreover,
for an $n$-qudit GWV state $\ket{\psi}_{A_1A_2\cdots A_n}$ and an
arbitrary partition $P=\{P_1,\cdots,P_r\}$ of the set $S=\{A_1,\cdots,A_n\}$, $r\leqslant n$,
$P_i\cap P_j=\emptyset$ $(i\neq j)$ and $\bigcup_iP_i=S$,
the state $\ket{\psi}_{P_1P_2\cdots P_r}$ is also a GWV state \cite{Kim2016}.
The T$q$E of a bipartite pure state $|\psi\rangle_{AB}$ is defined as \cite{Kim2010}
\begin{equation}
T_q(|\psi\rangle_{AB})=S_q(\rho_A)=\frac{1}{q-1}(1-{\rm tr}\rho_A^q),
\end{equation}
where $q>0$ and $q\neq1$.
When $q$ tends to 1, $T_q(\rho)$ converges to the von Neumann entropy, i.e., $\lim\limits_{q\rightarrow 1}T_q(\rho)=-{\rm tr}\rho\log_2\rho=S(\rho)$.
The Tsallis-$q$ entanglement of a bipartite mixed state $\rho_{AB}$ is given by
\begin{equation}
T_q(\rho_{AB})=\min\limits_{\{p_i,|\psi_i\rangle\}}\sum\limits_{i}p_iT_q(|\psi_i\rangle)
\end{equation}
with the minimum taken over all possible pure state decompositions of $\rho_{AB}$. As a dual concept of T$q$E, the Tsallis-$q$ entanglement of assistance (T$q$EoA) is defined as
\begin{equation}
T_q^a(\rho_{AB})=\max\limits_{\{p_i,|\psi_i\rangle\}}\sum\limits_{i}p_iT_q(|\psi_i\rangle)
\end{equation}
with the maximum taken over all possible pure state decompositions of $\rho_{AB}$.
There is an analytic relationship
between the Tsallis-$q$ entanglement and concurrence \cite{Concurrence1,Concurrence2} for $q\in[\frac{5-\sqrt{13}}{2}, \frac{5+\sqrt{13}}{2}]$ \cite{Yuan2016},
\begin{equation}\label{T1}
T_q(|\psi\rangle_{AB})=g_q(\mathcal{C}^2(|\psi\rangle_{AB})),
\end{equation}
where
\begin{equation}\label{T2}
g_q(x)=\frac{1}{q-1}\Big[1-\Big(\frac{1+\sqrt{1-x}}{2}\Big)^q-\Big(\frac{1-\sqrt{1-x}}{2}\Big)^q\Big].
\end{equation}
It has also been shown that $T_q(|\psi\rangle)=g_q(\mathcal{C}^2(|\psi\rangle))$ for any $2\otimes m~(m\geqslant2)$ pure state $|\psi\rangle$,
and $T_q(\rho)=g_q(\mathcal{C}^2(\rho))$ for 2-qubit mixed state $\rho$ \cite{Kim2010}.
Therefore, (\ref{T1}) holds for any $q$ such that $g_q(x)$ in (\ref{T2}) is monotonically increasing and convex.
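The relation $T_q=g_q(\mathcal{C}^2)$ can be checked directly on two-qubit pure states in Schmidt form $\sqrt{\lambda}\ket{00}+\sqrt{1-\lambda}\ket{11}$, for which $\rho_A$ has eigenvalues $\lambda$, $1-\lambda$ and $\mathcal{C}^2=4\lambda(1-\lambda)$. A minimal sketch (the value of $q$ and the sample points are arbitrary choices of ours):

```python
from math import sqrt

def g_q(x, q):
    """The function g_q of Eq. (T2)."""
    r = sqrt(1 - x)
    return (1 - ((1 + r) / 2)**q - ((1 - r) / 2)**q) / (q - 1)

def tsallis_entropy(probs, q):
    """S_q for a density matrix with eigenvalues `probs`."""
    return (1 - sum(p**q for p in probs)) / (q - 1)

q = 1.8          # a point inside [(5-sqrt(13))/2, (5+sqrt(13))/2]
errs = []
for lam in (0.1, 0.25, 0.5, 0.9):
    # |psi> = sqrt(lam)|00> + sqrt(1-lam)|11>: rho_A has eigenvalues lam, 1-lam
    C2 = 4 * lam * (1 - lam)                       # squared concurrence
    direct = tsallis_entropy([lam, 1 - lam], q)    # T_q from S_q(rho_A)
    errs.append(abs(direct - g_q(C2, q)))
max_err = max(errs)
```

The agreement is exact because $(1\pm\sqrt{1-\mathcal{C}^2})/2$ recovers the two eigenvalues of $\rho_A$.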
Let $\rho_{A_{i_1}A_{i_2}\cdots A_{i_m}}$ be the reduced density matrix of $\ket{\psi}_{A_1\cdots A_n}$ in (\ref{GWV}). Denote by $\{P_1,\cdots,P_r\}$ a partition of the set $\{A_{i_1},A_{i_2},\cdots,A_{i_m}\}$, $r\leqslant m\leqslant n$. In \cite{Shi2020} Shi {\it et al.} proved that
\begin{eqnarray}\label{T8}
T_q(\rho_{A_{i_1}|A_{i_2}\cdots A_{i_m}})=g_q(C^2(\rho_{A_{i_1}|A_{i_2}\cdots A_{i_m}}))
\end{eqnarray}
for $q\in[\frac{5-\sqrt{13}}{2},\frac{5+\sqrt{13}}{2}]$, and
\begin{eqnarray}\label{T7}
T_q^a(\rho_{A_{i_1}|A_{i_2}\cdots A_{i_m}})=T_q(\rho_{A_{i_1}|A_{i_2}\cdots A_{i_m}})=g_q(C^2(\rho_{A_{i_1}|A_{i_2}\cdots A_{i_m}}))
\end{eqnarray}
for $q\in[\frac{5-\sqrt{13}}{2}, 2]\cup[3, \frac{5+\sqrt{13}}{2}]$.
Furthermore, for GWV states the authors in \cite{Shi2020} established the monogamy relation
\begin{eqnarray}\label{T3}
T_q^\mu(\rho_{P_1|P_2\cdots P_r})\geqslant \sum_{j=2}^{r} T_q^\mu(\rho_{P_1P_j})
\end{eqnarray}
for $q\in[\frac{5-\sqrt{13}}{2},\frac{5+\sqrt{13}}{2}]$ and $\mu\in[2,\infty)$, and the general monogamy relation,
\begin{eqnarray}\label{T4}
T_q^\gamma(\rho_{P_1|P_2\cdots P_r})&\geqslant& \sum_{j=2}^{t}(2^{\frac{\gamma}{\mu}}-1)^{j-2}T_q^\gamma(\rho_{P_1P_j})+(2^{\frac{\gamma}{\mu}}-1)^t\sum_{j=t+1}^{r-1}T_q^\gamma(\rho_{P_1P_j})\nonumber\\
&&\ \ +(2^{\frac{\gamma}{\mu}}-1)^{t-1}T_q^\gamma(\rho_{P_1P_r}),
\end{eqnarray}
provided that $T_q(\rho_{P_1P_i})\leqslant T_q(\rho_{P_1|P_{i+1}\cdots P_r}) $ for $i=2,3,\cdots, t$, and $T_q(\rho_{P_1P_j})\geqslant T_q(\rho_{P_1|P_{j+1}\cdots P_r})$ for $j=t+1,\cdots, r-1$, with $\gamma\in[0,\mu]$ and $\mu\in[2,\infty)$.
For $q\in[\frac{5-\sqrt{13}}{2}, 2]\cup[3, \frac{5+\sqrt{13}}{2}]$, the following polygamy relation based on the T$q$EoA has been obtained \cite{Shi2020},
\begin{eqnarray}\label{T5}
(T_q^a(\rho_{P_1|P_2\cdots P_r}))^\mu\leqslant \sum_{j=2}^{r} (T_q^a(\rho_{P_1P_j}))^\mu
\end{eqnarray}
with $\mu\in(0,1]$.
Next, we provide a class of monogamy and polygamy inequalities which are tighter than inequalities (\ref{T3}), (\ref{T4}) and (\ref{T5}), respectively, by using the following lemma \cite{Yang2019}.
\begin{lemma}\label{lem1}
For any real numbers $x, k$ and $t$, we have
$\mathrm{(a)}$ $(1+x)^t\geqslant 1+\frac{(1+k)^t -1}{k^t} x^t$ for $0\leqslant x \leqslant k\leqslant1$, $t\geqslant1$;
$\mathrm{(b)}$ $(1+x)^t\geqslant 1+\frac{(1+k)^t -1}{k^t} x^t$ for $x\geqslant k\geqslant 1$, $0\leqslant t\leqslant1$;
$\mathrm{(c)}$ $(1+x)^t \leqslant 1+\frac{(1+k)^t -1}{k^t}x^t$ for $0\leqslant x \leqslant k\leqslant1$, $0\leqslant t\leqslant1$.
\end{lemma}
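Lemma \ref{lem1} is a purely analytic statement, so it can be stress-tested numerically; the sampling ranges below are our own choices.

```python
import random

# Randomized check of the three inequalities of Lemma 1.
random.seed(0)

def gap(x, k, t):
    # (1+x)^t - [ 1 + ((1+k)^t - 1)/k^t * x^t ]
    return (1 + x)**t - (1 + ((1 + k)**t - 1) / k**t * x**t)

ok_a = ok_b = ok_c = True
for _ in range(2000):
    # (a): 0 <= x <= k <= 1, t >= 1  ->  gap >= 0
    k = random.uniform(1e-3, 1.0); x = random.uniform(0.0, k); t = random.uniform(1.0, 5.0)
    ok_a &= gap(x, k, t) >= -1e-12
    # (b): x >= k >= 1, 0 <= t <= 1  ->  gap >= 0
    k = random.uniform(1.0, 5.0); x = random.uniform(k, 10.0); t = random.uniform(0.0, 1.0)
    ok_b &= gap(x, k, t) >= -1e-12
    # (c): 0 <= x <= k <= 1, 0 <= t <= 1  ->  gap <= 0
    k = random.uniform(1e-3, 1.0); x = random.uniform(0.0, k); t = random.uniform(0.0, 1.0)
    ok_c &= gap(x, k, t) <= 1e-12
```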
\subsection{Tighter monogamy relations in terms of T$q$E}\label{section3-1}
For any nonnegative integer $j$ and its binary expansion $j=\sum\limits_{i=0}^{s-1} j_i 2^i$, with $\log_{2}j \leqslant s$ and $j_i \in \{0, 1\}$ for $i=0, \cdots, s-1$,
one can define a unique binary vector
$\overrightarrow{j}=\left(j_0,~ j_1,~\cdots,~j_{s-1}\right)$. The Hamming weight $\omega_{H}(\overrightarrow{j})$ of $\overrightarrow{j}$ is defined as the number of $1$'s in $\{j_0,~j_1,~\cdots,~j_{s-1}\}$.
The Hamming weight $\omega_{H}(\overrightarrow{j})$ is bounded above by $\log_{2}j$,
\begin{equation}\label{weight}
\omega_{H}\left(\overrightarrow{j}\right)\leqslant \log_{2}j \leqslant j.
\end{equation}
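For concreteness, the binary vector and the Hamming weight can be computed as follows (a small illustrative helper, not part of the proof):

```python
def binary_vector(j, s):
    """The vector (j_0, ..., j_{s-1}) of the binary expansion j = sum_i j_i 2^i."""
    return [(j >> i) & 1 for i in range(s)]

def hamming_weight(j):
    """Number of 1's in the binary expansion of j."""
    return bin(j).count("1")

# e.g. j = 5 = 1*2^0 + 0*2^1 + 1*2^2 has binary vector (1, 0, 1, 0) for s = 4
vec5 = binary_vector(5, 4)
w5 = hamming_weight(5)
```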
In the following we denote by $\rho_{A_{i_1}A_{i_2}\cdots A_{i_{m}}}$ the reduced density matrices of a GWV state $\ket{\psi}_{A_1\cdots A_n}$ given in (\ref{GWV}), and $\{P,P_0,P_1,\cdots,P_{r-1}\}$ a partition of the set $\{A_{i_1},A_{i_2},\cdots,A_{i_m}\}$, $r\leqslant m-1\leqslant n-1$.
\begin{theorem}\label{thm10}
If \begin{equation}\label{thm10:1}
k T_q^2(\rho_{PP_j})\geqslant T_q^2(\rho_{PP_{j+1}})\geqslant 0
\end{equation}
for $j=0,1,\cdots, r-2$ and $0<k\leqslant 1$, we have
\begin{eqnarray}\label{thm10:2}
T_q^\beta(\rho_{P|P_0\cdots P_{r-1}})\geqslant\sum\limits_{j=0}^{r-1}(\mathcal{K}_\beta)^{\omega_H(\overrightarrow{j})}T_q^\beta(\rho_{ PP_j}),
\end{eqnarray}
where $\beta\in[2, \infty)$, $q\in [\frac{5-\sqrt{13}}{2},\frac{5+\sqrt{13}}{2}]$, $\mathcal{K}_\beta=\frac{(1+k)^\frac{\beta}{2}-1}{k^\frac{\beta}{2}}$.
\end{theorem}
\begin{proof}
From inequality (\ref{T3}), one has $T_q^2(\rho_{P|P_0\cdots P_{r-1}})\geqslant \sum\limits_{j=0}^{r-1} T_q^2(\rho_{PP_j})$. Thus, it is sufficient to show that
\begin{equation}\label{thm10:3}
\Bigg(\sum\limits_{j=0}^{r-1} T_q^2(\rho_{PP_j})\Bigg)^{\frac{\beta}{2}}
\geqslant\sum\limits_{j=0}^{r-1}(\mathcal{K}_\beta)^{\omega_H(\overrightarrow{j})}T_q^\beta(\rho_{ PP_j}).
\end{equation}
First, we prove that the inequality (\ref{thm10:3}) holds for the case of $r=2^s$ by using mathematical induction on $s$.
For $s=1$, using Lemma \ref{lem1} $\mathrm{(a)}$, we have
\begin{eqnarray}\label{thm10:4}
(T_q^2(\rho_{PP_0})+T_q^2(\rho_{PP_1}))^{\frac{\beta}{2}}
&=&T_q^\beta(\rho_{PP_0}) \Big(1+\frac{T_q^2(\rho_{PP_1})}{T_q^2(\rho_{PP_0})}\Big)^{\frac{\beta}{2}} \nonumber \\
&\geqslant& T_q^\beta(\rho_{PP_0}) \Bigg[1+\mathcal{K}_\beta\Bigg(\frac{T_q^\beta(\rho_{PP_1})}{T_q^\beta(\rho_{PP_0})}\Bigg)\Bigg] \nonumber \\
&=&T_q^\beta(\rho_{PP_0})+\mathcal{K}_\beta T_q^\beta(\rho_{PP_1}).
\end{eqnarray}
Thus, the inequality (\ref{thm10:3}) holds for $s=1$.
Assume that the inequality (\ref{thm10:3}) holds for $r=2^{s-1}$ with $s\geqslant 2$. Consider the case of $r=2^s$.
From (\ref{thm10:1}) we have $T_q^2(\rho_{PP_{j+2^{s-1}}})\leqslant k^{2^{s-1}}T_q^2(\rho_{PP_j})$ for $j=0,1,\cdots, 2^{s-1}-1$.
Therefore,
\begin{equation*}
\frac{\sum\nolimits_{j=2^{s-1}}^{2^s-1}T_q^2(\rho_{PP_j})}{\sum\nolimits_{j=0}^{2^{s-1}-1}
T_q^2(\rho_{PP_j})}\leqslant k^{2^{s-1}}\leqslant k\leqslant1.
\end{equation*}
Again using Lemma \ref{lem1} $\mathrm{(a)}$, we have
\begin{eqnarray}\label{thm10:5}
\Bigg(\sum\limits_{j=0}^{2^s-1}T_q^2(\rho_{PP_j})\Bigg)^{\frac{\beta}{2}}
&=&\Bigg(\sum\limits_{j=0}^{2^{s-1}-1}T_q^2(\rho_{PP_j})\Bigg)^{\frac{\beta}{2}}
\Bigg(1+\frac{\sum_{j=2^{s-1}}^{2^s-1}T_q^2(\rho_{PP_j})}{\sum_{j=0}^{2^{s-1}-1}T_q^2
(\rho_{PP_j})}\Bigg)^{\frac{\beta}{2}} \nonumber\\
&\geqslant& \Bigg(\sum\limits_{j=0}^{2^{s-1}-1}T_q^2(\rho_{PP_j})\Bigg)^{\frac{\beta}{2}}
\Bigg[1+\mathcal{K}_\beta\Bigg(\frac{\sum_{j=2^{s-1}}^{2^s-1}T_q^2(\rho_{PP_j})}{\sum_{j=0}^{2^{s-1}-1}T_q^2
(\rho_{PP_j})}\Bigg)^{\frac{\beta}{2}}\Bigg] \nonumber\\
&=&\Bigg(\sum\limits_{j=0}^{2^{s-1}-1}T_q^2(\rho_{PP_j})\Bigg)^{\frac{\beta}{2}}+\mathcal{K}_\beta\Bigg(\sum\limits_{j=2^{s-1}}^{2^s-1}T_q^2(\rho_{PP_j})\Bigg)
^{\frac{\beta}{2}}.
\end{eqnarray}
From the induction hypothesis, we have
\begin{equation}\label{thm10:6}
\Bigg(\sum\limits_{j=0}^{2^{s-1}-1}T_q^2(\rho_{PP_j})\Bigg)^{\frac{\beta}{2}} \geqslant
\sum\limits_{j=0}^{2^{s-1}-1}(\mathcal{K}_\beta)^{\omega_H(\overrightarrow{j})}
T_q^\beta(\rho_{PP_j}).
\end{equation}
By relabeling the subsystems, we can easily get
\begin{equation}\label{thm10:7}
\Bigg(\sum\limits_{j=2^{s-1}}^{2^s-1}T_q^2(\rho_{PP_j})\Bigg)^{\frac{\beta}{2}} \geqslant
\sum\limits_{j=2^{s-1}}^{2^s-1}(\mathcal{K}_\beta)^{\omega_H(\overrightarrow{j})-1}T_q^\beta(\rho_{PP_j}).
\end{equation}
From inequality (\ref{thm10:5}) together with inequalities (\ref{thm10:6}) and (\ref{thm10:7}), we have
\begin{equation}
\Bigg(\sum\limits_{j=0}^{2^s-1}T_q^2(\rho_{PP_j})\Bigg)^{\frac{\beta}{2}}\geqslant
\sum\limits_{j=0}^{2^s-1}(\mathcal{K}_\beta)^{\omega_H(\overrightarrow{j})}T_q^\beta(\rho_{PP_j}).
\end{equation}
Now we extend the above conclusion to arbitrary integer $r$. Note that there always exists some $s$ such that $0<r\leqslant 2^s$. Let us consider a $(2^s+1)$-partite quantum state,
\begin{equation}\label{thm10:8}
\gamma_{PP_0P_1\ldots P_{2^s-1}}=\rho_{PP_0P_1\cdots P_{r-1}}\otimes \sigma_{P_r\cdots P_{2^s-1}},
\end{equation}
which is the tensor product of $\rho_{PP_0P_1\cdots P_{r-1}}$ and an arbitrary $(2^s-r)$-partite state $\sigma_{P_r\cdots P_{2^s-1}}$.
Applying the result just proved to this state, we have
\begin{equation}
T_q^\beta(\gamma_{P|P_0P_1\cdots P_{2^s-1}})
\geqslant\sum\limits_{j=0}^{2^s-1}(\mathcal{K}_\beta)^{\omega_H(\overrightarrow{j})}T_q^\beta(\gamma_{PP_j}),
\end{equation}
where $\gamma_{PP_j}$ is the reduced density matrix of $\gamma_{PP_0P_1\cdots P_{2^s-1}}$, $j=0,1,\cdots,2^s-1$.
Taking into account the following obvious facts: $T_q\left(\gamma_{P|P_0 P_1 \cdots P_{2^s-1}}\right)=T_q\left(\rho_{P|P_0 P_1 \cdots P_{r-1}}\right)$,
$T_q\left(\gamma_{PP_j}\right)=0$ for $j=r, \cdots , 2^s-1$,
and $T_q(\gamma_{PP_j})=T_q(\rho_{PP_j})$ for each $j=0, \cdots , r-1$, we get
\begin{eqnarray}
T_q^\beta(\rho_{P|P_0P_1\cdots P_{r-1}}) &=&T_q^\beta(\gamma_{P|P_0P_1\cdots P_{2^s-1}}) \nonumber\\
&\geqslant& \sum\limits_{j=0}^{2^s-1}(\mathcal{K}_\beta)^{\omega_H(\overrightarrow{j})}T_q^\beta(\gamma_{PP_j}) \nonumber\\
&= &\sum\limits_{j=0}^{r-1}(\mathcal{K}_\beta)^{\omega_H(\overrightarrow{j})}T_q^\beta(\rho_{PP_j}).
\end{eqnarray}
This completes the proof.
\end{proof}
\begin{remark} Since $(\mathcal{K}_\beta)^{\omega_H(\overrightarrow{j})}\geqslant 1$ for any $\beta\geqslant2$, we have
\begin{equation}\label{re3}
T_q^\beta(\rho_{P|P_0\cdots P_{r-1}})\geqslant\sum\limits_{j=0}^{r-1}(\mathcal{K}_\beta)^{\omega_H(\overrightarrow{j})}T_q^\beta(\rho_{ PP_j})\geqslant\sum\limits_{j=0}^{r-1}T_q^\beta(\rho_{ PP_j}).
\end{equation}
Therefore, we obtain a monogamy relation based on T$q$E with a larger lower bound than (\ref{T3}) in Ref.~\cite{Shi2020}.
In Theorem \ref{thm10}, when $k=1$, for any GWV state whose partitions $P_0, P_1, \cdots, P_{r-1}$ are ordered such that $T_q(\rho_{PP_j})\geqslant T_q(\rho_{PP_{j+1}})\geqslant 0$, $j=0,1,\cdots, r-2$, we get
\begin{equation}
T_q^\beta(\rho_{P|P_0\cdots P_{r-1}})\geqslant\sum\limits_{j=0}^{r-1}(2^{\frac{\beta}{2}}-1)^{\omega_H(\overrightarrow{j})}T_q^\beta(\rho_{ PP_j}),
\end{equation}
which is tighter than the inequality
(\ref{T3}) in \cite{Shi2020}.
When $0<k<1$, for the GWV states satisfying the conditions (\ref{thm10:1}), we can also improve the monogamy relations of Ref.~\cite{Shi2020}, as seen from (\ref{re3}).
Furthermore, as $\frac{(1+k)^\frac{\beta}{2}-1}{k^\frac{\beta}{2}}$ is a decreasing function of $k$ for $k\in(0,1]$ and $\beta\geqslant2$, the inequality (\ref{thm10:2}) gets tighter as $k$ decreases.
\end{remark}
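Once the squared entanglements are fixed, the weighted inequality above is a purely algebraic statement about nonnegative reals. A minimal numerical sanity check (the helper names and sample values below are ours, not from the paper) of the bound $\big(\sum_j T_q^2(\rho_{PP_j})\big)^{\beta/2}\geqslant\sum_j(\mathcal{K}_\beta)^{\omega_H(\overrightarrow{j})}T_q^\beta(\rho_{PP_j})$ under the condition $k\,T_q^2(\rho_{PP_j})\geqslant T_q^2(\rho_{PP_{j+1}})$ can be sketched as:

```python
def hamming_weight(j):
    # omega_H(j): number of 1s in the binary expansion of j
    return bin(j).count("1")

def K(k, p):
    # the weight factor K_p = ((1+k)^p - 1) / k^p
    return ((1 + k) ** p - 1) / k ** p

# hypothetical squared entanglements T_q^2(rho_{PP_j}) obeying k*T2[j] >= T2[j+1]
k, beta = 0.5, 3.0
T2 = [0.40, 0.20, 0.10, 0.05]
assert all(k * T2[j] >= T2[j + 1] for j in range(len(T2) - 1))

lhs = sum(T2) ** (beta / 2)                    # (sum_j T_j^2)^{beta/2}
Kb = K(k, beta / 2)
rhs = sum(Kb ** hamming_weight(j) * t ** (beta / 2) for j, t in enumerate(T2))
plain = sum(t ** (beta / 2) for t in T2)       # unweighted bound, as in (T3)
assert lhs >= rhs >= plain                     # weighted bound is valid and tighter
```

Since $\mathcal{K}_\beta\geqslant1$ here, the weighted sum dominates the unweighted one, consistent with (\ref{re3}).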
\begin{theorem}\label{thm11}
When $q\in [\frac{5-\sqrt{13}}{2},\frac{5+\sqrt{13}}{2}]$, we have
\begin{eqnarray}\label{thm11:1}
T_q^\beta(\rho_{P|P_0\cdots P_{r-1}})\geqslant\sum\limits_{j=0}^{r-1}(\mathcal{K}_\beta)^{j}T_q^\beta(\rho_{ PP_j})
\end{eqnarray}
conditioned that
\begin{equation}\label{thm11:2}
k T_q^2(\rho_{PP_l})\geqslant \sum\limits_{j=l+1}^{r-1}T_q^2(\rho_{PP_{j}})
\end{equation}
for $l=0,1,\cdots, r-2$, $0<k\leqslant1$, where $\beta\in[2,\infty)$ and $\mathcal{K}_\beta=\frac{(1+k)^\frac{\beta}{2}-1}{k^\frac{\beta}{2}}$.
\end{theorem}
\begin{proof}
From inequality (\ref{T3}), we only need to prove
\begin{equation}\label{thm11:3}
\Big(\sum_{j=0}^{r-1}T_q^2\left(\rho_{PP_j}\right)\Big)^{\frac{\beta}{2}}
\geqslant \sum_{j=0}^{r-1} ( \mathcal{K}_\beta)^{j}T_q^\beta(\rho_{PP_j}).
\end{equation}
We use mathematical induction on $r$. It is obvious that inequality (\ref{thm11:3}) holds for $r=2$ from (\ref{thm10:4}). Assume that it also holds for any positive integer less than $r$. Since the condition (\ref{thm11:2}) with $l=0$ gives $\frac{\sum\limits_{j=1}^{r-1}T_q^2(\rho_{PP_{j}})}{T_q^2(\rho_{PP_0})}\leqslant k$, we have
\begin{eqnarray}
\left(\sum_{j=0}^{r-1}T_q^2\left(\rho_{PP_j}\right)\right)^{\frac{\beta}{2}}
&=&T_q^\beta(\rho_{PP_0})
\Bigg(1+\frac{\sum_{j=1}^{r-1}T_q^2(\rho_{PP_j})}
{T_q^2(\rho_{PP_0})} \Bigg)^{\frac{\beta}{2}}\nonumber\\
&\geqslant& T_q^\beta(\rho_{PP_0})\Bigg[1+ \mathcal{K}_\beta\Bigg(\frac{\sum\nolimits_{j=1}^{r-1}T_q^2(\rho_{PP_{j}})}{T_q^2(\rho_{PP_0})}\Bigg)^{\frac{\beta}{2}}\Bigg]\nonumber\\
&=&T_q^\beta(\rho_{PP_0})+\mathcal{K}_\beta\Bigg(\sum\limits_{j=1}^{r-1}T_q^2(\rho_{PP_{j}})\Bigg)^{\frac{\beta}{2}}\nonumber\\
&\geqslant& T_q^\beta(\rho_{PP_0})+\mathcal{K}_\beta \sum\limits_{j=1}^{r-1}(\mathcal{K}_\beta)^{j-1}T_q^\beta(\rho_{ PP_j})\nonumber\\
&=&\sum\limits_{j=0}^{r-1}(\mathcal{K}_\beta)^{j}T_q^\beta(\rho_{ PP_j}),
\end{eqnarray}
where the first inequality is due to Lemma \ref{lem1} $\mathrm{(a)}$ and the second inequality is due to the induction hypothesis.
\end{proof}
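The induction step above reduces Theorem \ref{thm11} to an inequality between real numbers, which can be spot-checked numerically; the sample values below are hypothetical, chosen only to satisfy the condition (\ref{thm11:2}):

```python
def K(k, p):
    # K_p = ((1+k)^p - 1) / k^p
    return ((1 + k) ** p - 1) / k ** p

# hypothetical squared entanglements obeying (thm11:2): k*T2[l] >= sum of the tail
k, beta = 0.8, 2.5
T2 = [0.50, 0.25, 0.10, 0.04]
assert all(k * T2[l] >= sum(T2[l + 1:]) for l in range(len(T2) - 1))

lhs = sum(T2) ** (beta / 2)                              # (sum_j T_j^2)^{beta/2}
Kb = K(k, beta / 2)
rhs = sum(Kb ** j * t ** (beta / 2) for j, t in enumerate(T2))
assert lhs >= rhs                                        # inequality (thm11:3)
```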
According to inequality (\ref{weight}), we obtain
\begin{equation*}
T_q^\beta(\rho_{P|P_0\cdots P_{r-1}})\geqslant\sum\limits_{j=0}^{r-1}(\mathcal{K}_\beta)^{j}T_q^\beta(\rho_{ PP_j})\geqslant\sum\limits_{j=0}^{r-1}(\mathcal{K}_\beta)^{\omega_H(\overrightarrow{j})}T_q^\beta(\rho_{ PP_j})
\end{equation*}
for $\beta\geqslant2$. Therefore the inequality (\ref{thm11:1}) of Theorem \ref{thm11} is tighter than the inequality (\ref{thm10:2}) of Theorem \ref{thm10} under certain conditions.
In general, the conditions (\ref{thm11:2}) are not always satisfied. We therefore derive the following monogamy inequality under different conditions.
\begin{theorem}\label{thm14}
When $q\in [\frac{5-\sqrt{13}}{2},\frac{5+\sqrt{13}}{2}]$, we have
\begin{eqnarray}\label{thm14:1}
T_q^\beta(\rho_{P|P_0\cdots P_{r-1}})&\geqslant& \sum_{j=0}^{t}(\mathcal{K}_\beta)^{j}T_q^\beta(\rho_{PP_j})+(\mathcal{K}_\beta )^{t+2}\sum_{j=t+1}^{r-2}T_q^\beta(\rho_{PP_j})\nonumber\\
&&\ \ \
+(\mathcal{K}_\beta)^{t+1}T_q^\beta(\rho_{PP_{r-1}})
\end{eqnarray}
conditioned that
$k T_q^2(\rho_{PP_i})\geqslant T_q^2(\rho_{P|P_{i+1}\cdots P_{r-1}})$ for $i=0,1,\cdots, t$ and $T_q^2(\rho_{PP_j})\leqslant k T_q^2(\rho_{P|P_{j+1}\cdots P_{r-1}})$ for $j=t+1,\cdots, r-2$, $\forall 0<k\leqslant1$, $0\leqslant t\leqslant r-3$, $r\geqslant3$, where $\beta\in[2,\infty)$ and $\mathcal{K}_\beta=\frac{(1+k)^\frac{\beta}{2}-1}{k^\frac{\beta}{2}}$.
\end{theorem}
\begin{proof}
From Theorem \ref{thm10} for the case $r=2$, we have
\begin{eqnarray}\label{thm14:2}
T_q^\beta(\rho_{P|P_0\cdots P_{r-1}})
&\geqslant& T_q^\beta(\rho_{PP_0})+\mathcal{K}_\beta T_q^\beta(\rho_{P|P_1\cdots P_{r-1}})\nonumber\\
&\geqslant& \cdots\nonumber\\
&\geqslant& \sum_{j=0}^{t}(\mathcal{K}_\beta)^{j}T_q^\beta(\rho_{PP_j})+(\mathcal{K}_\beta)^{t+1}T_q^\beta(\rho_{P|P_{t+1}\cdots P_{r-1}}).
\end{eqnarray}
Since $T_q^2(\rho_{PP_j}) \leqslant k T_q^2(\rho_{P|P_{j+1}\cdots P_{r-1}})$ for $j=t+1,\cdots, r-2$, using Theorem \ref{thm10} again we have
\begin{eqnarray}\label{thm14:3}
T_q^\beta(\rho_{P|P_{t+1}\cdots P_{r-1}})
&\geqslant&\mathcal{K}_\beta T_q^\beta(\rho_{PP_{t+1}})+T_q^\beta(\rho_{P|P_{t+2}\cdots P_{r-1}})\nonumber\\
&\geqslant& \cdots\nonumber\\
&\geqslant& \mathcal{K}_\beta\left(\sum_{j=t+1}^{r-2}T_q^\beta(\rho_{PP_j})\right)+T_q^\beta(\rho_{PP_{r-1}}).
\end{eqnarray}
Combining (\ref{thm14:2}) and (\ref{thm14:3}), we get the inequality (\ref{thm14:1}).
\end{proof}
\begin{remark}
From Theorem \ref{thm14}, if $k T_q^2(\rho_{PP_j})\geqslant T_q^2(\rho_{P|P_{j+1}\cdots P_{r-1}})$ for all $j=0,1,\cdots, r-2$, one has
\begin{eqnarray}\label{thm14:4}
T_q^\beta(\rho_{P|P_0\cdots P_{r-1}})\geqslant \sum_{j=0}^{r-1}(\mathcal{K}_\beta)^{j}T_q^\beta(\rho_{PP_j}).
\end{eqnarray}
\end{remark}
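The recursive chain in the proof of Theorem \ref{thm14} can be probed numerically by saturating the squared monogamy at every cut, i.e. setting $T_q^2(\rho_{P|P_i\cdots P_{r-1}})=T_q^2(\rho_{PP_i})+T_q^2(\rho_{P|P_{i+1}\cdots P_{r-1}})$. The values below are hypothetical and merely illustrate the mixed conditions with $t=1$:

```python
def K(k, p):
    # K_p = ((1+k)^p - 1) / k^p
    return ((1 + k) ** p - 1) / k ** p

# hypothetical pairwise entanglements X[j] = T_q(rho_{PP_j}), r = 4
X = [0.6, 0.4, 0.2, 0.3]
# joint entanglements Y[i] = T_q(rho_{P|P_i...P_{r-1}}), saturating the
# squared monogamy Y[i]^2 = X[i]^2 + Y[i+1]^2 at every cut
Y = [0.0, 0.0, 0.0, X[3]]
for i in (2, 1, 0):
    Y[i] = (X[i] ** 2 + Y[i + 1] ** 2) ** 0.5

k, beta, t = 1.0, 3.0, 1
# conditions of the theorem with t = 1
assert all(k * X[i] ** 2 >= Y[i + 1] ** 2 for i in range(t + 1))
assert all(X[j] ** 2 <= k * Y[j + 1] ** 2 for j in range(t + 1, 3))

Kb = K(k, beta / 2)
rhs = (sum(Kb ** j * X[j] ** beta for j in range(t + 1))
       + Kb ** (t + 2) * sum(X[j] ** beta for j in range(t + 1, 3))
       + Kb ** (t + 1) * X[3] ** beta)
assert Y[0] ** beta >= rhs                     # inequality (thm14:1)
```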
Next, using Lemma \ref{lem1} $\mathrm{(b)}$, we further improve the monogamy inequality (\ref{T4}) provided in \cite{Shi2020}.
\begin{lemma}\label{lem2}
If $T_q^\mu(\rho_{P_1P_3}) \geqslant k T_q^\mu(\rho_{P_1P_2})$, we have for $q\in[\frac{5-\sqrt{13}}{2},\frac{5+\sqrt{13}}{2}]$,
\begin{eqnarray}\label{lem2:1}
T_q^\gamma(\rho_{P_1|P_2P_3})\geqslant T_q^\gamma(\rho_{P_1P_2})+ \mathcal{K}_\gamma T_q^\gamma(\rho_{P_1P_3}),
\end{eqnarray}
where $\gamma\in[0,\mu]$, $\mu\in[2,\infty)$, $\mathcal{K}_\gamma=\frac{(1+k)^\frac{\gamma}{\mu}-1}{k^{\frac{\gamma}{\mu}}}$ and $k\in[1,\infty)$.
\end{lemma}
\begin{proof}
From (\ref{T3}), we have $T_q^\mu(\rho_{P_1|P_2P_3})\geqslant T_q^\mu(\rho_{P_1P_2})+T_q^\mu(\rho_{P_1P_3})$ for $\mu\in[2,\infty)$. Hence, we get
\begin{eqnarray} T_q^\gamma(\rho_{P_1|P_2P_3})&=&(T_q^\mu(\rho_{P_1|P_2P_3}))^{\frac{\gamma}{\mu}}\nonumber\\
&\geqslant&(T_q^\mu(\rho_{P_1P_2})+T_q^\mu(\rho_{P_1P_3}))^{\frac{\gamma}{\mu}}\nonumber\\
&=&T_q^\gamma(\rho_{P_1P_2})\Bigg[1+\frac{T_q^\mu(\rho_{P_1P_3})}{T_q^\mu(\rho_{P_1P_2})}\Bigg]^{\frac{\gamma}{\mu}}\nonumber\\
&\geqslant& T_q^\gamma(\rho_{P_1P_2})\Bigg[1+\mathcal{K}_\gamma\Bigg(\frac{T_q^\mu(\rho_{P_1P_3})}{T_q^\mu(\rho_{P_1P_2})}\Bigg)^{\frac{\gamma}{\mu}}\Bigg]\nonumber\\
&= &T_q^\gamma(\rho_{P_1P_2})+\mathcal{K}_\gamma T_q^\gamma(\rho_{P_1P_3}).
\end{eqnarray}
Here the second inequality is due to Lemma \ref{lem1} $\mathrm{(b)}$.
\end{proof}
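Lemma \ref{lem2} is, at its core, the scalar inequality $(a^\mu+b^\mu)^{\gamma/\mu}\geqslant a^\gamma+\mathcal{K}_\gamma b^\gamma$ for $b^\mu\geqslant k\,a^\mu$. A quick numerical check with hypothetical values (the helper names are ours) is:

```python
def K_gamma(k, gamma, mu):
    # K_gamma = ((1+k)^{gamma/mu} - 1) / k^{gamma/mu}
    return ((1 + k) ** (gamma / mu) - 1) / k ** (gamma / mu)

def lemma2_gap(a, b, k, gamma, mu):
    # a = T_q(rho_{P1P2}), b = T_q(rho_{P1P3}), with b^mu >= k*a^mu required;
    # returns LHS - RHS of (a^mu + b^mu)^{gamma/mu} >= a^gamma + K_gamma * b^gamma
    assert b ** mu >= k * a ** mu
    lhs = (a ** mu + b ** mu) ** (gamma / mu)
    rhs = a ** gamma + K_gamma(k, gamma, mu) * b ** gamma
    return lhs - rhs

# hypothetical values with mu = 3, gamma = 2, k = 4
assert lemma2_gap(0.3, 0.48, 4.0, 2.0, 3.0) > 0
assert lemma2_gap(0.3, 0.60, 4.0, 2.0, 3.0) > 0
```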
Now we generalize our results to multipartite GWV states. The proof is similar to the proof of Theorem \ref{thm14} by using Lemma \ref{lem1} $\mathrm{(b)}$ and Lemma \ref{lem2}.
\begin{theorem}\label{thm6}
If $k T_q^\mu(\rho_{PP_i})\leqslant T_q^\mu(\rho_{P|P_{i+1}\cdots P_{r-1}})$ for $i=0,1,\cdots, t$, and $T_q^\mu(\rho_{PP_j})\geqslant k T_q^\mu(\rho_{P|P_{j+1}\cdots P_{r-1}})$ for $j=t+1,\cdots, r-2$, $ \forall k\geqslant1$, $0\leqslant t\leqslant r-3$ and $r\geqslant 3$, we have for $q\in [\frac{5-\sqrt{13}}{2},\frac{5+\sqrt{13}}{2}]$,
\begin{eqnarray}\label{thm6:1}
T_q^\gamma(\rho_{P|P_0\cdots P_{r-1}})&\geqslant&\sum_{j=0}^{t}(\mathcal{K}_\gamma)^{j}T_q^\gamma(\rho_{PP_j})+(\mathcal{K}_\gamma )^{t+2}\sum_{j=t+1}^{r-2}T_q^\gamma(\rho_{PP_j})\nonumber\\
&&\ \ \ +(\mathcal{K}_\gamma)^{t+1}T_q^\gamma(\rho_{PP_{r-1}})
\end{eqnarray}
with $\gamma\in[0,\mu], \mu \in [2,\infty)$ and $\mathcal{K}_\gamma=\frac{(1+k)^\frac{\gamma}{\mu}-1}{k^\frac{\gamma}{\mu}}$.
\end{theorem}
\begin{remark}
Since $\frac{(1+k)^\frac{\gamma}{\mu}-1}{k^{\frac{\gamma}{\mu}}}\geqslant 2^{\frac{\gamma}{\mu}}-1$ for $\frac{\gamma}{\mu}\in[0,1]$ and $k\in[1, \infty)$, our new monogamy relation (\ref{thm6:1}) for T$q$E is better than (\ref{T4}) given in \cite{Shi2020} which is just a special case of ours for $k=1$. Moreover, the larger the $k$, the tighter the inequality (\ref{thm6:1}).
\end{remark}
\begin{example}\label{exm1}
Consider the 3-qubit GW state
\begin{eqnarray}
&\ket{\psi}_{A_1A_2A_3}
=\frac{1}{\sqrt{6}}\ket{100}+\frac{1}{\sqrt{6}}\ket{010}+\frac{2}{\sqrt{6}}\ket{001}.
\end{eqnarray}
From the definition of concurrence \cite{Concurrence1}, we get $C(\rho_{A_1|A_2A_3})=\frac{\sqrt{5}}{3}$, $C(\rho_{A_1A_2})=\frac{1}{3}$ and $C(\rho_{A_1A_3})=\frac{2}{3}$. When $q=2$, using (\ref{T8}) we have
$T_2(\ket{\psi}_{A_1|A_2A_3})=\frac{5}{18}$, $T_2(\rho_{A_1A_2})=\frac{1}{18}$ and $T_2(\rho_{A_1A_3})=\frac{2}{9}$.
Choosing $\mu=3$, we have $1\leqslant k\leqslant 64$ from Lemma \ref{lem2}. Thus, $T_2^\gamma(\ket{\psi}_{A_1|A_2A_3})\geqslant \left(\frac{1}{18}\right)^\gamma+\frac{(1+k)^\frac{\gamma}{3}-1}
{k^{\frac{\gamma}{3}}}\left(\frac{2}{9}\right)^\gamma$ from our result (\ref{lem2:1}), and $T_2^\gamma(\ket{\psi}_{A_1|A_2A_3})\geqslant\left(\frac{1}{18}\right)^\gamma
+\left(2^{\frac{\gamma}{3}}-1\right)\left(\frac{2}{9}\right)^\gamma$ from the result given in \cite{Shi2020}. One can see that our result is better than the result
in \cite{Shi2020} for $\gamma\in[0,3]$, and the inequality is tighter as $k$ increases, see Fig. \ref{Fig1}.
\end{example}
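The numbers quoted in Example \ref{exm1} can be reproduced exactly, assuming the standard W-class formulas $C(\rho_{A_1|A_2A_3})=2|a|\sqrt{|b|^2+|c|^2}$, $C(\rho_{A_1A_2})=2|ab|$, $C(\rho_{A_1A_3})=2|ac|$, and the $q=2$ relation $T_2=C^2/2$ implied by the values quoted via (\ref{T8}); the script below (helper names are ours) also checks the bound comparison at $\gamma=2$:

```python
from fractions import Fraction as F

# squared amplitudes of (1/sqrt6)|100> + (1/sqrt6)|010> + (2/sqrt6)|001>
a2, b2, c2 = F(1, 6), F(1, 6), F(4, 6)

# squared concurrences of a W-class state: C^2(A1|A2A3) = 4a^2(b^2+c^2),
# C^2(A1A2) = 4a^2b^2, C^2(A1A3) = 4a^2c^2
C2_joint, C2_12, C2_13 = 4 * a2 * (b2 + c2), 4 * a2 * b2, 4 * a2 * c2
assert (C2_joint, C2_12, C2_13) == (F(5, 9), F(1, 9), F(4, 9))

# Tsallis-2 entanglement T_2 = C^2/2, reproducing 5/18, 1/18 and 2/9
T_joint, T_12, T_13 = C2_joint / 2, C2_12 / 2, C2_13 / 2
assert (T_joint, T_12, T_13) == (F(5, 18), F(1, 18), F(2, 9))

# Lemma 2 with mu = 3 admits k up to T_13^3 / T_12^3 = 64
assert T_13 ** 3 == 64 * T_12 ** 3

# at gamma = 2, the k = 64 bound beats the k = 1 bound of [Shi2020]
gamma, mu, k = 2, 3, 64
ours = float(T_12) ** gamma + ((1 + k) ** (gamma / mu) - 1) / k ** (gamma / mu) * float(T_13) ** gamma
shi = float(T_12) ** gamma + (2 ** (gamma / mu) - 1) * float(T_13) ** gamma
assert float(T_joint) ** gamma >= ours > shi
```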
\begin{figure}[H]
\centering
\includegraphics[width=8cm]{Fig1}
\caption{{\small The vertical axis is the lower bound of the Tsallis-$q$ entanglement $T_2(\ket{\psi}_{A_1|A_2A_3})$. The black solid line shows the exact value of $T_2(\ket{\psi}_{A_1|A_2A_3})$. The red dashed (green dot-dashed) line represents the lower bound from our results for the case of $k=64$ ($k=10$). The blue dotted line represents the lower bound from the result in \cite{Shi2020}.}}
\label{Fig1}
\end{figure}
\subsection{Tighter polygamy relations in terms of T$q$EoA}\label{sec3-2}
In this subsection, we present polygamy inequalities for GWV states based on T$q$EoA, which improve the inequality (\ref{T5}).
\begin{theorem}\label{thm7}
If the subsystems $P_0,P_1,\cdots,P_{r-1}$ satisfy
\begin{equation}\label{thm7:1}
k T_q^a(\rho_{PP_j})\geqslant T_q^a(\rho_{PP_{j+1}})\geqslant0
\end{equation}
with $j=0,1,\cdots, r-2, 0<k\leqslant 1$, then
\begin{eqnarray}\label{thm7:2}
[T_q^a(\rho_{P|P_0\cdots P_{r-1}})]^\mu\leqslant\sum\limits_{j=0}^{r-1}(\mathcal{K}_\mu)^{\omega_H(\overrightarrow{j})}[T_q^a(\rho_{ PP_j})]^\mu,
\end{eqnarray}
where $\mu\in (0,1]$, $q\in[\frac{5-\sqrt{13}}{2},2]\cup [3,\frac{5+\sqrt{13}}{2}]$ and $\mathcal{K}_\mu=\frac{(1+k)^\mu-1}{k^\mu}$.
\end{theorem}
\begin{proof}
Since $T_q^a(\rho_{P|P_0\cdots P_{r-1}})\leqslant \sum\limits_{j=0}^{r-1} T_q^a(\rho_{PP_j})$ from inequality (\ref{T5}), it is sufficient to prove that
\begin{equation}\label{thm7:3}
\left(\sum\limits_{j=0}^{r-1} T_q^a(\rho_{PP_j})\right)^\mu
\leqslant\sum\limits_{j=0}^{r-1}(\mathcal{K}_\mu)^{\omega_H(\overrightarrow{j})}[T_q^a(\rho_{ PP_j})]^\mu.
\end{equation}
First, we prove that the inequality (\ref{thm7:3}) holds for the case of $r=2^s$ by using mathematical induction on $s$.
For $s=1$, using Lemma \ref{lem1} $\mathrm{(c)}$ we have
\begin{eqnarray}\label{thm7:4}
(T_q^a(\rho_{PP_0})+T_q^a(\rho_{PP_1}))^\mu
&=&(T_q^a(\rho_{PP_0}))^\mu \Big(1+\frac{T_q^a(\rho_{PP_1})}{T_q^a(\rho_{PP_0})}\Big)^\mu \nonumber \\
&\leqslant& (T_q^a(\rho_{PP_0}))^\mu \Bigg[1+\mathcal{K}_\mu\Bigg(\frac{T_q^a(\rho_{PP_1})}{T_q^a(\rho_{PP_0})}\Bigg)^\mu\Bigg] \nonumber \\
&=&(T_q^a(\rho_{PP_0}))^\mu+\mathcal{K}_\mu(T_q^a(\rho_{PP_1}))^\mu.
\end{eqnarray}
Assume that the inequality (\ref{thm7:3}) holds for $r=2^{s-1}$ with $s\geqslant 2$. Consider the case of $r=2^s$.
From (\ref{thm7:1}) we have $T_q^a(\rho_{PP_{j+2^{s-1}}})\leqslant k^{2^{s-1}}T_q^a(\rho_{PP_j})$ for $j=0,1,\cdots, 2^{s-1}-1$.
Therefore,
\begin{equation*}
0\leqslant\frac{\sum\nolimits_{j=2^{s-1}}^{2^s-1}T_q^a(\rho_{PP_j})}{\sum\nolimits_{j=0}^{2^{s-1}-1}
T_q^a(\rho_{PP_j})}\leqslant k^{2^{s-1}}\leqslant k\leqslant1.
\end{equation*}
Again using Lemma \ref{lem1} $\mathrm{(c)}$, we have
\begin{eqnarray}\label{thm7:5}
\Bigg(\sum\limits_{j=0}^{2^s-1}T_q^a(\rho_{PP_j})\Bigg)^\mu
&=&\Bigg(\sum\limits_{j=0}^{2^{s-1}-1}T_q^a(\rho_{PP_j})\Bigg)^\mu
\Bigg(1+\frac{\sum_{j=2^{s-1}}^{2^s-1}T_q^a(\rho_{PP_j})}{\sum_{j=0}^{2^{s-1}-1}T_q^a
(\rho_{PP_j})}\Bigg)^\mu \nonumber\\
&\leqslant& \Bigg(\sum\limits_{j=0}^{2^{s-1}-1}T_q^a(\rho_{PP_j})\Bigg)^\mu
\Bigg[1+\mathcal{K}_\mu\Bigg(\frac{\sum_{j=2^{s-1}}^{2^s-1}T_q^a(\rho_{PP_j})}{\sum_{j=0}^{2^{s-1}-1}T_q^a
(\rho_{PP_j})}\Bigg)^\mu\Bigg] \nonumber\\
&=&\Bigg(\sum\limits_{j=0}^{2^{s-1}-1}T_q^a(\rho_{PP_j})\Bigg)^\mu+\mathcal{K}_\mu\Bigg(\sum\limits_{j=2^{s-1}}^{2^s-1}T_q^a(\rho_{PP_j})\Bigg)
^\mu.
\end{eqnarray}
From the induction hypothesis, we have
\begin{equation}\label{thm7:6}
\Bigg(\sum\limits_{j=0}^{2^{s-1}-1}T_q^a(\rho_{PP_j})\Bigg)^\mu \leqslant
\sum\limits_{j=0}^{2^{s-1}-1}(\mathcal{K}_\mu)^{\omega_H(\overrightarrow{j})}
[T_q^a(\rho_{PP_j})]^\mu.
\end{equation}
By relabeling the subsystems, we get
\begin{equation}\label{thm7:7}
\Bigg(\sum\limits_{j=2^{s-1}}^{2^s-1}T_q^a(\rho_{PP_j})\Bigg)^\mu \leqslant
\sum\limits_{j=2^{s-1}}^{2^s-1}(\mathcal{K}_\mu)^{\omega_H(\overrightarrow{j})-1}[T_q^a(\rho_{PP_j})]^\mu.
\end{equation}
Hence we have
\begin{equation}
\Bigg(\sum\limits_{j=0}^{2^s-1}T_q^a(\rho_{PP_j})\Bigg)^\mu\leqslant
\sum\limits_{j=0}^{2^s-1}(\mathcal{K}_\mu)^{\omega_H(\overrightarrow{j})}[T_q^a(\rho_{PP_j})]^\mu.
\end{equation}
Now we extend the above conclusion to arbitrary integer $r$. Note that there always exists some $s$ such that $0<r\leqslant 2^s$. Let us consider a $(2^s+1)$-partite quantum state
\begin{equation}\label{thm7:8}
\gamma_{PP_0P_1\cdots P_{2^s-1}}=\rho_{PP_0P_1\cdots P_{r-1}}\otimes \sigma_{P_r\cdots P_{2^s-1}},
\end{equation}
which is the tensor product of $\rho_{PP_0P_1\cdots P_{r-1}}$ and an arbitrary $(2^s-r)$-partite state $\sigma_{P_r\cdots P_{2^s-1}}$.
Similar to the proof above, we have
\begin{equation}
[T_q^a(\gamma_{P|P_0P_1\cdots P_{2^s-1}})]^\mu
\leqslant\sum\limits_{j=0}^{2^s-1}(\mathcal{K}_\mu)^{\omega_H(\overrightarrow{j})}[T_q^a(\gamma_{PP_j})]^\mu,
\end{equation}
where $\gamma_{PP_j}$ is the reduced density matrix of $\gamma_{PP_0P_1\cdots P_{2^s-1}}$, $j=0,1,\ldots,2^s-1$.
Taking into account the following obvious facts: $T_q^a\left(\gamma_{P|P_0 P_1 \cdots P_{2^s-1}}\right)=T_q^a\left(\rho_{P|P_0 P_1 \cdots P_{r-1}}\right)$,
$T_q^a\left(\gamma_{PP_j}\right)=0$ for $j=r, \cdots , 2^s-1$,
and $T_q^a(\gamma_{PP_j})=T_q^a(\rho_{PP_j})$ for each $j=0, \cdots , r-1$, we get
\begin{eqnarray}
[T_q^a(\rho_{P|P_0P_1\cdots P_{r-1}})]^\mu &=&[T_q^a(\gamma_{P|P_0P_1\cdots P_{2^s-1}})]^\mu \nonumber\\
&\leqslant& \sum\limits_{j=0}^{2^s-1}(\mathcal{K}_\mu)^{\omega_H(\overrightarrow{j})}[T_q^a(\gamma_{PP_j})]^\mu \nonumber\\
&=& \sum\limits_{j=0}^{r-1}(\mathcal{K}_\mu)^{\omega_H(\overrightarrow{j})}[T_q^a(\rho_{PP_j})]^\mu.
\end{eqnarray}
This completes the proof.
\end{proof}
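As with the monogamy case, the polygamy bound just proved is an algebraic statement once the values $T_q^a(\rho_{PP_j})$ are given. A small numerical sanity check (hypothetical sample values, chosen to satisfy (\ref{thm7:1})) is:

```python
def hamming_weight(j):
    # omega_H(j): number of 1s in the binary expansion of j
    return bin(j).count("1")

def K(k, mu):
    # K_mu = ((1+k)^mu - 1) / k^mu
    return ((1 + k) ** mu - 1) / k ** mu

# hypothetical values T_q^a(rho_{PP_j}) obeying (thm7:1) with k = 0.5
k, mu = 0.5, 0.6
Ta = [0.40, 0.20, 0.10, 0.05]
assert all(k * Ta[j] >= Ta[j + 1] for j in range(len(Ta) - 1))

lhs = sum(Ta) ** mu
Km = K(k, mu)
rhs = sum(Km ** hamming_weight(j) * t ** mu for j, t in enumerate(Ta))
plain = sum(t ** mu for t in Ta)               # unweighted bound (T5)
assert lhs <= rhs <= plain                     # weighted upper bound is tighter
```

Here $\mathcal{K}_\mu\leqslant1$, so the weighted sum sits between the true value and the unweighted bound of (\ref{T5}).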
\begin{remark}
Since $\Big(\frac{(1+k)^\mu-1}{k^\mu}\Big)^{\omega_H(\overrightarrow{j})}\leqslant 1$
for $\mu\in(0,1]$ and $k\in(0,1]$, our new polygamy relation for T$q$EoA is tighter than inequality (\ref{T5}) in \cite{Shi2020} under certain assumptions for the GWV states. In particular, when $k=1$, we get a tighter polygamy inequality
\begin{eqnarray}\label{thm7:10}
[T_q^a(\rho_{P|P_0\cdots P_{r-1}})]^\mu\leqslant\sum\limits_{j=0}^{r-1}(2^\mu-1)^{\omega_H(\overrightarrow{j})}[T_q^a(\rho_{ PP_j})]^\mu
\end{eqnarray}
for any GWV state without the assumptions.
If one takes $q=2$, our polygamy inequality (\ref{thm7:10}) gives rise to the one in \cite{Shi1}. Furthermore, the inequality (\ref{thm7:2}) gets tighter as $k$ decreases.
\end{remark}
\begin{example}\label{exm2}
Let us consider the 4-qubit GW state,
\begin{eqnarray}
&\ket{\psi}_{A_1A_2A_3A_4}
=0.3\ket{0001}+0.4\ket{0010}+{0.5}\ket{0100}+\sqrt{0.5}\ket{1000}.
\end{eqnarray}
We have $\rho_{A_1A_2A_3}=0.09\ket{000}\bra{000}+\ket{\phi}\bra{\phi}$ with $\ket{\phi}=0.4\ket{001}+0.5\ket{010}+\sqrt{0.5}\ket{100}$,
$C(\rho_{A_1A_2})=\frac{\sqrt{2}}{2}$ and $C(\rho_{A_1A_3})=\frac{2\sqrt{2}}{5}$.
Set $q=2$. We obtain
$$
T^a_{2}(\rho_{A_1A_2})=g_2\left(C^2(\rho_{A_1A_2})\right)=\frac{1}{4},~~ T^a_{2}(\rho_{A_1A_3})=g_2\left(C^2(\rho_{A_1A_3})\right)=\frac{4}{25}.
$$
Therefore, $[T^a_{2}(\rho_{A_1|A_2A_3})]^\mu\leqslant\left(\frac{1}{4}\right)^\mu+\frac{(1+k)^\mu-1}{k^\mu}\left(\frac{4}{25}\right)^\mu$ from (\ref{thm7:2}), $[T^a_{2}(\rho_{A_1|A_2A_3})]^\mu\leqslant\left(\frac{1}{4}\right)^\mu+(2^\mu-1)\left(\frac{4}{25}\right)^\mu$ from (\ref{thm7:10}) and $[T^a_{2}(\rho_{A_1|A_2A_3})]^\mu\leqslant \left(\frac{1}{4}\right)^\mu+\left(\frac{4}{25}\right)^\mu$ from (\ref{T5}), where $k\in [0.64,1]$ from the condition (\ref{thm7:1}). One can see that our result is better than the ones in \cite{Shi1,Shi2020}, and the smaller the $k$ is, the tighter the relation is, see Fig.~\ref{Fig2}.
\end{example}
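The quantities in Example \ref{exm2} can be reproduced numerically, assuming the GW-state concurrence formulas $C(\rho_{A_1A_2})=2ab$, $C(\rho_{A_1A_3})=2ac$ and the $q=2$ relation $T_2^a=g_2(C^2)=C^2/2$ used in the example (the helper names are ours):

```python
from math import isclose, sqrt

# amplitudes of the 4-qubit GW state: a|1000> + b|0100> + c|0010> + d|0001>
a, b, c, d = sqrt(0.5), 0.5, 0.4, 0.3

# pairwise concurrences C(A1A2) = 2ab and C(A1A3) = 2ac
C12, C13 = 2 * a * b, 2 * a * c
assert isclose(C12, sqrt(2) / 2) and isclose(C13, 2 * sqrt(2) / 5)

# T_2^a = g_2(C^2) = C^2/2, giving 1/4 and 4/25
Ta12, Ta13 = C12 ** 2 / 2, C13 ** 2 / 2
assert isclose(Ta12, 0.25) and isclose(Ta13, 0.16)

# condition (thm7:1) forces k >= Ta13/Ta12 = 0.64
assert isclose(Ta13 / Ta12, 0.64)

# compare the three upper bounds at mu = 0.5 and k = 0.64
mu, k = 0.5, 0.64
ours = Ta12 ** mu + ((1 + k) ** mu - 1) / k ** mu * Ta13 ** mu
k1 = Ta12 ** mu + (2 ** mu - 1) * Ta13 ** mu     # k = 1 bound (thm7:10)
shi = Ta12 ** mu + Ta13 ** mu                    # unweighted bound (T5)
assert ours < k1 < shi
```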
\begin{figure}[H]
\centering
\includegraphics[width=8cm]{Fig2}
\caption{{\small The vertical axis is the upper bound of the Tsallis-$q$ entanglement of assistance $T_2^a(\rho_{A_1|A_2A_3})$. The blue dashed line represents the upper bound from our result (\ref{thm7:2}) for $k=0.64$, the red dot-dashed line represents the upper bound from the result in \cite{Shi1}, and the black solid line represents the upper bound from the result in \cite{Shi2020}.}}
\label{Fig2}
\end{figure}
Similar to the improvement from the inequality (\ref{thm10:2}) to the inequality (\ref{thm11:1}), we can improve the polygamy inequality
of Theorem \ref{thm7} under certain conditions.
\begin{theorem}\label{thm8}
For $q\in[\frac{5-\sqrt{13}}{2},2]\cup [3,\frac{5+\sqrt{13}}{2}]$ we have
\begin{eqnarray}\label{thm8:1}
[T_q^a(\rho_{P|P_0\cdots P_{r-1}})]^\mu\leqslant\sum\limits_{j=0}^{r-1}(\mathcal{K}_\mu)^{j}[T_q^a(\rho_{ PP_j})]^\mu
\end{eqnarray}
conditioned that
\begin{equation}\label{thm8:2}
k T_q^a(\rho_{PP_l})\geqslant \sum\limits_{j=l+1}^{r-1}T_q^a(\rho_{PP_{j}})
\end{equation}
for $l=0,1,\cdots, r-2$ and $0<k\leqslant1$, where $\mu\in (0,1]$ and $\mathcal{K}_\mu=\frac{(1+k)^\mu-1}{k^\mu}$.
\end{theorem}
\begin{proof}
From the inequality (\ref{T5}), we only need to prove
\begin{equation}\label{thm8:3}
\Big(\sum_{j=0}^{r-1}T_q^a\left(\rho_{PP_j}\right)\Big)^{\mu}
\leqslant \sum_{j=0}^{r-1} ( \mathcal{K}_\mu)^{j}[T_q^a(\rho_{PP_j})]^\mu.
\end{equation}
We use mathematical induction on $r$ here. It is obvious that the inequality (\ref{thm8:3}) holds for $r=2$ from (\ref{thm7:4}). Assume that it also holds for any positive integer less than $r$. Since $0\leqslant {\sum\limits_{j=1}^{r-1}T_q^a(\rho_{PP_{j}})}/{T_q^a(\rho_{PP_0})}\leqslant k$, we have
\begin{eqnarray}
\left(\sum_{j=0}^{r-1}T_q^a\left(\rho_{PP_j}\right)\right)^{\mu}
&=&[T_q^a(\rho_{PP_0})]^{\mu}
\Bigg(1+\frac{\sum_{j=1}^{r-1}T_q^a(\rho_{PP_j})}
{T_q^a(\rho_{PP_0})} \Bigg)^{\mu}\nonumber\\
&\leqslant& [T_q^a(\rho_{PP_0})]^{\mu}\Bigg[1+ \mathcal{K}_\mu\Bigg(\frac{\sum\nolimits_{j=1}^{r-1}T_q^a(\rho_{PP_{j}})}{T_q^a(\rho_{PP_0})}\Bigg)^{\mu}\Bigg]\nonumber\\
&=&[T_q^a(\rho_{PP_0})]^{\mu}+\mathcal{K}_\mu\Bigg(\sum\limits_{j=1}^{r-1}T_q^a(\rho_{PP_{j}})\Bigg)^{\mu}\nonumber\\
&\leqslant& [T_q^a(\rho_{PP_0})]^{\mu}+\mathcal{K}_\mu \sum\limits_{j=1}^{r-1}(\mathcal{K}_\mu)^{j-1}[T_q^a(\rho_{ PP_j})]^\mu \nonumber\\
&=&\sum\limits_{j=0}^{r-1}(\mathcal{K}_\mu)^{j}[T_q^a(\rho_{ PP_j})]^\mu,
\end{eqnarray}
where the first inequality is due to Lemma \ref{lem1} $\mathrm{(c)}$ and the second inequality is due to the induction hypothesis.
\end{proof}
Since
$[T_q^a(\rho_{P|P_0\cdots P_{r-1}})]^\mu\leqslant\sum\limits_{j=0}^{r-1}(\mathcal{K}_\mu)^{j}[T_q^a(\rho_{ PP_j})]^\mu\leqslant\sum\limits_{j=0}^{r-1}(\mathcal{K}_\mu)^{\omega_H(\overrightarrow{j})}[T_q^a(\rho_{ PP_j})]^\mu$
for $\mu\in(0,1]$, the inequality (\ref{thm8:1}) of Theorem \ref{thm8} is tighter than the inequality (\ref{thm7:2}) of Theorem \ref{thm7} under the given conditions.
Similarly, we provide a more general result by modifying the conditions of Theorem \ref{thm8}.
\begin{theorem}\label{thm9:1}
For $q\in[\frac{5-\sqrt{13}}{2},2]\cup [3,\frac{5+\sqrt{13}}{2}]$ we have
\begin{eqnarray}\label{thm9:2}
[T_q^a(\rho_{P|P_0\cdots P_{r-1}})]^\mu&\leqslant& \sum_{j=0}^{t}(\mathcal{K}_\mu)^{j}[T_q^a(\rho_{PP_j})]^\mu+(\mathcal{K}_\mu )^{t+2}\sum_{j=t+1}^{r-2}[T_q^a(\rho_{PP_j})]^\mu\nonumber\\
&&\ \ \ +(\mathcal{K}_\mu)^{t+1}[T_q^a(\rho_{PP_{r-1}})]^\mu
\end{eqnarray}
conditioned that
$k T_q^a(\rho_{PP_i})\geqslant T_q^a(\rho_{P|P_{i+1}\cdots P_{r-1}})$ for $i=0,1,\cdots, t$ and $T_q^a(\rho_{PP_j})\leqslant k T_q^a(\rho_{P|P_{j+1}\cdots P_{r-1}})$ for $j=t+1,\cdots, r-2$, $ \forall 0<k\leqslant1$, $0\leqslant t\leqslant r-3$, $r\geqslant3$, where $\mu\in (0,1]$ and $\mathcal{K}_\mu=\frac{(1+k)^\mu-1}{k^\mu}$.
\end{theorem}
The proof is similar to the one of Theorem \ref{thm14}, by using the inequality (\ref{thm7:2}) for the case $r=2$ and Lemma \ref{lem1} $\mathrm{(c)}$.
\begin{remark}
Note that if $k T_q^a(\rho_{PP_j})\geqslant T_q^a(\rho_{P|P_{j+1}\cdots P_{r-1}})$ for all $j=0,1,\cdots, r-2$, one has
\begin{eqnarray}\label{thm9:6}
[T_q^a(\rho_{P|P_0\cdots P_{r-1}})]^\mu\leqslant \sum_{j=0}^{r-1}(\mathcal{K}_\mu)^{j}[T_q^a(\rho_{PP_j})]^\mu.
\end{eqnarray}
Since $T_q(\rho_{P|P_0\cdots P_{r-1}})=T_q^a(\rho_{P|P_0\cdots P_{r-1}})$ for $q\in[\frac{5-\sqrt{13}}{2}, 2]\cup[3, \frac{5+\sqrt{13}}{2}]$, the above inequalities (\ref{thm7:2}), (\ref{thm8:1}) and (\ref{thm9:2}) also give upper bounds of $T_q(\rho_{P|P_0\cdots P_{r-1}})$ for GWV states $\ket{\psi}_{A_1\cdots A_n}$.
\end{remark}
\section{Tighter monogamy and polygamy relations based on R$\alpha$E and R$\alpha$EoA for GWV states}\label{sec4}
For a bipartite pure state $|\psi\rangle_{AB}$, the R\'{e}nyi-$\alpha$ entanglement (R$\alpha$E) is defined as \cite{KimR2010} $E_\alpha(|\psi\rangle_{AB})=S_\alpha(\rho_A)$, where $S_\alpha(\rho)= \frac{1}{1-\alpha}\log (\mbox{tr} \rho^\alpha)$ with $\alpha >0$ and $\alpha \neq 1$. $S_{\alpha}(\rho)$ converges to the von Neumann entropy as $\alpha$ tends to $1$.
For a bipartite mixed state $\rho_{AB}$, the R\'{e}nyi-$\alpha$ entanglement is given by
\begin{equation*}
E_{\alpha}\left(\rho_{AB} \right)=\min\limits_{\{p_i, |\psi_i\rangle\}} \sum_i p_i E_{\alpha}(|\psi_i \rangle_{AB}),
\end{equation*}
where the minimum is taken over all possible pure state decompositions of
$\rho_{AB}$.
As a dual concept to R$\alpha$E, the
R\'{e}nyi-$\alpha$ entanglement of assistance (R$\alpha$EoA) is given by
\begin{equation}
E^{a}_{\alpha}\left(\rho_{AB} \right)=\max \limits_{\{p_i, |\psi_i\rangle\}}\sum_i p_i E_{\alpha}(|\psi_i \rangle_{AB}),
\label{EoA}
\end{equation}
where the maximum is taken over all possible pure state decompositions of $\rho_{AB}$.
In \cite{KimR2010} the authors have derived an analytical relation between the R\'{e}nyi-$\alpha$ entanglement and concurrence for any two-qubit mixed state $\rho_{AB}$,
\begin{eqnarray}
E_\alpha \left( {\rho_{AB} } \right) = f_\alpha \left[ {C^2 \left( \rho_{AB} \right)} \right],
\label{q6}
\end{eqnarray}
where $\alpha\in [1,\infty)$ and $f_\alpha \left( x \right)$ has the form
\begin{equation}
f_\alpha \! \left( x \right)\!= \!\frac{1}{{1 - \alpha }}\!\log _2 \!\left[ {\left( {\frac{{1 \!-\!
\sqrt {1 - x} }}{2}} \right)^\alpha \!\!\!\! +\! \left( {\frac{{1 \!+\! \sqrt {1 - x} }}{2}}
\right)^\alpha } \right].
\label{q7}
\end{equation}
Set $x=y^2$ and denote $\tilde{f}_\alpha(y)=f_\alpha(y^2)$. Then the function $\tilde{f}_\alpha(y)$ is monotonically increasing and convex for $y\in[0,1]$. Later, Wang \emph{et al}. \cite{Wang2016} showed that (\ref{q6}) also holds for $\alpha\in[\frac{\sqrt7-1}{2},\infty)$.
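The function $f_\alpha$ of (\ref{q7}) is straightforward to implement, and the monotonicity and convexity of $\tilde{f}_\alpha$ quoted above can be checked on a grid for a sample $\alpha$ (the helper names below are ours, and the grid check is an illustration, not a proof):

```python
from math import log2, sqrt

def f_alpha(x, alpha):
    # the function (q7), with x = C^2 the squared concurrence
    lp = (1 - sqrt(1 - x)) / 2
    lm = (1 + sqrt(1 - x)) / 2
    return log2(lp ** alpha + lm ** alpha) / (1 - alpha)

# sanity values: f_alpha(0) = 0 and f_alpha(1) = 1 (maximal entanglement)
assert abs(f_alpha(0.0, 2)) < 1e-12 and abs(f_alpha(1.0, 2) - 1.0) < 1e-12

# f~(y) = f_alpha(y^2) is increasing and convex on (0,1): grid check at alpha = 1.5
ys = [i / 100 for i in range(1, 100)]
vals = [f_alpha(y * y, 1.5) for y in ys]
assert all(v2 > v1 for v1, v2 in zip(vals, vals[1:]))                 # increasing
assert all(vals[i - 1] + vals[i + 1] - 2 * vals[i] > -1e-12
           for i in range(1, len(vals) - 1))                          # convex
```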
Quite recently, Liang {\it et al.} \cite{Liang2020} provided the following analytic formulas for R$\alpha$E and R$\alpha$EoA,
\begin{eqnarray}\label{E1}
E_\alpha(\rho_{A_{i_1}|A_{i_2}\cdots A_{i_m}})=f_\alpha(C^2(\rho_{A_{i_1}|A_{i_2}\cdots A_{i_m}}))
\end{eqnarray}
for $\alpha\in[\frac{\sqrt 7 - 1}{2}, \infty)$, and
\begin{eqnarray}\label{E2}
E_\alpha^a(\rho_{A_{i_1}|A_{i_2}\cdots A_{i_m}})=E_\alpha(\rho_{A_{i_1}|A_{i_2}\cdots A_{i_m}})=f_\alpha(C^2(\rho_{A_{i_1}|A_{i_2}\cdots A_{i_m}}))
\end{eqnarray}
for $\alpha\in [\frac{\sqrt7-1}{2}, \frac{\sqrt{13}-1}{2}]$, together with
the following monogamy relation based on R$\alpha$E for GWV states,
\begin{eqnarray}\label{E3}
E_\alpha^\mu(\rho_{P_1|P_2\cdots P_r})\geqslant \sum_{j=2}^{r} E_\alpha^\mu(\rho_{P_1P_j})
\end{eqnarray}
with $\mu\in[2,\infty)$ and $\alpha\in [\frac{\sqrt7-1}{2}, \infty)$, as well as
the following polygamy inequalities based on R$\alpha$EoA,
\begin{eqnarray}\label{E5}
(E_\alpha^a(\rho_{P_1|P_2\cdots P_r}))^\mu\leqslant \sum_{j=2}^{r} (E_\alpha^a(\rho_{P_1P_j}))^\mu
\end{eqnarray}
with $\mu\in (0,1]$ and $\alpha\in [\frac{\sqrt7-1}{2}, \frac{\sqrt{13}-1}{2}]$.
Instead of the T$q$E and T$q$EoA used in the theorems of Section \ref{sec3}, we next consider the R$\alpha$E and R$\alpha$EoA. The proofs of the theorems given below are similar to those for T$q$E and T$q$EoA.
\subsection{Tighter monogamy relations in terms of R$\alpha$E}\label{sec4-1}
With a similar approach to T$q$E, we first present the following tighter weighted monogamy relations based on R$\alpha$E for GWV states.
\begin{theorem}\label{thm12}
If the subsystems $P_0,P_1,\cdots,P_{r-1}$ satisfy
\begin{equation}\label{thm12:1}
k E_\alpha^2(\rho_{PP_j})\geqslant E_\alpha^2(\rho_{PP_{j+1}})\geqslant 0
\end{equation}
for $j=0,1,\cdots, r-2$ and $0<k\leqslant 1$, we have
\begin{eqnarray}\label{thm12:2}
E_\alpha^\beta(\rho_{P|P_0\cdots P_{r-1}})\geqslant\sum\limits_{j=0}^{r-1}(\mathcal{K}_\beta)^{\omega_H(\overrightarrow{j})}E_\alpha^\beta(\rho_{ PP_j}),
\end{eqnarray}
for $\alpha\in [\frac{\sqrt7-1}{2}, \infty)$, where $\beta\in[2,\infty)$ and $\mathcal{K}_\beta=\frac{(1+k)^\frac{\beta}{2}-1}{k^\frac{\beta}{2}}$.
\end{theorem}
Since $\Big(\mathcal{K}_\beta\Big)^{\omega_H(\overrightarrow{j})}\geqslant 1$ for $\beta\geqslant2$ and $0<k\leqslant 1$, our new monogamy relation for R$\alpha$E is tighter than the inequality (\ref{E3}) in \cite{Liang2020} under certain conditions for the GWV states. Moreover, one can find that the inequality (\ref{thm12:2}) gets tighter as $k$ decreases.
\begin{example}\label{exm3}
Consider the 3-qubit GW state
\begin{eqnarray}
&\ket{\psi}_{A_1A_2A_3}
=\frac{1}{\sqrt{6}}\ket{100}+\frac{2}{\sqrt{6}}\ket{010}+\frac{1}{\sqrt{6}}\ket{001}.
\end{eqnarray}
We have $$C(\ket{\psi}_{A_1|A_2A_3})=\frac{\sqrt{5}}{3}, C(\rho_{A_1A_2})=\frac{2}{3}, C(\rho_{A_1A_3})=\frac{1}{3}.$$
Choosing $\alpha=2$, from (\ref{E1}) one has
$$E_2(\ket{\psi}_{A_1|A_2A_3})=\mathrm{log}_2\left(\frac{18}{13}\right),~~ E_2(\rho_{A_1A_2})=\mathrm{log}_2\left(\frac{9}{7}\right), ~~ E_2(\rho_{A_1A_3})=\mathrm{log}_2\left(\frac{18}{17}\right).$$
Then $E_2^\beta(\ket{\psi}_{A_1|A_2A_3})\geqslant\Big[\mathrm{log}_2\Big(\frac{9}{7}\Big)\Big]^\beta+\frac{(1+k)^\frac{\beta}{2}-1}
{k^{\frac{\beta}{2}}}\Big[\mathrm{log}_2\Big(\frac{18}{17}\Big)\Big]^\beta$ from our result (\ref{thm12:2}), and $E_2^\beta(\ket{\psi}_{A_1|A_2A_3})\geqslant\Big[\mathrm{log}_2\Big(\frac{9}{7}\Big)\Big]^\beta+\Big[\mathrm{log}_2\Big(\frac{18}{17}\Big)\Big]^\beta$ from the result (\ref{E3}), where $0.052\leqslant k\leqslant1$ from the condition (\ref{thm12:1}). One can see that our result is better than the result (\ref{E3}) in \cite{Liang2020} for $\beta\geqslant2$, see Fig.~\ref{Fig3}.
\end{example}
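As a numerical sanity check of Example \ref{exm3} (a hypothetical script of ours, evaluated at $\beta=3$ and at $k=0.52$, the value used in Fig.~\ref{Fig3}):

```python
from math import log2, sqrt

def f_alpha(x, alpha):
    # analytic formula (q7) as a function of the squared concurrence x
    lp, lm = (1 - sqrt(1 - x)) / 2, (1 + sqrt(1 - x)) / 2
    return log2(lp ** alpha + lm ** alpha) / (1 - alpha)

# squared concurrences of the state (1/sqrt6)|100> + (2/sqrt6)|010> + (1/sqrt6)|001>
C2_joint, C2_12, C2_13 = 5 / 9, 4 / 9, 1 / 9
E_joint, E_12, E_13 = (f_alpha(x, 2) for x in (C2_joint, C2_12, C2_13))
assert abs(E_joint - log2(18 / 13)) < 1e-12
assert abs(E_12 - log2(9 / 7)) < 1e-12
assert abs(E_13 - log2(18 / 17)) < 1e-12

# weighted bound (thm12:2) vs the unweighted bound (E3) at beta = 3, k = 0.52
beta, k = 3.0, 0.52
Kb = ((1 + k) ** (beta / 2) - 1) / k ** (beta / 2)
ours = E_12 ** beta + Kb * E_13 ** beta
liang = E_12 ** beta + E_13 ** beta
assert E_joint ** beta >= ours >= liang
```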
\begin{figure}[H]
\centering
\includegraphics[width=8cm]{Fig3}
\caption{\small The vertical axis is the lower bound of the R\'{e}nyi-$\alpha$ entanglement $E_2(\ket{\psi}_{A_1|A_2A_3})$. The solid black line represents the exact values of $E_2(\ket{\psi}_{A_1|A_2A_3})$, the red dot-dashed line represents the lower bound from our results for $k=0.52$, and the dashed blue line represents the lower bound from the result (\ref{E3}) in \cite{Liang2020}.}
\label{Fig3}
\end{figure}
The inequality (\ref{thm12:2}) can be further improved under certain conditions.
\begin{theorem}\label{thm13}
When $\alpha\in [\frac{\sqrt7-1}{2}, \infty)$, we have
\begin{eqnarray}\label{thm13:1}
E_\alpha^\beta(\rho_{P|P_0\cdots P_{r-1}})\geqslant\sum\limits_{j=0}^{r-1}(\mathcal{K}_\beta)^{j}E_\alpha^\beta(\rho_{ PP_j})
\end{eqnarray}
conditioned that
\begin{equation}\label{thm13:2}
k E_\alpha^2(\rho_{PP_l})\geqslant \sum\limits_{j=l+1}^{r-1}E_\alpha^2(\rho_{PP_{j}})
\end{equation}
for $l=0,1,\cdots, r-2$ and $0<k\leqslant 1$, where $\beta\in[2,\infty)$ and $\mathcal{K}_\beta=\frac{(1+k)^\frac{\beta}{2}-1}{k^\frac{\beta}{2}}$.
\end{theorem}
In general, we have the following monogamy inequality.
\begin{theorem}\label{thm15}
When $\alpha\in [\frac{\sqrt7-1}{2}, \infty)$, we have
\begin{eqnarray}\label{thm15:1}
E_\alpha^\beta(\rho_{P|P_0\cdots P_{r-1}})&\geqslant& \sum_{j=0}^{t}(\mathcal{K}_\beta)^{j}E_\alpha^\beta(\rho_{PP_j})+(\mathcal{K}_\beta )^{t+2}\sum_{j=t+1}^{r-2}E_\alpha^\beta(\rho_{PP_j})\nonumber\\
&&\ \ \ +(\mathcal{K}_\beta)^{t+1}E_\alpha^\beta(\rho_{PP_{r-1}})
\end{eqnarray}
conditioned that
$k E_\alpha^2(\rho_{PP_i})\geqslant E_\alpha^2(\rho_{P|P_{i+1}\cdots P_{r-1}})$ for $i=0,1,\cdots, t$ and $E_\alpha^2(\rho_{PP_j})\leqslant k E_\alpha^2(\rho_{P|P_{j+1}\cdots P_{r-1}})$ for $j=t+1,\cdots, r-2$, $ \forall 0<k\leqslant1$, $0\leqslant t\leqslant r-3$ and $r\geqslant3$, where $\beta\in[2,\infty)$ and $\mathcal{K}_\beta=\frac{(1+k)^\frac{\beta}{2}-1}{k^\frac{\beta}{2}}$.
\end{theorem}
\begin{remark}
If $k E_\alpha^2(\rho_{PP_j})\geqslant E_\alpha^2(\rho_{P|P_{j+1}\cdots P_{r-1}})$ for all $j=0,1,\cdots, r-2$, we have
\begin{eqnarray}\label{thm15:4}
E_\alpha^\beta(\rho_{P|P_0\cdots P_{r-1}})\geqslant \sum_{j=0}^{r-1}(\mathcal{K}_\beta)^{j}E_\alpha^\beta(\rho_{PP_j}).
\end{eqnarray}
\end{remark}
\subsection{Tighter polygamy relations in terms of R$\alpha$EoA }\label{sec4-2}
Now we establish the tighter polygamy relations for R$\alpha$EoA by using a similar approach to T$q$EoA.
\begin{theorem}\label{thm3}
If the subsystems $P_0,P_1,\cdots,P_{r-1}$ satisfy
\begin{equation}\label{thm3:1}
k E_\alpha^a(\rho_{PP_j})\geqslant E_\alpha^a(\rho_{PP_{j+1}})\geqslant0
\end{equation}
for $j=0,1,\cdots, r-2$ and $0<k\leqslant1$, we have
\begin{eqnarray}\label{thm3:2}
[E_\alpha^a(\rho_{P|P_0\cdots P_{r-1}})]^\mu\leqslant\sum\limits_{j=0}^{r-1}(\mathcal{K}_\mu)^{\omega_H(\overrightarrow{j})}[E_{\alpha}^a(\rho_{ PP_j})]^\mu,
\end{eqnarray}
where $\mu\in(0,1]$, $\alpha\in[\frac{\sqrt7-1}{2}, \frac{\sqrt{13}-1}{2}]$ and $\mathcal{K}_\mu=\frac{(1+k)^\mu-1}{k^\mu}$.
\end{theorem}
Since $\Big(\mathcal{K}_\mu\Big)^{\omega_H(\overrightarrow{j})}\leqslant 1$
for $\mu\in(0,1]$ and $k\in(0,1]$, our new polygamy inequality for R$\alpha$EoA is tighter than the inequality (\ref{E5}) in \cite{Liang2020} under certain conditions for the GWV states.
Moreover, the smaller $k$ is, the tighter the inequality (\ref{thm3:2}) becomes.
\begin{example}\label{exm4}
Let us again consider the 4-qubit GW state presented in Example \ref{exm2}.
Choosing $\alpha=1.2$ we have
$$E^a_{1.2}(\rho_{A_1A_2})=f_{1.2}\Big[\Big(\frac{\sqrt{2}}{2}\Big)^2\Big]\approx 0.549339, ~E^a_{1.2}(\rho_{A_1A_3})=f_{1.2}\Big[\Big(\frac{2\sqrt{2}}{5}\Big)^2\Big]\approx 0.372954.$$
From (\ref{thm3:1}), we get $k\in [0.68, 1]$. Then our inequality (\ref{thm3:2}) yields $[E_{1.2}^a(\rho_{A_1|A_2A_3})]^\mu \leqslant 0.549339^\mu+\frac{(1+k)^\mu-1}{k^\mu}0.372954^\mu$, which for $k=1$ reduces to $[E_{1.2}^a(\rho_{A_1|A_2A_3})]^\mu \leqslant 0.549339^\mu+(2^\mu-1)0.372954^\mu$. In contrast, the inequality (\ref{E5}) yields $[E_{1.2}^a(\rho_{A_1|A_2A_3})]^\mu \leqslant 0.549339^\mu+0.372954^\mu$. Hence, our result is better than the one in \cite{Liang2020}, and the bound gets tighter as $k$ decreases, see Fig. \ref{Fig4}.
\end{example}
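The numbers quoted in this example can be checked directly. The sketch below (the helper names are ours, not the paper's; the two entanglement values are taken verbatim from the example) evaluates the bound of inequality (\ref{thm3:2}) and confirms both the admissible range $k\in[0.68,1]$ and the tightening of the bound as $k$ decreases.

```python
# Sketch (values quoted from Example 4; helper names are ours, not the paper's).
E1, E2 = 0.549339, 0.372954   # E^a_{1.2}(rho_{A1A2}), E^a_{1.2}(rho_{A1A3})

def K(mu, k):
    """Coefficient K_mu = ((1+k)^mu - 1) / k^mu."""
    return ((1 + k) ** mu - 1) / k ** mu

def upper_bound(mu, k):
    """Upper bound on [E^a_{1.2}(rho_{A1|A2A3})]^mu from inequality (thm3:2)."""
    return E1 ** mu + K(mu, k) * E2 ** mu

# Condition (thm3:1): k*E1 >= E2, i.e. k >= E2/E1, which is about 0.68.
assert round(E2 / E1, 2) == 0.68

mu = 0.5
bound_E5 = E1 ** mu + E2 ** mu             # bound of inequality (E5)
for k in (1.0, 0.8, 0.7):
    assert upper_bound(mu, k) <= bound_E5  # K_mu <= 1, so our bound is tighter
# ... and it tightens monotonically as k decreases:
assert upper_bound(mu, 0.7) < upper_bound(mu, 0.8) < upper_bound(mu, 1.0)
```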
\begin{figure}[H]
\centering
\includegraphics[width=8cm]{Fig4}
\caption{The vertical axis is the upper bound of the R\'{e}nyi-$\alpha$ entanglement of assistance $E_{1.2}^a(\rho_{A_1|A_2A_3})$. The red dashed (green dot-dashed, blue dotted) line represents the upper bound from our result (\ref{thm3:2}) for $k=1$ ($k=0.8$, $k=0.7$), and the black solid line represents the upper bound from (\ref{E5}) in \cite{Liang2020}.}
\label{Fig4}
\end{figure}
Analogously, we can improve the polygamy inequality of Theorem \ref{thm3} under certain conditions.
\begin{theorem}\label{thm4}
For $\alpha\in[\frac{\sqrt7-1}{2},\frac{\sqrt{13}-1}{2}]$ we have
\begin{eqnarray}\label{thm4:0}
[E_\alpha^a(\rho_{P|P_0\cdots P_{r-1}})]^\mu\leqslant\sum\limits_{j=0}^{r-1}(\mathcal{K}_\mu)^{j}[E_{\alpha}^a(\rho_{ PP_j})]^\mu
\end{eqnarray}
conditioned that
\begin{equation}\label{thm4:1}
k E_\alpha^a(\rho_{PP_l})\geqslant \sum\limits_{j=l+1}^{r-1}E_\alpha^a(\rho_{PP_{j}}),
\end{equation}
for $l=0,1,\cdots, r-2$ and $0<k\leqslant1$, where $\mu\in (0,1]$ and $\mathcal{K}_\mu=\frac{(1+k)^\mu-1}{k^\mu}$.
\end{theorem}
Due to $\omega_H(\overrightarrow{j})\leqslant j$, one has $\sum\limits_{j=0}^{r-1}\Big(\mathcal{K}_\mu\Big)^{j}[E_\alpha^a(\rho_{ PP_j})]^\mu\leqslant\sum\limits_{j=0}^{r-1}\Big(\mathcal{K}_\mu\Big)^{\omega_H(\overrightarrow{j})}[E_\alpha^a(\rho_{ PP_j})]^\mu$
for $\mu\in(0,1]$ and $k\in(0,1]$. Therefore, the inequality (\ref{thm4:0}) of Theorem \ref{thm4} is tighter than the inequality (\ref{thm3:2}) of Theorem \ref{thm3}.
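The comparison above rests on two elementary facts: $0<\mathcal{K}_\mu\leqslant1$ for $\mu,k\in(0,1]$, and $\omega_H(\overrightarrow{j})\leqslant j$. A small numerical sketch (the single-pair values below are purely illustrative, not from any state) makes the ordering of the two bounds explicit.

```python
# Sketch: K_mu <= 1 and the Hamming weight w_H(j) <= j imply that weighting by
# K_mu^j (Theorem thm4) gives a bound at least as tight as weighting by
# K_mu^{w_H(j)} (Theorem thm3). The values in E_mu are illustrative only.
def K(mu, k):
    return ((1 + k) ** mu - 1) / k ** mu

def w_H(j):
    return bin(j).count("1")   # Hamming weight of the binary vector of j

mu, k = 0.5, 0.8
Kmu = K(mu, k)
assert 0 < Kmu <= 1
assert all(w_H(j) <= j for j in range(64))

E_mu = [0.9, 0.7, 0.5, 0.3]    # assumed values of [E_alpha^a(rho_{PP_j})]^mu
bound_thm3 = sum(Kmu ** w_H(j) * e for j, e in enumerate(E_mu))
bound_thm4 = sum(Kmu ** j * e for j, e in enumerate(E_mu))
assert bound_thm4 <= bound_thm3
```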
Similar to the case of T$q$EoA, we also have the following polygamy relation for R$\alpha$EoA under certain conditions.
\begin{theorem}\label{thm4:3}
For $\alpha\in[\frac{\sqrt7-1}{2},\frac{\sqrt{13}-1}{2}]$ we have
\begin{eqnarray}\label{thm4:4}
[E_\alpha^a(\rho_{P|P_0\cdots P_{r-1}})]^\mu&\leqslant& \sum_{j=0}^{t}(\mathcal{K}_\mu)^{j}[E_\alpha^a(\rho_{PP_j})]^\mu+(\mathcal{K}_\mu )^{t+2}\sum_{j=t+1}^{r-2}[E_\alpha^a(\rho_{PP_j})]^\mu\nonumber\\
&&\ \ \ +(\mathcal{K}_\mu)^{t+1}[E_\alpha^a(\rho_{PP_{r-1}})]^\mu
\end{eqnarray}
conditioned that
$k E_\alpha^a(\rho_{PP_i})\geqslant E_\alpha^a(\rho_{P|P_{i+1}\cdots P_{r-1}})$ for $i=0,1,\cdots, t$ and $E_\alpha^a(\rho_{PP_j})\leqslant k E_\alpha^a(\rho_{P|P_{j+1}\cdots P_{r-1}})$ for $j=t+1,\cdots, r-2$, $ \forall 0<k\leqslant1, 0\leqslant t\leqslant r-3, r\geqslant 3$, where $\mu\in (0,1]$ and $\mathcal{K}_\mu=\frac{(1+k)^\mu-1}{k^\mu}$.
\end{theorem}
\begin{remark}
If $k E_\alpha^a(\rho_{PP_j})\geqslant E_\alpha^a(\rho_{P|P_{j+1}\cdots P_{r-1}})$ for all $j=0,1,\cdots, r-2$, then
\begin{eqnarray}\label{thm9:6}
[E_\alpha^a(\rho_{P|P_0\cdots P_{r-1}})]^\mu\leqslant \sum_{j=0}^{r-1}(\mathcal{K}_\mu)^{j}[E_\alpha^a(\rho_{PP_j})]^\mu.
\end{eqnarray}
Since $E_\alpha(\rho_{P|P_0\cdots P_{r-1}})=E_\alpha^a(\rho_{P|P_0\cdots P_{r-1}})$ for $\alpha\in[\frac{\sqrt7-1}{2},\frac{\sqrt{13}-1}{2}]$, the above inequalities (\ref{thm3:2}), (\ref{thm4:0}) and (\ref{thm4:4}) are also upper bounds of $E_\alpha(\rho_{P|P_0\cdots P_{r-1}})$ for GWV states $\ket{\psi}_{A_1\cdots A_n}$.
\end{remark}
\section{Conclusion}\label{sec5}
Both monogamy and polygamy relations of quantum entanglement are fundamental properties of multipartite entangled states. We have investigated the monogamy
and polygamy relations of multipartite entanglement for arbitrary $n$-qudit GWV states with respect to different partitions. By using the Hamming weight of the binary vectors related to the partition of the subsystems, we have established a class of monogamy inequalities in terms of the $\beta$th power of T$q$E for the GWV states when $\beta\geqslant2$, as well as polygamy inequalities in terms of the $\mu$th power of T$q$EoA when $0<\mu\leqslant1$. Similarly, we have provided monogamy and polygamy relations based on R$\alpha$E and R$\alpha$EoA for the GWV states. We
have further shown that, under suitable conditions, our monogamy and polygamy
inequalities are tighter than the existing ones and recover the previous relations as special cases; they thus impose better restrictions on the entanglement distribution among the subsystems of the GWV states.
\begin{acknowledgements}
This work is supported by NSFC (Grant No. 12075159), Beijing Natural Science Foundation (Z190005), Academy for Multidisciplinary Studies, Capital Normal University, the Academician Innovation Platform of Hainan Province, and Shenzhen Institute for Quantum Science and Engineering, Southern University of Science and Technology (No. SIQSE202001).
\end{acknowledgements}
\section{Introduction}
High temperature cuprate superconductors \cite{DSH,BeK}
such as La$_{2-x}$Sr$_x$CuO$_4$ (LSCO)
show interesting Mott insulator behavior
\cite{sudip,fulde} at low doping $x$ while at high doping
these materials display Fermi liquid properties.
\cite{DSH,proust,hussy} This fascinating phase diagram
is complicated by the presence of various inhomogeneities such as
stripe ordering \cite{Kivel} and possible clustering of dopants.\cite{BB}
Many models of the overdoped metallic phase have been put forward,
some generalizing the concept of the Fermi liquid,
while others employ Hubbard or t-J models.
In this context, it is important to investigate the electron momentum density of
these materials, which, when compared to predictions of theoretical models,
can give an indication of their correctness or applicability.
Insight into the evolution of electronic states can be obtained via Fermi
surface (FS) measurements in various doping regimes. Experimental FS work
on the cuprates has to date been limited mainly to angle-resolved
photoemission (ARPES),\cite{DSH,BeK} quantum oscillations (QO),\cite{QOsc}
and scanning tunneling spectroscopies (STS).\cite{hanaguri} However, ARPES\cite{abARPES} and STS\cite{abSTM} are surface-sensitive probes. Although QO measurements probe the bulk, they provide only FS areas without giving information on the location of the FSs in momentum space. Furthermore, QO measurements require long mean free paths and large magnetic fields, which could alter the ground state. For underdoped samples, there is evidence of FSs distorted from
Local Density Approximation (LDA) based band structure predictions,
involving arcs or pockets, but as doping is increased it
appears that a large, LDA-like FS becomes manifest as
the pseudogap collapses near optimal doping.
For an overdoped Tl-cuprate, the FS has
been found to be large and closed around $(\pi ,\pi )$,
\cite{peets07} suggesting that
the Van Hove singularity (vHS) still lies below the Fermi energy $E_F$
in good agreement with LDA calculations.\cite{bba94}
These considerations provide motivation for deploying genuinely bulk
sensitive spectroscopies for FS measurements in the cuprates. Two
such spectroscopies, which have been used extensively for this
purpose, are high-resolution Compton scattering \cite{cs1,cs2}
and two-dimensional angular correlation of positron
annihilation radiation (2D-ACAR).\cite{acar1,acar2,acar3}
Compton scattering probes the momentum density of the many-body
electronic ground state of the system and is {\it insensitive} to
the presence of defects or surfaces in the sample.\cite{lsmo}
2D-ACAR also probes the bulk momentum density, but its
interpretation can be complicated by positron spatial
distribution effects.\cite{manuel_acar_review,Blandin,howel} On the
other hand, the Compton scattering technique requires large single
crystals; moreover, for materials containing atoms with high atomic numbers
($Z\sim30$),
as is the case for all cuprates, it suffers from the problem of a
relatively low signal from the valence electrons sitting on the large
background contribution from the core electrons.
With this background, the present article reports a study of the FS and
electron momentum density (EMD) of an overdoped single crystal of
La$_{2-x}$Sr$_x$CuO$_4$ (LSCO) for the hole doping level $x$=0.3. Compton
scattering and 2D-ACAR experiments have been carried out on the same LSCO
sample, and the results are analyzed through parallel computations of the
FS, the EMD as well as the electron-positron momentum density within the
framework of DFT. The conventional picture
of the metallic state based on Landau Fermi liquid theory is expected
to become increasingly viable with doping as the system becomes more
weakly correlated, even though the physics of the cuprates is generally
dominated by deviations from such a simple picture of electronic states.
For this reason, the overdoped LSCO provides a good starting system for
investigating the fermiology of the cuprates.
Although earlier Compton studies have provided some insight into the ground-state momentum density and electron correlation effects in LSCO,
\cite{laukkanen,science_lsco} here we attempt to determine the FS
of LSCO using high-resolution Compton scattering experiments.
The 2D-ACAR experiments, on the other hand, have been deployed successfully by several groups in the past for delineating the FSs of the cuprates in a number of favorable cases.\cite{manuel_acar_review}
Here, by measuring a series of high resolution Compton profiles, we
reconstruct the 2D momentum density in overdoped LSCO and identify a clear
signature of the FS in the third Brillouin zone. Moreover, the DFT-based
theoretical EMD is found to be in quantitative accord with the Compton
scattering results, indicating that the ground state wavefunction in the
overdoped system is well-described by the weakly correlated DFT picture.
We have also found a quantitative level of agreement between the measured
and computed 2D-ACAR spectra. However, the 2D-ACAR spectra do not reveal
clear FS signatures due to the well-known positron spatial distribution
effects, which can make the positron insensitive to electrons in the Cu-O
planes.
The remainder of this article is organized as follows. Section II
describes experimental details of sample preparation and of
Compton scattering and positron-annihilation experiments. Section III
provides technical details of our DFT-based EMD, FS and
electron-positron momentum density computations. Section IV
discusses momentum density anisotropy results, while Sec. V considers
the FS determination. The article
concludes with a summary of the results in Sec. VI.
\section {EXPERIMENTS}
The heavily overdoped single crystal ($x=0.30$) was grown by the traveling
solvent floating zone method. For this purpose, a powder sample was first
synthesized by the conventional solid state reaction method. It was then
shaped into feed rods under hydrostatic pressure and sintered at 1173 K
for 12 hours, and at 1423 K for an additional 10 hours. In this process,
excess CuO of 2 mol\% was added to the feed rods to compensate for the
evaporation of CuO during the high temperature process. The grown crystal
was subsequently annealed under an oxygen pressure of 3 atm at 1173 K for
100 hours.
Superconducting quantum interference device
(SQUID) measurements, using MPMS-XL5HG (Quantum Design, Inc.), showed
no superconductivity down to 2 K.
Neutron diffraction studies indicated that the crystal is tetragonal
(I4/mmm) down to the lowest temperature.
The crystal was also characterized by other experiments.
\cite{exp1,exp2,exp3}
We have measured 10 Compton profiles with scattering vectors equally
spaced between the [100] and [110] directions using the Cauchois-type
x-ray spectrometer at the BL08W beamline of SPring-8.\cite{spring8} All
measurements were carried out at room temperature. The overall momentum
resolution is estimated to be 0.13 a.u. full-width-at-half-maximum (FWHM).
The incident x-ray energy was 115 keV and the scattering angle was
$2.88$ rad. Approximately 5$\times 10^5$ counts in total were collected at
the Compton peak channel, and two independent measurements were performed
in order to check the results. Each Compton profile was corrected for
absorption, analyzer and detector efficiencies, scattering cross section,
possible double scattering contributions, and x-ray background. The
core-electron contributions were subtracted from each Compton profile. A
two-dimensional momentum density, representing a projection of the
three-dimensional momentum density onto the $a-b$ plane, was reconstructed
from each set of ten Compton profiles using the direct Fourier transform
method.\cite{fourier}\\
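The direct Fourier-transform reconstruction used here rests on the projection-slice theorem: the one-dimensional Fourier transform of a directional Compton profile equals a radial line of the Fourier transform (the reciprocal form factor $B$) of the projected momentum density. A minimal sketch with a synthetic Gaussian density, not the measured data, illustrates the principle:

```python
# Sketch (synthetic data, not the measured profiles): the 1D Fourier transform
# of a profile J(p_x) -- the projection of rho2d over p_y -- equals the
# p_y = 0 slice of the 2D Fourier transform of rho2d (projection-slice theorem).
import numpy as np

n, dp = 256, 0.05                       # grid points and momentum step (a.u.)
p = (np.arange(n) - n // 2) * dp
PX, PY = np.meshgrid(p, p, indexing="ij")
rho2d = np.exp(-(PX**2 + PY**2))        # model 2D momentum density

# "Measured" profile along [100]: integrate over p_y.
J = rho2d.sum(axis=1) * dp

# 1D FT of the profile ...
B_line = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(J))) * dp
# ... equals the p_y = 0 slice of the 2D FT of rho2d:
B_2d = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(rho2d))) * dp**2
assert np.allclose(B_line, B_2d[:, n // 2], atol=1e-10)
```

In the actual reconstruction this relation is inverted: the transformed profiles fill $B$ along the ten measured directions, and an inverse transform of the interpolated $B$ yields the 2D momentum density.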
The 2D-ACAR was measured using the Delft University 2D-ACAR
spectrometer\cite{falub} with a conventional $^{22}$Na positron
source. The data were taken at a temperature of about $T=70$ K. To
correct for the sample shape $(5.7 \times 3.5 \times 4.5$ mm$^3)$
the data were convolved with a Gaussian of FWHM of 0.1 channel in
the $x$ direction and 1.7 channels in the $y$ direction (along which
the positrons impinge on the sample), where one
channel corresponds to 0.184 mrad. The total resolution is $1.0
\times 1.0$ mrad$^2$ FWHM (1 mrad = 0.137 a.u). The total number of
coincidences collected is $76.3 \times 10^6$ and the maximum number
of coincidences at the center is $16.7\times10^3$ counts.
The effects of superconductivity on the 2D-ACAR as well as
Compton spectra \cite{peter} are expected to be below the
resolution of the current experiments. For this reason,
we have not carried out experiments on the superconducting
state in connection with the present study.
\section {CALCULATIONS}
Our electronic structure calculations are based on the LDA within the framework of the DFT.
An all-electron fully charge-self-consistent semi-relativistic
Korringa-Kohn-Rostoker (KKR) methodology was used.\cite{bansil} The
crystal structure of LSCO was taken to be body-centered
tetragonal (BCT) with space group $I4/mmm$ (139) using lattice parameters $a$
and $c$ given in Ref. \onlinecite{sahrakorpi}. A non-spinpolarized
calculation
neglecting the magnetic structure was performed. Self-consistency
was obtained for $x=0$ and the effect of doping was treated
within a rigid band model by shifting the Fermi energy to
accommodate the proper number of electrons.\cite{foot2ab,footAB2,footAB3}
The results are in good agreement
with other calculations.\cite{freeman}
The formalism for
computing momentum density, $\rho(\textbf{p})$, is discussed in
Refs. \onlinecite{mijnarends1,mijnarends3,mijnarends4,mijnarends5}.
The EMD is calculated according to the formula \be
\begin{array}{c}
\rho(\textbf{p})=\sum_{i}
n_i(\textbf{k})\mid\int
\exp(-i \textbf{p}\cdot\textbf{r})\psi_i(\textbf{r})d\textbf{r}\mid^2\\
\label{eq1}
\end{array}
\ee and the ACAR spectrum is computed as \be
\begin{array}{c}
\rho^{2\gamma}(\textbf{p})=\sum_i n_{i}(\textbf{k})\mid\int
\exp(-i
\textbf{p}\cdot\textbf{r})\psi_i(\textbf{r})\phi_+(\textbf{r})d\textbf{r}\mid^2,\\
\label{eq2}
\end{array}
\ee where $\psi_i$ denotes the electronic wave function, $\phi_+$
the positron wavefunction, and $\textbf{p}=\textbf{k}+\textbf{G}$ with $\textbf{G}$
a reciprocal lattice
vector. $n_{i}(\textbf{k})$ is the occupation function \cite{agp}
which in the independent particle model equals
$1$ if the electron state $i$ is occupied and $0$ when it is empty, and the
summation extends over the occupied $\textbf{k}$ states. In the
$\rho^{2\gamma}(\textbf{p})$ calculation, we have neglected the
enhancement factor for the annihilation rate.\cite{gga} The
inclusion of enhancement effects is crucial for the calculation of
lifetimes but is well-known \cite{Blandin,icpa_bba92}
to be not very important for discussing
questions of bonding and FS signals in momentum density.
The momentum densities are calculated on a momentum mesh with step
$(\delta p_x,\delta p_y,\delta p_z)=2\pi (1/32a,1/32a,1/4c)$.
The momentum is expressed in atomic units defined by $1/a_0$
where $a_0$ is the Bohr radius.
The calculations include contributions from both the filled
valence bands and the conduction-band which gives rise to the FS in
LSCO. To study the electronic structure of the system, we consider
two quantities of interest, 2D-EMD and 1D-EMD, which are the
projection of EMD in 2D and 1D, respectively, given by:
\begin{equation}
\rho^{2d}(p_x,p_y)= \int \rho(\textbf{p})dp_z
\label{eq3}
\end{equation}
and
\begin{equation}
\rho^{1d}(p_x)=\int \int \rho(\textbf{p})dp_y dp_z.
\label{eq4}
\end{equation}
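On a discrete momentum mesh, the projections of Eqs.~(\ref{eq3}) and (\ref{eq4}) reduce to weighted sums over array axes, as in the following sketch (with a random placeholder for the computed $\rho(\textbf{p})$ and illustrative step sizes):

```python
# Sketch: discrete versions of Eqs. (3) and (4); rho below is a random
# placeholder for the computed rho(p), and the step sizes are illustrative.
import numpy as np

nx, ny, nz = 32, 32, 8
rng = np.random.default_rng(0)
rho = rng.random((nx, ny, nz))
dpy, dpz = 0.1, 0.25                     # assumed momentum steps (a.u.)

rho_2d = rho.sum(axis=2) * dpz           # Eq. (3): integrate over p_z
rho_1d = rho_2d.sum(axis=1) * dpy        # Eq. (4): then over p_y

assert rho_2d.shape == (nx, ny) and rho_1d.shape == (nx,)
assert np.isclose(rho_1d.sum(), rho.sum() * dpy * dpz)  # norm is preserved
```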
The calculated band structure of LSCO ($x=0.3$)
near the Fermi level is illustrated
in Fig.~\ref{fig0}(a).
The band closest to the Fermi level is shown by the red dotted curve.
This CuO$_2$ energy band is dominated by copper-oxygen $d_{x^{2}-y^{2}}-p_{x,y}$ orbitals.
The present calculation for $x$=0.3 predicts
that the vHS is above the Fermi energy, so the FS has a diamond shape
closed around the $\Gamma$ point.
Figure~\ref{fig0}(b) shows the 2D momentum density contribution of
the $x^{2}-y^{2}$ band, $\rho^{2d}_{x^{2}-y^{2}}(p_x,p_y)$,
together with the FS sections at $(k_{z}=0)$ for the doping level
$x=0.3$ mapped periodically throughout the momentum space.
The Brillouin zones are visualized in the figure by a grid of $2 \pi/a$, where $a$
is the lattice constant. The momentum density acts as a matrix element modulating the intensity of the FS map.
Thus, since
$\rho^{2d}_{x^{2}-y^{2}}(p_x,p_y)$
has a strong amplitude in the third Brillouin zones, the FS can
be detected more easily there.
\begin{figure}
\includegraphics[width=8.0cm]{fig0.eps}
\caption{(Color online)
(a) Band structure of LSCO near the Fermi level.
The CuO$_2$ band is shown as a red dotted line.
(b) Calculated $\rho^{2d}_{x^{2}-y^{2}}(p_x,p_y)$
is shown together with the FS sections at $(k_{z}=0)$ for the doping level
$x=0.3$. The color scale is in units of $\rho^{2d}$(0,0).
The grid represents Brillouin zones with a size of $2 \pi/a$, where $a$
is the lattice constant.}
\label{fig0}
\end{figure}
The positron annihilation and Compton scattering experiments probe all
the electrons in the system. However, core and
semi-core electrons
give isotropic distributions while the anisotropy
of the spectra is produced by electrons
near the Fermi energy.
Therefore, we can concentrate on analyzing the residual
anisotropy after
subtraction of the isotropic part. We consider an anisotropy with
$C_{4v}$ symmetry, given by\cite{c4v}:
\begin{equation}
\begin{array}{c}
A_{C_{4v}}^{2d}(p_x,p_y)=
\rho^{2d}(p_x,p_y)-
\rho^{2d}(\frac{p_x+p_y}{\sqrt{2}},
\frac{p_x-p_y}{\sqrt{2}}).\\
\label{eq5}
\end{array}
\end{equation}
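The map $(p_x,p_y)\mapsto\big(\frac{p_x+p_y}{\sqrt{2}},\frac{p_x-p_y}{\sqrt{2}}\big)$ appearing in Eq.~(\ref{eq5}) is a reflection and hence an involution, so the anisotropy vanishes for any cylindrically symmetric density. The sketch below verifies this on an illustrative model density, evaluated analytically so that no grid interpolation is needed:

```python
# Sketch: C4v anisotropy of Eq. (5) for an illustrative model density
# rho = exp(-r^2) [1 + 0.3 cos(4 theta)]; the map M below is the reflection
# used in Eq. (5), which sends theta -> pi/4 - theta.
import numpy as np

def rho_model(px, py):
    r2 = px ** 2 + py ** 2
    return np.exp(-r2) * (1.0 + 0.3 * np.cos(4 * np.arctan2(py, px)))

p = np.linspace(-3, 3, 120)                # even count: the grid avoids the origin
PX, PY = np.meshgrid(p, p, indexing="ij")
MX, MY = (PX + PY) / np.sqrt(2), (PX - PY) / np.sqrt(2)

A = rho_model(PX, PY) - rho_model(MX, MY)  # Eq. (5)

# M is orthogonal, so a cylindrically symmetric density has zero anisotropy:
assert np.allclose(np.exp(-(PX**2 + PY**2)) - np.exp(-(MX**2 + MY**2)), 0.0)
# For the model, A has the closed form 0.6 exp(-r^2) cos(4 theta):
assert np.allclose(A, 0.6 * np.exp(-(PX**2 + PY**2)) * np.cos(4 * np.arctan2(PY, PX)))
```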
In order to enhance FS signals, it is also possible
to calculate another anisotropy obtained by subtraction of a
smooth cylindrical average of the distribution $\rho^{2d}(p_x,p_y)$
defined by
\begin{equation}
A^{2d}(p_x,p_y)=\rho^{2d}(p_x,p_y)-S(\sqrt{p_x^2+p_y^2}),
\label{eq7}
\end{equation}
where $S$ is a smoothed cylindrical average of $\rho$,
in which the original spectrum is averaged
over rotation angles from 0$^\circ$ to 45$^\circ$ in steps of 1$^\circ$.
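A sketch of the smoothed cylindrical average, following the prescription above (rotation angles $0^\circ$ to $45^\circ$ in $1^\circ$ steps); the model density is illustrative and the rotated samples are evaluated analytically, whereas an analysis of real data would interpolate the measured grid:

```python
# Sketch: Eq. (7) with the smoothed cylindrical average S built by averaging
# over rotation angles 0..45 deg in 1 deg steps, as described in the text.
# The model density is illustrative; rotated samples are evaluated analytically.
import numpy as np

def rho_model(px, py):
    r2 = px ** 2 + py ** 2
    return np.exp(-r2) * (1.0 + 0.3 * np.cos(4 * np.arctan2(py, px)))

p = np.linspace(-3, 3, 120)
PX, PY = np.meshgrid(p, p, indexing="ij")

S = np.zeros_like(PX)
angles = np.deg2rad(np.arange(0, 46))      # 0 to 45 deg in 1 deg steps
for a in angles:
    RX = np.cos(a) * PX - np.sin(a) * PY   # rotate the sampling points
    RY = np.sin(a) * PX + np.cos(a) * PY
    S += rho_model(RX, RY)
S /= len(angles)

A = rho_model(PX, PY) - S                  # Eq. (7)

# The isotropic factor exp(-r^2) drops out, so |A| <= 0.6 exp(-r^2),
# while the fourfold modulation itself survives the subtraction:
assert np.all(np.abs(A) <= 0.6 * np.exp(-(PX**2 + PY**2)) + 1e-12)
assert np.abs(A).max() > 0.1
```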
\begin{figure}[t]
\includegraphics[width=9.2cm,height=5.0cm]{fig1.eps}
\caption{(color online) Theoretical and experimental profiles
$\rho^{1d}(p_x)$ for (a) ACAR and (b) Compton scattering along
the [100] direction. All spectra are normalized
to $\rho^{1d}$(0). } \label{fig1}
\end{figure}
\section {Momentum Density Anisotropy}
We start by comparing the experimental 1D profiles based on Compton scattering
and ACAR measurements with the corresponding theoretical predictions.
Figure~\ref{fig1} shows a good level of accord between theory and
experiment, especially for the Compton results.
Further insight is gained by considering the 1D-projection of the
anisotropy of Eq.~\ref{eq5} given by
\begin{equation}
A^{1d}(p_x)=\int_{p_{a}}^{p_{b}} A^{2d}_{C_{4v}}(p_x,p_y) dp_y~,
\label{eq6}
\end{equation}
where $\Delta p=p_{b}-p_{a}$ is the momentum range over which the
projection is taken.\cite{note2}
The profile $A^{1d}(p_x)$, shown in Fig.~\ref{fig4},
is also equal to the difference of the profiles \cite{lsmo}
along the two crystallographic directions,
$[100]-[110]$,
and it can be compared to a similar Compton profile anisotropy
measured by Shukla {\em et al.} at a lower hole doping.\cite{shukla99}
The amplitude of the theory is the same as that in the experiment,
while in Ref. \onlinecite{shukla99}
the theoretical $A^{1d}(p_x)$ had to be
scaled down by a factor of $1.4$
to obtain agreement with experiment.
We note that theory predicts that
the main contribution of correlation effects
is an isotropic redistribution of the momentum density,\cite{lp} so
that the amplitudes of oscillation in $A^{1d}(p_x)$ become significantly reduced
in the strongly correlated system. Hence our quantitative agreement between
theory and experiment suggests that the $x=0.30$ doped regime
is consistent with Fermi liquid physics and that correlation effects
modifying the anisotropy \cite{shukla99,bba_platzman} are relatively weak.
\begin{figure}[t]
\includegraphics[width=9.2cm,height=5.0cm]{fig4.eps}
\caption{(color online) Theoretical and experimental $A^{1d}(p_x)$
for (a) ACAR and (b) Compton scattering.} \label{fig4}
\end{figure}
In order to focus on
the Cu-O band contribution, Fig.~\ref{fig23} presents the 2D
$C_{4v}$ anisotropy distributions in the ($p_x,p_y$) plane for positron
annihilation and Compton scattering spectra, respectively.~\cite{note1}
The anisotropy of the ACAR \cite{note1} in Fig.~\ref{fig23}
can be modeled by a molecular orbital method \cite{turchi} involving the
overlap of the positron wavefunction with Cu 3$d$ states hybridized with
O 2$p$ states.
For an atomic orbital, the momentum density has the same point symmetry as
the corresponding charge density.
This result carries over to molecular states
\cite{turchi} and is equally applicable
to solid-state wave functions.\cite{harthoorn78}
The Compton scattering anisotropy maps are
very similar overall to the positron-annihilation results,
except that the Compton spectra extend to significantly higher momenta.
This is expected since in the ACAR case the tendency of the positron to
avoid positively charged ionic cores has the effect of suppressing higher
momentum components of the EMD produced by the core and the localized
valence electrons.
Figure~\ref{fig23} clearly shows that the theory reproduces
most of the structure observed in the measured Compton as well as
positron-annihilation distributions, including the momenta at which
various features are located. The fourfold symmetry of the spectra is a
consequence of the symmetry of the underlying body-centered tetragonal
lattice.
\begin{figure}[t]
\includegraphics[width=8.2cm]{fig23.eps}
\caption{(color online) Top: (left) Experimental and
(right) theoretical ACAR
$C_{4v}$ anisotropy.
Bottom:(left) Experimental and (right) theoretical
$C_{4v}$ anisotropy (Compton).
The color scale is in units of $\rho^{2d}$(0,0).}
\label{fig23}
\end{figure}
Therefore, an important finding
of our work is that the correlations that ordinarily
reduce the Compton anisotropy
amplitudes are no longer effective at this
doping. Similar Fermi liquid behavior has been reported
in studies of overdoped Tl-cuprates.\cite{DSH,proust,hussy}
Moreover, the trend of the weakening of correlation effects
with doping is also consistent with the
changes observed in x-ray absorption \cite{towfiq,peets09} and
photoelectron spectroscopies.\cite{schneider}
\section {FERMI SURFACE RESULTS}
We comment briefly on the positron results first. Although the
electron-positron momentum density measured in a positron-annihilation
experiment contains FS signatures, the amplitude of such signatures is
controlled by the extent to which the positron wavefunction overlaps with
the states at the Fermi energy.
Figure~\ref{fig5} shows the positron density distribution in a planar section
of the LSCO unit cell. As in the calculation by Blandin {\em et al.}
\cite{Blandin}, our result indicates that the positron does
not probe well the FS contribution of the Cu-O planes in
LSCO. Moreover, positron-annihilation favors FS contributions involving O 2$p$
rather than Cu 3$d$ states because the positron wavefunction overlaps more with
extended O $2p$ states compared to the more tightly bound Cu 3$d$ states.
\begin{figure}[t]
\includegraphics[width=8.0cm,height=4.5cm]{fig5.eps}
\caption{(color online) Positron density for LSCO in the
(010) plane.
The positron wave-function is normalized in the LSCO
unit cell and a logarithmic density scale
(with integer numbers as units) is used in order to enhance
regions with low positron density.
The atomic positions (La, Cu, O) set the scale for
the $x$ and $y$ axes.} \label{fig5}
\end{figure}
Indeed, we see little evidence of FS signatures
in either the computed or the measured ACAR distributions,
Fig.~\ref{fig4}(a).
A more favorable case for the 2D-ACAR distribution
is provided by the YBa$_2$Cu$_3$O$_7$ cuprate superconductor,
where the 1-dimensional ridge FS has a two-fold symmetry which distinguishes
it from important four-fold symmetry wave function
effects.\cite{haghighi,hoffmann} On the other hand, the Compton scattering
calculation reveals a clear Fermi surface feature near the momentum $(1.5,1.5)$ a.u. and at equivalent positions,
as seen in the bottom panels of Fig.~\ref{fig23}. Strikingly, a similar feature is seen in the experimental spectrum on the left side of the figure.
To see this more clearly, we apply a Lock-Crisp-West folding procedure, Fig.~\ref{fig6}.
The Lock-Crisp-West (LCW) theorem\cite{LCW} can be used
both in Compton scattering 2D-EMD and in the case of 2D-ACAR data
to study the non isotropic features of the momentum
density by folding the data into a single central Brillouin zone. This technique can enhance FS discontinuities
(`breaks') by coherently superposing the umklapp terms.\cite{Blandin} However, the LCW folding can also
artificially enhance errors in the experimental data.\cite{bba94} Thus, to more clearly expose FS features
in the data, we consider the cylindrical anisotropy defined by Eq.~\ref{eq7}.
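The LCW folding itself amounts to summing the measured density over reciprocal-lattice translations, $n(\textbf{k})=\sum_{\textbf{G}}\rho^{2d}(\textbf{k}+\textbf{G})$. On a grid commensurate with the zone size this is a simple array operation, sketched below with placeholder data:

```python
# Sketch: LCW folding, n(k) = sum_G rho2d(k + G), on a grid commensurate with
# the Brillouin zone (placeholder data; nzone and nrep are illustrative).
import numpy as np

nzone, nrep = 16, 5                        # points per zone, zones per axis
rng = np.random.default_rng(1)
rho2d = rng.random((nzone * nrep, nzone * nrep))   # placeholder 2D-EMD

# Split each axis into (zone index, in-zone index) and sum over zone indices:
folded = rho2d.reshape(nrep, nzone, nrep, nzone).sum(axis=(0, 2))

assert folded.shape == (nzone, nzone)
assert np.isclose(folded.sum(), rho2d.sum())   # folding only redistributes weight
```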
\begin{figure}[t]
\includegraphics{fig6m.eps}
\caption{(color online) (a) Cylindrical anisotropy for theoretical Compton scattering momentum density.
The white lines define the Brillouin zones, while blue squares indicate a family of zones where the FS
features are particularly strong.
(b) The blue squared regions are isolated from the rest of the spectra and folded back to the central region.
The color scale is in units of $\rho^{2d}$(0,0).}
\label{fig6}
\end{figure}
Since the subtracted function is smooth and slowly varying, this procedure neither distorts nor creates structures in the spectrum remaining after the subtraction.
Figure \ref{fig6}~(a) shows the cylindrical anisotropy of the theoretical 2D-EMD spectrum.
The momentum density is represented in the extended zone scheme. Because the FS is periodic,
a complete FS must exist in each Brillouin zone,
but with its intensity modulated by matrix element effects, as in Fig.~1(b).
For a predominantly $d$-wave FS,
the matrix element effects will strongly suppress spectral
weight near $\Gamma$, so the FS breaks are most clearly seen in higher Brillouin zones. These FS breaks appear superimposed on the momentum
density in the form of discontinuities which can occur in any Brillouin zone. In Fig.~\ref{fig6}(a), the Fermi surface breaks are the regions where
the contours run close together, so that the electron momentum density varies rapidly at these locations.
Figure~\ref{fig6}~(a) shows the calculated Fermi breaks in several Brillouin zones. In particular, in the third
zones framed with blue squares, the arc-like features are theoretically predicted FSs associated with Cu-O planes.
Due to the tetragonal symmetry, a rotation
of the spectrum by $\pi/2$ will generate symmetry-related regions (blue squares) with equivalent strong FS features.
These regions are isolated in Fig.~\ref{fig6}(b). By performing a `partial folding', that is, folding only these
regions back into the first Brillouin zone, we produce a full FS,
where strong matrix element effects are substantially circumvented.
The resulting
FS map is shown in the center of Fig.~\ref{fig6}(b), and again on a larger scale in Fig.~\ref{fig7}(a). Applying the same procedure
to other Brillouin zone regions produces similar results. For instance, the four regions along the diagonal neighboring
the central region can also yield full FS information, but here it is superimposed on strong momentum density features.
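On the same commensurate grid, the `partial folding' reduces to summing a chosen subset of zone blocks rather than all of them. The sketch below uses placeholder data and an illustrative (assumed) list of symmetry-related zones:

```python
# Sketch: 'partial folding' on a commensurate grid -- only an (assumed)
# symmetry-related family of zones is translated back to the central zone,
# avoiding zones where matrix elements suppress the FS signal.
import numpy as np

nzone, nrep = 16, 5                        # points per zone, zones per axis
rng = np.random.default_rng(2)
rho2d = rng.random((nzone * nrep, nzone * nrep))   # placeholder 2D-EMD

# Reorder into blocks[zone_x, zone_y, in_x, in_y]:
blocks = rho2d.reshape(nrep, nzone, nrep, nzone).transpose(0, 2, 1, 3)

selected = [(0, 1), (1, 0), (3, 4), (4, 3)]        # illustrative zone indices
partial = sum(blocks[i, j] for i, j in selected)

full = blocks.sum(axis=(0, 1))             # complete LCW fold, for comparison
assert partial.shape == (nzone, nzone)
assert np.all(partial <= full)             # the partial fold keeps a subset of weight
```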
In Figure~\ref{fig7} we compare
the theoretical FS obtained via the aforementioned `partial folding' procedure
to the corresponding experimental result.
Figure~\ref{fig7}~(a)
shows that the partial backfolding technique produces
the FS of correct size and topology.
The same procedure when applied to the experimental spectrum
yields results of Fig.\ref{fig7}~(d).
To compare with theory, we have convolved the spectrum
in Fig.~\ref{fig7}(a)
with resolutions of $0.07$ a.u. and $0.13$ a.u.,
leading to the distributions of Figs.~\ref{fig7}(b) and (c), respectively.
The latter is close to the actual experimental resolution used in this work.
By comparing the four frames in Fig.~\ref{fig7}, one can see that the
experimental spectrum in Fig.~\ref{fig7}~(d) shows
FS structures consistent with the LDA theory shown
in Fig.~\ref{fig7}~(a).
Clearly, the FS discontinuity along the nodal direction is
smeared when resolution broadening is included,
as shown in Figs.~\ref{fig7}(b) and \ref{fig7}(c).
The best overall agreement between theory and experiment
lies between Figs.~\ref{fig7}(b) and \ref{fig7}(c).
This is also confirmed by the cuts taken
along the nodal direction plotted in Fig.\ref{fig8m}.
The remaining discrepancy between experiment and theory could
result from intrinsic or extrinsic
inhomogeneity effects such as the appearance of local ferromagnetic
clusters about concentrated regions of dopant atoms,\cite{BB}
which have been neglected in the present simulations.
Our conclusion is that the closed FS for $x=0.30$ predicted by our LDA calculation is consistent
both with the present Compton scattering data and with surface-sensitive ARPES
results.\cite{Yoshida}
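The resolution broadening applied to the theory corresponds to convolution with a Gaussian of the quoted FWHM, with $\sigma=\mathrm{FWHM}/(2\sqrt{2\ln2})$. A one-dimensional sketch with a model FS-like step (all parameters illustrative) shows how a sharp discontinuity is smeared over roughly one FWHM:

```python
# Sketch: Gaussian resolution broadening of a model FS-like step along a cut;
# grid step and the step position are illustrative, the FWHM is the 0.13 a.u.
# quoted in the text. FFT-based circular convolution keeps the sketch numpy-only.
import numpy as np

n, dp = 256, 0.01                        # grid points and step (a.u.)
p = (np.arange(n) - n // 2) * dp
step = (p > 0.5).astype(float)           # model discontinuity at p = 0.5 a.u.

fwhm = 0.13
sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
kernel = np.exp(-0.5 * (p / sigma) ** 2)
kernel /= kernel.sum()

smeared = np.real(np.fft.ifft(np.fft.fft(step) * np.fft.fft(np.fft.ifftshift(kernel))))

# The break is washed out at the step position but recovered ~2 FWHM away:
assert smeared[np.searchsorted(p, 0.5)] < 0.6
assert smeared[np.searchsorted(p, 0.5 + 2 * fwhm)] > 0.95
```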
\begin{figure}[t]
\includegraphics{fig7m.eps}
\caption{Result of the `partial folding' procedure:
(a) unconvolved theory;
(b) theory convolved with a resolution of $0.07$ a.u.;
(c) theory convolved with a resolution of $0.13$ a.u.;
(d) experiment. The white color corresponds to a weight of unity (occupied)
while the black color corresponds to zero (unoccupied).}
\label{fig7}
\end{figure}
\begin{figure} [t]
\includegraphics[width=7.2cm,height=7.0cm]{fig8m.eps}
\caption{(color online)
Cuts of the distributions
in Fig.~\ref{fig7}
along the nodal direction.
The amplitudes of theory and experiment are compared
on the same scale.
Curves are offset with respect to one another
for clarity.
From the bottom to the top,
the first curve is the non-convoluted theory,
the second is the convoluted theory with a resolution $0.07$ a.u.,
the third one is the convoluted theory with a resolution $0.13$ a.u.
and the fourth curve is the experimental data.}
\label{fig8m}
\end{figure}
\section{CONCLUSIONS}
We have performed momentum density measurements
on a high quality overdoped LSCO sample using both Compton scattering and 2D-ACAR.
First principles calculations were also performed for the corresponding spectra.
The quantitative agreement between the calculations and the experiment for ACAR as well as
EMD anisotropies suggests that $x=0.3$
overdoped LSCO can be explained within the conventional Fermi-liquid theory.
Nevertheless, a FS signal was only clearly observed by Compton scattering in
the third Brillouin zone along [100]. Our FS analysis
confirms previous ARPES FS measurements \cite{Yoshida} showing an electron-like FS
in the overdoped regime. This validation is important since
we provide via deep inelastic x-ray scattering experiments a truly
bulk-sensitive image of momentum density maps of electrons near the Fermi level.
In general, this momentum density information is difficult to extract
from ARPES experiments due to difficulties associated with matrix element
effects and the well-known surface sensitivity of ARPES.\\
{\bf Acknowledgements} This work is supported by the USDOE
grants No. DE-FG02-07ER46352 and No. DE-FG02-08ER46540 (CMSN)
and benefited from the allocation of supercomputer time at NERSC
and Northeastern University's Advanced Scientific Computation Center (ASCC),
and the Stichting Nationale Computer faciliteiten (National Computing
Facilities Foundation, NCF).
The work at JASRI was supported by a Grant-in-Aid for Scientific Research
(nos. 18340111 and 22540382) from the Ministry of Education, Culture, Sports, Science, and Technology (MEXT), Japan, and that
at Tohoku University was supported by a Grant-in-Aid for Scientific Research (nos. 16104005, 19340090 and 22244039) from the
MEXT, Japan.
The Compton scattering experiments were
performed with the approval of JASRI (Proposals 2003B0762,
2004A0152, 2007B1413, 2008A1191).
\section{Introduction}
Fix
$T>0$ and let $(\Omega,\mathcal{F},\mathbb{P},\{\mathcal{F}_t\}_{t\in
[0,T]},(\{\beta_k(t)\}_{t\in[0,T]})_{k\in\mathbb{N}})$ be a stochastic basis. Without loss of generality, the filtration $\{\mathcal{F}_t\}_{t\in [0,T]}$ is assumed to be complete, and $\{\beta_k(t)\}_{t\in[0,T]}$, $k \in\mathbb{N}$, are independent one-dimensional real-valued $\{\mathcal{F}_t\}_{t\in [0,T]}$-Wiener processes. The symbol $\mathbb{E}$ denotes the expectation with respect to $\mathbb{P}$.
For fixed $N\in\mathbb{N}$, let $\mathbb{T}^N\subset\mathbb{R}^N$ be the $N$-dimensional torus with period $1$.
Consider the following Cauchy problem for the scalar conservation laws with stochastic forcing
\begin{eqnarray}\label{P-19}
\left\{
\begin{array}{ll}
du(t,x)+div A(u(t,x))dt=\sum_{k\geq 1}g_k(x,u(t,x)) d\beta_k(t) \quad {\rm{in}}\ \mathbb{T}^N\times (0,T],\\
u(\cdot,0)=\eta(\cdot) \quad {\rm{on}}\ \mathbb{T}^N.
\end{array}
\right.
\end{eqnarray}
where $u:(\omega,x,t)\in\Omega\times\mathbb{T}^N\times[0,T]\mapsto u(\omega,x,t):=u(x,t)\in\mathbb{R}$ is a random field, the flux function $A:\mathbb{R}\to\mathbb{R}^N$ and the coefficients $g_k(\cdot,\cdot)$ are measurable and fulfill certain conditions (see Section 2 below). Moreover, the initial value $\eta\in L^{\infty}(\mathbb{T}^N)$ is a deterministic function.
\vskip 0.3cm
The purpose of this paper is to establish the quadratic transportation cost inequality for the solution of the stochastic conservation laws. Let us recall the relevant concepts.
Let $(X, d)$ be a metric space equipped with the Borel $\sigma$-field ${\cal B}$. Let $\mu$, $\nu$ be two Borel probability measures on the metric space $(X, d)$. The $L^p$-Wasserstein distance between $\mu$ and $\nu$ is defined as
$$W_p(\nu, \mu):=\left[\inf \iint_{X\times X}d(x,y)^p\,\pi(\mathrm{d}x,\mathrm{d}y)\right]^{\frac{1}{p}},$$
where the infimum is taken over all probability measures $\pi$ on the product space $X\times X$ with marginals $\mu$ and $\nu$. Recall that the Kullback information (or relative entropy) of $\nu$ with respect to $\mu$ is defined by
\[H(\nu|\mu):=\int_X \log\left(\frac{\mathrm{d}\nu}{\mathrm{d}\mu}\right)\, \mathrm{d}\nu ,\]
if $\nu$ is absolutely continuous with respect to $\mu$, and $+\infty$ otherwise.
\begin{dfn}
We say that a
measure $\mu$ satisfies the $L^p$-transportation cost inequality if there exists a constant $C>0$ such that for all probability measures $\nu$,
\begin{equation}\label{1.2}
W_p(\nu, \mu)\leq \sqrt{2C H(\nu| \mu)}.
\end{equation}
The case $p=2$ is referred to as the quadratic transportation cost inequality.
\end{dfn}
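As a concrete illustration of the definition above (not used in the proofs below), both sides of (\ref{1.2}) have closed forms when $\mu=N(0,1)$ and $\nu=N(m,\sigma^2)$ on the real line, and Talagrand's $T_2$ inequality then holds with $C=1$. The following sketch simply checks this numerically for a few parameter values.

```python
import numpy as np

# mu = N(0,1), nu = N(m, s^2) on R.  Closed forms (illustrative check only):
#   W_2(nu, mu)^2 = m^2 + (s - 1)^2
#   H(nu | mu)    = -log(s) + (s^2 + m^2)/2 - 1/2
def w2_squared(m, s):
    return m**2 + (s - 1.0)**2

def relative_entropy(m, s):
    return -np.log(s) + (s**2 + m**2) / 2.0 - 0.5

# Talagrand's T2 inequality for the standard Gaussian: W_2^2 <= 2 H  (i.e. C = 1)
for m in [-2.0, 0.0, 0.5, 3.0]:
    for s in [0.3, 1.0, 2.5]:
        assert w2_squared(m, s) <= 2.0 * relative_entropy(m, s) + 1e-12
```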
Transportation cost inequalities have close connections with other functional inequalities, such as Poincar\'e inequalities and logarithmic Sobolev inequalities, and they also imply the concentration of measure phenomenon.
\vskip 0.3cm
For a measurable subset $A\subset X$ and $r>0$, we denote by $A_r$ the $r$-neighborhood of $A$, namely
$A_r=\{x: d(x,A)<r\}$. We say that $\mu$ has normal concentration (or Gaussian tail estimates) on $(X, d)$ if there are constants $C,c>0$ such that for every $r>0$ and every Borel subset $A$ with $\mu(A)\geq \frac{1}{2}$,
\begin{equation}\label{0.1}
1-\mu(A_r)\leq Ce^{-cr^2}.
\end{equation}
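For instance, for the standard Gaussian measure on $\mathbb{R}$ with $A=(-\infty,0]$ (so that $\mu(A)=1/2$ and $A_r=(-\infty,r)$), the classical tail bound $1-\Phi(r)\leq \frac{1}{2}e^{-r^2/2}$ for the standard normal distribution function $\Phi$ gives (\ref{0.1}) with $C=\frac12$ and $c=\frac12$. A quick numerical check, purely illustrative:

```python
import numpy as np
from scipy.stats import norm

# mu = N(0,1), A = (-inf, 0], A_r = (-inf, r):  1 - mu(A_r) = 1 - Phi(r).
# Check the normal concentration bound 1 - Phi(r) <= (1/2) exp(-r^2/2).
for r in np.linspace(0.0, 6.0, 61):
    assert norm.sf(r) <= 0.5 * np.exp(-r**2 / 2.0) + 1e-15
```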
The fact that the $L^1$-transportation cost inequality implies normal concentration was obtained in \cite{M-1,M-2} by Marton and in \cite{T1,T2,T3} by Talagrand. An elegant, simple proof of this fact is also contained in the book \cite{L}. The connection of the quadratic transportation cost inequality with other functional inequalities was studied in \cite{OV} by Otto and Villani (see also \cite{L}). For other related interesting works, we refer the reader to \cite{GRS}, \cite{LW}, \cite{PS}, \cite{PS1}.
\vskip 0.3cm
In the past decades, many people have established quadratic transportation cost inequalities for various kinds of interesting measures. Let us mention several papers which are relevant to our work. The transportation cost inequalities for stochastic differential equations were obtained by H. Djellout, A. Guillin and L. Wu in \cite{DGW}. The measure concentration for multidimensional diffusion processes with reflecting boundary conditions was considered by S. Pal in \cite{P}. The quadratic transportation cost inequalities for stochastic partial differential equations (SPDEs) driven by Gaussian noise which is white in time and colored in space were obtained by A. S. Ustunel in \cite{U}. In the article \cite{BH2}, the authors obtained the quadratic transportation cost inequality under the $L^2$-distance for stochastic heat equations. In \cite{KS}, the authors established the quadratic transportation cost inequality for more general SPDEs under the $L^2$-distance, and under the uniform distance in the case of additive noise. In \cite{SZ}, the authors obtained the quadratic transportation cost inequality for stochastic heat equations driven by multiplicative space-time white noise under the uniform distance.
\vskip 0.3cm
On the other hand, both deterministic ($g_k=0$) and stochastic conservation laws have been studied extensively by many people.
Conservation laws are fundamental to our understanding of the space-time evolution of interesting physical quantities. For more background on this model, we refer the readers to
the monograph \cite{Dafermos}, the work of Ammar,
Wittbold and Carrillo \cite{K-P-J} and references therein. It is well known that the Cauchy problem
for the deterministic first-order PDE (\ref{P-19}) does not in general admit any global smooth solutions, but there exist infinitely many weak solutions to the deterministic Cauchy problem. To resolve this non-uniqueness, an additional entropy condition is imposed to identify the physical weak solution. Under this condition,
the notion of entropy solutions for the deterministic first-order scalar conservation laws was introduced by Kru\v{z}kov \cite{K-69,K-70}.
The kinetic formulation of weak entropy solutions of the Cauchy problem for a general multi-dimensional scalar conservation law (also called the kinetic system) was derived by Lions, Perthame and Tadmor in \cite{L-P-T}.
\vskip 0.3cm
In recent years, the theory of stochastic conservation laws has developed rapidly. We refer the reader to the references \cite{K}, \cite{V-W}, \cite{F-N}, \cite{DWZZ} etc. We
particularly mention the paper \cite{D-V-1} in which the authors proved the existence and uniqueness of kinetic solution to the Cauchy problem for (\ref{P-19}) in any dimension.
In addition, the long-time behavior of the first-order scalar conservation laws has been studied in the paper \cite{D-V-2}.
Recently, combining techniques used in the context of kinetic solutions as well as new results on large deviations, Dong et al. \cite{DWZZ} established Freidlin-Wentzell type large deviation principles (LDP) for the kinetic solution to scalar stochastic conservation laws.
\smallskip
The purpose of this paper is to establish the quadratic transportation cost inequality for the kinetic solution of the scalar stochastic conservation laws, which in particular implies the concentration phenomenon of the law of the solution. To our knowledge, the present paper is the first work towards proving the transportation cost inequality directly for the kinetic solutions to the scalar stochastic conservation laws.
Due to the lack of a viscous term, the kinetic solutions of (\ref{P-19}) live in the rather irregular space $L^1([0, T],L^1(\mathbb{T}^N))$, so we will use
the doubling variables method as in the work \cite{D-V-1}. Differing from \cite{D-V-1}, we need to deal with the martingale term carefully to derive a proper bound, which ensures that the Gronwall inequality can be applied to obtain an appropriate norm estimate (see (\ref{qq-10})). As an important part of the proof, we also need higher order estimates of the error term than those in \cite{D-V-1}, which are nontrivial and completely new.
This paper is organized as follows. In Section 2, we lay out the precise setup for the stochastic conservation law and recall some of the known results. Section 3 is devoted to the proof of the transportation cost inequality.
\section{Framework}\label{S:2}
\setcounter{equation}{0}
In this section, we will lay out the precise setup for the stochastic conservation law and recall some results which will be used later.
\subsection{Notation}
We will follow closely the framework of \cite{D-V-1}.
Let $\|\cdot\|_{L^p}$ denote the norm of usual Lebesgue space $L^p(\mathbb{T}^N)$ for $p\in [1,\infty]$. In particular, set $H=L^2(\mathbb{T}^N)$ with the corresponding norm $\|\cdot\|_H$.
$C_b$ represents the space of bounded, continuous functions and $C^1_b$ stands for the space of bounded, continuously differentiable functions having bounded first order derivative. Define the function $f(x,t,\xi):=I_{u(x,t)>\xi}$, which is the characteristic function of the subgraph of $u$. We write $f:=I_{u>\xi}$ for short.
Moreover, denote by the brackets $\langle\cdot,\cdot\rangle$ the duality between $C^{\infty}_c(\mathbb{T}^N\times \mathbb{R})$ and the space of distributions over $\mathbb{T}^N\times \mathbb{R}$. In what follows, with a slight abuse of the notation $\langle\cdot,\cdot\rangle$, we denote the following integral by
\[
\langle F, G \rangle:=\int_{\mathbb{T}^N}\int_{\mathbb{R}}F(x,\xi)G(x,\xi)dxd\xi, \quad F\in L^p(\mathbb{T}^N\times \mathbb{R}), G\in L^q(\mathbb{T}^N\times \mathbb{R}),
\]
where $1\leq p\leq +\infty$, $q:=\frac{p}{p-1}$ is the conjugate exponent of $p$. In particular, when $p=1$, we set $q=\infty$ by convention. For a measure $m$ on the Borel measurable space $\mathbb{T}^N\times[0,T]\times \mathbb{R}$, the shorthand $m(\phi)$ is defined by
\[
m(\phi):=\langle m, \phi \rangle([0,T]):=\int_{\mathbb{T}^N\times[0,T]\times \mathbb{R}}\phi(x,t,\xi)dm(x,t,\xi), \quad \phi\in C_b(\mathbb{T}^N\times[0,T]\times \mathbb{R}).
\]
In the sequel, the notation $a\lesssim b$ for $a,b\in \mathbb{R}$ means that $a\leq \mathcal{D}b$ for some constant $\mathcal{D}> 0$ independent of any parameters.
\subsection{Hypotheses}
For the flux function $A$ and the coefficients of (\ref{P-19}), we assume that
\begin{description}
\item[\textbf{Hypothesis H}] The flux function $A$ belongs to $C^2(\mathbb{R};\mathbb{R}^N)$ and its derivative $a:=A'$ is of polynomial growth with degree $q_0>1$. That is, there exists a constant $C(q_0)\geq0$ such that
\begin{eqnarray}\label{qeq-22}
|a(\xi)|\leq C(q_0)(1+|\xi|^{q_0}),\quad |a(\xi)-a(\zeta)|\leq \Upsilon(\xi,\zeta)|\xi-\zeta|,
\end{eqnarray}
where $\Upsilon(\xi,\zeta):=C(q_0)(1+|\xi|^{q_0-1}+|\zeta|^{q_0-1})$.
Moreover, we assume that $g_k\in C(\mathbb{T}^N\times \mathbb{R})$ satisfies the following bounds
\begin{eqnarray}\label{e-5}
|g_k(x,u)|&\leq& C^0_k, \quad \sum_{k\geq 1}|C^0_k|^2\leq D_0,\\
\label{e-6}
|g_k(x,u)-g_k(y,v)|&\leq& C^1_k(|x-y|+|u-v|),\quad \sum_{k\geq 1}|C^1_{k}|^2\leq \frac{D_1}{2},
\end{eqnarray}
for $x, y\in \mathbb{T}^N, u,v\in \mathbb{R}$, where $C^0_k, C^1_k, D_0, D_1$ are positive constants.
\end{description}
Hypothesis H implies that
\begin{eqnarray}\label{equ-28}
G^2(x,u):=\sum_{k\geq 1}|g_k(x,u)|^2&\leq& D_0,\\
\label{equ-29}
\sum_{k\geq 1}|g_k(x,u)-g_k(y,v)|^2&\leq& D_1\Big(|x-y|^2+|u-v|^2\Big).
\end{eqnarray}
\subsection{Kinetic solution}
Let us recall the notion of a kinetic solution to equation (\ref{P-19}) from \cite{D-V-1}.
\begin{dfn}(Kinetic measure)\label{dfn-3}
A map $m$ from $\Omega$ to the set of non-negative, finite measures over $\mathbb{T}^N\times [0,T]\times\mathbb{R}$ is said to be a kinetic measure, if
\begin{description}
\item[1.] $ m $ is measurable, that is, for each $\phi\in C_b(\mathbb{T}^N\times [0,T]\times \mathbb{R}), \langle m, \phi \rangle: \Omega\rightarrow \mathbb{R}$ is measurable,
\item[2.] $m$ vanishes for large $\xi$, i.e.,
\begin{eqnarray}\label{equ-37}
\lim_{R\rightarrow +\infty}\mathbb{E}[m(\mathbb{T}^N\times [0,T]\times B^c_R)]=0,
\end{eqnarray}
where $B^c_R:=\{\xi\in \mathbb{R}, |\xi|\geq R\}$,
\item[3.] for every $\phi\in C_b(\mathbb{T}^N\times \mathbb{R})$, the process
\[
(\omega,t)\in\Omega\times[0,T]\mapsto \langle m,\phi\rangle([0,t]):= \int_{\mathbb{T}^N\times [0,t]\times \mathbb{R}}\phi(x,\xi)dm(x,s,\xi)\in\mathbb{R}
\]
is predictable.
\end{description}
\end{dfn}
\begin{remark}\label{r-1}
For any $\phi\in C_b(\mathbb{T}^N\times \mathbb{R})$ and kinetic measure $m$, define $A_t:=\langle m, \phi\rangle([0,t]),$
then a.s., $t\mapsto A_t$ is a right continuous function of finite variation. Moreover, the function $A$ has left limits at any point $t\in (0,T]$. We write $A_{t^-}=\lim_{s\uparrow t}A_s$ and set $A_{0^-}=0$.
As a result, $A_{t^-}=\langle m, \phi\rangle([0,t))$, which is c\`{a}gl\`{a}d (left continuous with right limits).
\end{remark}
\begin{dfn}(Kinetic solution)\label{dfn-1}
Let $\eta\in L^{\infty}(\mathbb{T}^N)$. A measurable function $u: \mathbb{T}^N\times [0,T]\times\Omega\rightarrow \mathbb{R}$ is called a kinetic solution to (\ref{P-19}) with initial datum $\eta$, if
\begin{description}
\item[1.] $(u(t))_{t\in[0,T]}$ is predictable,
\item[2.] for any $p\geq1$, there exists $C_p\geq0$ such that
\begin{eqnarray}\label{qq-8}
\mathbb{E}\left(\underset{0\leq t\leq T}{{\rm{ess\sup}}}\ \|u(t)\|^p_{L^p(\mathbb{T}^N)}\right)\leq C_p,
\end{eqnarray}
\item[3.] there exists a kinetic measure $m$ such that $f:= I_{u>\xi}$ satisfies: for all $\varphi\in C^1_c(\mathbb{T}^N\times [0,T)\times \mathbb{R})$,
\begin{eqnarray}\label{q-2.1}
&&\int^T_0\langle f(t), \partial_t \varphi(t)\rangle dt+\langle f_0, \varphi(0)\rangle +\int^T_0\langle f(t), a(\xi)\cdot \nabla \varphi (t)\rangle dt\\ \notag
&=& -\sum_{k\geq 1}\int^T_0\int_{\mathbb{T}^N} g_k(x, u(t,x))\varphi (x,t,u(x,t))dxd\beta_k(t) \\
\notag
&& -\frac{1}{2}\int^T_0\int_{\mathbb{T}^N}\partial_{\xi}\varphi (x,t,u(x,t))G^2(x,u(t,x))dxdt+ m(\partial_{\xi} \varphi), \ a.s. ,
\end{eqnarray}
where $f_0=I_{\eta>\xi}$, $u(t)=u(\cdot,t,\cdot)$ and $G^2=\sum^{\infty}_{k=1}|g_k|^2$.
\end{description}
\end{dfn}
Let $(X,\lambda)$ be a finite measure space. For a measurable function $u: X\rightarrow \mathbb{R}$, define $f: X\times \mathbb{R}\rightarrow [0,1]$ by $f(z,\xi)=I_{u(z)>\xi}$ a.e.
We use $\bar{f}:=1-f$ to denote its conjugate function. Define $\Lambda_f(z,\xi):=f(z,\xi)-I_{0>\xi}$, which can be viewed as a correction to $f$. Note that $\Lambda_f$ is integrable on $X\times \mathbb{R}$ if $u$ is.
\vskip 0.3cm
It is shown in \cite{D-V-1} that almost surely, for each kinetic solution $u$, the function $f=I_{u(x,t)>\xi}$ admits left and right weak limits at any point $t\in[0,T]$, and the weak form (\ref{q-2.1}) satisfied by a kinetic solution can be strengthened to be weak only with respect to $x$ and $\xi$. More precisely, the following results are obtained.
\begin{prp}(\cite{D-V-1}, Left and right weak limits)\label{prp-3} Let $u$ be a kinetic solution to (\ref{P-19}) with initial value $\eta$. Then $f=I_{u(x,t)>\xi}$ admits, almost surely, left and right limits respectively at every point $t\in [0,T]$. More precisely, for any $t\in [0,T]$, there exist kinetic functions $f^{t\pm}$ on $\Omega\times \mathbb{T}^N\times \mathbb{R}$ such that $\mathbb{P}-$a.s.
\begin{eqnarray}\label{e-50}
\langle f(t-r),\varphi\rangle\rightarrow \langle f^{t-},\varphi\rangle
\end{eqnarray}
and
\begin{eqnarray}\label{e-51}
\langle f(t+r),\varphi\rangle\rightarrow \langle f^{t+},\varphi\rangle
\end{eqnarray}
as $r\rightarrow 0$ for all $\varphi\in C^1_c(\mathbb{T}^N\times \mathbb{R})$. Moreover, almost surely,
\[
\langle f^{t+}-f^{t-}, \varphi\rangle=-\int_{\mathbb{T}^N\times[0,T]\times \mathbb{R}}\partial_{\xi}\varphi(x,\xi)I_{\{t\}}(s)dm(x,s,\xi).
\]
In particular, almost surely, the set of $t\in [0,T]$ fulfilling $f^{t+}\neq f^{t-}$ is countable.
\end{prp}
For the function $f=I_{u(x,t)>\xi}$ in Proposition \ref{prp-3}, define $f^{\pm}$ by $f^{\pm}(t)=f^{t \pm}$, $t\in [0,T]$. Since we are dealing with the filtration associated to Brownian motion, both $f^{+}$ and $f^{-}$ are clearly predictable as well. Also $f=f^+=f^-$ almost everywhere in time and we can take any of them in an integral with respect to the Lebesgue measure or in a stochastic integral. However, if the integral is with respect to a measure, typically a kinetic measure in this article, the integral is not well-defined for $f$ and may differ if one chooses $f^+ $ or $f^-$.
The following result was proved in \cite{D-V-1}.
\begin{lemma}\label{lem-1}
The weak form (\ref{q-2.1}) satisfied by $f= I_{u>\xi}$ can be strengthened to be weak only with respect to $x$ and $\xi$. Concretely, for all $t\in [0,T)$ and $\varphi\in C^1_c(\mathbb{T}^N\times \mathbb{R})$, $f= I_{u>\xi}$ satisfies
\begin{eqnarray}\notag
\langle f^+(t),\varphi\rangle&=&\langle f_{0}, \varphi\rangle+\int^t_0\langle f(s), a(\xi)\cdot \nabla \varphi\rangle ds\\
\notag
&&+\sum_{k\geq 1}\int^t_0\int_{\mathbb{T}^N}\int_{\mathbb{R}}g_k(x,\xi)\varphi(x,\xi)d\nu_{x,s}(\xi)dxd\beta_k(s)\\
\label{qq-17}
&& +\frac{1}{2}\int^t_0\int_{\mathbb{T}^N}\int_{\mathbb{R}}\partial_{\xi}\varphi(x,\xi)G^2(x,\xi)d\nu_{x,s}(\xi)dxds- \langle m,\partial_{\xi} \varphi\rangle([0,t]), \quad a.s.,
\end{eqnarray}
and we set $f^+(T)=f(T)$.
\end{lemma}
Here, $\nu_{x,s}(\xi)=-\partial_{\xi}f(x,s,\xi)=\delta_{u(x,s)=\xi}$.
\begin{remark} By a slight modification of the proof of Lemma \ref{lem-1}, we have that for all $t\in (0,T]$ and $\varphi\in C^1_c(\mathbb{T}^N\times \mathbb{R})$, $f= I_{u>\xi}$ satisfies
\begin{eqnarray}\notag
\langle f^-(t),\varphi\rangle&=&\langle f_{0}, \varphi\rangle+\int^t_0\langle f(s), a(\xi)\cdot \nabla \varphi\rangle ds\\
\notag
&&+\sum_{k\geq 1}\int^t_0\int_{\mathbb{T}^N}\int_{\mathbb{R}}g_k(x,\xi)\varphi(x,\xi)d\nu_{x,s}(\xi)dxd\beta_k(s)\\
\label{e-80}
&& +\frac{1}{2}\int^t_0\int_{\mathbb{T}^N}\int_{\mathbb{R}}\partial_{\xi}\varphi(x,\xi)G^2(x,\xi)d\nu_{x,s}(\xi)dxds- \langle m,\partial_{\xi} \varphi\rangle([0,t)), \quad a.s.,
\end{eqnarray}
and we set $ f^-(0)=f_0$.
\end{remark}
The following well-posedness of (\ref{P-19}) was established in \cite{D-V-1}.
\begin{thm}\label{thm-4}
(\cite{D-V-1}, Existence and Uniqueness) Let $\eta\in L^{\infty}(\mathbb{T}^N)$. Assume Hypothesis H holds, then there is a unique kinetic solution $u$ to equation (\ref{P-19}) with initial datum $\eta$.
\end{thm}
\section{Transportation cost inequality}
Let $\mu$ be the law of the random field solution $u(\cdot, \cdot)$ of SPDE (\ref{P-19}), viewed as a probability measure on $L^1([0, T],L^1(\mathbb{T}^N))$. First we state a lemma which is essentially proved in \cite{KS} describing the probability measures $\nu$ that are absolutely continuous with respect to $\mu$.
\vskip 0.4cm
Let $\nu\ll \mu$ on $L^1([0, T],L^1(\mathbb{T}^N))$.
Define a new probability measure $\mathbb{Q}$ on the filtered probability space $(\Omega, {\cal F}, \{{\cal F}_{t}\}_{0\leq t\leq T}, \mathbb{P})$ by
\begin{align}\label{add 0303.1}
\mathrm{d}\mathbb{Q}:=\frac{\mathrm{d}\nu}{\mathrm{d}\mu}(u) \,\mathrm{d}\mathbb{P} .
\end{align}
Denote the Radon-Nikodym derivative restricted on ${\cal F}_t$ by
\[M_t:=\left. \frac{\mathrm{d}\mathbb{Q}}{\mathrm{d}\mathbb{P}}\right |_{{\cal F}_t}, \quad t\in [0, T].\]
Then $\{M_t\}_{t\in[0,T]}$ is a $\mathbb{P}$-martingale. A variant of the following result was proved in \cite{KS}.
\begin{lemma}
There exists an adapted stochastic process $h=\{h(s)=(h_1(s),h_2(s),\cdots)\in l^2,\ s\in[0,T]\}$ such that $\mathbb{Q}$-a.s., for all $t\in [0, T]$,
\begin{align*}
\int_0^t |h|_{l^2}^2(s)\,\mathrm{d}s<\infty
\end{align*}
and the processes $\widetilde{\beta}_k: [0, T]\rightarrow \mathbb{R}$ defined by
\begin{align}\label{4.2}
\widetilde{\beta}_k(t):=\beta_k(t)-\int_0^t h_k(s)\,\mathrm{d}s,
\end{align}
are independent Brownian motions under the measure $\mathbb{Q}$. Moreover,
\begin{align}\label{4.3}
M_t=\exp\left (\sum_{k=1}^{\infty}\int_0^th_k(s)\,\mathrm{d}\beta_k(s)-\frac{1}{2}\int_0^t |h|_{l^2}^2(s)\,\mathrm{d}s\right ), \quad \mathbb{Q}-a.s.,
\end{align}
and
\begin{align}\label{4.4}
H(\nu|\mu)=\frac{1}{2}\mathbb{E}^{\mathbb{Q}}\left[\int_0^T|h|_{l^2}^2(s)\,\mathrm{d}s\right],
\end{align}
where $\mathbb{E}^{\mathbb{Q}}$ stands for the expectation under the measure $\mathbb{Q}$.
\end{lemma}
\begin{thm}\label{thm-5}
Let $\eta\in L^{\infty}(\mathbb{T}^N)$. Assume Hypothesis H holds. Then the law $\mu$ of the solution of the stochastic conservation law (\ref{P-19}) satisfies the quadratic transportation cost inequality on the space $L^1([0,T],L^1(\mathbb{T}^N))$.
\end{thm}
\noindent {\bf Proof}.
Take $\nu\ll \mu$ on $L^1([0, T],L^1(\mathbb{T}^N))$.
Let $\mathbb{Q}$ be the probability measure defined as in (\ref{add 0303.1}).
Let $h(t)$ be the corresponding stochastic process appearing in Lemma 3.1. Then, by the Girsanov theorem, under the measure $\mathbb{Q}$ the solution of equation (\ref{P-19}), denoted now by $u^h$, satisfies the following stochastic partial differential equation (SPDE):
\begin{eqnarray}\label{q-4.1}
\left\{
\begin{array}{ll}
du^h(t,x)+div A(u^h(t,x))dt=\sum_{k\geq 1}g_k(x,u^h(t,x)) d\widetilde{\beta}_k(t)+\sum_{k\geq 1}g_k(x,u^h(t,x))h_k(t)dt \quad {\rm{in}}\ \mathbb{T}^N\times (0,T],\\
u^h(\cdot,0)=\eta(\cdot) \quad {\rm{on}} \ \mathbb{T}^N.
\end{array}
\right.
\end{eqnarray}
Similarly to Lemma 2.1, we can show that the kinetic solution $u^h$ satisfies the following: for any $p\geq1$, there exists $C_p\geq0$ such that
\begin{eqnarray}\label{equation-1}
\mathbb{E}^{\mathbb{Q}}\left(\underset{0\leq t\leq T}{{\rm{ess\sup}}}\ \|u^h(t)\|^p_{L^p(\mathbb{T}^N)}\right)\leq C_p,
\end{eqnarray}
and there exists a kinetic measure $m^h$ such that
for all $t\in [0,T)$ and $\varphi\in C^1_c(\mathbb{T}^N\times \mathbb{R})$, $f:= I_{u^h>\xi}$ satisfies
\begin{eqnarray}\notag
\langle f^+(t),\varphi\rangle&=&\langle f_{0}, \varphi\rangle+\int^t_0\langle f(s), a(\xi)\cdot \nabla \varphi\rangle ds\\
\notag
&& +\sum_{k\geq 1}\int^t_0\int_{\mathbb{T}^N}\int_{\mathbb{R}}g_k(x,\xi)\varphi(x,\xi)d\nu^h_{x,s}(\xi)dx d\widetilde{\beta}_k(s)\\ \notag
&& +\sum_{k\geq 1}\int^t_0\int_{\mathbb{T}^N}\int_{\mathbb{R}} g_k(x, \xi)\varphi (x,\xi)h_k(s)d\nu^h_{x,s}(\xi)dxds \\
\label{equation-2}
&& +\frac{1}{2}\int^t_0\int_{\mathbb{T}^N}\int_{\mathbb{R}}\partial_{\xi}\varphi(x,\xi)G^2(x,\xi)d\nu^h_{x,s}(\xi)dxds- \langle m^h,\partial_{\xi} \varphi\rangle([0,t]), \quad a.s.,
\end{eqnarray}
where $\nu^{h}_{x,s}(\xi)=-\partial_{\xi}f(x,s,\xi)=\delta_{u^{h}(x,s)=\xi}$ and we set $f^+(T)=f(T)$.
\vskip 0.4cm
Consider the solution of the following SPDE:
\begin{eqnarray}\label{q-4.2}
\left\{
\begin{array}{ll}
du(t,x)+div A(u(t,x))dt=\sum_{k\geq 1}g_k(x,u(t,x)) d\widetilde{\beta}_k(t) \quad {\rm{in}}\ \mathbb{T}^N\times (0,T],\\
u(\cdot,0)=\eta(\cdot) \quad {\rm{on}}\ \mathbb{T}^N.
\end{array}
\right.
\end{eqnarray}
By Lemma 3.1, it follows that under the measure $\mathbb{Q}$, the law of $(u,u^h)$ forms a coupling of $(\mu, \nu)$. Therefore by the definition of the Wasserstein distance,
\[
W_2(\nu, \mu)^2\leq \mathbb{E}^{\mathbb{Q}}\left[\left |\int_0^T\int_{\mathbb{T}^N}|u(t,x)-u^h(t,x)|dtdx\right|^2 \right].
\]
In view of (\ref{4.4}), to prove the quadratic transportation cost inequality
\begin{align}
W_2(\nu, \mu)\leq \sqrt{2C H(\nu|\mu)} ,
\end{align}
it is sufficient to show that
\begin{align}\label{111.1}
\mathbb{E}^{\mathbb{Q}}\left[\left |\int_0^T\int_{\mathbb{T}^N}|u(t,x)-u^h(t,x)|dtdx\right|^2 \right]
\leq C \mathbb{E}^{\mathbb{Q}}\left[\int_0^T|h|_{l^2}^2(s)\,\mathrm{d}s\right]
\end{align}
when the right hand side of (\ref{111.1}) is finite.
\vskip 0.3cm
For simplicity, in the sequel we still denote $\mathbb{E}^{\mathbb{Q}}$ by the symbol $\mathbb{E}$ and denote $\widetilde{\beta}_k$ by ${\beta}_k$.
The proof of (\ref{111.1}) is technical and lengthy. It is divided into the following two propositions.
\vskip 0.3cm
Following the idea of the proof of Proposition 13 in \cite{D-V-1} and using the doubling variables method, we have the following result relating the two kinetic solutions $u$ and $u^{h}$. As the proof is very similar to that of Proposition 13 in \cite{D-V-1}, we omit it and refer the reader to \cite{D-V-1}.
\begin{prp}\label{prp-1}
Assume Hypothesis H is in place. Let $u$ and $u^{h}$ be the kinetic solutions of (\ref{q-4.2}) and (\ref{q-4.1}), respectively. Then, for all $0< t< T$, and non-negative test functions $\rho\in C^{\infty}(\mathbb{T}^N), \psi\in C^{\infty}_c(\mathbb{R})$, the corresponding functions $f_1(x,t,\xi):=I_{u^{h}(x,t)>\xi}$ and $f_2(y,t,\zeta):=I_{u(y,t)>\zeta}$ satisfy the following
\begin{eqnarray}\notag
&&\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}\rho (x-y)\psi(\xi-\zeta)(f^{\pm}_1(x,t,\xi)\bar{f}^{\pm}_2(y,t,\zeta)+\bar{f}^{\pm}_1(x,t,\xi)f^{\pm}_2(y,t,\zeta))d\xi d\zeta dxdy\\
\notag
&\leq & \int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}\rho (x-y)\psi(\xi-\zeta)(f_{1,0}(x,\xi)\bar{f}_{2,0}(y,\zeta)+\bar{f}_{1,0}(x,\xi)f_{2,0}(y,\zeta))d\xi d\zeta dxdy\\
\label{eq-14-1}
&& +I(t)+J(t)+K(t)+H(t), \quad a.s.,
\end{eqnarray}
where
\begin{eqnarray*}
I(t)&=&\int^t_0\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}(f_1\bar{f}_2+\bar{f}_1f_2)(a (\xi)-a(\zeta))\cdot\nabla_x\alpha d\xi d\zeta dxdyds,\\
J(t)&=&\int^t_0\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}\alpha \sum_{k\geq 1}|g_k(x,\xi)-g_k(y,\zeta)|^2d\nu^{1}_{x,s}\otimes \nu^{2}_{y,s}(\xi,\zeta)dxdyds,\\
K(t)&=& 2\sum_{k\geq 1}\int^t_0\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}(g_k(x,\xi)-g_k(y,\zeta))\rho(x-y)\chi(\xi,\zeta)d\nu^{1}_{x,s}\otimes \nu^{2}_{y,s}(\xi,\zeta)dxdyd\beta_k(s),\\
H(t)&=&2\sum_{k\geq 1}\int^t_0\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}g_k(x,\xi)h_k(s)\rho(x-y)\chi(\xi,\zeta)d\nu^{1}_{x,s}\otimes \nu^{2}_{y,s}(\xi,\zeta)dxdyds,
\end{eqnarray*}
with $f_{1,0}(x,\xi)=I_{\eta(x)>\xi}, f_{2,0}(y,\zeta)=I_{\eta(y)>\zeta}$, $\alpha=\rho (x-y)\psi(\xi-\zeta)$, $\nu^{1}_{x,s}=-\partial_{\xi}f_1(s,x,\xi)=\delta_{u^{h}(x,s)=\xi}, \nu^{2}_{y,s}=\partial_{\zeta}\bar{f}_2(s,y,\zeta)=\delta_{u(y,s)=\zeta}$ and $\chi(\xi,\zeta)=\int^{\xi}_{-\infty}\psi(\xi'-\zeta)d\xi'=\int^{\xi-\zeta}_{-\infty}\psi(y)dy$.
\end{prp}
The statement (\ref{111.1}) is contained in the next proposition.
\begin{prp}\label{prp-2}
For $T>0$, it holds that
\begin{eqnarray}
\mathbb{E}\Big|\int_0^T\|u^{h}(t)-u(t)\|_{L^1(\mathbb{T}^N)}dt\Big|^2\leq C\mathbb{E}\Big[\int^T_0|h|^2_{l^2}(t)dt\Big],
\end{eqnarray}
where $C=C(T,D_0,D_1)$.
\end{prp}
\begin{proof}
Let $\rho_{\gamma}, \psi_{\delta}$ be approximations to the identity on $\mathbb{T}^N$ and $\mathbb{R}$, respectively. That is, let $\rho\in C^{\infty}(\mathbb{T}^N)$, $\psi\in C^{\infty}_c(\mathbb{R})$ be symmetric non-negative functions such that $\int_{\mathbb{T}^N}\rho =1$, $\int_{\mathbb{R}}\psi =1$ and ${\rm{supp}}\,\psi \subset (-1,1)$. We define
\[
\rho_{\gamma}(x)=\frac{1}{\gamma^N}\rho\Big(\frac{x}{\gamma}\Big), \quad \psi_{\delta}(\xi)=\frac{1}{\delta}\psi\Big(\frac{\xi}{\delta}\Big).
\]
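The two elementary properties of $\psi_{\delta}$ used below, namely $\int_{\mathbb{R}}\psi_{\delta}=1$ and, by symmetry, $\int^{\delta}_{0}\psi_{\delta}=\frac12$, can be checked numerically. The sketch below uses the standard bump function as one admissible choice of $\psi$; it is illustrative only and not part of the proof.

```python
import numpy as np

def bump(x):
    """Standard symmetric mollifier profile on (-1, 1), unnormalized."""
    out = np.zeros_like(x)
    inside = np.abs(x) < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - x[inside]**2))
    return out

x = np.linspace(-1.0, 1.0, 200001)
dx = x[1] - x[0]
psi = bump(x)
psi /= psi.sum() * dx                 # normalize so that int psi = 1

delta = 0.1
xs, dxs = delta * x, delta * dx       # psi_delta lives on (-delta, delta)
psi_delta = psi / delta               # psi_delta(xs) = psi(xs / delta) / delta

mass = psi_delta.sum() * dxs                    # total mass: should be ~ 1
right_half = psi_delta[x >= 0.0].sum() * dxs    # mass on [0, delta]: ~ 1/2
```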
Letting $\rho:=\rho_{\gamma}(x-y)$ and $\psi:=\psi_{\delta}(\xi-\zeta)$ in Proposition \ref{prp-1}, we get from (\ref{eq-14-1}) that
\begin{eqnarray}\label{111.1-1}
&&\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}\rho_{\gamma} (x-y)\psi_{\delta}(\xi-\zeta)(f^{\pm}_1(x,t,\xi)\bar{f}^{\pm}_2(y,t,\zeta)+\bar{f}^{\pm}_1(x,t,\xi)f^{\pm}_2(y,t,\zeta))d\xi d\zeta dxdy \nonumber\\
&\leq & \int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}\rho_{\gamma} (x-y)\psi_{\delta}(\xi-\zeta)(f_{1,0}(x,\xi)\bar{f}_{2,0}(y,\zeta)+\bar{f}_{1,0}(x,\xi)f_{2,0}(y,\zeta))d\xi d\zeta dxdy\nonumber\\
&&\ +\tilde{I}(t)+\tilde{J}(t)+\tilde{K}(t)+\tilde{H}(t),\quad a.s.,
\end{eqnarray}
where $\tilde{I}, \tilde{J}, \tilde{K}, \tilde{H}$ are the corresponding terms $I,J,K,H$ in the statement of Proposition \ref{prp-1} with $\rho$, $\psi$ replaced by $\rho_{\gamma}$, $\psi_{\delta}$, respectively. For simplicity, we still use the notation:
\[
\chi(\xi,\zeta)=\int_{-\infty}^{\xi-\zeta}\psi_{\delta}(y)dy.
\]
In view of the following identities,
\begin{eqnarray}\label{111.1-2}
\int_{\mathbb{R}}I_{u^{h,\pm}>\xi}\overline{I_{u^{\pm}>\xi}}d\xi=(u^{h,\pm}-u^{\pm})^+,
\quad
\int_{\mathbb{R}}\overline{I_{u^{h,\pm}>\xi}}I_{u^{\pm}>\xi}d\xi=(u^{h,\pm}-u^{\pm})^-,
\end{eqnarray}
we will start with the estimate (\ref{111.1-1}) and eventually let $\gamma$, $\delta$ appropriately tend to zero to prove the proposition.
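The identities (\ref{111.1-2}) can be verified pointwise by a direct Riemann sum; the following sketch (with an arbitrary grid, purely illustrative) checks the first one, $\int_{\mathbb{R}}I_{a>\xi}\,\overline{I_{b>\xi}}\,d\xi=(a-b)^+$, for a few pairs $(a,b)$.

```python
import numpy as np

xi = np.linspace(-10.0, 10.0, 2000001)   # fine grid on a window containing a, b
dxi = xi[1] - xi[0]

def positive_part_integral(a, b):
    """Riemann sum for  int_R I_{a > xi} * (1 - I_{b > xi}) d(xi),
    i.e. the Lebesgue measure of the interval [b, a) when a > b."""
    integrand = ((a > xi) & ~(b > xi)).astype(float)
    return integrand.sum() * dxi

for a, b in [(2.0, -1.0), (-1.0, 2.0), (0.5, 0.5), (3.25, 1.0)]:
    assert abs(positive_part_integral(a, b) - max(a - b, 0.0)) < 1e-3
```

The second identity follows in the same way with the roles of the indicators exchanged.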
For any $t\in [0,T]$, define the error term
\begin{eqnarray}\notag
&&\mathcal{E}_t(\gamma,\delta)\\ \notag
&:=&\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}(f^{\pm}_1(x,t,\xi)\bar{f}^{\pm}_2(y,t,\zeta)+\bar{f}^{\pm}_1(x,t,\xi){f}^{\pm}_2(y,t,\zeta))\rho_{\gamma}(x-y)\psi_{\delta}(\xi-\zeta)dxdyd\xi d\zeta\\
\label{qq-3}
&&-\int_{\mathbb{T}^N}\int_{\mathbb{R}}(f^{\pm}_1(x,t,\xi)\bar{f}^{\pm}_2(x,t,\xi)+\bar{f}^{\pm}_1(x,t,\xi)f^{\pm}_2(x,t,\xi))d\xi dx.
\end{eqnarray}
Using $\int_{\mathbb{R}}\psi_{\delta}(\xi-\zeta)d\zeta=1$, $\int^{\xi}_{\xi-\delta}\psi_{\delta}(\xi-\zeta)d\zeta=\frac{1}{2}$ and $\int_{(\mathbb{T}^N)^2}\rho_{\gamma}(x-y)dxdy\leq1$, we find that
\begin{eqnarray}\notag
&&\Big|\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}}\rho_{\gamma}(x-y)f^{\pm}_1(x,t,\xi)\bar{f}^{\pm}_2(y,t,\xi)d\xi dxdy\\ \notag
&&-\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}f^{\pm}_1(x,t,\xi)\bar{f}^{\pm}_2(y,t,\zeta)\rho_{\gamma}(x-y)\psi_{\delta}(\xi-\zeta)dxdyd\xi d\zeta\Big|\\ \notag
&=&\Big|\int_{(\mathbb{T}^N)^2}\rho_{\gamma}(x-y)\int_{\mathbb{R}}I_{u^{h, \pm}(x,t)>\xi}\int_{\mathbb{R}}\psi_{\delta}(\xi-\zeta)(I_{u^{\pm}(y,t)\leq\xi}-I_{u^{\pm}(y,t)\leq \zeta})d\zeta d\xi dxdy\Big|\\ \notag
&\leq&\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}}\rho_{\gamma}(x-y)I_{u^{h,\pm}(x,t)>\xi}\int^{\xi}_{\xi-\delta}\psi_{\delta}(\xi-\zeta)I_{\zeta<u^{\pm}(y,t)\leq\xi} d\zeta d\xi dxdy\\ \notag
&&\ +\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}}\rho_{\gamma}(x-y)I_{u^{h,\pm}(x,t)>\xi}\int^{\xi+\delta}_{\xi}\psi_{\delta}(\xi-\zeta)I_{\xi<u^{\pm}(y,t)\leq\zeta} d\zeta d\xi dxdy\\ \notag
&\leq& \frac{1}{2}\int_{(\mathbb{T}^N)^2}\rho_{\gamma}(x-y)I_{{u^{h,\pm}(x,t)>u^{\pm}(y,t)}}\int^{\min\{u^{h,\pm}(x,t),u^{\pm}(y,t)+\delta\}}_{u^{\pm}(y,t)}d\xi dxdy\\ \notag
&&\ +\frac{1}{2}\int_{(\mathbb{T}^N)^2}\rho_{\gamma}(x-y)I_{{u^{\pm}(y,t)-\delta<u^{h,\pm}(x,t)}}\int^{\min\{u^{h,\pm}(x,t),u^{\pm}(y,t)\}}_{u^{\pm}(y,t)-\delta}d\xi dxdy\\ \notag
&=& \frac{\delta}{2}\int_{(\mathbb{T}^N)^2}\rho_{\gamma}(x-y) I_{{ u^{h,\pm}(x,t)>u^{\pm}(y,t)+\delta }}dxdy\\ \notag
&&+\frac{1}{2}\int_{(\mathbb{T}^N)^2}\rho_{\gamma}(x-y)I_{{u^{\pm}(y,t)< u^{h,\pm}(x,t)\leq u^{\pm}(y,t)+\delta }}(u^{h,\pm}(x,t)-u^{\pm}(y,t))dxdy\\ \notag
&&+\frac{\delta}{2}\int_{(\mathbb{T}^N)^2}\rho_{\gamma}(x-y)I_{{u^{\pm}(y,t)<u^{h,\pm}(x,t)}}dxdy\\ \notag
&&+\frac{1}{2}\int_{(\mathbb{T}^N)^2}\rho_{\gamma}(x-y)I_{{u^{\pm}(y,t)-\delta<u^{h,\pm}(x,t)\leq u^{\pm}(y,t)}}(u^{h,\pm}(x,t)-u^{\pm}(y,t)+\delta) dxdy\\
\label{e-23}
&\leq & 2\delta, \quad a.s..
\end{eqnarray}
Similarly, we have
\begin{eqnarray}\notag
&&\Big|\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}}\rho_{\gamma}(x-y)\bar{f}^{\pm}_1(x,t,\xi){f}^{\pm}_2(y,t,\xi)d\xi dxdy\\
\label{e-22}
&&-\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}\bar{f}^{\pm}_1(x,t,\xi){f}^{\pm}_2(y,t,\zeta)\rho_{\gamma}(x-y)\psi_{\delta}(\xi-\zeta)dxdyd\xi d\zeta\Big|
\leq 2\delta, \quad a.s..
\end{eqnarray}
Moreover, when $\gamma$ is small enough, it follows that
\begin{eqnarray}\notag
&&\Big|\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}}\rho_{\gamma}(x-y)f^{\pm}_1(x,t,\xi)\bar{f}^{\pm}_2(y,t,\xi)d\xi dydx-\int_{\mathbb{T}^N}\int_{\mathbb{R}}f^{\pm}_1(x,t,\xi)\bar{f}^{\pm}_2(x,t,\xi)d\xi dx\Big|\\ \notag
&=&\Big|\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}}\rho_{\gamma}(x-y)f^{\pm}_1(x,t,\xi)\bar{f}^{\pm}_2(y,t,\xi)d\xi dydx-\int_{\mathbb{T}^N}\int_{|z|<\gamma}\int_{\mathbb{R}}\rho_{\gamma}(z)f^{\pm}_1(x,t,\xi)\bar{f}^{\pm}_2(x,t,\xi)d\xi dzdx\Big|\\ \notag
&=&\Big|\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}}\rho_{\gamma}(x-y)f^{\pm}_1(x,t,\xi)(\bar{f}^{\pm}_2(y,t,\xi)-\bar{f}^{\pm}_2(x,t,\xi))d\xi dydx\Big|\\ \notag
&\leq&\sup_{|z|<\gamma}\int_{\mathbb{T}^N}\int_{\mathbb{R}}f^{\pm}_1(x,t,\xi)|\bar{f}^{\pm}_2(x-z,t,\xi)-\bar{f}^{\pm}_2(x,t,\xi)|d\xi dx\\ \label{e-43}
&\leq&
\sup_{|z|<\gamma}\int_{\mathbb{T}^N}\int_{\mathbb{R}}|\Lambda_{f^{\pm}_2}(x-z,t,\xi)-\Lambda_{f^{\pm}_2}(x,t,\xi)|d\xi dx.
\end{eqnarray}
Since $\Lambda_{f_2}$ is integrable, for a countable sequence $\gamma_n\downarrow 0$ the bound (\ref{e-43}) holds a.s. for all $n$; hence, passing to the limit $n\rightarrow \infty$, we get
\begin{eqnarray}\label{qq-1}
\lim_{n\rightarrow \infty}\Big|\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}}\rho_{\gamma_n}(x-y)f^{\pm}_1(x,t,\xi)\bar{f}^{\pm}_2(y,t,\xi)d\xi dxdy-\int_{\mathbb{T}^N}\int_{\mathbb{R}}f^{\pm}_1(x,t,\xi)\bar{f}^{\pm}_2(x,t,\xi)d\xi dx\Big|= 0, \ a.s..
\end{eqnarray}
Similarly, it holds that
\begin{eqnarray}\label{qq-2}
\lim_{n\rightarrow \infty}\Big|\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}}\rho_{\gamma_n}(x-y)\bar{f}^{\pm}_1(x,t,\xi)f^{\pm}_2(y,t,\xi)d\xi dxdy-\int_{\mathbb{T}^N}\int_{\mathbb{R}}\bar{f}^{\pm}_1(x,t,\xi)f^{\pm}_2(x,t,\xi)d\xi dx\Big|=0, \ a.s..
\end{eqnarray}
By a similar argument, passing to the limit $\delta\rightarrow 0$, it follows from (\ref{e-23})-(\ref{qq-2}) that
\begin{eqnarray}\notag
\lim_{n\rightarrow \infty}\mathcal{E}_t(\gamma_n,\delta_n)=0, \quad a.s..
\end{eqnarray}
With a slight abuse of notation, from now on we write
\begin{eqnarray}\label{qq-4}
\lim_{\gamma, \delta\rightarrow 0}\mathcal{E}_t(\gamma,\delta)=0, \quad a.s..
\end{eqnarray}
In particular, when $t=0$, it holds that
\begin{eqnarray}\label{qq-5}
\lim_{\gamma, \delta\rightarrow 0}\mathcal{E}_0(\gamma,\delta)=0.
\end{eqnarray}
Now, we will make some estimates for $\tilde{I}(t)$, $\tilde{J}(t)$, $\tilde{K}(t)$ and $\tilde{H}(t)$.
We start with $\tilde{I}(t)$. Set
\[
\Gamma(\xi,\zeta)=\int^{\infty}_{\zeta}\int^{\xi}_{-\infty}\Upsilon(\xi',\zeta')|\xi'-\zeta'|\psi_{\delta}(\xi'-\zeta')d\xi'd\zeta',
\]
where $\Upsilon(\xi, \zeta)$ is the function appearing in Hypothesis H. Integration by parts yields that
\begin{eqnarray*}
&&\int^t_0\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}f_1\bar{f}_2(a(\xi)-a(\zeta))\cdot \nabla_x \alpha d\xi d\zeta dxdyds\\
&\leq& \int^t_0\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}f_1\bar{f}_2\Upsilon(\xi,\zeta)|\xi-\zeta|| \nabla_x \rho_{\gamma}(x-y)|\psi_{\delta}(\xi-\zeta) d\xi d\zeta dxdyds\\
&=& -\int^t_0\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}f_1\bar{f}_2\frac{\partial^2 \Gamma(\xi,\zeta)}{\partial\xi\partial\zeta} |\nabla_x \rho_{\gamma}(x-y)| d\xi d\zeta dxdyds\\
&=&\int^t_0\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}\Gamma(\xi,\zeta)d\nu^{1}_{x,s}\otimes \nu^{2}_{y,s}(\xi,\zeta)|\nabla_x \rho_{\gamma}(x-y)|dxdyds\\
&\leq& C(q_0)\delta\int^t_0\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}(1+|\xi|^{q_0}+|\zeta|^{q_0})d\nu^{1}_{x,s}\otimes \nu^{2}_{y,s}(\xi,\zeta)|\nabla_x \rho_{\gamma}(x-y)|dxdyds\\
&\leq& C(q_0)\delta\gamma^{-1}\int^t_0\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}(1+|\xi|^{q_0}+|\zeta|^{q_0})d\nu^{1}_{x,s}\otimes \nu^{2}_{y,s}(\xi,\zeta)dxdyds,
\end{eqnarray*}
where we have used the fact that $a(\cdot)$ is of polynomial growth with degree $q_0$ and (30) in \cite{D-V-1}.
Namely, we have obtained that
\begin{eqnarray*}
&&\Big|\int^t_0\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}f_1\bar{f}_2(a(\xi)-a(\zeta))\cdot \nabla_x \alpha d\xi d\zeta dxdyds\Big|\\
&\leq& C(q_0)\delta\gamma^{-1}t+C(q_0)\delta\gamma^{-1}t\Big(\underset{0\leq s\leq t}{{\rm{ess\sup}}}\ \|u^{h}(s)\|^{q_0}_{L^{q_0}(\mathbb{T}^N)}+\underset{0\leq s\leq t}{{\rm{ess\sup}}}\ \|u(s)\|^{q_0}_{L^{q_0}(\mathbb{T}^N)}\Big). \quad a.s..
\end{eqnarray*}
Similar calculations lead to
\begin{eqnarray*}
&&\Big|\int^t_0\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}\bar{f}_1f_2(a(\xi)-a(\zeta))\cdot \nabla_x \alpha d\xi d\zeta dxdyds\Big|\\
&\leq& C(q_0)\delta\gamma^{-1}t+C(q_0)\delta\gamma^{-1}t\Big(\underset{0\leq s\leq t}{{\rm{ess\sup}}}\ \|u^{h}(s)\|^{q_0}_{L^{q_0}(\mathbb{T}^N)}+\underset{0\leq s\leq t}{{\rm{ess\sup}}}\ \|u(s)\|^{q_0}_{L^{q_0}(\mathbb{T}^N)}\Big). \quad a.s..
\end{eqnarray*}
Combining the above inequalities, we get
\begin{eqnarray}\notag
|\tilde{I}(t)|&\leq& C(q_0)\delta\gamma^{-1}t+C(q_0)\delta\gamma^{-1}t\Big(\underset{0\leq s\leq t}{{\rm{ess\sup}}}\ \|u^{h}(s)\|^{q_0}_{L^{q_0}(\mathbb{T}^N)}+\underset{0\leq s\leq t}{{\rm{ess\sup}}}\ \|u(s)\|^{q_0}_{L^{q_0}(\mathbb{T}^N)}\Big).\quad a.s..
\end{eqnarray}
By (\ref{equ-29}) in Hypothesis H, we see that
\begin{eqnarray*}
\tilde{J}(t)&=&\int^t_0\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}\alpha \sum_{k\geq 1}|g_k(x,\xi)-g_k(y,\zeta)|^2d\nu^{1}_{x,s}\otimes \nu^{2}_{y,s}(\xi,\zeta)dxdyds\\
&\leq& D_1\int^t_0\int_{(\mathbb{T}^N)^2}\rho_{\gamma}(x-y)|x-y|^2\int_{\mathbb{R}^2}\psi_{\delta}(\xi-\zeta)d\nu^{1}_{x,s}\otimes \nu^{2}_{y,s}(\xi,\zeta)dxdyds\\
&& + D_1\int^t_0\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}\rho_{\gamma}(x-y)\psi_{\delta}(\xi-\zeta)|\xi-\zeta|^2d\nu^{1}_{x,s}\otimes \nu^{2}_{y,s}(\xi,\zeta)dxdyds\\
&=:& \tilde{J}_{1}(t)+\tilde{J}_{2}(t).
\end{eqnarray*}
Noting that
\begin{eqnarray*}
\int_{\mathbb{R}^2}\psi_{\delta}(\xi-\zeta)d\nu^{1}_{x,s}\otimes \nu^{2}_{y,s}(\xi,\zeta)&\leq& \delta^{-1}, \quad a.s.,
\\
\int_{(\mathbb{T}^N)^2}\rho_{\gamma}(x-y)|x-y|^2dxdy&\leq&\gamma^2,
\end{eqnarray*}
we have
\begin{eqnarray}\label{e-12}
\tilde{J}_{1}(t)\leq D_1\delta^{-1}\gamma^2t. \quad a.s..
\end{eqnarray}
For the term $\tilde{J}_{2}$, we have
\begin{eqnarray} \notag
\tilde{J}_{2}&\leq& \delta D_1 \int^t_0\int_{(\mathbb{T}^N)^2}\int_{|\xi-\zeta|\leq \delta}\rho_{\gamma}(x-y)\psi_{\delta}(\xi-\zeta)|\xi-\zeta|d\nu^{1}_{x,s}\otimes \nu^{2}_{y,s}(\xi,\zeta) dxdyds\\
\label{e-11}
&\leq& \delta D_1C_{\psi}t, \quad a.s.,
\end{eqnarray}
where $C_{\psi}:=\sup_{\xi\in \mathbb{R}}|\psi(\xi)|$.
(\ref{e-12}) and (\ref{e-11}) together yield
\begin{eqnarray*}
\tilde{J}(t)\leq D_1\delta^{-1}\gamma^2t+ D_1C_{\psi}\delta t, \quad a.s..
\end{eqnarray*}
By H\"{o}lder's inequality and (\ref{equ-28}), we get
\begin{eqnarray*}
\tilde{H}(t)&\leq&
2\int^t_0|h(s)|_{l^2}\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}\Big(\sum_{k\geq 1}|g_k(x,\xi)|^2\Big)^{\frac{1}{2}}\rho_{\gamma}(x-y)\chi(\xi,\zeta)d\nu^{1}_{x,s}\otimes \nu^{2}_{y,s}(\xi,\zeta)dxdyds\\
&\leq&
2D^{\frac{1}{2}}_0\int^t_0|h(s)|_{l^2}\int_{(\mathbb{T}^N)^2}\rho_{\gamma}(x-y)dxdyds\\
&\leq&
2D^{\frac{1}{2}}_0\int^t_0|h(s)|_{l^2}ds, \quad a.s.,
\end{eqnarray*}
where we have used the fact that $\chi(\xi,\zeta)\leq 1$.
Combining all the above estimates, we deduce that
\begin{eqnarray}\notag
&&\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}\rho_{\gamma} (x-y)\psi_{\delta}(\xi-\zeta)(f^{\pm}_1(x,t,\xi)\bar{f}^{\pm}_2(y,t,\zeta)+\bar{f}^{\pm}_1(x,t,\xi)f^{\pm}_2(y,t,\zeta))d\xi d\zeta dxdy\\
\notag
&\leq & \int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}\rho_{\gamma} (x-y)\psi_{\delta}(\xi-\zeta)(f_{1,0}(x,\xi)\bar{f}_{2,0}(y,\zeta)+\bar{f}_{1,0}(x,\xi)f_{2,0}(y,\zeta))d\xi d\zeta dxdy\\
\notag
&& +D_1\delta^{-1}\gamma^2t+ D_1C_{\psi}\delta t+C(q_0)\delta\gamma^{-1}t+2D^{\frac{1}{2}}_0\int^t_0|h(s)|_{l^2}ds\\
\label{e-20}
&&+C(q_0)\delta\gamma^{-1}t\Big(\underset{0\leq s\leq t}{{\rm{ess\sup}}}\ \|u^{h}(s)\|^{q_0}_{L^{q_0}(\mathbb{T}^N)}+\underset{0\leq s\leq t}{{\rm{ess\sup}}}\ \|u(s)\|^{q_0}_{L^{q_0}(\mathbb{T}^N)}\Big)+\tilde{K}(t), \quad a.s..
\end{eqnarray}
For $s\in (0,T)$, set
\[
R(s):=\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}\rho_{\gamma} (x-y)\psi_{\delta}(\xi-\zeta)(f^{\pm}_1(x,s,\xi)\bar{f}^{\pm}_2(y,s,\zeta)+\bar{f}^{\pm}_1(x,s,\xi)f^{\pm}_2(y,s,\zeta))d\xi d\zeta dxdy.
\]
Then, we deduce from (\ref{e-20}) that
\begin{eqnarray*}
\underset{0\leq s\leq t}{{\rm{ess\sup}}}\ R(s)&\leq& \int_{\mathbb{T}^N}\int_{\mathbb{R}}(f_{1,0}\bar{f}_{2,0}+\bar{f}_{1,0}f_{2,0})d\xi dx+\mathcal{E}_0(\gamma,\delta)\\
&&+D_1\delta^{-1}\gamma^2t+ D_1C_{\psi}\delta t+C(q_0)\delta\gamma^{-1}t+2D^{\frac{1}{2}}_0\int^t_0|h(s)|_{l^2}ds\\
&&+C(q_0)\delta\gamma^{-1}t\Big(\underset{0\leq s\leq t}{{\rm{ess\sup}}}\ \|u^{h}(s)\|^{q_0}_{L^{q_0}(\mathbb{T}^N)}+\underset{0\leq s\leq t}{{\rm{ess\sup}}}\ \|u(s)\|^{q_0}_{L^{q_0}(\mathbb{T}^N)}\Big)\\
&& +\sup_{0\leq s\leq t}|\tilde{K}|(s), \quad a.s.,
\end{eqnarray*}
where $\lim_{\gamma,\delta\rightarrow 0}\mathcal{E}_0(\gamma,\delta)=0$.
Taking the $L^2(\Omega)$-norm on both sides and using H\"{o}lder's inequality, we get that
\begin{eqnarray}\notag
\left(\mathbb{E}\Big|\underset{0\leq s\leq t}{{\rm{ess\sup}}}\ R(s)\Big|^2\right)^{\frac{1}{2}}
&\lesssim & \int_{\mathbb{T}^N}\int_{\mathbb{R}}(f_{1,0}\bar{f}_{2,0}+\bar{f}_{1,0}f_{2,0})d\xi dx+\mathcal{E}_0(\gamma,\delta)\\
\notag
&& +D_1\delta^{-1}\gamma^2 t+ D_1C_{\psi}\delta t+C(q_0)\delta\gamma^{-1}t+2t^{\frac{1}{2}}D^{\frac{1}{2}}_0\left(\mathbb{E}\int^t_0|h(s)|^2_{l^2}ds\right)^{\frac{1}{2}}\\
\label{e-1}
&& +C(q_0)\delta\gamma^{-1}t\mathcal{R} +\left(\mathbb{E}\Big|\sup_{s\in [0,t]}|\tilde{K}(s)|\Big|^2\right)^{\frac{1}{2}},
\end{eqnarray}
where
\begin{eqnarray*}
\mathcal{R}:=\left\{\Big(\mathbb{E}\underset{0\leq s\leq T}{{\rm{ess\sup}}}\ \|u^{h}(s)\|^{2q_0}_{L^{2q_0}(\mathbb{T}^N)}\Big)^{\frac{1}{2}}
+\Big(\mathbb{E}\underset{0\leq s\leq T}{{\rm{ess\sup}}}\ \|u(s)\|^{2q_0}_{L^{2q_0}(\mathbb{T}^N)}\Big)^{\frac{1}{2}}\right\}.
\end{eqnarray*}
In view of (\ref{qq-8}) and (\ref{equation-1}), we have
\begin{eqnarray}\label{qq-29-1}
\mathcal{R}<+\infty.
\end{eqnarray}
To estimate the stochastic integral term, we use the Burkholder inequality to get
\begin{eqnarray}\label{e-7}
&&\mathbb{E}\Big|\sup_{s\in [0,t]}|\tilde{K}|(s)\Big|^2\\ \notag
&=& \mathbb{E}\Big|\sup_{s\in [0,t]}\sum_{k\geq 1}\int^s_0\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}\chi(\xi,\zeta) \rho_{\gamma}(x-y)(g_k(x,\xi)-g_{k}(y,\zeta)) d \nu^{1}_{x,r}\otimes \nu^{2}_{y,r}(\xi,\zeta)dxdyd\beta_k(r)\Big|^2\\
\notag
&\lesssim& \mathbb{E}\Big[\int^t_0\sum_{k\geq 1}\Big|\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}|g_k(x,\xi)-g_k(y,\zeta)|\rho_{\gamma}(x-y)\chi(\xi,\zeta) d \nu^{1}_{x,r}\otimes \nu^{2}_{y,r}(\xi,\zeta)dxdy\Big|^2dr\Big].
\end{eqnarray}
Recalling (\ref{e-6}) in Hypothesis H
\[
|g_k(x,\xi)-g_k(y,\zeta)|\leq C^1_k(|x-y|+|\xi-\zeta|),\quad \sum_{k\geq 1}|C^1_k|^2\leq \frac{D_1}{2},
\]
it follows from (\ref{e-7}) that
\begin{eqnarray*}
&&\mathbb{E}\Big|\sup_{s\in [0,t]}|\tilde{K}|(s)\Big|^2\\
&\lesssim& D_1\mathbb{E}\Big[\int^t_0\Big|\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}(|x-y|+|\xi-\zeta|)\rho_{\gamma}(x-y)\chi(\xi,\zeta) d\nu^{1}_{x,r}\otimes \nu^{2}_{y,r}(\xi,\zeta)dxdy\Big|^2dr\Big].
\end{eqnarray*}
Since
\begin{eqnarray}\notag
\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}|x-y|\rho_{\gamma}(x-y) \chi(\xi,\zeta)d \nu^{1}_{x,r}\otimes \nu^{2}_{y,r}(\xi,\zeta)dxdy\leq \gamma, \quad a.s.,
\end{eqnarray}
it follows that
\begin{eqnarray}\label{e-24}
\mathbb{E}\Big|\sup_{s\in [0,t]}|\tilde{K}|(s)\Big|^2
\lesssim D_1\mathbb{E}\Big[\int^t_0\Big|\gamma+\int_{(\mathbb{T}^N)^2}|u^{h,\pm}-u^{\pm}|\rho_{\gamma}(x-y) dxdy\Big|^2dr\Big].
\end{eqnarray}
With the help of (\ref{111.1-2}), (\ref{e-23}) and (\ref{e-22}), we deduce that
\begin{eqnarray}\notag
&& \int_{(\mathbb{T}^N)^2}|u^{h,\pm}(x,r)-u^{\pm}(y,r)|\rho_{\gamma}(x-y) dxdy\\ \notag
&=& \int_{(\mathbb{T}^N)^2}\Big((u^{h,\pm}(x,r)-u^{\pm}(y,r))^{+}+(u^{h,\pm}(x,r)-u^{\pm}(y,r))^{-}\Big)\rho_{\gamma}(x-y) dxdy\\
\notag
&=&\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}}(\bar{f}^{\pm}_1(x,r,\xi)f^{\pm}_2(y,r,\xi)+f^{\pm}_1(x,r,\xi)\bar{f}^{\pm}_2(y,r,\xi))\rho_{\gamma}(x-y)d \xi dxdy\\ \notag
&\leq& 4\delta+\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}(\bar{f}^{\pm}_1(x,r,\xi)f^{\pm}_2(y,r,\zeta)+f^{\pm}_1(x,r,\xi)\bar{f}^{\pm}_2(y,r,\zeta))\rho_{\gamma}(x-y)\psi_{\delta}(\xi-\zeta) d \xi d\zeta dxdy\\
\label{e-25}
&=& 4\delta+R(r), \quad a.s..
\end{eqnarray}
Combining (\ref{e-24}) and (\ref{e-25}), we obtain that
\begin{eqnarray}\notag
\mathbb{E}\Big|\sup_{s\in [0,t]}|\tilde{K}(s)|\Big|^2
&\lesssim& D_1\mathbb{E}\Big[\int^t_0\Big|\gamma+4\delta+R(r)\Big|^2dr\Big]\\
\notag
&\leq & 2D_1\mathbb{E}\Big[\int^t_0\Big|\gamma+4\delta\Big|^2dr+\int^t_0|R(r)|^2dr\Big]
\\
\label{e-2}
&\leq &
2D_1t |\gamma+4\delta|^2+2D_1\mathbb{E}\Big(\int^t_0R^2(r)dr\Big).
\end{eqnarray}
Substitute (\ref{e-2}) back into (\ref{e-1}) to get
\begin{eqnarray*}\notag
&&\left(\mathbb{E}\Big|\underset{0\leq s\leq t}{{\rm{ess\sup}}}\ R(s)\Big|^2\right)^{\frac{1}{2}}\\
&\lesssim & \int_{\mathbb{T}^N}\int_{\mathbb{R}}(f_{1,0}\bar{f}_{2,0}+\bar{f}_{1,0}f_{2,0})d\xi dx+\mathcal{E}_0(\gamma,\delta)+D_1\delta^{-1}\gamma^2t+ D_1C_{\psi}\delta t+C(q_0)\delta\gamma^{-1}t\\
&&+2t^{\frac{1}{2}}D^{\frac{1}{2}}_0\left(\mathbb{E}\int^t_0|h(s)|^2_{l^2}ds\right)^{\frac{1}{2}}
+C(q_0)\delta\gamma^{-1}\mathcal{R}t +2^{\frac{1}{2}}D^{\frac{1}{2}}_1t^{\frac{1}{2}}
|\gamma+4\delta| +2^{\frac{1}{2}}D^{\frac{1}{2}}_1\Big(\mathbb{E}\Big(\int^t_0R^2(r)dr\Big)\Big)^{\frac{1}{2}}.
\end{eqnarray*}
Squaring the above inequality, we get
\begin{eqnarray}\notag
&&\mathbb{E}\Big|\underset{0\leq s\leq t}{{\rm{ess\sup}}}\ R(s)\Big|^2\\ \notag
&\lesssim & \Big[\int_{\mathbb{T}^N}\int_{\mathbb{R}}(f_{1,0}\bar{f}_{2,0}+\bar{f}_{1,0}f_{2,0})d\xi dx+\mathcal{E}_0(\gamma,\delta)+D_1\delta^{-1}\gamma^2t+ D_1C_{\psi}\delta t+C(q_0)\delta\gamma^{-1}t\\ \notag
&&+2t^{\frac{1}{2}}D^{\frac{1}{2}}_0\left(\mathbb{E}\int^t_0|h(s)|^2_{l^2}ds\right)^{\frac{1}{2}}
+C(q_0)\delta\gamma^{-1}\mathcal{R}t +2^{\frac{1}{2}}D^{\frac{1}{2}}_1t^{\frac{1}{2}}
|\gamma+4\delta|\Big]^2\\
\label{e-8}
&& +2D_1\int^t_0\Big(\mathbb{E}\Big|\underset{0\leq s\leq r}{{\rm{ess\sup}}}\ R(s)\Big|^2\Big)dr.
\end{eqnarray}
Applying Gronwall's inequality to (\ref{e-8}), we get
\begin{eqnarray}\notag
&&\left(\mathbb{E}\Big|\underset{0\leq s\leq T}{{\rm{ess\sup}}}\ R(s)\Big|^2\right)^{\frac{1}{2}}\\ \notag
&\lesssim& e^{ D_1T}\Big[\int_{\mathbb{T}^N}\int_{\mathbb{R}}(f_{1,0}\bar{f}_{2,0}+\bar{f}_{1,0}f_{2,0})d\xi dx+\mathcal{E}_0(\gamma,\delta)+D_1\delta^{-1}\gamma^2T+ D_1C_{\psi}\delta T+C(q_0)\delta\gamma^{-1}T
\\
\label{qq-10}
&& \quad \quad +2T^{\frac{1}{2}}D^{\frac{1}{2}}_0\Big(\mathbb{E}\int^T_0|h(s)|^2_{l^2}ds\Big)^{\frac{1}{2}}
+C(q_0)\delta\gamma^{-1}\mathcal{R}T +2^{\frac{1}{2}}D^{\frac{1}{2}}_1T^{\frac{1}{2}}
|\gamma+4\delta|\Big].
\end{eqnarray}
Let
\[
Q(s):=\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}\rho_{\gamma}(x-y)\psi_{\delta}(\xi-\zeta)(f^{\pm}_2(s,x,\xi)\bar{f}^{\pm}_2(s,y,\zeta)+\bar{f}^{\pm}_2(s,x,\xi){f}^{\pm}_2(s,y,\zeta))d\xi d\zeta dxdy.
\]
Applying the same arguments to $f^{\pm}_2$ and $\bar{f}^{\pm}_2$ (in this case, $h=0$), we can show that
\begin{eqnarray}\notag
\left(\mathbb{E}\Big|\underset{0\leq s\leq T}{{\rm{ess\sup}}}\ Q(s)\Big|^2\right)^{\frac{1}{2}}
&\lesssim & e^{D_1T}\Big[\mathcal{E}_0(\gamma,\delta)+D_1\delta^{-1}\gamma^2T+ D_1C_{\psi}\delta T+C(q_0)\delta\gamma^{-1}T\\
\label{qq-12}
&&
+C(q_0)\delta\gamma^{-1}\mathcal{R}T +2^{\frac{1}{2}}D^{\frac{1}{2}}_1T^{\frac{1}{2}}
|\gamma+4\delta|\Big].
\end{eqnarray}
On the other hand, from (\ref{qq-3}), it follows that
\begin{eqnarray}\notag
&&\left(\mathbb{E} \Big|\underset{0\leq s\leq T}{{\rm{ess\sup}}}\ \int_{\mathbb{T}^N}\int_{\mathbb{R}}(f^{\pm}_1(s,x,\xi)\bar{f}^{\pm}_2(s,x,\xi)+\bar{f}^{\pm}_1(s,x,\xi){f}^{\pm}_2(s,x,\xi))d\xi dx\Big|^2\right)^{\frac{1}{2}}\\ \label{eee-1}
&\lesssim& \left(\mathbb{E}\Big|\underset{0\leq s\leq T}{{\rm{ess\sup}}}\ |\mathcal{E}_s(\gamma,\delta)|\Big|^2\right)^{\frac{1}{2}}+\left(\mathbb{E}\Big|\underset{0\leq s\leq T}{{\rm{ess\sup}}}\ R(s)\Big|^2\right)^{\frac{1}{2}}.
\end{eqnarray}
Now we will provide estimates for $ \left(\mathbb{E}\Big|\underset{0\leq s\leq T}{{\rm{ess\sup}}}\ |\mathcal{E}_s(\gamma,\delta)|\Big|^2\right)^{\frac{1}{2}}$.
For any $s\in (0,T)$, we write
\begin{eqnarray*}
\mathcal{E}_s(\gamma, \delta)&=&\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}(f^{\pm}_1(x,s,\xi)\bar{f}^{\pm}_2(y,s,\zeta)+\bar{f}^{\pm}_1(x,s,\xi){f}^{\pm}_2(y,s,\zeta))\rho_{\gamma}(x-y)\psi_{\delta}(\xi-\zeta)dxdyd\xi d\zeta
\\
&&-\int_{\mathbb{T}^N}\int_{\mathbb{R}}(f^{\pm}_1(x,s,\xi)\bar{f}^{\pm}_2(x,s,\xi)+\bar{f}^{\pm}_1(x,s,\xi){f}^{\pm}_2(x,s,\xi))d\xi dx\\
&=& \Big[\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}}\rho_{\gamma}(x-y)(f^{\pm}_1(x,s,\xi)\bar{f}^{\pm}_2(y,s,\xi)+\bar{f}^{\pm}_1(x,s,\xi){f}^{\pm}_2(y,s,\xi))d\xi dxdy\\
&& -\int_{\mathbb{T}^N}\int_{\mathbb{R}}(f^{\pm}_1(x,s,\xi)\bar{f}^{\pm}_2(x,s,\xi)+\bar{f}^{\pm}_1(x,s,\xi){f}^{\pm}_2(x,s,\xi))d\xi dx\Big]\\
&& +\Big[\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}(f^{\pm}_1(x,s,\xi)\bar{f}^{\pm}_2(y,s,\zeta)+\bar{f}^{\pm}_1(x,s,\xi){f}^{\pm}_2(y,s,\zeta))\rho_{\gamma}(x-y)\psi_{\delta}(\xi-\zeta)dxdyd\xi d\zeta\\
&& -\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}}\rho_{\gamma}(x-y)(f^{\pm}_1(x,s,\xi)\bar{f}^{\pm}_2(y,s,\xi)+\bar{f}^{\pm}_1(x,s,\xi){f}^{\pm}_2(y,s,\xi))d\xi dxdy \Big]\\
&=:&H_1+H_2.
\end{eqnarray*}
By (\ref{e-23}) and (\ref{e-22}), we have
\begin{eqnarray}\label{qq-15}
|H_2|\leq 4\delta, \quad a.s..
\end{eqnarray}
On the other hand,
\begin{eqnarray*}
|H_1|&\leq& \Big|\int_{(\mathbb{T}^N)^2}\rho_{\gamma}(x-y)\int_{\mathbb{R}}I_{u^{h,\pm}(x,s)>\xi}(I_{u^{\pm}(x,s)\leq \xi}-I_{u^{\pm}(y,s)\leq \xi})d\xi dxdy\Big|\\
&& +\Big|\int_{(\mathbb{T}^N)^2}\rho_{\gamma}(x-y)\int_{\mathbb{R}}I_{u^{h,\pm}(x,s)\leq\xi}(I_{u^{\pm}(x,s)> \xi}-I_{u^{\pm}(y,s)> \xi})d\xi dxdy\Big|\\
&\leq& 2\int_{(\mathbb{T}^N)^2}\rho_{\gamma}(x-y)|u^{\pm}(x,s)-u^{\pm}(y,s)|dxdy, \quad a.s..
\end{eqnarray*}
Using (\ref{e-23}) and (\ref{e-22}) again, it follows that
\begin{eqnarray*}
&&\int_{(\mathbb{T}^N)^2}\rho_{\gamma}(x-y)|u^{\pm}(x,s)-u^{\pm}(y,s)|dxdy\\
&=& \int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}}\rho_{\gamma}(x-y)(f^{\pm}_2(x,s,\xi)\bar{f}^{\pm}_2(y,s,\xi)+\bar{f}^{\pm}_2(x,s,\xi){f}^{\pm}_2(y,s,\xi))d\xi dxdy\\
&\leq& \int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}\rho_{\gamma}(x-y)\psi_{\delta}(\xi-\zeta)(f^{\pm}_2(x,s,\xi)\bar{f}^{\pm}_2(y,s,\zeta)+\bar{f}^{\pm}_2(x,s,\xi){f}^{\pm}_2(y,s,\zeta))d\xi d\zeta dxdy+4\delta\\
&=&Q(s)+4\delta, \quad a.s..
\end{eqnarray*}
Hence,
\begin{eqnarray}\label{qq-14}
|H_1|\leq 2Q(s)+8\delta, \quad a.s..
\end{eqnarray}
Collecting (\ref{qq-15}) and (\ref{qq-14}) yields
\begin{eqnarray*}
|\mathcal{E}_s(\gamma, \delta)|\leq 2Q(s)+12\delta, \quad a.s..
\end{eqnarray*}
By (\ref{qq-12}), we deduce that
\begin{eqnarray}\notag
&&\Big(\mathbb{E}\big|\underset{0\leq s\leq T}{{\rm{ess\sup}}}\ |\mathcal{E}_s(\gamma,\delta)|\big|^2\Big)^{\frac{1}{2}}\\ \notag
&\lesssim& \Big(\mathbb{E}|\underset{0\leq s\leq T}{{\rm{ess\sup}}}\ Q(s)|^2\Big)^{\frac{1}{2}}+\delta\\
\notag
&\lesssim&e^{D_1T}\Big[\mathcal{E}_0(\gamma,\delta)+D_1\delta^{-1}\gamma^2T+ D_1C_{\psi}\delta T+C(q_0)\delta\gamma^{-1}T\\
\label{qq-16}
&&
+C(q_0)\delta\gamma^{-1}\mathcal{R}T +2^{\frac{1}{2}}D^{\frac{1}{2}}_1T^{\frac{1}{2}}
|\gamma+4\delta|\Big]+\delta.
\end{eqnarray}
Combining (\ref{qq-10}) and (\ref{qq-16}), we deduce from (\ref{eee-1}) that
\begin{eqnarray*}\notag
&&\left(\mathbb{E} \Big|\underset{0\leq s\leq T}{{\rm{ess\sup}}}\ \int_{\mathbb{T}^N}\int_{\mathbb{R}}(f^{\pm}_1(s,x,\xi)\bar{f}^{\pm}_2(s,x,\xi)+\bar{f}^{\pm}_1(s,x,\xi){f}^{\pm}_2(s,x,\xi))d\xi dx\Big|^2\right)^{\frac{1}{2}}\\ \notag
&\lesssim&e^{ D_1T}\Big[\int_{\mathbb{T}^N}\int_{\mathbb{R}}(f_{1,0}\bar{f}_{2,0}+\bar{f}_{1,0}f_{2,0})d\xi dx+2\mathcal{E}_0(\gamma,\delta)+2D_1\delta^{-1}\gamma^2T+2D_1C_{\psi}\delta T+2C(q_0)\delta\gamma^{-1}T
\\
&& +2T^{\frac{1}{2}}D^{\frac{1}{2}}_0\Big(\mathbb{E}\int^T_0|h(s)|^2_{l^2}ds\Big)^{\frac{1}{2}}
+2C(q_0)\delta\gamma^{-1}\mathcal{R}T +3D^{\frac{1}{2}}_1T^{\frac{1}{2}}
|\gamma+4\delta|\Big]+\delta.
\end{eqnarray*}
Note that we have $f^{\pm}_1(x,s,\xi)=I_{u^{h,\pm}(s,x)>\xi}$ and $f^{\pm}_2(x,s,\xi)=I_{u^{\pm}(s,x)>\xi}$ with initial data $f_{1,0}=I_{\eta>\xi}$ and ${f}_{2,0}=I_{\eta>\xi}$, respectively. In view of (\ref{111.1-2}), we can rewrite the above inequality as
\begin{eqnarray}\label{e-26}
\left(\mathbb{E}\Big|\underset{0\leq s\leq T}{{\rm{ess\sup}}}\ \|u^{h,\pm}(s)-u^{\pm}(s)\|_{L^1(\mathbb{T}^N)}\Big|^2\right)^{\frac{1}{2}}
\lesssim r(\gamma, \delta),
\end{eqnarray}
where
\begin{eqnarray}\notag
&&r(\gamma, \delta)\\ \notag
&:=&e^{D_1T}\Big[2\mathcal{E}_0(\gamma,\delta)+2D_1\delta^{-1}\gamma^2T+2D_1C_{\psi}\delta T+2C(q_0)\delta\gamma^{-1}T
\\
\label{e-10}
&& +2T^{\frac{1}{2}}D^{\frac{1}{2}}_0\Big(\mathbb{E}\int^T_0|h(s)|^2_{l^2}ds\Big)^{\frac{1}{2}}
+2C(q_0)\delta\gamma^{-1}\mathcal{R}T +3D^{\frac{1}{2}}_1T^{\frac{1}{2}}
|\gamma+4\delta|\Big]+\delta.
\end{eqnarray}
Taking
\[
\delta=\gamma^{\frac{4}{3}},
\]
we have
\begin{eqnarray*}
&&r(\gamma, \delta)\\ \notag
&=&e^{D_1T}\Big[2\mathcal{E}_0(\gamma,\delta)+2D_1\gamma^{\frac{2}{3}}T+ 2D_1C_{\psi}\gamma^{\frac{4}{3}} T+2C(q_0)\gamma^{\frac{1}{3}}T
\\
&& +2T^{\frac{1}{2}}D^{\frac{1}{2}}_0\Big(\mathbb{E}\int^T_0|h(s)|^2_{l^2}ds\Big)^{\frac{1}{2}}
+2C(q_0)\gamma^{\frac{1}{3}}\mathcal{R}T +3D^{\frac{1}{2}}_1T^{\frac{1}{2}}
|\gamma+4\gamma^{\frac{4}{3}}|\Big]+\gamma^{\frac{4}{3}}.
\end{eqnarray*}
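The exponent $\frac{4}{3}$ is chosen so that all the error terms involving $\gamma$ and $\delta$ vanish simultaneously: with $\delta=\gamma^{a}$, the two competing contributions behave like
\[
\delta^{-1}\gamma^{2}=\gamma^{2-a}, \qquad \delta\gamma^{-1}=\gamma^{a-1},
\]
so any exponent $1<a<2$ would serve, and $a=\frac{4}{3}$ is a convenient concrete choice.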
Let $\gamma\rightarrow 0$ to get
\begin{eqnarray*}
\lim_{\gamma\rightarrow 0} r(\gamma, \gamma^{\frac{4}{3}})\leq 2T^{\frac{1}{2}}D^{\frac{1}{2}}_0e^{D_1T}\Big(\mathbb{E}\int^T_0|h(s)|^2_{l^2}ds\Big)^{\frac{1}{2}}.
\end{eqnarray*}
Therefore, we deduce from (\ref{e-26}) that
\begin{eqnarray*}
\left(\mathbb{E}\Big|\underset{0\leq s\leq T}{{\rm{ess\sup}}}\ \|u^{h,\pm}(s)-u^{\pm}(s)\|_{L^1(\mathbb{T}^N)}\Big|^2\right)^{\frac{1}{2}}
\leq2\mathcal{D}T^{\frac{1}{2}}D^{\frac{1}{2}}_0e^{D_1T}\Big(\mathbb{E}\int^T_0|h(s)|^2_{l^2}ds\Big)^{\frac{1}{2}},
\end{eqnarray*}
which implies
\begin{eqnarray}
\mathbb{E}\Big|\int_0^T\|u^{h}(t)-u(t)\|_{L^1(\mathbb{T}^N)}dt\Big|^2\leq 4\mathcal{D}^2D_{0}T^3e^{2D_1T}\mathbb{E}\int^T_0|h(s)|^2_{l^2}ds.
\end{eqnarray}
We complete the proof.
\end{proof}
\noindent{\bf Acknowledgements}\quad This work is partly supported by National Natural Science Foundation of China (No. 11671372, 11971456, 11721101, 11801032, 11971227), Key Laboratory of Random Complex Structures and Data Science, Academy of Mathematics and Systems Science, Chinese Academy of Sciences (No. 2008DP173182), Beijing Institute of Technology Research Fund Program for Young Scholars.
\def\refname{ References}
\section{Introduction}
A closed, simply connected manifold $M$ is called rationally elliptic if
\[\dim\pi_*(M)\ensuremath{\otimes} \ensuremath{\mathbb{Q}}=\sum_{k\geq 2}\dim \pi_k(M)\ensuremath{\otimes} \ensuremath{\mathbb{Q}}< \infty.\]
For a simply connected space $X$ we additionally require that the rational cohomology of $X$ satisfies $\sum_{k\geq0}\dim\ensuremath{\mathrm{H}}^k(X;\ensuremath{\mathbb{Q}})<\infty$. The definition can be generalized to nilpotent spaces.
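As a standard example, by Serre's computation of the rational homotopy groups of spheres,
\[
\pi_*(\ensuremath{\mathrm{S}}^{2n+1})\ensuremath{\otimes}\ensuremath{\mathbb{Q}}\cong\ensuremath{\mathbb{Q}}, \text{ concentrated in degree } 2n+1,
\]
while $\pi_*(\ensuremath{\mathrm{S}}^{2n})\ensuremath{\otimes}\ensuremath{\mathbb{Q}}\cong\ensuremath{\mathbb{Q}}\oplus\ensuremath{\mathbb{Q}}$, concentrated in degrees $2n$ and $4n-1$; in particular, all spheres are rationally elliptic.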
The importance of rationally elliptic manifolds for Riemannian geometry mainly stems from the conjecture, attributed to Bott, that a closed, simply connected manifold of (almost) nonnegative sectional curvature is rationally elliptic (see \cite{GH82}).
A positive answer to this conjecture would, for example, imply Gromov's conjecture that the bound for the sum of the Betti numbers of a nonnegatively curved $n$-manifold is bounded by $2^n$, see \cite{FrH} and \cite{Pavlov} for an improved estimate for simply connected spaces.
Rationally elliptic spaces have some nice properties. For example, by the work of Halperin \cite{Halperin77} the rational cohomology ring $\ensuremath{\mathrm{H}}^*(X;\ensuremath{\mathbb{Q}})$ of a rationally elliptic space $X$ satisfies Poincaré duality, and the sequence of Betti numbers of the loop space $\Omega X$ grows polynomially, i.e. $\sum_{i=0}^k\ensuremath{\mathrm{b}}_i(\Omega X)\leq k^m$ for some integer $m$, while for a rationally hyperbolic space it grows exponentially (see \cite[Proposition 33.9]{FHT}).
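To illustrate the hyperbolic side, the wedge $\ensuremath{\mathrm{S}}^2\vee\ensuremath{\mathrm{S}}^2$ is already rationally hyperbolic: since $\ensuremath{\mathrm{S}}^2\vee\ensuremath{\mathrm{S}}^2=\Sigma(\ensuremath{\mathrm{S}}^1\vee\ensuremath{\mathrm{S}}^1)$, the Bott--Samelson theorem identifies
\[
\ensuremath{\mathrm{H}}_*(\Omega(\ensuremath{\mathrm{S}}^2\vee\ensuremath{\mathrm{S}}^2);\ensuremath{\mathbb{Q}})
\]
with the tensor algebra on two generators of degree one, so its Betti numbers grow exponentially.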
Examples of rationally elliptic manifolds include homogeneous spaces and biquotients of compact Lie groups (by a theorem of Hopf) and co\-homo\-geneity one manifolds (see \cite{GroveHalperin}). Furthermore, if $F\to E \to B$ is a fibre bundle where $E$, $F$ and $B$ are manifolds, and two of these spaces are rationally elliptic while the third is nilpotent, then the third space is also rationally elliptic, by the associated exact homotopy sequence.
The classification of closed, simply connected, rationally elliptic manifolds of dimension five or less is known:
\begin{fact}
A closed, simply connected, rationally elliptic manifold of dimension five or less is
\begin{itemize}
\item diffeomorphic to $\ensuremath{\mathrm{S}}^2$ or $\ensuremath{\mathrm{S}}^3$,
\item homeomorphic to $\ensuremath{\mathrm{S}}^4$, $\ensuremath{\mathrm{S}}^2 \times \ensuremath{\mathrm{S}}^2$, $\ensuremath{\mathbb{CP}}^2$, $\ensuremath{\mathbb{CP}}^2 \# \ensuremath{\mathbb{CP}}^2$ or $\ensuremath{\mathbb{CP}}^2 \# \overline{\ensuremath{\mathbb{CP}}}^2$, or
\item rationally homotopy equivalent to $\ensuremath{\mathrm{S}}^5$ or $\ensuremath{\mathrm{S}}^2\times \ensuremath{\mathrm{S}}^3$.
\end{itemize}
\end{fact}
For the 4--dimensional case see \cite[Lemma 3.2]{PatPet03}. The 5--dimensional case follows from the classification of possible exponents in this dimension, which can be carried out with the results of Section \ref{susubsec:Exponents}. Note that there are infinitely many integral homotopy types of closed, simply connected, rationally elliptic 5--manifolds, which can be seen from Barden's classification of closed, simply connected 5--manifolds in \cite{Barden}.
Our first theorem gives a characterization of closed, simply connected, rationally elliptic 6--manifolds in terms of their cohomology rings.
\begin{thm}\label{TheoremDimension6Rational}
A closed, simply connected 6--manifold $M$ is rationally elliptic if and only if one of the following holds
\begin{enumerate}[label={\rm(\alph*)}]
\item $\ensuremath{\mathrm{b}}_2(M)=\ensuremath{\mathrm{b}}_3(M)=0$;\label{b2=0b3=0}
\item $\ensuremath{\mathrm{b}}_2(M)=0$ and $\ensuremath{\mathrm{b}}_3(M)=2$;\label{b2=0b3=2}
\item $\ensuremath{\mathrm{b}}_2(M)=1$ and $\ensuremath{\mathrm{b}}_3(M)=0$;\label{b2=1b3=0}
\item $\ensuremath{\mathrm{b}}_2(M)=2$, $\ensuremath{\mathrm{b}}_3(M)=0$ and $\ensuremath{\mathrm{H}}^*(M;\ensuremath{\mathbb{Q}})$ is generated by $\ensuremath{\mathrm{H}}^2(M;\ensuremath{\mathbb{Q}})$;\label{b2=2b3=0}
\item $\ensuremath{\mathrm{b}}_2(M)=3$, $\ensuremath{\mathrm{b}}_3(M)=0$, $\ensuremath{\mathrm{H}}^*(M;\ensuremath{\mathbb{Q}})$ is generated by $\ensuremath{\mathrm{H}}^2(M;\ensuremath{\mathbb{Q}})$ and there is a basis $x_1,x_2,x_3$ of $\ensuremath{\mathrm{H}}^2(M;\ensuremath{\mathbb{Q}})$, such that the kernel of the restriction of the homomorphism $\ensuremath{\mathbb{Q}}[\tilde{x}_1, \tilde{x}_2, \tilde{x}_3] \to \ensuremath{\mathrm{H}}^*(M;\ensuremath{\mathbb{Q}})$ with $\tilde{x}_i \mapsto x_i$ to homogeneous polynomials of degree two has a regular sequence as a basis.\label{b2=3b3=0}
\end{enumerate}
\end{thm}
Note that, in dimension up to six, every closed, simply connected manifold is formal by a theorem of Miller (see \cite{Miller79}), so a classification of rational (or real) cohomology rings is equivalent to a classification of rational (or real) homotopy types. The rational (respectively real) cohomology rings of these manifolds are determined by their third Betti number and a cubic form on the second cohomology group with rational (respectively real) coefficients. In the real case we can give a classification of the real homotopy types for closed, simply connected, rationally elliptic 6--manifolds $M$ with second Betti number $\ensuremath{\mathrm{b}}_2(M)\leq 2$.
\begin{thm}\label{TheoremDimension6Realb2leq2}
A closed, simply connected, rationally elliptic 6--manifold $M$ with $\ensuremath{\mathrm{b}}_2(M)\leq 2$ has the real homotopy type of exactly one of the following manifolds:
\[\ensuremath{\mathrm{S}}^6, \ensuremath{\mathrm{S}}^3\times\ensuremath{\mathrm{S}}^3, \ensuremath{\mathbb{CP}}^3, \ensuremath{\mathrm{S}}^2 \times \ensuremath{\mathrm{S}}^4, \ensuremath{\mathbb{CP}}^2\times\ensuremath{\mathrm{S}}^2, \ensuremath{\mathrm{SU}}(3)/ \ensuremath{\mathrm{T}}^2\text{ or }\ensuremath{\mathbb{CP}}^3 \# \ensuremath{\mathbb{CP}}^3.\]
\end{thm}
In the case $\ensuremath{\mathrm{b}}_2(M)=3$ we can give a classification of the possible cubic forms.
\begin{thm}\label{TheoremDimension6Realb2eq3}
A closed, simply connected 6--manifold $M$ with $\ensuremath{\mathrm{b}}_2(M)=3$ is rationally elliptic, if and only if $\ensuremath{\mathrm{b}}_3(M)=0$ and the cubic form associated to $\ensuremath{\mathrm{H}}^*(M;\ensuremath{\mathbb{R}})$ is equivalent to $x y z$, $z(x^2+y^2)$, $z(x^2+y^2-z^2)$, $x(x^2+y^2-z^2)$, $x(x^2+y^2+z^2)$, $x^3+3x^2 z-3 y^2 z$, $x^3-3 x^2 z-3y^2 z$ or $x^3+y^3+z^3+6 \sigma x y z$ for $\sigma\neq 0, 1, -\tfrac{1}{2}$.
\end{thm}
As a by-product of the proof of Theorem~\ref{TheoremDimension6Realb2leq2} we get a classification of certain rationally hyperbolic 6--manifolds.
\begin{cor}\label{cor:hyperbolicDimension6}
A closed, simply connected 6--manifold $M$ with $\ensuremath{\mathrm{b}}_2(M)\leq 2$ and $\ensuremath{\mathrm{b}}_3(M)=0$ is rationally hyperbolic if and only if it has the real homotopy type of $(\ensuremath{\mathrm{S}}^2 \times \ensuremath{\mathrm{S}}^4) \#(\ensuremath{\mathrm{S}}^2 \times \ensuremath{\mathrm{S}}^4)$ or $\ensuremath{\mathbb{CP}}^3 \# (\ensuremath{\mathrm{S}}^2 \times \ensuremath{\mathrm{S}}^4)$.
\end{cor}
A similar statement for the real cubic forms associated to closed, simply connected, rationally hyperbolic 6--manifolds with $\ensuremath{\mathrm{b}}_2=3$ and $\ensuremath{\mathrm{b}}_3=0$ can be read off Table~\ref{TableCubicFormsExamples}.
In the seven-dimensional case we can classify the rational homotopy types. Note that the manifolds in the theorem have pairwise distinct rational homotopy types.
\begin{thm}\label{TheoremDimension7Rational}
A closed, simply connected 7--manifold is rationally elliptic if and only if it has the rational homotopy type of one of the following manifolds: \\
$\ensuremath{\mathrm{S}}^7$, $\ensuremath{\mathrm{S}}^2 \times \ensuremath{\mathrm{S}}^5$, $\ensuremath{\mathbb{CP}}^2 \times \ensuremath{\mathrm{S}}^3$, $\ensuremath{\mathrm{S}}^3 \times \ensuremath{\mathrm{S}}^4$, $N^7$ or $M_\sigma^7$ for some $\sigma \in \ensuremath{\mathbb{Q}}^* / (\ensuremath{\mathbb{Q}}^*)^2$.
\end{thm}
Here the manifolds $M^7_\sigma$ are realizations of certain minimal models which exist by Sullivan's realization result (see Section~\ref{subsubsec:RealizationManifold}). We can choose \[M^7_{[1]}=\ensuremath{\mathrm{S}}^3 \times (\ensuremath{\mathbb{CP}}^2 \# \ensuremath{\mathbb{CP}}^2)\text{ and }M^7_{[-1]}=\ensuremath{\mathrm{S}}^3 \times (\ensuremath{\mathbb{CP}}^2 \# \overline{\ensuremath{\mathbb{CP}}}^2),\] where $\overline{\ensuremath{\mathbb{CP}}}^2$ denotes $\ensuremath{\mathbb{CP}}^2$ with reversed orientation. For $\sigma\neq [\pm1]$ we do not know of a nice realization of $M_\sigma$ as a manifold (see Proposition~\ref{PropM7sigmanot}), but $M_\sigma$ is rationally homotopy equivalent to a nonnegatively curved orbifold (see Remark~\ref{RemarkDefinitionXsigma}). The manifold $N^7$ is a homogeneous space $(\ensuremath{\mathrm{SU}}(2))^3/\,\ensuremath{\mathrm{T}}^2$. Furthermore $N^7$ is an example of a non-formal manifold (see \cite[Example 2.91]{FOT}).
This paper is organized as follows. In Section~\ref{sec:Prelim} we recall some preliminaries on rational homotopy theory and the cohomology rings of 6--manifolds. Section~\ref{sec:DimensionSix} is divided into two parts in which Theorems~\ref{TheoremDimension6Rational},~\ref{TheoremDimension6Realb2leq2} and~\ref{TheoremDimension6Realb2eq3} are proven. In Section~\ref{sec:DimensionSeven}, we prove Theorem~\ref{TheoremDimension7Rational} and we also give a classification of the real and complex homotopy types of closed, simply connected, rationally elliptic 7--manifolds. In Section~\ref{sec:DimensionsEightAndNine} we state and prove some partial classification results in dimensions 8 and 9.
The results in this article were part of the author's dissertation \cite{MHDiss} at the Karlsruhe Institute of Technology. Part of the research was carried out at the University of Fribourg. The author wishes to thank his advisor Wilderich Tuschmann and Anand Dessai for helpful and stimulating discussions. Furthermore, he wishes to thank the referee of a previous version of this manuscript for various helpful remarks.
\section{Preliminaries}\label{sec:Prelim}
\subsection{Rational homotopy theory}
For rational homotopy theory, we use the books \cite{FHT} and \cite{FOT} as references and follow their notation. For the convenience of the reader, we give an overview of the results that we need.
\subsubsection{Basic definitions}
Let $\ensuremath{\mathbb{K}}$ be a field of characteristic zero. By $X$ we always denote a simply connected space with finite Betti numbers. A \emph{commutative differential graded algebra} (cdga henceforth) $(A,d)$ over $\ensuremath{\mathbb{K}}$ is a graded algebra $A=\bigoplus_{k\geq0} A^k$ with unit which is commutative in the graded sense, that is $ab=(-1)^{pq} b a$ for $a\in A^p$, $b\in A^q$, together with a linear differential $d:A\to A$ satisfying $d^2=0$, $d(A^k)\subset A^{k+1}$ and $d(ab)=d(a) \;b+(-1)^p a\; d(b)$ for $a\in A^p$.
For a graded vector space $V=\bigoplus_{k\geq 0} V^k$, we denote by $\Lambda V$ the tensor product of the polynomial algebra on $V^\text{even}=\bigoplus_{k\geq 0} V^{2k}$ and the outer algebra on $V^\text{odd}=\bigoplus_{k\geq0}V^{2k+1}$. If $x_1,\dots, x_n$ is a (homogeneous) basis of $V$ we also write $\Lambda(x_1,\dots,x_n)$ for $\Lambda V$. Furthermore, we will use the following conventions. The elements of degree $k$ in the graded algebra $\Lambda V$ will be denoted by $(\Lambda V)^k$, while we denote by $\Lambda^k V$ the linear subspace generated by elements of word length $k$ in $V$. Furthermore $\Lambda V^k=\Lambda(V^k)$. The degree of a homogeneous element $v\in \Lambda V$ will be denoted by $|v|$.
A \emph{Sullivan algebra} is a cdga $(\Lambda V,d)$ with $V=V^{\geq1}$ for which there exists a basis $\{x_\alpha\}_{\alpha\in I}$, with $I$ a well-ordered index set, such that $d x_\alpha \in \Lambda(x_\beta,\, \beta<\alpha)$. If $V^1=\{0\}$, such a basis exists for every $(\Lambda V,d)$.
A Sullivan algebra $(\Lambda V,d)$ is called \emph{minimal} if $d(V)\subset \Lambda^{\geq2}V$.
If $(A,d)$ is a cdga with $\ensuremath{\mathrm{H}}^0(A,d)\cong \ensuremath{\mathbb{K}}$, then there exists a \emph{minimal model} of $(A,d)$, that is a minimal Sullivan algebra $(\Lambda V,d)$ and a homomorphism $\varphi:(\Lambda V,d) \to (A,d)$ inducing an isomorphism in cohomology. The minimal model is unique up to isomorphism.
To a space $X$ one can associate a cdga $(\mathrm{A_{PL}}(X;\ensuremath{\mathbb{K}}),d)$ (see \cite[Chapter 10]{FHT}), such that $\ensuremath{\mathrm{H}}^*(X;\ensuremath{\mathbb{K}})\cong \ensuremath{\mathrm{H}}^*(\mathrm{A_{PL}}(X;\ensuremath{\mathbb{K}}),d)$. The $\ensuremath{\mathbb{K}}$--minimal model of $X$ is the minimal model of $(\mathrm{A_{PL}}(X;\ensuremath{\mathbb{K}}),d)$.
If $(\Lambda V,d)$ is the rational minimal model of $X$, then $(\Lambda V,d)\ensuremath{\otimes} \ensuremath{\mathbb{K}}$ is the $\ensuremath{\mathbb{K}}$--minimal model of $X$. We say that $X$ and $Y$ have the same \emph{$\ensuremath{\mathbb{K}}$--homotopy type}, if their $\ensuremath{\mathbb{K}}$--minimal models are isomorphic, and write $X\simeq_\ensuremath{\mathbb{K}} Y$. For $\ensuremath{\mathbb{K}}=\ensuremath{\mathbb{Q}}$ this is equivalent to the usual definition.
If $(\Lambda V,d)$ is the rational minimal model of a simply connected space $X$, then $V^1=\{0\}$ and $V^k\cong\mathrm{Hom}(\pi_k(X),\ensuremath{\mathbb{Q}})$. A minimal Sullivan algebra $(\Lambda V,d)$ is called \emph{rationally elliptic} if $\dim V=\sum_{k}\dim V^k< \infty$ and $\dim \ensuremath{\mathrm{H}}^*(\Lambda V,d)< \infty$.
\subsubsection{Realization of minimal models by manifolds}\label{subsubsec:RealizationManifold}
For a cdga $(A,d)$ the \emph{formal dimension} is defined as the maximal $k\in \ensuremath{\mathbb{N}}$ with $\ensuremath{\mathrm{H}}^k(A,d)\neq\{0\}$, if such a $k$ exists, else it is defined to be $\infty$.
By a theorem of Sullivan \cite[Section 13]{Sull}, compare also \cite{Barge76} and \cite[Theorem 3.2]{FOT}, the following holds:
Let $(\Lambda V, d)$ be a rational minimal Sullivan algebra of formal dimension $n$ with $V=V^{\geq2}$ and let $\ensuremath{\mathrm{H}}^*(\Lambda V,d)$ satisfy Poincaré duality. Then, if $n$ is not divisible by 4, there is a compact simply connected manifold realizing $(\Lambda V,d)$. If $n=4k$ is divisible by 4 and the signature of the quadratic form on $\ensuremath{\mathrm{H}}^{2k}(\Lambda V,d)$ is zero, then $(\Lambda V,d)$ is realizable by a compact, simply connected manifold if and only if in some basis of $\ensuremath{\mathrm{H}}^{2k}(\Lambda V,d)$ and for some identification $\ensuremath{\mathrm{H}}^{4k}(\Lambda V,d)\cong \ensuremath{\mathbb{Q}}$ the form is given by $\sum\pm x_i^2$. In the case that the signature is nonzero, there are additional conditions on the Pontryagin numbers. Here, for $n=4k$, we will only use the case where the signature is zero.
By a theorem of Halperin \cite[Theorem 3]{Halperin77} a rationally elliptic minimal model satisfies Poincaré duality. Therefore, every simply connected, rationally elliptic minimal Sullivan algebra of formal dimension $n$, with $n$ not divisible by 4, is the minimal model of a compact, simply connected $n$-manifold.
\subsubsection{Exponents}\label{susubsec:Exponents}
Recall that the (a- and b-)exponents of a rationally elliptic, minimal Sullivan algebra $(\Lambda V,d)$ are $a \in \ensuremath{\mathbb{N}}^q$ and $b \in \ensuremath{\mathbb{N}}^r$ if there exist homogeneous bases $x_1,\dots,x_q$ of $V^{\mathrm{even}}$ and $y_1, \dots,y_r$ of $V^{\mathrm{odd}}$, such that $|x_i|=2 a_i$ and $|y_j|=2 b_j-1$. The pairs of tuples $a \in \ensuremath{\mathbb{N}}^q$ and $b \in \ensuremath{\mathbb{N}}^r$ that arise as exponents of rationally elliptic minimal Sullivan algebras have a purely arithmetic description.
\begin{defi*}[Strong arithmetic condition (SAC)]
The tuples $a \in \ensuremath{\mathbb{N}}^q$ and $b \in \ensuremath{\mathbb{N}}^r$ satisfy (SAC) if for all $1\leq s\leq q$ and $1\leq i_1<\dots<i_s\leq q$ there exist $1\leq j_1<\dots<j_s\leq r$ such that there are $\gamma_{kl}\in \ensuremath{\mathbb{N}}_0$ with
\[b_{j_k}=\sum_{l=1}^s \gamma_{kl} a_{i_l}\qquad \text{and} \qquad \sum _{l=1}^s\gamma_{kl}\geq 2\]
for all $k=1,\dots,s$.
\end{defi*}
Friedlander and Halperin showed in \cite{FrH} that $a\in \ensuremath{\mathbb{N}}^q$ and $b \in \ensuremath{\mathbb{N}}^r$ with $b_j\geq 2$ for $j=1,\dots, r$ arise as the exponents of a simply connected, rationally elliptic minimal Sullivan algebra if and only if they satisfy (SAC). Furthermore the exponents of a simply connected, rationally elliptic minimal Sullivan algebra $(\Lambda V,d)$ satisfy (see \cite{FHT})
\begin{enumerate}[label=\textnormal{(\alph*)}]
\item $\dim V^{\text{even}}=q\leq r =\dim V^{\text{odd}} $\textup{;}
\item $ \sum_{i=1}^q 2a_i\leq n$\textup{;}
\item $\sum_{j=1}^r (2b_j-1) \leq 2n-1$\textup{;}
\item $n= 2 \left(\sum_{j=1}^r b_j - \sum_{i=1}^q a_i \right) -( r-q) $,
\end{enumerate}
where $n$ is the formal dimension of $(\Lambda V,d)$.
This is enough to compute the possible vector spaces $V$ that arise in the minimal models $(\Lambda V,d)$ of closed, simply connected manifolds of a given dimension.
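The conditions above are easily checked by machine. The following Python sketch (the function names are ours, for illustration only) tests (SAC) by brute force and evaluates the formal dimension via condition (d); for instance, it confirms that $a=(1,1)$, $b=(2,3)$ is admissible with formal dimension 6, while $a=(2)$, $b=(3)$ violates (SAC).

```python
from itertools import combinations

def representable(b, A):
    """Is b = sum_l gamma_l * A[l] for nonnegative integers gamma_l
    with total word length sum_l gamma_l >= 2?"""
    def rec(target, idx, total):
        if idx == len(A):
            return target == 0 and total >= 2
        return any(rec(target - c * A[idx], idx + 1, total + c)
                   for c in range(target // A[idx] + 1))
    return rec(b, 0, 0)

def sac(a, b):
    """Strong arithmetic condition of Friedlander and Halperin."""
    for s in range(1, len(a) + 1):
        for sub in combinations(a, s):
            # need s distinct b-exponents representable over the chosen a's
            if sum(representable(bj, sub) for bj in b) < s:
                return False
    return True

def formal_dim(a, b):
    """Formal dimension, computed from condition (d)."""
    return 2 * (sum(b) - sum(a)) - (len(b) - len(a))
```

For example, `sac((1, 1), (2, 3))` returns `True` and `formal_dim((1, 1), (2, 3))` evaluates to 6.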
\subsubsection{Pure Sullivan algebras and regular sequences}\label{subsubsec:PureSullivanAlgebrasRegularSequences}The notions of pure Sullivan algebras and regular sequences will be essential in the proof of our results in dimension~6.
A Sullivan algebra $(\Lambda V,d)$ is called \emph{pure} if $\dim V<\infty$, $d(V^{\mathrm{even}})=0$ and $d(V^{\mathrm{odd}})\subset\Lambda V^{\mathrm{even}}$.
Let $R$ be a ring. Recall that a sequence $r_1,r_2,\dots,r_k$ of elements of $R$ is called \emph{regular} if $r_1$ is not a zero divisor in $R$ and $r_i$ is not a zero divisor in $R/(r_1,\dots,r_{i-1})$ for $i=2,\dots,k$. In general, being regular depends on the order of the sequence $r_1,\dots,r_k$. However, we are only interested in the case where $R=\ensuremath{\mathbb{K}}[x_1,\dots,x_n]$ is a polynomial ring over a field and the $r_i$ are homogeneous polynomials. In this case, being regular does not depend on the order of the elements $r_1, \dots,r_k$ (see \cite[Corollary to Theorem 16.3]{Matsumura} for example).
These two notions can be brought together as follows. Let $(\Lambda V,d)$ be a pure minimal Sullivan algebra with $\dim V^{\mathrm{even}}=\dim V^{\mathrm{odd}}$ and $y_1, \dots,y_k$ a basis of $V^{\mathrm{odd}}$, then $(\Lambda V,d)$ is rationally elliptic if and only if $dy_1,\dots,dy_k$ is a regular sequence. Furthermore, if $(\Lambda V,d)$ is rationally elliptic, then $\ensuremath{\mathrm{H}}^*(\Lambda V,d)\cong \Lambda V^{\mathrm{even}}/(dy_1,\dots,dy_k)$. This follows from \cite[Propositions 32.2 and 32.3]{FHT} and \cite[Corollary 3.2]{Stanley78}.
\subsection{Cohomology rings of 6--manifolds}
Let $\ensuremath{\mathbb{K}}$ be a field of characteristic zero. By a result of Miller \cite{Miller79}, in dimensions $\leq 6$ every closed, simply connected manifold $M$ is formal, i.e. its minimal model over $\ensuremath{\mathbb{K}}$ is also a minimal model for the cdga $(\ensuremath{\mathrm{H}}^*(M;\ensuremath{\mathbb{K}}),0)$. Due to the uniqueness of the minimal model, two formal spaces have the same $\ensuremath{\mathbb{K}}$--homotopy type if and only if their cohomology rings with coefficients in $\ensuremath{\mathbb{K}}$ are isomorphic. Therefore, in dimension 6 we only need to consider the cohomology rings.
The isomorphism class of the cohomology ring $\ensuremath{\mathrm{H}}^*(M;\ensuremath{\mathbb{K}})$ of a closed, simply connected 6--manifold $M$ is determined by the dimension of $\ensuremath{\mathrm{H}}^3(M;\ensuremath{\mathbb{K}})$ and the equivalence class of the cubic form on $\ensuremath{\mathrm{H}}^2(M;\ensuremath{\mathbb{K}})$ given by the cup product to $\ensuremath{\mathrm{H}}^6(M;\ensuremath{\mathbb{K}})\cong \ensuremath{\mathbb{K}}$. The equivalence relation we use is given by changing the basis of $\ensuremath{\mathrm{H}}^2(M;\ensuremath{\mathbb{K}})$ and scaling the form by a number in $\ensuremath{\mathbb{K}}$ (the scaling isn't necessary for $\ensuremath{\mathbb{K}}=\ensuremath{\mathbb{R}}$ or $\ensuremath{\mathbb{C}}$).
By a result of Wall \cite{Wall}, every rational cubic form is also realizable as the form associated to a closed, simply connected, spin manifold of dimension 6 with $\ensuremath{\mathrm{b}}_3=0$ and torsion free homology.
We will use two equivalent definitions of cubic forms on a vector space $V$ of finite dimension $n$ in this paper. The first is that of a symmetric multilinear map \[F: V \times V \times V \to \ensuremath{\mathbb{K}},\] which is uniquely determined by the coefficients $F_{ijk}=F(e_i,e_j,e_k)$ with $i\leq j\leq k$ for some basis $e_1,\dots,e_n$ of $V$. The second description is that of a homogeneous polynomial of degree 3 in $n$ variables.
These definitions can be identified via
\[F\mapsto F(\sum_{i=1}^n x_i e_i,\sum_{i=1}^n x_i e_i,\sum_{i=1}^n x_i e_i)\in \ensuremath{\mathbb{K}}[x_1,\dots,x_n].\]
\section{Six-dimensional manifolds}\label{sec:DimensionSix}
\subsection{The rational case (proof of Theorem~\ref{TheoremDimension6Rational})}
The possible exponents have already been calculated by Pavlov using the results of Friedlander and Halperin mentioned in Section \ref{susubsec:Exponents}.
\begin{lem}[See \cite{Pavlov}]\label{Lemma6dimensionalExponents}
A closed, simply connected, rationally elliptic 6--manifold has one of the following exponents:
\begin{multicols}{2}
\begin{enumerate}
\item[(6.1)] $a=(~)$, $b=(2,2)$
\item[(6.2)] $a=(1)$, $b=(4)$
\item[(6.3)] $a=(3)$, $b=(6)$
\item[(6.4)] $a=(1,1)$, $b=(2,3)$
\item[(6.5)] $a=(1,2)$, $b=(2,4)$
\item[(6.6)] $a=(1,1,1)$, $b=(2,2,2)$
\end{enumerate}
\end{multicols}
\end{lem}
In four of these cases the minimal model is already determined by its vector space structure.
\begin{lem}[See \cite{Pavlov}]
A closed, simply connected, rationally elliptic 6--manifold with exponents as in
\begin{itemize}
\item (6.1) is rationally homotopy equivalent to $\ensuremath{\mathrm{S}}^3\times \ensuremath{\mathrm{S}}^3$;
\item (6.2) is rationally homotopy equivalent to $\ensuremath{\mathbb{CP}}^3$;
\item (6.3) is rationally homotopy equivalent to $\ensuremath{\mathrm{S}}^6$;
\item (6.5) is rationally homotopy equivalent to $\ensuremath{\mathrm{S}}^2 \times \ensuremath{\mathrm{S}}^4$.
\end{itemize}
\end{lem}
We will now deal with case (6.4). Let $(\Lambda \tilde{V},d)=(\Lambda(x_1,x_2,y_1,y_2),d)$ be given by $|x_i|=2, |y_1|=3, |y_2|=5$ and $ dx_i=0$, $dy_1=x_1^2+f_2 \, x_2^2$, and $ dy_2 = g_1 \,x_1^3 + g_2 \, x_1^2 x_2 + g_3 \, x_1 x_2^2 + g_4 \, x_2^3$ for some $f_2,g_1,g_2,g_3,g_4 \in \ensuremath{\mathbb{Q}}$.
Note that the minimal model of a closed, simply connected, rationally elliptic 6--manifold with exponents like in (6.4) is of this form: The quadratic form given by $dy_1$ cannot vanish, so one can choose an orthogonal basis for it and rescale.
\begin{lem}\label{LemmaVTildeModels}
The above model $(\Lambda \tilde{V},d)$ is the minimal model of a closed, simply connected 6--manifold if and only if
\[ g_4\neq f_2 g_2\pm \sqrt{-f_2} (f_2 g_1-g_3)\tag{$*$}.\]
\end{lem}
\begin{proof}
To see that $(*)$ is necessary, one can compute the determinant of the differential $d_7: (\Lambda \tilde{V})^7 \to \Kern d_8$ in the bases $y_1x_1^2$, $y_1x_1x_2$, $y_1x_2^2$, $y_2x_1$, $y_2x_2$ of $(\Lambda \tilde{V})^7$ and $x_1^4$, $x_1^3 x_2$, $x_1^2x_2^2$, $x_1x_2^3$, $x_2^4$ of $\Kern d_8$. This determinant is $f_2^3 g_1^2 + f_2^2 g_2^2 - 2 f_2^2 g_1 g_3 + f_2 g_3^2 - 2 f_2 g_2 g_4 + g_4^2$ and has to be nonzero. Solving for $g_4$, this gives $(*)$.
To see that $(*)$ is sufficient we only need to prove that $\ensuremath{\mathrm{H}}^*(\Lambda \tilde{V},d)$ is finite dimensional. If we have done so, the formal dimension needs to be 6 due to its exponents and by the results mentioned in Section \ref{subsubsec:RealizationManifold} it is realized by a compact, simply connected 6--manifold. We show that $\dim \ensuremath{\mathrm{H}}^{\geq9}(\Lambda \tilde{V},d)=0$ by an elementary calculation.
Let $k\geq4$. It is easy to see that $d_{2k}$ is injective when restricted to the span of $y_1y_2 x_1^ix_2^{k-4-i}$, $i=0,\dots,k-4$. So $\dim( \Image d_{2k})=k-3$ and $\dim( \Kern d_{2k})=k+1$.
The image of $d_{2k+1}$ is generated by
\[v_i=d(y_1x_1^{k-i} x_2^{i-1})= x_1^{k-i+2}x_2^{i-1}+f_2 x_1^{k-i} x_2^{i+1}, \quad i=1, \dots,k\]
and
\begin{align*} w_j&=d(y_2x_1^{k-1-j} x_2^{j-1})\\&=g_1 \, x_1^{k+2-j} x_2^{j-1}+g_2 \, x_1^{k+1-j} x_2^{j}+ g_3 \, x_1^{k-j} x_2^{j+1}+g_4 \, x_1^{k-1-j} x_2^{j+2}\end{align*}
for $j=1,\dots, k-1$.
Let
\begin{align*}u_1&=w_{k-2}-g_1 \, v_{k-2} - g_2 \, v_{k-1} - (g_3-f_2 g_1) v_k\\&= (g_4-f_2 g_2 )\; x_1x_2^{k}- f_2 (g_3 - f_2 g_1)\, x_2^{k+1} \end{align*}
and
\begin{align*}u_2&=w_{k-1}-g_1 \, v_{k-1} - g_2 \, v_{k} \\&= (g_3-f_2 g_1 )\; x_1x_2^{k}+ (g_4 - f_2 g_2)\, x_2^{k+1}. \end{align*}
Because of $(*)$, the elements $v_1,\dots, v_k,u_1,u_2$ are linearly independent. So $\dim \Image d_{2k+1}\geq k+2 =\dim \Kern d_{2k+2}$ and therefore $\Image d_{2k+1}=\Kern d_{2k+2}$. Comparing dimensions, we also get $\Image d_{2k}=\Kern d_{2k+1}$.
\end{proof}
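As an independent sanity check (not part of the proof), one can verify symbolically that the determinant computed above factors as a sum of a square and $f_2$ times a square, which makes the solution for $g_4$ in $(*)$ transparent. A short SymPy sketch:

```python
import sympy as sp

f2, g1, g2, g3, g4 = sp.symbols('f2 g1 g2 g3 g4')

# determinant of d_7 from the proof of the lemma
det = (f2**3*g1**2 + f2**2*g2**2 - 2*f2**2*g1*g3
       + f2*g3**2 - 2*f2*g2*g4 + g4**2)

# it equals (g4 - f2 g2)^2 + f2 (g3 - f2 g1)^2 ...
factored = (g4 - f2*g2)**2 + f2*(g3 - f2*g1)**2
assert sp.expand(det - factored) == 0
# ... so det = 0 exactly when g4 = f2 g2 +- sqrt(-f2) (f2 g1 - g3),
# recovering condition (*)
```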
\begin{rem}
It is easy to see that the equivalence class of $f_2$ in $\ensuremath{\mathbb{Q}}/(\ensuremath{\mathbb{Q}}^*)^2$ is an invariant of the isomorphism class of $(\Lambda \tilde{V},d)$. Since for every $f_2 \in \ensuremath{\mathbb{Q}}$ one can choose $g_1,\dots,g_4$ such that $(*)$ holds, there are infinitely many rational homotopy types of closed, simply connected, rationally elliptic 6--manifolds with $\ensuremath{\mathrm{b}}_2=2$, in contrast to the real homotopy types of these manifolds.
\end{rem}
In the following we assume that $(\Lambda \tilde{V},d)$ satisfies $(*)$.
Let $\omega_1$ and $\omega_2$ be the cohomology classes of $x_1$ and $x_2$ and $\alpha_1=f_2 g_1-g_3$ and $\alpha_2=f_2g_2-g_4$.
Then $\omega_1^3= -f_2 \omega_1 \omega_2^2$ and $\omega_1^2 \omega_2=-f_2 \omega_2^3$. Therefore
\[0=g_1\omega_1^3 + g_2 \omega_1^2 \omega_2 + g_3\omega_1 \omega_2^2+g_4\omega_2^3=-(\alpha_1\; \omega_1 \omega_2^2 + \alpha_2\; \omega_2^3).\]
Then $\Omega=-\alpha_2\; \omega_1 \omega_2^2+\alpha_1\; \omega_2^3\neq 0$, since $(\alpha_1, \alpha_2)\neq(0,0)$ due to $(*)$. We have $(\alpha_1^2 +\alpha_2^2)\omega_1 \omega_2^2=-\alpha_2 \Omega$ and $(\alpha_1^2 +\alpha_2^2) \omega_2^3=\alpha_1 \Omega$.
Since we can use $\tfrac{1}{\alpha_1^2+\alpha_2^2} \Omega$ to define the cubic form $F$ associated to $\ensuremath{\mathrm{H}}^*(\Lambda \tilde{V},d)$, it is given by the components \[F_{111}=f_2 \alpha_2, \quad F_{112} =-f_2 \alpha_1, \quad F_{122}=-\alpha_2, \quad F_{222}=\alpha_1\]
and because of $(*)$, we have $\alpha_2 \neq \pm \sqrt{-f_2} \alpha_1$.
On the other hand, every cubic form that is of this form with given parameters $f_2, \alpha_1,\alpha_2 \in \ensuremath{\mathbb{Q}}$ satisfying $\alpha_2 \neq \pm \sqrt{-f_2} \alpha_1$ is realized by a minimal model $(\Lambda \tilde{V},d)$ of a closed, simply connected, rationally elliptic 6--manifold.
\begin{lem}
Let $F$ be a cubic form on a two-dimensional vector space $V$ over $\ensuremath{\mathbb{Q}}$. Then there is a basis of $V$ and $f_2, \alpha_1,\alpha_2\in \ensuremath{\mathbb{Q}}$ such that the components of $F$ in this basis are given by
\[F_{111}=f_2 \alpha_2, \quad F_{112} =-f_2 \alpha_1, \quad F_{122}=-\alpha_2, \quad F_{222}=\alpha_1.\]
\end{lem}
\begin{proof}
First we prove that it is possible to find a basis such that $F_{111}F_{222}=F_{112}F_{122}$.
The change of basis $\tilde{x}_1=x_1$, $\tilde{x}_2=\lambda x_1 + x_2$ gives
\[\tilde{F}_{111} \tilde{F}_{222} - \tilde{F}_{112} \tilde{F}_{122} = F_{111} F_{222} - F_{112} F_{122} +\lambda \;2( F_{111} F_{122} - F_{112}^2),\]
where the $\tilde{F}_{ijk}$ are the components with respect to the new basis. This expression vanishes for some $\lambda \in \ensuremath{\mathbb{Q}}$ if $F_{112}^2 \neq F_{111} F_{122}$. If $F_{112}^2 = F_{111} F_{122}$, then changing the basis to $\tilde{x}_1=x_1 + \lambda x_2$, $\tilde{x}_2=x_2$ gives
\[\tilde{F}_{111} \tilde{F}_{122} - \tilde{F}_{112}^2= (F_{111} F_{222} - F_{112} F_{122}) \lambda + (F_{112} F_{222} - F_{122}^2) \lambda^2,\]
so we can arrange $\tilde{F}_{111} \tilde{F}_{122} \neq \tilde{F}_{112}^2$ if the basis doesn't already satisfy $F_{111} F_{222} = F_{112} F_{122}$.
Assume now that $F_{111} F_{222} = F_{112} F_{122}$. If $F=0$, choose $\alpha_1=\alpha_2=0$. If $F\neq 0$, we can assume $F_{122}\neq 0$ or $F_{222} \neq 0$. Then let $\alpha_1=F_{222}$, $\alpha_2=-F_{122}$ and $f_2=-\tfrac{F_{112}}{F_{222}}$ or $f_2=-\tfrac{F_{111}}{F_{122}}$, respectively.
\end{proof}
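The change-of-basis formula used in the proof can also be checked symbolically. Writing out the components of $F$ in the sheared basis $\tilde{e}_1=e_1$, $\tilde{e}_2=\lambda e_1+e_2$ by multilinearity, SymPy confirms the identity (a sketch, independent of the proof):

```python
import sympy as sp

lam, F111, F112, F122, F222 = sp.symbols('lambda F111 F112 F122 F222')

# components of the cubic form in the basis e1~ = e1, e2~ = lam*e1 + e2,
# obtained by multilinearity of F
T111 = F111
T112 = lam*F111 + F112
T122 = lam**2*F111 + 2*lam*F112 + F122
T222 = lam**3*F111 + 3*lam**2*F112 + 3*lam*F122 + F222

lhs = sp.expand(T111*T222 - T112*T122)
rhs = sp.expand(F111*F222 - F112*F122 + 2*lam*(F111*F122 - F112**2))
assert lhs == rhs
```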
\begin{lem}\label{LemmaDimension6FormsNotRealized}
If a cubic form $F$ on a two-dimensional vector space over $\ensuremath{\mathbb{Q}}$ is not realized by one of the above models $(\Lambda \tilde{V},d)$ satisfying $(*)$, then it is equivalent to the form associated to $(\ensuremath{\mathrm{S}}^2\times \ensuremath{\mathrm{S}}^4)\#(\ensuremath{\mathrm{S}}^2 \times \ensuremath{\mathrm{S}}^4)$ or $(\ensuremath{\mathrm{S}}^2\times \ensuremath{\mathrm{S}}^4) \# \ensuremath{\mathbb{CP}}^3$.
\end{lem}
\begin{proof}
Under a general change of basis $\tilde{x}_1=a x_1 + b x_2$, $\tilde{x}_2=c x_1 + dx_2$, and assuming only $F_{111} F_{222}=F_{112}F_{122}$, we have
\[\tilde{F}_{111} \tilde{F}_{222} - \tilde{F}_{112}\tilde{F}_{122}=2 (b c - a d)^2 \big(a c ( F_{111} F_{122}-F_{112}^2 ) + b d (F_{112} F_{222}-F_{122}^2)\big),\]
where, as before, $\tilde{F}_{ijk}$ denote the components with respect to the new basis. So if
\[F_{111} F_{222}=F_{112}F_{122},\quad F_{112}^2 =F_{122} F_{111}\;\text{ and }\;F_{122}^2= F_{112} F_{222}\]
holds in one basis, it holds in every basis.
By the last lemma and the discussion preceding it, we can assume that a cubic form $F$, which is not realized by one of the above models $(\Lambda \tilde{V},d)$ with $(*)$, satisfies $F_{111}=f_2 \alpha_2$, $F_{112} =-f_2 \alpha_1$, $F_{122}=-\alpha_2$, $F_{222}=\alpha_1$ and $\alpha_2=\pm \sqrt{-f_2} \alpha_1$.
Therefore
\[F_{111} F_{222}=F_{112}F_{122},\quad F_{112}^2 =F_{122} F_{111}\;\text{ and }\;F_{122}^2= F_{112} F_{222}.\]
If $F\neq0$, we can assume that $F_{222}\neq 0$. Then the change of basis $\tilde{x}_1=x_1+\lambda x_2$, $\tilde{x}_2=x_2$ with $\lambda=-\frac{F_{122}}{F_{222}}$, gives $\tilde{F}_{122}= F_{122}+\lambda F_{222}=0$, $\tilde{F}_{222}=F_{222}\neq0$ and with the above relations $\tilde{F}_{111}=\tilde{F}_{112}=0$. Scaling to $\tilde{F}_{222}=1$ this is the form associated to $(\ensuremath{\mathrm{S}}^2\times \ensuremath{\mathrm{S}}^4) \# \ensuremath{\mathbb{CP}}^3$.
If $F=0$ it is the cubic form associated to $(\ensuremath{\mathrm{S}}^2\times \ensuremath{\mathrm{S}}^4)\#(\ensuremath{\mathrm{S}}^2 \times \ensuremath{\mathrm{S}}^4)$.
\end{proof}
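Similarly, the determinant identity under a general change of basis from the last proof can be verified with SymPy. The sketch below (our own check, not part of the argument) expands all components by multilinearity and eliminates $F_{222}$ via the relation $F_{111}F_{222}=F_{112}F_{122}$, assuming $F_{111}\neq0$; the general case follows by density:

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d')
F111, F112, F122, F222 = sp.symbols('F111 F112 F122 F222')

def comp(u, v, w):
    """Component F(u, v, w) of the cubic form by multilinearity;
    u, v, w are coordinate pairs of vectors in the old basis."""
    comps = {(0, 0, 0): F111, (0, 0, 1): F112,
             (0, 1, 1): F122, (1, 1, 1): F222}
    total = 0
    for i in range(2):
        for j in range(2):
            for k in range(2):
                total += u[i]*v[j]*w[k] * comps[tuple(sorted((i, j, k)))]
    return total

e1t, e2t = (a, b), (c, d)   # new basis vectors in old coordinates
lhs = (comp(e1t, e1t, e1t)*comp(e2t, e2t, e2t)
       - comp(e1t, e1t, e2t)*comp(e1t, e2t, e2t))
rhs = 2*(b*c - a*d)**2 * (a*c*(F111*F122 - F112**2)
                          + b*d*(F112*F222 - F122**2))

# the identity holds modulo F111*F222 = F112*F122; eliminate F222
diff = (lhs - rhs).subs(F222, F112*F122/F111)
assert sp.cancel(sp.expand(diff)) == 0
```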
The proof of Theorem~\ref{TheoremDimension6Rational} is now easy.
\begin{proof}[Proof of Theorem~\ref{TheoremDimension6Rational}]
By Lemma~\ref{Lemma6dimensionalExponents} the second Betti number of a closed, simply connected, rationally elliptic 6--manifold $M$ satisfies $\ensuremath{\mathrm{b}}_2(M)\leq3$.
Note that a manifold satisfying (a), (b) or (c) of Theorem~\ref{TheoremDimension6Rational} is rationally homotopy equivalent to $\ensuremath{\mathrm{S}}^6$, $\ensuremath{\mathrm{S}}^3 \times \ensuremath{\mathrm{S}}^3$, $\ensuremath{\mathbb{CP}}^3$ or $\ensuremath{\mathrm{S}}^2 \times \ensuremath{\mathrm{S}}^4$.
Now consider a closed, simply connected, rationally elliptic 6--manifold with $\ensuremath{\mathrm{b}}_2=2$. By Lemma~\ref{Lemma6dimensionalExponents} and the discussion preceding Lemma~\ref{LemmaVTildeModels} its minimal model is one of the $(\Lambda \tilde{V},d)$ satisfying $(*)$. Therefore it falls into (d) of the theorem. If on the other hand a closed, simply connected 6--manifold $M$ falling into (d) is given, its minimal model has to be one of $(\Lambda \tilde{V},d)$ satisfying $(*)$ by Lemma~\ref{LemmaDimension6FormsNotRealized}.
Finally, consider a closed, simply connected, rationally elliptic 6--manifold $M$ with $\ensuremath{\mathrm{b}}_2(M)=3$. Its rational minimal model then has the form $(\Lambda V,d)=(\Lambda(x_1,x_2,x_3,y_1,y_2,y_3),d)$ with $|x_i|=2$, $|y_j|=3$ and $dx_i=0$. In particular, $(\Lambda V,d)$ is a pure Sullivan algebra with an equal number of even and odd generators. As seen in Section~\ref{subsubsec:PureSullivanAlgebrasRegularSequences}, $\ensuremath{\mathrm{H}}^*(M;\ensuremath{\mathbb{Q}})=\ensuremath{\mathrm{H}}^*(\Lambda V,d)=\Lambda(x_1,x_2,x_3)/(dy_1,dy_2,dy_3)$ and $dy_1, dy_2,dy_3$ is a regular sequence. Thus $M$ falls into case (e).
If, on the other hand, a manifold falling into case (e) is given, then the minimal model has the above form and the manifold is rationally elliptic.
\end{proof}
\subsection{The real case (proof of Theorems~\ref{TheoremDimension6Realb2leq2} and~\ref{TheoremDimension6Realb2eq3}, and Corollary~\ref{cor:hyperbolicDimension6})}
The main difference in approaching the real case is that binary and ternary real cubic forms have been classified in \cite[Lemmas 3 and 4]{McK}. For the rest of this section we will use the definition of a cubic form as a homogeneous polynomial of degree 3 as it is used there. We will now state the classification of McKay \cite{McK}.
A binary real cubic form is equivalent to exactly one of $0$, $x^3$, $x^2y$, $x^3 +y^3$ and $x^2 y - x y^2$.
A singular ternary real cubic form is equivalent to exactly one of the following:
\begin{multicols}{2}
\begin{itemize} \item$0,$ \item $ x^3,$ \item $ x^2y,$ \item $ x^2 y - x y^2,$ \item $ x(x^2+y^2),$ \item $ x y z,$ \item $ z(x^2+y^2),$ \item $ x(xz -y^2),$ \item$z(x^2+y^2-z^2),$ \item $ x(x^2+y^2-z^2),$ \item $x(x^2+y^2+z^2),$ \item $ x^3-3y^2 z,$ \item $ x^3+3 x^2 z-3 y^2 z,$ \item and $x^3-3 x^2 z-3 y^2 z$.
\end{itemize}
\end{multicols}
A nonsingular ternary real cubic form is equivalent to exactly one of the forms \[x^3+y^3+z^3+6 \sigma \;x y z\] with $\sigma \neq -\tfrac{1}{2}$.
\begin{lem}\label{LemmaBinaryCubicFormsRealization}
The binary real cubic forms are realized by the following manifolds:
\begin{multicols}{2}
\begin{itemize}
\item $0$: $(\ensuremath{\mathrm{S}}^2\times \ensuremath{\mathrm{S}}^4)\#(\ensuremath{\mathrm{S}}^2 \times \ensuremath{\mathrm{S}}^4)$
\item $x^3$: $(\ensuremath{\mathrm{S}}^2\times \ensuremath{\mathrm{S}}^4)\# \ensuremath{\mathbb{CP}}^3$
\item $x^2y$: $\ensuremath{\mathbb{CP}}^2 \times \ensuremath{\mathrm{S}}^2$\columnbreak
\item $x^3 +y^3$: $\ensuremath{\mathbb{CP}}^3 \# \ensuremath{\mathbb{CP}}^3 $
\item $x^2 y - x y^2$: $\ensuremath{\mathrm{SU}}(3)/ \ensuremath{\mathrm{T}}^2$
\end{itemize}
\end{multicols}
\end{lem}
\begin{proof}
The first four are easy to see. The cohomology ring of $\ensuremath{\mathrm{SU}}(3)/\ensuremath{\mathrm{T}}^2$ has been calculated in \cite{Borel53} and is \[\ensuremath{\mathrm{H}}^*(\ensuremath{\mathrm{SU}}(3)/\ensuremath{\mathrm{T}}^2;\ensuremath{\mathbb{R}})=\Lambda(x_1,x_2)/(x_1^2 +x_1 x_2 + x_2^2, x_1^2 x_2+x_1 x_2^2)\] with $|x_i|=2$. Therefore $x_1^3=x_2^3=0$ and $x_1^2 x_2=-x_1 x_2^2$. So the associated cubic form is as stated.
\end{proof}
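The relations used in this computation can be double-checked by reducing modulo a Gröbner basis of Borel's ideal; the following SymPy sketch (a sanity check, not part of the proof) confirms that $x_1^3$, $x_2^3$ and $x_1^2x_2+x_1x_2^2$ vanish in the quotient, while $x_1^2x_2$ survives as the top class:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

# defining ideal of H^*(SU(3)/T^2; R) as computed by Borel
G = sp.groebner([x1**2 + x1*x2 + x2**2, x1**2*x2 + x1*x2**2],
                x1, x2, order='grevlex')

# x1^3, x2^3 and x1^2 x2 + x1 x2^2 vanish in the quotient ...
assert G.reduce(x1**3)[1] == 0
assert G.reduce(x2**3)[1] == 0
assert G.reduce(x1**2*x2 + x1*x2**2)[1] == 0

# ... while x1^2 x2 represents a nonzero class in degree 6
assert G.reduce(x1**2*x2)[1] != 0
```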
Of these manifolds, $\ensuremath{\mathbb{CP}}^2 \times \ensuremath{\mathrm{S}}^2$, $\ensuremath{\mathbb{CP}}^3 \# \ensuremath{\mathbb{CP}}^3 $ and $\ensuremath{\mathrm{SU}}(3)/ \ensuremath{\mathrm{T}}^2$ are rationally elliptic, since their rational cohomology rings are generated by $\ensuremath{\mathrm{H}}^2$. Since the closed, simply connected, rationally elliptic 6--manifolds with second Betti number $\ensuremath{\mathrm{b}}_2\leq 1$ have been identified before, this already proves Theorem~\ref{TheoremDimension6Realb2leq2}.
Corollary~\ref{cor:hyperbolicDimension6} also follows from this, since we have seen that every compact, simply connected 6-manifold $M$ with $\ensuremath{\mathrm{b}}_2(M)\leq1$ and $\ensuremath{\mathrm{b}}_3(M)=0$ is rationally elliptic.
For the proof of Theorem~\ref{TheoremDimension6Realb2eq3}, we start with the following models. For $\lambda\neq1$ let
\[(\Lambda V,d_\lambda) = (\Lambda(x_1,x_2,x_3,y_1,y_2,y_3),d_\lambda)\]
with
$ |x_i|=2$, $|y_j|=3$, $dx_i=0$ and $d_\lambda y_j=x_j^2- \lambda \frac{x_1 x_2 x_3}{x_j}$ for $i,j=1,2,3$.
Let $u_j=x_j^2- \lambda \frac{x_1 x_2 x_3}{x_j}\in\ensuremath{\mathbb{R}}[x_1,x_2,x_3]$ and suppose there is a $z \in \ensuremath{\mathbb{C}}^3\setminus\{(0,0,0)\}$ with $u_i(z)=0$ for $i=1,2,3$. Since $z_1^2 =\lambda z_2 z_3$, $z_2^2=\lambda z_1z_3$ and $z_3^2=\lambda z_1 z_2$, we have $z_i\neq 0$ for $i=1,2,3$. Then $z_1^4=\lambda^2 z_2^2 z_3^2=\lambda ^4 z_1^2 z_2 z_3$, so $\lambda^4 z_2 z_3=z_1^2 = \lambda z_2 z_3$ and hence $\lambda^4=\lambda$, that is $\lambda\in\{0,1\}$. Since $\lambda=0$ would force $z_1=0$, it follows that $\lambda=1$, which we excluded. So $(0,0,0)$ is the only common zero of $u_1$, $u_2$ and $u_3$ in $\ensuremath{\mathbb{C}}^3$. By Hilbert's Nullstellensatz, $\ensuremath{\mathbb{R}}[x_1,x_2,x_3]/(u_1,u_2,u_3)$ is finite dimensional. By \cite[Propositions 32.1, 32.2 and 32.3]{FHT}, $u_1,u_2,u_3$ is a regular sequence, $(\Lambda V,d_\lambda)$ is rationally elliptic of formal dimension 6 due to its exponents, and its cohomology ring is $\ensuremath{\mathrm{H}}^*(\Lambda V,d_\lambda)\cong\ensuremath{\mathbb{R}}[x_1,x_2,x_3]/(u_1,u_2,u_3)$.
The cubic form associated to $(\Lambda V,d_\lambda)$ is $x^3+y^3+z^3+\tfrac{6}{\lambda} \;x y z$ if $\lambda \neq 0$ and $xyz$ if $\lambda =0$. So if a closed, simply connected 6--manifold with $\ensuremath{\mathrm{b}}_3=0$ has one of these forms associated to it, it is rationally elliptic. As the models $(\Lambda V,d_\lambda)$ with $\lambda \in \ensuremath{\mathbb{Q}}\setminus \{1\}$ can obviously be defined over the rational numbers, they can be realized as minimal models of closed, simply connected 6--manifolds and we get the following.
\begin{prop}\label{prop:InfinitelymanyrealHomotopytypesDimension6}
There are infinitely many real homotopy types of closed, simply connected, rationally elliptic 6--manifolds.
\end{prop}
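The finite-dimensionality argument above can also be checked by linear algebra: for a complete intersection of three quadrics in three variables the Hilbert series is $(1+t)^3$, so the quotient must vanish in degree 4. The following SymPy sketch (the helper names are ours) verifies this for the sample value $\lambda=2$ and shows that it fails for the excluded value $\lambda=1$:

```python
import sympy as sp
from itertools import combinations_with_replacement

x, y, z = sp.symbols('x y z')

def deg4_quotient_dim(quadrics):
    """Dimension of the degree-4 part of R[x,y,z]/(quadrics); it is zero
    iff the three quadrics form a regular sequence."""
    deg2 = [m[0]*m[1] for m in combinations_with_replacement((x, y, z), 2)]
    deg4 = [m[0]*m[1]*m[2]*m[3]
            for m in combinations_with_replacement((x, y, z), 4)]
    # degree-4 part of the ideal is spanned by quadric * degree-2 monomial
    rows = [[sp.Poly(sp.expand(q*m), x, y, z).coeff_monomial(mm)
             for mm in deg4] for q in quadrics for m in deg2]
    return len(deg4) - sp.Matrix(rows).rank()

def u(lam):
    # d_lambda y_j = x_j^2 - lam * x1 x2 x3 / x_j
    return [x**2 - lam*y*z, y**2 - lam*x*z, z**2 - lam*x*y]

assert deg4_quotient_dim(u(2)) == 0   # lambda = 2: regular sequence
assert deg4_quotient_dim(u(1)) > 0    # lambda = 1: not regular
```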
For the remaining cubic forms we can use the same trick. To a given cubic form we associate the subspace of homogeneous polynomials of degree 2 in $\ensuremath{\mathbb{R}}[x_1,x_2,x_3]$ which vanish in the associated cohomology ring $\ensuremath{\mathrm{H}}^*(M;\ensuremath{\mathbb{R}})$ of some closed, simply connected 6--manifold. To compute this subspace, one uses that such a polynomial $f$ vanishes in cohomology if and only if $x_1 f$, $x_2 f$ and $x_3f$ vanish in cohomology, which can be checked using the cubic form. If we take, for example, the cubic form $x^3+y^3+z^3$ (belonging to $\ensuremath{\mathbb{CP}}^3\#\ensuremath{\mathbb{CP}}^3\#\ensuremath{\mathbb{CP}}^3$), which we left out above, the associated subspace is generated by $x_1x_2$, $x_1x_3$ and $x_2 x_3$. This is not a regular sequence, since $x_1x_3$ is a zero divisor in $\ensuremath{\mathbb{R}}[x_1,x_2,x_3]/(x_1x_2)$. Therefore $\ensuremath{\mathbb{CP}}^3\#\ensuremath{\mathbb{CP}}^3\#\ensuremath{\mathbb{CP}}^3$ is not rationally elliptic.
For the other nonsingular ternary cubic form we left out, $x^3+y^3+z^3+6 x y z$, the associated sequence is not regular, since
\[x_2 (x_3^2-x_1x_2)=-x_3(x_1^2-x_2x_3)-x_1(x_2^2-x_1x_3),\]
so $x_3^2-x_1x_2$ is a zero divisor in $\ensuremath{\mathbb{R}}[x_1,x_2,x_3]/(x_1^2-x_2x_3,x_2^2-x_1x_3)$.
The proof of Theorem~\ref{TheoremDimension6Realb2eq3} will be completed by the following lemma.
\begin{table}[!htbp]\begin{center}\caption{\label{TableTernaryCubicForms}Ternary real cubic forms and associated sequences of homogeneous polynomials of degree two}
\begin{tabular}{p{3.5cm}p{5.6cm}l}\toprule
cubic form& sequence & regular\\
\midrule
$0$& $x_1^2$,\, $x_2^2$,\, $x_3^2$,\, $x_1 x_2$,\, $x_1 x_3$,\, $x_2 x_3$&no \\ \addlinespace[.3em]
$x^3$& $x_2^2$,\, $x_3^2$,\, $x_1 x_2$,\, $x_1 x_3$,\, $x_2 x_3$ & no \\ \addlinespace[.3em]
$x^2y$ &$x_2^2$,\, $x_1 x_3$,\, $x_2 x_3$,\, $x_3^2$& no\\ \addlinespace[.3em]
$x^2 y - x y^2$ & $x_1^2+x_1 x_2 + x_2^2$,\, $x_1 x_3$,\, $x_2 x_3$,\, $x_3^2 $ & no\\ \addlinespace[.3em]
$x(x^2+y^2)\sim x^3+y^3 $ & $x_1x_2$,\, $x_1x_3$,\, $x_2 x_3$,\, $x_3^2$& no\\ \addlinespace[.3em]
$x y z$ & $x_1^2$,\, $x_2^2$,\, $x_3^2$ & yes\\ \addlinespace[.3em]
$z(x^2+y^2)$ & $x_1 x_2$,\, $x_1^2-x_2^2$,\, $x_3^2$ & yes\\ \addlinespace[.3em]
$x(xz -y^2)$ & $x_2^2+x_1 x_3$,\, $x_3^2$,\, $x_2 x_3$ & no\\ \addlinespace[.3em]
{\raggedright $z(x^2+y^2-z^2)$\par $\sim z (3 x^2 + 3 y^2-z^2)$ }&$x_1x_2$,\, $x_1^2+x_3^2$,\, $x_2^2 + x_3^2$ & yes\\ \addlinespace[.3em]
{\raggedright $x(x^2+y^2-z^2)$\par$\sim x(x^2+3 y^2 - 3 z^2)$} & $x_2 x_3$,\, $x_1^2-x_2^2$,\, $x_1^2+x_3^2$ & yes\\ \addlinespace[.3em]
{\raggedright $x(x^2+y^2+z^2)$\par$\sim x(x^2+3 y^2 + 3 z^2)$} &$x_2 x_3$,\, $x_1^2-x_2^2$,\, $x_1^2-x_3^2$& yes\\ \addlinespace[.3em]
$x^3-3 y^2 z $ & $x_1 x_2$,\, $x_1 x_3$,\, $x_3^2$ & no\\ \addlinespace[.3em]
$x^3+3x^2 z-3 y^2 z$ & $x_1 x_2$,\, $x_3^2$,\, $x_1^2 - x_1 x_3 + x_2^2$ & yes\\ \addlinespace[.3em]
$x^3-3 x^2 z-3y^2 z$ &$x_1 x_2$,\, $x_3^2$,\, $x_1^2 + x_1 x_3 - x_2^2$ & yes\\ \addlinespace[.3em]
$x^3+y^3+z^3+6 \sigma x y z$, $\sigma \neq-\frac{1}{2}$&$\sigma x_1^2-x_2 x_3$,\, $\sigma x_2^2-x_1 x_3$,\, $\sigma x_3^2 - x_1 x_2$&iff $\sigma\not\in\{0,1\}$\\
\bottomrule
\end{tabular}\end{center}\end{table}
\begin{lem}The subspaces associated to the cubic forms $x y z$, $z(x^2+y^2)$, $z(x^2+y^2-z^2)$, $x(x^2+y^2-z^2)$, $x(x^2+y^2+z^2)$, $x^3+3x^2 z-3 y^2 z$, $x^3-3 x^2 z-3y^2 z$, and $x^3+y^3+z^3+6 \sigma x y z$ for $\sigma \not\in\{0,1,-\frac{1}{2}\}$ are generated by a regular sequence, while the ones associated to $0$, $x^3$, $x^2y$, $x^2 y - x y^2$, $x(x^2+y^2)$, $x(xz -y^2)$, $x^3-3 y^2 z $ and $x^3+y^3+z^3+6 \sigma x y z$ for $\sigma\in \{0,1\}$ are not generated by a regular sequence.
\end{lem}
\begin{proof}
Bases for the associated subspaces are given in Table~\ref{TableTernaryCubicForms}. The regularity of the sequences associated to $x y z$, $z(x^2+y^2)$, $z(x^2+y^2-z^2)$, $x(x^2+y^2-z^2)$, $x(x^2+y^2+z^2)$, $x^3+3x^2 z-3 y^2 z$, $x^3-3 x^2 z-3y^2 z$, and $x^3+y^3+z^3+6 \sigma x y z$ for $\sigma \not\in\{0,1,-\frac{1}{2}\}$ follows from the application of Hilbert's Nullstellensatz already used in the discussion following Lemma~\ref{LemmaBinaryCubicFormsRealization}. Except for the ones associated to $x(xz -y^2)$ and $x^3+y^3+z^3+6 \sigma x y z$ with $\sigma \in \{0,1\}$, all non-regular sequences contain two elements of the form $x_i x_j$ and $ x_i x_k$ with $\{i,j,k\}=\{1,2,3\}$. These are non-regular, since $x_ix_j \cdot x_k \in (x_i x_k)$. For $x(xz -y^2)$, the two elements $x_3^2$ and $x_2 x_3$ allow a similar construction, and the last case has been treated above.
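For instance, for the sequence $x_2^2+x_1x_3$, $x_3^2$, $x_2x_3$ associated to $x(xz-y^2)$ one computes
\[x_3\cdot(x_2x_3)=x_2\cdot x_3^2\in(x_2^2+x_1x_3,\,x_3^2),\]
while $x_3$ is nonzero in the quotient, since the ideal contains no elements of degree one. Hence $x_2x_3$ is a zero divisor modulo the first two elements and the sequence is not regular.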
\end{proof}
\begin{table}[!htb]\begin{center}\caption{\label{TableCubicFormsExamples}Ternary real cubic forms and examples of manifolds with cohomology ring having the cubic form associated to it}
\begin{tabular}{p{3.2cm}p{5.3cm}p{2.2cm}}\toprule
cubic form &example & rationally\\
\midrule
$0$& $(\ensuremath{\mathrm{S}}^2 \times \ensuremath{\mathrm{S}}^4) \#(\ensuremath{\mathrm{S}}^2 \times \ensuremath{\mathrm{S}}^4) \#(\ensuremath{\mathrm{S}}^2 \times \ensuremath{\mathrm{S}}^4 )$&hyperbolic \\\addlinespace[.3em]
$x^3$&$\ensuremath{\mathbb{CP}}^3 \# (\ensuremath{\mathrm{S}}^2 \times \ensuremath{\mathrm{S}}^4) \#(\ensuremath{\mathrm{S}}^2 \times \ensuremath{\mathrm{S}}^4)$& hyperbolic \\\addlinespace[.3em]
$x^2y$ & $(\ensuremath{\mathbb{CP}}^2 \times \ensuremath{\mathrm{S}}^2)\# (\ensuremath{\mathrm{S}}^2 \times \ensuremath{\mathrm{S}}^4) $& hyperbolic\\\addlinespace[.3em]
$x^2 y - x y^2$ & $(\ensuremath{\mathrm{SU}}(3)/ \ensuremath{\mathrm{T}}^2)\# (\ensuremath{\mathrm{S}}^2 \times \ensuremath{\mathrm{S}}^4) $& hyperbolic\\\addlinespace[.3em]
$x(x^2+y^2)$ & $\ensuremath{\mathbb{CP}}^3 \# \ensuremath{\mathbb{CP}}^3 \#(\ensuremath{\mathrm{S}}^2 \times \ensuremath{\mathrm{S}}^4) $& hyperbolic\\\addlinespace[.3em]
$x y z$ & $\ensuremath{\mathrm{S}}^2 \times\ensuremath{\mathrm{S}}^2 \times\ensuremath{\mathrm{S}}^2 $, $(\ensuremath{\mathbb{CP}}^2 \# \overline{\ensuremath{\mathbb{CP}}}^2) \times \ensuremath{\mathrm{S}}^2 $ & elliptic\\\addlinespace[.3em]
$z(x^2+y^2)$ & $(\ensuremath{\mathbb{CP}}^2 \# \ensuremath{\mathbb{CP}}^2) \times \ensuremath{\mathrm{S}}^2 $ & elliptic\\\addlinespace[.3em]
$x(xz -y^2)$ & $ $ & hyperbolic\\\addlinespace[.3em]
$z(x^2+y^2-z^2)$ & & elliptic\\\addlinespace[.3em]
$x(x^2+y^2-z^2)$ & $B^3_{b_1,c_1,c_2}$ with $c_2\neq 0$, $c_1\neq \frac{b_1c_2}{2}$& elliptic\\\addlinespace[.3em]
$x(x^2+y^2+z^2)$ & $B^1_{c_1,c_2} $, with $(c_1,c_2)\neq(0,0)$ & elliptic\\\addlinespace[.3em]
$x^3-3 y^2 z $ & $\ensuremath{\mathbb{CP}}^3 \# (\ensuremath{\mathbb{CP}}^2 \times \ensuremath{\mathrm{S}}^2) $ & hyperbolic\\\addlinespace[.3em]
$x^3+3x^2 z-3 y^2 z$ & $ $ & elliptic \\\addlinespace[.3em]
$x^3-3 x^2 z-3y^2 z$ &$B^2_{0,b_3} $ with $b_3\neq 0$& elliptic\\\addlinespace[.3em]
$x^3+y^3+z^3+6 \sigma x y z$, $\sigma \neq-\frac{1}{2},0,1$&$B^\text{sp}$&elliptic\\\addlinespace[.3em]
$x^3+y^3+z^3+6 \sigma x y z$, $\sigma \in\{0,1\}$&$\ensuremath{\mathbb{CP}}^3\#\ensuremath{\mathbb{CP}}^3\#\ensuremath{\mathbb{CP}}^3$&hyperbolic\\%hyperbolic/ \mbox{elliptic}\\
\bottomrule
\end{tabular}\end{center}\end{table}
In some of these cases we can give examples of manifolds which have these cubic forms, see Table~\ref{TableCubicFormsExamples}. Most of these are easy to see. We concentrate on the manifolds $B^1_{c_1,c_2}$, $B^2_{a_3,b_3}$ and $B^3_{b_1,c_1,c_2}$. They are certain biquotients that have been studied by DeVito \cite{DeV,DeVitoBiqu67}. They are given as quotients of $\ensuremath{\mathrm{S}}^3\times \ensuremath{\mathrm{S}}^3 \times \ensuremath{\mathrm{S}}^3$ by a $\ensuremath{\mathrm{T}}^3$-action. The general form of the actions is given by
\begin{align*}
&(u,v,w).((p_1,p_2),(q_1,q_2),(r_1,r_2))\\
&\qquad \qquad=((u p_1,u^{a_1} v^{a_2} w^{a_3} p_2), (u q_1,u^{b_1} v^{b_2} w^{b_3} q_2), (u r_1,u^{c_1} v^{c_2} w^{c_3} r_2)),
\end{align*}
where $(u,v,w)\in \ensuremath{\mathrm{T}}^3$ and $((p_1,p_2),(q_1,q_2),(r_1,r_2))\in (\ensuremath{\mathrm{S}}^3)^3\subset (\ensuremath{\mathbb{C}}^2)^3$. The action is determined by the matrix $\left(\begin{smallmatrix}a_1&a_2&a_3\\b_1&b_2&b_3\\c_1&c_2&c_3\\\end{smallmatrix}\right)\in \ensuremath{\mathbb{Z}}^{3 \times 3}$. The biquotients $B^1_{c_1,c_2}$, $B^2_{a_3,b_3}$ and $B^3_{b_1,c_1,c_2}$ are given by the matrices
\[\begin{pmatrix}1&2&0\\1&1&0\\c_1&c_2&1\\\end{pmatrix},\begin{pmatrix}1&2&a_3\\1&1&b_3\\0&0&1\\\end{pmatrix}\text{ and }\begin{pmatrix}1&0&0\\b_1&1&0\\c_1&c_2&1\\\end{pmatrix}.\]
The manifold $B^\text{sp}$ also is a biquotient of this form, the first of the sporadic examples in \cite{DeVitoBiqu67} with action determined by the matrix \[ \begin{pmatrix}1&2&2\\1&1&2\\1&1&1\\\end{pmatrix}.\]
Their cohomology rings have been computed in \cite[Proposition 4.9]{DeVitoBiqu67}:{\medmuskip=2.5mu
\begin{align*}
\ensuremath{\mathrm{H}}^*(B^1_{c_1,c_2};\ensuremath{\mathbb{Z}}) &\cong \ensuremath{\mathbb{Z}}[u,v,w]/(u^2 + 2 u v, v^2+ u v, w^2+ c_1 u w + c_2 v w),\\
\ensuremath{\mathrm{H}}^*(B^2_{a_3,b_3};\ensuremath{\mathbb{Z}})& \cong \ensuremath{\mathbb{Z}}[u,v,w]/(u^2 + 2 u v + a_3 u w, v^2+ u v + b_3 v w, w^2),\\
\ensuremath{\mathrm{H}}^*(B^3_{b_1,c_1,c_2};\ensuremath{\mathbb{Z}})& \cong \ensuremath{\mathbb{Z}}[u,v,w]/(u^2, v^2+b_1 u v, w^2+ c_1 u w + c_2 v w),\\
\ensuremath{\mathrm{H}}^*(B^{\text{sp}};\ensuremath{\mathbb{Z}}) &\cong \ensuremath{\mathbb{Z}}[u,v,w]/(u^2+2uv+2uw, v^2+ u v+2vw, w^2+uw+vw)
\end{align*}
with $u,v,w$ of degree 2.}
For some of these biquotients we will now compute the cubic form associated to their cohomology rings.
Consider first $B^1_{c_1,c_2}$ with $(c_1,c_2)\neq (0,0)$. Let $\alpha=\sqrt{c_2^2+(2 c_1-c_2)^2}\neq0$ and $x_1,x_2, x_3$ be the basis of $\ensuremath{\mathrm{H}}^2(B^1_{c_1,c_2};\ensuremath{\mathbb{R}})$ with $u=-2 x_3$, $v=x_2+x_3$ and $w=-\tfrac{\alpha}{2} x_1 - \tfrac{c_2}{2} x_2 + (c_1-\tfrac{c_2}{2}) x_3$. Then
\begin{align*}u^2 + 2 u v&= -4 x_2 x_3\\
v^2+u v&= - (x_1^2-x_2^2) +(x_1^2-x_3^2)\\
w^2 + c_1 u w+ c_2 v w&= \tfrac{c_2^2}{4} \;(x_1^2-x_2^2) +\tfrac{1}{4} (2c_1-c_2)^2\;(x_1^2-x_3^2)\\ &\qquad+\big(c_1 c_2 - \tfrac{c_2^2}{2}\big)\; x_2 x_3,
\end{align*}
which spans the same subspace of $(\ensuremath{\mathbb{R}}[x_1,x_2,x_3])^2$ as $ x_2x_3,x_1^2-x_2^2,x_1^2-x_3^2$, the sequence associated to $x(x^2+y^2+z^2)$, see Table~\ref{TableTernaryCubicForms}. Therefore $B^1_{c_1,c_2}$ has $x(x^2+y^2+z^2)$ as associated cubic form.
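As a sample of these computations, the first identity follows directly from $u=-2x_3$ and $v=x_2+x_3$:
\[u^2+2uv=4x_3^2-4x_3(x_2+x_3)=-4x_2x_3;\]
the other two lines are verified in the same way.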
Next consider $B^2_{0,b_3}$ with $b_3\neq 0$. Let $x_1,x_2,x_3$ be the basis of $\ensuremath{\mathrm{H}}^2(B^2_{0,b_3};\ensuremath{\mathbb{R}})$ with $u=-\frac{b_3^{1/3} }{2^{2/3}}(2 x_1-2 x_2+x_3)$, $v=-2^{1/3} b_3^{1/3} x_2$ and $w=\frac{1}{2^{2/3} b_3^{2/3}}x_3$. Then
\begin{align*}
u^2 + 2 u v&=\frac{b_3^{2/3}}{2^{4/3}}\; x_3^2+2^{2/3} b_3^{2/3}\;(x_1^2 + x_1 x_3 - x_2^2),
\\v^2+ u v + b_3 v w&=2^{2/3}b_3^{2/3} \;x_1 x_2,
\\w^2&=\left(\tfrac{1}{2^{2/3} b_3^{2/3}}\right)^2 \;x_3^2.
\end{align*}
Consulting Table~\ref{TableTernaryCubicForms} shows that the associated cubic form is $x^3-3 x^2 z-3y^2 z$.
Now consider $B^3_{b_1,c_1,c_2}$ with $c_2\neq 0$ and $2c_1 \neq b_1 c_2$. Let $x_1,x_2,x_3$ be the basis of $\ensuremath{\mathrm{H}}^2(B^3_{b_1,c_1,c_2};\ensuremath{\mathbb{R}})$ with $u=c_2(x_2-x_3)$, $v=(c_1-b_1c_2)x_2 + c_1 x_3$ and $w=\tfrac{1}{2} c_2 (b_1c_2-2 c_1)(x_1+x_2)$. Then
\begin{align*}
u^2 &= -c_2^2 (2 f_1+f_2 - f_3)\\
v^2+u v&= (2c_1^2-2b_1 c_1 c_2 +b_1^2 c_2^2)f_1 + (c_1^2-b_1c_1c_2) (-f_2+f_3)\\
w^2 + c_1 u w+ c_2 v w&= \tfrac{1}{4}c_2^2 (b_1c_2-2c_1)^2 f_2,
\end{align*}
with $f_1=x_2 x_3$, $f_2=x_1^2-x_2^2$ and $f_3=x_1^2+x_3^2$. It follows, by again consulting Table~\ref{TableTernaryCubicForms}, that $B^3_{b_1,c_1,c_2}$ realizes the cubic form $x(x^2+y^2-z^2)$.
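As a sample verification, write $\lambda=\tfrac{1}{2}c_2(b_1c_2-2c_1)$, so that $w=\lambda(x_1+x_2)$ and $c_1u+c_2v=-2\lambda x_2$. Then
\[w^2+c_1uw+c_2vw=\lambda^2\big((x_1+x_2)^2-2x_2(x_1+x_2)\big)=\lambda^2(x_1^2-x_2^2),\]
in accordance with the third identity above.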
In $\ensuremath{\mathrm{H}}^*(B^\text{sp})$, we have that $u^2v=-2 u v w$, $u^2 w= 0$, $uv^2=0$, $uw^2=-u v w$, $v^2 w=-uvw$, $vw^2=0$, $u^3=4uvw$, $v^3=2uvw$ and $w^3=uvw$. Hence, the cubic form associated to the cohomology ring of $B^\text{sp}$ is
\[4x^3+2y^3+z^3-6x^2y-3x z^2-3y^2 z+6xyz.\]
Computing the gradient, it is easy to see that this form is nonsingular. So for some $\sigma$ it is equivalent to the form $x^3+y^3+z^3+6 \sigma x y z$. A numerical computation shows that $\sigma \approx 0.27788$ for $B^\text{sp}$.
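As a sample, the relation $u^2v=-2uvw$ follows from the presentation of the cohomology ring: using $u^2=-2uv-2uw$ and $v^2=-uv-2vw$ one computes
\[u^2v=-2uv^2-2uvw=2u^2v+4uvw-2uvw,\]
and hence $u^2v=-2uvw$; the remaining products are obtained in the same way.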
\begin{rem}[The complex case]
The normal forms of complex ternary cubic forms can be found for example in \cite[Section~I.7]{Kraft} or \cite[Section~7.3]{PlaneAlgCurves}. In particular every nonsingular cubic form can be brought to the Hesse normal form $C_\lambda=x^3+y^3+z^3+\lambda x y z$ with $\lambda \in \ensuremath{\mathbb{C}}$, $\lambda^3\neq 27$. For a given $\lambda $ there are only finitely many $\lambda'$ such that $C_\lambda$ and $C_{\lambda'}$ are equivalent, see \cite[Section~7.3, Theorem~10]{PlaneAlgCurves}. Therefore the results can easily be adapted to the complex case, in particular Proposition~\ref{prop:InfinitelymanyrealHomotopytypesDimension6} still holds for complex homotopy types.
\end{rem}
\section{Seven-dimensional manifolds}\label{sec:DimensionSeven}
As in the six-dimensional case we start with the computation of the possible exponents using the results of Friedlander and Halperin given in Section \ref{susubsec:Exponents}.
\begin{lem}\label{Lemma7dimensionalExponents}
A closed, simply connected, rationally elliptic 7--manifold has one of the following exponents:
\begin{multicols}{2}
\begin{enumerate}
\item[(7.1)]$a=(~)$, $b=(4)$
\item[(7.2)]$a=(1)$, $b=(2,3)$
\item[(7.3)]$a=(2)$, $b=(2,4)$
\item[(7.4)]$a=(1,1)$, $b=(2,2,2)$
\end{enumerate}
\end{multicols}
\end{lem}
Again, most exponents allow only finitely many rational homotopy types.
\begin{lem}\label{Lemma7dimensionalFinitelyManyExamples}
A closed, simply connected, rationally elliptic 7--manifold with exponents like in
\begin{itemize}
\item (7.1) is rationally homotopy equivalent to $\ensuremath{\mathrm{S}}^7$;
\item (7.2) is rationally homotopy equivalent to $\ensuremath{\mathrm{S}}^2 \times \ensuremath{\mathrm{S}}^5$ or $\ensuremath{\mathbb{CP}}^2 \times \ensuremath{\mathrm{S}}^3$;
\item (7.3) is rationally homotopy equivalent to $\ensuremath{\mathrm{S}}^3\times\ensuremath{\mathrm{S}}^4$.
\end{itemize}
\end{lem}
\begin{proof}
Cases (7.1) and (7.3) are easy. In case (7.2) there are generators $x\in V^2$, $y_3\in V^3$ and $y_5\in V^5$. For the differential there are three possibilities: $d_1x=0$, $d_1y_3=x^2$, $d_1y_5=0$, which gives the minimal model of $\ensuremath{\mathrm{S}}^2 \times \ensuremath{\mathrm{S}}^5$; $d_2x=0$, $d_2y_3=0$, $d_2y_5=x^3$, which gives the minimal model of $\ensuremath{\mathrm{S}}^3\times \ensuremath{\mathbb{CP}}^2$; and $d_3x=0$, $d_3y_3=x^2$, $d_3y_5=x^3$. The last model is isomorphic to the first via $\varphi: (\Lambda(x,y_3,y_5),d_3)\to (\Lambda V,d_1)$ with $\varphi(x)=x$, $\varphi(y_3)=y_3$ and $\varphi(y_5)=y_5+x y_3$.
\end{proof}
So we are left with manifolds having exponents like in case (7.4). First note that for a minimal Sullivan algebra $(\Lambda V,d)$ with exponents like in (7.4), so that $\dim V^2=2$, $\dim V^3=3$ and $\dim V^i=0$ else, the rank of $d|_{V^3}$ has to satisfy $\rank d|_{V^3}\geq 2$ if $\dim \ensuremath{\mathrm{H}}^*(\Lambda V,d)<\infty$.
Consider the minimal Sullivan algebras
\[(\Lambda V,d_{\tilde{\sigma}})=(\Lambda(x_1,x_2,y_1,y_2,y_3),d_{\tilde{\sigma}})\]
with $\tilde{\sigma}\in \ensuremath{\mathbb{Q}}^*$, $|x_i|=2$, $|y_j|=3$ and differential given by $d_{\tilde{\sigma}} x_i=0=d_{\tilde{\sigma}}y_3$, $d_{\tilde{\sigma}}y_1= x_1x_2$ and $d_{\tilde{\sigma}}y_2=x_1^2-\tilde{\sigma}x_2^2$.
\begin{lem}\label{LemmaIsomorphicSigmaModels}
Two such models $(\Lambda V, d_{\tilde{\sigma}})$ and $(\Lambda V, d_{\tilde{\sigma}'})$ are isomorphic if and only if the equivalence classes $[\tilde{\sigma}]$ and $[\tilde{\sigma}']$ in $\ensuremath{\mathbb{Q}}^*/(\ensuremath{\mathbb{Q}}^*)^2$ agree.
Let $\sigma=[\tilde{\sigma}]\in \ensuremath{\mathbb{Q}}^*/(\ensuremath{\mathbb{Q}}^*)^2$. Then $(\Lambda V, d_{\tilde{\sigma}})$ is the minimal model of a 7--manifold $M^7_\sigma$.
\end{lem}
\begin{proof}
To see that $(\Lambda V, d_{\tilde{\sigma}})$ is the minimal model of a 7--manifold, first note that $(\Lambda V, d_{\tilde{\sigma}})\cong (\Lambda(x_1,x_2,y_1,y_2),d_{\tilde{\sigma}})\ensuremath{\otimes} (\Lambda(y_3),0)$. A short computation shows that $x_1^2-\tilde{\sigma} x_2^2$, $x_1x_2$ is a regular sequence. So $\ensuremath{\mathrm{H}}^*(\Lambda (x_1,x_2,y_1,y_2),d_{\tilde{\sigma}})$ is finite dimensional and $(\Lambda (x_1,x_2,y_1,y_2), d_{\tilde{\sigma}})$ is rationally elliptic. By a theorem of Halperin \cite[Theorem 3]{Halperin77}, $\ensuremath{\mathrm{H}}^*(\Lambda (x_1,x_2,y_1,y_2),d_{\tilde{\sigma}})$ and therefore $\ensuremath{\mathrm{H}}^*(\Lambda V,d_{\tilde{\sigma}})$ satisfy Poincaré duality, and by work of Sullivan $(\Lambda V,d_{\tilde{\sigma}})$ is the minimal model of a closed, simply connected 7--manifold.
Since $\ensuremath{\mathrm{H}}^4(\Lambda V,d_{\tilde{\sigma}})$ is one-dimensional, we can identify it with $\ensuremath{\mathbb{Q}}$ and get a symmetric bilinear form on $\ensuremath{\mathrm{H}}^2(\Lambda V,d_{\tilde{\sigma}})$. The determinant of this form is $\tilde{\sigma}$ if we choose $x_1^2$ as a generator of $\ensuremath{\mathrm{H}}^4(\Lambda V,d_{\tilde{\sigma}})$ and its equivalence class in $\ensuremath{\mathbb{Q}}^*/(\ensuremath{\mathbb{Q}}^*)^2$ is an invariant of the cohomology ring.
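Concretely, with respect to the basis $x_1$, $x_2$ one has $[x_1x_2]=0$ and $[x_2^2]=\tfrac{1}{\tilde{\sigma}}[x_1^2]$, so the Gram matrix of the form is
\[\begin{pmatrix}1&0\\0&\frac{1}{\tilde{\sigma}}\end{pmatrix},\]
whose determinant $\frac{1}{\tilde{\sigma}}$ represents the same class as $\tilde{\sigma}$ in $\ensuremath{\mathbb{Q}}^*/(\ensuremath{\mathbb{Q}}^*)^2$.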
If, on the other hand, $\tilde{\sigma},\tilde{\sigma}'\in \ensuremath{\mathbb{Q}}^*$ with $[\tilde{\sigma}]=[\tilde{\sigma}']$ in $\ensuremath{\mathbb{Q}}^*/(\ensuremath{\mathbb{Q}}^*)^2$ are given, then $\sqrt{\tilde{\sigma}'/\tilde{\sigma}}\in \ensuremath{\mathbb{Q}}$ and $\varphi: (\Lambda V, d_{\tilde{\sigma}})\to (\Lambda V, d_{\tilde{\sigma}'})$ defined by $\varphi(x_1)=x_1$, $\varphi(x_2)= \sqrt{\tilde{\sigma}'/\tilde{\sigma}} \; x_2$, $\varphi(y_1)=\sqrt{\tilde{\sigma}'/\tilde{\sigma}}\; y_1$ and $\varphi(y_j)=y_j$ for $j=2,3$ is an isomorphism.
\end{proof}
\begin{rem}
One can choose \[M^7_{[1]}=(\ensuremath{\mathbb{CP}}^2\#\ensuremath{\mathbb{CP}}^2)\times \ensuremath{\mathrm{S}}^3\] and \[M^7_{[-1]}=(\ensuremath{\mathbb{CP}}^2\#\overline{\ensuremath{\mathbb{CP}}}^2)\times \ensuremath{\mathrm{S}}^3\simeq_\ensuremath{\mathbb{Q}}\ensuremath{\mathrm{S}}^2\times \ensuremath{\mathrm{S}}^2\times \ensuremath{\mathrm{S}}^3.\] Here $\overline{\ensuremath{\mathbb{CP}}}^2$ denotes $\ensuremath{\mathbb{CP}}^2$ with reversed orientation and $\simeq_\ensuremath{\mathbb{Q}}$ denotes rational homotopy equivalence.
\end{rem}
\begin{rem}\label{RemarkDefinitionXsigma}
The minimal Sullivan algebras
\[(\Lambda(x_1,x_2,y_1,y_2),d_{\tilde{\sigma}})\]
used in the proof define rationally elliptic spaces $X_\sigma$, $\sigma \in \ensuremath{\mathbb{Q}}^*/(\ensuremath{\mathbb{Q}}^*)^2$, of formal dimension 4. These can be realized as four-dimensional orbifolds of nonnegative curvature, see \cite{GGKRW14}. However, $X_\sigma $ is not rationally homotopy equivalent to a manifold, since the intersection form cannot be induced by a unimodular form defined over the free part of the integer cohomology. The proof also shows that $M^7_\sigma\simeq_\ensuremath{\mathbb{Q}} X_\sigma\times\ensuremath{\mathrm{S}}^3$.
\end{rem}
The last minimal model we need to consider is
\[(\Lambda V,d)=(\Lambda(x_1,x_2,y_1,y_2,y_3),d), \quad |x_i|=2,|y_j|=3\]
with $dx_i=0$, $dy_1=x_1^2$, $dy_2=x_2^2$ and $dy_3=x_1x_2$. In \cite[Example 2.91]{FOT} it is introduced as the minimal model of an $\ensuremath{\mathrm{S}}^3$-bundle over $\ensuremath{\mathrm{S}}^2 \times \ensuremath{\mathrm{S}}^2$. We will give a description of it as a homogeneous space. Let
\[K=\left\{\left(\left(\begin{smallmatrix}z&0\\0&z^{-1}\end{smallmatrix}\right),\left(\begin{smallmatrix}w&0\\0&w^{-1}\end{smallmatrix}\right),\left(\begin{smallmatrix}zw&0\\0&(zw)^{-1}\end{smallmatrix}\right)\right)\middle| z,w \in \ensuremath{\mathrm{S}}^1\right\}\leq G:=(\ensuremath{\mathrm{SU}}(2))^3\]
and $N^7=G/K$. Then, see \cite[Theorem 2.71]{FOT}, a model for $N^7$ is given by $(\Lambda W \oplus \Lambda( sU),d)$, where $\Lambda W=\ensuremath{\mathrm{H}}^*(\mathrm{B}K;\ensuremath{\mathbb{Q}})$, $\Lambda U=\ensuremath{\mathrm{H}}^*(\mathrm{B}G;\ensuremath{\mathbb{Q}})$, and $sU$ denotes a shift in degree, so $|su|=|u|-1$ for $u\in U$. The differential is given by $dw=0$ for $w\in W$ and $d(su)=\ensuremath{\mathrm{H}}^*(\mathrm{B}\iota)(u)$ for $u \in U$ and $\iota: K\hookrightarrow G$ the inclusion. In our situation, $\Lambda W=\Lambda (x_1,x_2)$ with $|x_i|=2$, $\Lambda (sU)=\Lambda(y_1,y_2,y_3)$ with $|y_j|=3$. The map $\ensuremath{\mathrm{H}}^*(\mathrm{B}\iota)$ can be computed from the inclusion of $K$ in the standard maximal torus of $G$. One gets $dy_1=x_1^2$, $dy_2=x_2^2$ and $dy_3=(x_1+x_2)^2$, so the minimal model of $N^7$ is isomorphic to $(\Lambda V, d)$ as above.
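Explicitly, an isomorphism between the two models is obtained by replacing the generator $y_3$ by $\tfrac{1}{2}(y_3-y_1-y_2)$, since
\[d\big(\tfrac{1}{2}(y_3-y_1-y_2)\big)=\tfrac{1}{2}\big((x_1+x_2)^2-x_1^2-x_2^2\big)=x_1x_2.\]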
\begin{proof}[Proof of Theorem~\ref{TheoremDimension7Rational}]
By Lemma~\ref{Lemma7dimensionalFinitelyManyExamples} we only need to show that a minimal model with exponents like in (7.4) is isomorphic to the minimal model of $N^7$ or some $M^7_\sigma$. Let $(\Lambda V,d)$ be a minimal model with exponents like in (7.4). Then as we already noted $\rank d|_{V^3}\geq2$.
Suppose $\rank d|_{V^3}=2$. Then $\ensuremath{\mathrm{H}}^4(\Lambda V,d)$ is one-dimensional and the multiplication $\ensuremath{\mathrm{H}}^2(\Lambda V,d)\times\ensuremath{\mathrm{H}}^2(\Lambda V,d)\to \ensuremath{\mathrm{H}}^4(\Lambda V,d)$ can be interpreted as a symmetric bilinear form. Choose a basis $x_1, x_2$ of $V^2=\ensuremath{\mathrm{H}}^2(\Lambda V,d)$ that diagonalizes this form. Then $x_1 x_2\in (\Lambda V)^4$ is exact, so there exists $y_1 \in V^3$ with $dy_1=x_1x_2$. Choose $y_3 \in\ker d|_{V^3}$. Then choose $y_2\in V^3$ such that $y_1,y_2,y_3$ is a basis. By subtracting a multiple of $y_1$, scaling and possibly interchanging $x_1$ and $x_2$, we can assume that $dy_2=x_1^2+a x_2^2$ for some $a\in \ensuremath{\mathbb{Q}}$. If $a=0$, then $x_2^n$ would be closed but not exact for every $n\in \ensuremath{\mathbb{N}}$, so $a\neq0$.
If $\rank d|_{V^3}=3$, then the minimal model is obviously the one of $N^7$.\end{proof}
Using the classification of rationally elliptic manifolds in lower dimensions, the classification of compact, simply connected homogeneous manifolds in dimensions up to 9 by Klaus \cite{Klaus} and low-dimensional cohomogeneity one manifolds by Hoelscher (\cite{Hoel10class} and \cite{Hoel10hom}) one can prove the following.
\begin{samepage}
\begin{prop}\label{PropM7sigmanot} For $\sigma \in \ensuremath{\mathbb{Q}}^*/(\ensuremath{\mathbb{Q}}^*)^2\setminus\{ [1],[-1]\}$ the manifold $M_\sigma^7$ does not have the rational homotopy type of
\begin{enumerate}[label=\alph*)]
\item a product of closed, simply connected manifolds,
\item a bundle over a closed, simply connected, rationally elliptic manifold of dimension $\leq 5$ with fibre a closed, simply connected manifold,
\item a closed, simply connected, homogeneous space,
\item a closed, simply connected cohomogeneity one manifold.
\end{enumerate}
\end{prop}
\end{samepage}
The classification of real homotopy types of closed, simply connected, rationally elliptic 7--manifolds now reduces to understanding which of the rational homotopy types of Theorem~\ref{TheoremDimension7Rational} give the same real one. Lemma~\ref{LemmaIsomorphicSigmaModels} carries over to the real case, replacing $\ensuremath{\mathbb{Q}}^*/(\ensuremath{\mathbb{Q}}^*)^2$ by $\ensuremath{\mathbb{R}}^*/(\ensuremath{\mathbb{R}}^*)^2=\{1,-1\}$. Since $M^7_{[1]}=(\ensuremath{\mathbb{CP}}^2\#\ensuremath{\mathbb{CP}}^2)\times \ensuremath{\mathrm{S}}^3$, $M^7_{[-1]}=(\ensuremath{\mathbb{CP}}^2\#\overline{\ensuremath{\mathbb{CP}}}^2)\times \ensuremath{\mathrm{S}}^3$, and the other manifolds in Theorem~\ref{TheoremDimension7Rational} already differ by their Betti numbers, we get the following proposition.
\begin{prop}
A closed, simply connected 7--manifold is rationally elliptic if and only if it has the real homotopy type of one of the following manifolds: \\
$\ensuremath{\mathrm{S}}^7$, $\ensuremath{\mathrm{S}}^2 \times \ensuremath{\mathrm{S}}^5$, $\ensuremath{\mathbb{CP}}^2 \times \ensuremath{\mathrm{S}}^3$, $\ensuremath{\mathrm{S}}^3 \times \ensuremath{\mathrm{S}}^4$, $N^7$, $(\ensuremath{\mathbb{CP}}^2\#\ensuremath{\mathbb{CP}}^2)\times \ensuremath{\mathrm{S}}^3$ or $(\ensuremath{\mathbb{CP}}^2\#\overline{\ensuremath{\mathbb{CP}}}^2)\times \ensuremath{\mathrm{S}}^3$.
\end{prop}
Of these manifolds the only ones having the same complex homotopy type are $(\ensuremath{\mathbb{CP}}^2\#\ensuremath{\mathbb{CP}}^2)\times \ensuremath{\mathrm{S}}^3$ and $(\ensuremath{\mathbb{CP}}^2\#\overline{\ensuremath{\mathbb{CP}}}^2)\times \ensuremath{\mathrm{S}}^3$. Since $ \ensuremath{\mathbb{CP}}^2\#\overline{\ensuremath{\mathbb{CP}}}^2\simeq_\ensuremath{\mathbb{Q}} \ensuremath{\mathrm{S}}^2\times \ensuremath{\mathrm{S}}^2$ this shows the following for the complex homotopy types.
\begin{prop}
A closed, simply connected 7--manifold is rationally elliptic if and only if it has the complex homotopy type of one of the following manifolds: \\
$\ensuremath{\mathrm{S}}^7$, $\ensuremath{\mathrm{S}}^2 \times \ensuremath{\mathrm{S}}^5$, $\ensuremath{\mathbb{CP}}^2 \times \ensuremath{\mathrm{S}}^3$, $\ensuremath{\mathrm{S}}^3 \times \ensuremath{\mathrm{S}}^4$, $N^7$ or $\ensuremath{\mathrm{S}}^2 \times \ensuremath{\mathrm{S}}^2 \times \ensuremath{\mathrm{S}}^3$.
\end{prop}
\section{Higher dimensions}\label{sec:DimensionsEightAndNine}
\subsection{Dimension 8}
As before, we start by computing the possible exponents of closed, simply connected, rationally elliptic 8--manifolds using the results of Friedlander and Halperin mentioned in Section \ref{susubsec:Exponents}.
\begin{lem}\label{Lemma8dimensionalExponents}
A closed, simply connected, rationally elliptic 8--manifold has one of the following exponents:
\begin{multicols}{2}
\begin{enumerate}
\item[(8.1)]$a=(~)$, $b=(2,3)$
\item[(8.2)]$a=(1)$, $b=(5)$
\item[(8.3)]$a=(2)$, $b=(6)$
\item[(8.4)]$a=(4)$, $b=(8)$
\item[(8.5)]$a=(1)$, $b=(2,2,2)$
\item[(8.6)]$a=(1,1)$, $b=(2,4)$
\item[(8.7)]$a=(1,1)$, $b=(3,3)$
\item[(8.8)]$a=(1,2)$, $b=(3,4)$
\item[(8.9)]$a=(1,3)$, $b=(2,6)$
\item[(8.10)]$a=(2,2)$, $b=(4,4)$
\item[(8.11)]$a=(1,1,1)$, $b=(2,2,3)$
\item[(8.12)]$a=(1,1,2)$, $b=(2,2,4)$
\item[(8.13)]$a=(1,1,1,1)$, $b=(2,2,2,2)$
\end{enumerate}
\end{multicols}
\end{lem}
In eight of these cases we show that there are only finitely many possible rational homotopy types with the given exponents.
\begin{prop}
In cases $(8.1)$, $(8.2)$, $(8.3)$, $(8.4)$, $(8.5)$, $(8.8)$, $(8.9)$ and $(8.10)$ of Lemma~\ref{Lemma8dimensionalExponents} there are only finitely many rational homotopy types of closed, simply connected 8--manifolds with these exponents. They are:
\begin{multicols}{3}
\begin{enumerate}
\item[(8.1)] $\ensuremath{\mathrm{S}}^3 \times \ensuremath{\mathrm{S}}^5$
\item[(8.2)] $\ensuremath{\mathbb{CP}}^4$
\item[(8.3)] $\mathbb{HP}^2 $
\item[(8.4)] $\ensuremath{\mathrm{S}}^8$
\item[(8.5)] $\ensuremath{\mathrm{S}}^2 \times \ensuremath{\mathrm{S}}^3 \times \ensuremath{\mathrm{S}}^3$
\item[(8.8)] $\ensuremath{\mathbb{CP}}^2 \times \ensuremath{\mathrm{S}}^4$
\item[(8.9)] $\ensuremath{\mathrm{S}}^2 \times \ensuremath{\mathrm{S}}^6$
\item[(8.10)] $\ensuremath{\mathrm{S}}^4 \times \ensuremath{\mathrm{S}}^4$,\\ $\ensuremath{\mathbb{HP}}^2\#\ensuremath{\mathbb{HP}}^2$
\end{enumerate}
\end{multicols}
\end{prop}
\begin{proof}
Most cases are easy, so we concentrate on (8.10).
Let $M$ be a manifold with exponents like in (8.10). Then there is a basis $\omega_1,\omega_2$ of $\ensuremath{\mathrm{H}}^4(M;\ensuremath{\mathbb{Q}})$ such that $\omega_1 \omega_2=0$ and $\omega_1^2=\varepsilon \omega_2^2$, $\varepsilon=\pm1$. Choose $x_1,x_2\in V^4$ corresponding to $\omega_1,\omega_2$. Then there are $y_1,y_2\in V^7$ with $dy_1=x_1x_2$ and $dy_2=x_1^2-\varepsilon x_2^2$. For $\varepsilon =1$ this is the minimal model of $\ensuremath{\mathbb{HP}}^2 \# \ensuremath{\mathbb{HP}}^2$, for $\varepsilon=-1$ it is isomorphic to the one of $\ensuremath{\mathrm{S}}^4 \times \ensuremath{\mathrm{S}}^4$.
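For $\varepsilon=-1$ the isomorphism can be made explicit: sending the generators $a_1$, $a_2$, $b_1$, $b_2$ of the minimal model of $\ensuremath{\mathrm{S}}^4\times\ensuremath{\mathrm{S}}^4$ (with $|a_i|=4$, $|b_i|=7$ and $db_i=a_i^2$) to $\tfrac{1}{2}(x_1+x_2)$, $\tfrac{1}{2}(x_1-x_2)$, $\tfrac{1}{4}y_2+\tfrac{1}{2}y_1$ and $\tfrac{1}{4}y_2-\tfrac{1}{2}y_1$, respectively, defines an isomorphism, since
\[d\big(\tfrac{1}{4}y_2\pm\tfrac{1}{2}y_1\big)=\tfrac{1}{4}(x_1^2+x_2^2)\pm\tfrac{1}{2}x_1x_2=\big(\tfrac{1}{2}(x_1\pm x_2)\big)^2.\]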
\end{proof}
\begin{rem}
In case (8.10) there is an infinite family of simply connected rationally elliptic spaces that are not rationally homotopy equivalent to a manifold, analogous to the four-dimensional family $X_\sigma$.
\end{rem}
\begin{prop}
The rational homotopy types of closed, simply connected, rationally elliptic 8--manifolds with exponents like in case (8.12) of Lemma~\ref{Lemma8dimensionalExponents} are exactly the ones given by the $X_\sigma \times \ensuremath{\mathrm{S}}^4$ with $\sigma \in \ensuremath{\mathbb{Q}}^* /(\ensuremath{\mathbb{Q}}^*)^2$. In particular, there are infinitely many of these.
\end{prop}
\begin{proof}
Let $(\Lambda V,d)$ be the minimal model of such an 8--manifold. Then $\dim V^2= \dim V^3=2$, $\dim V^4=\dim V^7=1$ and $\dim V^k=0$ else. Moreover, $d(V^2)=\{0\}$ and $d(V^3)\subset \Lambda^2 V^2$, because of the minimality of the model.
Suppose $\rank d|_{V^3}\neq 2$. If $\rank d|_{V^3}=1$, let $0\neq y\in V^3$ with $dy=0$. Let $0\neq a\in V^4$. Then $da= y v$ for some $v \in V^2$, so $d(y a)=0$. But $ya\in (\Lambda V)^7$ is not exact, since $d((\Lambda V)^6)\subset \Lambda^2 V^2 \cdot V^3$. So we have $\ensuremath{\mathrm{H}}^7(\Lambda V,d)\neq \{0\}$, a contradiction. If $\rank d|_{V^3}=0$, then \[\dim \ker d|_{(\Lambda V)^{10}}\geq\dim(\Lambda^5 V^2\oplus(\Lambda^2 V^2) \cdot (\Lambda^2 V^3))=9\] and \[\rank(d|_{(\Lambda V)^9})\leq \dim( V^2 \cdot V^3 \cdot V^4\oplus V^2 \cdot V^7)=6,\] so $\ensuremath{\mathrm{H}}^{10}(\Lambda V,d)\neq\{0\}$, a contradiction.
Therefore $\rank d|_{V^3}= 2$, so we can choose bases $x_1,x_2$ of $V^2$ and $y_1,y_2$ of $V^3$ such that $dy_1=x_1^2-\tilde{\sigma} x_2^2$ for some $\tilde{\sigma} \in \ensuremath{\mathbb{Q}}$ and $dy_2=x_1 x_2$. Furthermore let $0\neq a \in V^4$. Then $da=0$, since there are no closed elements in $(\Lambda V)^5$.
Suppose now that $\tilde{\sigma}=0$. Then $x_2^n$ or $a^n$ is closed, but not exact for every $n$, a contradiction. So $\tilde{\sigma}\neq 0$. Now the only non-exact, closed elements of $(\Lambda V)^8$ are multiples of $a^2$, so up to isomorphism, a generator $z\in V^7$ satisfies $dz=a^2$, which gives the minimal model of $X_\sigma\times \ensuremath{\mathrm{S}}^4$ for $\sigma=[\tilde{\sigma}]$.
Since their cohomology rings are pairwise non-isomorphic, the $X_\sigma \times \ensuremath{\mathrm{S}}^4$, $\sigma \in \ensuremath{\mathbb{Q}}^*/(\ensuremath{\mathbb{Q}}^*)^2$, have different homotopy types. Since their intersection form is given by $x^2-y^2$, they can be realized by manifolds via Sullivan's realization result, see Section~\ref{subsubsec:RealizationManifold}.
\end{proof}
\subsection{Dimension 9}
Again we compute the possible exponents of closed, simply connected, rationally elliptic 9--manifolds using the results of Friedlander and Halperin mentioned in Section \ref{susubsec:Exponents} and show that in seven of the nine cases there are only finitely many rational homotopy types with the given exponents.
\begin{lem}\label{Lemma9dimensionalExponents}
A closed, simply connected, rationally elliptic 9--manifold has one of the following exponents:
\begin{multicols}{2}
\begin{enumerate}
\item[(9.1)]$a=()$, $b=(5)$
\item[(9.2)]$a=()$, $b=(2,2,2)$
\item[(9.3)]$a=(1)$, $b=(2,4)$
\item[(9.4)]$a=(1) $, $b=(3,3)$
\item[(9.5)]$a=(2)$, $b=(3,4)$
\item[(9.6)]$a=(3)$, $b=(2,6)$
\item[(9.7)]$a=(1,1)$, $b=(2,2,3)$
\item[(9.8)] $a=(1,2)$, $b=(2,2,4)$
\item[(9.9)] $a=(1,1,1)$, $b=(2,2,2,2)$
\end{enumerate}
\end{multicols}
\end{lem}
\begin{prop}\label{ProprationaleindeutigDim9}
In cases (9.1)--(9.6) and (9.8) of Lemma~\ref{Lemma9dimensionalExponents} there are only finitely many rational homotopy types of closed, simply connected 9--manifolds with these exponents. They are:
\begin{multicols}{2}
\begin{enumerate}
\item[(9.1)]$\ensuremath{\mathrm{S}}^9$
\item[(9.2)]$\ensuremath{\mathrm{S}}^3 \times \ensuremath{\mathrm{S}}^3 \times \ensuremath{\mathrm{S}}^3$
\item[(9.3)]$\ensuremath{\mathrm{S}}^2 \times \ensuremath{\mathrm{S}}^7$, $\ensuremath{\mathrm{S}}^3 \times \ensuremath{\mathbb{CP}}^3$
\item[(9.4)]$\ensuremath{\mathrm{S}}^5 \times \ensuremath{\mathbb{CP}}^2$
\item[(9.5)]$\ensuremath{\mathrm{S}}^4 \times \ensuremath{\mathrm{S}}^5$
\item[(9.6)]$\ensuremath{\mathrm{S}}^3 \times \ensuremath{\mathrm{S}}^6$
\item[(9.8)] $\ensuremath{\mathrm{S}}^2 \times \ensuremath{\mathrm{S}}^3 \times \ensuremath{\mathrm{S}}^4$
\end{enumerate}
\end{multicols}
\end{prop}
Let $E=\gamma\oplus\varepsilon$ be the complex rank 2 vector bundle over $\ensuremath{\mathbb{CP}}^3 \# \ensuremath{\mathbb{CP}}^3$ which is obtained as the sum of a trivial line bundle $\varepsilon$ and the line bundle $\gamma$ with first Chern class $-(x_1+x_2)$ for generators $x_1$, $x_2$ of $\ensuremath{\mathrm{H}}^2(\ensuremath{\mathbb{CP}}^3\#\ensuremath{\mathbb{CP}}^3)$ coming from the two $\ensuremath{\mathbb{CP}}^3$ summands. Let $M^8=P(E)$ be the projectified bundle. By the Leray-Hirsch theorem, the cohomology ring of $M^8$ is given by
\[\ensuremath{\mathrm{H}}^*(M^8;\ensuremath{\mathbb{Q}})\cong \ensuremath{\mathbb{Q}}[x_1,x_2,y]/(x_1x_2,x_1^3-x_2^3,y^2-x_1 y- x_2 y),\]
where $y$ is of degree 2.
Let $N^9$ be the principal circle bundle over $M^8$ with first Chern class given by $y - 2 x_1$. Using the Serre spectral sequence, we can compute the cohomology ring of $N^9$. We get that $\ensuremath{\mathrm{H}}^{\leq4}(N^9;\ensuremath{\mathbb{Q}})$ is generated by $x_1$ and $x_2$ with relations $x_1x_2=0=x_1^2$.
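To sketch the key step: in degrees at most 4, the Gysin sequence identifies $\ensuremath{\mathrm{H}}^k(N^9;\ensuremath{\mathbb{Q}})$ with the quotient of $\ensuremath{\mathrm{H}}^k(M^8;\ensuremath{\mathbb{Q}})$ by the ideal generated by the Euler class $y-2x_1$. Modulo this ideal $y=2x_1$, so the relation $y^2-x_1y-x_2y=0$ becomes
\[4x_1^2-2x_1^2-2x_1x_2=2x_1^2=0\]
using $x_1x_2=0$, which gives the relation $x_1^2=0$.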
From the construction it is clear that $N^9$ is rationally elliptic. Since the second Betti number $\ensuremath{\mathrm{b}}_2(N^9)=2$, by Lemma~\ref{Lemma9dimensionalExponents} the exponents of $N^9$ are like in case (9.7).
\begin{samepage}
\begin{prop}
A closed, simply connected 9--manifold with exponents like in (9.7) of Lemma~\ref{Lemma9dimensionalExponents} has the rational homotopy type of $N^9$, $X_\sigma \times \ensuremath{\mathrm{S}}^5$ (see Remark~\ref{RemarkDefinitionXsigma}) for some $\sigma \in \ensuremath{\mathbb{Q}}^* / (\ensuremath{\mathbb{Q}}^*)^2$ or $M^6 \times \ensuremath{\mathrm{S}}^3$ for a closed, simply connected, rationally elliptic 6--manifold $M^6$ with $\ensuremath{\mathrm{b}}_2(M^6)=2$.
\end{prop}
\end{samepage}
\begin{proof}
Let $(\Lambda V,d)$ be the minimal model of such a 9--manifold $M$. In particular, $\dim V^2=\dim V^3=2$ and $\dim V^5=1$. If $\ker(d|_{V^3})\neq\{0\}$, then $M\simeq_\ensuremath{\mathbb{Q}} X\times \ensuremath{\mathrm{S}}^3$, where $X$ is of formal dimension 6. Since $X$ is rationally elliptic, $X\simeq_\ensuremath{\mathbb{Q}} M^6$, with $M^6$ like in the statement of the proposition.
If $\ker(d|_{V^3})=\{0\}$, then $\dim\ensuremath{\mathrm{H}}^4(\Lambda V,d)=1$. We can then choose bases $x_1$, $x_2$ of $V^2$ and $y_1$, $y_2$ of $V^3$ such that $dy_1=x_1 x_2$ and $dy_2= x_1^2+ a x_2^2$ for some $a\in \ensuremath{\mathbb{Q}}$. If $a \neq 0$, then $(\Lambda V, d)$ is isomorphic to the minimal model of $X_\sigma \times \ensuremath{\mathrm{S}}^5$ with $\sigma$ the equivalence class of $a$ in $\ensuremath{\mathbb{Q}}^*/(\ensuremath{\mathbb{Q}}^*)^2$.
Suppose now $a=0$. Then, up to isomorphism, we can choose $0\neq z \in V^5$ with $d z = x_2^3$. Therefore $\ensuremath{\mathrm{H}}^{\leq 4}(\Lambda V,d) \cong \ensuremath{\mathrm{H}}^{\leq 4}(N^9)$. Since $N^9$ has the right exponents and the cohomology ring of $N^9$ is not isomorphic to any of those previously calculated, $(\Lambda V,d)$ is the minimal model of $N^9$.
\end{proof}
In the remaining case (9.9) of Lemma~\ref{Lemma9dimensionalExponents} there are products $M_\sigma \times \ensuremath{\mathrm{S}}^2$ and $N^7 \times \ensuremath{\mathrm{S}}^2$ of seven-dimensional manifolds with $\ensuremath{\mathrm{S}}^2$ and products of $\ensuremath{\mathrm{S}}^3$ with closed, simply connected, rationally elliptic 6--manifolds with $\ensuremath{\mathrm{b}}_2=3$. But there are also examples not having the rational homotopy type of a product.
As an example of such a manifold consider the principal $\ensuremath{\mathrm{S}}^1$-bundle $Y$ over $\ensuremath{\mathrm{S}}^2\times\ensuremath{\mathrm{S}}^2\times\ensuremath{\mathrm{S}}^2\times\ensuremath{\mathrm{S}}^2$ with first Chern class $c_1(Y)=x_1+x_2+x_3+x_4$, where the $x_i$ are generators of the integral cohomology rings of the $\ensuremath{\mathrm{S}}^2$ factors. Using the Serre spectral sequence one can compute the cohomology ring of $Y$. In particular, $\ensuremath{\mathrm{H}}^2(Y;\ensuremath{\mathbb{Q}})$ is generated by $[x_1]$, $[x_2]$ and $[x_3]$. The products of these generate $\ensuremath{\mathrm{H}}^4(Y;\ensuremath{\mathbb{Q}})$ subject to relations $[x_i]^2=0=[x_1][x_2]+[x_1][x_3]+[x_2][x_3]$. Now suppose $Y$ is rationally homotopy equivalent to a product. Due to the classification in dimensions 5 and below, it then has the rational homotopy type of a product with $\ensuremath{\mathrm{S}}^2$, $\ensuremath{\mathrm{S}}^3 $ or $\ensuremath{\mathrm{S}}^5$. A product with $\ensuremath{\mathrm{S}}^5$ is not possible, since $\ensuremath{\mathrm{b}}_2(Y)=3$ and $\ensuremath{\mathrm{b}}_2(X)\leq 2$ for a simply connected, rationally elliptic space $X$ of formal dimension 4. As $\ensuremath{\mathrm{b}}_3(Y)=0$, we can also exclude a product with $\ensuremath{\mathrm{S}}^3$. By our classification in dimension 7, the last case is that of a product $M_\sigma \times \ensuremath{\mathrm{S}}^2$ or $N^7 \times \ensuremath{\mathrm{S}}^2$. To exclude this, consider the set of elements of the respective second complex cohomology group with vanishing square. For $M_\sigma^7\times \ensuremath{\mathrm{S}}^2$ this is the union of three one-dimensional subspaces, for $N^7\times \ensuremath{\mathrm{S}}^2$ it is the union of a one and a two-dimensional subspace, while for $Y$ it is the union of the four one-dimensional subspaces generated by $[x_1]$, $[x_2]$, $[x_3]$ and $[x_1]+[x_2]+[x_3]$, respectively.
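Concretely, the Euler class gives $[x_4]=-([x_1]+[x_2]+[x_3])$ in $\ensuremath{\mathrm{H}}^2(Y;\ensuremath{\mathbb{Q}})$, and the degree-4 relation follows from $[x_4]^2=0$ together with $[x_i]^2=0$:
\[0=[x_4]^2=\big([x_1]+[x_2]+[x_3]\big)^2=2\big([x_1][x_2]+[x_1][x_3]+[x_2][x_3]\big).\]
Substituting $[x_2][x_3]=-([x_1][x_2]+[x_1][x_3])$, a general degree-2 class squares as
\[\big(a_1[x_1]+a_2[x_2]+a_3[x_3]\big)^2=2\big((a_1a_2-a_2a_3)[x_1][x_2]+(a_1a_3-a_2a_3)[x_1][x_3]\big),\]
which vanishes precisely when $(a_1,a_2,a_3)$ lies on one of the four lines spanned by $(1,0,0)$, $(0,1,0)$, $(0,0,1)$ and $(1,1,1)$.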
The same argument holds for the family of 9-dimensional biquotients considered by Totaro \cite{Tot}, giving rise to infinitely many rational homotopy types of simply connected, rationally elliptic 9-manifolds with exponents like in (9.9), that do not have the rational homotopy type of a product.
\section{Funding Sources for CNs}\label{sec:funding}
CNs use one or more of the following ways (Table \ref{tab:cn_aspects}) to fund their activities~\cite{NETCOMMONS_D2_4}:
\subsection{Member subscriptions and contributions in kind}
This is the most common funding model for CNs. The members of the CN contribute network equipment and time/effort to the growth and maintenance of the network. In the case of the B4RN network, which provides fibre connectivity, members even contributed digging effort. In most cases, the CN users pay a monthly/annual subscription fee to cover the CN's needs. Several CNs, such as AWMN in Greece, Ninux.net in Italy, B4RN in the UK, and Freifunk.net in Germany, have managed to scale significantly this way.
Despite its simplicity, the model has several variations. Subscriptions may be mandatory or voluntary; or they may serve as a prerequisite for participation in decision-making bodies and voting rights. In the case of Sarantaporo.gr, it is the villages under the network coverage, rather than individual CN users, that are charged a fee. How each village splits the cost among local users is left to the CN participants in that specific village to define.
What the CN users get in return for their subscriptions is closely related to the way the CN organizes itself and positions itself in the telecommunications arena.
For example, B4RN operates as a community benefit society, which provides Internet service to its subscribers. The subscription model is composed of a connectivity fee and different service fees for different types of users. On a similar note, Zenzeleni.net operates as a cooperative telecommunications operator providing voice and data services to its customers. TakNet has developed a social enterprise called Net2Home. Users pay monthly fees that cover fiber (to the network operator), maintenance, equipment installation, technical online support, and network management and monitoring costs. Rhizomatica, which helps communities in Mexico build their networks, receives a flat rate for equipment installation and community member training, as well as a percentage of monthly subscription, advisory, and technical services fees. Finally, though far more rarely, a CN may operate as a for-profit company. Some of the FFDN networks in France are commercial networks that indeed rely on policies such as standard pay-per-use contracts and added-value services to customers outside the CN. However, in contrast with traditional commercial companies that extract profit from customers and locals to reward investors, CNs reinvest the profits in the CPR infrastructure.
\subsection{Donations from supporters}
Community Networks are often financed through crowd-funding projects or direct, regular or one-time, donations.
In some CNs, citizens can invest in the infrastructure, either for a specific purpose, such as crowdfunding the construction or improvement of a critical link that affects the user (typical in guifi.net), or a generic one, through community shares to expand the local network or even the member's home access (in B4RN). These investments can generate tax returns (guifi.net Foundation or B4RN). In B4RN, such an investment also generates 3\% interest after the third year.
In developing areas or in disaster situations, external donors can contribute funds, as in Zenzeleni (ZA), Rhizomatica and Nepal Wireless (NP).
This funding source typically complements other funding sources since it rarely suffices to cover the CN's funding needs.
\subsection{Support from public agencies and institutions} There are cases where CN initiatives have received generous support from public funds (cash or in kind). Municipalities and local authorities emerge as the main actors in this respect. The synergy of commons/public service with civil society/municipality can limit the survival concerns of CNs, provided that sustainable models motivating their cooperation can be found~\cite{RePEc:ehl:lserod:29461}, \cite{powell2006going},\cite{powell2008wifi}.
One such case is the Sarantaporo.gr CN, which set up its first nodes with hardware and equipment received from the Greek Free/Open Source Software Society (GFOSS) and later expanded through the CONFINE project~\cite{braem2013case}, funded by the European Commission.
Likewise, in the case of Freifunk, public authorities expressed their support by making available public buildings, such as churches or town halls, for placing and storing the network's equipment (\eg antennas).
Sometimes, the support may be expressed in more indirect, yet equally significant, ways, such as giving proper attention to CNs in regulatory actions. The guifi.net Foundation has developed a cooperative infrastructure sharing model (the “Universal” deployment model) that builds on Directive 2014/61/CE of the EU on broadband cost reduction\footnote{\url{https://ec.europa.eu/digital-single-market/en/news/factsheets-directive-201461ce-broadband-cost-reduction}} and the infrastructure sharing concept of the ITU\footnote{\url{https://www.itu.int/ITU-D/treg/publications/Trends08_exec_A5-e.pdf}}. The model prescribes how municipalities and counties can regulate the use of public space by private, government and civil society actors in a sustainable manner~\cite{universal17}. Rhizomatica has expressed interest in following a compensation system such as the one used in guifi.net~\cite{NETCOMMONS_D1_3}.
\vspace{-1.2mm}
\begin{figure}[tbhp]
\centering
\includegraphics[width=0.9\linewidth]{figures/funding_entities}
\caption{Radar chart of CN funding entities on a 0--3 scale. \textit{CN funding sources}: 0: mainly private entities' involvement; 1: mainly public agencies' involvement, small-scale member contribution; 2: mainly member contribution (donations, non-regular fees); 3: member contribution only (regular fees).}
\label{fig:chart2}
\end{figure}
\vspace{-1.2mm}
\subsection{Funding from private sector through commons-based policies}
Some CNs, with guifi.net as the prime example, have come up with unique innovative models combining voluntary and professional services into a commons-based approach. Commercial service providers offer services over the CN and charge the CN users as typical customers, but also subsidize the CN growth and maintenance by subscribing to the commons policies. This way, the CN maintains its non-profit orientation and pursues its sustainability through synergies with entities undertaking commercial for-profit activities~\cite{Baig:2016:MCN:2940157.2940163}.
\vspace{0.5mm}
When assessing the strengths and weaknesses of the four categories of funding sources, the following remarks are due:
\begin{itemize}
\item Some sources (\eg donations and voluntary contributions) are one way or another not guaranteed, and they make long-term strategic planning difficult. They could also lead to disagreements and conflicts between CN members concerning their distribution inside the network, especially if there are no well-defined decision-making processes.
\item Unless something changes dramatically on the regulation side, the support of public authorities for CNs cannot be taken for granted. B4RN is one CN that tried to access national funding without success (its bid for the funding was eventually withdrawn). In the case of guifi.net, the municipality of Barcelona is reluctant to provide the CN with access to the city's wi-fi and fiber infrastructure. In general, CNs tend to view access to local, national or European funds as too difficult, demanding, uncertain, and bureaucratic.
\item The dominant view across CN initiatives is that funding from their own resources is the most reliable and favorable option. B4RN and Freifunk, two of the three networks in Europe that have managed to scale to the order of tens of thousands of nodes, have followed this approach.
\item Trying to put commercial service providers in the loop while preserving the CN ideals, as guifi.net does, definitely represents an innovative approach. Its success in the guifi.net network renders it a valid alternative funding model.
\end{itemize}
\vspace{0.3cm}
Interestingly, only guifi.net has so far managed to involve all possible actors (end users/members, the private sector, and public authorities) in its funding model. Striking the right balance between the roles and contribution modes of these three parties may prove to be the key to the economic sustainability of CN initiatives.
Fig. \ref{fig:chart2} summarizes the funding dependencies on different sources for the 15 CNs. Member contributions, public or private institutions, public authorities, contests, and funding projects are present to different degrees within each CN and provide part (or all) of the network's resources.
Some CNs operate through regular economic contributions of their members in commercialized subscription models (B4RN, Zenzeleni.net, FFDN, Rhizomatica), while others adhere to non-regular fees, usually gathered in the form of donations by their members (AWMN, Freifunk, Funkfeuer). In cases where the contributions by a CN's own members are not systematic, public funding (Sarantaporo.gr, TakNET) provides significant aid and private-entity involvement contributes to the economic activity of the CN (i4Free, guifi.net, Wireless Leiden). The involvement of private entities varies: a single person's funding in i4Free, ISPs offering their services and contributing to the ecosystem of the CN in guifi.net, or private and public funding by institutions in Wireless Leiden.
\section{Community networks and the sustainability question}\label{network_sustainability}
The survival of CNs as complex and multi-leveled structures depends on a number of factors. To identify them, it is important to look at the interaction of individual actors, their level of participation, and the exact models and mechanisms followed. While some combinations of efforts lead a CN to great success and expansion of the network, others may lead to failure and depletion. There are clearly no recipes for reaching one end or the other. Hence, it is interesting to identify the different perceptions of moving towards a sustainable CN.
These perceptions have arisen from the conceptual framework of sustainability developed by Fuchs in the netCommons deliverable \cite{NETCOMMONS_D2_1} and from the case studies and interviews with CN participants in \cite{NETCOMMONS_D2_2}, which build on and complement the work in \cite{NETCOMMONS_D2_1}.
\subsection{Sustainability of a CN}
Sustainability is a multifaceted concept used to study a variety of systems, \eg biological, socio-cultural, and technical. Its actual definition is established in accordance with the system of interest. In general, the great sustainability challenge revolves around identifying and understanding the way a system can function in the present and endure in the future, \ie develop in a sustainable way. Sustainability is not a specific goal per se but a continuous process towards a goal. Its first acknowledged aspect refers to environmental sustainability, defined as a process of sustainable development able to satisfy current human needs without compromising the fulfillment of the needs of future generations (United Nations World Commission on Environment and Development (WCED), 1987). In recent years, the definition has developed to include three pillars of sustainability: environmental, social, and economic (World Summit on Social Development, 2005).
In the context of a CN, sustainability is seen as a multi-dimensional concept with \textit{economic, political and socio-cultural} aspects \cite{NETCOMMONS_D2_1}, as for any other community of people.
\subsection{Perceptions and common themes}
Following the empirical research in \cite{NETCOMMONS_D2_2}, the perceived dimensions show that a) \textit{economic sustainability} concerns funding and resourcing, the size of the community, and time; b) the understanding of \textit{political sustainability} relates to organizational and legal issues, as well as open structures; and c) \textit{cultural sustainability} contemplates the sense of belonging, community identity and spirit, and communitarian practices.
The different dimensions of sustainability are analyzed in detail below.
\subsubsection{Economic sustainability}
It is primarily based on the personal efforts of the CN's participants. The results of the comparative research showed that in a number of cases (Consume, Free2Air, i4free) personal efforts by a small number of people or even a single individual were instrumental for the deployment and take off of the network. However, there is a range of models that diverse actors understand as sustainable.
\begin{itemize}
\item \textbf{Requirements:}
Certain conditions have been identified as crucial for the development of a sustainable network. Firstly, the size of the community of the network plays a significant role in the economic sustainability of the initiative, in the sense of providing a critical mass of users/members who can either provide subscriptions or contribute their resources. Secondly, time is another valuable resource. Such CN
initiatives depend by definition on volunteers who are prepared to give up their time and contribute
to various aspects and stages of the process. Thirdly, the access to funding is crucial for the network's economic sustainability.
\item \textbf{Non profit vs for profit models:}
Considering the \textit{profitable character} of the network, there are two different economic models. On the one hand, the sustainability model has to do with maintaining an alternative character and bypassing commercial interests, particularly those of the big providers (e.g. Consume, Free2Air, B4RN, i4free). This model perceives community networks as grounded on commons, communitarian practices and non-profit initiatives. On the other hand, sustainability is simply a matter of providing a good service, and this can be realised better through commercial arrangements, albeit with small commercial providers rather than the big telcos (e.g. Kinmuck). This conception is based on for-profit provision, but by small- and medium-sized enterprises. The main economic contradiction underlying community network models appears to be one between community economy and commercial for-profit provision.
The provision of Internet connectivity by community networks is significantly cheaper, and often of better quality and faster, than commercial alternatives, where they exist. Subscriptions are then relatively low and do not jeopardise affordability. It is important to note here that, in sharp contrast to commercial for-profit provision, the non-profit character of community networks contributes to their sustainability in various ways. First, subscriptions are affordable. This creates a virtuous circle: the more affordable access to the community network is, the more users will join; the more users join, the larger the pool that contributes to the financing of the network, therefore helping to keep subscriptions affordable. Second, this non-profit funding model of community networks brings the community closer together, and this in turn increases the commitment of the community to the network. Rather than (higher) network subscriptions creating profits which then go primarily into benefiting the shareholders of commercial for-profit providers, the link between financial (and other, for instance time) contributions by community members and the development of the community network is more direct and visible to the community in question.
The non-profit character of community networks improves their economic sustainability and strengthens community ties and commitment to the initiative.
It is also worth stressing that many communities value inclusiveness more than profitability. Some communities have ways to provide connection even to those who could not afford it (e.g. B4RN). Equally, fairness is another aspect that some pricing policies reflected as is the case of B4RN where there are different pricing schemes for residential users, small enterprises and bigger businesses, the latter two considered as heavier Internet users.
\item \textbf{External vs internal funding models:}
Economic perceptions in terms of \textit{funding practices} can also be distinguished. As far as funding is concerned, there are two possible ways of income a) receive funding resources from entities external to the network or b) from the network's own resources.
There are examples of communities that tried to access local or national funding (in the case of B4RN, their bid was subsequently withdrawn and put into a larger regional pool). However, it is suggested in \cite{NETCOMMONS_D2_2} that financial sustainability requires a subscription model and the funding of the network from community-owned resources. It is not simply that some community networks found potential access to local, national or European funds too difficult and demanding, often not tailored to such initiatives, uncertain, and too bureaucratic; it is the belief that the funding of a community network from its own resources is more reliable and sustainable (e.g. B4RN), as it precludes dependence on external funding, which might not materialise or, given its fixed term, might put at risk the subsequent development and maintenance of the network when it ends. Internal funding from own resources, where possible, is preferred within the CNs.
External funding sources may include~\cite{NETCOMMONS_D2_4}:
\begin{enumerate}[label=\alph*)]
\item \textit{Donations from external supporters:} Community Networks are often financed through crowd-funding projects or direct, regular or one-time, donations (\ie Freifunk.net).
\item \textit{Funding from institutions:} CNs may receive hardware and equipment from bodies such as the Greek Free/Open Source Software Society (GFOSS), as well as funding by national or European projects (\eg Sarantaporo.gr).
\item \textit{Access to public infrastructure:} Being non-profit, official or unofficial organizations, CNs are often granted access to public infrastructure on favorable terms (\eg Sarantaporo.gr). Public buildings can be used to place and store the network's equipment. However, there are also cases in which CNs are treated unfairly in comparison with traditional providers (\eg guifi.net).
\item \textit{Market-based policies:}
It is often the case that economic sustainability is linked to for-profit activities, and there are commercial networks that indeed rely on such policies, ranging from standard pay-per-use contracts, as in the case of some of the FFDN networks, to advertising and other added-value services offered to external customers. When the policies reach out to external customers, this type of funding can be considered external.
\item \textit{Partnerships with local authorities:}
A good model for community networks seems to be a hybrid co-operation of commons/public service with civil society/municipality, in which civil society projects co-operate with municipalities \cite{RePEc:ehl:lserod:29461}, \cite{powell2006going},\cite{powell2008wifi}. The survival problems of alternative media may thus be limited if one can find models in which municipal power and civil society commons are not seen as opposing each other, but act cooperatively as antidotes to the mainstream and profit-making media outlets.
The guifi.net Foundation has developed a cooperative infrastructure sharing model (the “Universal” deployment model) that builds on Directive 2014/61/CE of the EU on broadband cost reduction\footnote{\url{https://ec.europa.eu/digital-single-market/en/news/factsheets-directive-201461ce-broadband-cost-reduction}} and the infrastructure sharing concept of the ITU\footnote{\url{https://www.itu.int/ITU-D/treg/publications/Trends08_exec_A5-e.pdf}} to specify how municipalities and counties can regulate the use of public space by private, government and civil society actors in a sustainable manner. The proposal is written as a template municipal ordinance to be used by municipalities to regulate this complex aspect correctly and fairly and, therefore, reduce the barriers created by uncertainty around topics that usually escape the knowledge of municipal governments.
\end{enumerate}
Moreover, internal resources commonly used in CNs, may refer to
\begin{enumerate}[label=\alph*)]
\item \textit{Voluntary contributions from members:} This is the most common source of contributions, based on which many existing Community Networks have managed to thrive, like AWMN in Greece, Ninux.net in Italy, wlan slovenija, and Freifunk.net in Germany. These networks reach a significant scale relying mostly on their own members' investments in time, effort, and equipment. To achieve this level of participation (\eg Freifunk), these networks usually develop strong feelings of community and intrinsic motivations related to the commitment toward a more democratic, private, secure, and neutral Internet offering affordable access to all.
\item \textit{Commons-based policies:}
Some of the sources of financial support (\eg donations, voluntary contributions) are one way or another not guaranteed, and they make long-term strategic planning difficult. They could also lead to disagreements and conflicts between members over their distribution inside the network, especially if there are no well-defined decision-making processes.
When the infrastructure costs are high, it is often the case that CNs are organized around formal
organizations such as NPOs and broadband co-operatives in which large numbers of local community members join and pay membership fees in order to set up the network (Dutch networks, B4RN).
They can also develop unique models combining voluntary and professional services in a commons-based approach. As we have seen in section II, guifi.net has focused on building and managing network infrastructure in a way that is sustainable, fair and non-profit, based on a novel compensation scheme \cite{Baig:2016:MCN:2940157.2940163}.
\item \textit{Market-based policies:} In this case the commercial activities are developed for network users only and they do not target external customers.
\item \textit{Incentive mechanisms:} Another strategy for ensuring the economic sustainability of a Community Network is making it attractive in the first place. The description of the vision and key principles, the public image, the availability (or not) of Internet connectivity, the existence of local services of good quality, and the participation rules are some of the important aspects that motivate people to join a CN, either as contributing members or simply as users.
\end{enumerate}
\end{itemize}
\vspace{3mm}
\subsubsection{Socio-cultural sustainability} It is predominantly understood as a spirit of social cohesion and common identity or, at least, as a spirit of sharing common resources.
\begin{itemize}
\item \textbf{Requirements:} The socio-cultural sustainability of a CN requires the satisfaction of certain conditions within the community.
The presence of a strong relatively closely-knit community is a necessary but not sufficient condition for the take-off and subsequent success of community network initiatives. In turn, through the process of setting up and running the network the community is brought closer together and in this sense community ties are further strengthened, whilst Internet connectivity brings the community closer to other parts of the society. In short, social cohesion is therefore enhanced.
Good community relations also matter. Knowing the people in the community well can help resolve disagreements, settle conflicts and soften intransigence (e.g. B4RN). Conversely, mistrust in the community adversely impacts the sustainability of the network (e.g. i4free).
The community needs to have identified specific needs that the initiative will address. To start with, a basic need is Internet connectivity and possibly at a later stage specific community-services. Put differently, there has to be demand in the community.
Moreover, the community has to be resourceful. Various resources are essential, including a community leader who can rally the community behind the project and technology leaders who will design the network. Additional skills are desirable, such as people with a good knowledge of accounting and law. The more people are trained and willing to take on leadership roles, the better for the network. This counters problems arising when some leaders have to move on and can no longer commit to the same degree.
Experimentation and knowledge play a central role in many of the CNs examined. Very often the technology leader, at least, is in regular contact with leaders of similar initiatives underway in other communities. Knowledge exchange and sharing of experiences can contribute to the sustainability of the network. Many actors attach great importance to involving the young generation and motivating them to build skills and transfer them in building other community networks.
While the presence of relevant, mostly technical, knowledge is an important precondition for the initiation of a community network, not all CNs are confined to so-called geek-publics. Although some might have started as communities of geeks (e.g. Consume), all the networks studied here strove to reach out to the general public of non-geeks and non-techies. Still, the passing down of technical knowledge, the sharing of experiences and the support of other similar initiatives, as well as the acquisition of new technical knowledge and capabilities, are all key elements for the sustainability of community networks.
\item \textbf{Strong vs weak commitment models:}
Depending on whether the aforementioned conditions are satisfied or not, the following socio-cultural sustainability models can be identified.
Communitarian practices and philosophical concerns for the commons in general are often additional features of the communities (e.g. Digcoop, Consume, Free2Air). Commitment, solidarity and trust are key ingredients of a socio-cultural sustainability model, and their absence is detrimental to it. However, within a CN one can also find people who do not abide by these principles. Network members who prioritize and value individual profit over solidarity and community spirit are present in many cases. They take the role of mere consumers of resources and connectivity and contribute poorly or not at all.
The problem identified, namely free-riding in peer initiatives and their reliance on the efforts of the very few for the benefit of the many, is a socio-cultural contradiction at the heart of community networks.
\end{itemize}
\vspace{3mm}
\subsubsection{Political sustainability} The ideas of empowerment, active ownership of resources and data, and control of one's own communication needs, often linked with the idea of bypassing dominant commercial providers, form the basis of the CN structure. CNs offer an open structure, whose management, operation, room for experimentation and lack of censorship are hugely important.
\begin{itemize}
\item \textbf{Requirements:} Organisational aspects are a locus of cultural contradictions in community networks, as in some cases they can operate against sustainability. A suitable organisational form, one that addresses economic aspects while also being democratic and geared towards serving community interests, is thus perceived as essential for sustainability.
Legal aspects can also threaten sustainability. Sarantaporo, for instance, which operates a non-profit community network, possibly faces legal challenges under the Greek regulatory framework, which does not distinguish between the regulation of for-profit and non-profit network providers.
Moreover, CNs need to keep pace with current technological developments. Technological changes at the level of applications might render the choices of the community network obsolete. The proliferation of sophisticated applications on the Internet, and the lower cost of hosting such applications in the cloud as opposed to on local servers, create further tensions for community networks. CNs have to stay up-to-date with the current and future technological needs of their members while maintaining their unique characteristics (\ie their local character).
\item \textbf{Privacy-aware vs privacy-unaware models:}
The contradiction here is one between the potentially more open and privacy-friendly character of community networks and the closed environment of commercial for-profit networks, where surveillance practices are deeply entrenched for commercial and political reasons. Awareness around privacy has increased in recent years, in particular following the Snowden revelations, and the open, privacy-respecting alternative that community networks promise might well increase their attractiveness and contribute to their growth and expansion. The cases examined have generally provided opportunities for active user involvement in managing their data and have kept central user data storage and control to a minimum.
However, the contradiction between data control/privacy and data ownership by big corporations, such as Facebook, remains in cases where community networks are used only for free or low-cost Internet access to these large corporate platforms.
\end{itemize}
\section{CN stakeholders and the sustainability question}\label{sec:CN_sustain_incentives}
Sustainability is a multifaceted concept used to study a variety of systems, including technical, biological and socio-cultural ones. Its precise definition depends on the system of interest. In general, the sustainability challenge consists in understanding how a system can operate smoothly in the present and develop in the future. Hence, sustainability is not a specific goal per se but a continuous process towards a goal.
Although the term was originally used in an environmental context (United Nations World Commission on Environment and Development (WCED), 1987), it has more recently acquired broader social and economic semantics (World Summit on Social Development, 2005).
Equally broad is the context of sustainability in the case of community networks, which are by definition complex socio-technical systems. Contrary to commercial communication networks, their very existence is conditioned on the sustained and active participation of all their stakeholders, who contribute resources and generate value for the network. Therefore, a sustainable network should first of all ensure that all these actors, primarily end users, but also commercial service providers and public organizations when present, have proper commitments and incentives to contribute to the network. This is not a trivial task, since the participation of each actor is driven by different types of motives and aspirations, including economic, socio-cultural, and political ones. Hence, the network needs to put in place rules, limits and incentive mechanisms that properly address these aspirations, as in any commons regime~\cite{Ostrom1990}.
The success of a CN in attracting a critical mass of actors also determines its funding alternatives. A sustainable funding model, which ensures the network's capability to cover its deployment and maintenance expenses, is a crucial parameter for its long-term viability.
We review the practices of different CNs with respect to funding in section \ref{sec:funding}. In the remainder of this section, we describe the broadly varying motives met across and within the different actors in a CN. Then, in section \ref{mechanisms}, we describe how different CNs respond to these motives.
\begin{figure}[tbhp]
\centering
\includegraphics[width=1\linewidth]{figures/LEGAL_FORM_CHART}
\caption{Radar chart of the legal forms found in CNs. 0: None, 1: organization (NPO, Foundation), 2: social entrepreneur, 3: operator, ISP.}
\label{fig:legal}
\end{figure}
\vspace{1mm}
\subsection{Volunteers}\label{subsec:volunteer}
In the context of CNs, volunteers are the people who initiate the CN project. More often than not, (a subset of) these people take an active role in the network expansion, helping with technical matters and/or organizing informational and training events for potential participants~\cite{DBLP:journals/corr/abs-1207-1031}.
The volunteer groups usually comprise people who cumulatively possess knowledge and expertise over a wide set of areas, including technical, legal, and financial matters~\cite{aichele2006wireless}:
technology enthusiasts, radio amateurs, hackers, (social media) activists, and academics. It is not uncommon for volunteers to create a legal entity (Fig. \ref{fig:legal}) to represent the network to third parties (\ie government, third party organizations, companies, Internet Service Providers (ISPs)). This lets them have a voice and interface with third parties on legal and regulatory matters, but also get involved in financial transactions (\eg collecting user subscriptions, fund raising, purchase of equipment).
Their motives have a strong bias towards political and socio-cultural values and ideals, one not met in any of the other three stakeholder groups. Experimentation with technology, open software and do-it-yourself (DIY) tools, sensitivity to privacy and network neutrality, the desire to bridge the digital divide, but also commitment to the community spirit and social movement, participatory governance and decision-making, and protection of consumers' rights count as primary reasons for their involvement in CN initiatives. Economic motivations are much rarer; on the contrary, the members of volunteer groups usually end up investing a lot of personal effort, time, and money in the CN initiative, without any kind of direct financial return.
More specifically:
\begin{comment}
The incentives of the volunteer groups are not necessarily static throughout the lifetime of the CN initiatives. There are instances where these have evolved over time, adapting to the group membership (\eg members joining or leaving the group), new technologies that were made available over time, and the evolution of the surrounding legal/regulatory environment.
\end{comment}
\subsubsection{Socio-Economic motives}
\label{subsec:vol_economic}
Socio-economic motives often stand behind the original conception and deployment of CNs.
\textbf{Bridging the digital divide:} The right to (broadband) connectivity is a matter of equal opportunities in the contemporary digital society, and digital illiteracy puts populations deprived of it at a disadvantage. The launch of CN initiatives has many times been the response to poor or non-existent access to the Internet and Information and Communication Technology (ICT) services. This is typically the case with remote, sparsely populated rural areas, where commercial operators are reluctant to invest in fixed broadband infrastructure because they do not deem it cost-efficient.
The initial volunteers' group typically comes from local residents suffering the digital divide (as is the case with the B4RN~\cite{NETCOMMONS_D2_2} and guifi networks~\cite{NETCOMMONS_D1_2}). However, help may also come from outside. The Sarantaporo.gr network, in Greece, came out of the efforts of a small group of people living in Athens and abroad, with origins in the Sarantaporo area, at a time when no broadband access alternative was available there.
Likewise, the i4Free network, on an island with poor Internet connectivity close to the town of Nafpaktos, in Greece, started from the initiative of a German engineer and professor. He created a small network at his own expense so that locals could have access to ICT services~\cite{NETCOMMONS_D1_2, NETCOMMONS_D2_2, NETCOMMONS_D2_3}. In a similar scenario, Peter Bloom, an American national, founded Rhizomatica to promote mobile-phone based services in the rural area of Oaxaca, Mexico~\cite{NETCOMMONS_D1_3}.
\textbf{Economic incentives:} These are relevant whenever a CN is set up in pursuit of cheaper (affordable) Internet access. In such cases, the underlying aim is to expand the coverage of the service, ensure its sustainability and independence from commercial decisions, and save money compared to commercial alternatives, rather than to make money out of the CN initiative.
In remote, sparsely populated areas, such as the rural areas addressed by the B4RN initiative, competing alternatives such as satellite or cellular, where they exist, are typically more expensive and of lower quality. B4RN was thus also conceived as a way to offer better connections at more affordable prices than its competitors in areas where such alternatives exist.
Another case, this time in urban environment, is the Consume network in East London, UK, one of the very first CN initiatives in Europe. James Stevens ran a technology incubation business offering web, live streaming and video distribution services through a leased optic fiber connection. He came up with the idea to connect buildings through wireless mesh links as a way to bypass the expensive license costs and regulatory constraints related to expanding the fiber communication across the buildings.
\subsubsection{Political motives}\label{subsec:vol_political}
Political causes often serve as driving forces for the groups that lead CN initiatives. Such causes often prove strong enough to fuel these groups' active involvement with the CN despite the effort, time and money this requires. They include:
\textbf{Openness, net neutrality, and privacy:}
These highly controversial issues have served as primary motivations for CN initiatives. The principle of net neutrality dictates that traffic within the network be treated equally, independently of the type of content or its source; the data communicated across the network is not subject to discrimination.
A characteristic example of the principles underlying CN initiatives is found in the declaration by the guifi.net Foundation, the volunteers' group that has developed and still operates the guifi CN in Catalonia, Spain~\cite{Perez2016, barcelo2014bottom, 5514608}:
\begin{itemize}
\item Freedom to use the network, as long as the other users, the contents, and the network itself are respected.
\item Freedom to learn the working details of network elements and the network as a whole.
\item Freedom to disseminate the knowledge and the spirit of the network.
\item Freedom to offer services and contents.
\end{itemize}
Moreover, volunteers are often interested in accessing ICT services without having to compromise their privacy. This applies to technology enthusiasts, activists and users in general who wish to protect their private content.
CNs such as the French FFDN and the German Freifunk declare privacy/anonymity and net neutrality as integral parts of their manifesto and incorporate them in their fundamental operation principles.
\textbf{Autonomy and alternative communication models:}
These are common motives for the original deployment and subsequent operation of CNs~\cite{lawrence2007wireless}, especially in urban areas, where the digital divide threat is much less pronounced. Community networks such as Consume\footnote{\url{http://consume.net/}}\footnote{\url{http://wiki.p2pfoundation.net/Consume}} and Free2Air\footnote{\url{http://www.free2air.org/}}\footnote{\url{http://wiki.p2pfoundation.net/Free2Air}} started out as alternative approaches to commercial Internet provision, aiming at higher freedom and control over personal communications. In other cases, such as guifi.net, which started as an attempt to bridge the digital divide, such political purposes emerged as an equally strong motivating factor, especially as the number of network connectivity alternatives increased. The goals of Rhizomatica's founder were both to bridge the digital divide and to create an alternative telecommunications network where people could communicate at costs much lower than those of the existing telecom solutions in the area (where they existed).
\begin{comment}
\begin{itemize}
\item \textbf{Consume.} This CN was one of the first ones to be conceived and deployed in Europe. Its development was led by James Stevens and Julian Priest and a number of people that were organized around them. Although the original motivation was to save Internet access fees for conducting business, the initiative evolved to an attempt to ``short-circuit" what, by that time, has become the {\it ``anti-competitive telecommunications market model''}~\cite{NETCOMMONS_D2_2}.
\item \textbf{Free2Air.} This initiative was initiated in East London as an alternative network to the commercial Internet provision. The initiative was run by a small number of artists and a number of other individuals, a central figure being Adam Burns, an IT security expert. Himself together with a few others set up the network addressing the main technical tasks such as network routing, planning, and other tasks. Burns describes Free2Air as a largely political project, attempting to put into practice ideas about control and ownership of personal communication. He recalls that the one of the two main motivations for starting the network was exactly to try de-mediating the personal communication and getting more control over the communication needs~\cite{NETCOMMONS_D2_2}. Burns himself was involved in significant political activities participating in debates on the idea of commons and what this implies for governance, legal and policy issues, but also the alternative organization and autonomy of communication.
\item \textbf{guifi.net.} Guifi CN has formalized its alternative approach to network operation and management in the context of the economic theory of Commons. The guifi.net foundation promotes the view of their CN as CPR and apply principles of CPR management, as set by the Nobelist economist Elinor Ostrom~\cite{Ostrom1990}, to their CN management.
\end{itemize}
\end{comment}
\subsubsection{Socio-cultural motives}
Socio-cultural motives often stand behind the original conception and deployment of CNs. The main ones include:
\textbf{Experimentation with technology and DIY culture:} Several initiatives are driven by hackers, technology enthusiasts, and academics who enjoy experimenting with network and radio technologies. Involvement in such a community presents them with a unique opportunity to further enhance their technical knowledge and practice it over real networks.
The AWMN, Ninux, and Freifunk CNs were initiated and are still run by network technicians and computer enthusiasts. As such, they have been characterized by a culture of experimentation and improvisation. AWMN and Ninux, in particular, are used by their volunteers as testbeds for manufacturing equipment (antennas, feeders) and experimenting with routing protocols and applications. This is evidenced in the impressive number of native applications and services that were developed for AWMN, without need for public Internet connectivity, including games, libraries, network monitoring tools, DNS solutions, and experimental platforms. Notably, neither AWMN nor Ninux (whose name stands for ``No Internet, Network Under eXperiment'') nominally provides Internet access.
\textbf{Community spirit and altruism:}
Altruism, often coupled with a strong commitment to community ideals, serves as an important motivation for the active involvement of volunteer groups in CNs. Both are strongly evidenced in the B4RN, Sarantaporo.gr and i4Free CN initiatives.
Community activists have been among the leading figures in B4RN and have set it up as a community benefit society which ``can never be bought by a commercial operator and its profits can only be distributed to the community.''
Likewise, the Sarantaporo.gr non-profit organization involves people who are activists in the area of the commons and supporters of community ideals. They place a lot of emphasis on cultivating these ideals among the residents of the area through parallel activities and social events.
Finally, the leading figure behind the i4Free CN identifies himself as a keen supporter of community life and ideals. He has spent enormous amounts of time trying to build a community around the CN through training and educational events, though without much success, as he admits~\cite{NETCOMMONS_D2_2}.
\subsection{Active participants}
Even broader is the variety of reasons for the involvement of citizens in a community network. Decisive for many of them is the expectation of abundant local connectivity wherever it is needed. Further expectations include cheaper, or even free, Internet access and other services provided by commercial entities. For others, the CN represents a perfect opportunity to acquire new knowledge and experiment with technologies, and/or to socialize and become part of a bigger community. Activism in favor of higher autonomy and data privacy is also evidenced as a participation motive, albeit to a smaller extent than in volunteer groups.
Their levels of participation typically vary a lot within a CN. Some are highly active, participating in events organized by volunteers or in other collective activities, sharing their technical experience, developing applications and devoting personal time and effort to the CN. At the other extreme, a number of users, who tend to be the majority in most CNs, set up a node and use the CN to get Internet access or access to local services without further contributing to the activities of the community.
However, even these passive users can benefit the network to the extent that others can join the CN through their nodes.
CN users may or may not pay a \emph{connectivity fee} for being part of the CN. These fees help cover the costs of upgrading and maintaining the CN infrastructure. Depending on whether they receive some service over the CN, users may additionally pay a \emph{consumption fee} and maintain a contributor, shareholder or customer relationship with the CN, directly or indirectly, through paying service fees to a commercial service provider acting as intermediary and value-added reseller.
\subsubsection{Socio-Economic motives}
Users often expect benefits of an economic nature, both direct and indirect, from their participation in a CN.
\textbf{Direct economic benefits:} The most usual one is local connectivity or Internet access that is not offered by other providers, or at lower cost than alternative solutions, offered by commercial telecom operators (Table \ref{tab:CN_list}). A characteristic example, Rhizomatica, has managed to reduce costs by 98\% on international (U.S.) calls and 66\% on cellphone calls. Internet connectivity can either be provided by the CNs themselves, which take on the role of alternative Internet service providers (\eg B4RN, Sarantaporo.gr); as an add-on service over the CN by a third party (\eg guifi.net, Rhizomatica); or by CN members who pro bono share their access with other peers (\eg AWMN).
The collective efforts of the CN participants are often fundamental for expanding the coverage of the network or lowering the connectivity cost. For instance, B4RN partially crowdsources the cost and effort involved in deploying fiber in rural communities in Northern England. This way, it can offer fiber connectivity and high Internet speeds in underserved areas at more favorable prices than alternative commercial solutions.
A local infrastructure, locally maintained, feeds the local economy, creating paid jobs for the deployment, maintenance, expansion and operation of the network itself and related services over the network (content and services) or enabled by the network (telework, remote assistance, surveillance, sensing).
CNs create the opportunity for local investment. Locals can obtain economic benefits from investing in local infrastructures, particularly the more durable fibre infrastructures, which can yield good returns on investment from usage fees, while also bringing indirect economic benefits by increasing the value of homes, typically the largest investment of a family.
\textbf{Indirect economic benefits:}
Participation in a CN may bring additional benefits to its users.
One of them relates to the growth of human capital, and another to the added value that the CN generates for the businesses and professionals participating in it. Examples from Sarantaporo.gr and AWMN show that young people (aged 18-35) view the CNs as a path to information about job and further-education opportunities and to business activities developed around the CN~\cite{NETCOMMONS_D2_2}. Moreover, in remote rural areas, network access and Internet connectivity can enable professionals to search for better markets for their products and cheaper suppliers for their materials (\eg farmers), and small business owners to join the network in the anticipation that visitors appreciate Internet connectivity when choosing where to go (\eg Sarantaporo.gr).
Underserved communities in terms of connectivity tend to suffer from fragility or lack of other critical infrastructures. The deployment of networking infrastructures creates economies of sharing and bundling, such as improvements in electrification through the introduction of solar panels, which can for instance enable or improve the quality of night-time lighting and food preservation, in turn creating economic benefits from the trading of these products.
\subsubsection{Political motives}
As seen in section \ref{subsec:vol_political}, many CNs have been initiated under aspirations of privacy, net neutrality, and alternative models of Internet connectivity provision with strong flavor of autonomy and self-organization.
The ideals underlying the initial development of these CNs are often inherited by subsequent users of the CN. However, these users tend to be a small part of the total CN user population. Typically, the larger the CN grows the harder it becomes to find political causes that unite the whole community behind them.
\textbf{Openness, net neutrality and privacy:}\label{privacy_users}
The aspects of privacy and neutrality have a strong role in CNs that utilize the Picopeering agreement\footnote{\url{http://www.picopeer.net/PPA-en.shtml}} as a participation/operations framework and are part of the movement for open wireless radio networks\footnote{\url{https://openwireless.org/}.} (\eg Freifunk, guifi.net, Ninux, FFDN).
The Picopeering agreement is a baseline template formalizing the interaction between two network peers. It caters for a) an agreement on the free exchange of data; b) an agreement on the provision of open communication by publishing relevant peering information; c) no service-level guarantees; d) users' formulation of use policies; and e) local amendments depending on the will of node owners.
\textbf{Autonomy and self-organization:}
Participation in CN groups cultivates feelings of autonomy and self-organization. Self-organization is practised in the way new users connect to the CN, where they have to rely on their own resources and the voluntary assistance of experienced network members. Being part of an independent network satisfies personal ideological aspirations for a self-organized network and autonomous use~\cite{lawrence2007wireless}. The ability to participate in collective decision making and contribute to an alternative ``commons''-based model of ICT access is itself a worthwhile experience for users with strong ``commons'' ideals.
\subsubsection{Socio-cultural motives}
A CN is a characteristic example of participatory involvement, where users dedicate their efforts and time to the network~\cite{Vega2014}. A number of services and applications, combined with other activities that one way or another revolve around the CN, offer users the opportunity to communicate, educate and entertain themselves, thus further motivating their participation in the network~\cite{szabo2007wireless, pedraza2013community}. \vspace{-3mm}
{ \renewcommand{\arraystretch}{1.2}
\begin{center}
\begin{table*}[t]
\centering
\resizebox{14.5cm}{!} {
\begin{tabular}{|p{2.1cm}|p{4.2cm}|p{5cm}|}
\hline
\textbf{CN} & \textbf{Legal form} & \textbf{Funding} \\
\hline
AWMN & AWMN Foundation & Members (individually) \\
\hline
B4RN & Community Benefit Society & Members \\
\hline
Consume & None & Central actors\\
\hline
FFDN & Non-Profit Organization
& Members, Local authorities, Donations \\
\hline
Free2Air
& Incorporated Legal Company & Members \\
\hline
Freifunk & Non-Profit Organization & Members, Public Institutions
\\
\hline
Funkfeuer & None & Members
\\
\hline
guifi.net & Guifi.net Foundation & Members \\
\hline
i4Free & None & Central actor\\
\hline
Ninux & None & Members \\
\hline
\multirow{2}{*}{Rhizomatica} & \multirow{2}{*}{Non-Profit Organization} & Members, National and International organizations, Donations \\
\hline
Sarantaporo.gr & Non-Profit Organization & Members, European Union, Donations \\
\hline
\multirow{2}{*}{TakNET} & \multirow{2}{*}{Social enterprise - Net2home} & Members, Private Institutions, THNIC Foundation, European Union \\
\hline
Wireless Leiden & Non-Profit Organization & Members, Public/Private Institutions \\
\hline
Zenzeleni.net & Formal Network/Telecom Operator & Members, Public Institutions \\
\hline
\end{tabular}
}
\caption{CN-specific organizational aspects.}
\label{tab:cn_aspects}
\end{table*}
\end{center}
}
\vspace{-5mm}
\textbf{Experimentation and training with ICT:} Technology enthusiasts participate in the network to experiment with the technology, \ie trying software they develop and hacked code, making network speed measurements, and playing with network mapping and management tools~\cite{lawrence2007wireless}. Users can acquire new skills in computer and network use, either through self-experimentation or through training by network experts.
In CNs initiated by volunteers with a technical background (\eg AWMN, Ninux, Freifunk), the number and variety of services, applications and self-produced software have grown greatly within the community. Besides a variety of network monitoring tools, users can enjoy communication services such as VoIP, online forums, mail, and instant messaging; data exchange services with servers, community clouds and file sharing systems; entertainment services with gaming applications and audio/video broadcasting tools; and information and education services with online seminars, e-learning platforms, and wikis.
\textbf{Desire for social interaction:} The smooth operation and development of a CN demand cooperation not only at the network infrastructure level but also at the social level.
In CNs, participants are able to share their ideas and interests, participate in groups, and interact and communicate with other network members just as they would in any other online or physical community. Social networking and communication tools raise great interest and remain active even when the utilization of other tools and services drops.
The importance of local relationships in a CN~\cite{kornhybrid} is also evidenced in three independent studies addressing a rural village in Zambia~\cite{Johnson:2010:IUP:1836001.1836008}, the TakNet CN, in the rural area of northern Thailand~\cite{Lertsinsrubtavee:2015:UIU:2837030.2837033}, as well as Australian and Greek CNs in~\cite{lawrence2007wireless}. In this last study, 91.2\% of the users stated that they enjoyed interacting with the community, 88\% felt that their efforts would be returned by other community members and 80.5\% expressed that the community allowed them to work with people that they could trust and share similar interests. Likewise, in the case of TakNet,
much of the activity among users of the popular applications such as messaging, email, online social networks and gaming, exhibits a high degree of locality, \ie people use Internet to interact with people within the same CN.
\textbf{Socio-psychological motives:}
These motives include socially-aware mechanisms that relate to concepts such as visibility, acknowledgment, social approval, individual privileges and status. Such social activity unfolds within the networks' technical limits~\cite{Mcdonald02socialissues}.
The ability to compete with other people and satisfy one's self-esteem through involvement in the community, or to receive a certain type of credit from others in the community, are motives that are harder to distinguish but still present~\cite{lawrence2007wireless, Lertsinsrubtavee:2015:UIU:2837030.2837033, Johnson:2010:IUP:1836001.1836008, 6673334}, and they have an impact on network growth and operation~\cite{6979946, 4124126}.
\subsection{Private sector service providers}
Private sector service providers form the stakeholder type that is least often involved in CN initiatives. The term refers to companies, ISPs, small businesses or individuals, namely entities that support or use the network to provide some service and get compensated for it.
These can be a) the professionals that are involved in the installation, operation and maintenance of the CPR network infrastructure, or b) the organizations that provide content or services inside the CN.
At first glance, these entities do over the CN what they do over any other network, \ie provide services where there is demand for them. However, the legal provisions and conditions of running a business over the CN may be different, given the existence of a CPR infrastructure and its governance and crowdsourced nature.
In fact, the CPR is an enabler of small private sector providers. Since the network commons is a shared resource, it enables these small players to operate and provide services over a larger population, with the economies of scale of cooperative aggregation of CAPEX and OPEX among multiple participants, and with the complementarity and opportunities for specialization among them. This also means a lower barrier to entry, with much less initial investment and less risk, thanks to the cooperative, cost-oriented model of the network commons. The network infrastructure commons thus becomes a critical resource for the operation and competitiveness of these local private sector service providers, whose common goal is preserving the commons that enables their specific business models.
The incentives for the participation of private sector service providers in the network are almost always economic.
These actors are interested in profit. The CN provides them with access to potential customers who would otherwise be unreachable. The implementation of their commercial activities depends on the organizational nature of the CN. Guifi.net has set up a framework that enables the participation of the private sector in its CN, including maintainers, installers, ISP providers and VoIP providers (Table \ref{tab:guifi_prof}). These entities may sign agreements with the guifi.net Foundation when the service provision has to do with the sustainability of the CPR. External Over-The-Top (OTT) services, such as Internet VoIP, video and content providers, are left outside this framework. In Rhizomatica, ISPs and VoIP providers are key partners of the organization. Rhizomatica provides the Radio Access Network through which the service providers reach the local communities and their CN users.
\vspace{-3mm}
{ \renewcommand{\arraystretch}{1.2}
\begin{center}
\begin{table*}[t]
\centering
\resizebox{16cm}{!} {
\begin{tabular}{|p{3.5cm}|p{10cm}|}
\hline
\textbf{Services} & \textbf{Private sector providers} \\ \hline
\multirow{4}{*}{Internet Service Provider (ISP)} & Adit Slu, Ballus Informatica, Capa8, Cittec, Delanit, Del-Internet Telecom, Ebrecom, Emporda Wifi - Guifi.net a l'Alt Emporda, Girona Fibra, Goufone, Indaleccius Broadcasting, Pangea.org, Priona.net, S.G. Electronics, Steitec-Servei T\`ecnic d'Electronica i Telecomunicacions, Ticae, Xartic
\\ \hline
\multirow{3}{*}{Mobile Provider} & Ballus Informatica, Capa8, Cittec, Delanit, Ebrecom, Emporda Wifi - Guifi.net al Alt Emporda, Girona Fibra, Goufone, Indaleccius Broadcasting, Priona.net, S.G.Electronics, Ticae \\ \hline
\multirow{2}{*}{Surveillance} & Ballus Informatica S.L., Capa8, Delanit, Ebrecom, Girona Fibra, Goufone, Matwifi, S.G. Electronics, Ticae \\ \hline
\multirow{3}{*}{Telephony (VoIP) Provider} & Ballus Informatica, Capa8, Cittec, Delanit, Del-Internet Telecom, Ebrecom, Emporda Wifi - Guifi.net a l'Alt Emporda, Girona Fibra, Goufone, Indaleccius Broadcasting, Matwifi, Priona.net, S.G. Electronics, Ticae \\ \hline
TV (IpTv) Provider & Delanit, Del-Internet Telecom, Indaleccius Broadcasting, Priona.net\\ \hline
\textbf{Agreement types} & \textbf{Service providers} \\ \hline
Economic Activity Agreement & Adit Slu, Asociacion SevillaGuifi, Associacio Guifinet la Bisbal d'Emporda, Ballus Informatica, Capa8, Cittec, Delanit, Del-Internet Telecom, Ebrecom, Emporda Wifi - Guifi.net al Alt Emporda, Girona Fibra, Goufone, Indaleccius Broadcasting, Ion Alejos Garizabal, Maider Likona, Matwifi, Pangea.org, Priona.net, S.G.Electronics, Steitec- Servei Tecnin d'Electronica I Telecomunicacions, Ticae, Xartic \\ \hline
Volunteer Agreement & Cittec, Girona Fibra, Matwifi\\ \hline
\end{tabular}
}
\caption{Private sector service providers in guifi.net and the services they provide.}
\label{tab:guifi_prof}
\end{table*}
\end{center}
}
\vspace{-5mm}
\subsection{Public agencies}
Public agencies have the natural role of regulating the public space, whether for service provision or for the occupation of public spectrum and public land, but also of supporting local development and ensuring access rights to public information and services.
Public agencies have a responsibility to regulate the deployment and service provision of CNs, as with any other entity performing these activities. Furthermore, they may cooperate with a CN when the missions of both align. They may contribute to its deployment and growth by funding the initiative, sponsoring network equipment, consuming CN services, facilitating its expansion and growth, or permitting the use of public space and resources by a CN. In Catalonia, the Foundation operating guifi.net has developed the Universal format~\cite{universal17}, a template municipal ordinance that allows municipalities to regulate public, commercial and community entities deploying shared infrastructures in public space. Under these principles, several local authorities have allowed guifi.net groups to dig up public space and lay down fiber for expanding the network. In several German cities, Freifunk has been given permission to set up antennas and equipment on the rooftops of churches, town halls, or other public buildings.
Quite often other types of public agencies get involved in the network.
Sarantaporo.gr received network equipment for its initial deployment from the Greek Foundation for open-source software, and Internet connectivity from the regional University of Applied Sciences. TakNet received financial support from the Thai Network Information Centre Foundation, and initial equipment donation and support from the Network Startup Resource Centre.
Depending on their level of participation, public agencies can sign collaboration agreements with the legal entity of the CN and contribute economic or infrastructure resources, with or without compensation.
\subsubsection{Socio-economic motives}
The participation of public agencies in a CN initiative can also have an economic motivation. In the case of guifi.net, public agencies can fund the network expansion through the purchase of equipment in return for complimentary added-value services over the CN. Public agencies may be interested in the added value of purchasing connectivity services from a CPR infrastructure, which, while competitive in price, can amplify the spill-over effects in the local economy and contribute to socio-economic development. However, public entities may also be tempted to put up obstacles as a result of the influence and pressure of traditional large telecom companies, for instance by taxing CNs more heavily than large telecom or Internet players that may enjoy unfair tax benefits.
\subsubsection{Political motives}
The participation of public agencies in a CN often comes as a result of high-level policies against the digital divide, to increase the offer or lower the costs of local connectivity, and in favor of equal opportunities in the digital economy and society.
\subsubsection{Socio-cultural motives} Public agencies may also support CNs because they acknowledge their long-term potential to strengthen community links, raise awareness of issues concerning the local societies, and favor the engagement of citizens with the commons. On a more opportunistic note, local administrations (such as municipalities) can advertise the provision of network services as a political achievement that increases their re-election chances.
\section{Discussion and conclusions} \label{closure}
This survey has examined the issue of sustainability in community networks, covering its political, socio-cultural and economic aspects for CN participants. Special focus has been given to the economic sustainability of CNs. Open access network infrastructure models were presented and compared to traditional telecom business models, while possible funding options from private and public entities and CN members were analyzed. Economic sustainability is a challenging issue and appears to be a necessary but not sufficient condition for a sustainable operation of the CN; socio-cultural and political perspectives are needed as well.
The chosen CNs were studied with respect to actual and theoretical mechanisms for enhancing sustainability by organizing and encouraging member participation. Each CN is composed of four basic types of entities (\ie volunteers, private sector entities, users, public agencies). These entities have their own motives for joining the network and take part in mechanisms deployed to organize their contributions. In order to match their political, socio-cultural and economic interests, corresponding mechanisms have to be in place.
It appears that sustainability cannot be reached by following a set of exhaustive rules, and there are no clear-cut answers for approaching it. However, checkpoints or indicative guidelines can be used to assess it. These can be summarized in the following evaluation form, based on the fieldwork in~\cite{NETCOMMONS_D2_2}.
\subsection{Economy}
\subsubsection{Market and model of provision}
\begin{itemize}
\item To which extent is the community network supported by non-profit/community based network access and services provision?
\item To which extent does the community network rely on a commercial provider? What is the nature of this provider (e.g. for-profit vs. social enterprise, or local vs. non-local)?
\item To which extent does the model of network provision of the community network face competition from commercial for-profit telcos on the basis of quality of signal/provision, lower cost and/or better network maintenance?
\end{itemize}
\subsubsection{Resources}
\begin{itemize}
\item To which extent does the community network manage to survive economically, i.e. to afford the necessary hardware and labour-power necessary for running the network?
\item To which extent can the community network ensure that it has enough resources, supporters, workers, volunteers, and users?
\item To which extent does the community network rely on internal funding sources?
\item To which extent does the community network rely on external funding sources?
How regular are they?
\item Are there possibilities for the community network to obtain public or municipal
funding or to co-operate with municipalities, public institutions or the state in providing access and services?
\item To which extent does the community network rely on a single individual or a small group of actors for providing the necessary resources (time, skills, money)?
\end{itemize}
\subsubsection{Network wealth for all}
\begin{itemize}
\item To which extent does the community network provide gratis/cheap/affordable network and Internet access for all?
\item If subscriptions are used, are they affordable?
\item To which extent are there different pricing schemes such as for residential users,
small enterprises, bigger firms, and public institutions (e.g. schools)?
\item How can the community avoid or lower the digital divide?
\item What technological skills are required of the average user to benefit from the community network?
\end{itemize}
\subsubsection{Needs}
\begin{itemize}
\item To which extent are the community needs served by the community network?
\item To which extent are the needs of diverse individuals (e.g. by gender, age, nationality) and groups in the community served by the community network?
\item To which extent are the needs of local businesses served by the community
network?
\end{itemize}
\subsection{Politics}
\subsubsection{Participation/governance}
\begin{itemize}
\item How is the community network governed? How does it decide on which rules, standards, licences, etc. are adopted?
\item To what extent does the community network allow and encourage the participation of community members in governance processes?
\item To what extent are there in place mechanisms for conflict resolution and for proceedings in the case of the violation of community rules?
\end{itemize}
\subsubsection{Data ownership and control}
\begin{itemize}
\item To which extent does the community network enhance the protection of privacy of user data?
\item To which extent does the community network provide opportunities for active user
involvement in the management of their data? What are the skills required and how are they provided?
\item To which extent and for how long are user data kept in servers controlled centrally
(e.g. by the network administrators)? How do you guarantee that data storage is
done in line with data protection regulation and is privacy-friendly?
\end{itemize}
\subsection{Culture}
\subsubsection{Community spirit}
\begin{itemize}
\item How closely knit is the community? To which extent are trust and solidarity present and how are they manifested?
\item To which degree is the community network a ``geek public'' that has an elitist, exclusionary culture or a ``community public'' that is based on a culture of unity in diversity?
\item To which extent does the community network provide mechanisms for learning,
education, training, communication, conversations, community engagement, strong democracy, participation, co-operation, and well-being? In what ways?
\item To which degree is the community network able to foster a culture of togetherness and conviviality that brings together people? In what ways?
\end{itemize}
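The evaluation form above lends itself to a simple machine-readable encoding for recording and comparing self-assessments across CNs. The sketch below is purely illustrative and not part of the survey: the dimension keys, the 0--2 answer scale and the averaging rule are all assumptions made for the sake of the example.

```python
# Illustrative only: a toy encoding of the sustainability evaluation form.
# Dimension names mirror the subsections above; the 0-2 answer scale
# ("not at all" / "partially" / "fully") is an assumption, not part of the survey.

CHECKLIST = {
    "economy": ["market_and_model", "resources", "network_wealth", "needs"],
    "politics": ["participation_governance", "data_ownership"],
    "culture": ["community_spirit"],
}

def dimension_scores(answers):
    """Average the 0-2 answers per top-level dimension (None if unanswered)."""
    scores = {}
    for dim, items in CHECKLIST.items():
        vals = [answers[i] for i in items if i in answers]
        scores[dim] = sum(vals) / len(vals) if vals else None
    return scores

# A hypothetical self-assessment of a small rural CN.
answers = {
    "market_and_model": 2, "resources": 1, "network_wealth": 2, "needs": 1,
    "participation_governance": 2, "data_ownership": 1,
    "community_spirit": 2,
}
print(dimension_scores(answers))
# -> {'economy': 1.5, 'politics': 1.5, 'culture': 2.0}
```

Such an encoding would allow the same questionnaire to be applied repeatedly over time or across CNs and the per-dimension scores to be plotted, much like the radar charts used earlier in the survey.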
\subsection{Future research directions}
Having looked carefully into the broader issue of sustainability, our future work will focus on specific CNs and seek to propose incentive mechanisms that can address the sustainability challenge. This will be pursued through different directions.
One direction is the effort to import elements from the guifi.net model, such as the involvement of professionals in the network and the provision of commercial services over it. More specifically, part of the work will be devoted to analyzing the incentive mechanisms that guifi.net has put in place: the sustainability of its compensation system, which is the main tool for incentivizing the participation of commercial entities in the CN, and the exportability of this model to the newer and promising Sarantaporo.gr CN.
A second, related direction is the launch of an open-source mobile application over the CN that realizes mobile crowdsourcing and sharing-economy practices. We will analyze incentives (\eg game-theoretic tools, reciprocity theories) that can be embedded in the application to maximize its use and, through that, the use of the CN. Following the prototype of the application released in~\cite{NETCOMMONS_D3_2}, we will then design incentives to accommodate the agricultural services that are of interest in this case, taking into account the particularities of the CN, such as the ownership of network nodes and the network connectivity alternatives available in the area.
\section{ACKNOWLEDGMENTS}
The authors acknowledge the support of the European Commission through the Horizon 2020 project netCommons (Contract number 688768, duration 2016-2018).
\section{Introduction}
\subsection{CNs: history, evolution and current status}
Community networks (CNs) are networks inspired, built and managed by citizens and non-profit organizations. They are crowdsourced initiatives where people combine their efforts and resources in a collective manner to instantiate communication network infrastructures.
While the phenomenon of community initiatives in the field of media is as old as the distinct media themselves, CNs originally surfaced in the late 90s and have taken many forms and shapes ever since. Typically, these CNs are initiated by tiny groups of people, usually in the range of one to ten, who more often than not are driven by strong cultural and political motives: the fight against the digital divide through the provision of telecommunication services in under-served areas, the desire for autonomy and self-organization practices, the right to open, neutral networks and privacy, the experimentation with technology in a do-it-yourself manner, and the commitment to community ideals and needs.
Whereas some CNs have become obsolete due to the rise of commercial high-speed broadband networks in the areas where they operated, others have flourished and evolved into alternative telecommunication network models (section \ref{subsec:additional}). Not only have they filled in the coverage gaps of commercial operators providing telecom services in rural areas, but they have also developed rich organizational frameworks with various tools and mechanisms. Typically, these frameworks emerge as a result of past experiences, successful and unsuccessful practices, and accumulated knowledge. They are meant to systematize the network's governance, management and operation and to ensure the CN's sustainability. The establishment of functional economic models is a key factor to this end.
\subsection{Current motivating factors and new paths for CNs}\label{subsec:additional}
With a few notable exceptions (e.g., \cite{Baig2015150}), most community networks have been viewed (and have been viewing themselves) as alternative networks that are incompatible with any commercial notion, not least because of the strong cultural/political values of the small groups that initiated them.
Yet, there seem to currently exist additional good reasons that motivate a reiteration of their positioning in the overall telecommunications landscape and new approaches to their sustainability.
\begin{figure*}
\vspace{-1cm}
\centering
\subfloat[][]{\includegraphics[width=7.3cm]{figures/SERVICES_chart}\label{<figure1>}}
\hspace{-1.99cm}
\subfloat[][]{\includegraphics[width=7.3cm]{figures/INFRASTRUCTURE_chart}\label{<figure2>}}
\hspace{-1.99cm}
\subfloat[][]{\includegraphics[width=7.3cm]{figures/SIZE_CHART}\label{<figure3>}}
\caption{Radar charts of CN characteristics, \ie types of services, infrastructure used, and number of participating nodes. (a) \textit{CN services}. 0: local services as default (Internet connectivity available upon request, manual configuration), 1: mix of local services and Internet connectivity, 2: Internet connectivity as the main service (only management tools as local services), 3: Internet connectivity only. (b) \textit{CN infrastructure}. 0: mostly backbone network, access points offered by certain individuals through their home routers, 1: mostly backbone, good access points, 2: mostly access network (backbone is used to connect to the Internet or interconnect small ``islands''), 3: only access network (no backbone network).
(c) \textit{CN size}. 0: very small (number of nodes $<100$), 1: small ($100<$ number of nodes $<1000$), 2: medium ($1000<$ number of nodes $<10{,}000$), 3: large (number of nodes $>10{,}000$).}
\label{fig:chart}
\end{figure*}
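The ordinal 0--3 scales used in the radar charts above can be written down as simple classification functions. The snippet below is an illustrative sketch only: the size thresholds follow the figure caption, but the example CN names and node counts are made up for illustration and are not figures reported in the survey.

```python
# Illustrative sketch of the 0-3 size scale used in the radar charts.
# Thresholds follow the figure caption: <100, 100-1000, 1000-10,000
# and >10,000 nodes; the example node counts below are hypothetical.

SIZE_LABELS = {0: "very small", 1: "small", 2: "medium", 3: "large"}

def size_score(num_nodes):
    """Map a CN's node count to the 0-3 size scale of the figure."""
    if num_nodes < 100:
        return 0
    elif num_nodes < 1000:
        return 1
    elif num_nodes < 10000:
        return 2
    return 3

# Hypothetical node counts for illustration only.
for name, nodes in [("tiny village CN", 60), ("regional CN", 3500)]:
    print(name, "->", SIZE_LABELS[size_score(nodes)])
```

The services and infrastructure axes of the charts could be encoded in the same way, giving each CN a triple of ordinal scores suitable for plotting or comparison.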
\vspace{2mm}
\subsubsection{Contributing to broadband connectivity goals}
Broadband Internet access has been promoted as a core priority of political agendas throughout the world. In Europe, for example, the European Commission (EC) has set ambitious policy objectives for the years to come, summarized under the EC broadband 2020\footnote{\url{https://ec.europa.eu/digital-single-market/en/broadband-strategy-policy}} and 2025\footnote{\url{https://ec.europa.eu/digital-single-market/en/broadband-europe}} agendas.
These agendas entail huge investment costs, and grassroots initiatives such as CNs are acknowledged as one possible response to this challenge and one of four ways to involve public authorities in the realization of the broadband vision \cite{ecbb2014}. Community broadband networks such as the Catalan guifi.net CN \cite{Baig2015150} are singled out on the EC website as best practices in this respect.
\vspace{2mm}
\subsubsection{Realizing Internet access in developing regions}
More than half of the world's population, in particular women, the poor and marginalised populations in developing areas, are still offline~\cite{ITUBR16}. Many large corporations such as Google, Microsoft and Facebook have stated ambitious objectives for connecting another billion users around the globe. These initiatives are commercial, for-profit, and often do not provide for access to the open Internet. Combining the do-it-yourself culture with provisions for unlicensed spectrum and cheap fibre, small crowdfunded community operators that create local value for the local people, without the need for complex and centralized systems, may be the obvious way to realize the vision of Internet access for developing regions.
\begin{center}
\begin{table*}
\centering
\resizebox{18.4cm}{!} {
\begin{tabular}{ | p{1.8cm} | p{1.8cm} | p{2cm} | p{1.2cm} |p{9cm} |}
\hline
\multirow{2}{*}{\textbf{CN}} &\multirow{2}{*}{\textbf{Location}} & \textbf{Networking technology} & \multirow{2}{*}{\textbf{Internet}} & \multirow{2}{*}{\textbf{Description}}
\\ \hline
\multirow{3}{*}{AWMN} & \multirow{3}{*}{Greece} & \multirow{3}{*}{wifi} & \multirow{3}{*}{Yes*} & Built by network technicians, enthusiasts and radio amateurs. Contains native services without need for public Internet connectivity \ie
games, libraries, network monitoring tools, DNS solutions, and experimental platforms. (2002)
\\ \hline
\multirow{2}{*}{B4RN} & \multirow{2}{*}{UK} & \multirow{2}{*}{fibre} & \multirow{2}{*}{Yes} & Started by a local volunteer, who led the group as a networking expert. Aimed at bridging the digital
divide. Based exclusively on fibre. (2011)
\\ \hline
\multirow{3}{*}{Consume} & \multirow{3}{*}{UK} & \multirow{3}{*}{wifi} & \multirow{3}{*}{Yes} & One of the first CNs to be conceived
and deployed in Europe. The original motivation was to save Internet access
fees for conducting business. It has epitomised the anti-commercial model of networking. Not active anymore. (2000)
\\ \hline
\multirow{2}{*}{FFDN} & \multirow{2}{*}{France, Belgium} & \multirow{2}{*}{wifi, DSL/fibre} & \multirow{2}{*}{Yes} & An umbrella organization
embracing 28 CNs operating across France. Adheres to values
of collaboration, openness and support of human rights
(freedom of expression, privacy). (2011)
\\ \hline
\multirow{2}{*}{Free2Air} & \multirow{2}{*}{ UK} & \multirow{2}{*}{wired, wifi} & \multirow{2}{*}{Yes} & An alternative to the commercial Internet
provision. Run by a small number
of artists and a number of other individuals until 2015. (1999)
\\
\hline
\multirow{2}{*}{Freifunk} & \multirow{2}{*}{Germany} & \multirow{2}{*}{fibre, wifi} & \multirow{2}{*}{Yes} & An open initiative that supports free computer
networks in Germany. It attracted many
artists, activists and tech enthusiasts from all over
Europe. (2002)
\\ \hline
\multirow{2}{*}{Funkfeuer} & \multirow{2}{*}{Austria} & \multirow{2}{*}{wireless} & \multirow{2}{*}{Yes} & A free experimental wireless
network across Austria, committed
to the idea of DIY, built and currently
maintained by a group of computer enthusiasts. (2003)
\\ \hline
\multirow{2}{*}{guifi.net} & \multirow{2}{*}{Spain } & \multirow{2}{*}{fibre, wifi} & \multirow{2}{*}{Yes*} & Started in Osona to serve remote rural areas that were not covered by conventional ISPs. Applies the principles of CPR management. (2004)
\\ \hline
\multirow{2}{*}{i4Free} & \multirow{2}{*}{Greece} & \multirow{2}{*}{wifi} & \multirow{2}{*}{Yes} & The initiative of a German engineer and professor on a Greek island with poor Internet connectivity. (2014)
\\ \hline
\multirow{2}{*}{Ninux} & \multirow{2}{*}{Italy} & \multirow{2}{*}{wifi} & \multirow{2}{*}{No} & Experimentation and hacking culture. Ninux operates as an experimental
platform for decentralized protocols, policies and technologies. (2003)
\\ \hline
\multirow{3}{*}{Rhizomatica} & \multirow{3}{*}{Mexico} & \multirow{3}{*}{wireless} &\multirow{3}{*}{Yes} & Provides GSM services. Creates open-source technology and helps communities to build their own networks. Initiated by a small group of people with knowledge of community organization and technology. (2009)
\\ \hline
\multirow{2}{*}{Sarantaporo.gr} & \multirow{2}{*}{Greece} & \multirow{2}{*}{wireless} &\multirow{2}{*}{Yes} & People with origins from the area
of Sarantaporo wanted to create a
website for their village when they
realized that there was no network connection. (2010)
\\ \hline
\multirow{3}{*}{TakNET} & \multirow{3}{*}{Thailand} & \multirow{3}{*}{wifi} & \multirow{3}{*}{Yes} & Established as an academic project at the Asian Institute of Technology (AIT). Follows the goal of bridging the digital divide in Thailand villages. Composed of TakNET1, TakNET2 and TakNET3. (2012)
\\ \hline
\multirow{3}{*}{Wireless Leiden} & \multirow{3}{*}{Netherlands} & \multirow{3}{*}{wifi} & \multirow{3}{*}{Yes} & Volunteer-based open, inexpensive, fast wireless network in Leiden and surrounding villages. Developed by a group of local residents. Provides Internet access and free local communication. (2002)
\\ \hline
\multirow{3}{*}{Zenzeleni.net} & \multirow{3}{*}{South Africa} & \multirow{3}{*}{wifi} & Yes, VoIP public phones & Initiated by researchers from the University of the Western Cape (UWC) in the rural under-developed area of Mankosi. Solar powered network. Operated as an umbrella co-operative enterprise and a telecoms provider. (2013)
\\ \hline
\end{tabular}
}
\caption{Basic information about the 15 CN instances that are analyzed further in the survey. These are chosen as representative instances of the rich variety of worldwide CNs.}
\label{tab:CN_list}
\end{table*}
\end{center}
\vspace{-0.7cm}
\subsubsection{Democratization of the telecommunication market}
The market of telecom services is usually composed of monopolies and oligopolies that concentrate a significant amount of power. The prevention of telecommunications market distortions and the openness of networks are key goals set by the International Telecommunication Union (ITU)~\cite{itu2008}, the EC~\cite{ecbb2014}, and the Organization for Economic Cooperation and Development (OECD)~\cite{OECDreport}. Monopolies lead to vertically integrated models, where all the layers of the network belong to one entity and end users are left with limited options when it comes to choosing an operator.
The way they are built and operated makes CNs an ideal candidate model for separating the network infrastructure from the service provision layer. This separation generates opportunities for sharing the related costs between multiple players and opening the network to public administrations and commercial entities such as local/regional ISPs (we elaborate on this model in section II.C).
\begin{comment}
Exhaustive antagonism can also affect the end user whose data and personal information is stored centrally in large company databases. A user's data can be utilized in various ways by companies \ie user data promoted to advertisement companies, data mining companies for profit, receive attacks by other companies \etc. Hence, data privacy can be jeopardized.
\end{comment}
\vspace{0.4cm}
Our survey does not aim at presenting the status of the hundreds of CN efforts around the globe, nor is it a review of the technologies used in CNs today. Such information is already available in the CN literature~\cite{szabo2007wireless},~\cite{lawrence2007wireless},~\cite{5762819}. Instead, the focus of this survey is on the multiple, often complementary, ways in which different CN initiatives pursue their sustainability. We approach sustainability as a multi-faceted term with technical, economic, socio-cultural and political dimensions. We review how these networks fund their activities; which motives have dominated their initiation and what the aspirations of other actors are when participating in them; and what kinds of tools and processes are in place as incentives in the different CNs to best respond to these motives and aspirations.
Most of the material for this survey originates from interviews, both in-person and questionnaire-based, carried out in the context of the netCommons R\&D project~\cite{NETCOMMONS_D2_2},~\cite{NETCOMMONS_D2_1}. Another big part, on proposed participation incentives and mechanisms, is the result of an exhaustive review of the existing scientific literature on the topic. Fifteen CNs are primarily discussed in this paper, as listed in Table \ref{tab:CN_list}.
\begin{comment}
They are selected because they represent adequately the diversity of existing CNs with respect to supported services (local services by priority or exclusively (AWMN, Ninux, Zenzeleni.net) \vs~ Internet access (B4RN, i4Free, TakNet)); network infrastructure concentration (on the backbone network (AMWN, Consume, Freifunk, Ninux) \vs~the access network (FFDN)); size (small i4Free, Sarantaporo.gr, Zenzeleni.net \vs~ large Freifunk, guifi.net, B4RN).
\end{comment}
They are selected as good representatives of the diversity in existing CNs with respect to size, supported services (local services~\vs~Internet access), network scope/role (backbone network \vs~access network), geographical area of coverage (urban areas with rich communication alternatives~\vs rural under-served areas), organizational structure (involved actors and decision-making processes), and funding sources.
The radar chart of Fig. 1 depicts how these fifteen CNs score on the first three attributes (size, services, network role) on a 0-3 scale.
In the remainder of the survey, we first present the layered network infrastructure model, which aims at maximal openness and involvement of actors, and explore how CNs fit in it as open access network instances (section \ref{sec:netinfra}). Then, in section \ref{sec:CN_sustain_incentives}, we iterate on the participation motives of different actors
and their implications for the CN sustainability. In section \ref{sec:funding}, we elaborate on the economic sustainability aspects and the funding sources of CNs. Finally, we investigate actual and theoretical practices adopted by CNs or proposed in the literature (section \ref{mechanisms}), before concluding in section \ref{closure} with a list of the most valuable insights out of the survey.
\begin{comment}
\begin{figure}[tbhp]
\centering
\includegraphics[width=1\linewidth]{figures/3dchart}
\caption{Radar chart with CN profiles related to services, network infrastructure and size.}
\label{fig:chart}
\end{figure}
\end{comment}
\section{Incentive Mechanisms in CNs} \label{mechanisms}
To ensure a sustainable presence, CNs have put in place diverse incentive mechanisms. As with other types of commons~\cite{Ostrom1990}, the main purpose of these mechanisms is to protect, encourage and fuel the original motives for participation of all types of actors. They also aim to prevent phenomena and conditions that might weaken the original motivation of actors. Such phenomena mainly include:
\emph{Free riding and selfish behaviors:} many users are solely interested in enjoying network connectivity without themselves contributing adequate or any resources to the CN. Such behaviors can easily lead to the depletion of network resources and CN degradation.
Mechanisms for organizing and ensuring users' sustained contributions, and for distributing effort across them, are of significant importance.
\emph{Unclear CN legal status:} CN actors (users or private sector entities) may be deterred from joining the network and participating in its activities if its legal status is not clear. Well established operational and participation rules can alleviate such effects.
In what follows, we review incentive mechanisms that are either in place in different CNs or have been proposed in the literature without (yet) finding a path to implementation. In the latter context, we also review mechanisms that have been proposed for \emph{similar} systems such as wireless ad-hoc networks, P2P systems, and virtual online communities. These systems display inherent structural similarities with CNs in that they also depend on the collective effort and cooperation of their participants to fulfill their tasks: forwarding and routing data in wireless ad-hoc networks, disseminating files and other data in P2P systems, and sharing effort and data in virtual online communities.
The different incentive mechanisms aiming to motivate the participation in CNs and strengthen their sustainability are grouped into six categories (Fig. \ref{fig:mechanisms}).
\begin{figure}[tbhp]
\centering
\includegraphics[width=1\linewidth]{figures/mechanisms3}
\caption{Categories of incentive mechanisms used in CNs.}
\label{fig:mechanisms}
\end{figure}
\vspace{-1.5mm}
\subsection{Enforcing fairness in users' contributions and interactions}\label{s:enforcing}
Despite the direct threat that free riding phenomena pose to the network's long-term sustainability, actual prevention countermeasures are not that widespread in most CNs, with the notable exception of guifi.net~\cite{Baig2015150}. Interestingly, quite a broad range of solutions has been proposed in the literature, either in the specific context of CNs or in that of similar systems (wireless ad-hoc networks, P2P systems, and online virtual communities) \cite{GhLo-2011, CiLo-2011}.
\subsubsection{Direct reciprocity-based mechanisms}
Reciprocity is a broad term that incorporates the notion of human cooperation in different interaction scenarios~\cite{nowak2006five}. \textit{Direct reciprocity} keeps records of the interaction of two specific individuals so that the accounts are settled between those two. The ``tit-for-tat'' manner of connecting to wireless CNs is quite common practice among their members. For a node to connect to a CN, there must be another node to which the connection is directed. In many cases, the reciprocal sharing obligations stemming from participation in the CN are explicitly described in licenses such as the Wireless Commons License (WCL)\cite{Baig2015150}\footnote{\url{http://wiki.p2pfoundation.net/Wireless_Commons_License}}, defined in terms of neutrality and general reciprocation.
Direct reciprocity mechanisms can be applied in various contexts, such as sharing network connectivity or storage and computing resources. The compensation tables in guifi.net are a key resource for ensuring the economic sustainability of the network, supporting a cooperative and cost-oriented model to share the recurring costs and balance investment, maintenance and consumption~\cite{Baig:2016:MCN:2940157.2940163}.
In terms of proposals, connectivity sharing is the objective studied in \cite{4146973}. A reciprocity algorithm, coupled with the P2PWNC protocol in~\cite{efstathiou2006practical}, keeps account of the services each participant provides and consumes via technical receipts. This way, it keeps a balance between the amount of traffic users transfer and the amount they relay on behalf of others. The model considers the provision of Internet access through the APs of a wireless CN. Participants are divided into teams that manage their own APs and consume/contribute traffic of/to another AP.
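The receipt-based bookkeeping that such schemes rely on can be sketched as a simple ledger. The class below is an illustrative assumption for exposition, not the actual P2PWNC protocol: teams accumulate receipts for traffic they relay, and a team that falls too deep into debt is denied service until it contributes again (the debt threshold is likewise an assumed parameter).

```python
from collections import defaultdict

class ReceiptLedger:
    """Toy sketch of receipt-based reciprocity accounting; the class,
    its fields and the debt threshold are illustrative assumptions."""

    def __init__(self):
        # provided[a][b] = bytes that team `a` relayed on behalf of team `b`
        self.provided = defaultdict(lambda: defaultdict(int))

    def record_receipt(self, provider, consumer, nbytes):
        self.provided[provider][consumer] += nbytes

    def balance(self, team):
        # positive balance: the team has relayed more than it consumed
        given = sum(self.provided[team].values())
        taken = sum(byteam[team] for byteam in self.provided.values())
        return given - taken

    def may_consume(self, team, min_balance=-10_000_000):
        # teams deep in debt are denied service until they contribute again
        return self.balance(team) >= min_balance

ledger = ReceiptLedger()
ledger.record_receipt("team_a", "team_b", 5_000_000)  # A relays for B
ledger.record_receipt("team_b", "team_a", 2_000_000)  # B relays for A
print(ledger.balance("team_a"))      # net provider: 3_000_000
print(ledger.may_consume("team_b"))  # still within the allowed debt
```

The per-team (rather than per-node) granularity mirrors the division of participants into AP-managing teams described above.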
Reciprocity-based mechanisms for sharing storage and computing resources are reported in~\cite{6583482},~\cite{buyukcsahin2013incentive} and~\cite{Vega:2013:SHR:2595405.2595411}. In~\cite{6583482} and~\cite{buyukcsahin2013incentive}, the reciprocity-based mechanism is implemented over a Community Cloud made out of shared computational resources of the network members and is based on records of participants' efforts. Results indicate that the most suitable structure for community clouds should distinguish between ordinary nodes that possess cloud resources and super nodes that are responsible for the management of resource sharing. In~\cite{Vega:2013:SHR:2595405.2595411}, mobile devices used for computing borrow CPU slots in a reciprocal manner. It is suggested that heterogeneity in the amount of available resources may not be beneficial for participants with large-scale resources.
\subsubsection{Indirect reciprocity-based mechanisms}
The concept of direct reciprocity readily expands to that of \textit{indirect reciprocity}, which is essentially realized by reputation mechanisms. Indirect reciprocity does not consider two specific individuals (as direct reciprocity does) but rather asymmetric random exchanges based on the reputation score of each individual node.
Key issues in building reputation mechanisms \cite{4468733} involve keeping past behavior records (as node reputation is partially built over time), carefully evaluating all of the acquired information, and distinguishing between old and recently gathered data. Among other challenges, reputation-based systems have to cope with the impact of liars on peer reputation, \ie nodes giving unreliable information about other nodes. The system should be able to respond immediately to known misbehaving nodes by drawing on past information.
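The interplay of record keeping, data aging and distrust of second-hand reports can be sketched as an exponentially weighted score update. The decay rate, second-hand weight and neutral prior below are illustrative assumptions, not parameters of any cited system:

```python
class Reputation:
    """Sketch of an indirect-reciprocity score in [0, 1]; the decay
    rate, second-hand weight and neutral prior are assumed values."""

    def __init__(self, decay=0.7, secondhand_weight=0.3):
        self.scores = {}
        self.decay = decay
        self.secondhand_weight = secondhand_weight

    def observe(self, node, cooperated, firsthand=True):
        # second-hand reports (possibly from liars) move the score less
        w = 1.0 if firsthand else self.secondhand_weight
        alpha = (1 - self.decay) * w
        old = self.scores.get(node, 0.5)         # neutral prior
        evidence = 1.0 if cooperated else 0.0
        # exponential discounting: recent behaviour dominates stale records
        self.scores[node] = (1 - alpha) * old + alpha * evidence
        return self.scores[node]

rep = Reputation()
rep.observe("n1", cooperated=True)                   # first-hand cooperation
rep.observe("n1", cooperated=False)                  # defection drags it down
rep.observe("n1", cooperated=True, firsthand=False)  # weaker second-hand effect
```

Because the update is multiplicative in the old score, stale evidence fades automatically, which addresses the old-vs-recent-data distinction noted above.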
The guifi.net classification of suppliers\footnote{\url{https://guifi.net/en/node/3671/suppliers}} (professionals, volunteers) provides a public list of professionals and volunteers available for a range of tasks, ranked according to reputation. The list is based on the certification of their abilities through actual deployments or training courses.
Reputation mechanisms have been proposed for P2P systems and wireless ad-hoc networks. In \cite{4343463}, such a mechanism is developed to build a reputation score for P2P system participants. Each peer is described based on how much service (bandwidth, computation) it provides and consumes. Peers are encouraged to collaborate with each other and receive an increase in their reputation metrics. The mechanism successfully results in peers making coalitions that eventually work to their benefit. A similar rationale is applied to routing in mobile ad-hoc networks (MANETs), where a reputation technique based on the Confidant protocol aims at isolating non-cooperative nodes. Tamper-proof hardware embedded in the nodes keeps account of their \emph{virtual credit}, collected as they contribute to packet forwarding. The reputation mechanism in~\cite{Michiardi:2002:CCR:647802.737297} keeps records of the collaboration activities of nodes in the MANET and builds a reputation score for each node, based on monitored collaboration data and information input from other nodes.
\subsubsection{Punishment of free-riders}
Free riding is a quite common problem in commons, experienced in various forms by each network type. The design of long-enduring CPR institutions~\cite{Ostrom1990} requires graduated sanctions for appropriators who do not respect community rules.
This implies defining the ``boundaries'', determined by the community license and agreements, and requires effective conflict resolution methods that may include sanctions \cite{Baig2015150}.
The conflict resolution system in guifi.net provides a systematic and clear procedure for resolving conflicts with participants that negatively affect the common infrastructure, with a scale of graduated sanctions. It consists of three stages (conciliation, mediation, and arbitration), all of them driven by a lawyer chosen from a set of volunteers. This has proven critical to keeping the infrastructure and the project itself operational.
In multi-hop wireless networks, consumption of bandwidth and energy serves as the main motivation for nodes' free riding behavior. Nodes enjoy having their own packets forwarded by other nodes but refrain, either deterministically or probabilistically, from forwarding the packets of other nodes. Detection and punishment of suspected free-riding nodes are the two basic steps suggested for dealing with this phenomenon in the corresponding literature.
In the generic setting in \cite{8bd91e0a0a71406681d682e5d29b4bbd}, it is suggested that free riding should be confronted by using exclusion of peers from a group as a plausible threat. Misbehaving nodes are detected through reputation protocols and excluded from the network or community. Detection of selfish behavior of mesh routing nodes is carried out in~\cite{Martignon:2009:FDS:1641944.1641958} with a trust-based mechanism. Such a mechanism can be built on the combined observations of neighbor (and other) nodes of the CN, as in KDet \cite{lopez2015kdet}. The Catch protocol in~\cite{Mahajan:2005:SCM:1251203.1251220} tries to limit the free riding problem in multi-hop wireless networks while preserving anonymity. The adopted technique uses anonymous messages and statistical tests to detect selfishly behaving nodes and isolate them. It relies on the assumption that free riding does not appear in the initial stages of the network deployment but later, as the number of peers starts to grow. The corresponding example in CNs reflects the fact that the initial members, \ie volunteers, create the CN based on certain principles and knowledge that are not compatible with free riding. Members that join the network at later stages, \ie users, are often not acquainted with these principles and the importance of complying with them.
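The statistical-detection step that protocols of this kind rely on can be illustrated with a one-sided test on a node's observed forwarding ratio. The expected forwarding rate and the decision threshold below are assumed values for exposition, not parameters of the cited protocols:

```python
import math

def is_suspect(forwarded, received, p_expected=0.95, z_threshold=3.0):
    """Flag a node whose forwarding ratio falls statistically below the
    loss-adjusted rate expected of a cooperative node (one-sided z-test
    on a binomial proportion; both parameters are assumed values)."""
    if received == 0:
        return False                        # no evidence yet
    p_hat = forwarded / received
    stderr = math.sqrt(p_expected * (1 - p_expected) / received)
    z = (p_expected - p_hat) / stderr       # how far below expectation?
    return z > z_threshold

print(is_suspect(500, 1000))   # blatant dropper
print(is_suspect(940, 1000))   # within normal radio losses
```

Using a statistical test rather than a fixed ratio tolerates ordinary wireless losses while still catching nodes that drop a significant fraction of the traffic they should relay.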
\subsubsection{Direct and indirect financial compensation}
\label{subsec:compensation}
This type of mechanism aims to support CNs' economic sustainability. Guifi.net, a representative example of this category, involves private sector actors that provide commercial services in the CN. To this end, it has set forth additional mechanisms for compensating contributions of different stakeholders, \ie a compensation system and the provision of donation certificates~\cite{Baig:2016:MCN:2940157.2940163}.
The compensation system aims at settling imbalances between network usage and contributions (CAPEX or OPEX). It is a way for participating entities to share network costs while acquiring network resources. Private sector service providers may assume the roles of operators that contribute to the network and consume its resources, investors that only contribute, and pure operators that only consume network resources. Operators can contribute either to the deployment of the infrastructure or to its maintenance.
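A minimal sketch of such a compensation round, assuming usage-proportional cost sharing, is shown below; this is a deliberate simplification for illustration, not the actual guifi.net procedure:

```python
def settle(accounts):
    """Toy version of a compensation round: each entity declares the
    costs it contributed (CAPEX/OPEX) and a usage share; imbalances are
    settled proportionally to usage (a simplification, not the actual
    guifi.net tables)."""
    total_cost = sum(a["contributed"] for a in accounts.values())
    total_usage = sum(a["usage"] for a in accounts.values())
    balances = {}
    for name, a in accounts.items():
        fair_share = total_cost * a["usage"] / total_usage
        # positive: to be reimbursed; negative: owes the pool
        balances[name] = round(a["contributed"] - fair_share, 2)
    return balances

accounts = {
    "investor":  {"contributed": 900, "usage": 0},  # only contributes
    "operator":  {"contributed": 300, "usage": 2},  # contributes and consumes
    "pure_user": {"contributed": 0,   "usage": 1},  # only consumes
}
print(settle(accounts))
```

The three hypothetical entities mirror the roles described above (investors that only contribute, operators that do both, and pure consumers), and the balances always sum to zero, so the round only redistributes costs.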
The provision of donation certificates that are amenable to tax deductions is a way of acquiring indirect benefits for contributing to a commons infrastructure. Users who pay commercial service providers for service provision can also obtain tax deduction benefits, in accordance with Spanish legislation and regulation authorities.
Other mechanisms explored in the literature but not yet validated in CNs are the following.
\subsubsection{Community currencies}
\label{subsec:currencies}
The design of community currencies is a way to enforce reciprocity and balance the contributions of nodes to the network. As long as the cost/value of nodes' contributions can be quantified, community currencies can ease the exchange of a wider set of services between CN members and users of a CN, and properly reward voluntary activities. At the same time, community currencies are themselves collaborative activities that increase the community spirit and strengthen the intrinsic motivations for participating in a CN. In fact, the smooth operation of a community currency depends heavily on building trust between community members, both to accept and use the corresponding currency and to be able to provide the risk-free credits that are very important for the required flow of currency. This trust is a very important asset that can play a key role in the initial birth and sustainable operation of CNs. For the same reason (existing trust and community values), the existence and operation of a CN eases the launch of a community currency. The development of community currencies for CNs is still at an initial stage, but they constitute a promising mechanism that exhibits a complex bidirectional relation with CNs~\cite{NETCOMMONS_D2_4}.
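A community currency of the mutual-credit type can be sketched in a few lines: money is created as balanced credit/debit pairs, so all balances always sum to zero. The per-member credit limit below is an assumed policy parameter, not a feature of any specific CN currency:

```python
class MutualCredit:
    """Minimal mutual-credit ledger: currency is created as balanced
    credit/debit pairs, so all balances always sum to zero; the credit
    limit is an assumed policy parameter."""

    def __init__(self, credit_limit=100):
        self.balances = {}
        self.credit_limit = credit_limit

    def join(self, member):
        self.balances.setdefault(member, 0)

    def pay(self, payer, payee, amount):
        # the credit limit bounds how far any member may go into debt
        if self.balances[payer] - amount < -self.credit_limit:
            raise ValueError("credit limit exceeded")
        self.balances[payer] -= amount
        self.balances[payee] += amount

cc = MutualCredit(credit_limit=100)
cc.join("node_owner"); cc.join("volunteer")
cc.pay("node_owner", "volunteer", 30)  # e.g. paying for antenna work
print(cc.balances)
```

The credit limit is where the trust discussed above becomes concrete: the community collectively underwrites each member's debt, so the limit encodes how much risk it is willing to absorb.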
\subsubsection{Other game-theoretic mechanisms for enforcing participation} Participants' motives for contributing to CNs can be enhanced by game-theoretic and mechanism-design approaches. An incentive mechanism based on a Stackelberg game is provided in~\cite{Biczok:2011:IGW:1942329.1942565}. The objective is to stimulate user and ISP participation in a hypothetical global CN where the participating entities (users and ISPs) interact with an intermediate entity, \ie the community provider or mediator.
Due to the cooperative nature of CNs, participation of peers often needs to be combined with mutual cooperation among them, \ie forwarding packets. While some works use reputation-based mechanisms, others prefer credit as a plausible economic incentive to sustain participation. The works in~\cite{Feldman:2004:RIT:988772.988788},~\cite{1610590},~\cite{zhong2003sprite} and~\cite{Zhong2007} tackle this objective in different types of systems, \ie P2P, static or mobile ad-hoc systems.
In a P2P network setting~\cite{Feldman:2004:RIT:988772.988788}, the prisoner's dilemma is chosen to design incentive techniques and deal with challenges such as large populations with short lifetimes, asymmetry of interest in participation, and multiple peer identities. In order to enhance cooperation and avoid false identities and hijacking, the mechanism proposes to keep records of peer interaction and use them to build reputation metrics. In another approach, the work in~\cite{1610590} uses game theory techniques to enhance cooperation in static ad-hoc networks and suggests that the most effective incentivizing structure is one that combines actual incentive mechanisms, \ie actual credits as reputation systems or virtual currencies, with mechanisms that target players' self-interest and enjoyment. A Video on Demand service on wireless ad-hoc systems is the setting for the Stackelberg game presented in~\cite{6260452}. In order to promote cooperation among participants, \ie uploading and forwarding data, the content provider offers them rewards which vary across actual payment, virtual credit or reputation points. A software protocol in~\cite{zhong2003sprite}, combined with a game-theoretic analysis, is used to stimulate cooperation among selfish nodes in mobile ad-hoc networks. A cheat-proof and credit-based mechanism determines node rewards and costs, which are utilized for packet forwarding and route discovery.
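The incentive problem these works study can be illustrated with the iterated prisoner's dilemma: a reciprocating strategy is exploited at most once by a persistent defector, which is why tit-for-tat-style accounting keeps cooperation viable. The payoff values are the standard textbook ones, not those of any cited paper:

```python
# Standard prisoner's dilemma payoffs (T=5 > R=3 > P=1 > S=0).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # cooperate first, then mirror the opponent's last move
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strategy1, strategy2, rounds=10):
    h1, h2, s1, s2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = strategy1(h2), strategy2(h1)
        p1, p2 = PAYOFF[(m1, m2)]
        s1, s2 = s1 + p1, s2 + p2
        h1.append(m1); h2.append(m2)
    return s1, s2

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (30, 30)
print(play(tit_for_tat, always_defect))  # exploited only once: (9, 14)
```

Mutual cooperation yields the highest joint payoff, while the defector gains only a one-round advantage before being punished every round thereafter.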
\subsection{Community cloud infrastructure}
\label{sec:privacy}
\label{sec:cloud}
CN services and applications that store and process data locally can serve as privacy-related incentives for CN participants, avoiding exposure to the poorly understood and often privacy-unfriendly practices of commercial data storage solutions. More often than not, such services rely on distributed cloud solutions deployed locally across the CN nodes, which process and store users' data without depending on external cloud services.
A proposition to extend CN resource sharing beyond bandwidth to computing resources is discussed in~\cite{6673334}. Cloud computing infrastructures can be developed in various ways but face severe challenges due to the nature of CNs, \ie hardware and software diversity with various options for inexpensive material, decentralized management where users contribute and manage their own resources, and rapid changes in the number of contributing nodes. The idea of developing a distributed Community Cloud that follows the topology of CNs is proposed in~\cite{6583482}. The goal is to regulate consumption and contribution of participant resources in the community cloud in accordance with one's level of contribution. The authors present an effort-based mechanism for stimulating node participation in resource sharing. Nodes are incentivized with rewards that depend on their contribution, \ie effort, to the local cloud system. A Community Cloud can also be used in conjunction with Grid Computing techniques~\cite{Marinos:2009:CCC:1695659.1695704}. The Community Cloud uses the spare resources of network nodes while considering environmental sustainability and self-management, and replaces vendor clouds with full access to users' resources.
\subsection{Socializing processes and tools}
\label{s:socializing}
CNs have developed a great variety of mechanisms to promote participation, interaction and knowledge dissemination among CN members, \ie social events, meetings, and new member induction processes. These mechanisms serve as a ``social'' incentive to encourage active involvement and engage new and old members in CN processes and operation.
\subsubsection{Social events and meetings}
Large- and small-scale CNs organize gatherings and events not only to discuss CN organizational matters but also to strengthen the bonds of community members through social activities.
Face-to-face meetings are common practice. Depending on their morphology, \ie a single network or a network of networks, CN members meet weekly, monthly or annually. CNs which are composed of smaller networks (guifi.net, Ninux, Freifunk) tend to have weekly or monthly face-to-face meetings at the level of the local networks and an annual global meeting to get together and discuss issues arising from the operation of the entire network. Other CNs, like AWMN, schedule frequent meetings (\ie a General Assembly) when important organizational matters are up for discussion.
\begin{comment}
\begin{itemize}
\item \textbf{guifi.net}, as a network of networks, is divided among smaller networks, each coming with its own local support group. Face-to-face meetings allow volunteers to discuss the issues arising from the operation of the network. These meetings take place every week or every month at the level of the local guifi.net communities and once a year at the level of the whole guifi.net.
\item \textbf{Ninux} Similar practices are followed in Ninux and its own CN islands, with meetings organized periodically at local level. Global meetings and events take place every few years.
\item \textbf{AWMN} Face-to-face meetings are organized by the AWMN Association in order to discuss important organizational matters. The General Assembly is used to regulate and decide about the governance of the network and the election of boards. In most cases, groups of users take advantage of these events and go out together for coffee or drinks when they are over.
\item \textbf{Freifunk} c-base gatherings and the annual "Wireless Community Weekend" event is the way that Freifunk members and organizations get in touch with each other.
\end{itemize}
\end{comment}
\subsubsection{New member induction processes}
Depending on the mentality and philosophy of the particular CN, interaction with network members is a natural prerequisite for a newcomer's access to the network. The way this interaction is retained later on may determine their individual level of participation.
For example, in AWMN or guifi.net, new participants are urged to register and communicate with nodes in physical proximity to them. After communicating with the node owners, they can receive advice about the equipment they need and get assistance from existing members in setting up their own nodes and joining the network. Many node owners provide public contact information so that others can reach them. In cases where actual interaction with node owners is not possible, or for complementary assistance, users can register on the website and post their questions in the CN's forum.
\vspace{1mm}
\subsection{Education and training practices}
\label{s:education}
Education and training of CN members is an important aspect of CNs, addressing their members' desire to acquire new skills and learn more about networking and radio technologies. Seminars, workshops and online manuals are the main deliverables of this line of effort, invested typically by members of the volunteers' group but also by other CN members.
\subsubsection{Workshops and seminars}
Several workshop and seminar events are organized by existing CNs (AWMN, Sarantaporo.gr, guifi.net). Experienced members share their knowledge with new members, exchange ideas and present available technical solutions. Guifi.net is quite active in organizing workshops and training seminars, \ie guifi labs\footnote{\url{http://www.guifiraval.net/}} \footnote{\url{https://guifi.net/en/event}} and the SAX\footnote{\url{ https://sax2016.guifi.net}}, or supports related events such as FOSDEM\footnote{Free and Open Source Software Developers' European Meeting: \url{https://en.wikipedia.org/wiki/FOSDEM}} and the Dynamic Coalition on Community Connectivity (DC3)\footnote{\url{https://www.intgovforum.org/cms/175-igf-2015/3014-dynamic-coalition-on-community-connectivity-dc3}}.
AWMN workshops aim at enhancing members' technical skills by disseminating knowledge and technical expertise, interacting with people that have the same interests, strengthening the bonds within the community, and training new members. In a different approach, Sarantaporo.gr workshops focus more on the broader community of locals (with or without technical expertise), informing people about the operation of the network and sharing knowledge about wireless networking principles and the development of community networks.
\begin{comment}
\begin{itemize}
\item \textbf{AWMN} Face to face meetings and workshops have been taking place in AWMN not only for the proper organization of the network but also for knowing new members, disseminating knowledge and technical expertise, interacting with people that have the same interests and strengthening the bonds within the community.
\item \textbf{Sarantaporo.gr} organizes seminars and workshops to inform people about the operation of the network and share knowledge over the wireless networking principles and the development of community networks. The latest workshop was organized in conjunction with netCommons in November 2016.
\item \textbf{guifi.net} is quite active in organizing events. It hosts workshops and learning seminars for end users or professionals known as guifi labs\footnote{\url{http://www.guifiraval.net/}} \footnote{\url{https://guifi.net/en/event}}, the SAX\footnote{\url{ https://sax2016.guifi.net}}, or supports related events GUADEC\footnote{\url{https://en.wikipedia.org/wiki/GNOME_Users_And_Developers_European_Conference}}, the e-week in Vic\footnote{\url{https://twitter.com/eweekvic}}. It also provides support for the World Summit for Free Information Infrastructures
\end{itemize}
\end{comment}
\subsubsection{Online material for DIY fans}
CNs invest effort in producing manuals and how-to documents so that users can learn more about technical matters and be able to set up their own nodes. Freifunk, Ninux, AWMN and guifi.net follow this practice and develop guides that provide technical instructions on the actions and requirements for setting up nodes, FAQs and other useful information. Participants are encouraged to self-educate and ``take matters into their own hands'' instead of relying on ``experts'' and behaving as consumers of a service. In cases where online material is not enough, they can always get advice in CN forums, or retrieve contact info of node owners.
\vspace{-0.2cm}
{ \renewcommand{\arraystretch}{1.1}
\begin{center}
\begin{table*}[t]
\centering
\resizebox{14cm}{!} {
\begin{tabular}{|l|c|c|c|c|}
\hline
\textbf{Mechanisms} & \textbf{Volunteers} & \textbf{Users} & \textbf{Private sector service providers} & \textbf{Public agencies} \\
\hline
Direct reciprocity & & x & & \\
\hline
Indirect reciprocity & & x & & \\
\hline
Punishment of free-riders & x & & & \\
\hline
Community currencies & & x & x & \\
\hline
Game-theoretic & & x & x & \\
\hline
Financial compensation & & & x & \\
\hline
Local data storage infrastructure & & x & & \\
\hline
Social events and meetings & x & x & &\\
\hline
New member induction processes & & x & & \\
\hline
Workshops and seminars & & x & & \\
\hline
Online material for DIY fans & & x & & \\
\hline
Local applications and services & & x & & \\
\hline
Operation as legal entities & & x & x & x\\
\hline
Licenses and Agreements & & x & x & \\
\hline
\end{tabular}
}
\caption{Incentive mechanisms and relevance to the CN stakeholders.}
\label{tab:mechanism_stakeholder}
\end{table*}
\end{center}
}
\vspace{-4mm}
\subsection{Local applications and services as incentives}\label{s:local}
The applications running over the network can themselves be considered as mechanisms motivating people to join the network\footnote{There are arguments in favour of the importance of local services in CNs~\cite{antoniadis2016}, but also doubts that local services can make an impact on CNs, considering that the public Internet covers any application needs on the side of the user~\cite{NETCOMMONS_D2_2}.}. Such services range from network connectivity to communication and entertainment.
\subsubsection{Proposed services} The CN literature has shown special interest in services such as VoIP, community clouds and crowdsourcing applications.
Trusted VoIP service for nomadic users in wireless network scenarios is the subject of~\cite{efstathiou2006building},~\cite{Frangoudis2014330} and~\cite{6979951}. Building upon an existing scheme, the Peer-to-peer Wireless Network Confederation (P2PWNC), the work in~\cite{efstathiou2006building} develops a VoIP scheme utilizing residential wireless LAN access points (producers of bandwidth) for nomadic users (consumers of bandwidth) as a low-cost alternative to traditional GSM telephony. In another generic setting, in~\cite{Frangoudis2014330} and~\cite{6979951}, the authors experiment with VoIP services for nomadic users using community-based Internet access and identify the perceived challenges (trust in nodes, data privacy and the unspecified conditions of the wireless environment) and performance limits (capacity, service quality, security).
Cloud services have also attracted a lot of attention as fundamental privacy enablers, \ie for storing the data of the CN locally, without needing to interact with the public Internet. A detailed discussion on clouds can be found in section \ref{sec:privacy}.
Crowdsourcing applications have the potential to match very well the participatory nature of wireless community networks, \ie participatory networking~\cite{Vega2014}, and the strong community-oriented social structure met in most developing regions. In the crowdsourcing paradigm, individual users solicit information, content or services from groups of people. The community dimension only strengthens the case for such applications, since the community bonds serve as additional socio-psychological incentives for the active participation and contributions of end users. The common resources shared by the members can serve as the media where users (mobile or not) connect to post tasks or get informed about available task announcements. Users receive explicit rewards such as monetary payments or virtual credits for services that match the services they offer~\cite{6275807}.
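The credit-based reward loop described above can be made concrete with a small sketch. The ledger below is purely illustrative (class and member names are invented, not taken from any CN platform): members escrow virtual credits when posting a task, and the credits are transferred to whoever completes it.

```python
# Hypothetical virtual-credit ledger for a CN crowdsourcing service.
# All names and amounts are illustrative assumptions.

class CreditLedger:
    def __init__(self):
        self.balances = {}    # member -> credit balance
        self.open_tasks = {}  # task id -> escrowed reward
        self.next_id = 0

    def join(self, member, starting_credits=10):
        self.balances.setdefault(member, starting_credits)

    def post_task(self, poster, reward):
        """Escrow the reward from the poster's balance."""
        if self.balances[poster] < reward:
            raise ValueError("insufficient credits to post a task")
        self.balances[poster] -= reward
        self.next_id += 1
        self.open_tasks[self.next_id] = reward
        return self.next_id

    def complete_task(self, task_id, worker):
        """Release the escrowed reward to the member who did the work."""
        reward = self.open_tasks.pop(task_id)
        self.balances[worker] += reward
        return reward

ledger = CreditLedger()
ledger.join("alice")
ledger.join("bob")
tid = ledger.post_task("alice", reward=4)
ledger.complete_task(tid, "bob")
print(ledger.balances)  # {'alice': 6, 'bob': 14}
```

The escrow step mirrors the idea that rewards are backed by services the poster has already contributed to the community.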
\begin{comment}
\begin{itemize}
\item \textbf{VOIP services.} A VOIP scheme was presented in~\cite{efstathiou2006building} building upon an existing scheme, the Peer-to-peer Wireless Network Confederation (P2PWNC) that considers nomadic users as consumers of bandwidth and residential WLAN owners as providers of bandwidth. This scheme is applicable to CNs with appropriate coverage capabilities. The VoIP application is analyzed based on the P2PWNC architecture.
Another approach of a VOIP service for wireless environments is the focus of~\cite{Frangoudis2014330} and ~\cite{6979951} in a setting where nomadic users have community-based Internet access but is generic enough to be implemented in other cases as well. The implementation of wireless communication services faces challenges such as trust on nodes, data privacy and unspecified conditions of the wireless environment (i.e., poor signal, transmission delay etc.). A secure VOIP scheme is developed in a residential WLAN and is performance limits (i.e., capacity, service quality, security) are investigated. The results acknowledge the potentials of the scheme in a wider scale implementation.
\item \textbf{Community clouds.} Clouds have attracted a lot of attention since they are seen as fundamental privacy enablers, i.e., store the data of the CN locally, without needing to interact with the public Internet (see section \ref{sec:privacy}).
\item \textbf{Crowdsourcing applications} match very well the participatory nature of wireless community networks, i.e., participatory networking~\cite{Vega2014} and the strong community-oriented social structure met in most developing regions. In the crowdsourcing paradigm, individual users solicit information, content or service from groups of people. The community dimension only strengthens the case for such applications since the community bonds serve as additional socio-psychological incentives for the active participation and contributions of end users. The common resources shared by the members can serve as the media where users (mobile or not) connect to post tasks or get informed about available task announcements. Users receive explicit reward such as monetary payment, virtual credits of services that match the services they offer~\cite{6275807}.
\end{itemize}
\end{comment}
\subsubsection{Implemented services}
Certain CNs have implemented a broad variety of services and applications, while others are at a more initial stage of service and application provision. In CNs like Sarantaporo.gr and i4Free, the main service of interest is Internet access. Yet, Internet access is not always on offer by the CN: Ninux does not provide any Internet service at all; guifi.net offers it only through private Internet service providers operating over the CN; and, in other networks, such as the AWMN, members occasionally share their Internet connections with other users through APs.
Networks built by people with a technological background tend to elaborate more on the provision of non-professional services. Tools for communication such as chat, email servers, mailing lists, wikis, forums, data exchange, and entertainment like broadcast radios, podcasts and streaming are common services found in most CNs (AWMN, Ninux, Freifunk, guifi.net). AWMN and Ninux users also have access to VoIP and chats, guifi.net users to videoconferencing, AWMN and guifi.net users to local clouds, and FFDN, Freifunk and AWMN users to collaborative writing tools. Apart from the basic services used in most CNs, there are also several CN-specific ones, \ie multi-player gaming, broadcasting, live streaming, e-learning and local search engines (Quicksearch, Wahoo, Woogle) in AWMN; web proxies, FTP or shared disk servers, XMPP instant messaging servers, IRC servers, and cloud services such as the \textit{Cloudy} distribution~\cite{Selimi:2015:CSG:2852375.2852752} in guifi.net; Internet cube, BitTorrent tracker, IndeCP or Internet service in FFDN; and private VoIP service and weather monitoring in Sarantaporo.gr.
\vspace{2mm}
\subsection{Lawful framework of operation}
\label{s:lawful}
An operational framework of CNs (legal status, rights, obligations) which is not well defined may impede the attraction of new participants. The level of support of CN initiatives by the state or local administration has an impact on users' decisions to join or not the network~\cite{abdelaal2013social}. When local authorities or another third-party organization with clear legal status are involved, \eg by signing licenses, users' concerns are more easily overcome and the decision to participate looks far less risky. The response of most CN initiatives to these reservations is to develop legal entities, and to set forth licenses and agreements as legal documents specifying the terms and conditions of participation in the network.
\subsubsection{Operation as legal entities}
The majority of CNs have developed legal entities to represent the network to third parties (Table \ref{tab:cn_aspects}). For example, Guifi.net created the guifi.net Foundation, AWMN the Association of AWMN, FFDN consists of non-profit member organizations registered as telecom operators, Sarantaporo.gr operates as a non-profit civil partnership subject to the Greek legal framework about NPOs, Freifunk has the Forderverein freie Netzwerke e.V. as a reference NPO authority, TakNet is a social enterprise and B4RN a community benefit society.
\begin{comment}
\begin{itemize}
\item \textbf{guifi.net} In guifi.net ~\cite{Baig2015150}, four year after its inception, a group of network users created the guifi.net foundation. The foundation was created with as a nonprofit legal entity for managing operational and funding issues regarding the guifi CN. To this end, it has developed several other legal mechanisms and tools (ref. \ref{subsec:licences}). The legal entity of guifi.net is recognized both at local and national level.
\item \textbf{AWMN} has founded the "Association of AWMN", a legal entity with a non-profit character representing the network to third parties. The Association has certain rules reflecting its main purpose of supporting and promoting ICT services.
\item \textbf{FFDN} is a federation of CNs, each of which has been declared as a non-profit member organization and they are registered as telecom operators according to the French legislation.
\item \textbf{Sarantaporo.gr} has developed a non-profit civil partnership that follows a set of articles and is subject to the Greek legal framework about NPOs.
\item \textbf{Freifunk} association called Forderverein freie Netzwerke e.V., is the reference authority (NPO) that gathers the responsibilities of funding and operation of the website and other media platforms. It is composed by a variety of networks that expand from Germany to Switzerland and Austria and is governed in a decentralized manner.To this purpose, for each of these network a local group is formed as a non-profit organization and undertakes responsibility for their local CN. They deliberately avoid hierarchies (of knowledge) that would give the participants the feeling that they are consumers of service (passive users).
\end{itemize}
\end{comment}
\subsubsection{Licenses and Agreements}
\label{subsec:licences}
Besides the legal status, CNs normally make use of legal documents, such as Licenses and Agreements, to specify the frame of their members' participation and their own interaction with third-party entities.
Guifi.net and FFDN utilize a Network Commons License (NCL) for establishing the rights and duties of subscribed participants. Moreover, guifi.net has developed collaboration agreements (\textit{Type A}, \textit{Type B}, \textit{Type C}) that define the terms and conditions of third-party collaboration within the network. Any private sector entity that wants to perform economic activities and use a significant amount of resources of the network has to sign an Agreement with the Foundation and participate in the compensation system (\cref{subsec:compensation}). Freifunk uses the PicoPeering Agreement, which promotes the free exchange of data within the network.
Ninux participants comply with the Ninux manifesto, which is a variation of the PicoPeering Agreement.
\begin{comment}
\begin{itemize}
\item \textbf{guifi.net} has devised certain legal documents determining participation rules: a Network Commons License (NCL) for establishing the rights and duties of subscribed participants and collaboration agreements that define the terms of condition of third party collaboration within the network. The Foundation of guifi.net is always present in these agreements as a central hub.
Any professional that wants to perform economic activities and use a significant amount of resources of the network has to sign an Agreement with the Foundation and participate in the compensation system (ref. section \cref{subsec:compensation}). There are three types of Agreements, depending on the type of contribution professionals make to the common infrastructure. The first one, \textit{Type A}, assumes that all of the infrastructure contributed by a professional will be incorporated in the commons; the second type, \textit{Type B} applies when parts of the contributed infrastructure is attributed to the common infrastructure; and the last type, \textit{Type C} refers to professionals that don't contribute infrastructure but use the one already deployed in the network.
\item \textbf{FFDN} The NCL is adopted by FFDN as well. The issue of net neutrality is specified within the NCL. ISPs of FFDN are bound to use public router IP addresses for each of their subscribers. Collaboration agreements are also present in guifi.net and FFDN. A Reference Authority is present in FFDN for legal representation of the local CNs and their members.
\item \textbf{Freifunk} The PicoPeering Agreement initiated by Freifunk promotes the free exchange of data within the network. The Memorandum of Understanding (MOU) is also used in Freifunk to declare the basic principles of network operation.
\end{itemize}
\end{comment}
\vspace{2mm}
\subsection{Incentive mechanism classification}
Several of the incentive mechanisms that are described in sections from \ref{s:enforcing} to \ref{s:lawful} have never gone beyond the paper analysis stage. On the other hand, several others are indeed applied in existing CNs. The financial compensation system of guifi.net, the social events, meetings and workshops organized by many CNs, the adoption of licences in Freifunk and guifi.net, as well as the introduction of a lawful operational framework serve, one way or another, as incentive mechanisms that motivate the participation of different types of stakeholders in CN initiatives, as shown in Table \ref{tab:mechanism_stakeholder}.
Some of these incentive mechanisms apply almost invariably to all CNs. The lawful operational status, for example, is mandatory if the CN wants to attract critical masses of users, but also private sector entities and the support from public agencies. Equally common among CNs is the care for social events and meetings that can strengthen the links between their members and satisfy socio-cultural motives of users. On the contrary, incentive mechanisms of an economic nature, such as the financial compensation scheme and the donation certificates issued by guifi.net for tax deduction purposes, are more relevant in CNs that support commercial operations over them.
It would certainly be wise to match the incentive mechanisms with the different stakeholder types. Hence, volunteers would be more responsive to incentive mechanisms that underline political and cultural causes; private sector service providers would respond, maybe exclusively, to incentive mechanisms with economic implications; and local authorities would be much more prone to get involved when they realize that public expenses can be saved or some strategic political objective be served through this involvement.
By far, the majority of incentive mechanisms target CN users. One aspect that is not well understood is how the effectiveness of a mechanism varies with different features of the community; namely, whether a characterization of a community according to a fixed set of attributes (urban vs. rural, educational level, professional background, dominant political preferences) could predict which incentive mechanism would best mobilize its members. An important parameter in this context is the size of the community. Characterization along such attributes is easier if the community is small\footnote{But not too small. The CN will not be sustainable if there are not enough human resources to pull from.} and has roughly uniform interests and professional background. As the size grows, such characterizations become harder and so does any attempt to predict the suitability of incentive mechanisms.
\section{Network Infrastructures and \\ Community Networks}\label{sec:netinfra}
In order to understand how CNs fit in the broader picture of broadband networks, we review below the typical network infrastructure layers, the basic actors and the business models, as they are met in most networks.
{ \renewcommand{\arraystretch}{1.2}
\begin{center}
\begin{table}[t]
\centering
\resizebox{8.5cm}{!} {
\begin{tabular}{|p{1.1cm}|p{6.7cm}|}
\hline
\textbf{Acronym} & \textbf{Description} \\ \hline
AP & Access Point \\ \hline
CAPEX & Capital Expenditure \\ \hline
CN & Community Network \\ \hline
CONFINE & Community Networks Testbed for the Future Internet \\ \hline
CPR & Common Pool Resource \\ \hline
CS & Community Service \\ \hline
DIY & Do-It-Yourself \\ \hline
DNS & Domain Name Server \\ \hline
EC & European Commission \\ \hline
EU & European Union \\ \hline
GFOSS & Greek Free/Open Source Software Society \\ \hline
ICT & Information and Communication Technology \\ \hline
ISP & Internet Service Provider \\ \hline
ITU & International Telecommunications Union \\ \hline
MANET & Mobile Ad-hoc Networks \\ \hline
NCL & Network Commons License \\ \hline
NP & Network Provider \\ \hline
NPO & Non-Profit Organization \\ \hline
OECD & Organization for Economic Cooperation and Development \\ \hline
OPEX & Operational Expenditure \\ \hline
P2P & Peer to Peer \\ \hline
P2PWNC & Peer-to-peer Wireless Network Confederation \\ \hline
PIP & Physical Infrastructure Provider \\ \hline
SP & Service Provider \\ \hline
VoIP & Voice over Internet Protocol \\ \hline
WCED & World Commission Environment and Development \\ \hline
WCL & Wireless Commons License \\ \hline
\end{tabular}
}
\caption{Terminology used throughout the paper.}
\label{tab:acronyms}
\end{table}
\end{center}
}
\vspace{-1cm}
\subsection{Network Infrastructure Layers}
Considering how a broadband network is created, its structure can be decomposed into three distinct but inter-dependent layers: a) \textit{passive infrastructure}, b) \textit{active infrastructure} and c) \textit{services}.
\begin{figure}[tbhp]
\centering
\includegraphics[width=0.8\linewidth]{figures/layers}
\caption{The layers of a broadband network.}
\label{fig:broadbmodel_layers}
\end{figure}
\begin{comment}
\begin{figure*}[ht]
\begin{minipage}[b]{0.45\linewidth}
\centering
\includegraphics[width=\textwidth]{figures/Services_chart}
\caption{default}
\label{fig:figure1}
\end{minipage}
\hspace{0.05cm}
\begin{minipage}[b]{0.45\linewidth}
\centering
\includegraphics[width=\textwidth]{figures/backbone_vs_access_chart}
\caption{default}
\label{fig:figure2}
\end{minipage}
\end{figure*}
\end{comment}
The \textit{passive infrastructure layer} comprises the non-electronic physical equipment needed to deploy the network. Non-electronic elements vary depending on the link technology in use, \eg fibre, copper, antennas. They typically refer to ducts, cables, masts, towers, technical premises, easements \etc The passive infrastructure is built to endure for many years, usually decades. Its development demands high capital expenditure (CAPEX) and frequent upgrades are difficult to realize. However, its operational costs (OPEX) are relatively low.
The second layer, \ie \textit{active infrastructure}, describes the electronic physical equipment of a network such as routers, switches, transponders, control and management servers. The OPEX of the active equipment is high (\ie electricity costs) but its capital expenditure is usually low since it involves up-to-date technological elements. The active equipment needs to follow the rapid advances of technology and get renewed frequently, \ie within a decade.
The third and highest layer of a broadband network is the layer of \textit{services}. It corresponds to the telecommunication services provided on top of the passive and active infrastructure. These services may be both private and public and include electronic government, education, health, commerce, Internet, entertainment, telephony (e.g., VoIP), access to media content (television, radio, movies) and many more. End users usually pay a fee for receiving the services either directly or indirectly. The type of reimbursement depends on the chosen network infrastructure model and the business actors involved.
The implementation of the service layer is conditioned on the deployment of the passive and active infrastructure. Therefore, the first two layers are a prerequisite for the existence of the third one (Fig. \ref{fig:broadbmodel_layers}).
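As a rough illustration of the three-layer decomposition and its cost profiles, the snippet below encodes each layer with qualitative CAPEX/OPEX labels and the dependency just noted (services presuppose the two infrastructure layers). The lifetime figures are indicative order-of-magnitude assumptions, not data from any deployment.

```python
# Illustrative encoding of the three broadband-network layers.
# "high"/"low" are qualitative labels from the discussion above;
# lifetime_years values are assumed for illustration only.

from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    capex: str            # relative capital expenditure
    opex: str             # relative operational expenditure
    lifetime_years: int   # rough renewal horizon (assumed)
    depends_on: tuple     # lower layers that must exist first

passive = Layer("passive infrastructure", capex="high", opex="low",
                lifetime_years=30, depends_on=())
active = Layer("active infrastructure", capex="low", opex="high",
               lifetime_years=10, depends_on=("passive infrastructure",))
services = Layer("services", capex="low", opex="low",
                 lifetime_years=5,
                 depends_on=("passive infrastructure",
                             "active infrastructure"))

def deployable(layer, deployed):
    """A layer can be rolled out only once the layers below it exist."""
    return all(dep in deployed for dep in layer.depends_on)

print(deployable(services, deployed={"passive infrastructure"}))  # False
print(deployable(active, deployed={"passive infrastructure"}))    # True
```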
\subsection{Business actors}
Business actors are determined in accordance with the network infrastructure layers \cite{ecbb2014,NETCOMMONS_D1_2,Forzati2010,szabo2007wireless}. They are typically providers of the network's equipment and services. Telecom operators and private companies, public authorities, local cooperatives and housing associations are some characteristic examples of business actors.
In detail, the \textit{physical infrastructure provider (PIP)} has ownership of the passive equipment and undertakes the equipment's maintenance and operation responsibilities. PIPs can be divided into \textit{backbone PIPs} and \textit{access area PIPs}, depending on which network parts they possess. Backbone PIPs invest in the backbone network infrastructure, while access area PIPs own and manage the infrastructure aimed at providing connections to the end users, \ie first-mile connectivity. In the case of CNs, a local organization may participate as a backbone PIP, an access PIP or both.
The \textit{network provider (NP)} owns and operates the active equipment. It leases physical infrastructure installations from the PIPs and makes its equipment available for the provision of services by other SPs or provides its own services. Network providers may be public authorities, private companies, local cooperatives who own the equipment or entities who are subcontracted to operate them by one of the aforementioned owner entities.
The \textit{service provider (SP)} offers services within the network. Service providers are typically companies that utilize the network's active and passive equipment to offer their services to end users in exchange for compensation, typically payment. The payment can be direct (service fee) or indirect (connection or network fee). They need access to the NP's interface and install their own devices if and where needed. The existence of service provision within the network is vital for the end user engagement and therefore the network's viability.
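The payment flows among these actors in a fully separated setting (users pay the SP a service fee, the SP compensates the NP for access, the NP reimburses the PIP for the lease) can be sketched as a simple settlement computation. All amounts below are invented for illustration.

```python
# Sketch of payment flows in an open-access model with distinct
# PIP, NP and SP actors. Amounts are made up; the point is the
# direction of each flow.

def settle(flows):
    """Net each actor's position from a list of (payer, payee, amount)."""
    position = {}
    for payer, payee, amount in flows:
        position[payer] = position.get(payer, 0) - amount
        position[payee] = position.get(payee, 0) + amount
    return position

flows = [
    ("user", "SP", 30),  # service fee paid by the end user
    ("SP", "NP", 12),    # access to the active infrastructure
    ("NP", "PIP", 8),    # lease of the passive infrastructure
]
print(settle(flows))  # {'user': -30, 'SP': 18, 'NP': 4, 'PIP': 8}
```

In model variants where users also pay a network fee directly to the NP, an extra `("user", "NP", ...)` entry would capture that flow.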
\subsection{Network Infrastructures Business Models}
The roles and responsibilities of different business actors in network infrastructures vary resulting in a great range of business models (Fig. \ref{fig:broadbmodel}).
Traditional telecom models follow the concept of \textit{vertical integration}. In these models, the ownership and operation of all three infrastructure layers is concentrated in a single entity. As a consequence, cases of monopolies or oligopolies that hamper the existence of competitors by exercising great control over the market, \ie "market failure" cases, are common. Moreover, due to the lack of competing entities, a single vertically integrated operator is often not willing to provide broadband access to remote areas featuring high network expansion costs, leaving several rural areas under-served.
To reverse this picture, the ITU \cite{ecbb2014,itu2008} and the EC have set a goal to promote infrastructure separation and sharing through legislation, regulation and subsidies. Open access networks have thus been brought into focus.
The \textit{openness of a network} is characterized by the presence of multiple providers in the market, offering customers the opportunity to choose amongst them. Open access network models separate the roles of the business actors across the infrastructure layers, \ie PIP, NP and SP, with the aim of promoting competition and the sharing of the network infrastructure, and of discouraging vertical integration.
The following cases can be distinguished although the limits among the respective actors are not always clearcut.
\begin{figure}[tbhp]
\centering
\includegraphics[width=1\linewidth]{figures/1}
\caption{The components of a broadband network (with a focus on optical fibre) and the three service layers.}
\label{fig:broadbmodel}
\end{figure}
The models differ in the functional separation across layers, as recommended by the ITU \cite{itu2008}, ranging from vertical integration across all layers in \textit{e, f, g}, through partial separation in \textit{a, b, d}, to full functional separation in \textit{c}. The models also differ in the alternatives, and therefore the competition, available in each layer, except for the passive infrastructure, which tends to have a single actor in charge of deploying and operating either the backbone or the access area PIP. While all models except \textit{g} offer alternatives in service provision, only \textit{d, e} provide alternatives in network provision.
\iffalse
\item \textit{Integrated PIP/NP and distinct SPs.}\\
In this model, one single actor is in charge of deploying and operating the physical and active infrastructure. On the contrary, the service provision layer is open to separate service providers. The SPs compensate the integrated PIP and NP for utilizing the network and providing services to the end users. In some cases, users pay a fee directly to the single PIP/NP actor.
\item \textit{PIP and integrated NP/SPs.} \\
The passive infrastructure is owned by a single actor (PIP) and is built by itself or by subcontracted third-party companies. The NP also acts as SP. There can be multiple integrated NP/SPs that compete with each other to deliver network access and services to the end users. The integrated NP/SPs lease the PIP's infrastructure (backbone PIP and access area PIP) and receive access to connect their active equipment.
\item \textit{Distinct PIP, NP, SPs} \\
The business actors are separate entities and the model is open at all three layers. The PIP owns the physical infrastructure. This is leased by the NP, which places its active equipment. The active equipment is then available to the SPs for delivering their services to the end users. The NP is not permitted to deliver its own services and does not interfere with the services offered by the SPs. It is also possible for multiple NPs to be contracted by the PIP to operate over distinct slices of time and/or different geographical areas each.
The NP reimburses the PIP for using the physical infrastructure and the SPs compensate the NP for having access to the active infrastructure. Users pay service fees directly to the SP. In some model variants, users may also pay a network fee directly to the NP.
\item \textit{PIP, NP or NP/SP, SPs} \\
This model is a variation of the previous one. The passive infrastructure is owned by the PIP and leased to the NPs. However, the NP now has the ability to act as SP and compete with other SPs.
\item \textit{Vertically integrated operator with LLUB and NP/SP.} \\
The vertically integrated operator is in charge of all layers but allows other operators to use its physical infrastructure for delivering services to their customers \ie local loop un-bundling (LLUB).
The NP/SP business actors install their equipment in the access nodes to reach their customers. They face competition from the vertical integrated operator and other NP/SPs.
\item \textit{Vertically integrated operator with bit stream access and SPs.} \\
The vertically integrated operator has access to all layers. It can provide access to service providers for using the active infrastructure \ie bit stream access. Service providers place their equipment to access the interface of the vertically integrated operator and compete with each other and the operator in the provision of services.
The model variants, e) and f) refer to cases, where the operator has calculable market power or has received public funding. In these cases, its provides access to its competitors in the physical e) or in the active layer f).
\item \textit{Vertically integrated operator, "All-in-a-box".}\\
This is the traditional model used in telecom markets \ie the vertical integrated network. The telecommunication operator dominates in all three network infrastructure layers and does not leave any opportunities for other entities to participate in one or more layers.
\fi
Diverse types of local cooperative schemes fit and build on these cases. Municipal networks focus on maximizing access to connectivity out of public (municipal) interest, and they usually rely on public-private partnerships. The service is defined and governed by the public partner but implemented and operated by one or multiple private partners. Typical cases are the optical fibre service from Stokab in the Stockholm region, among several other regions in Europe, following the \textit{d} model, or the public WiFi services in most European cities, which can follow any model for service provision, as the public entity just defines, funds and oversees the public service under private operation. Internet eXchange Points (IXPs) are physical infrastructures through which Internet service providers (ISPs) and Content Delivery Networks (CDNs) exchange Internet traffic between their networks (autonomous systems). The switching infrastructure is built and managed as a CPR according to Fig. \ref{fig:broadbmodel}, but the governance may range from a centralized \textit{a} to a participatory \textit{CN} model. IXPs and CNs are quite equivalent, the main difference being that IXPs connect larger entities only (wholesale) while CNs focus on individuals and households (retail). However, the difference blurs as they expand, with the example of guifi.net, which is both a CN and a de-facto regional IXP, or the case of Ninux, which acts like a country IXP of diverse city or regional networks, and is connected to the Rome IXP (Namex).
\vspace{1mm}
\subsection{CNs as open access network instances: the commons model}
CNs differ from other models in that there is crowdsourcing in all layers. The community participants contribute and share the passive infrastructure, they also coordinate and operate the active network, and multiple service providers can benefit from that network infrastructure CPR.
Furthermore, CNs embody some key principles \cite{Baig2015150}:
\textbf{Non-discriminatory and open access.} The access is non-discriminatory because any pricing, when practiced, is determined using a cooperative, rather than competitive, model. Typically this results in a cost-oriented model (vs. market-oriented) applying the fair-trade principle for labour pricing \cite{moore2004fair}. It is open because everybody has the right to join the infrastructure.
\textbf{Open participation.} Everybody has the right to join the community. According to roles and interests, several main groups could be identified as stakeholders: i) volunteers interested in aspects such as neutrality, privacy, independence, creativity, innovation, DIY, or protection of consumers’ rights; ii) commercial entities interested in aspects such as demand, service supply, and stability of operation; iii) end users (\ie \textit{customers}), interested in network access and service consumption; and iv) public agencies (local or national), interested in regulating the participation of society and the usage of public space, and even in satisfying their own telecommunication needs.
Preserving a balance among these or other stakeholders is desirable, as every group has natural attributions that should not be delegated or undertaken by any other. It is important to clarify that not all stakeholders are present in all CNs. For instance, many CNs object to the participation of commercial entities as this is against their vision and philosophy (e.g. B4RN).
The model of the CN is based on the concept that the physical and active equipment are used as a Common Pool Resource (CPR).
Its participants must accept the rules to join the network and must contribute the required infrastructure to do so (routers, links, and servers), but they keep the ownership of the hardware they have contributed and the right to withdraw. As a result, the infrastructure is shared and managed collectively, as a collective good.
Comparing the CN commons model with the aforementioned models for open access networks:
\begin{itemize}
\item the CPR (\ie participants of the network, legal entity) replaces the PIP and NP actors;
\item the CPR offers access to private service providers (SPs) but also provides community services (CSs).
\end{itemize}
Cooperation at the network deployment and operation level is crucial but competition in the service provision is encouraged to avoid monopoly situations.
An example of the commons model in action is provided by the guifi.net CN. The network employs cost sharing and compensation mechanisms in order to facilitate the participation of commercial SPs and operators in the CN. They deliver their services through the network's infrastructure and receive payment from their customers. At the same time, they can contribute infrastructure, invest money in the CPR, or compensate the network for using it \cite{Baig:2016:MCN:2940157.2940163}.
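The core of such a compensation system is a per-participant balance of contribution against consumption: a negative balance calls for extra compensation, while a positive one can be reinvested in the CN. The sketch below illustrates this netting with invented figures; it is not guifi.net's actual accounting.

```python
# Minimal sketch of a contribution-vs-consumption balance, in the
# spirit of guifi.net's compensation system. Names and figures are
# illustrative assumptions.

def compensation_balances(participants):
    """Return contribution minus consumption for each participant."""
    return {name: p["contribution"] - p["consumption"]
            for name, p in participants.items()}

participants = {
    "ISP-A": {"contribution": 500, "consumption": 900},  # heavy user
    "ISP-B": {"contribution": 700, "consumption": 400},  # net contributor
}
balances = compensation_balances(participants)
for name, balance in balances.items():
    status = "owes compensation" if balance < 0 else "surplus to reinvest"
    print(name, balance, status)
# ISP-A -400 owes compensation
# ISP-B 300 surplus to reinvest
```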
\begin{comment}
According to the guifi.net model, which is a vivid example of the commons model, service providers deliver their services through the network's infrastructure and get compensated for them. The network also employs cost sharing and compensation mechanisms in order to facilitate the participation of professional SPs and operators in the CN. SPs are able to contribute infrastructure, invest money to the CPR or compensate the network for using it and receive payment from providing services to their customers \cite{Baig:2016:MCN:2940157.2940163}, \cite{NETCOMMONS_D2_4}. Participants who use a significant amount of resources from the infrastructure commons are obliged
to sign an agreement for economic activities and for the participation in the compensation system.
The compensation system is built around the following basic concepts of a CN:
\begin{itemize}
\item Costs: The costs a CN involve its capital (CAPEX) and operational (OPEX) expenditure.
\item Consumption: The SPs and operators utilize the network's infrastructure to deliver services to their end users contributing in this way in the network costs.
\item Contribution: Participants economic contributions to the CPR.
\item Balance: The compensation system then calculates a balance of contribution and consumption for each participant. If the balance is negative this means that the consumption of network resources has overcome the contributions invested in the infrastructure and there is the need for extra compensation. Otherwise, a positive balance means that the acquired funds can be reinvested in the CN.
\end{itemize}
The adopted commons model manages to increase competition by equalizing business opportunities (SPs pool their assets), facilitating the entry of professionals due to cost-sharing mechanisms, not employing physical and network layer intermediates, enabling service delivery to the whole network and encompassing tasks of simple equipment reconfiguration instead of changing suppliers. The participation from both professionals and volunteers are allowed in the balance, making the model unique on building and managing network infrastructure in a way that is
sustainable, fair and non-profit.
\end{comment}
\section{CN STAKEHOLDERS AND INCENTIVES FOR PARTICIPATION}
CNs are complex socio-technical systems, built and operated by humans, that combine technological infrastructure with multiple social dimensions [C. Fuchs [3], M. Gurstein [4]]; it is thus possible to say that they form a kind of society [Schuler [5]] whose physical existence is based on technological elements, \ie infrastructure and software. In fact, the technical infrastructure itself can be considered a result of social interaction among network participants and an ex-ante condition for further political, socio-cultural and economic interactions among the community members.
As we have seen in section \ref{network_sustainability}, the sustainability of community networks is a multi-dimensional (political, socio-cultural, economic) concept which is highly dependent on CN user participation.
It is important to note that network participants are not homogeneous. Different types of users weigh their participation differently depending on their interests, status, obligations, goals and motivations. This heterogeneity can be expressed using four types of stakeholders: \textit{Volunteers, Users, Professionals, Public administrations}, as they were originally established in guifi.net. Each stakeholder type possesses its own incentives for participating.
In general, the incentives for joining a CN can be distinguished according to the dimensions adopted for the concept of sustainability. They can be \textit{politically, socio-culturally or economically} oriented.
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{figures/stakeholder_incentives}
\caption{Primary participation incentives per stakeholder type.}
\label{incentives_st}
\end{figure}
\subsection{Volunteers}\label{volunteer}
The term \textit{volunteer} is used to define an individual or a group that offers services for a cause without expecting any compensation in return. In the context of CNs, volunteers are the people that offer their services for the creation of a CN. They are the initiators of the CN project. Their interests reside in aspects of \textit{neutrality, privacy, independence, creativity, innovation, DIY, or protection of consumers' rights}.
Almost always, (a subset of) these people retain their interest in the network after its original deployment phase, holding the responsibility for the operation and maintenance of the CN~\cite{DBLP:journals/corr/abs-1207-1031}. And more often than not, they take an active role in the network expansion, either through helping with the technical matters and/or organizing informational and training events for potential participants.
Volunteers usually form the core team responsible for the management and operation of the network. Managing the network infrastructure refers to designing, planning, developing and maintaining the network. These tasks are implemented by organizing and coordinating the efforts of participants, finding resources, or resolving issues that arise from the use of the network, \ie technical problems, conflicts, etc. In their effort to regulate all these matters, volunteers utilize software tools for the network infrastructure and governance tools for the participation of users. It is also not uncommon for volunteers to create a legal entity to represent the network to third parties (\ie government, third-party organizations, companies, Internet Service Providers (ISPs)). This lets them have a voice and interface with third parties on legal and regulatory matters, but also get involved in financial transactions (\eg user subscriptions, fundraising, purchase of equipment).
The volunteer groups usually comprise people that cumulatively possess knowledge and expertise over a wide set of areas~\cite{aichele2006wireless}:
\begin{itemize}
\item{In technical matters concerning the development of a computer network, ranging from installing and fine tuning antennas to configuring connections, IP addresses and network routes, and troubleshooting problems at both hardware and software level;}
\item{In legal matters such as national and international legislation around technology and civil law, including the provisions of licenses, regulation status, and possible agreement frameworks for the operation of the network;}
\item{In potential sources of funding, primarily from public administrations at national and European Union level.}
\end{itemize}
The volunteer groups typically involve technology enthusiasts, radio amateurs, hackers, activists, and academics.
Their motives have a strong bias towards political and socio-cultural values and ideals, which is not met in any of the other three stakeholder groups. Experimentation with technology, open software and do-it-yourself (DIY) tools, sensitivity to privacy and network neutrality, the desire to bridge the digital divide, but also commitment to the community spirit and social movement, participatory governance and decision-making, and protection of consumers' rights, count as primary motives of people that lead the CN initiatives, mixing in variable ways both across and within the different CNs. Economic incentives are also present in a few cases, albeit to a smaller extent. On the contrary, in many cases, the members of the volunteers' group end up investing a lot of personal effort, time, and money to the CN initiative, without direct financial return of any kind.
The incentives of the volunteer groups are not necessarily static throughout the lifetime of the CN initiatives. There are instances where these have evolved over time, adapting to the group membership (\eg members joining or leaving the group), new technologies that were made available over time, and the evolution of the surrounding legal/regulatory environment.
\subsubsection{Political incentives}\label{subsec:vol_political}
Political causes often serve as driving forces for the groups that lead CN initiatives. A characteristic example of the principles underlying such initiatives is found in the declaration by the guifi.net foundation, the volunteers' group that has developed and still operates the guifi CN in Catalonia, Spain~\cite{Perez2016, barcelo2014bottom, 5514608}:
\begin{itemize}
\item Freedom to use the network, as long as the other users, the contents, and the network itself are respected.
\item Freedom to learn the working details of network elements and the network as a whole.
\item Freedom to disseminate the knowledge and the spirit of the network.
\item Freedom to offer services and contents.
\end{itemize}
Such causes often prove to be strong enough to fuel these groups' active involvement with the CN despite the effort, time and money this requires.
\begin{description}
\item[Bridging the digital divide:]
CNs have thrived in rural areas where access to the Internet, and ICT services more generally, was (and, in many cases, still is) poor or non-existent. The main reason for this is the reluctance of commercial operators to invest in fixed broadband infrastructure in remote, sparsely populated areas, because they do not deem this cost-efficient. Internet connectivity alternatives based on wireless technologies (satellite Internet, cellular data), when available, are usually expensive and/or of lower quality.
The right to (broadband) connectivity is a matter of equal opportunities in the contemporary digital society, and digital illiteracy puts populations deprived of it at a disadvantage. The launch of CN initiatives has many times been the response to this threat. Interestingly, the volunteers' group is not always formed by local residents suffering the digital divide (as is the case with the B4RN~\cite{NETCOMMONS_D2_2} and guifi networks~\cite{NETCOMMONS_D1_2}), but also by visitors or people with origins from the area in question, as is the case with the i4Free and Sarantaporo.gr networks, respectively. In more detail \cite{NETCOMMONS_D1_2}, \cite{NETCOMMONS_D2_2}, \cite{NETCOMMONS_D2_3}:
\begin{itemize}
\item \textbf{guifi.net} started in Osona, a county in Catalonia, in 2004. A group of people decided to create a network that would serve remote rural areas. Internet connection through conventional ISPs was not available due to the high cost of the network deployment. The locals then decided to solve the problem themselves by creating a wireless network throughout the region.
\item \textbf{Sarantaporo.gr}. People with origins from the area of Sarantaporo in north-western Greece, residents of Athens and abroad, originally wanted to create a website for the village. Yet, to their surprise, they realized that there was no network connection other than telephone modems and cellular data. This led a small group of people to put in the effort and build a wireless community network. With the help of the Greek Foundation for open-source Software and local academic institutions, as well as funding from EU R\&D projects, the network grew over time, currently covering 14 villages in the broader area and being used by much of the local population.
\item \textbf{i4Free}\footnote{\url{https://openhardware.ellak.gr/tag/i4free-gr/}}\footnote{\url{https://www.facebook.com/i4free.gr/}}. This is a network that started from the initiative of a German engineer and professor on an island of Greece with poor Internet connectivity. Anticipating the importance of Internet resources and equipped with technical knowledge, the initiator of i4Free created a small network at his own expense so that locals could have access to ICT services.
\item \textbf{B4RN}. The Broadband For Rural North (B4RN) initiative in Lancashire, UK, aimed at bridging the digital divide, was started in December 2011 by a local volunteer group led by a networking expert, Barry Forde. Contrary to other CNs, B4RN is based exclusively on fiber and was developed as a broadband access alternative more affordable and cost-efficient than technologies such as satellite and cellular.
\end{itemize}
\item[Openness, net neutrality, and privacy:]
\label{net_privacy}
These highly controversial subjects have served as primary motivations for CN initiatives. The principle of net neutrality dictates that traffic within the network should be treated in an equal manner, independently of its content or source. The data that is communicated across the network is not subject to discrimination. However, this is not the case on the Internet, where large ISPs are able to block and prioritize traffic without the users' consent~\cite{dischinger_2010, miorandi_2013, dischinger_2008}, often also disregarding basic legislation on freedom of communications.
Moreover, volunteers are often interested in accessing ICT services without having to compromise their \textbf{privacy}. This applies to technology enthusiasts, activists and users in general that wish to protect their private content.
CNs such as the French FFDN and the German Freifunk declare privacy/anonymity and net neutrality as integral parts of their manifesto and incorporate them in their fundamental operation principles.
\begin{itemize}
\item The \textbf{FFDN}\footnote{\url{https://www.ffdn.org/}.} was founded in 2011 as an umbrella organization embracing 28 CNs operating across France (plus one in Brussels, Belgium), including the most popular French CN at that time, FDN. This happened after a call by FDN's president Benjamin Bayart and other FDN active volunteers to people across France to start building their own CNs. The call came as a response to several events that made an impact on the debate about digital rights (\eg WikiLeaks, Cablegate, DataGate). All CNs under the FFDN association adhere to the values of collaboration, openness and support of human rights (freedom of expression, privacy) espoused by the Free Software Movement.
\item \textbf{Freifunk} is an open initiative that supports free computer networks in Germany, counting about 150 local communities with more than 35,000 access points. It started its operation in Berlin, attracting many artists, activists and tech enthusiasts from all over Europe. The incentives behind the initiative are reflected in the very early statement of its basic principles of operation: (a) public and anonymous access; (b) lack of commercial flavor and censorship; (c) decentralized operation and community ownership.
\end{itemize}
\item[Autonomy and alternative communication models:]
These are common motives for the original deployment and subsequent operation of CNs~\cite{lawrence2007wireless}, especially in urban areas, where the digital divide threat is much less pronounced. Community networks such as Consume\footnote{\url{http://consume.net/}}\footnote{\url{http://wiki.p2pfoundation.net/Consume}} and Free2Air\footnote{\url{http://www.free2air.org/}}\footnote{\url{http://wiki.p2pfoundation.net/Free2Air}} started out representing alternative approaches to the commercial Internet provision, aiming at higher freedom and control over personal communications. In other cases, such as guifi.net, which started as an attempt to bridge the digital divide, such political purposes emerged as an equally strong motivation factor, especially when the number of network connectivity alternatives increased. In more detail:
\begin{itemize}
\item \textbf{Consume.} This CN was one of the first to be conceived and deployed in Europe. Its development was led by James Stevens and Julian Priest and a number of people organized around them. Although the original motivation was to save Internet access fees for conducting business, the initiative evolved into an attempt to ``short-circuit'' what, by that time, had become the {\it ``anti-competitive telecommunications market model''}~\cite{NETCOMMONS_D2_2}.
\item \textbf{Free2Air.} This initiative started in East London as an alternative to commercial Internet provision. It was run by a small number of artists and other individuals, a central figure being Adam Burns, an IT security expert. Together with a few others, he set up the network, addressing the main technical tasks such as network routing and planning. Burns describes Free2Air as a largely political project, attempting to put into practice ideas about control and ownership of personal communication. He recalls that one of the two main motivations for starting the network was exactly to try to de-mediate personal communication and get more control over communication needs~\cite{NETCOMMONS_D2_2}. Burns himself was involved in significant political activities, participating in debates on the idea of commons and what it implies for governance, legal and policy issues, but also on the alternative organization and autonomy of communication.
\item \textbf{guifi.net.} The guifi CN has formalized its alternative approach to network operation and management in the context of the economic theory of commons. The guifi.net foundation promotes the view of their CN as a CPR and applies the principles of CPR management, as set out by the Nobel laureate economist Elinor Ostrom~\cite{Ostrom1990}, to their CN management.
\end{itemize}
\end{description}
\subsubsection{Socio-cultural incentives}
Socio-cultural motives are strongly relevant to the participation of volunteer groups in CN initiatives. There are instances that such motives stand behind the original conception and deployment of CNs; in other cases, these may emerge in a later stage across the people who run and manage the CN.
Concepts of intrinsic motivation such as \textbf{creativity, innovation}, and enjoyment are found in the group of volunteers in CNs. Education and knowledge acquired from the interaction with other network members and involvement with the network\,\cite{4124126} tend to be noted and appreciated by the members of a CN.
\begin{description}
\item[Experimentation with technology and DIY culture:]
Several initiatives are driven by hackers, technology enthusiasts, and academics who enjoy experimenting with network and radio technologies. The involvement in such a community presents them with a unique opportunity to further enhance their technical knowledge and practice it over real networks.
\begin{itemize}
\item \textbf{AWMN.} The AWMN was founded in 2002 by a group of people involving primarily network technicians and enthusiasts and radio amateurs. The network was characterized by a culture of experimentation and improvisation \cite{NETCOMMONS_D2_2}.
For the people leading the activity, it was a great place to test and enhance their knowledge and create things. This involved the manufacturing of antennae, the production of feeders, and the design of mesh protocols for routing traffic over the network.
This experimentation and hacking culture is best reflected in the unparalleled offer of applications and services that were developed for AWMN, \ie to work as native services without need for public Internet connectivity, including games, libraries, network monitoring tools, DNS solutions, and experimental platforms.
\item \textbf{Ninux.} Experimentation and hacking were the primary motivation behind setting up the Ninux CN in Italy in the early 2000s. This is directly reflected in its name, as ``Ninux'' stands for ``No Internet, Network Under eXperiment''. As with AWMN, Internet access is not officially offered by Ninux, which operates as an experimental platform for decentralized protocols, policies and technologies.
\item \textbf{Funkfeuer.} Funkfeuer is a free experimental wireless network across Austria. It was built and is currently maintained by a group of computer enthusiasts with different motivations and interests. Funkfeuer is committed to the idea of DIY.
\end{itemize}
\item[Community spirit and altruism:]
Altruism, often coupled with belief in community ideals emerge as important motivations for the active involvement of volunteers' groups in CNs.
\begin{itemize}
\item \textbf{B4RN.} The community ideals are highly prioritized in B4RN. The volunteer group has been set up to operate as a community benefit society which ``can never be bought by a commercial operator and its profits can only be distributed to the community.'' This was a decision made early on by the few people who initiated the CN.
\item \textbf{Sarantaporo.gr.} Sarantaporo.gr involves people who are activists in the area of commons and supporters of community ideals. They place a lot of emphasis on cultivating these ideals in the residents of the area through parallel activities and social events. Even the small yearly subscriptions that aim at the maintenance of the network infrastructure are determined at the village/community rather than the individual level.
\item \textbf{i4Free.} The leading figure behind the i4Free CN in Greece is also a strong supporter of community life and ideals. He has spent enormous amounts of time on training and educational events trying to build a community around the CN, even without much success, as he admits \cite{NETCOMMONS_D2_2}.
\end{itemize}
Altruism and the spirit of community are also evidenced in other CNs, where the primary motivation of volunteers is the experimentation and hacking culture or other political reasons.
Thus, in AWMN and Ninux, guides and instructions have been developed with the express purpose of providing information and recommendations to interested potential participants on buying and setting up their own nodes.
\end{description}
\subsubsection{Economic incentives} \label{subsec:vol_economic}
Economic incentives are rarely relevant to the volunteers' group. These groups are mostly organized as nonprofit organizations and, in several cases, their members end up funding the initiative one way or another. Yet, there are some instances that such incentives are present, or were present at some stage of the CN development. In all cases, the underlying idea, when present, is how to save money with CNs compared to commercial alternatives rather than how to make money out of the CN initiative.
\begin{itemize}
\item \textbf{Consume.} One such case is the Consume network, one of the very first CN initiatives, which set a recipe for other CNs across Europe. James Stevens ran a technology incubation business offering web, live streaming and video distribution services through a leased optic fiber connection. He came up with the idea to connect buildings through wireless mesh links as a way to bypass the expensive license costs and regulatory constraints related to extending the fiber communication across the buildings.
Yet, the initiative soon acquired more political purposes, as a movement against anti-competitive practices that protected certain monopolistic financial interests in the UK. And the two people that led the development of the network, Stevens and Julian Priest, ended up undertaking almost the full financial cost of the network deployment.
\item \textbf{B4RN.} In the rural areas covered by B4RN, there is no fixed commercial broadband infrastructure available, since commercial operators do not consider it worthwhile in financial terms. Yet, there are other options, such as satellite or cellular, that are typically more expensive and of lower quality. B4RN offers better connections at more affordable prices than the competing solutions.
\item \textbf{Ninux.} Indirect economic benefits can come through the enhancement of an individual's human capital. From their involvement with the network, individuals in Ninux have acquired knowledge needed to find jobs in the ICT sector.
\end{itemize}
\subsection{Users}
Users are people that join the network for different kinds of reasons. Their reasons for participating may refer to network access, acquiring connectivity and utilizing the resources of the network for data exchange, communication or gaming. A user may or may not pay a fee for connecting to the network; examples of both cases are found in practice, \ie in different CNs or even within the same CN. It is also possible for users to join the network for non-technical reasons: some may be interested in exercising activism, while others wish to ensure data privacy. Receiving quality services provided by the professionals is another typical example. In cases where CN users are interested only in \textit{network access} and \textit{professional service consumption} and offer some kind of reimbursement in exchange, users are considered \textit{customers}. Practically, their participation is accompanied by a connectivity fee paid directly to the CN, or consumption fees paid to the professionals for consuming their services and indirectly to the network.
User participation levels exhibit high variance. They may be highly active and participate in the events organized by volunteers or other types of
collective activities, provide technical experience, develop apps, and devote personal time and efforts
to the network, get involved in the service offering of professionals and contribute to the economic activity of the CN; they may set up a node without contributing personally to the activities
of the community; or they may use the CN to get Internet access or access to local services without
contributing in any way (economic, hardware or personal efforts). The last type of users are termed \textit{free riders}. However, their participation may still benefit the network in terms of the network effect, \ie the larger the network, the easier it is to enter, so even passive users such as free riders can potentially enable others to join.
Likewise variable is their motivation for joining the CN. Decisive for many of them is the expectation of cheap, or even free, Internet access. For others, the CN is viewed as a perfect opportunity to acquire new knowledge and experiment with new technologies. Socializing and becoming part of a bigger community is also reported as an important motivation for participation in the CN.
Finally, political causes are also evidenced as motives for user participation, albeit to a smaller extent than in volunteer groups.
\subsubsection{Political incentives}
Although users of the network are not involved in its initial deployment and operation, they may too experience political motives for participating in the network.
Many CNs, as seen in section \ref{subsec:vol_political}, have been created under aspirations of privacy and net neutrality, autonomy and self-organization, providing an alternative to existing communication models and bridging the digital divide in rural areas poorly served by ICT operators.
The ideals underlying the initial development of these CNs are often passed on to some of their members (the larger the CN, the harder it is to find political causes uniting the whole community behind them).
\begin{description}
\item[Openness, net neutrality and privacy:]\label{privacy_users} Users often participate in CN initiatives in an attempt to escape the privacy concerns and tracking/monitoring software of the public Internet.
The aspects of privacy and neutrality have a strong role in networks that utilize the Picopeering agreement\footnote{\url{http://www.picopeer.net/PPA-en.shtml}} and are part of the movement for open wireless radio networks\footnote{\url{https://openwireless.org/}}. The Picopeering agreement is a baseline template that formalizes the interaction between two peers of the network. Its basic properties include:
\begin{itemize}
\item Node owners agree on free exchange of data into, out of or across a network without any interference.
\item Node owners agree on the provision of open communication by publishing relevant peering information, subject to a free license, and contact information.
\item There are no guarantees of service level.
\item Node owners can formulate use policies as long as they do not interfere with the basic parts of the Picopeering agreement.
\item Local amendments can take place by the will of node owners.
\end{itemize}
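The baseline terms above can be thought of as a small data model against which local policies are checked. The following sketch is purely illustrative; the field and function names are our own and do not come from the Picopeering agreement text.

```python
# Toy model of the baseline Picopeering terms: free transit of data,
# published peering/contact information, no service-level guarantees,
# and optional local amendments that must not conflict with the core
# terms. Field names are hypothetical, chosen for illustration only.

from dataclasses import dataclass, field

@dataclass
class PeeringPolicy:
    free_transit: bool = True               # no interference with data in/out/across
    peering_info_published: bool = True     # open communication, free license
    guarantees_service_level: bool = False  # the agreement offers no SLA
    local_amendments: list = field(default_factory=list)

def conforms_to_baseline(policy: PeeringPolicy) -> bool:
    """A local policy conforms only if the core terms stay intact."""
    return (policy.free_transit
            and policy.peering_info_published
            and not policy.guarantees_service_level)

ok = conforms_to_baseline(PeeringPolicy(local_amendments=["rooftop access hours"]))
violating = conforms_to_baseline(PeeringPolicy(free_transit=False))
```

In this reading, local amendments are free-form and always allowed, while a policy that blocks transit or withholds peering information fails the baseline check.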
\item[Autonomy and self-organization:]
The participation in CN groups cultivates feelings of autonomy and self-organization. Self-organization is also reflected in the way that new users connect to the network, where they have to rely on their own resources and on the voluntary assistance of experienced network members.
Being part of an independent network satisfies personal ideological aspirations for a self-organized network and autonomous use. The ability to participate in collective decision-making and contribute to a ``common'' network infrastructure following an alternative model of ICT access is itself an experience for users interested in participating in a community of ``commons''. In the study found in~\cite{lawrence2007wireless}, 94.5\% responded that they experienced autonomy in their groups and expressed their own opinions freely.
\end{description}
\subsubsection{Socio-cultural incentives}
A CN is a characteristic example of participatory involvement, where users dedicate their efforts and time to the network~\cite{Vega2014}. A number of services and applications, combined with other activities that one way or another revolve around the CN, offer users the opportunity to communicate, educate and entertain themselves, thus further motivating their participation in the network~\cite{szabo2007wireless}, \cite{pedraza2013community}.
\begin{description}
\item[Experimentation and new ICT knowledge acquisition:] Technology enthusiasts participate in the network to experiment with technology and new gadgets.
They find in a CN a testbed that is as realistic as possible for trying software they develop and hacked code, making network speed measurements, and playing with network mapping and management tools.
Other users view CNs as a ``place'' where they can acquire new skills in computer and network use. They are willing to invest personal effort in this but, at the same time, they expect to get triggers, help and guidance from the experts that know more about the subject.
AWMN, Ninux, and Freifunk are CNs built by people with solid technical background and technology lovers. It is no surprise that many of the users these CNs attract tend to share similar interests.
In CNs that were initiated by volunteers with technical background, the amount and type of services, applications and self-produced software increased greatly within the community.
In such CNs, users with these kind of motives appreciate getting access to:
\begin{itemize}
\item communication services with VOIP and forums, mails, instant messaging,
\item data exchange services with servers, community clouds, file sharing systems,
\item entertainment services with games, applications, video and audio broadcasting,
\item information and educating services with online seminars, e-learning platforms, wikis, monitoring tools, search engines.
\end{itemize}
In a study found in~\cite{lawrence2007wireless}, 78\% of the people that filled in questionnaires reported that the CN satisfied their personal needs, and 87\% were identified as technology enthusiasts.
\item[Social interaction:]
The smooth operation and development of a CN demands cooperation links at the network infrastructure level but also at the social level.
The works in~\cite{4468734} and~\cite{4444315} state that the social layer in P2P+ systems is often neglected and left out of the design of incentive mechanisms.
Social incentives include socially-aware mechanisms that may relate to concepts such as visibility, acknowledgment, social approval, individual privileges and status. This social activity takes place within the network's technical limits~\cite{Mcdonald02socialissues}.
\textit{Social motives} are common in the participation of users and affect network growth and operation~\cite{6979946, 4124126}.
In CNs, participants are able to share their ideas and interests, participate in groups, interact and communicate with other network members just like they would in any other online or physical community.
Communication within or outside the network is an easily observable motive and one of the most popular reasons why users take part in online communities. Social networking and communication tools raise great interest and remain active even when other tools and services have a drop in their utilization.
Finally, the ability to compete with other people and satisfy one's self esteem through the involvement in the community, or receive a certain type of credit by others in the community, are motives not as easy to distinguish but still present in reported studies.
\begin{itemize}
\item In the study in~\cite{lawrence2007wireless}, results showed that 91.2\% enjoyed interacting with the community, 88\% felt that their efforts would be returned by other community members, and 80.5\% expressed that the community allowed them to work with people that they could trust and who shared similar interests.
\item A socio-technical study in rural northern Thailand showed that CN users with access to the Internet took great interest in messaging, email, online social networks, and online gaming services~\cite{Lertsinsrubtavee:2015:UIU:2837030.2837033}. The social activity among the users exhibited a high degree of locality, meaning that people used the Internet to interact with people within the same CN.
\item Similar results were found in a study of Internet service in a rural village in Zambia~\cite{Johnson:2010:IUP:1836001.1836008}. The implication is that local relationships can be of great importance in a CN~\cite{kornhybrid} and that even though Internet service is dominant in certain community networks~\cite{6673334}, if similar services could be applied at a local scale, they would have the potential to make an impact on CN users.
\end{itemize}
\end{description}
\subsubsection{Economic incentives}
Motives of an economic nature have been reported in the literature among CN users. Namely, users expect benefits of an economic nature from their participation in the network, which may be direct or indirect.
\begin{description}
\item[Direct economic benefits:] One of the main reasons why users join CNs is that they can get Internet access at lower cost than alternative commercial solutions, offered by telecom operators.
\begin{itemize}
\item \textbf{Sarantaporo.gr} offers Internet connectivity at a small subscription fee that is charged on a per-village basis. The resulting cost per network user goes down with the number of people sharing the Internet access and is several times smaller than what the same users would need to pay if they individually subscribed to available commercial solutions. In fact, the perception of the CN as ``Internet for free'' has raised many obstacles to a more participatory stance in sharing the CN's operational expenses.
\item In \textbf{AWMN}, members of the Association pay a small subscription fee in exchange for rights and involvement in decision-making processes. Non-members of the Association are not required to pay any kind of fee except for the expenses of their own equipment.
\item One of the strong points in the evolution of \textbf{B4RN} has been its capability to offer fiber connectivity and Internet speed at much more favorable prices than alternative commercial solutions did. Part of these savings relates to partly crowdsourcing the cost and effort for digging, which is a strong indication of how these initiatives can mobilize local skills and resources.
\end{itemize}
\item[Indirect economic benefits:]
Users do not always identify economic benefits (only) with the capability to save money for Internet access.
\begin{itemize}
\item In \textbf{Sarantaporo.gr}, young people (aged 18--35) view the CN as a path to information about job and further education opportunities; farmers search for better markets for their products and cheaper suppliers of raw materials; and locals running coffee shops or taverns join the network in the anticipation that visitors appreciate the Internet connectivity feature when choosing where to go.
\item In \textbf{AWMN}, some of those who contributed to its development also built business activity around the CN. They had Wi-Fi expertise and broader technical knowledge and opened shops to provide infrastructure for the network~\cite{NETCOMMONS_D2_2}.
\end{itemize}
\end{description}
\subsection{Professionals}
The professional is the stakeholder type least often encountered in CN initiatives.
In fact, and to the best of our knowledge, guifi.net has been the first and only CN instance with clear
and well articulated provisions for the involvement of professionals in the CN~\cite{Baig:2016:MCN:2940157.2940163}.
At first glance, these entities do over the CN what they do over any other network. However, the legal provisions and conditions of running a business over the CN
are different. In the case of the guifi.net CN, the guifi.net foundation prepares licences that serve the
commons purposes and ensure that any professional entity providing services over the network will also contribute to the network expansion and maintenance~\cite{Baig2015150}.
The CN is a great opportunity for users to receive services, which may include Internet access, cloud storage, video streaming and video on-demand. These services are offered by professionals who reach the users via the infrastructure of the network. Professionals are usually companies, ISPs, small businesses or individuals, i.e., entities that use the network to provide commercial services, promote their expertise in a specific field and get compensated for it. In contrast to traditional telecommunication companies, where users are seen solely as consumers, users in CNs can themselves become professionals and take part in the service provision.
The main focus of professionals is \textit{demand, service supply and stability of operation}. \textit{Demand} stands for finding customers in need of their offers within the network. For example, the expansion of the network is usually beneficial for professionals, because it increases their potential customer base; however, it may also increase the competition. \textit{Service supply} is the principal role of professionals for network users. Professionals are also interested in the quality of the service they provide to end customers, as it is their main distinguishing feature within the network's market. \textit{Stability of operation} is inherently tied to the network's operation: for professionals to provide a qualitative and stable service, the network has to offer uninterrupted connectivity to both professionals and users. Professionals are not independent of the CN, however. They contribute to the technical infrastructure directly, by contributing actual hardware, or indirectly, by providing economic contributions. In this way, they bear a share of responsibility for the network operation, as they are themselves part of it.
The incentives for the participation of professionals in the network are primarily, if not exclusively, economic.
\subsubsection{Economic incentives}
\begin{itemize}
\item The most advanced network so far, is the \textbf{guifi.net} which incorporates a number of organizational rules to promote economic interaction within the network and motivate professionals to participate and offer their services using the CN technical infrastructure. Professionals are able to participate in the network, provide their services over it and get compensated for them.
\item \textbf{B4RN}, a fiber community network located in Lancashire, England, employs a community funding model composed of shares for each investment, support for loans from the community and subscription fees for the participants. Community members can acquire B4RN shares, and the network's expenses are covered by its own shareholders. B4RN utilizes a subscription model for both households and non-domestic users, composed of a connectivity fee and different service fees for different types of users.
\end{itemize}
\subsection{Public administrations}
Public administrations may interact with a CN in different ways:
\begin{itemize}
\item by contributing to the deployment and growth of the CN through either funding the initiative or sponsoring network equipment.
\item by positioning as a user of the CN services. In case of municipal authorities, they might let a CN manage and maintain equipment they own in return for network connectivity.
\item as a regulating body facilitating or placing obstacles to its expansion and growth, or by permitting the use of public space and resources by a CN (\eg as antenna or CN node installation sites).
\end{itemize}
It is possible to distinguish other types of groups that may get involved in the network such as Universities and other organizations. Depending on their level of participation they can sign collaboration agreements with the legal entity of the CN and contribute economic or infrastructure resources with or without compensation.
\subsubsection{Political incentives}
Public bodies may serve different political causes by participating in a CN and/or funding its activities.
First of all, it is possible to implement EU-promoted policies against the digital divide and in favor of equal opportunities in the digital economy and society. CNs have shown over time their potential to mobilize local communities and altruistic forces in society. They have managed to offer network connectivity in areas that are not attractive to commercial operators and might otherwise need generous public subsidies to cover. Second, CNs often strengthen community links and raise awareness of issues concerning the local societies. Ideally, CNs train their users to become more engaged with the commons. Third, on a more selfish and short-sighted note, local administrations (such as municipalities) can advertise the provision of network services as a political achievement that increases their chances of re-election.
\begin{itemize}
\item \textbf {Sarantaporo.gr} The Greek Foundation for open-source software, an initiative with the participation and support of the whole Greek academia, has sponsored the network equipment for the initial deployment of the CN in 2013. The University of Applied Sciences of Thessaly has provided them with connectivity to the Internet through its access to the Greek Research and Education Network (GRNET). Additional funds through the participation in the EU FP7 CONFINE project allowed the network expansion to 14 villages in the area.
\item \textbf{guifi.net} The local authorities of many villages in Catalonia have allowed the foundation to dig public space and lay down fiber for expanding the network coverage to these areas.
\end{itemize}
\subsubsection{Economic incentives}
Public administrations can participate in the network because this may prove profitable, just like professional entities do.
\begin{itemize}
\item \textbf{guifi.net} In the case of guifi.net, public administrations can fund the network expansion through the purchase of equipment in return for added-value services over the network. In other words, they can invest in the network and get compensated for their investment.
\end{itemize}
\begin{table*}[t]
\centering
\resizebox{12cm}{!} {
\begin{tabular}{|l|c|c|c|}
\hline
\textbf{CN} & \textbf{Location} & \textbf{Creation} & \textbf{Initiators} \\
\hline
AWMN & Athens, Greece & & \\
\hline
B4RN & Lancashire, UK & & \\
\hline
Consume & London, UK & & \\
\hline
FFDN & France, Belgium & & \\
\hline
Free2Air & London, UK & &\\
\hline
Freifunk & Berlin, Germany & & \\
\hline
Funkfeuer & & & \\
\hline
guifi.net & Catalonia, Spain & & \\
\hline
i4Free & Fokida, Greece & & \\
\hline
Ninux & Italy & &\\
\hline
Sarantaporo.gr & Elassona, Greece & &\\
\hline
\end{tabular}
}
\caption{Community networks studied here.}
\label{tab:mechanism_stakeholder}
\end{table*}
\section{Introduction}
\vspace{-0.08in}
\blue{We consider the design of a decentralized cooperative
localization (CL) algorithm for a group of communicating mobile
robots. Using CL, mobile robots in a team improve their positioning
accuracy by jointly processing inter-robot relative measurement
feedbacks.
Unlike classical beacon-based localization algorithms~\cite{JL-HFD:91}
or fixed feature-based Simultaneous Localization and Mapping
algorithms~\cite{MWMGD-PN-SC-HFDW-MC:01}, CL does not rely on external
features of the environment.} As such, this approach is an appropriate
localization strategy in applications that take place in a priori
uncharted environments with no or intermittent GPS
access.
\blue{CL creates strong correlations among the local state estimates of the robotic team members. Similar to any state estimation
process, accounting for these cross-correlations is crucial for the
consistency of CL algorithms. Since correlations create nonlinear
couplings in the state estimate equations of the robots, to
produce consistent results, initial implementations of CL were fully
centralized. These schemes gathered and processed information \emph{at
each time-step} from the entire team at a single device, either by
means of a leader robot or a fusion center (FC), and broadcast back
the estimated location results to each
robot~\cite{SIR:00,AH-MJM-GSS:02}. Multi-centralized CL, wherein each
robot keeps a copy of the state estimate equation of the entire team
and broadcasts its own information to the entire team so that every
robot can reproduce the centralized pose estimate is also proposed in
the literature~\cite{NT-SIR-GBG:09}. Besides a high-processing cost
for each robot, this scheme requires an all-to-all robot communication
at the time of each information exchange. Developing consistent CL
algorithms that account for the intrinsic cross-correlations of state
estimates with reasonable communication, computation and storage costs
has been an active research area for the past decade.} This problem
becomes more challenging if in-network communications fail due to
external events such as obstacle blocking or limited communication
ranges.
\blue{For applications in which maintaining multi-agent connectivity is challenging,
challenging,~\cite{AB-MRW-JJL:09,POA-CR-RKM:01,LCC-EDN-JLG-SIR:13,HL-FN:13,DM-NO-VC:13,LL-TS-SIR-WB:16}
propose a set of algorithms in which communication is only required at
the relative measurement times between the two robots involved in the
measurement. As such, these schemes can update only the state estimate
of one or both of these robots. }To eliminate the tight connectivity
requirement, instead of maintaining the exact prior robot-to-robot
correlations, in~\cite{AB-MRW-JJL:09} each robot maintains a bank of
EKFs together with an accurate book-keeping of what robot estimates
were used in the past to update these local filters. Computational
complexity, large memory demand, and the growing size of information
needed at each update time are the main
drawbacks. \blue{In~\cite{POA-CR-RKM:01,LCC-EDN-JLG-SIR:13,HL-FN:13,DM-NO-VC:13},
also the prior robot-to-robot correlations are not maintained, but are
accounted for in an implicit manner using Covariance Intersection
fusion (CIF) method. Because CIF uses conservative bounds to account
for missing cross-covariance information, these methods often deliver
highly conservative estimates. To improve estimation
accuracy,~\cite{LL-TS-SIR-WB:16} proposes an algorithm in which each
robot, by tolerating an $O(N)$ processing and storage cost, maintains
an approximate track of its prior cross-covariances with
others.
In another approach to relax
connectivity,~\cite{SEW-JMW-LLW-RME:13} proposes a leader-assistive CL
scheme for underwater vehicles.} This algorithm is a decentralized
extended information filter that uses ranges and state information
from a single reference source (the server) with higher navigation
accuracy to improve localization
accuracy of underwater
vehicle(s) (the client(s)). In this scheme the server interacts
with each client separately and there is no cooperation between the
clients.
\blue{Despite their relaxed connectivity requirement, the algorithms
of~\cite{AB-MRW-JJL:09,POA-CR-RKM:01,LCC-EDN-JLG-SIR:13,HL-FN:13,DM-NO-VC:13,LL-TS-SIR-WB:16,SEW-JMW-LLW-RME:13}
are also conservative by nature, because they do not enable other
agents in the network to fully benefit from measurement
updates. Recall that correlation terms are means of expanding the
benefit of robot-to-robot measurement updates to the entire team
(see~\cite{SSK-SF-SM:16} for further details). Therefore,
\emph{tightly-coupled} decentralized CL algorithms that maintain
the correlations among the team members result in better localization
accuracy. One such algorithm obtained from distributing computations
of a joint EKF CL algorithm is proposed in~\cite{SIR-GAB:02}, where
the propagation stage is fully decentralized by splitting each
cross-covariance term between the corresponding two robots. However,
at update times, the separated parts must be recombined, requiring either all-to-all robot communication or bidirectional all-to-fusion-center communication. Another decentralized CL algorithm based on
decoupling the propagation stage of a joint EKF CL using an
alternative but equivalent formulation of EKF CL is proposed
in~\cite{SSK-SF-SM:16}. Unlike~\cite{SIR-GAB:02},
in~\cite{SSK-SF-SM:16} each robot can locally reproduce the updated
pose estimate and covariance of the joint EKF at the update stage,
after receiving an update message only from the robot that has made
the relative measurement. In both of these algorithms, for a team of $N$
robots, each robot incurs an $O(N^2)$ processing and storage cost as
it needs to evolve a variable of the size of the entire covariance matrix
of the robotic team.}
Subsequently,~\cite{EDN-SIR-AM:09} presents a maximum-a-posteriori
(MAP) decentralized CL algorithm in which all the robots in the team calculate
parts of the centralized CL. All the algorithms above assume that
communication messages are delivered perfectly at all times. A decentralized CL
approach equivalent to a centralized CL, when possible, which handles
both limited communication ranges and time-varying communication
graphs is proposed in~\cite{KYKL-TDB-HHTL:10}. This technique uses an
information transfer scheme wherein each robot broadcasts all its
locally available information (the past and present measurements, as
well as past measurements previously~received from other robots) to
every robot within its communication radius at each time-step. The
main drawback of this algorithm is its high communication and storage
cost.
\blue{ In this paper, we design a novel tightly-coupled distributed CL
algorithm in which each robot localizes itself in a global
coordinate frame by local dead reckoning, and
opportunistically corrects its pose estimate whenever it
receives a relative measurement update message from a server. The
update message is broadcast any time the server receives an inter-robot relative measurement together with the local estimates of the pair of robots engaged in that measurement. In our setup,
the server can be a team member with greater processing and storage
capabilities. Under a perfect communication scenario, we show that
our algorithm is an exact distributed implementation of a joint
CL via EKF formulation. To obtain our algorithm, we use an
alternative representation of the EKF formulation of CL called \textsc{Split EKF}\xspace
for CL. \textsc{Split EKF}\xspace for CL was proposed in~\cite{SSK-SR-SM:15-icra}
without the formal guarantee of equivalency. In this paper, we
establish this guarantee via a mathematical induction proof. Our
next contribution is to show that our proposed algorithm is robust
to occasional message dropouts in the network. Specifically, we show
that the updated estimates of robots receiving the update message
are minimum variance. In our algorithm, since every robot only
propagates and updates its own pose estimates, the storage and
processing cost per robot is {$O(1)$}. Robots only need to
communicate with the server if they are involved in an inter-robot
measurement. Since occasional message drop-outs are allowed in our
algorithm, the connectivity requirement is flexible. Moreover, we
make no assumptions about the type of robots or relative
measurements. Therefore, our algorithm can be employed for teams of
heterogeneous robots.}
\vspace{-0.05in}
\section{Preliminaries}\label{sec::robot-discrib}\vspace{-0.08in}
\blue{In this section, we describe our robotic team model and review the joint CL via EKF as well as its alternative representation, \textsc{Split EKF}\xspace. In the following sections, we use \textsc{Split EKF}\xspace to devise our proposed server assisted CL algorithm.
We consider a team of $N$ robots in which every robot has a detectable
unique identifier and corresponding unique integer label belonging to
the set {$\mathcal{V}=\{1,\dots,N\}$}. Using a set of
proprioceptive sensors, robot {$i\in\mathcal{V}$} measures its self-motion
and uses it to dead reckon, i.e., propagate its equations of
motion $\vect{x}^i(k+1)=\vect{f}^i(\vect{x}^i(k),\vect{u}_m^i(k))$,
$k\in{\mathbb{Z}}_{\ge 0}$, where {$\vect{x}^i\in{\mathbb{R}}^{n^{i}}$} is
the pose vector and
{$\vect{u}_m^i=\vect{u}^i+\vect{\eta}^i\in{\mathbb{R}}^{m^i}$} is the
measured self-motion variable (for example velocities) with
$\vect{u}^i$ being the actual value and $\vect{\eta}^i$ the
contaminating noise. } The robotic team can be heterogeneous.
Every robot also carries exteroceptive
sensors to detect, uniquely, the other
robots in the team and take relative measurements from them, e.g.,
range or bearing or both. We let ($i\xrightarrow{k}j$) indicate that robot $i$ has taken relative measurement from robot $j$ at time $k$. The relative measurement is modeled by
{\begin{align}\label{eq::measur_i,j}
\vect{z}_{i,j}(k)&=\vect{h}_{i,j}(\vect{x}^i(k),\vect{x}^j(k))+\vect{\nu}^i(k),~~\vect{z}_{i,j}\in{\mathbb{R}}^{n_z^i},
\end{align}}
where {$\vect{h}_{i,j}(\vect{x}^i,\vect{x}^j)$} is the measurement model
and {$\vect{\nu}^i$} is measurement noise. The noises {$\vect{\eta}^i$} and {$\vect{\nu}^i$}, {$i\in\mathcal{V}$}, are independent
zero-mean white Gaussian processes with known positive definite
variances {$\vect{Q}^i(k)=\text{E}[{\vect{\eta}^i}(k)
\vect{\eta}^i(k)^\top]$} and $\vect{R}^i(k)\!=\!\text{E}[{\vect{\nu}^i}(k)
\vect{\nu}^i(k)^\top]$.
All noises are assumed to be mutually
uncorrelated. In the following, we denote by $\mathbb{S}^{n}_{>0}$ the set of real positive definite $n\times n$ matrices.
\blue{Joint CL via EKF is obtained from applying
EKF over the joint system motion model ~$
\vect{x}(k\!+\!1)\! =\!(\vect{f}^1(\vect{x}^1, \vect{u}^1),\cdots, \vect{f}^N( \vect{x}^N ,$ $\vect{u}^N))+\Diag{\vect{g}^1(\vect{x}^1),\cdots,\vect{g}^N(\vect{x}^N)}\vect{\eta}(k),$ and the relative measurement
model~\eqref{eq::measur_i,j}~\cite{SIR-GAB:02}. }
Starting at
$\Hvect{x}^{i\mbox{+}}\!(0)\!\in\!{\mathbb{R}}^{n^i}$, $\vect{P}^{i\mbox{+}}\!(0)\!\in\!\mathbb{S}^{n^i}_{>0}$, $\vect{P}_{i,j}^{\mbox{+}}(0)\!=\!\vect{0}_{n^i\times
n^j}$, $i\in\mathcal{V}$ and $j\!\in\!\mathcal{V}\backslash\{i\}$, the propagation and update equations of the EKF CL are
\begin{subequations}\label{eq::central-robotwise}
\begin{align}
& \!\!\!\Hvect{x}^{i\mbox{-}}(k\!+\!1)\!=\vect{f}^i(\Hvect{x}^{i\mbox{+}}(k),\vect{u}^i(k)),\label{eq::propag_central_Expanded-a}\\
& \!\!\!\vect{P}^{i\mbox{-}}(k\!+\!1)\!=\vect{F}^i(k)\vect{P}^{i\mbox{+}}(k)\vect{F}^i(k)\!^\top\!\!\!+\!\vect{G}^i(k)\vect{Q}^i(k)\vect{G}^i(k)\!^\top\!\!\!,\label{eq::propag_central_Expanded-b}\\
& \!\!\!\vect{P}_{i,j}^{\mbox{-}}(k\!+\!1)\!=\vect{F}^i(k)\vect{P}_{i,j}^{\mbox{+}}(k){\vect{F}^j(k)}\!^\top\!\!,\label{eq::propag_central_Expanded-c}\\
& \!\!\!\Hvect{x}^{i\mbox{+}}(k\!+\!1)\!=\Hvect{x}^{i\mbox{-}}(k\!+\!1)+\vect{K}_i(k\!+\!1)\vect{r}^{a}(k\!+\!1),\label{eq::RobotCovarUpdate-a}\\
& \!\!\!\vect{P}^{\!i\mbox{+}}(k\!+\!1)\!=\vect{P}^{\!i\mbox{-}}\!(k\!+\!1)\!-\!\vect{K}_i(k\!+\!1) \vect{S}_{a,b}(k\!+\!1)\vect{K}_i(k\!+\!1)\!\!^\top\!\!\!,\label{eq::RobotCovarUpdate-b}\\
& \!\!\!\vect{P}_{\!i,j}^{\mbox{+}}(k\!+\!1)\!=\vect{P}_{\!i,j}^{\mbox{-}}(k\!+\!1)\!-\!\vect{K}_{\!i}(k\!+\!1)\vect{S}_{a,b}(k\!+\!1)\vect{K}_{\!j}(k\!+\!1)\!^\top\!\!\!,\label{eq::RobotCovarUpdate-c}\\
& \!\!\!\vect{K}_i(k\!+\!1)=\label{eq::K_gain_robotwise}\\
&\begin{cases}\vect{0}, \qquad\qquad\qquad\qquad~\text{no relative measurement at~}k\!+\!1,\\
(\vect{P}_{i
,b}^{\mbox{-}}(k\!+\!1)\Tvect{H}_b^\top\!+\!\vect{P}_{i,a}^{\mbox{-}}(k\!+\!1)\Tvect{H}_a^\top){\vect{S}_{a,b}}^{-1}\!,
\qquad~ a\xrightarrow{k+1} b.
\end{cases}\nonumber
\end{align}
\end{subequations}
for
$k\in{\mathbb{Z}}_{\ge 0}$, with $\vect{F}^i=\partial\vect{f}^i(\Hvect{x}^{i\mbox{+}},\vect{u}_m^i)/\partial
\vect{x}^i|_{\Hvect{x}^i,\vect{u}^i_m=\vect{0}}$ and
$\vect{G}^i=\partial\vect{f}^i(\Hvect{x}^{i\mbox{+}},\vect{u}_m^i)/\partial
\vect{\eta}^i|_{\Hvect{x}^i,\vect{u}^i_m=\vect{0}}$.
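To make the propagation stage concrete, the robot-wise equations~\eqref{eq::propag_central_Expanded-a}--\eqref{eq::propag_central_Expanded-c} amount to a few matrix products per robot pair. The following is a minimal NumPy sketch; the function and argument names are ours, not from any reference implementation:

```python
import numpy as np

def propagate(f_i, x_i, u_i, P_i, F_i, G_i, Q_i, P_ij, F_j):
    """One propagation step for robot i in the joint EKF CL.

    f_i : motion model of robot i; F_i, G_i : its Jacobians;
    Q_i : self-motion noise covariance; P_ij : cross-covariance with robot j.
    """
    x_minus = f_i(x_i, u_i)                          # pose propagation
    P_minus = F_i @ P_i @ F_i.T + G_i @ Q_i @ G_i.T  # local covariance
    P_ij_minus = F_i @ P_ij @ F_j.T                  # cross-covariance term
    return x_minus, P_minus, P_ij_minus
```

Note that only the cross-covariance line couples robots $i$ and $j$; the first two lines are purely local.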
Moreover, when a robot $a$ takes a relative measurement from robot $b$ at some given
time {$k+1$}, the measurement residual and its covariance are,
respectively,
\vspace{-0.08in}
{\begin{subequations}
\begin{align}
\!\!\!\vect{r}^{a}(k\!+\!1)\!&=\vect{z}_{a,b}(k\!+\!1)\!-\!\vect{h}_{a,b}(\Hvect{x}^{a\mbox{-}}(k\!+\!1),\Hvect{x}^{b\mbox{-}}(k\!+\!1)),\label{eq::reletive_Residual}\\
\!\!\vect{S}_{a,b}(k\!+\!1)\!
\!&=\!\vect{R}^a(k\!+\!1)+\Tvect{H}_a(k\!+\!1)\vect{P}^{a\mbox{-}}(k\!+\!1)\Tvect{H}_a(k\!+\!1)\!^\top\nonumber\\
&~~+\Tvect{H}_b(k+1) \vect{P}^{b\mbox{-}}(k+1)\Tvect{H}_b(k+1)^\top\label{eq::S_ab}\\
&~~+\Tvect{H}_b(k+1)\vect{P}_{ba}^{\mbox{-}}(k+1){\Tvect{H}_a}(k+1)^\top\nonumber\\
&~~+\Tvect{H}_a(k+1)\vect{P}_{a,b}^{\mbox{-}}(k+1)\Tvect{H}_b(k+1)^\top,\nonumber
\end{align}
\end{subequations}}
where (without loss of generality we assume that
$a<b$) \vspace{-0.08in}\begin{align}
&\vect{H}_{a,b}(k)=\big[\overset{1}{\vect{0}}~~\overset{\cdots}{\cdots}~~\overset{a}{\Tvect{H}_a}(k)~~\overset{a+1}{\vect{0}}~~\overset{\cdots}{\cdots}~~\overset{b}{\Tvect{H}_b}(k)~~\overset{b+1}{\vect{0}}~~\overset{\cdots}{\cdots}~~\overset{N}{\vect{0}}\big],\nonumber\\
&\Tvect{H}_l(k)=\partial\vect{h}_{a,b}(\Hvect{x}^{a\mbox{-}}(k),\Hvect{x}^{b\mbox{-}}(k))/\partial \vect{x}^l,\quad l\in\{a,b\}.
\label{eq::H_ab}
\end{align}
\blue{$\vect{P}_{i,j}$ is the cross-covariance between the estimates of robots $i$ and $j$. Equations in~\eqref{eq::central-robotwise} are the representation of the joint EKF CL in robot-wise components, e.g., {$\vect{K}\!=\![\vect{K}_1^\top,\cdots,\vect{K}_N^\top]^\top\!\!=\vect{P}^{\mbox{-}}(k\!+\!1)\vect{H}_{a,b}(k\!+\!1)^\top {\vect{S}_{a,b}}(k+1)^{-1}$} and
\begin{align}\label{eq::Kgain_central}
\vect{P}^{\mbox{+}}(k\!+\!1)&=\vect{P}^{\mbox{-}}(k\!+\!1)\!-\!\vect{K}(k\!+\!1)\vect{S}_{a,b}\vect{K}(k\!+\!1)^\top
\end{align} expands as~\eqref{eq::RobotCovarUpdate-b} and~\eqref{eq::RobotCovarUpdate-c}.}\\
Since $\vect{K}_{\!i}(k\!+\!1)
\vect{S}_{a,b}(k\!+\!1)\vect{K}_{\!i}(k\!+\!1)^\top$
in~\eqref{eq::RobotCovarUpdate-b} is positive~semi-definite, relative
measurement updates reduce the estimation uncertainty. \blue{However, due to
the inherent coupling in~cross-covariances
~\eqref{eq::propag_central_Expanded-c}
and~\eqref{eq::RobotCovarUpdate-c}, the EKF
CL~\eqref{eq::central-robotwise} can only be implemented in a
decentralized way using all-to-all communication if each agent
keeps a copy of its cross-covariance matrices with the rest of the
team. \textsc{Split EKF}\xspace~CL, proposed
in~\cite{SSK-SF-SM:16}, is an \textit{alternative but, as proven
here, exactly equivalent} representation of the EKF CL
formulation~\eqref{eq::central-robotwise}. It uses
a set of intermediate variables to allow for the decoupling of the
estimation equations of the robots as shown in the next~section.}
\begin{thm}[\textsc{Split EKF}\xspace~CL, an exact alternative representation of EKF for joint CL]\label{thm::main}
Consider the EKF CL algorithm~\eqref{eq::central-robotwise} with
its given initial conditions.
For $i\in\mathcal{V}$, let
$\vect{\Phi}^i(0)=\vect{I}_{n^i}$ and
$\vect{\Pi}_{i,j}(0)=\vect{0}_{n^i\times n^j}$,
$j\!\in\!\mathcal{V}\backslash\{i\}$. Moreover, assume that
$\vect{F}^i(k)$, {$i\in\mathcal{V}$}, is invertible at all
$k\in{\mathbb{Z}}_{\ge 0}$. Next, for $i\in\mathcal{V}$ let
\begin{subequations}\label{eq::intermidate_var}
\begin{align}
&\vect{\Phi}^i(k+1)=\vect{F}^i(k)\vect{\Phi}^i(k),\label{eq::Phi}\\
&\vect{\Pi}_{i,j}(k\!+\!1)=\vect{\Pi}_{i,j}(k)+\vect{\Gamma}_i(k\!+\!1)\,\vect{\Gamma}_j(k\!+\!1)\!^\top,\label{eq::Pi}
\end{align}
\end{subequations}
$j\!\in\!\mathcal{V}\backslash\{i\}$, where
\begin{subequations}\label{eq::intermidate_var2}
\begin{align}
&\vect{\Gamma}_{i}(k\!+\!1)=\vect{0},~~~~~~~
\text{no relative measurement at~} k\!+\!1,\label{eq::barD-no-meas}\\
&\vect{\Gamma}_{a}(k\!+\!1)=
\big(\vect{\Pi}_{a,b}(k){\vect{\Phi}^b(k+1)}^\top\Tvect{H}_{b}^\top\!
+\nonumber\\
&\qquad\!\vect{\Phi}^a(k+1)^{-1}\vect{P}^{a\mbox{-}}(k+1)\Tvect{H}_{a}^\top\big)\,{\vect{S}_{a,b}}\!
\!^{-\frac{1}{2}}\!,~\quad a\xrightarrow{k+1} b, \label{eq::barD-a}\\
&\vect{\Gamma}_{b}(k+1)=
\big(\vect{\Phi}^{b}(k+1)^{-1}\vect{P}^{b\mbox{-}}(k+1)\Tvect{H}_{b}^\top\!
+\nonumber\\
&~~~~~~\qquad\!\vect{\Pi}_{b,a}(k){\vect{\Phi}^a(k+1)}^\top\Tvect{H}_{a}^\top\big)
\,{\vect{S}_{a,b}}\!\!^{-\frac{1}{2}},\quad a\xrightarrow{k+1} b, \label{eq::barD-b}\\
&\vect{\Gamma}_{l}(k+1)=
(\vect{\Pi}_{l,b}(k){\vect{\Phi}^b(k+1)}^\top\Tvect{H}_{b}^\top\!
+\vect{\Pi}_{l,a}(k) \times\nonumber\\
&~~~~~~{\vect{\Phi}^a(k+1)}^\top\Tvect{H}_{a}^\top)\,{\vect{S}_{a,b}}\!\!^{-\frac{1}{2}},
~ l\!\in\!\mathcal{V}\backslash\{a,\!b\}, ~~ a\xrightarrow{k+1} b,\label{eq::barD-i}
\end{align}
\end{subequations}
for $k\in{\mathbb{Z}}_{\ge 0}$. Then, we can write~\eqref{eq::propag_central_Expanded-c}~as
\begin{align}
\vect{P}_{i,j}^{\mbox{-}}(k+1)=&\vect{\Phi}^i(k+1)\,
\vect{\Pi}_{i,j}(k)\,\vect{\Phi}^j(k+1)^\top,\label{eq::alternate-EKF-equations-a}
\end{align}
and~\eqref{eq::RobotCovarUpdate-a}, \eqref{eq::RobotCovarUpdate-b} and
\eqref{eq::RobotCovarUpdate-c}, respectively,~as
\begin{subequations}\label{eq::alternative-EKF-update-xi-Pi}
\begin{align}
\!\!\Hvect{x}^{i\mbox{+}}\!(k\!+\!1)\!=\,&
\Hvect{x}^{i\mbox{-}}\!(k\!+\!1)\!+\!\vect{\Phi}^{i}\!(k\!+\!1)\vect{\Gamma}_i(k\!+\!1)
\Bvect{r}^{a}\!(k\!+\!1),\label{eq::alternate-EKF-equations-b}\\
\!\!\vect{P}^{i\mbox{+}}(k\!+\!1)\!=\,&\vect{P}^{i\mbox{-}}(k+1)-\label{eq::alternate-EKF-equations-c}\\
&\vect{\Phi}^{i}(k+1)
\vect{\Gamma}_{i}(k+1)\vect{\Gamma}_i^\top(k+1)
\vect{\Phi}^i(k+1)^\top\!\!\!,\nonumber\\
\vect{P}_{i,j}^{\mbox{+}}(k+1)=\,&\vect{\Phi}^i(k+1)\,\vect{\Pi}_{i,j}(k+1)\,
\vect{\Phi}^j(k+1)^\top,\label{eq::alternate-EKF-equations-d}
\end{align}
\end{subequations}
for $i\in\mathcal{V}$ and $j\!\in\!\mathcal{V}\backslash\{i\}$, where
$\Bvect{r}^{a}(k\!+\!1)={\vect{S}_{a,b}}\!\!^{-\frac{1}{2}}\vect{r}^{a}(k\!+\!1)$.
\end{thm}
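Numerically, the bookkeeping recursions~\eqref{eq::Phi} and~\eqref{eq::Pi} reduce to one matrix product per robot and one outer-product update per robot pair. The following NumPy sketch illustrates this (the data-structure choices and names are ours):

```python
import numpy as np

def bookkeeping_step(Phi, Pi, F, Gamma):
    """One step of the split-EKF intermediate variables for a team of robots.

    Phi:   dict i -> Phi^i(k);        F:     dict i -> F^i(k);
    Pi:    dict (i, j) -> Pi_{i,j}(k) for i < j (upper triangular part only);
    Gamma: dict i -> Gamma_i(k+1), zero when there is no relative measurement.
    """
    for i in Phi:
        Phi[i] = F[i] @ Phi[i]                           # Phi^i(k+1) = F^i(k) Phi^i(k)
    for (i, j) in Pi:
        Pi[(i, j)] = Pi[(i, j)] + Gamma[i] @ Gamma[j].T  # Pi_{i,j}(k+1)
    return Phi, Pi
```

When no relative measurement occurs, all $\vect{\Gamma}_i$ are zero and the $\vect{\Pi}_{i,j}$'s stay unchanged, so only the $\vect{\Phi}^i$ products are performed.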
\blue{The proof of this theorem is given in the Appendix. Invertibility of $\vect{F}^i(k)$ is generic and holds for a wide
class of motion models, e.g., non-holonomic robots. Note here that using~\eqref{eq::alternate-EKF-equations-d}, $\vect{S}_{a,b}$ in~\eqref{eq::S_ab} can be expressed equivalently as}
\begin{align}\label{eq::Sab_DCL}
\vect{S}_{a,b}=&\,\vect{R}^{a}(k\!+\!1)\!+\!\Tvect{H}_{a}\vect{P}^{a\mbox{-}}(k\!+\!1)\Tvect{H}_{a}^\top\!+\!\Tvect{H}_{b}
\vect{P}^{b\mbox{-}}(k\!+\!1)\Tvect{H}_{b}^\top\nonumber\\
&+\Tvect{H}_{a}\vect{\Phi}^a(k+1)\vect{\Pi}_{a,b}(k)\vect{\Phi}^b(k\!+\!1)^\top\Tvect{H}_{b}^\top+\\
&\Tvect{H}_{b}\vect{\Phi}^b(k\!+\!1)\vect{\Pi}_{b,a}(k){\vect{\Phi}^a(k\!+\!1)}^\top\Tvect{H}_{a}^\top.\nonumber
\end{align}
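As an illustration, \eqref{eq::Sab_DCL} lets a server that stores only $\vect{\Phi}^a$, $\vect{\Phi}^b$ and $\vect{\Pi}_{a,b}$ assemble the residual covariance from the robots' locally propagated covariances. A minimal NumPy sketch (names are ours):

```python
import numpy as np

def residual_cov(R_a, H_a, H_b, P_a, P_b, Phi_a, Phi_b, Pi_ab):
    """S_{a,b} from split-EKF quantities; Pi_ab is Pi_{a,b}(k)."""
    # Correlation-induced term: H_a Phi^a Pi_{a,b} (Phi^b)^T H_b^T
    cross = H_a @ Phi_a @ Pi_ab @ Phi_b.T @ H_b.T
    return (R_a + H_a @ P_a @ H_a.T + H_b @ P_b @ H_b.T
            + cross + cross.T)
```

The first three terms use only locally maintained quantities of robots $a$ and $b$; the last two terms are the server's contribution through $\vect{\Pi}_{a,b}$.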
\vspace{-0.05in}
\section{\blue{A server assisted distributed cooperative localization}}\label{sec::partially-decentralized}\vspace{-0.08in}
\blue{In this section, we propose a novel distributed cooperative localization algorithm
in which each agent maintains its
own local state estimate for autonomy, incurs only $O(1)$ processing and
storage costs, and needs to communicate only when there is an
inter-agent relative measurement. Our proposed solution is a server assisted distributed implementation of \textsc{Split EKF}\xspace~CL (\textsl{SA-split-EKF}\xspace for short), given in Algorithm~\ref{alg::ouralgpar}. For clarity of presentation, we assume that there is at most one relative measurement in the team at each time. To process multiple synchronized measurements, we use \emph{sequential updating} (cf.~\cite[ch.~3]{CTL:66},\cite{YB-PKW-XT:11}); for details see the Appendix.
In \textsl{SA-split-EKF}\xspace, every robot $i\in\mathcal{V}$ maintains
and propagates its own propagated state estimate~\eqref{eq::propag_central_Expanded-a} and covariance
matrix~\eqref{eq::propag_central_Expanded-b}, as well as, the
variable $\vect{\Phi}^i\in{\mathbb{R}}^{n^i\times n^i}$~\eqref{eq::Phi}. Since these variables are
local, the propagation stage is fully decoupled and there is no need for communication at this stage. To free the
robots from maintaining the team cross-covariances, \textsl{SA-split-EKF}\xspace assigns a server to
maintain and to update $\vect{\Pi}_{i,j}$'s~\eqref{eq::Pi},
the main source of high processing and storage costs.
The communication between robots and the server is only required when
there is a relative measurement in the~team.
When robot $a$ takes relative measurement from robot $b$,
robot $a$ informs the server. Then, the server starts the update procedure
by taking the following actions. First, it acquires the $\textsl{Landmark-message}\xspace$~\eqref{eq::DCL-lmssg}
from robots $a$ and $b$, which
is of order $O(1)$ in terms of the size of the team.
Then, using this
information along with its locally maintained $\vect{\Pi}_{i,j}$'s, the server calculates
and sends to
each robot $i\in\mathcal{V}$ its corresponding update message~\eqref{eq::updt_mssg}
so that the robot can update its local estimates
using~\eqref{eq::alternative-EKF-update-xi-Pi}.
It also updates its
local $\vect{\Pi}_{i,j}$ using \eqref{eq::Pi}, for all
$i\in\mathcal{V}\backslash\{N\}$ and $j\in\{i+1,\cdots,N\}$--because of the
symmetry of the joint matrix $\vect{\Pi}$ we only save the upper triangular part of this
matrix. The size of the update message for each robot is of order $O(1)$ in terms of the size of the team. We can show that multiple concurrent measurements can be processed jointly at the server and the update message for each robot is still of order $O(1)$; for details see Appendix.} The \textsl{SA-split-EKF}\xspace~CL algorithm processes absolute measurements in a similar way to relative measurements, i.e., the robot with the absolute measurement informs the server, which proceeds with the same updating procedure
and issues the update message~\eqref{eq::updt_mssg} to every
robot~$i\!\in\!\mathcal{V}$.
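The per-robot update step described above can be sketched as follows (a minimal illustration with placeholder dimensions and random data; \texttt{Gamma} stands for robot $i$'s block $\vect{\Gamma}_i$ of the update message and \texttt{r\_bar} for the normalized residual $\Bvect{r}^{a}$). The point is that the robot only needs its own $\vect{\Phi}^i$ and the two $O(1)$-sized message entries, so the cost is independent of the team size:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 3, 2                                 # local state / measurement dims
x_minus = rng.standard_normal(n)            # propagated estimate xhat^{i-}
A = rng.standard_normal((n, n))
P_minus = A @ A.T + n * np.eye(n)           # propagated covariance P^{i-}
Phi = rng.standard_normal((n, n))           # locally maintained Phi^i(k+1)
r_bar = rng.standard_normal(m)              # normalized residual S^{-1/2} r^a
Gamma = 0.1 * rng.standard_normal((n, m))   # robot i's block of the message

# O(1) local update: no knowledge of other robots' states is needed
x_plus = x_minus + Phi @ Gamma @ r_bar
P_plus = P_minus - Phi @ Gamma @ Gamma.T @ Phi.T
```

Since the subtracted term $\vect{\Phi}^i\vect{\Gamma}_i\vect{\Gamma}_i^\top{\vect{\Phi}^i}^\top$ is positive semidefinite, the trace of the local covariance can only decrease at an update.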
\blue{A fully decentralized implementation of the
\textsc{Split EKF}\xspace~CL has been proposed in~\cite{SSK-SF-SM:16}. In this scheme, instead of a server each agent keeps a local copy of $\vect{\Pi}_{l,j}(k)$'s which results in an {$O(N^2)$} storage and
{$O(N^2\times N_z)$} processing cost per robot with {$N_z$} the total
number of relative measurements in the team at a given time. The
downside of the algorithm of~\cite{SSK-SF-SM:16} is that any incidence
of message dropout at each agent causes disparity between the local
copy of $\vect{\Pi}_{l,j}(k)$'s at that agent and the local copies of
the rest of the team, jeopardizing the integrity of the
decentralized~implementation. In the next section we show that \textsl{SA-split-EKF}\xspace has robustness to message dropouts.
}
\section{Accounting for in-network message
dropouts}\label{sec::C-CL_miss}\vspace{-0.08in}
\blue{\textsl{SA-split-EKF}\xspace CL described so far operates based on the assumption that
at the time of measurement update, all the robots can receive the
update message of the server, i.e., $\mathcal{V}_{\text{missed}}(k+1)$, the set
of agents missing the update message of the server at timestep $k+1$, is
empty. It is straightforward to see that \textsl{SA-split-EKF}\xspace~CL algorithm is robust to permanent
team member dropouts. The server only suffers from a processing and
communication cost until it can confirm that the dropout is
permanent. In what follows, we study the robustness of Algorithm~\ref{alg::ouralgpar} against occasional
communication link failures between robots and the server. Specifically, we show that
Algorithm~\ref{alg::ouralgpar} has robustness to message dropout with formal guarantees that the updated estimates of the
robots receiving the update message are of minimum variance in
a first-order approximate sense at that given timestep.}
Our guarantees are based on the assumption that the two
robots involved in a relative measurement can both communicate with
the server at the same time; otherwise, we discard that measurement. We
base our study on analyzing an EKF for joint CL in which, at some update
times, we do not update the estimates of some of the robots. In our
server assisted distributed implementation, these robots
are those which miss the update-message of the server and as such they
are not updating their~estimates.
\setlength{\textfloatsep}{5pt}
\begin{algorithm}[!t]
{\scriptsize
\caption{{ \textsl{SA-split-EKF}\xspace~CL}}
\label{alg::ouralgpar}
\begin{algorithmic}[1]
\Require
Initialization ($k=0$):
\begin{align*}
&\text{Robot~} i\in\mathcal{V}:~ \Hvect{x}^{i\mbox{+}}(0)\in{\mathbb{R}}^{n^i},~~\vect{P}^{i\mbox{+}}(0)\in\mathbb{S}_{>0}^{n^i},~~\vect{\Phi}^i(0)=\vect{I}_{n^i},\\
&~~~~~\quad\text{Server}:~\vect{\Pi}_{i,j}(0)=\vect{0}_{n^i\times n^j}, ~i\in\mathcal{V}\backslash\{N\},~~j\in\{i+1,\cdots,N\}.
\end{align*}
\hspace{-0.38in}\noindent\textbf{Iteration $k$}
\State \textbf{Propagation}: Every robot $i\in\mathcal{V}$ proceeds by
$$(\Hvect{x}^{i\mbox{-}}\!(k\!+\!1), \vect{\Phi}^i(k\!+\!1),\vect{P}^{i\mbox{-}}\!(k\!+\!1) )\xleftarrow[\eqref{eq::propag_central_Expanded-a}, \eqref{eq::propag_central_Expanded-b},\eqref{eq::Phi}]{\text{using}}(\Hvect{x}^{i\mbox{+}}(k),\vect{\Phi}^i(k),\vect{P}^{i\mbox{+}}(k),\vect{u}_{m}^i(k) ).$$
\State \textbf{Update}:
\begin{itemize}[leftmargin=*]
\item if there is no relative measurements in the network
\begin{align*}
&\text{Robot~} i\!\in\!\mathcal{V}:~ \Hvect{x}^{i\mbox{+}}(k+1)=\Hvect{x}^{i\mbox{-}}(k+1),~\vect{P}^{i\mbox{+}}(k+1)=\vect{P}^{i\mbox{-}}(k+1),\\
&~~~~\quad\text{Server}:~\vect{\Pi}_{i,j}(k+1)=\vect{\Pi}_{i,j}(k), ~~i\!\in\!\mathcal{V},~j\in\mathcal{V}\backslash\{i\}.
\end{align*}
\item if $a\xrightarrow{k+1}b$, $a$ informs the server. The server asks for the following information from robots $a$ and $b$, respectively,
\begin{align}\label{eq::DCL-lmssg}
&\textsl{Landmark-message}\xspace^a=\Big(\vect{z}_{a,b},\Hvect{x}^{a\mbox{-}}(k+1), \vect{P}^{a\mbox{-}}(k+1), \vect{\Phi}^{a}(k+1)\Big),\nonumber\\
&\textsl{Landmark-message}\xspace^b=\Big(\Hvect{x}^{b\mbox{-}}(k+1),\vect{P}^{b\mbox{-}}(k+1), \vect{\Phi}^{b}(k+1)\Big).
\end{align}
Then, the server computes
$$\vect{S}_{a,b},\vect{r}^{a},\vect{\Gamma}_{i}\xleftarrow[\eqref{eq::Sab_DCL},\eqref{eq::reletive_Residual},\eqref{eq::intermidate_var2}]{\text{using}}(\textsl{Landmark-message}\xspace^a,\textsl{Landmark-message}\xspace^b).$$
and $\Bvect{r}^{a}\!=\!(\vect{S}_{a,b})^{-\frac{1}{2}}\vect{r}^{a}$. The server passes the following data to every robot $i\in\mathcal{V}$,
\begin{align}\label{eq::updt_mssg}
&\textsl{update-message}^i=\big(\Bvect{r}^{a}\,,\,\vect{\Gamma}_i\big).
\end{align}
Robot $i\in\mathcal{V}\backslash\mathcal{V}_{\text{missed}}(k+1)$ then updates its local state estimate according to
\begin{subequations}
\begin{align}
\Hvect{x}^{\!i\mbox{+}}(k+1)=\,&\Hvect{x}^{i\mbox{-}}(k+1)+\\
&\vect{\Phi}^{i}(k\!+\!1)\textsl{update-message}^i(2)\,\textsl{update-message}^i(1),\nonumber\\
\vect{P}^{\!i\mbox{+}}(k+1)=\,&\vect{P}^{i\mbox{-}}\!(k+1)-\\
&\vect{\Phi}^{\!i}(k\!+\!1)\textsl{update-message}^i(2)\,\textsl{update-message}^i(2)^\top\vect{\Phi}^i\!(k\!+\!1)^{\!\top}.\nonumber
\end{align}
\end{subequations}
The server updates its local variables, for $i\in\mathcal{V}\backslash\{N\},~j\in\{i+1,\cdots,N\}$:
\begin{align*}
\vect{\Pi}_{i,j}(k\!+\!1)&=\vect{\Pi}_{i,j}(k)-\vect{\Gamma}_i \vect{\Gamma}_j^\top, ~ \text{if~}(i,j)\not\in\mathcal{V}_{\text{missed}}(k\!+\!1)\!\times\!\mathcal{V}_{\text{missed}}(k\!+\!1)
\end{align*}
\end{itemize}
\State $k \leftarrow k+1$
\end{algorithmic}
$\mathcal{V}_{\text{missed}}(k+1)$ is the set of agents missing the update message at timestep $k+1$.
}
\end{algorithm}
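The server's cross-term bookkeeping in the update stage of Algorithm~\ref{alg::ouralgpar} can be mirrored by the following hypothetical helper (not code from our implementation; constant \texttt{Gamma} blocks are used only to make the arithmetic easy to follow): a pair $(i,j)$ keeps its old value only when \emph{both} robots missed the update message, and every other pair is updated with $-\vect{\Gamma}_i\vect{\Gamma}_j^\top$:

```python
import numpy as np

def update_cross_terms(Pi, Gamma, missed):
    # Pi:     dict mapping robot pairs (i, j), i < j, to cross-term matrices
    # Gamma:  dict mapping robot index to its Gamma_i block
    # missed: set of robots that missed the server's update message
    out = {}
    for (i, j), val in Pi.items():
        if i in missed and j in missed:
            out[(i, j)] = val                       # (i,j) in missed x missed
        else:
            out[(i, j)] = val - Gamma[i] @ Gamma[j].T
    return out

n = 2
# toy blocks: Gamma_i is the constant matrix with every entry equal to i
Gamma = {i: np.full((n, n), float(i)) for i in (1, 2, 3)}
Pi = {(1, 2): np.zeros((n, n)), (1, 3): np.zeros((n, n)),
      (2, 3): np.zeros((n, n))}
new_Pi = update_cross_terms(Pi, Gamma, missed={2, 3})
```

With robots $2$ and $3$ both missing the message, only the pair $(2,3)$ is left untouched; pairs involving robot $1$ are still updated, which is exactly the rule used in the algorithm.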
\blue{In what follows, we assume that the state estimates of the robots involved
in a relative measurement always get updated. }
Without loss of generality, assume that we do not update the state estimate of robots
{$\{m+1,\cdots, N\}$}, for $2<m<N+1$ using the relative measurement
taken by robot {$a\in\until{m}$} from robot {$b\in\until{m}$} at some time
$k+1$. That is, assume that agents
{$\mathcal{V}_{\text{missed}}(k+1)=\{m+1,\cdots, N\}$} have missed the update
of the server at time $k+1$. The propagation stage of the Kalman filter
is independent of the observation process, and thus we leave it as
is, see~\eqref{eq::propag_central_Expanded-a}-\eqref{eq::propag_central_Expanded-c}. The
following result gives the minimum variance update equation for robots
$\{1,\cdots,m\}$. Recall that, at any update incident at timestep
$k$, the EKF gain {$\vect{K}$} minimizes
$\text{Trace}(\vect{P}^{\mbox{+}}(k))$, where $\vect{P}^{\mbox{+}}(k)$
in~\eqref{eq::Kgain_central} is an approximation of
$\text{E}[(\vect{x}(k)-\vect{x}^{\mbox{+}}(k))(\vect{x}(k)-\vect{x}^{\mbox{+}}(k))^\top]$--an
approximation based on a system and measurement model linearization
(c.f.~\cite[page 146]{JLC-JLJ:11}). \blue{The following result plays a
similar role.}
\begin{thm}[\blue{Joint EKF CL in the presence of message dropouts}]\label{thm::partial_update}
Consider a joint CL via EKF where the
relative measurement taken by robot
$a\notin\mathcal{V}_{\text{missed}}(k+1)$
from robot $b\notin \mathcal{V}_{\text{missed}}(k+1)$
~at some time $k+1>0$ is used to only update the states of robots
$\mathcal{V}\backslash\mathcal{V}_{\text{missed}}(k\!+\!1)=\{1,\cdots,m\}$, i.e.,
\begin{subequations}\label{eq::update-cent-miss}
\begin{align}
\Hvect{x}^{i\mbox{+}}(k\!+\!1)=&\Hvect{x}^{i\mbox{-}}(k\!+\!1)+
\vect{K}_i(k\!+\!1)\vect{r}^{a}(k\!+\!1),\nonumber\\
&
\qquad \qquad\quad i\in\mathcal{V}\backslash\mathcal{V}_{\text{missed}}(k+1)\label{eq::XRobotCovarUpdate-1:N-1}\\
\Hvect{x}^{i\mbox{+}}(k\!+\!1)=&\Hvect{x}^{i\mbox{-}}(k\!+\!1)\quad
i\in\mathcal{V}_{\text{missed}}(k+1).\label{eq::RobotCovarUpdate-N}
\end{align}
\end{subequations}
Let
$\vect{K}_{1:m}=[\vect{K}_1^\top,\cdots,\vect{K}_m^\top]^\top$. Then,
the Kalman gain $\vect{K}_{1:m}$ that minimizes $\text{Trace}(\vect{P}^{\mbox{+}}(k+1))$, for $i\!\in\!\mathcal{V}\backslash\mathcal{V}_{\text{missed}}(k+1)$, is
\begin{align}\label{eq::gain_partial_update}
\vect{K}_i=(\vect{P}_{i, b}^{\mbox{-}}(k+1)
\Tvect{H}_b^\top+&\vect{P}_{i,a}^{\mbox{-}}(k+1)\Tvect{H}_a^\top)\,{\vect{S}_{a,b}}^{-1}.\end{align}
Moreover, the team covariance update is given by
\begin{subequations}
\begin{align}
&\vect{P}^{i\mbox{+}}(k\!+\!1)\!=\label{eq::robot-covar-updt-miss}\\~~&\begin{cases}
\vect{P}^{i\mbox{-}}(k\!+\!1),~~~~~\qquad
i\in\mathcal{V}_{\text{missed}}(k+1),\\
\vect{P}^{i\mbox{-}}(k\!+\!1)\!-
\!\vect{K}_i(k\!+\!1)
\vect{S}_{a,b}(k\!+\!1)\vect{K}_i(k\!+\!1)^\top,~\text{otherwise}.
\end{cases}\nonumber\\
& \vect{P}_{i,j}^{\mbox{+}}(k\!+\!1)\! =\label{eq::robot-cross-covar-updt-miss} \\
~~&\begin{cases}
\vect{P}_{i,j}^{\mbox{-}}(k\!+\!1),~~~~~\quad (i,j)\in\mathcal{V}_{\text{missed}}(k\!+\!1)\times\mathcal{V}_{\text{missed}}(k\!+\!1),\\
\vect{P}_{i,j}^{\mbox{-}}(k\!+\!1)\!-\!\vect{K}_i(k\!+\!1)
\vect{S}_{a,b}(k\!+\!1)\vect{K}_j(k\!+\!1)^\top,~\text{otherwise}.
\end{cases}\nonumber
\end{align}
\end{subequations}
where for $i\!\in\!\mathcal{V}_{\text{missed}}(k\!+\!1)$ we have defined and used the \emph{pseudo}-gain
\begin{equation}\label{eq::pseudo-gain}
\vect{K}_i=(\vect{P}_{i,b}^{\mbox{-}}(k+1)
\Tvect{H}_b^\top+\vect{P}_{i,a}^{\mbox{-}}(k+1)\Tvect{H}_a^\top)
\,{\vect{S}_{a,b}}^{-1}.
\end{equation}
\end{thm}
\blue{The proof of this theorem is given in Appendix. The partial updating equations~\eqref{eq::update-cent-miss}-\eqref{eq::pseudo-gain} are the same as the joint EKF CL~\eqref{eq::central-robotwise} except that the state estimate and corresponding covariance matrix for agents missing the update message and also the cross-covariance matrices between those agents do not get updated.
As such, the \textsc{Split EKF}\xspace representation for~\eqref{eq::update-cent-miss}-\eqref{eq::pseudo-gain} is the same as the one for the joint EKF CL~\eqref{eq::central-robotwise} except that for $i\in\mathcal{V}_{\text{missed}}(k+1)$ we have
\begin{align*}
\Hvect{x}^{i\mbox{+}}(k\!+\!1)=&\Hvect{x}^{i\mbox{-}}(k\!+\!1), ~\vect{P}^{i\mbox{+}}(k+1)=\vect{P}^{i\mbox{-}}(k+1),\\
\vect{\Pi}_{i,j}(k\!+\!1)=&\vect{\Pi}_{i,j}(k),\quad\quad\quad j\!\in\!\mathcal{V}_{\text{missed}}(k\!+\!1)\backslash\{i\}.
\end{align*}
Therefore, for nonempty $\mathcal{V}_{\text{missed}}(k+1)$, we can implement the \textsl{SA-split-EKF}\xspace~CL algorithm
exactly as described in Algorithm~\ref{alg::ouralgpar}.
We conclude then that \textsl{SA-split-EKF}\xspace~CL algorithm is robust to
message dropouts and the estimates of the robots receiving the update
message, as stated above, are minimum~variance, in a first-order
approximate sense. }
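The trace-minimizing property of the gain in~\eqref{eq::gain_partial_update} can also be checked numerically. The sketch below (illustrative dimensions and random data, not our implementation) uses the standard joint-covariance form $\vect{P}^{+}=\vect{P}-\vect{K}\vect{C}^\top-\vect{C}\vect{K}^\top+\vect{K}\vect{S}\vect{K}^\top$ with $\vect{C}=\vect{P}\Tvect{H}^\top$ and $\vect{S}=\Tvect{H}\vect{P}\Tvect{H}^\top+\vect{R}$, and verifies that perturbing $\vect{K}$ away from $\vect{C}\vect{S}^{-1}$ never decreases the trace:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 4, 2
A = rng.standard_normal((n, n))
P = A @ A.T + n * np.eye(n)          # prior covariance (SPD)
H = rng.standard_normal((m, n))      # measurement Jacobian
R = np.eye(m)                        # measurement noise covariance
S = H @ P @ H.T + R                  # innovation covariance
C = P @ H.T                          # cross term P H^T

def trace_post(K):
    # trace of the posterior covariance as a (convex quadratic) function of K
    return np.trace(P - K @ C.T - C @ K.T + K @ S @ K.T)

K_star = C @ np.linalg.inv(S)        # candidate minimizer K = (P H^T) S^{-1}
```

Because $\vect{S}\succ 0$, \texttt{trace\_post} is strictly convex in $\vect{K}$, so any perturbation of \texttt{K\_star} increases the trace.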
\section{Numerical and experimental evaluations}
\blue{We demonstrate the performance of the proposed \textsl{SA-split-EKF}\xspace~CL algorithm
with and without occasional communication failure in simulation and compare it to the performance of dead reckoning only localization and that of the algorithm of~\cite{HL-FN:13}. We use a team of four robots moving on a flat terrain
on the square
helical paths shown in Fig.~\ref{fig::simulation} (a) and (b) traversed in $[0,300]$ seconds (crosses show the start points).
The standard deviation of the linear
(resp. rotational) velocity measurement noise of robots $\{1,2,3,4\}$, respectively, is assumed
to be {$\{35\%,30\%,25\%,20\%\}$} of the linear (resp. {$\{25\%,20\%,20\%,15\%\}$} of the rotational)
velocity of the robot.
For the measurement/communication scenario in Table~\ref{table::simulation_time}, the root mean square (RMS) position error calculated from {$M\!=\!50$} Monte Carlo runs is depicted in Fig.~\ref{fig::simulation} (c)-(f).
As seen, in comparison to dead reckoning localization, CL improves the accuracy of the state estimates. As expected, by keeping an accurate account of the cross covariances, the \textsl{SA-split-EKF}\xspace CL algorithm produces more accurate localization results than the algorithm of
~\cite{HL-FN:13}. Recall that the advantage of the algorithm of~\cite{HL-FN:13} is its relaxed connectivity condition. However, since this algorithm accounts for missing cross-covariance information by conservative estimates, its localization accuracy suffers. Also, in this algorithm, since only the landmark robots (the robots from which relative measurements are taken) update their estimates, the robots taking the relative measurements do not benefit from CL. Fig.~\ref{fig::simulation} (c)-(f) also demonstrate the robustness of \textsl{SA-split-EKF}\xspace CL to communication failure, i.e., the robots receiving the update message benefit from CL and the disconnected robot, once reconnected, can resume correcting its state estimates. Here, it is also worth recalling that \textsl{SA-split-EKF}\xspace~CL without link failure, similar to the algorithms of~\cite{SIR-GAB:02} and \cite{SSK-SF-SM:16}, recovers exactly the state estimate of the joint EKF CL~\eqref{eq::central-robotwise}. However, unlike the algorithms of~\cite{SIR-GAB:02} and \cite{SSK-SF-SM:16}, \textsl{SA-split-EKF}\xspace~CL is robust to communication failure. }
\blue{
\setlength\extrarowheight{4pt}
\begin{table}[b]\renewcommand{\arraystretch}{0.7}\scriptsize
\caption{{\small Time table for exteroceptive
measurement times and the disconnected robots.
}
}\label{table::simulation_time}\vspace{-0.06in}
\centering
\begin{tabular}{| m{1.3cm} || m{0.5cm} | m{0.5cm} |m{0.8cm}|m{0.8cm}|m{0.8cm}|m{0.8cm}|}
\hline
Time (sec.)&$\!\!\!\!(45,50]$&$\!\!\!\!(90,95]$&$\!\!\!\!(135,140]$&$\!\!\!\!(180,185]$&$\!\!\!\!(225,230]$&$\!\!\!\!(270,275]$\\ \hline
Measurements &
$\!\!\!\!\!\!\!\left.\begin{array}{c}1\to 2\\ 2\to 3\\3\to 4 \end{array}\right.$&
$\!\!\!\!\!\!\!\left.\begin{array}{c}3\to 4\\ 4\to1 \end{array}\right.$&
$\!\!\!\!\left.\begin{array}{c}1\to 2\\ 3\to 4 \end{array}\right.$&
$\!\!\!\!\left.\begin{array}{c}2\to 3 \end{array}\right.$&
$\!\!\!\!\left.\begin{array}{c}1\to 2\\ 3\to 4\end{array}\right.$&
$\!\!\!\!\left.\begin{array}{c}2\to 3\\ 4\to 1\end{array}\right.$
\\ \hline
\!\! disconnected from server& none& none& robot $4$&robot $4$&none&none\\ \hline
\end{tabular}\vspace{-0.1in}
\end{table}
}
\begin{figure}[t!]
\unitlength=0.5in
\centering
\subfloat[true trajectory of robots 1 and 2]{
\! \includegraphics[trim=3 2 4 5,clip,width=0.25\textwidth]{f5}\!\!\!\!\!\!\!
}
\subfloat[true trajectory of robots 3 and 4]
{
\! \includegraphics[trim=3 2 4 5,clip,width=0.25\textwidth]{f6}
}\\ \vspace{-0.08in}
\subfloat[robot 1]{
\! \!\!\! \! \!\!\! \includegraphics[trim=3 1 5 3,clip,width=0.24\textwidth]{f1}\!\!\!\!
}
\subfloat[robot 2]
{
\! \includegraphics[trim=3 1 5 3,clip,width=0.24\textwidth]{f2}
}\\ \vspace{-0.06in}
\subfloat[robot 3]{
\! \!\!\! \! \!\!\! \includegraphics[trim=3 1 5 3,clip,width=0.24\textwidth]{f3}\!\!\!\!
}
\subfloat[robot 4]
{
\! \includegraphics[trim=3 1 5 3,clip,width=0.24\textwidth]{f4}
}
\caption{{\small \blue{Simulation results for position RMS error for the
measurement/communication scenario of
Table~\ref{table::simulation_time} (the orientation RMS error
behaves similarly and is omitted for brevity). In plots (c)-(f),
the ultra-thick gray solid line shows the RMS error for dead reckoning only; the black dashed line and the gray dash-dotted line show the RMS for \textsl{SA-split-EKF}\xspace CL in the absence and presence of link failure, respectively; and the blue dotted line shows the RMS plot for the algorithm of~\cite{HL-FN:13}.
} }}\label{fig::simulation}
\end{figure}
\emph{Experimental evaluation}:
we tested the performance of Algorithm~\ref{alg::ouralgpar} and its robustness to message dropouts experimentally as well.
Our robotic testbed consists of a set of two overhead cameras, a
computer workstation, and $4$ TurtleBot robots (see
Figure~\ref{fig::foto}). This testbed operates under Robot Operating
System (ROS). The overhead cameras, with the help of the set of AR
tags and the ArUco image processing library~\cite{aruco}, are used to
track the motion of the robots and generate a reference trajectory to
evaluate the performance of the CL algorithms. The workstation
serves as the server running a ROS node with the central part of
the~\textsl{SA-split-EKF}\xspace~CL algorithm.
Each robot has a ROS node that
includes programs to propagate the local filter
equations~\eqref{eq::propag_central_Expanded-a}, \eqref{eq::propag_central_Expanded-b} and~\eqref{eq::Phi} using wheel-encoder measurements and
relative-pose measurements from other robots using the onboard
Kinect camera unit. To take relative-pose measurements, the Kinect
camera also uses a set of AR tags and the ArUco image processing
library. The robots communicate with the workstation via WiFi. The AR
tags are placed on top of the TurtleBot's rack and are arranged on a
cube to provide tags in every horizontal direction as well as on top. The
accuracy of the visual tag measurements is set to $0.03$ meter for
position and to $6$ degrees for orientation. For the propagation stage
of every robot, the local filters of the robots apply the velocity
measurement of their wheel encoders and account for the noise with a
standard deviation of $50\%$ of the velocity.
\begin{figure}[t!]
\unitlength=0.5in
\psfrag*{x}[][cc][1.1]{\renewcommand{\arraystretch}{0.5}\begin{tabular}{c}$\ln(|x^i-\mathsf{x}^\star|)$\\~\end{tabular}}
\psfrag*{A}[][cc][1.1]{\renewcommand{\arraystretch}{0.5}\begin{tabular}{c}Agents\\~\end{tabular}}
\psfrag*{t}[][cc][1.1]{\renewcommand{\arraystretch}{2}\begin{tabular}{c}$t$\end{tabular}}
\centering
\!\!\! \subfloat[The robotic testbed]{
\includegraphics[width=0.45\linewidth]{robotsss.JPG}
}
\subfloat[Turtlebot with AR tag]
{
\includegraphics[width=0.4\linewidth]{Turtlebot.png}
}
\caption{{\small Setup for the multi-robot test scenario showing the four TurtleBot robots. Every agent features a cube with tags that enable both the Kinect and the overhead camera to take pose measurements. }}
\label{fig::foto}
\end{figure}
The robots move in a $2$m $\times$ $3$m area, which is the active
vision zone of our overhead camera system. The robots move
simultaneously in a counter clock-wise direction along a square
helical path shown in Figure~\ref{fig::foto} and
Figure~\ref{fig::experiment}. Starting each at one of the four inner
corners of this helical path, marked with large green crosses on
Figure~\ref{fig::experiment}, the robots are programmed to arrive at the next corner ahead of them at
the same time. Along the edge of the track the robots use their wheel
encoder measurements to propagate their motion model while, at the
corners, discrete relative-measurement sequences are executed to
update the local-pose estimates of the robots according to
Algorithm~\ref{alg::ouralgpar}. In our experiment, the
relative-measurement scenario is for the robot at region $1$ to take
relative measurement from the robot at region $2$, and the robot at
region $2$ to take relative measurement from the robot at region
$3$. The testbed works under perfect communication but we emulate
message dropouts as described below. In our experiment, we execute
the following four estimation filters simultaneously: (a) an overhead
camera tracking to generate the reference trajectory; (b) a
propagation-only filter to demonstrate the accuracy of position
estimates without relative measurements; (c) an execution of the CL
Algorithm~\ref{alg::ouralgpar} under a perfect communication scenario;
(d) an execution of the CL Algorithm~\ref{alg::ouralgpar} under a
measurement-dropout scenario. Note here that each of the CL filters
(c) and (d) has its own corresponding server node on the workstation.
Figure~\ref{fig::experiment} depicts the result of one of our
experiments. In this experiment, to emulate the message dropout, we
partition our area as shown in Figure~\ref{fig::experiment} into four
regions and designate one of the areas, highlighted in gray, as the
message-dropout zone. In the implementation that executes CL
Algorithm~\ref{alg::ouralgpar} under the message-dropout scenario
(CL filter (d)), the robot passing through the gray zone does not
implement the update-message it receives from the server. In
Figure~\ref{fig::experiment}, the trajectory generated by the overhead
camera (the curve indicated by the black crosses) serves as our
reference trajectory. As seen, as time goes by, the
position estimate generated by propagating the pose equations using
the wheel encoder measurements (the trajectory depicted by the dotted
curve) has large estimation error. In Figure~\ref{fig::experiment},
the location estimates of the robots via the CL
Algorithm~\ref{alg::ouralgpar} under the perfect-communication and
message-dropout scenarios are depicted, respectively, by the solid red
curve and the blue dashed curve. As we can see, whenever a relative
measurement is obtained, the CL algorithms improve the location
accuracy of the robots. Of particular interest is the effect of CL
algorithm on the position accuracy of robots when they pass through
region $4$ (the shaded region on Figure~\ref{fig::experiment}). In our
scenario described above, no relative measurement is taken by or from
the robot in region $4$. However, because of maintained past
correlations among the robots through the server, in the case of the
perfect-communication scenario the robot in region $4$ still
benefits from the relative measurement updates generated by
measurements taken by other robots.
Of
course, in the message-dropout scenario (see the blue dashed line
trajectories) such benefit is lost because the robot in region $4$
does not receive the update message from the server. However, the
trajectories show the robustness of Algorithm~\ref{alg::ouralgpar} to
message dropout, i.e., the robots that receive the update message from
the server continue to improve their localization accuracy while the
robot in region $4$ is momentarily deprived of such
benefit. However, as soon as the latter reconnects and
receives an update message, its accuracy improves again.
\begin{figure}[t!]
\unitlength=0.5in
\psfrag*{x}[][cc][1.1]{\renewcommand{\arraystretch}{0.5}\begin{tabular}{c}$\ln(|x^i-\mathsf{x}^\star|)$\\~\end{tabular}}
\psfrag*{A}[][cc][1.1]{\renewcommand{\arraystretch}{0.5}\begin{tabular}{c}Agents\\~\end{tabular}}
\psfrag*{t}[][cc][1.1]{\renewcommand{\arraystretch}{2}\begin{tabular}{c}$t$\end{tabular}}
\centering
\subfloat[robot 1]{
\includegraphics[trim=0 0 0 15,clip,height=1.55in]{clyde}
}
\subfloat[robot 2]
{
\includegraphics[trim=0 0 0 15,clip,height=1.55in]{blinky}
}\\
\subfloat[robot 3]{
\includegraphics[trim=0 0 0 15,clip,height=1.55in]{pinky}
}
\subfloat[robot 4]
{
\includegraphics[trim=0 0 0 15,clip,height=1.55in]{inky}
}
\caption{{\small Trajectories of the robots under an experimental
test generated by $4$ simultaneously running ROS packages, one
for the overhead camera location tracking (the curve indicated
by black crosses), one for the propagation only location
estimate (the black dotted curve), and the other two to obtain
location estimates by
the~\textsl{SA-split-EKF}\xspace~CL algorithm (Algorithm~\ref{alg::ouralgpar}) under perfect communication (red
solid curve) and message-dropout (dashed blue curve)
scenarios. Region $4$, which is highlighted in gray, is the area
where we emulate the message dropout.
} }\label{fig::experiment}
\end{figure}
\section{Conclusions}\vspace{-0.08in}
\blue{For a team of robots with limited computational, storage and
communication resources, we proposed a server assisted distributed CL algorithm which, under perfect communication, renders the same
localization performance as that of a joint CL using an EKF. In terms of the team size,
this algorithm only requires {$O(1)$} storage and computational cost per
robot and the main computational burden of implementing the EKF for CL
is carried out by the server. }We showed that this algorithm has robustness to occasional communication failure between robots and the server.
Here, we discarded the measurement of the robots that fail to communicate with the server.
Our future work involves utilizing these old measurements using
out-of-sequence-measurement update strategies~\cite{YBS-HC-MM:04} when
the communication link is restored between the corresponding robot and
the~server.
\vspace{-0.06in}
\bibliographystyle{ieeetr}%
\input{main.bbl}
\section{Introduction}
Under specific circumstances a gas of spin-$0$ particles (bosons) undergoes a process of phase transition where the
particles tend to reside in the lowest energy state (see, e.g., \cite{landau}). This is the well-known phenomenon of Bose-Einstein condensation \cite{bose,einstein}
which has become increasingly interesting in the last few years because of its experimental realization; see, e.g., \cite{ande95-269-198,brad95-75-1687,davi95-75-3969,inou98-392-151}. For the description of thermodynamical properties in these experiments, the case in which the Bose gas is confined by a harmonic oscillator potential is the most relevant one \cite{bagnato87,degroot50,grossmann95,kirsten96,haug97-225-18,haug97-55-2922}. However, many other configurations have also been considered. For example,
Bose-Einstein condensation has been studied in flat Minkowski space for rectangular enclosures \cite{greenspoon75,grossmann97,pajkowski77,pathria72}
and for more general, arbitrarily shaped, cavities \cite{kirsten99}.
Also, more general confining potentials than the harmonic oscillator potential have been analyzed in \cite{bagnato87,kirsten98}. Generalizations of these investigations to curved spaces have been considered. In particular studies of Bose-Einstein condensation have been performed for static Einstein manifolds in \cite{altaie78,singh84,parker91} and for higher-dimensional spheres in \cite{shiraishi87}. Moreover, Bose-Einstein condensation as a symmetry breaking phenomenon has been studied on static curved spacetimes of arbitrary spatial sections with and without boundary in \cite{smith96,toms92,toms93}; see also \cite{dowk89-327-267,kirs91-8-2239}.
In this work we utilize $\zeta$-function regularization techniques, related to the ones developed in \cite{dowker78}, in order to analyze the phenomenon of Bose-Einstein condensation
on very general manifolds constructed as a product of the $3$-dimensional Minkowski space and a $d$-dimensional smooth, compact manifold with or without boundary.
More specifically, in the 3-dimensional Minkowski space we assume a confining harmonic oscillator potential as it is used in experiments.
In particular, the dependence of the critical temperature and the specific heat on the dimension and the geometry and topology of the additional compact manifold is
explicitly obtained.
The main motivation for these studies is to understand how significant the impact of additional dimensions in particular on the critical temperature is. Given that the critical temperature can be determined to high accuracy, Bose-Einstein condensation experiments could provide a window into the world of extra dimensions by comparing experimental data with theoretical predictions in the presence of extra dimensions. This procedure has been successfully applied for example in the context of the Casimir effect and information about number and size of extra dimensions could be established; see, e.g., \cite{bord09b,chen08-668-72,fran07-76-015008,hofm04-582-1}.
It is the aim of our article to start research in this direction.
The outline of this article is as follows. In Section II we use heat kernel and zeta function techniques to analyze the partition sum of the system considered. In particular, the high temperature expansion of the partition sum in terms of heat kernel coefficients is provided. In Section III this expansion is used to find the critical temperature and specific heat of the Bose gas. In Section IV we summarize our main findings and explain how these can be used to extract information about extra dimensions present.
\section{Partition Function}
We consider a gas of $N$ non-interacting bosons of mass $M$ on a product manifold $\mathcal{M}=\mathbb{R}^{3}\times \mathcal{N}$
of dimension $D=d+3$, where the additional dimensions are modeled by a smooth, compact $d$-dimensional manifold $\mathcal{N}$ with or without boundary $\partial\mathcal{N}$.
The dynamics of the quantum mechanical system we want to consider is described by the Schr\"{o}dinger equation (here and in the following we set $\hbar=k_{B}=1$)
\begin{equation}\label{1}
\left(-\frac{1}{2M}\Delta_{\mathcal{M}}+V({\bf x})\right)\phi_{{\bf n}}=E_{{\bf n}}\phi_{{\bf n}}\;,
\end{equation}
where the coefficients $E_{{\bf n}}$ represent the energy levels and we choose $V({\bf x})$ to be a $3$-dimensional anisotropic harmonic oscillator potential
\begin{equation}\label{2}
V({\bf x})=\frac{M}{2}\left(\omega_{1}^{2}x^{2}+\omega_{2}^{2}y^{2}+\omega_{3}^{2}z^{2}\right)\;,
\end{equation}
which is the relevant choice for recent experiments. As is well known,
equation (\ref{1}) can be solved by separation of variables, and the spectrum is found to be
\begin{equation}\label{3}
E_{{\bf n},i}=\lambda_i+\sum_{k=1}^{3}\omega_{k}\left(n_{k}+\frac{1}{2}\right)\;,
\end{equation}
with $n_{k}\in\mathbb{N}_{0}$ and where $\lambda_{i}$ denotes the eigenvalues of the Laplace operator $\Delta_{\mathcal{N}}$ on the manifold $\mathcal{N}$, so
\begin{equation}\label{4}
-\Delta_{\mathcal{N}}\varphi_{i}=\lambda_{i}\varphi_{i}\;.
\end{equation}
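To illustrate the level structure of (\ref{3}), the following sketch enumerates the lowest energies for an assumed example compact factor, $\mathcal{N}=S^{1}$ of radius $a$, whose distinct Laplacian eigenvalues are $j^{2}/a^{2}$ with $j\in\mathbb{Z}$ (the frequencies and the radius are illustrative placeholders, and units are chosen so that these eigenvalues enter (\ref{3}) directly):

```python
import itertools

a = 5.0                      # assumed radius of the circle S^1
omega = (1.0, 1.5, 2.0)      # assumed trap frequencies omega_1..omega_3

# distinct eigenvalues lambda_j = (j/a)^2 of the Laplacian on S^1 (truncated)
lams = sorted({(j / a) ** 2 for j in range(-3, 4)})

# E_{n,i} = lambda_i + sum_k omega_k (n_k + 1/2), truncated to n_k < 4
levels = sorted(lam + sum(w * (n + 0.5) for w, n in zip(omega, nvec))
                for lam in lams
                for nvec in itertools.product(range(4), repeat=3))
E0 = levels[0]               # ground state: lambda = 0, all n_k = 0
```

The ground-state energy is $E_{0}=\tfrac{1}{2}\sum_{k}\omega_{k}$, and for a large radius $a$ the lowest excitations come from the compact factor rather than from the oscillator, since $1/a^{2}\ll\min_k\omega_k$.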
In order to study the thermodynamical properties of this system, the relevant object is the partition function or grand canonical potential
\begin{equation}\label{5}
q=-\sum_{i}\sum_{{\bf n}}\ln\left[1-e^{-\beta\left(E_{{\bf n},i}-\mu\right)}\right]\;,
\end{equation}
where $\mu$ denotes the chemical potential and we have introduced the standard notation $\beta=1/T$. Since the ground state is
of particular importance in the analysis of Bose-Einstein condensation, we separate its contribution from the rest of the series in (\ref{5})
and we expand the logarithm to obtain
\begin{equation}\label{6}
q=q_{0}-\sum_{m=1}^{\infty}{\sum_{i}}^{\prime}{\sum_{{\bf n}}}^{\prime}\frac{1}{m}e^{-\beta m\left(E_{{\bf n},i}-\mu\right)}\;,
\end{equation}
where the prime indicates the omission of the ground state contribution and, denoting by $g_{0}$ the degeneracy of the lowest eigenvalue $E_{0}$,
\begin{equation}\label{7}
q_{0}=-g_{0}\ln\left[1-e^{-\beta\left(E_{0}-\mu\right)}\right]\;.
\end{equation}
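Note that $N_{0}=(1/\beta)\,\partial q_{0}/\partial\mu=g_{0}/\left(e^{\beta(E_{0}-\mu)}-1\right)$ is the familiar Bose--Einstein occupancy of the ground state, which is used implicitly later when imposing $N_{0}=0$ at the critical point. As a quick numerical sanity check of this derivative (a Python sketch with illustrative parameter values of our own choosing):

```python
import math

def q0(beta, mu, E0=1.0, g0=1):
    """Ground-state contribution to the q-potential, eq. (7)."""
    return -g0 * math.log(1.0 - math.exp(-beta * (E0 - mu)))

# N0 = (1/beta) dq0/dmu should reproduce the Bose-Einstein occupancy
beta, mu, E0, g0 = 2.0, 0.3, 1.0, 1
h = 1e-6
N0_numeric = (q0(beta, mu + h, E0, g0) - q0(beta, mu - h, E0, g0)) / (2 * h * beta)
N0_closed = g0 / (math.exp(beta * (E0 - mu)) - 1.0)
assert abs(N0_numeric - N0_closed) < 1e-7
```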
The exponential that appears in the partition function (\ref{6}) can be dealt with by exploiting a Mellin-Barnes integral representation \cite{bytsenko92,elizalde95,kirsten01}
to obtain
\begin{equation}\label{8}
q=q_{0}-\sum_{m=1}^{\infty}\frac{1}{m}e^{-\beta m(\mu_{c}-\mu)}\frac{1}{2\pi i}\int\limits_{r-i\infty}^{r+i\infty}d\alpha\,\Gamma(\alpha)(\beta m)^{-\alpha}{\sum_{i}}^{\prime}{\sum_{{\bf n}}}^{\prime}\left(E_{{\bf n},i}-\mu_{c}\right)^{-\alpha}\;,
\end{equation}
with $r$ chosen in such a way that all the poles of the integrand lie to the left of the contour, and we have introduced the critical chemical potential $\mu_{c}=E_{0}$, since we are considering an ideal gas of bosons.
After these manipulations we notice that the spectral $\zeta$-function
\begin{equation}\label{9}
\zeta(\alpha)={\sum_{i}}^{\prime}{\sum_{{\bf n}}}^{\prime}\left(E_{{\bf n},i}-\mu_{c}\right)^{-\alpha}
\end{equation}
makes an appearance. The remaining
sum over $m$ in (\ref{8}) can be written in terms of polylogarithmic functions \cite{gradshtein07}
\begin{equation}\label{10}
\textrm{Li}_{n}(x)=\sum_{l=1}^{\infty}\frac{x^{l}}{l^{n}}\;,
\end{equation}
such that we are able to cast the partition function (\ref{8}) in the form \cite{kirsten96,kirsten96a,kirsten98,kirsten99}
\begin{equation}\label{11}
q=q_{0}-\frac{1}{2\pi i}\int\limits_{r-i\infty}^{r+i\infty}d\alpha\,\Gamma(\alpha)\,\beta^{-\alpha}\textrm{Li}_{\alpha+1}\left(e^{-\beta(\mu_{c}-\mu)}\right)\zeta(\alpha)\;.
\end{equation}
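The truncated series (\ref{10}) converges for $|x|\leq 1$ (requiring $n>1$ at $x=1$), and simple checks against known closed forms are easy to run; a Python sketch (function names are ours):

```python
import math

def polylog(n, x, terms=20000):
    """Truncated series for Li_n(x), eq. (10); adequate for |x| <= 1."""
    return sum(x**l / l**n for l in range(1, terms + 1))

# Li_1(x) = -ln(1 - x), a known closed form
x = 0.5
assert abs(polylog(1, x) + math.log(1.0 - x)) < 1e-12

# Li_n(1) = zeta_R(n): compare Li_2(1) with pi^2/6 (slowly convergent tail)
assert abs(polylog(2, 1.0, terms=200000) - math.pi**2 / 6) < 1e-4
```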
The integral representation (\ref{11}) is particularly suitable for an asymptotic expansion as $\beta\to 0$. Although the low-temperature regime is the pertinent one for the phenomenon of Bose-Einstein condensation, it is clear, for example from \cite{haug97-225-18,kirsten96}, that the high-temperature expansion remains valid up to the condensation point and can be used to determine the critical temperature at which condensation occurs.
The high-temperature expansion of (\ref{11}) is obtained by shifting the contour to the left. In that process we pick up contributions from poles of the spectral $\zeta$-function $\zeta(\alpha)$. The integral can then be computed by applying the Cauchy residue theorem.
The position of the poles and the corresponding residues are found by exploiting the intimate connection of $\zeta(\alpha)$ with its heat kernel
\begin{equation}\label{12}
K(t)={\sum_{i}}^{\prime}{\sum_{{\bf n}}}^{\prime}e^{-t\left(E_{{\bf n},i}-\mu_{c}\right)}\;.
\end{equation}
The heat kernel in (\ref{12}) can be factorized as $K(t)=K_{H}(t)K_{\mathcal{N}}(t)$, where $K_{H}(t)$ is the heat kernel associated with the spectrum of the anisotropic harmonic oscillator and $K_{\mathcal{N}}(t)$ is the heat kernel for the Laplacian $\Delta_{\mathcal{N}}$ on the manifold $\mathcal{N}$, modified by the critical chemical potential which acts as a constant negative potential in (\ref{12}). The small-$t$ asymptotic expansion of (\ref{12}), which encodes the residues of $\zeta (\alpha )$, can be obtained from those of $K_{H}(t)$ and $K_{\mathcal{N}}(t)$. The small-$t$ expansion of the harmonic oscillator part is obtained trivially, because the heat kernel can be computed in closed form by observing that the sums are simple infinite geometric series. For $K_{\mathcal{N}} (t)$, under the assumptions made on ${\mathcal{N}}$, the small-$t$ expansion is well known and reads \cite{gilkey95,mina53-17-158,mina49-1-242}
$$K_{\mathcal{N}} (t) \sim \frac 1 {(4\pi t)^{d/2}} \sum_{j=0,1/2,1,...}^\infty \mathscr{A}^{\mathcal{N}}_j t^j$$
with the heat-kernel coefficients $\mathscr{A}^{\mathcal{N}}_j$ of the modified Laplacian on ${\mathcal{N}}$.
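The geometric-series step can be made explicit for a single oscillator mode, where $\sum_{n\geq 0}e^{-t\omega(n+1/2)}=\left[2\sinh(\omega t/2)\right]^{-1}\sim(\omega t)^{-1}$ as $t\to 0$; a numerical sketch in Python (illustrative values):

```python
import math

def K_mode_sum(omega, t, nmax=5000):
    """Single-mode oscillator heat kernel summed as a geometric series."""
    return math.fsum(math.exp(-t * omega * (n + 0.5)) for n in range(nmax))

def K_mode_closed(omega, t):
    """Closed form e^{-t w/2}/(1 - e^{-t w}) = 1/(2 sinh(w t/2))."""
    return 1.0 / (2.0 * math.sinh(0.5 * omega * t))

omega, t = 1.3, 0.05
assert abs(K_mode_sum(omega, t) - K_mode_closed(omega, t)) < 1e-9
# leading small-t behaviour is 1/(omega t)
assert abs(K_mode_closed(omega, t) * omega * t - 1.0) < 1e-3
```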
Combining the expansions of the two factors, we obtain
\begin{equation}\label{13}
K(t)=\frac{1}{\omega_1\omega_2\omega_3(4\pi)^{\frac{d}{2}}}\sum_{k=0}^{\infty}\left(\sum_{l=0}^{[k/2]}C_{l}(\omega)\mathscr{A}^{\mathcal{N}}_{\frac{k}{2}-l}\right) t^{\frac{k-d-6}{2}}\;,
\end{equation}
where $[x]$ represents the integer part of $x$, $\mathscr{A}^{\mathcal{N}}_{l}$ are the heat kernel coefficients on the manifold $\mathcal{N}$ as given above, and the
$C_{l}(\omega)$, which come from the small-$t$ expansion of $K_{H}(t)$, have the form
\begin{equation}\label{14}
C_{l}(\omega)=(-1)^{l}\sum_{n=0}^{l}\sum_{j=0}^{n}\frac{B_{j}B_{n-j}B_{l-n}}{j!(n-j)!(l-n)!}\omega_{1}^{j}\omega_{2}^{n-j}\omega_{3}^{l-n}\;,
\end{equation}
with $B_j$ denoting the Bernoulli numbers \cite{gradshtein07}. From the knowledge of the asymptotic expansion of the heat kernel (\ref{13}),
one can show that the rightmost poles of $\zeta(\alpha)$ are located at $\alpha_{k}=(d-k)/2+3$, with $k=0,\ldots,(d+5)$ \cite{gilkey95}. Their residues are given by
\begin{equation}\label{15}
\textrm{Res}\,\zeta(\alpha_{k})=\frac{1}{\Gamma(\alpha_{k})}\sum_{l=0}^{[k/2]}C_{l}(\omega)\mathscr{A}^{\mathcal{N}}_{\frac{k}{2}-l}\;.
\end{equation}
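The first coefficients of (\ref{14}) can be checked with exact rational arithmetic: $C_{0}=1$ and $C_{1}=(\omega_{1}+\omega_{2}+\omega_{3})/2$, the latter being precisely the combination multiplying $\mathscr{A}^{\mathcal{N}}_{0}$ in the third term of (\ref{16}). A Python sketch (convention $B_{1}=-1/2$; function names are ours):

```python
from fractions import Fraction
from math import comb, factorial

def bernoulli(mmax):
    """Bernoulli numbers B_0..B_mmax (convention B_1 = -1/2) via the
    standard recurrence sum_{j=0}^{m} C(m+1, j) B_j = 0 for m >= 1."""
    B = [Fraction(1)]
    for m in range(1, mmax + 1):
        B.append(-Fraction(1, m + 1) * sum(comb(m + 1, j) * B[j] for j in range(m)))
    return B

def C_coeff(l, w, B):
    """Coefficient C_l(omega) of eq. (14), in exact rational arithmetic."""
    s = Fraction(0)
    for n in range(l + 1):
        for j in range(n + 1):
            s += (B[j] * B[n - j] * B[l - n]
                  * w[0]**j * w[1]**(n - j) * w[2]**(l - n)
                  / (factorial(j) * factorial(n - j) * factorial(l - n)))
    return (-1)**l * s

B = bernoulli(6)
w = (Fraction(1), Fraction(2), Fraction(3))
assert C_coeff(0, w, B) == 1            # C_0 = 1
assert C_coeff(1, w, B) == sum(w) / 2   # C_1 = (w1 + w2 + w3)/2
```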
Taking into account the first three rightmost poles of $\zeta(\alpha)$, we obtain the following asymptotic expansion
for the partition function valid for $\beta\to 0$
\begin{eqnarray}
q&=&q_{0}+\beta^{-\left(\frac{d+6}{2}\right)}\textrm{Li}_{\frac{d+8}{2}}\left[e^{-\beta(\mu_{c}-\mu)}\right]\frac{\mathscr{A}^{\mathcal{N}}_{0}}{\omega_1\omega_2\omega_3}
+\beta^{-\left(\frac{d+5}{2}\right)}\textrm{Li}_{\frac{d+7}{2}}\left[e^{-\beta(\mu_{c}-\mu)}\right]\frac{\mathscr{A}^{\mathcal{N}}_{1/2}}{\omega_1\omega_2\omega_3}\label{16}\\
&+&\beta^{-\left(\frac{d+4}{2}\right)}\textrm{Li}_{\frac{d+6}{2}}\left[e^{-\beta(\mu_{c}-\mu)}\right]\frac{1}{\omega_1\omega_2\omega_3}\left(\mathscr{A}^{\mathcal{N}}_{1}
+\frac{1}{2}\mathscr{A}^{\mathcal{N}}_{0}(\omega_1+\omega_2+\omega_3)\right)+O\left(\beta^{-\frac{d+3}{2}}\right)\;. \nonumber
\end{eqnarray}
The particle number $N$ is of specific importance in the analysis of Bose-Einstein condensation since it is used in order to define and compute the critical temperature.
It is well known that $N$ can be obtained from $q$ as
\begin{equation}\label{17}
N=\frac{1}{\beta}\frac{\partial q}{\partial \mu}\;,
\end{equation}
with the derivative evaluated at fixed temperature and volume. The last remark, together with the result (\ref{16}), allows us to write
\begin{eqnarray}
N&=&N_{0}+\beta^{-\left(\frac{d+6}{2}\right)}\textrm{Li}_{\frac{d+6}{2}}\left[e^{-\beta(\mu_{c}-\mu)}\right]\frac{\mathscr{A}^{\mathcal{N}}_{0}}{\omega_1\omega_2\omega_3}
+\beta^{-\left(\frac{d+5}{2}\right)}\textrm{Li}_{\frac{d+5}{2}}\left[e^{-\beta(\mu_{c}-\mu)}\right]\frac{\mathscr{A}^{\mathcal{N}}_{1/2}}{\omega_1\omega_2\omega_3}\label{18}\\
&+&\beta^{-\left(\frac{d+4}{2}\right)}\textrm{Li}_{\frac{d+4}{2}}\left[e^{-\beta(\mu_{c}-\mu)}\right]\frac{1}{\omega_1\omega_2\omega_3}\left(\mathscr{A}^{\mathcal{N}}_{1}
+\frac{1}{2}\mathscr{A}^{\mathcal{N}}_{0}(\omega_1+\omega_2+\omega_3)\right)+O\left(\beta^{-\frac{d+3}{2}}\right)\;.\nonumber
\end{eqnarray}
The partition function $q$ also provides the energy of the system through the relation
\begin{equation}\label{18a}
U=\left\{-\frac{\partial}{\partial\beta}+\frac{\mu}{\beta}\frac{\partial}{\partial\mu}\right\}q\;,
\end{equation}
and it reads
\begin{eqnarray}\label{18b}
U&=&U_0+\left(\frac{d+6}{2}\right)\beta^{-\left(\frac{d+8}{2}\right)}\textrm{Li}_{\frac{d+8}{2}}\left[e^{-\beta(\mu_{c}-\mu)}\right]\frac{\mathscr{A}^{\mathcal{N}}_{0}}{\omega_1\omega_2\omega_3}
+\left(\frac{d+5}{2}\right)\beta^{-\left(\frac{d+7}{2}\right)}\textrm{Li}_{\frac{d+7}{2}}\left[e^{-\beta(\mu_{c}-\mu)}\right]\frac{\mathscr{A}^{\mathcal{N}}_{1/2}}{\omega_1\omega_2\omega_3}\nonumber\\
&+&\left(\frac{d+4}{2}\right)\beta^{-\left(\frac{d+6}{2}\right)}\textrm{Li}_{\frac{d+6}{2}}\left[e^{-\beta(\mu_{c}-\mu)}\right]\frac{1}{\omega_1\omega_2\omega_3}\left(\mathscr{A}^{\mathcal{N}}_{1}
+\frac{1}{2}\mathscr{A}^{\mathcal{N}}_{0}(\omega_1+\omega_2+\omega_3)\right)\nonumber\\
&+&\mu_{c}\beta^{-\left(\frac{d+6}{2}\right)}\textrm{Li}_{\frac{d+6}{2}}\left[e^{-\beta(\mu_{c}-\mu)}\right]\frac{\mathscr{A}^{\mathcal{N}}_{0}}{\omega_1\omega_2\omega_3}
+\mu_{c}\beta^{-\left(\frac{d+5}{2}\right)}\textrm{Li}_{\frac{d+5}{2}}\left[e^{-\beta(\mu_{c}-\mu)}\right]\frac{\mathscr{A}^{\mathcal{N}}_{1/2}}{\omega_1\omega_2\omega_3}\nonumber\\
&+&\mu_{c}\beta^{-\left(\frac{d+4}{2}\right)}\textrm{Li}_{\frac{d+4}{2}}\left[e^{-\beta(\mu_{c}-\mu)}\right]\frac{1}{\omega_1\omega_2\omega_3}\left(\mathscr{A}^{\mathcal{N}}_{1}
+\frac{1}{2}\mathscr{A}^{\mathcal{N}}_{0}(\omega_1+\omega_2+\omega_3)\right)
+O\left(\beta^{-\frac{d+3}{2}}\right)\;.
\end{eqnarray}
This expression for the energy will be used, in what follows, in order to compute the specific heat of the Bose gas.
\section{Critical Temperature and Specific Heat}
The critical temperature $T_{c}=1/\beta_{c}$, at which the condensate starts to appear, is obtained from the particle number $N$ in (\ref{18}) by setting $N_{0}=0$.
When the temperature of the system approaches the critical temperature we have $\mu\sim\mu_{c}$ and, hence, we can use the Taylor expansion, valid for $n>2$,
\begin{equation}\label{19}
\textrm{Li}_{n}(e^{-x})=\zeta_{R}(n)-x\zeta_{R}(n-1)+O(x^{2})\;.
\end{equation}
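The expansion (\ref{19}) is straightforward to verify numerically; for $n=4$ and small $x$ the remainder is indeed of order $x^{2}$ (a Python sketch using truncated series for $\textrm{Li}_{n}$ and $\zeta_{R}$):

```python
import math

def polylog(n, x, terms=6000):
    """Truncated series for Li_n(x), eq. (10)."""
    return sum(x**l / l**n for l in range(1, terms + 1))

def zeta(n, terms=6000):
    """Truncated series for the Riemann zeta function."""
    return sum(1.0 / l**n for l in range(1, terms + 1))

# Li_n(e^-x) = zeta(n) - x*zeta(n-1) + O(x^2), eq. (19), for n > 2
n, x = 4, 0.01
lhs = polylog(n, math.exp(-x))
rhs = zeta(n) - x * zeta(n - 1)
assert abs(lhs - rhs) < 1e-4   # remainder is O(x^2), roughly zeta(2) x^2 / 2
```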
In these circumstances, the critical temperature is approximately determined by the relation
\begin{eqnarray}\label{20}
N&=&\beta^{-\left(\frac{d+6}{2}\right)}\zeta_{R}\left(\frac{d+6}{2}\right)\frac{\mathscr{A}^{\mathcal{N}}_{0}}{\omega_1\omega_2\omega_3}
+\beta^{-\left(\frac{d+5}{2}\right)}\zeta_{R}\left(\frac{d+5}{2}\right)\frac{\mathscr{A}^{\mathcal{N}}_{1/2}}{\omega_1\omega_2\omega_3}\nonumber\\
&+&\beta^{-\left(\frac{d+4}{2}\right)}\zeta_{R}\left(\frac{d+4}{2}\right)\frac{1}{\omega_1\omega_2\omega_3}\left(\mathscr{A}^{\mathcal{N}}_{1}
+\frac{1}{2}\mathscr{A}^{\mathcal{N}}_{0}(\omega_1+\omega_2+\omega_3)\right)+O\left(\beta^{-\frac{d+3}{2}}\right)\;,
\end{eqnarray}
and it can be found to be
\begin{equation}\label{21}
T_{c}=T_{0}\left\{1-\frac{2\zeta_{R}\left(\frac{d+5}{2}\right)\mathscr{A}^{\mathcal{N}}_{1/2}}{(d+6)(\omega_1\omega_2\omega_3)^{\frac{1}{d+6}}
\zeta_{R}\left(\frac{d+6}{2}\right)^{\frac{d+5}{d+6}}\left(\mathscr{A}_{0}^{\mathcal{N}}\right)^{\frac{d+5}{d+6}}}N^{-\frac{1}{d+6}}\right\}\;,
\end{equation}
if $\partial\mathcal{N}\neq\emptyset$, and
\begin{equation}\label{22}
T_{c}=T_{0}\left\{1-\frac{2\zeta_{R}\left(\frac{d+4}{2}\right)\left[\mathscr{A}^{\mathcal{N}}_{1}+\frac{1}{2}\mathscr{A}^{\mathcal{N}}_{0}(\omega_1+\omega_2+\omega_3)\right]}
{(d+6)(\omega_1\omega_2\omega_3)^{\frac{2}{d+6}}
\zeta_{R}\left(\frac{d+6}{2}\right)^{\frac{d+4}{d+6}}\left(\mathscr{A}_{0}^{\mathcal{N}}\right)^{\frac{d+4}{d+6}}}N^{-\frac{2}{d+6}}\right\}\;,
\end{equation}
when $\partial\mathcal{N}=\emptyset$. We would like to point out that in (\ref{21}) and (\ref{22}) we have defined, for brevity,
\begin{equation}\label{23}
T_{0}=(\omega_1\omega_2\omega_3)^{\frac{2}{d+6}}\left[\frac{N}{\zeta_{R}\left(\frac{d+6}{2}\right)\mathscr{A}_{0}^{\mathcal{N}}}\right]^{\frac{2}{d+6}}\;.
\end{equation}
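For orientation, equations (\ref{22}) and (\ref{23}) are easy to evaluate numerically. The sketch below uses purely illustrative inputs ($d=2$, unit frequencies, and invented values for $\mathscr{A}^{\mathcal{N}}_{0}$ and $\mathscr{A}^{\mathcal{N}}_{1}$), not results for any specific manifold:

```python
def zeta(s, terms=20000):
    """Truncated series for the Riemann zeta function."""
    return sum(1.0 / l**s for l in range(1, terms + 1))

def T0(N, w, d, A0):
    """Leading-order condensation temperature, eq. (23)."""
    w3 = w[0] * w[1] * w[2]
    return w3**(2.0 / (d + 6)) * (N / (zeta((d + 6) / 2) * A0))**(2.0 / (d + 6))

def Tc_closed(N, w, d, A0, A1):
    """Finite-N corrected T_c for a boundaryless manifold N, eq. (22)."""
    w3 = w[0] * w[1] * w[2]
    corr = (2 * zeta((d + 4) / 2) * (A1 + 0.5 * A0 * sum(w))
            / ((d + 6) * w3**(2.0 / (d + 6))
               * zeta((d + 6) / 2)**((d + 4.0) / (d + 6))
               * A0**((d + 4.0) / (d + 6))))
    return T0(N, w, d, A0) * (1.0 - corr * N**(-2.0 / (d + 6)))

# illustrative numbers only: d = 2 extra dimensions, unit frequencies
w, d, A0, A1, N = (1.0, 1.0, 1.0), 2, 1.0, 0.1, 1.0e4
assert 0.0 < Tc_closed(N, w, d, A0, A1) < T0(N, w, d, A0)
```

As expected, the finite-$N$ correction lowers the critical temperature and vanishes as $N\to\infty$.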
Equations (\ref{21}) and (\ref{22}) show in detail how the critical-temperature correction due to the finite number of particles depends on the properties of the extra dimensions and, of course, on the frequencies of the harmonic oscillator potential.
In some detail, the leading heat kernel coefficients are well known \cite{gilkey95}: $\mathscr{A}^{\mathcal N}_0$ equals the volume of $\mathcal N$, $\mathscr{A}^{\mathcal N}_{1/2}$ is proportional to the volume of the boundary of $\mathcal N$, and higher-order terms involve curvature tensors of the manifold $\mathcal N$ and its boundary. Thus a measurement of the critical temperature as a function of the finite particle number $N$ could reveal properties of the extra dimensions, such as their number and size.
A further interesting thermodynamical quantity associated with the system is the specific heat. It is derived from the energy according to the relation
\begin{equation}\label{24}
C=\frac{\partial U}{\partial T}\;,
\end{equation}
where the derivative is taken keeping both the number of particles and the volume constant.
From (\ref{18b}) and (\ref{24}), near the critical temperature $T_c$, one has
\begin{eqnarray}\label{25}
C&=&\frac{(d+8)(d+6)}{4\omega_1\omega_2\omega_3}\beta^{-\left(\frac{d+6}{2}\right)}\zeta_{R}\left(\frac{d+8}{2}\right)\mathscr{A}^{\mathcal{N}}_{0}
+\frac{(d+7)(d+5)}{4\omega_1\omega_2\omega_3}\beta^{-\left(\frac{d+5}{2}\right)}\zeta_{R}\left(\frac{d+7}{2}\right)\mathscr{A}^{\mathcal{N}}_{1/2}\nonumber\\
& & +O\left(\beta^{-\frac d 2 -2}\right)\;,
\end{eqnarray}
when the manifold $\mathcal{N}$ has a nonempty boundary, $\partial\mathcal{N}\neq\emptyset$, and
\begin{eqnarray}\label{26}
C&=&\frac{(d+8)(d+6)}{4\omega_1\omega_2\omega_3}\beta^{-\left(\frac{d+6}{2}\right)}\zeta_{R}\left(\frac{d+8}{2}\right)\mathscr{A}^{\mathcal{N}}_{0}\nonumber\\
&+&\frac{(d+6)(d+4)}{4\omega_1\omega_2\omega_3}\beta^{-\left(\frac{d+4}{2}\right)}\zeta_{R}\left(\frac{d+6}{2}\right)\left(\mathscr{A}^{\mathcal{N}}_{1}
+\frac{1}{2}\mathscr{A}^{\mathcal{N}}_{0}(\omega_1+\omega_2+\omega_3)\right)\nonumber\\
&-&\frac{(\mu_c-\mu)^{2}(d+6)^{2}}{4g_{0}(\omega_1\omega_2\omega_3)^{2}}\beta^{-d-4}\zeta_{R}\left(\frac{d+6}{2}\right)^{2}\left(\mathscr{A}^{\mathcal{N}}_{0}\right)^{2}
+O\left(\beta^{-\frac d 2 -1}\right)\;,
\end{eqnarray}
for $\partial\mathcal{N}=\emptyset$. The results that we have obtained for the critical temperature (\ref{21})-(\ref{22}) and for the specific heat (\ref{25})-(\ref{26}) are
very general and hold for an arbitrary smooth, compact manifold $\mathcal{N}$. More explicit formulas can be obtained once the manifold $\mathcal{N}$ has been specified.
In that case, only the first few heat kernel coefficients are needed, and these are well known for a wide variety of manifolds with and without boundary \cite{kirsten01,vassilevich03}.
\section{Final Remarks}
In this work we have presented an analysis of Bose-Einstein condensation on product manifolds by utilizing $\zeta$-function
regularization techniques. The method proves to be very efficient: close to the condensation temperature, the general results for the critical temperature and the specific heat depend only on the first few, readily available, heat kernel coefficients associated with the base manifold $\mathcal{N}$. It is important to note that the dependence of the critical temperature on the finite number $N$ of particles reveals geometrical and topological properties of the extra Kaluza-Klein dimensions; see equations (\ref{21}) and (\ref{22}).
It would be of particular interest to study in more detail the cases in which the manifold $\mathcal{N}$ is either a $d$-dimensional sphere or a $d$-dimensional torus.
In these situations the spectrum $\lambda_{i}$ is explicitly known and for the sphere one would be led to deal with Barnes $\zeta$-functions, while for the torus Epstein $\zeta$-functions would be the
relevant objects. We plan to investigate these cases in future work. Furthermore, similar calculations should be performed for string-inspired models, so that future experiments could possibly determine features of extra dimensions in these models too.\\[.3cm]
\noindent
{\bf Acknowledgments:} KK acknowledges support by the National Science Foundation Grant PHY-0757791.
\section{Introduction}
Massive stars have a dramatic influence on their surroundings. Due to their strong stellar winds and ionizing flux, they create bubbles/\ion{H}{II} regions which are routinely detected at mid-infrared (mid-IR) wavelengths \citep{churchwell06}. These bubbles show spatially coincident emission at mid-IR wavelengths such as {\it Spitzer} MIPS 24 $\mu$m, arising from heated dust grains, and in radio continuum such as 20 cm, due to ionized hydrogen in the bubble interiors \citep{deharveng10}. The bubble rims, on the other hand, are defined by the emission due to polycyclic aromatic hydrocarbons (PAHs) visible at certain mid-IR wavelengths including {\it Spitzer} IRAC 8.0 $\mu$m or WISE 12 $\mu$m \citep{churchwell06,deharveng10,kendrew16}. A strong correlation is found to exist between mid-IR bubbles and cold dense clumps in which star formation is likely to occur \citep{kendrew16}. An expanding \ion{H}{II} region may trigger star formation via the radiation-driven implosion (RDI) mechanism \citep{bertoldi89} or the collect-and-collapse (C\&C) process \citep{elmegreen77}.
Mid-IR bubbles were studied by \citet{churchwell06} using the {\it Spitzer} survey, Galactic Legacy Infrared Mid-Plane Survey Extraordinaire (GLIMPSE). They found that one-fourth of the bubbles in their sample had broken morphology, which they attributed to a lower density of the ambient interstellar medium (ISM) and/or a higher ionizing photon flux in the open directions. Whereas \citet{churchwell06} had found only 25\% of bubbles associated with \ion{H}{II} regions, \citet{deharveng10} found that as many as 86\% of bubbles enclose \ion{H}{II} regions. Moreover, they found that 40\% of the bubbles were surrounded by cold dust detected at 870 $\mu$m, whereas 28\% contained interacting condensations. More recently, \citet{kendrew16} examined cold dense clumps detected by ATLASGAL in and around the inner Galactic plane under the Milky Way Project. In their comprehensive study, they found that $\sim$48\% of the cold dense clumps are located in close proximity to bubbles, and among them $\sim$25\% appear projected toward bubble rims. As star-forming clouds are often fractal and clumpy, an investigation of which mechanism dominates star formation at the borders of \ion{H}{II} regions requires an understanding of the physical connection and interaction of the bubbles/\ion{H}{II} regions with the cold ISM, and of their association with the stellar/protostellar content and the timescales involved.
IRAS\,10427-6032 was first studied by \citet{kerber00} along with six other planetary nebula (PN) candidates. On the basis of imaging and spectroscopic observations, \citet{kerber00} concluded that IRAS\,10427-6032 is an \ion{H}{II} region, rather than a PN. The flux densities of IRAS\,10427-6032 as measured by IRAS are 3.24(0.16), 8.47(0.42), 92.8(1.29), and 144(2.16) Jy at 12, 25, 60, and 100 $\mu$m, respectively, where the numbers in parentheses are the errors in the flux densities. Recently, it was identified by \citet{anderson14} as an \ion{H}{II} region based on the all-sky mid-IR data from WISE. It is located at the southern edge of the Carina Nebula, a highly complex and massive star-forming region of our Galaxy. The Carina Nebula is well known for its extreme stellar content, including 70 known O stars, 127 B0--B3 stars, 3 Wolf-Rayet stars, and the prototypical luminous blue variable $\eta$ Carinae \citep{walborn95}. It has also been recognized as a prolific stellar nursery \citep{smith06}. Numerous examples of ongoing star formation are found throughout the Nebula despite the ``hostile'' environment, particularly so in the southern pillar of the Nebula \citep{megeath96,smith10,sanchawala07a,rathborne02}. Using a large 6.7 square degree deep near-IR imaging survey of the Nebula -- the VISTA Carina Nebula Survey (VCNS) -- \citet{preibisch14a} discovered several previously unknown embedded clusters/groups of YSO candidates \citep{zeidler16}. One such group, J104437.6-604756, is found near the southern edge of the Nebula, at about 1\fdg3 from the massive star $\eta$ Carinae and close to ($\sim$15${\hbox{$^{\prime\prime}$}}$) IRAS\,10427-6032. The near-IR images show the presence of a faint cluster here.
We found that IRAS\,10427-6032 features a broken bubble. Moreover, a compact (semi-major axis $\sim$11${\hbox{$^{\prime\prime}$}}$, semi-minor axis $\sim$5${\hbox{$^{\prime\prime}$}}$) and moderately bright (integrated flux of 3.11 Jy) cold dense clump detected by ATLASGAL, AGAL288.069-01.645, is located at an angular distance of $\sim$15${\hbox{$^{\prime\prime}$}}$ from the IRAS source \citep{contreras13}. Figure~1 shows the 2${\hbox{$^\prime$}} \times$2${\hbox{$^\prime$}}$ field around IRAS\,10427-6032 -- the {\it Spitzer} 4.5 $\mu$m image in 1a, the VISTA $K_s$ band in 1b, and the WISE RGB image (4.6 $\mu$m in blue, 12 $\mu$m in green, and 22 $\mu$m in red) in 1c. With the morphology of a broken bubble, the presence of an \ion{H}{II} region, and a cold dust condensation detected at 870 $\mu$m, it is an interesting object in which to study star formation and to investigate the role of the expanding \ion{H}{II} region in the ongoing star-formation activity. The young stellar populations of the embedded cluster and the properties of the bubble/\ion{H}{II} region/cluster have not been studied in the literature. In this work, we assume that the region is at the same distance (2.3 kpc) as the Carina Nebula \citep{walborn95}.
We present an analysis of this region using archival and published data at multiple wavelengths, including near-IR data from the VCNS, mid-IR data from {\it Spitzer} and WISE, far-IR data from {\it Herschel}, and radio-continuum data from the Molonglo Galactic Plane Survey (MGPS). The rest of the paper is organized as follows: \S{2} describes the archival data used in this work, \S{3} presents our results, and \S{4} gives the conclusions of our work.
\begin{figure}
\begin{center}
\includegraphics[width=7cm]{Fig1a.jpg}\par
\includegraphics[width=8cm]{Fig1b.jpeg}\par
\includegraphics[width=7cm]{Fig1c.jpg}\par
\caption{The 2${\hbox{$^\prime$}} \times$2${\hbox{$^\prime$}}$ (a) {\it Spitzer} 4.5 $\mu$m image, (b) VISTA $K_s$-band image, and (c) the RGB (WISE 4.6 $\mu$m in blue, 12 $\mu$m in green, and 22 $\mu$m in red) image of the region. Sources marked with a box and a circle on the {\it Spitzer} 4.5 $\mu$m and VISTA $K_s$-band images are discussed in the text (\S\,3.2). The big circle of diameter 1\farcm6 ($\sim$1 pc assuming the distance to the region to be 2.3 kpc) shows the extent of the bubble. The plus symbol marks the position of IRAS\,10427-6032. The partial ring structure of the region is clearly visible at {\it Spitzer} 4.5 $\mu$m and at WISE 12 $\mu$m. North is up and east is to the left in the images.}
\end{center}
\end{figure}
\section{Archival Data}
We study an area of 5${\hbox{$^\prime$}} \times$5${\hbox{$^\prime$}}$ around the position of IRAS\,10427-6032 for our analysis in this work.
\subsection{Near-IR Data}
The VCNS survey \citep{preibisch14a} was conducted using the 4 m Visible and Infrared Survey Telescope for Astronomy \citep{emerson06} to obtain a deep 2 $\times$ 2 tile image mosaic covering a total sky area of $\sim$6.7 square degrees ($\sim$2\fdg3 $\times$ 2\fdg9) of the Carina Nebula in the $J$, $H$, and $K_s$ bands. The final VCNS catalogue contains 3,951,580 sources detected in any two of the three bands, with 5$\sigma$ magnitude limits of $\sim$20.0, 19.4, and 18.5 mag in the $J$, $H$, and $K_s$ bands, respectively. At the brighter end, stars with magnitudes less than $J=$11.8, $H=$11.2, and $K_s=$10.5 mag are expected to be in the nonlinear or saturated regimes of the detectors. For these brighter stars, 2MASS magnitudes \citep{skrutskie06} are used. The complete 5${\hbox{$^\prime$}} \times$ 5${\hbox{$^\prime$}}$ data were unavailable in the VCNS, as the region lies near the edge of the observed VCNS field. In particular, the survey did not cover the southern $\sim$1\farcm5 region. We thus downloaded a 5${\hbox{$^\prime$}} \times$3\farcm5 catalogue centered on the position of IRAS\,10427-6032 from \citet{preibisch14a} using the VizieR\footnote{http://vizier.u-strasbg.fr/viz-bin/VizieR} catalogue access tool. We also downloaded an image of the same field in the $K_s$ band from the VISTA archive\footnote{http://horus.roe.ac.uk/vsa/dbaccess.html}. For the southern 5${\hbox{$^\prime$}} \times$1\farcm5 area that lacked coverage in the VCNS, we downloaded sources from the 2MASS catalogue. The 2MASS Point Source Catalog \citep{skrutskie06} has 10$\sigma$ detection limits of $J \sim$15.8 mag, $H \sim$15.1 mag, and $K_s \sim$14.3 mag.
From the retrieved VCNS catalogue, there are 2570 detections in the $J$ band, 2715 in the $H$ band, and 2496 detections in the $K_s$ band. For the purpose of our analysis of the near-IR color-color diagram and identification of YSO candidates, we discarded all detections in the $J$, $H$, and $K_s$ bands with signal-to-noise ratio (SN) $<$ 10. This left us with 1888 sources in the $J$ band, 1958 in the $H$ band, and 1833 in the $K_s$ band from the VCNS. From the 2MASS we downloaded 62 sources simultaneously detected in the three bands with SN $>$ 10.
\begin{figure*}
\begin{center}
\includegraphics[width=\columnwidth]{Fig2a.pdf}
\includegraphics[width=\columnwidth]{Fig2b.pdf}
\caption{(a) The near-IR color-color diagram of sources detected in 5${\hbox{$^\prime$}} \times$5${\hbox{$^\prime$}}$ field centered on IRAS\,10427-6032 with SN $>$ 10 in VCNS and in 2MASS for a sub-region with no data within the VCNS. Colors of main-sequence dwarfs and giants are from \citet{bessell88}, and the locus of unreddened CTTS is from \citet{meyer97}. The reddening lines are plotted with a slope of 1.89 as determined by \citet{zeidler16} for this line-of-sight sources. The arrow depicts extinction A$_v$ = 5 Mag. The regions where Class I and Class II YSO candidates are found are marked on the graph. All sources are shown as grey open circles. (b) The near-IR color-color diagram of the reference field. All curves and lines are plotted in an identical manner as in (a).}
\end{center}
\end{figure*}
\subsection{Mid-IR Data}
We made use of the data from two {\it Spitzer} surveys, the Vela Carina Survey \citep{majewski07}, and the Deep Glimpse Survey \citep{whitney11}. The Vela Carina Survey covered the Galactic longitudes 255$^{\circ}$--295$^{\circ}$ for a latitude width of about 2$^{\circ}$, encompassing 86 square degrees of the Carina and Vela regions of the Galactic plane \citep{majewski07}. This area was observed in all the four IRAC bands centered at 3.6, 4.5, 5.8, and 8.0 $\mu$m. For the Deep Glimpse project, {\it Spitzer} observed the regions, 25$^{\circ} <$ l $<$ 65$^{\circ}$, 0$^{\circ} <$ b $<$ +2\fdg7, and 265$^{\circ} <$ l $<$ 350$^{\circ}$, $-$2$^{\circ} <$ b $<$ +0\fdg1 in the two IRAC bands 3.6 and 4.5 $\mu$m only \citep{whitney11}. For both the surveys, two types of source lists are available for download in the InfraRed Science Archive\footnote{This research has made use of the NASA/ IPAC Infrared Science Archive, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.}, a highly reliable point source catalogue, and a more complete (but less reliable) point source catalogue. The images available from the Vela Carina Survey of this region had IRAS 10427-6032 at the edge of the observed pointing (\#28850) in all the four bands. Moreover, in the 4.5 $\mu$m and the 8.0 $\mu$m images, the observed field around our target had a defect and did not load upon downloading. Due to these issues, we found no detections either in the highly reliable or in the most complete point-source catalogue in [4.5] and [8.0] bands. We thus largely made use of the 3.6 and 4.5 $\mu$m images and catalogues from the Deep Glimpse Survey in this work.
\subsection{Far-IR Data}
We used the data taken with the Photodetector Array Camera and Spectrometer \citep[PACS;][]{pogli10}
and Spectral and Photometric Imaging Receiver \citep[SPIRE;][]{griffin10} of the {\it Herschel} Space Observatory, as a part of
the Proposal ID `OT1\,tpreibis\,1' (PI: Thomas Preibisch). For our analyses, we obtained the PACS 70 and 160 $\mu$m level-2.5 maps (processed with SPG v14.2.0) and the SPIRE 250, 350, and 500 $\mu$m extended calibrated level-3 (processed with SPG v14.1.0) maps for a 5\farcm5 $\times$ 5\farcm5 area centered on IRAS 10427-6032, from the {\it Herschel} Science Archive\footnote{http://www.cosmos.esa.int/web/herschel/science-archive}. The angular resolutions of these maps are
10${\hbox{$^{\prime\prime}$}}$, 13${\hbox{$^{\prime\prime}$}}$, 20${\hbox{$^{\prime\prime}$}}$, 26${\hbox{$^{\prime\prime}$}}$ and 36${\hbox{$^{\prime\prime}$}}$ at 70, 160, 250, 350 and 500 $\mu$m, respectively \citep[see][]{pre12}.
We note that the SPIRE maps are in the unit of MJy sr$^{-1}$, while the PACS maps are in the unit of Jy pixel$^{-1}$.
\subsection{Radio Continuum Data}
The second epoch Molonglo Galactic Plane Survey (MGPS-2), carried out with the Molonglo Observatory Synthesis Telescope, surveyed the Galactic longitudes 245$^{\circ}-$365$^{\circ}$ for Galactic latitudes $\lvert{b}\rvert<$ 10$^{\circ}$ at 843 MHz \citep{murphy07}. The survey provides 4\fdg3 $\times$ 4\fdg3 mosaic images with 43$\times$43 cosec$\lvert{\delta}\rvert$ arcsec$^{2}$ resolution. We downloaded the original processed image for this region from their website\footnote{http://www.astrop.physics.usyd.edu.au/mosaics/}. We made use of the image to derive the physical parameters of the source, as well as to study the overall morphology of the region.
\section{Results}
\subsection{Identification of YSO candidates}
We made use of the near-IR and mid-IR data to identify YSO candidates in the region. We first plotted a J$-$H vs. H$-K_s$ color-color diagram (see Figure~2a) of sources detected in all three bands, $J$, $H$, and $K_s$, with SN $>$ 10. There are a total of 2030 such sources (1968 from the VCNS and 62 from 2MASS). The reddening lines are plotted using a slightly steeper value of the slope of the reddening law, 1.86, as compared to 1.69 from \citet{rieke85}. This value of the slope was found by \citet{zeidler16} to most accurately fit this line of sight for the complete 6.7 square degree field encompassing the whole Carina Nebula. The reddening lines originating from the tip of the giant branch and the root of the main-sequence dwarf locus form the main-sequence reddening band. Sources falling in this reddening band are likely field stars or an evolved population of cluster members with little or no near-IR excess \citep{lada92}. Those falling beyond this reddening band, on the redder side, are the ones exhibiting near-IR excesses. We plotted a third reddening line originating from the tip of the empirical CTTS locus \citep{meyer97}. Regions occupied by the reddened Class II and Class I YSOs \citep{lada92} are labeled in the figure. We found 68 sources with near-IR excess, of which 23 are Class II candidates. For comparison, the near-IR color-color diagram of a reference field of the same size as the target field, centered on $RA=$161\fdg07337, $Dec.=-$60\fdg74447, shows only 3 sources in the region occupied by reddened Class II YSOs (Figure~2b). The remaining 45 sources with near-IR excesses occupy the region where Herbig Ae/Be stars are found \citep{hillenbrand92}. Some of these sources could be Herbig Ae/Be type stars; however, comparison with Fig.~2b suggests that a fraction of them could also be contaminants or evolved Ae/Be stars. We thus do not include these 45 sources with small near-IR excesses in our discussion henceforth.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{Fig3.pdf}
\caption{The color-color diagram of dereddened K$_s-$[3.6] versus [3.6]$-$[4.5] colors of all detections with SN $>$ 10 for the 5${\hbox{$^\prime$}} \times$5${\hbox{$^\prime$}}$ field centered on IRAS\,10427-6032. The regions occupied by Class I and Class II YSO candidates according to the \citet{gutermuth09} criteria are demarcated on the figure.}
\end{center}
\end{figure}
Our field of interest (5${\hbox{$^\prime$}} \times$ 5${\hbox{$^\prime$}}$ around the IRAS source) did not have a complete coverage in the Vela-Carina Survey, and additionally suffered a defect in the 4.5 and 8.0 $\mu$m images. The Deep Glimpse survey, on the other hand, was carried out in only two of the four IRAC bands, 3.6 and 4.5 $\mu$m. Thus, we could not make use of any color-color diagram involving mid-IR bands alone to identify YSO candidates \citep{allen04,gutermuth09}. We combined the mid-IR data from the Deep Glimpse Survey of {\it Spitzer} with the near-IR data from the VCNS to identify additional YSO candidates. We cross-matched the sources from the two catalogues within 1${\hbox{$^{\prime\prime}$}}$ search radius to identify counterparts. The color-color diagram of unreddened K$_s-$[3.6] vs. [3.6]$-$[4.5] colors of the cross-identified sources is shown in Figure~3. In order to estimate the unreddened $K_s-$[3.6] and [3.6]$-$[4.5] colors from the corresponding observed colors, we first used the J$-$H vs. H$-K_s$ color-color diagram (Figure~2a) to find the line-of-sight extinction to the region. In particular, we dereddened all the sources falling in the reddening band up to a baseline, plotted parallel to the main-sequence dwarf locus (K5--M5), to find the color excesses, E(H$-K_s$). Then we estimated the dereddened K$_s-$[3.6] colors of sources using the relation, ${E_{(H-K_s)}}/{E_{(K_s-[3.6])}}=1.49$, and the dereddened [3.6]$-$[4.5] colors of sources using the relation, ${E_{(H-K_s)}}/{E_{([3.6]-[4.5])}}=1.17$ \citep{flaherty07}. To select Class II YSO candidates from this plot, we followed \citet{gutermuth09} criteria as detailed below:
{\footnotesize
\begin{align*}
[[3.6]-[4.5]]_0 - \sigma_1 & > 0.101 \\
[K_s-[3.6]]_0 - \sigma_2 & > 0 \\
[K_s-[3.6]]_0 - \sigma_2 & > -2.85714 \times ([[3.6]-[4.5]]_0 - \sigma_1 - 0.101) + 0.5
\end{align*}
}%
Here $\sigma_1$ and $\sigma_2$ are the propagated 1-$\sigma$ uncertainties of the two colors, obtained by error propagation from the photometric errors in the three magnitudes $K_s$, [3.6], and [4.5]:
$\sigma_1=\sigma_{[3.6]-[4.5]}$ and $\sigma_2=\sigma_{K_s-[3.6]}$.
Forty sources satisfied this set of criteria and are thus Class II YSO candidates. Five of these sources are found to satisfy an additional criterion and are Class I YSO candidates:
{\footnotesize
\begin{align*}
[K_s-[3.6]]_0 - \sigma_2 > -2.85714 \times ([[3.6]-[4.5]]_0 - \sigma_1 -0.101) + 0.5
\end{align*}
}%
To further reduce faint extragalactic contaminants, we imposed a cut based on the dereddened 3.6 $\mu$m magnitude, as employed by \citet{gutermuth09}: we discarded all sources with [3.6]$_{0} > 15$ mag for Class II candidates and [3.6]$_{0} > 14.5$ mag for Class I candidates. This leaves us with a total of 11 candidates, of which 5 are Class I and 6 are Class II candidates. Combining with the YSO candidates from Figure~2a, we have a total of 29 Class II and 5 Class I candidates. Our strict criteria may eliminate some of the genuine YSOs of the region; however, we prefer to use this secure sample of YSO candidates to study the region.
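The selection chain described above (deredden the colors with the \citet{flaherty07} extinction ratios, propagate the photometric errors into the color uncertainties, apply the \citet{gutermuth09} Class II color cuts, and then the brightness cut) can be sketched as follows. The function layout and the example numbers are illustrative assumptions, not the authors' actual pipeline.

```python
import math

def deredden_colors(ks_36, m36_45, e_h_ks):
    """Deredden the observed colors with the Flaherty et al. (2007)
    extinction ratios E(H-Ks)/E(Ks-[3.6]) = 1.49 and
    E(H-Ks)/E([3.6]-[4.5]) = 1.17."""
    return ks_36 - e_h_ks / 1.49, m36_45 - e_h_ks / 1.17

def is_class_ii_candidate(ks_36_0, m36_45_0, sig_ks, sig_36, sig_45,
                          m36_0):
    """Gutermuth et al. (2009) Class II color cuts on the dereddened
    colors, plus the [3.6]_0 <= 15 mag cut against faint
    extragalactic contaminants."""
    s1 = math.hypot(sig_36, sig_45)   # sigma of the [3.6]-[4.5] color
    s2 = math.hypot(sig_ks, sig_36)   # sigma of the Ks-[3.6] color
    return (m36_45_0 - s1 > 0.101
            and ks_36_0 - s2 > 0
            and ks_36_0 - s2 > -2.85714 * (m36_45_0 - s1 - 0.101) + 0.5
            and m36_0 <= 15.0)

# A red, well-detected source passes; a colorless one does not.
c1, c2 = deredden_colors(ks_36=1.80, m36_45=0.60, e_h_ks=0.30)
print(is_class_ii_candidate(c1, c2, 0.03, 0.03, 0.03, 13.0))   # True
print(is_class_ii_candidate(0.1, 0.0, 0.03, 0.03, 0.03, 13.0))  # False
```
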
\citet{marton16} employed a support vector machine algorithm to identify YSO candidates using all-sky data from WISE and 2MASS. Seven of their YSO candidates fall in the region studied in this work. In our classification scheme, we retrieved three of these YSO candidates, whereas the remaining four sources turned out to be non-YSOs according to our criteria. We made use of the multi-wavelength information to ascertain the nature of these four sources. One of them ($RA$ = 161\fdg1763118, $Dec.=-$60\fdg792419) lacked detection in the {\it Spitzer} IRAC [3.6] and [4.5] bands and thus could not be a YSO. Two sources, $RA$ = 161\fdg390312, $Dec.=-$60\fdg788896, and $RA$ = 161\fdg1298658, $Dec.=-$60\fdg8153006, showed neither near- or mid-IR excess nor any reddening; they were found near the main-sequence branch on the near-IR color-color diagram and are thus also ruled out as YSO candidates. The fourth source, $RA$ = 161\fdg1311654, $Dec.=-$60\fdg810078, was found in the main-sequence reddening band in the near-IR color-color diagram, but did not show excess in the mid-IR 3.6 and 4.5 $\mu$m bands. Though it could be an evolved YSO, such as a Class III source, it does not fit our YSO identification criteria, so for consistency we do not consider it a YSO candidate. We thus conclude that only three of the seven YSO candidates from \citet{marton16} in this region are likely YSOs. Our final list of YSO candidates thus contains 29 Class II and 5 Class I candidates. The ratio of Class II to Class I candidates ($\sim$6) indicates that this is a young star-forming region.
\input{Table1}
\begin{figure*}
\begin{center}
\includegraphics[scale=0.19]{Fig4.jpg}
\caption{The best fit SEDs of YSO candidates (a) 161.142605-60.804943 (b) 161.214739-60.814748 (c) 161.163399-60.794834 (d) 161.163985-60.793807 (e) 161.15617-60.799027 (f) 161.157004-60.800308 and (g) 161.133327-60.762207. The black dots mark the data points. The solid black curve is the best fitted model, while the grey curves denote the subsequent good fits for ${\chi}^2 -{\chi}_{min}^{2}$ (per data point) $<$ 3.}
\end{center}
\end{figure*}
\subsubsection{Mass Distribution and Spectral Energy Distribution (SED) Fitting of Selected YSO candidates}
To characterize and understand the nature of the YSOs in the cluster, we constructed the SEDs of the YSO candidates for which photometric magnitudes are available in seven or more bands; there are 7 such sources. We fitted the SEDs using the grid of models and fitting tools described by \citet{robitallie06}, via their \textit{SED Fitter python package}\footnote{https://github.com/astrofrog/sedfitter}. These models were computed using 20000 Monte Carlo-based 2D radiative transfer models from \citet{whitney03a}, adopting several combinations of the central star, disc, infalling envelope, and bipolar cavity over a reasonably large parameter space. The total YSO grid consists of 200,000 SEDs, as each of the 20000 YSO models has SEDs for ten different inclination angles. This tool provides various physical parameters of the YSOs, making it an ideal tool to study the evolutionary status of YSOs in star-forming regions.
We used the photometric magnitudes of the YSO candidates in $J$, $H$, and $K_s$ from the VCNS, 3.6 and 4.5 $\mu$m from the {\it Spitzer} Deep Glimpse Survey, and 12 and 22 $\mu$m from the WISE W3 and W4 filters. For two of the seven YSO candidates, we could unambiguously find an optical counterpart in the DSS red image; for these sources we thus had an 8th data point, namely the $V$ mag, for the SED fitting. The WISE filters W1--W4 are not available in the \textit{SED Fitter python package}. We thus prepared the WISE W3 and W4 filters as prescribed in the \textit{SED Fitter python package}, using its `Filter' class to perform the broadband convolution and obtain the convolved fluxes. To do so, we used the per-photon relative system response curves of the W3 and W4 bands from \citet{wright10}. While fitting the SEDs, we set the photometric uncertainties to 10\% of the magnitudes instead of the formal photometric errors, in order to fit without any possible bias caused by an underestimate of the flux uncertainties. Since the distance estimates of the clusters in the Carina Nebula have large uncertainties, 2.0--4.0 kpc \citep{bakkar80,carraro04,degi01,hur12}, partly due to an abnormal reddening law \citep{feinstein73,carraro04}, we used a range of 2.0--3.5 kpc as our input to fit the SEDs. For the extinction, we used a range of 0--18 mag, as the maximum extinction suffered by the sources in our region of study was found to be $A_V \sim$ 18 mag. Finally, we considered models with $\chi^2-\chi^2_{min}$ (per data point) $<$ 3 relative to the best-fit model for our analysis. The physical parameters from the best-fit models are given in Table~1, and the resultant SED fits are shown in Figure~4. The masses from the SED fitting of the YSO candidates vary from 2.7 to 5.1 M$_{\odot}$. The median age of all YSO candidates is $\sim$0.17 Myr. Two YSO candidates, \#1 and \#4 in Table~1, are the youngest sources, with estimated ages of $\sim$10$^{4}$ years.
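The broadband convolution step can be illustrated with a simple response-weighted mean of the model flux density over the band. This is a simplified assumption for illustration only; the sedfitter \texttt{Filter} class implements the exact convention expected by the model grids, and the toy band shape below is invented.

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal integral (written out explicitly for clarity)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def convolved_flux(nu, response, flux_nu):
    """Response-weighted mean flux density over a band.
    nu       : frequency grid (Hz), monotonically increasing
    response : relative system response sampled on `nu`
    flux_nu  : model flux density sampled on `nu`"""
    return _trapz(response * flux_nu, nu) / _trapz(response, nu)

# Sanity check: a flat spectrum convolves to its own constant value.
nu = np.linspace(1.0e13, 3.0e13, 200)
resp = np.exp(-((nu - 2.0e13) / 4.0e12) ** 2)   # toy band shape
print(round(convolved_flux(nu, resp, np.full_like(nu, 5.0)), 6))  # 5.0
```
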
Based on the shape of the SED, the YSO candidate \#1 in Table~1 ($RA$=161\fdg142605, $Dec.$=$-$60\fdg804943) is a Class I YSO, which is also consistent with its identification based on both the near- and mid-IR color excess criteria defined in \S{3.1}. The only other Class I YSO candidate for which an SED was constructed is \#2 in Table~1 ($RA$=161\fdg214739, $Dec.$=$-$60\fdg814748). The SED for this candidate shows a flat spectrum, with an estimated age from the SED fitting of $\sim$0.3 Myr. This YSO thus appears somewhat more evolved than \#1. All the other YSO candidates whose SEDs are presented were identified as Class II YSO candidates and show Class II type SEDs.
\begin{figure}
\includegraphics[width=1.1\columnwidth]{Fig5.pdf}
\caption{The color-magnitude diagram $J$ vs. $J-H$ of all sources (grey open circles) detected with SN $>$ 10 in the 5${\hbox{$^\prime$}} \times$5${\hbox{$^\prime$}}$ field centered on IRAS\,10427-6032, taken from the VCNS (and from 2MASS for a sub-region with no VCNS data). The Class II YSO candidates are shown as filled triangles, the Class I YSO candidates as filled circles, and the candidate massive star as a blue asterisk. Sources for which SEDs are constructed are marked with large open circles. The most massive members of the region are fitted with a 1 Myr main-sequence isochrone (red) of the Geneva stellar tracks \citep{lejeune01} after reddening it by $A_V$ = 5.0 mag, whereas the low-mass YSO candidates are fitted by pre-main-sequence isochrones of 1 Myr (green) and 2 Myr (black) from \citet{siess00} for the mass range 0.1 M$_{\odot}$ to 7.0 M$_{\odot}$. One of the Class I YSO candidates has no detection in the $J$ band and is thus not plotted on the graph.}
\end{figure}
\begin{figure*}
\begin{center}
\includegraphics[width=\columnwidth]{Fig6a.jpg}
\includegraphics[width=\columnwidth]{Fig6b.jpg}
\caption{(a) The two-color 2${\hbox{$^\prime$}} \times$2${\hbox{$^\prime$}}$ field around IRAS\,10427-6032 with the DSS-2 $r$-band in green and the $\mathit{Spitzer}$ IRAC 4.5 $\mu$m in red. The 870 $\mu$m contours from ATLASGAL are overlaid on the image. (b) The 5${\hbox{$^\prime$}} \times$5${\hbox{$^\prime$}}$ field around IRAS\,10427-6032 in the DSS-2 r-band. The 843 MHz contours from the MGPS-2 (yellow) and the 870 $\mu$m contours from the ATLASGAL (cyan) are overlaid on the image.
The YSO candidates are marked as magenta circles (Class II) and red circles (Class I). The candidate massive star is marked as a blue circle.}
\end{center}
\end{figure*}
Figure~5 shows the near-IR color-magnitude diagram, $J$ vs. $J-H$, of all near-IR sources in our sample. A 1-Myr main-sequence isochrone of the Geneva stellar tracks \citep{lejeune01} is overplotted after reddening it by $A_V$ = 5 mag and assuming a distance of 2.3 kpc. The value of the extinction was determined using the photometry of the candidate massive star. As discussed in \S{3.2}, the flux of ionizing photons responsible for the \ion{H}{II} region suggests an expected spectral type of $\sim$B0--B0.5 for the star. We adopted the colors of late-O and early-B main-sequence stars to determine the color excesses, $E(J-H)$ and $E(H-K)$, of the candidate massive star. This gave a range of $A_V$ = 4.5--5.5 mag for the extinction suffered by the candidate massive star; we thus used the median value, $A_V$ = 5 mag, to fit the 1-Myr main-sequence isochrone. Our low-mass YSO candidates are found to cluster near the 1--2 Myr PMS isochrones \citep{siess00} drawn for the mass range 0.1 M$_{\odot}$ to 7.0 M$_{\odot}$. The YSO candidates for which SEDs were constructed show general agreement between the parameters derived from the SED fitting and from the isochrones. Two of the most evolved YSOs based on the SED age estimates (\#5 and \#6 in Table~1) appear close to or on the main sequence. The mass estimates based on the PMS isochrones are consistent with those derived from the SED fitting (2--5 M$_{\odot}$) for roughly half of the YSO candidates, whereas they show some deviation for the remaining candidates.
\subsubsection{Spatial Distribution of YSO Candidates}
Figure~6a shows a 2$\hbox{$^\prime$} \times$2$\hbox{$^\prime$}$ two-color image centered on IRAS 10427-6032, with the Digitized Sky Survey-2 (DSS-2) r-band in green and {\it Spitzer} IRAC [4.5] in red. The cold dense clump detected at 870 $\mu$m (shown as contours) is seen to be adjacent to the optical nebula. Figure~6b shows both the 843 MHz radio continuum emission and the 870 $\mu$m emission contours overlaid on a 5$\hbox{$^\prime$} \times$5$\hbox{$^\prime$}$ field centered on the IRAS object on the DSS-2 r-band image. As can be seen, the cold dust clump protrudes into the ionized region. Most of the YSO candidates are spatially coincident with the sub-mm contours. A small number of the remaining YSO candidates lie on the north-western side of the bubble rim, whereas a single Class I YSO candidate is found on the eastern side of the bubble. Of the five Class I YSO candidates, two lie on either side of the bubble (one each), whereas three are coincident with the bubble rim and the cold dust condensation.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{Fig7.jpeg}
\caption{The {\it Spitzer} 5${\hbox{$^\prime$}} \times$5${\hbox{$^\prime$}}$ 3.6 $\mu$m image centered on IRAS\,10427-6032. The contours of 843 MHz radio continuum data from MGPS-2 are overlaid on the image. The position of the IRAS source is marked with a plus symbol. The big circle shows the extent of the bubble and has a diameter of 1\farcm6.}
\end{center}
\end{figure}
\subsection{Physical Properties of Compact H~II Region and Massive Star Candidate}
Figure~7 shows the {\it Spitzer} 3.6 $\mu$m image with the 843 MHz MGPS-2 contours overplotted on it. The ionized emission shows a nearly spherical morphology which fills the bubble interior almost completely. We used the AIPS tasks JMFIT, MAXFIT, and IMEAN on the 843 MHz image to fit the compact core of the ionized emission with a Gaussian model. The results are presented in Table~2. The angular extent of the nearly spherical source is found to be $\sim$110$\hbox{$^{\prime\prime}$}$, which translates to a linear diameter of $\sim$1.2 pc assuming the cluster to be located at 2.3 kpc. We determined the Lyman continuum luminosity (in photons ${\mathrm s^{-1}}$) required to generate the observed flux density using the formula of \citet{kurtz94}:
\[S_* \geqslant \bigg(\frac {7.59 \times 10^{48}}{a(\nu,T_e)}\bigg)\bigg(\frac {S_{\nu}}{\mathrm Jy}\bigg)\bigg(\frac{T_e}{\mathrm K}\bigg)^{-0.5}\bigg(\frac{D}{\mathrm kpc}\bigg)^{2}\bigg(\frac {\nu}{\mathrm GHz}\bigg)^{0.1}\]
where $S_{\nu}$ is the integrated flux density in Jy, $D$ is the distance in kpc, $T_e$ is the electron temperature, $a(\nu, T_e)$ is the correction factor, and $\nu$ is the frequency in GHz at which the luminosity is to be calculated. We assumed a typical value of $T_e$ = 10$^{4}$ K, implying $a(\nu, T_e)$=0.99, as seen from Table 6 of \citet{mezger67}. The Lyman continuum luminosity is found to be $S_*$=$10^{46.9}$ photons $\mathrm{s^{-1}}$.
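As a numerical check of the expression above, a few lines of code with the adopted values ($S_{\nu}$ = 171.5 mJy, $D$ = 2.3 kpc, $T_e$ = 10$^{4}$ K, $a(\nu, T_e)$ = 0.99, $\nu$ = 0.843 GHz) reproduce the quoted photon rate to within $\sim$0.1 dex:

```python
import math

def lyman_continuum_rate(s_nu_jy, d_kpc, t_e=1.0e4, nu_ghz=0.843,
                         a=0.99):
    """Lyman-continuum photon rate (photons/s) from the Kurtz et al.
    (1994) expression quoted in the text."""
    return (7.59e48 / a) * s_nu_jy * t_e**-0.5 * d_kpc**2 * nu_ghz**0.1

s_star = lyman_continuum_rate(s_nu_jy=0.1715, d_kpc=2.3)
print(f"log S* = {math.log10(s_star):.2f}")  # ~46.8, vs. 46.9 quoted
```
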
To estimate the dynamical age of the \ion{H}{II} region ($t$), we used the following formula from \citet{spitzer78}
\[R(t) = R_s\bigg(1 + \frac{7c_{II}t}{4R_s}\bigg)^{\frac{4}{7}}\]
where $R(t)$ is the radius of the \ion{H}{II} region at time $t$, $c_{II}$ is the speed of sound in the \ion{H}{II} region, taken to be $11 \times 10^{5}$ cm s$^{-1}$ from \citet{palla05}, and $R_s$ is the Str\"{o}mgren radius \citep{stromgren79}, given by,
\[R_s = \bigg(\frac {3S_*}{4\pi n_o ^2 \beta_2}\bigg)^\frac{1}{3}\]
In the above expression, $n_o$ is the initial ambient density in $\mathrm{cm^{-3}}$, and $\beta_2$ is the total recombination coefficient to the first excited state of hydrogen. We assumed $\beta_2$ to be $2.6 \times 10^{-13} \mathrm{cm^{3}~s^{-1}}$ \citep{palla05}. To estimate $n_0$, we used the gaseous mass of the \ion{H}{II} region (see \S{3.3}) and the measured size of the \ion{H}{II} region. By assuming a uniform density throughout the \ion{H}{II} region, we deduced $n_o =$ 9.3 $\times$ 10$^{3}$ cm$^{-3}$. However, this value of $n_0$ must be treated only as a lower limit on the actual density, since some of the gaseous mass has already been converted into stars and some of it has been ionized by the
\ion{H}{II} region. The dynamical age of the \ion{H}{II} region using this value of $n_0$ turns out to be $t$=0.30 Myr. If we use an upper limit for the density instead, say $n_o=$ 10$^{5}$ cm$^{-3}$, the dynamical age of the \ion{H}{II} region turns out to be $t$=0.95 Myr.
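The dynamical-age estimate can be reproduced by inverting the \citet{spitzer78} expansion law. In the sketch below, the radius value ($\sim$0.6 pc, half the measured extent at 2.3 kpc) is our assumption for illustration:

```python
import math

PC_CM = 3.086e18   # parsec in cm
YR_S = 3.156e7     # year in seconds

def stromgren_radius(s_star, n0, beta2=2.6e-13):
    """Stromgren radius (cm) for an ionizing photon rate s_star
    (photons/s) and ambient density n0 (cm^-3)."""
    return (3.0 * s_star / (4.0 * math.pi * n0**2 * beta2)) ** (1 / 3)

def dynamical_age(r_now_cm, s_star, n0, c_ii=11e5):
    """Invert the Spitzer (1978) expansion law for the age t (s)."""
    r_s = stromgren_radius(s_star, n0)
    return (4.0 * r_s / (7.0 * c_ii)) * ((r_now_cm / r_s) ** 1.75 - 1.0)

# Values adopted in the text: S* = 10^46.9 photons/s, n0 = 9.3e3 cm^-3,
# and an H II region radius of ~0.6 pc (our assumption).
t = dynamical_age(0.6 * PC_CM, 10**46.9, 9.3e3)
print(f"t ~ {t / YR_S / 1e6:.2f} Myr")  # ~0.3 Myr, matching the text
```
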
\citet{tremblin14} studied the evolution of \ion{H}{II} regions in turbulent molecular clouds. We also estimated the dynamical age of the \ion{H}{II} region using the method outlined in \citet{tremblin14}. Comparing the parameters of IRAS\,10427-6032 with the pressure-size tracks in \citet{tremblin14}, the age of the \ion{H}{II} region is $\sim$0.5 Myr. However, we note that this age is a lower limit, as the method of \citet{tremblin14} is more appropriate for classical \ion{H}{II} regions, where the effects of magnetic fields and gravity are less important. Based on both methods, the dynamical age of the \ion{H}{II} region thus lies in the range 0.3--0.95 Myr.
A comparison of the $\log S_*$ value with the values from \citet{panagia73}, assuming a ZAMS star, suggests a spectral type of B0.5--B1 for the ionizing source.
\begin{table}
\begin{center}
\caption{Fitting Results of Compact \ion{H}{II} Region}
\begin{tabular}{ll}
\hline
2D Gaussian fit size & $56.62'' \times 53.30''$ \\
Position angle (deg) & 67.446 \\
Peak flux density (mJy beam$^{-1}$) & 133.93 \\
Integrated flux density (mJy) & 171.5 \\
\hline
\end{tabular}
\end{center}
\end{table}
There are two bright sources near the position of IRAS 10427-6032: one at $RA$=161\fdg16058, $Dec.$=$-$60\fdg80089, at an angular distance of $\sim$5$^{\hbox{$^{\prime\prime}$}}$, and another at $RA$=161\fdg16329, $Dec.$=$-$60\fdg80317, at an angular distance of $\sim$9$^{\hbox{$^{\prime\prime}$}}$. These bright sources are marked with a box and a circle, respectively, in Figure~1. The closer source ($RA=$161\fdg16058, $Dec.=-$60\fdg80089) is detected only in the IRAC 3.6 $\mu$m and 4.5 $\mu$m bands, and becomes too faint to be visible in the 5.8 $\mu$m image from the Vela-Carina Survey. It is also too faint to be visible in the WISE 12 and 22 $\mu$m images. The farther source ($RA=$161\fdg16329, $Dec.=-$60\fdg80317) appears to be the candidate massive star responsible for the \ion{H}{II} region, as it is seen to brighten from the $K_s$ band through the {\it Spitzer} IRAC 3.6 $\mu$m and 4.5 $\mu$m bands to WISE 12 $\mu$m, and remains the only visible bright source at WISE 22 $\mu$m in the studied region. We constructed the SED of this source using its photometric magnitudes/fluxes in eight bands: optical $B$, $V$, and $I$; VISTA $J$, $H$, and $K_s$; and {\it Spitzer} IRAC 3.6 and 4.5 $\mu$m. The fitted SED is shown in Figure~8. The best-fit model suggests a spectral type of B0--B0.5 for the source and an effective temperature of 25,000 K. This is consistent with the ionizing flux of the \ion{H}{II} region.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{Fig8.jpg}
\caption{The SED fitting of the zero-age main-sequence candidate massive star. The best-fit model suggests an effective temperature of 25,000 K. The black dots mark the data points. The solid black curve is the best fitted model, while the grey curves denote the subsequent good fits for ${\chi}^2 -{\chi}_{min}^{2}$ (per data point) $<$ 3. }
\end{center}
\end{figure}
\subsection{{\it Herschel} Column Density and Temperature Maps}
{\it Herschel} observations with wide wavelength coverage can reveal the dust properties of a cloud complex. To
explore dust properties around IRAS 10427-6032, we derived the column density and temperature maps by performing
a pixel-to-pixel modified black-body fit to the 160, 250,
350, and 500 $\mu$m {\it Herschel} images following the procedure outlined in \citet{mal15}.
Prior to performing the modified black-body fit, we first converted all the SPIRE images to the PACS flux unit (i.e., Jy pixel$^{-1}$).
Then, we convolved and regridded all the shorter-wavelength images to the resolution and pixel size of the 500 $\mu$m map.
Next, to minimize the contribution of possible excess
dust emission along the line-of-sight, we subtracted the corresponding background flux, estimated from a field nearly devoid of emission, from each image.
In the final step, we fitted the modified black-body on
these background subtracted images using the formula given in \citet{mal15}. While fitting,
we used a dust spectral index of $\beta$=2, and the dust opacity per unit mass column density ($\kappa_{\nu}$) as given in
\citet{beck91}, $\kappa_{\nu} = 0.1~(\nu/1000~{\rm GHz})^{\beta}$ cm$^2$~g$^{-1}$, leaving the dust temperature
(T$_{dust}$) and dust column density (N(H$_{2})$) as free parameters. Since we are primarily interested in the cold dust properties, we did not use the 70 $\mu$m data in the fitting procedure, in order to avoid contributions from stochastically heated small grains \citep{comp10,pav13}.
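A single-pixel version of this modified black-body fit can be sketched as follows. The mean molecular weight and the unit conventions below are our assumptions for illustration and may differ in detail from \citet{mal15}:

```python
import numpy as np
from scipy.optimize import curve_fit

# cgs constants; MU is the assumed mean molecular weight per H2.
H = 6.626e-27; K_B = 1.381e-16; C = 2.998e10; M_H = 1.674e-24
MU = 2.8

def greybody(nu_hz, t_dust, log_nh2):
    """Optically thin modified black-body intensity in MJy/sr, with
    kappa_nu = 0.1 (nu / 1000 GHz)^2 cm^2 g^-1 (i.e. beta = 2)."""
    kappa = 0.1 * (nu_hz / 1e12) ** 2
    bnu = 2 * H * nu_hz**3 / C**2 / np.expm1(H * nu_hz / (K_B * t_dust))
    return kappa * MU * M_H * 10**log_nh2 * bnu / 1e-17  # cgs -> MJy/sr

# Synthetic single-pixel test: intensities at 160, 250, 350, 500 um
# for T = 15 K and N(H2) = 1e22 cm^-2, then recover both parameters.
nu = C / (np.array([160.0, 250.0, 350.0, 500.0]) * 1e-4)  # Hz
data = greybody(nu, 15.0, 22.0)
(t_fit, logn_fit), _ = curve_fit(greybody, nu, data, p0=(20.0, 21.0))
print(round(t_fit, 1), round(logn_fit, 2))  # 15.0 22.0
```

In the actual maps this fit is repeated pixel by pixel on the background-subtracted, convolved, and regridded images.
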
\begin{figure}
\centering{
\includegraphics[width=\columnwidth]{Fig9-revised.png}}
\caption{(a) Dust temperature map, and (b) column density map around IRAS\,10427-6032 for a 5\farcm5 $\times$5\farcm5 field derived from {\it Herschel} images in colour scale. We note some of the pixels of these images had NaN values (owing to bad pixel values in the original {\it Herschel} images). These pixels have been replaced with the values of the nearby pixels. (c) The 843 MHz radio continuum map, tracing the ionized gas around IRAS 10427-6032,
with overlaid dust temperature contours at 15, 17, 19, and 21 K. The green circles represent the approximate inner and outer boundaries of high column density
regions around the \ion{H}{II} region. The 870 $\mu$m emission is shown in cyan contours.}
\end{figure}
Figure~9 shows the low-resolution ($\sim$37${\hbox{$^{\prime\prime}$}}$), beam-averaged, dust temperature and dust column density maps,
and their correlations with the ionized gas over 5\farcm5 $\times$5\farcm5
area centered on the IRAS object. As can be seen from Figure~9a, though the temperature shows a distribution between 10 and 21 K, it is higher near the infrared cluster, peaking at $\sim$21 K.
In contrast, the column density map (Figure~9b) shows, in general, a low column density towards the cluster center, with an average value of
$\sim$0.6 $\times$ 10$^{22}$ cm$^{-2}$, whereas it is relatively higher in the outskirts of the cluster, particularly within the annular area marked on the figure. The average value in the annular area is $\sim$1.3 $\times$ 10$^{22}$ cm$^{-2}$. Figure~9c shows
the low-resolution ($\sim$43${\hbox{$^{\prime\prime}$}}$ $\times$ 49${\hbox{$^{\prime\prime}$}}$) radio continuum view of the IRAS 10427-6032 region at 843 MHz. As can be seen,
the radio emission is stronger at the center of the image
corresponding to the location of the massive star (marked with a star symbol) and the cluster. The contours
overlaid on the 843 MHz map are from the temperature map and are at 15, 17, 19, and 21 K. The average temperature at the outskirts of the cluster, particularly in the annular area, is $\sim$10 K.
Overall, the 843 MHz and temperature maps show a strong correlation, i.e., the warmer zone corresponds to the zone of stronger free-free emission, implying that the relatively high temperature observed at the cluster location is primarily due to the radiation from the members of the cluster.
Though the {\it Herschel} maps are of low resolution, a careful look indicates the possible presence of a temperature gradient, with the temperature decreasing from northwest to southwest, consistent with the broken morphology of the \ion{H}{II} region observed in the high-resolution optical and infrared images. These signatures suggest that the \ion{H}{II} region is possibly in its early phase of ``champagne-flow'' \citep{ten79}.
We estimate the total gaseous mass (M$_{gas}$) associated with the \ion{H}{II} region using the integrated column density over the size of the \ion{H}{II} region using the following equation:
\begin{equation}
M = \mu {\mathrm m_{H}} A_{\mathrm pix} \Sigma {\mathrm H_{2}} \, \label{eq:mass}
\end{equation}
where $\mu$ is the mean molecular weight, ${\mathrm m_{H}}$ is the mass of the hydrogen atom, $\Sigma {\mathrm H_{2}}$ is the summed H$_2$ column density, and $A_{\mathrm pix}$ is the area of a pixel in cm$^{2}$ at the distance of the region. The resultant mass is $\sim$220 M$_{\odot}$.
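As a rough consistency check of Eq.~(\ref{eq:mass}), replacing the pixel-by-pixel sum with a mean column density over a circular aperture gives a mass of the same order. The aperture radius and mean column density used below are our assumptions, taken from the values quoted for the \ion{H}{II} region:

```python
import math

PC_CM = 3.086e18   # parsec in cm
M_H = 1.674e-24    # hydrogen atom mass, g
M_SUN = 1.989e33   # solar mass, g
MU = 2.8           # mean molecular weight per H2 (assumed value)

def gas_mass(mean_nh2, radius_pc):
    """Gas mass (M_sun) from a mean H2 column density (cm^-2)
    integrated over a circular aperture of the given radius.  This is
    a single-aperture simplification of Eq. (1); the paper sums the
    column density pixel by pixel instead."""
    area = math.pi * (radius_pc * PC_CM) ** 2
    return MU * M_H * mean_nh2 * area / M_SUN

# Mean N(H2) ~ 0.6e22 cm^-2 toward the H II region, radius ~0.6 pc:
# within a factor of ~1.5 of the ~220 M_sun from the actual sum.
print(round(gas_mass(0.6e22, 0.6)))  # 152
```
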
\subsection{Star formation}
Feedback from massive stars plays a critical role in the star formation processes and evolution of molecular clouds. In particular, expanding \ion{H}{II}
regions may have a positive effect on star formation, i.e. they can
trigger a new generation of star formation in molecular clouds. From a theoretical point of view, two main triggering mechanisms have been suggested: collect and collapse (C\&C) and radiation-driven implosion (RDI). In the C\&C process, when an \ion{H}{II} region expands in a homogeneous medium, it
sweeps the nearby ISM into a dense shell. If the expansion of the \ion{H}{II} region continues for long enough, the surface density of
the shell increases to the point where the shell becomes self-gravitating and fragments leading to the formation of
massive condensations that are potential sites for subsequent star formation \citep{elmegreen77}. In the C\&C process, as shown in simulations \citep{whi94,dal07},
evenly spaced massive fragments are expected around the \ion{H}{II} regions \citep[e.g.][]{zav06,samal14,liu16}. However,
molecular clouds are often fractal and clumpy. Thus the clumpiness of the dense shell (or dense condensations in the shell) can be attributed to density
structures in the fractal molecular cloud into which the \ion{H}{II}
region expands \citep[see discussions in][]{wal13}. In RDI, when an expanding \ion{H}{II} region overruns a pre-existing cloud, it drives an ionization front and a shock wave into the cloud. As a consequence, the inner parts of the
cloud are compressed, and may become gravitationally unstable, collapsing to form new stars \citep{bertoldi89,bis11}. Protruding structures
(e.g., elephant trunks or bright rimmed clouds) found at the edges of the \ion{H}{II} regions with YSOs or cores inside, are often considered as the signature of the RDI process \citep[e.g.][]{mor08,cha11a,cha11b,pan14}.
\citet{wal15} performed smoothed particle hydrodynamics simulations of \ion{H}{II} regions expanding into fractal molecular clouds and suggested that in a clumpy medium, a hybrid form of triggering, which combines elements of C\&C and RDI, should be more appropriate \citep[e.g.][]{jose13}.
They found, in a fractal medium, during the expansion of the \ion{H}{II} region and the collection of the dense shell,
the pre-existing density structures are enhanced and lead to a clumpy distribution within the shell.
The masses and locations of the clumps depend on the fractal density structure of the molecular cloud. Subsequently, the clumps grow in mass,
and at the same time they are overrun and compressed by the \ion{H}{II} region, until they become gravitationally unstable
and collapse to form new stars.
As discussed in \S{3.3}, the annular area around the \ion{H}{II} region represents the location of
higher column densities. The average column density within the annular area is approximately a factor of two higher than that within the \ion{H}{II} region. Though the resolution of the
{\it Herschel} images is not high enough to discuss the morphology of the dust around the compact \ion{H}{II} region in
finer details, in general the column density distribution around the \ion{H}{II} region broadly represents the accumulated cold
matter such as those observed at the borders of several Galactic \ion{H}{II} regions \citep[e.g][]{deharveng10}.
Largely, it appears that the \ion{H}{II} region has accumulated some of the diffuse ISM into a shell.
We find the observed column density in the shell is comparable to the column density required ($\sim$ 6 $\times$ 10$^{21}$ cm$^{-2}$)
for fragmentation to happen through C\&C process \citep[see][]{whi94}. Thus the shell may be in its initial stage of fragmentation.
However, as discussed in Sect. 3.1.2, a compact 870 $\mu$m ATLASGAL clump lies at the western side of the infrared bubble, and this is the only
ATLASGAL clump observed around the \ion{H}{II} region. We note that the ATLASGAL 870 $\mu$m images are more sensitive to dense cold gas than to diffuse gas.
The clump lies $\sim$27 arcsec away from the massive star (see Fig. 9c) and protrudes into the ionization region.
The majority of the YSO candidates identified in the region are found to be coincident with the ATLASGAL clump, indicating that star formation in the clump is more active compared to the other parts of the region. Our results suggest that, although the \ion{H}{II} region has collected some of the cold ISM around its periphery (perhaps through the C\&C process),
the enhanced star formation observed at its western side is unlikely due
to the fragmentation of the collected material,
but rather due to the compression of a pre-existing clump. The fact that the average age of the seven YSO candidates with SED fits, 0.17 Myr, is smaller by a factor of 2--5 than the dynamical age of the \ion{H}{II} region (0.30--0.95 Myr) supports the role of the expanding \ion{H}{II} region in triggering star formation in the clump. To put the star formation scenario of the region on a firm footing, though, detailed velocity and age measurements of the point sources, as well as the kinematics of the cold gas, are needed.
\section{Conclusions}
We studied a 5$\hbox{$^\prime$} \times$5$\hbox{$^\prime$}$ region around a compact \ion{H}{II} region, IRAS\,10427-6032, using near-IR data from the VCNS and archival data from {\it Spitzer}, WISE, {\it Herschel}, ATLASGAL, and MGPS-2. We identified YSO candidates of the region using a combination of near-IR and mid-IR data. Our conservative criteria result in 5 Class I and 29 Class II YSO candidates. The ratio ($\sim$6) of Class II to Class I YSO candidates suggests that this is a young cluster. We derived approximate physical parameters of seven YSO candidates with photometric information in 7 or more bands by constructing their SEDs. The SED fits show that these YSO candidates are all of intermediate mass, with masses ranging from $\sim$2 to 5 M$_{\odot}$, and in early evolutionary stages, with an average age of $\sim$0.17 Myr. Whereas the brighter sources are found to lie along a 1 Myr reddened ($A_V$=5.0 mag) main-sequence isochrone, the low- and intermediate-mass YSO candidates cluster around the 1 Myr and 2 Myr pre-main-sequence isochrones. The mass distribution of all YSO candidates based on the isochrone fitting ranges from 0.1 M$_{\odot}$ to 5 M$_{\odot}$.
The 843 MHz radio continuum data show a nearly spherical compact radio source. The linear dimension of the source, assuming a distance of 2.3 kpc to the region, is $\sim$1.2 pc. This implies that IRAS\,10427-6032 is a compact \ion{H}{II} region. The Lyman continuum luminosity of the source, $10^{46.9}$ photons $\mathrm{s^{-1}}$, suggests a ZAMS spectral type of B0.5--B1 or earlier for the ionizing source, assuming a single responsible source. The candidate massive star is found at $\sim$9$\hbox{$^{\prime\prime}$}$ from the IRAS position and correlates well with the ionized emission. Its expected spectral type based on the SED fit, B0--B0.5, also matches the Lyman continuum luminosity derived from the radio continuum data. The dynamical age of the \ion{H}{II} region is estimated to be in the range 0.30--0.95 Myr.
We present low-resolution ($\sim$37${\hbox{$^{\prime\prime}$}}$), beam-averaged dust temperature and dust column density maps generated using the {\it Herschel} data. The temperature distribution is found to vary between 10 and 21 K, peaking at $\sim$21 K near the location of the infrared cluster, with an average value of $\sim$15.5 K away from the cluster. In contrast, the column density map shows a low column density towards the cluster center, with an average value of $\sim$0.6 $\times$ 10$^{22}$ cm$^{-2}$, whereas it is relatively higher, $\sim$1.3 $\times$ 10$^{22}$ cm$^{-2}$, in the annular area around the \ion{H}{II} region. This annular region likely represents the accumulated cold matter around the \ion{H}{II} region, which is in its initial stage of fragmentation.
IRAS\,10427-6032 shows a broken bubble morphology in the mid-IR images. The bubble, of $\sim$1\farcm6 diameter, has approximately two-thirds of its western rim intact and about one-third of its eastern side open. The presence of a temperature gradient, in which the temperature is seen to decrease from northwest to southwest in the temperature profile constructed using the {\it Herschel} data, is consistent with the broken morphology of the \ion{H}{II} region, hinting that the \ion{H}{II} region is possibly in its early phase of ``champagne-flow''.
The 870 $\mu$m ATLASGAL contours run along the western rim of the bubble and protrude somewhat into the ionized region. From the spatial correlation, it appears to be an interacting cold dust condensation. The majority of the identified YSO candidates are found to be coincident with the sub-mm contours and lie either along the bubble rim or in the bubble interior adjacent to the western rim. Two of our five Class I YSO candidates are found in the dense shell surrounding the \ion{H}{II} region, one on the bubble rim, whereas the remaining two are found in the bubble interior. The spatial correlation of the YSO candidates with the clump, and the dynamical age of the \ion{H}{II} region being greater, by a factor of $\sim$2--5, than the average age of the YSO candidates, indicate that the enhanced star formation on the western rim of the \ion{H}{II} region could be due to compression of the pre-existing clump. Spectroscopic information on the ionizing star of the \ion{H}{II} region and on the YSO candidates at the border is necessary to strengthen the hypothesis of triggering in this star-forming region.
\section*{Acknowledgements}
This research has made use of the VizieR catalogue access tool, CDS, Strasbourg, France. The original description of the VizieR service was published in A\&AS 143, 23. This work makes use of the archival images and catalogues from the Deep Glimpse Survey of the {\it Spitzer} Space Telescope. This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. This work also makes use of the archival data of 843 MHz radio continuum images observed under the second epoch of the Molonglo Galactic Plane Survey. This research makes use of the archival data from the ${\it Herschel}$ far-IR telescope. The ATLASGAL project is a collaboration between the Max-Planck-Gesellschaft, the European Southern Observatory (ESO) and the Universidad de Chile. It includes projects E-181.C-0885, E-078.F-9040(A), M-079.C-9501(A), M-081.C-9501(A) plus Chilean data. This work makes use of the Python-based SED fitting tool of \citet{robitallie06}.
\section{Introduction}
Ever since the SDSS survey \citep{yor00} two decades ago, optical imaging and spectroscopic surveys have revolutionized our views on galaxy evolution and structure formation. In particular, multi-band imaging surveys over large areas, such as PanSTARRS \citep{kai10}, the Dark Energy Survey \citep{sev21}, the Hyper Suprime-Cam Subaru Strategic Program \citep[HSC SSP,][]{aihara18}, and the forthcoming LSST \citep{ive19}, have become a standard and relatively inexpensive pathway for mapping the distribution of galaxies over cosmic time by utilizing the so-called photometric redshift (hereafter photo-$z$) technique.
The HSC SSP is a 7-year program conducted during 2014-2021 on the 8.2-m Subaru Telescope, consisting of different combinations of depths and areas (Wide, Deep, and Ultra-Deep). The detector is optimized to observe in the redder optical wavelengths. Five broad-band ($g, r, i, z$ and $y$) and four narrow-band ($NB387, NB816, NB921, NB101$) filters are designed to make the best use of its sensitivity. Among the three HSC survey layers, the HSC Ultra-Deep Survey (UDS) is the deepest component, aiming to reach 5-$\sigma$ AB magnitude detection limits of $g$, $r$, $i$ $\sim28.0$, $z\sim 27.0$, $y\sim26$, and $m_{NB}\sim26$ over two fields widely separated on the sky (each with one HSC pointing, which covers $\sim 1.7$~deg$^{2}$), making it the deepest survey with a few square degrees of coverage ever taken by a ground-based telescope. For comparison, these correspond to the depths of the Great Observatories Origins Deep Survey \citep[GOODS,][]{gia04} but for an area that is 40$\times$ larger, and to depths of 1--2 magnitudes deeper than the Subaru Suprime-Cam data in the original Cosmic Evolution Survey \citep[COSMOS,][]{sco07,cap07} with almost twice the area. The goal of the HSC UDS is to directly measure the buildup of galaxies and large scale structure across cosmic time.
The two pointings chosen for the HSC UDS are two extremely data-rich extragalactic survey fields, the COSMOS field and the Subaru-XMM Deep Survey \citep[SXDS,][]{fur08} field. As we would like to measure the luminosity and mass functions accurately, sampling a representative volume of the Universe is crucial. One HSC pointing will map 150$^{2}$ comoving Mpc$^{2}$ at $z\sim3$. However, cosmic variance in one pointing will be at a level of 5--15\% depending on the redshift. By observing two independent fields, we can bring it down to 4--10\% \citep{dri10}.
While the HSC UDS is expected to discover several millions of galaxies up to $z \sim7$, it does not sample the wavelengths blueward of 4000 \AA, which is critical to distinguish the Balmer and Lyman breaks for the precise measurements of photo-$z$, as well as to provide UV-based star formation rates for galaxies at intermediate redshifts. To complement the HSC UDS, we initiated a multi-year $u^\ast$-band imaging campaign in the two HSC UDS fields using MegaCam on the Canada-France-Hawaii Telescope, called ``MegaCam Ultra-Deep Survey: $u^\ast$-Band Imaging'' (MUSUBI). This takes advantage of the good $u^\ast$-band quantum efficiency of the MegaCam CCD (74\% at 3800 \AA, cf. 36\% for the HSC CCD at 3800 \AA), which makes MegaCam the best instrument worldwide for this kind of ultraviolet surveys. By combining MUSUBI and existing shallower $u^\ast$-band observations in COSMOS and SXDS, we reach a depth of $u^\ast_{AB}$ $\sim$ 27.5, which is well-matched to the HSC $grizy$ and NB depths. The combination of the MUSUBI and HSC datasets will enable a variety of scientific studies, such as studies of Lyman-alpha emitters \citep{hu02,ouc08,shi18} and the UV-luminosity function for galaxies at $2 < z < 3$ \citep{red09,saw12,van10,mou20}, properties of LBG/BM/BX selected populations \citep{ade04,ly12}, and selection of green-valley galaxies at $z < 1$ using the $NUV-r$ color \citep{wyd07,sal14,coe18}.
From 2014, another CFHT MegaCam $u$/$u^\ast$-band imaging campaign was launched by CLAUDS (CFHT Large Area $U$-band Deep Survey, \citealp{sawicki19}). CLAUDS imaged the four ``Deep Fields'' in the HSC SSP, mainly using the new $u$-band filter on MegaCam (Figure~\ref{fig1}), to a total area coverage of 18.6 deg$^2$. Because of this large area coverage, CLAUDS is about 0.7 magnitude shallower than MUSUBI. However, CLAUDS also included our $u^\ast$ data and $u^\ast$ data in the CFHT archive acquired by various teams previously, and reached the same depth as MUSUBI in the COSMOS and SXDS fields (a.k.a.\ the CLAUDS ``UltraDeep'' fields). Here we provide our independent reduction and calibration of the CFHT MegaCam images, and release the images and reference catalogs to the community\footnote{\url{http://www.asiaa.sinica.edu.tw/\~whwang/musubi}}. The foci of this paper are to describe the dataset in detail and to compare our source catalogs with other publicly available catalogs in COSMOS and SXDS.
In Section~2, we provide an overview of the MUSUBI observations. The data reductions, calibrations, and resulting data qualities are described in Sections 3, 4, and 5, respectively. In Section 6, we provide details about the released data products. In Section 7, we showcase example science cases, specifically photo-$z$ measurements and the evolution of green-valley galaxies, enabled by the combined MUSUBI and HSC datasets. Throughout the paper, we adopt the AB magnitude system \citep{oke83}, and the standard $\Lambda$CDM cosmological parameters of $H_0=70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_m = 0.3$, and $\Omega_\Lambda=0.7$.
\section{Observations}
Our MUSUBI team based in Taiwan carried out extremely deep $u^\ast$-band imaging of the COSMOS and the SXDS fields with MegaCam on CFHT, during semesters from 2012B to 2016B (PI: S.\ Foucaud and W.-H.\ Wang). All observations were conducted under the queue observing mode, with active monitoring of atmospheric extinction and seeing. The queue mode allows us to only observe when the $r$-band seeing is better than $0\farcs8$, corresponding to average to good seeing on Maunakea. However, the $u^\ast$-band seeing is typically about $0\farcs14$ worse than that at $r$-band, so our actual image quality is typically worse than $0\farcs8$. We requested the queue system to only observe our targets under photometric conditions. The CFHT queue service observing team also acquired standard star observations and twilight flats when the conditions were suitable.
Our primary goal is to support the two Ultra-Deep fields in the HSC SSP. We therefore only imaged the central 1~deg$^2$ area of each of the COSMOS and SXDS fields with one MegaCam pointing, to reach the highest sensitivity given the time constraint. Our exposures were dithered to cover the gaps between the MegaCam CCDs. From semester to semester, we slightly changed the dither pattern and the pointing center for each field, to further even out the sensitivity distribution and to slightly expand the maps. Typical exposure times adopted in our observations are from 480 to 720 seconds, to minimize the readout and dither overheads, and to take advantage of the dark sky. Only a small fraction of the exposures are shorter, from 240 to 320 seconds, to accommodate observing conditions that are less ideal. Over the course of 3.5 years, we had accumulated 20.3 hr of exposures for the COSMOS field, and 41.8 hr for the SXDS field.
In addition to our own observations, we also include in our data reduction all available archival data that cover our field centers. The exposure times of the archival data range from 120 to 660 seconds. The seeing and extinction variations in the archival data are also larger than in ours. Among all our data and the archival data, 5.2\% of the exposures for COSMOS were taken under thin cirrus where the extinction exceeds 0.1 magnitude, while the conditions for the SXDS exposures were all photometric. To minimize the impact of poorer data from the archive, our data reduction (Section~\ref{sec_reduction}) automatically down-weighted data taken under poorer conditions. Moreover, we visually inspected all exposures to remove obviously bad ones. The inclusion of the archival data brings the total amount of data to 78.1 hr for COSMOS and 60.5 hr for SXDS. However, the archival data were taken by various teams with various imaging strategies. A substantial fraction of the past observations were designed to mosaic much wider areas, rather than single focused pointings. Therefore, in our final reduced maps, only the map centers receive more than 60 hr of exposure time for both fields.
All our and archival imaging was conducted using the old $u^\ast$ broad-band filter.\footnote{Since semester 2015A, a new set of broad-band filters was introduced to MegaCam. Its filter transmission curves are shown in Figure~\ref{fig1}. In CFHT's documentation, the old $u^\ast$-band filter is referred as $u^\ast$ or $uS$, while the new $u$-band filter is bluer. Here we simply refer to our filter as $u^\ast$, and readers should not confuse it with the new $u$ filter.} Its filter transmission curve is presented in Figure~\ref{fig1}. The filter itself has a central wavelength of 375 nm, and a bandwidth of 74 nm at 50\% transmission. The average wavelength slightly shifts to 379 nm after taking the telescope/camera optics transmission and CCD quantum efficiency into account. Atmospheric transmission would further shift the average wavelength to longer wavelengths, and this is airmass-dependent. Figure~\ref{fig1} also compares our filter with the two adjacent SDSS filters, denoted as $U$ and $G$. Because our $u^\ast$ filter is substantially different from $U$, we cannot directly use the SDSS data to calibrate our $u^\ast$-band photometry (Section~\ref{sec_photo_cal}).
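The shift of the average wavelength quoted above follows from the transmission-weighted mean, $\langle\lambda\rangle = \int \lambda\, T(\lambda)\, d\lambda \,/ \int T(\lambda)\, d\lambda$. A minimal sketch, using a hypothetical tabulated profile rather than the actual MegaCam curve:

```python
# Transmission-weighted average wavelength of a filter:
#   <lambda> = sum(lambda_i * T_i) / sum(T_i)
# The profile below is a hypothetical stand-in, not the real u* curve.

def average_wavelength(wavelengths_nm, transmission):
    """Return the transmission-weighted mean wavelength in nm."""
    num = sum(w * t for w, t in zip(wavelengths_nm, transmission))
    den = sum(transmission)
    return num / den

# Toy top-hat-like profile centered near 375 nm
wl = [340, 350, 360, 370, 380, 390, 400, 410]
tr = [0.0, 0.3, 0.7, 0.8, 0.8, 0.7, 0.3, 0.0]
print(round(average_wavelength(wl, tr), 1))
```

Folding in an optics/CCD response or an airmass-dependent atmospheric transmission amounts to multiplying `tr` by the extra response curve before calling the same function.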
\begin{figure}
\epsscale{0.6}
\plotone{Fig1-eps-converted-to.pdf}
\caption{Transmission profile of the CFHT MegaCam $u^\ast$ filter used in MUSUBI. The solid blue curve is the profile of the filter, while the dashed blue curve is the profile combined with the telescope/camera transmission and CCD quantum efficiency. For readers' reference, we also show the profiles for the new CFHT $u$ filter (solid cyan curve), the new CFHT $u$ filter combined with telescope/camera transmission and CCD quantum efficiency (dashed cyan curve), the SDSS $U$ filter (dotted magenta curve), and the SDSS $G$ filter (dotted green curve). All profiles shown here do not contain the effect of atmospheric absorption.
\label{fig1}}
\end{figure}
\section{Data Reduction}\label{sec_reduction}
All the MegaCam data were preprocessed by the CFHT Elixir system\footnote{\url{http://www.cfht.hawaii.edu/Instruments/Imaging/MegaPrime/}} \citep{magnier04} to remove instrumental features. This includes overscan and bias subtraction, flat fielding to correct for pixel-to-pixel and CCD chip-to-chip sensitivity/illumination non-uniformity, and sky subtraction. The file headers also contain updated astrometry and photometric zero points derived by the Elixir system. Our data reduction starts from the Elixir preprocessed files.
We used subroutines in the SIMPLE Imaging and Mosaicking PipeLinE \citep[SIMPLE,][]{wang10,wang10b} to further process and mosaic the Elixir preprocessed CCD images. We divided images into groups. Images taken by the same team (i.e., similar dithering and exposure), within the same semester, and from the same CCD were grouped and processed together. We first conducted additional passes of sky background subtraction, to remove residual image backgrounds, which often share a common pattern among the grouped exposures. In the first pass, we masked detected objects in each individual exposure, derived a median image from the group, and subtracted it from each exposure. Then on each individual exposure, we masked detected objects again, fitted the masked image with a third-degree 2D polynomial surface, and subtracted the fitted surface from the image. The object masking was done by first smoothing the image with a $3\times3$ square tophat kernel and then masking pixels that exceed 3 $\sigma$ locally, where $\sigma$ is measured in the unsmoothed image. The above procedure almost always leads to a sufficiently flat sky in the images, such that the stacking and mosaicking do not produce sharp background offsets along the CCD boundaries. The only exceptions are the rare cases where extremely bright stars prevent good polynomial background models or median sky models.
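The object-masking step can be sketched as below. This is a simplified 1D version with a 3-pixel top-hat and a robust MAD-based sigma estimate standing in for whatever local sigma estimator the pipeline actually uses:

```python
# Simplified 1D sketch of the object-masking step: smooth with a
# 3-pixel top-hat, then mask pixels whose smoothed value exceeds
# 3 sigma, with sigma measured on the unsmoothed pixels.
# (The pipeline works on 2D images with a 3x3 kernel; the MAD-based
# sigma here is an illustrative assumption.)
import statistics

def mask_objects(pixels, nsigma=3.0):
    # Robust sigma from the median absolute deviation of the raw pixels
    med = statistics.median(pixels)
    mad = statistics.median([abs(p - med) for p in pixels])
    sigma = 1.4826 * mad
    # Top-hat smoothing (window shrinks at the edges)
    n = len(pixels)
    smoothed = [sum(pixels[max(0, i - 1):min(n, i + 2)]) /
                len(pixels[max(0, i - 1):min(n, i + 2)]) for i in range(n)]
    return [abs(s) > nsigma * sigma for s in smoothed]

# Sky-subtracted strip with a bright source spanning two pixels
sky = [0.1, -0.2, 0.0, 0.3, -0.1, 10.0, 9.5, 0.2, -0.3, 0.1]
print(mask_objects(sky))
```

Smoothing before thresholding suppresses single-pixel noise spikes, so the mask tends to catch PSF-sized sources rather than isolated hot pixels.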
The grouped and sky-subtracted images were then fed to SExtractor \citep{bertin96} to generate a source catalog for each of them. The photometry of compact objects that have signal-to-noise ratios (S/N) higher than 10 in the SExtractor catalogs was compared, so images taken under non-photometric conditions can be identified. These images were re-normalized so their photometry matches the others. The atmospheric absorption derived for these images was also used to weight these images by atmospheric extinction in the image stacking step. The coordinates of objects in the SExtractor catalogs were compared among the images, to derive the exact amounts of the dither offsets and the optical distortion. The optical distortion can be derived because the displacement of stars among the dithered images as a function of position in the images is the first-order derivative of the distortion function (e.g., \citealp{anderson03}; see more details in \citealp{wang10}). The distortion correction was then applied to each image to project them onto a tangential sky plane. All our images in the same field share the same projection center, position angle (0), and image scale ($0\farcs186$, the native pixel scale of MegaCam). The distortion correction and the sky projection take into account the change in area for each pixel in a way that source fluxes are conserved. This is required to achieve a uniform absolute flux calibration across the entire field (``photometric flat-fielding''). We will discuss the absolute flux calibration in Section~\ref{sec_photo_cal}.
Two passes of cosmic ray removal were applied to the images. First, on each exposure, a bright pixel was masked if it exceeds 4 $\sigma$ in its $9\times9$-pixel neighborhood. These criteria were carefully chosen such that only spikes much narrower than the point-spread function (PSF) would be masked and stars would not be affected. Second, in a group of dithered, distortion-corrected, and sky-projected images, pixels that share the same sky coordinates were compared against each other. Outliers in the pixel brightness distribution were considered as spurious and masked. This is repeated for every independent pixel on the sky. Typically several tens of dithered images are in a group, meaning that tens of pixels were compared against each other. This method becomes less effective only around the CCD boundaries, where the number of overlapping pixels decreases.
After the above processing, the images were then average-stacked to form a deep image. In the stacking, the images were weighted according to their exposure times and atmospheric extinction. Then the stacked images from all MegaCam CCDs, projects, and semesters were mosaicked and stacked to form the final wide-field, ultra-deep images. In the final stacking and mosaicking, pixels that share the same sky coordinates were again compared against each other to remove residual cosmic ray hits that were not previously masked, before the images were combined. In Figure~\ref{fig2} and Figure~\ref{fig3}, we present the final mosaics for the COSMOS and SXDS fields, respectively, and their associated weighted exposure time maps.
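For a single sky pixel, the final combine (outlier rejection followed by a weighted average) can be sketched as below. The 3-sigma MAD clip and the weight definition $w = t_{\rm exp}\,10^{-0.4 A}$ are illustrative assumptions, not the pipeline's exact recipe:

```python
# Sketch of the final combine for one sky pixel: reject outliers
# across the dithered exposures (residual cosmic rays), then average
# the survivors weighted by exposure time and atmospheric extinction.
import statistics

def combine_pixel(values, exp_times, extinctions_mag, nsigma=3.0):
    # Robust clip relative to the median across exposures
    med = statistics.median(values)
    mad = statistics.median([abs(v - med) for v in values])
    sigma = 1.4826 * mad if mad > 0 else float("inf")
    num = den = 0.0
    for v, t, ext in zip(values, exp_times, extinctions_mag):
        if abs(v - med) <= nsigma * sigma:
            # Down-weight exposures taken through extinction (mag)
            w = t * 10.0 ** (-0.4 * ext)
            num += w * v
            den += w
    return num / den

vals = [5.0, 5.2, 4.8, 55.0]          # last value is a cosmic-ray hit
texp = [600.0, 600.0, 300.0, 600.0]   # seconds
ext  = [0.0, 0.0, 0.1, 0.0]           # magnitudes of extinction
print(round(combine_pixel(vals, texp, ext), 3))
```

The cosmic-ray value is rejected by the clip, and the shorter, slightly extincted exposure contributes with a correspondingly smaller weight.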
\begin{figure*}[t!]
\epsscale{1.15}
\plotone{Fig2-small-eps-converted-to.pdf}
\caption{Image of the COSMOS field (left) and its weighted exposure distribution (right). The brightness scales are inverted. The entire area shown here has a size of $108\arcmin \times 108\arcmin$. The image shows that this field is relatively free from bright stars and the background subtraction in our reduction is excellent. The exposure map shows the very different mosaicking strategy adopted by various teams, producing an extremely deep core of approximately $11\arcmin \times 14\arcmin$ at the center and a deep area of approximately $60\arcmin \times 60\arcmin$.
\label{fig2}}
\end{figure*}
\begin{figure*}[t!]
\epsscale{1.15}
\plotone{Fig3-small-eps-converted-to.pdf}
\caption{Image of the SXDS field (left) and its weighted exposure distribution (right). The brightness scales are inverted. The entire area shown here has a size of $77\arcmin \times 80\arcmin$. The bright star near the north-western corner has $V=6.49$ and produces slightly less than ideal background subtraction in its neighborhood. The exposure distribution is more uniform and the map size is smaller than the COSMOS field. The deep area is approximately $52\arcmin \times 44\arcmin$.
\label{fig3}}
\end{figure*}
\section{Calibration}\label{sec_calibration}
In this section, we outline how we achieved the calibrations for photometry and astrometry, and present the quality of the calibrations.
\subsection{Photometric Calibration and Quality}\label{sec_photo_cal}
\subsubsection{COSMOS}
Our general strategy is to tie our photometry to the well calibrated CFHT Supernova Legacy Survey (SNLS, e.g., \citealp{astier06,guy10}). The SNLS covered the COSMOS field with MegaCam and with the same $u^\ast$ filter used in MUSUBI. \citet{betoule13} published improved photometric calibration of the SNLS, which took into account the varying passband of the filter across the field and photometric flat-fielding (also see \citealp{regnault09}). The resultant photometric uniformity is 8 milli-magnitude at $u^\ast$, sufficient for the studies of high-redshift galaxies. We note that the Elixir data reduction mentioned in Section~\ref{sec_reduction} also adopts a photometric calibration matched to the SNLS calibration in \citet{betoule13}, since 2015. However, we include a substantial amount of data taken prior to that, whose calibration was based on the SDSS calibration of \citet{smith02} transferred to the MegaCam system \citep{magnier04}. To correct for any potential offset between the old MegaCam calibration and the new one, we applied the SNLS-based photometric calibration to each dataset individually before they were combined to form a deep image.
To calibrate our COSMOS $u^\ast$ data, we adopt the ``uniform magnitude'' in the SNLS photometric catalog of \citet{betoule13}, which corrects for the varying filter passband across the field. Then for our own data, we measured galaxy photometry using an aperture with $5\arcsec$ diameter. This is reasonably large for encapsulating the total fluxes of faint galaxies, as this is the aperture that gives fluxes closest to SExtractor auto-aperture fluxes on well-detected compact objects for our PSF ($\sim0\farcs9$, see Section~\ref{sec_quality}). This is also similar to the aperture adopted by the SNLS ($15\times$ Gaussian $\sigma$ of the PSF). We calculated the photometric correction by comparing the two for data taken in each observing run. The differences between our measured (Elixir-calibrated) photometry and the SNLS photometry in the various observing runs are always a few percent, even for data taken after 2015. This is likely caused by the difference in seeing and the aperture sizes adopted by Elixir. Our correction eliminates these offsets. The SNLS D2 pointing has an area of 1 deg$^2$, which only covers the central part of our 3 deg$^2$ map. For the outer region that does not have SNLS photometry, we propagated the photometric solution that we obtained from the center using the overlapping regions between the central and outer pointings. Our calibrated images have a map unit of $\mu$Jy per pixel, which is equivalent to an AB magnitude zero point of 23.9.
We demonstrate the calibration quality in Figure~\ref{fig_cosmos_cal}, in which we compare the photometry from our SExtractor catalog derived from our final mosaic (Section~\ref{sex_catalog}) with that from SNLS. Overall, the two catalogs agree with each other, and the median offset is $\Delta u^\ast=-0.0018$. However, there seems to be a small tilt in the sequence in Figure~\ref{fig_cosmos_cal}, indicated by the cyan dashed line. The median offsets are $-0.0039$ at $u^\ast<21$ and $0.0040$ at $u^\ast>21$. There is not an obvious explanation for this tilt, but the $\pm0.4\%$ offset should not have practical impacts on most science cases. The largest uncertainty of the calibration should come from the fact that there are typically only 20 to 30 SNLS objects available for calibrating each MegaCam CCD. Using the dispersion in Figure~\ref{fig_cosmos_cal}, which is 0.055 magnitude, we estimate that the calibration uncertainty would be approximately 0.01 magnitude, which is still quite good. The results here show that we have reached excellent calibration relative to the SNLS photometry in the COSMOS field.
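The quoted per-CCD calibration uncertainty follows from the standard error of the mean, $\sigma_{\rm cal} \approx \sigma_{\rm scatter}/\sqrt{N}$, with the 0.055 mag dispersion and 20--30 SNLS stars per CCD:

```python
# Per-CCD calibration uncertainty as the standard error of the mean:
#   sigma_cal ~ sigma_scatter / sqrt(N_stars)
# using the 0.055 mag dispersion and 20-30 calibrators quoted above.
import math

def calib_uncertainty(scatter_mag, n_stars):
    return scatter_mag / math.sqrt(n_stars)

for n in (20, 30):
    print(n, round(calib_uncertainty(0.055, n), 4))
```

This gives roughly 0.010--0.012 mag, consistent with the $\sim$0.01 mag estimate in the text.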
\begin{figure}[t!]
\epsscale{0.6}
\plotone{Fig4-eps-converted-to.pdf}
\caption{Comparison between our COSMOS photometry and the SNLS photometry on objects with $u^\ast$ errors less than 0.05 magnitudes in both catalogs. We use $D=5\arcsec$ apertures from our catalog, which is comparable to the aperture size used in SNLS. The median for the full sample is $-0.0018$. The cyan dashed line is a linear fit to data within $y=\pm0.1$, suggesting that there is a tilt. The nature of this tilt is unclear, and it is nevertheless very small.
\label{fig_cosmos_cal}}
\end{figure}
\subsubsection{SXDS}
In the SXDS field, our pointing and the SNLS D1 pointing are separated by about $2\arcdeg$. Therefore, unlike the COSMOS field, we cannot directly use the SNLS photometric catalog of \citet{betoule13} to calibrate our SXDS data. Here we rely on the SDSS $U$ and $G$ photometry in our field, but converted to the SNLS $u^\ast$ photometric system.
First, we selected blue stars and galaxies from the SDSS DR12 catalog using their $U-G$ colors. This avoids passive galaxies and late-type stars, whose strong 4000~\AA\, breaks can induce large color terms in the $U$ and $G$ bands, particularly for galaxies, whose color terms can be redshift-dependent. We only use galaxies brighter than $U=21$ to avoid large photometric uncertainties. By cross-matching the selected SDSS galaxies and SNLS galaxies, we derived the following conversion with least-squares fitting:
\begin{equation}
u^\ast = U - 0.187 (U-G) - 0.1443.
\label{eq_sxds_conversion}
\end{equation}
To better understand this relation, we further picked a sub-sample of galaxies with SDSS spectroscopy and conducted SED fitting to their SDSS photometry at their redshifts. We then integrated their fitted spectra using the $u^\ast$ filter profile to derive their $u^\ast$-band magnitudes. The result is consistent with the above empirical relation, but noisier because of the uncertainties associated with the SED fitting and the smaller sample size. We therefore conclude that Eq.~\ref{eq_sxds_conversion} correctly describes blue stars and galaxies within the SDSS detection limit. We used this relation to derive synthetic $u^\ast$ photometry of SDSS galaxies in our SXDS field to calibrate our CFHT data. In Figure~\ref{fig_sxds_cal}, we show a comparison between our SExtractor $u^\ast$ photometry and the synthetic SDSS $u^\ast$ photometry. Overall, the agreement between the two is very good, though slightly worse than the case for COSMOS based directly on SNLS. The median offset at $u^\ast>16.5$ is 0.012 magnitude, while the dispersion is 0.10. This leads to a calibration uncertainty of roughly 0.02 magnitude for each MegaCam CCD, about twice as large as that for COSMOS.
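The conversion of Eq.~\ref{eq_sxds_conversion} is straightforward to apply; a minimal sketch with a hypothetical blue source:

```python
# Eq. (1): synthetic SNLS/MegaCam u* (AB mag) from SDSS U and G
# photometry, valid for the blue stars and galaxies used in the fit.
def sdss_to_ustar(U, G):
    return U - 0.187 * (U - G) - 0.1443

# A hypothetical blue galaxy with U = 20.0, G = 19.5
print(round(sdss_to_ustar(20.0, 19.5), 4))
```

The $-0.187\,(U-G)$ color term corrects for the bluer effective wavelength of $u^\ast$ relative to SDSS $U$, which is why the relation is only trusted for blue sources without strong 4000~\AA\ breaks.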
\begin{figure}[t!]
\epsscale{0.6}
\plotone{Fig5-eps-converted-to.pdf}
\caption{Comparison between our SXDS photometry and the synthetic SDSS $u^\ast$ photometry on objects with SDSS $U<21$. We use $D=5\arcsec$ apertures from our catalog. The median at $u^\ast>16.5$ is 0.012. At $u^\ast<16.5$, nonlinear effects start to show up in the MegaCam data. Such bright objects were not used for calibration; they are shown here only to illustrate the nonlinear effects.
\label{fig_sxds_cal}}
\end{figure}
\subsection{Astrometric Calibration and Quality}
We tie our astrometric system to that of the Gaia \citep{gaia16} data. During our data reduction, the individual exposures from each MegaCam CCD were corrected for distortion, tangentially reprojected onto a common sky plane for each field, and then stacked/mosaicked with other exposures. The reprojection aligned our detected objects with the coordinates in the Gaia DR2 catalog \citep{gaia18,lindergren18}. In this process, we only used Gaia objects that are brighter than Gaia $G=20.5$ and have proper motions measured to be less than 30 milli-arcsec (mas) year$^{-1}$. The projection center for COSMOS is R.A.~(J2000.0) = 09:59:59.59, Decl.~(J2000.0) = +02:12:08.18. The projection center for SXDS is R.A.~(J2000.0) = 02:18:00.00, Decl.~(J2000.0) = $-$05:00:00.00. These positions were roughly determined from the common center of the various pointings of our observations and the archival data. The projected images have position angles of 0.0 on the sky and maintain the native MegaCam pixel size of $0\farcs186$ at the projection centers.
To examine the quality of the astrometry calibration, we show comparisons of the source positions in our final catalog against their Gaia positions in Figure~\ref{fig_astrometry}. For the COSMOS field, where 8360 sources are included in the comparison, the mean positional offsets along R.A.\ and Decl.\ are both at mas levels, practically consistent with zero. The dispersions in the offsets are 67 mas for R.A.\ and 74 mas for Decl., both acceptably small. For the SXDS field, where 2896 sources are included in the comparison, the mean offsets along the R.A.\ and Decl.\ are 6 mas and $-11$ mas, respectively. Therefore, our positions show a slight $\sim$10 mas systematic offset to the south-east relative to Gaia. The dispersions in the offsets for SXDS are 52 mas for R.A.\ and 61 mas for Decl., about 20\% smaller than those in the COSMOS field. This 20\% difference is likely a consequence of the smaller SXDS coverage, so the required distortion correction including reprojection is smaller. If we only use the brightest 15\% of unsaturated sources ($u^\ast=17$--19), the dispersions are reduced by about 14\% (COSMOS) and 5\% (SXDS). These relatively small improvements suggest that the positional errors relative to Gaia are not S/N-driven. This could be the fundamental limit of our overall methodology and pipeline capability. We conclude that the systematic errors in our astrometric calibration are nearly negligible, while the uncertainties in the source positions measured from our images are at a small 50--70 mas level for bright sources that are not limited by S/N.
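The per-field statistics quoted above are simply the mean and dispersion of the per-source positional offsets; a sketch with synthetic values:

```python
# Sketch of the astrometric quality check: mean (systematic) and
# dispersion (random) of the per-source positional offsets
# (MUSUBI - Gaia), in milli-arcseconds.  The offsets below are
# synthetic illustrative values, not the actual measurements.
import statistics

def offset_stats(offsets_mas):
    return statistics.mean(offsets_mas), statistics.pstdev(offsets_mas)

dra = [10.0, -55.0, 70.0, 5.0, -30.0]   # synthetic RA offsets (mas)
mean, disp = offset_stats(dra)
print(round(mean, 1), round(disp, 1))
```

A non-zero mean would indicate a systematic calibration offset, while the dispersion measures the per-source positional uncertainty relative to Gaia.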
\begin{figure*}[t!]
\epsscale{1.1}
\plotone{Fig6.pdf}
\caption{Astrometry of our data relative to the Gaia DR2 catalog. The positional offsets shown here are MUSUBI $-$ Gaia. The mean offsets are around 10 mas in both fields. The dispersions are approximately 70 mas for COSMOS and 60 mas for SXDS.
\label{fig_astrometry}}
\end{figure*}
\section{Data Quality}\label{sec_quality}
\subsection{Sensitivity}
To examine our imaging sensitivity, we selected sources detected at $5\pm0.1~\sigma$ with SExtractor auto-apertures, and calculated the median of their magnitudes. The resultant median 5 $\sigma$ limiting magnitudes are 27.19 for the COSMOS and 27.68 for the SXDS.
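This selection can be sketched as follows, using a synthetic stand-in catalog:

```python
# Sketch of the limiting-magnitude estimate: take all sources whose
# measured S/N falls within 5 +/- 0.1 and report their median
# magnitude.  The catalog below is a synthetic stand-in for the
# SExtractor output.
import statistics

def limiting_mag(catalog, target_snr=5.0, tol=0.1):
    mags = [m for m, snr in catalog if abs(snr - target_snr) <= tol]
    return statistics.median(mags)

cat = [(27.1, 5.05), (27.3, 4.95), (27.2, 5.0),
       (25.0, 20.0), (28.0, 2.0)]          # (mag, S/N) pairs
print(limiting_mag(cat))
```

Because the estimate uses real detected sources rather than an aperture-noise model, it naturally folds in the mix of compact and extended objects discussed below.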
The values quoted above are for the whole fields. However, as shown in Figure~\ref{fig2} and Figure~\ref{fig3}, our integration time distributions are highly non-uniform, because of the different imaging strategies adopted by the various teams. Therefore, the sensitivity distributions are also highly non-uniform. We measured the 5-$\sigma$ limiting magnitudes in small areas. The results are shown in Figure~\ref{fig_lim_dist}. In the deepest regions in the COSMOS and SXDS, we reach $u^\ast=28.1$ and $u^\ast=28.4$, respectively. In the $\sim1$ deg$^2$ relatively deep regions (yellow to red colors in Figure~\ref{fig_lim_dist}) in the COSMOS and SXDS, we reach 27.7 and 27.8, respectively. These two $\sim1$ deg$^2$ regions are referred to as ``CLAUDS UltraDeep'' regions by the CLAUDS team. These are by far the deepest 1 deg$^2$ fields for $u^\ast$ and similar $U$ bands.
All the above-mentioned limiting magnitudes were derived from all detected sources, among which many are extended objects. If we only select point-like objects whose measured sizes are $<1\farcs2$, the limiting magnitudes would become deeper by $\gtrsim0.3$. Also, if we fixed the aperture diameters to be $2\arcsec$, which favors compact objects, the limiting magnitudes would become 0.24 and 0.45 magnitude deeper in the COSMOS and SXDS, respectively. The difference between the two fields when switching to $2\arcsec$ is caused by the different image quality (next subsection).
\begin{figure*}[t!]
\epsscale{1.1}
\plotone{Fig7.pdf}
\caption{$u^\ast$-band sensitivity distribution of our survey. The color shows the median $u^\ast$ magnitudes of all sources detected at 5 $\sigma$ with SExtractor auto-apertures. The angular scales for the two fields are identical. It can be seen that the deep regions (yellow to red) in both fields are slightly less than 1 deg$^2$, and reach 27.7 in the COSMOS and 27.8 in the SXDS. For point-like sources, the limiting magnitudes are roughly 0.3 mag deeper.
\label{fig_lim_dist}}
\end{figure*}
\subsection{Image Quality}
To evaluate the image quality, we selected SDSS photometric stars and spectroscopic quasars in the COSMOS and SXDS fields with $u^\ast>17$ and measured their FWHM with SExtractor. We only selected $u^\ast>17$ objects to avoid nonlinear effects in the MegaCam data. The SDSS detection limit corresponds to roughly $u^\ast=22$. The distributions of the FWHM values are shown in Figure~\ref{fig_fwhm}. The median values are $0\farcs926$ for the COSMOS and $0\farcs947$ for the SXDS, indicated by the two arrows in Figure~\ref{fig_fwhm}.
Like the situation for sensitivity, the image quality in the two fields is not uniform. This is reflected in the asymmetric histograms in Figure~\ref{fig_fwhm}. The histogram for FWHM becomes more symmetric and sharply peaked if we only look at small regions in both fields. We therefore measured the medians of the FWHM distributions in small areas, and show the results in Figure~\ref{fig_fwhm_dist}. Overall, the field centers, which are covered by our own deep imaging, have better image quality, while the outer parts covered by various previous teams show larger image-quality variations. The central 1 deg$^2$ region in the COSMOS field has FWHM of $0\farcs88$--$0\farcs92$, while the central 1 deg$^2$ region in the SXDS has FWHM of $0\farcs92$--$0\farcs95$.
\begin{figure}[ht!]
\epsscale{0.6}
\plotone{Fig8-eps-converted-to.pdf}
\caption{$u^\ast$-band FWHM measured on SDSS stars and quasars with $u^\ast=17$--22 in the
COSMOS (solid histogram) and SXDS (dashed histogram) fields. The two downward arrows indicate the medians of the distributions.
\label{fig_fwhm}}
\end{figure}
\begin{figure*}[ht!]
\epsscale{1.1}
\plotone{Fig9.pdf}
\caption{Distribution of the image quality of our survey. The color shows the median FWHM (in arcsec) of SDSS stars or quasars with $u^\ast=17$--22. The angular scales for the two fields are identical. In the 1 deg$^2$ deep regions shown in Figure~\ref{fig_lim_dist}, the typical image quality is $\sim0\farcs93$ for both fields. In some regions in COSMOS, especially the south-eastern corner, the image quality is significantly better.
\label{fig_fwhm_dist}}
\end{figure*}
\begin{deluxetable}{lr}
\tablewidth{0pt}
\tablecaption{SExtractor Parameters \label{sexpara}}
\tablehead{\colhead{Parameter} & \colhead{Value}}
\startdata
DETECT\_MINAREA & 4 \\
DETECT\_THRESH & 1.0 \\
ANALYSIS\_THRESH & 1.2 \\
FILTER & Y \\
FILTER\_NAME & gauss\_2.5\_5x5.conv \\
DEBLEND\_NTHRESH & 64 \\
DEBLEND\_MINCONT & 0.000001 \\
CLEAN & Y \\
CLEAN\_PARAM & 0.1 \\
SEEING\_FWHM & 0.93 \\
PIXEL\_SCALE & 0.186 \\
MAG\_ZEROPOINT & 23.9 \\
PHOT\_APERTURE & 5.38, 8.06, 10.75, 16.13, 21.51, 26.88\\
BACK\_SIZE & 32 \\
BACK\_FILTERSIZE & 6 \\
BACK\_TYPE & AUTO \\
BACKPHOTO\_TYPE & LOCAL \\
BACKPHOTO\_THICK & 32 \\
WEIGHT\_TYPE & MAP\_WEIGHT \\
WEIGHT\_THRESH & 1000 \\
\enddata
\end{deluxetable}
\section{Source Catalogs}\label{sex_catalog}
In our data release, we provide reference catalogs along with the images. These catalogs can be readily used for scientific studies. However, users may need to generate their own catalogs if there are special requirements on the photometry, completeness, or sample purity, or if there is a need for photometry based on position priors.
\subsection{Catalog Generation}
We used SExtractor ver.\ 2.5.0 to generate the reference catalogs. The key SExtractor parameters are listed in Table~\ref{sexpara}. Because the two fields have very similar properties, with only a $\sim2\%$ difference in median image FWHM, we used identical SExtractor parameters for both fields. For the detection and deblending parameters, we visually inspected the detected galaxies on the images, and then adjusted the parameters. Here we aim for a good balance between detecting faint and blended sources and avoiding detecting too many noise spikes, in both deep and shallow regions (Figure~\ref{fig_extraction}). We set the seeing FWHM to $0\farcs93$, which only affects the star classification output. Because the image quality is not uniform (Figure~\ref{fig_fwhm_dist}), this value is only approximate for both fields. Therefore, star class values in our reference catalogs should have uncertainties somewhat larger than those for narrow-field surveys with uniform image quality. The output catalogs contain both fluxes ($\mu$Jy) and AB magnitudes of galaxies, measured with $1\arcsec$, $1\farcs5$, $2\arcsec$, $3\arcsec$, $4\arcsec$, and $5\arcsec$ diameter apertures, as well as SExtractor's auto apertures, which provide estimates of the total fluxes/magnitudes. The complete sets of SExtractor input parameters are provided with the data release, so the users can modify them and quickly create their own catalogs that fit their needs.
\begin{figure*}[ht!]
\epsscale{1.1}
\plotone{Fig10-small-eps-converted-to.pdf}
\caption{Illustrations of source extraction in our reference catalogs, to give readers a general idea about our detection aggressiveness. The center and a shallow corner of each field are shown. Each panel is $1\arcmin$ in size. The two central panels have identical brightness scales, while the two corner panels have $3\times$ larger brightness scales because of the higher noise. The ellipses are drawn according to SExtractor's shape parameters ($A$, $B$, and $THETA$) for objects with S/N $> 5$ (auto aperture) in our reference catalogs. The major and minor axes of the ellipses are made $2\times$ larger so faint objects behind them can be more easily seen. Upon careful examination, one may notice about 20 (center panels) or 10 (corner panels) objects in each panel missed by SExtractor, either because of their close proximity to brighter objects, or because of their extreme faintness. One may also notice fewer than a handful of ellipses in each panel that are more likely noise spikes rather than convincing detections. Our source extraction parameters are chosen to reach a balance between detection completeness and spurious detections.
\label{fig_extraction}}
\end{figure*}
\subsection{Comparison with Previous Catalogs}
We briefly compare our catalogs with previously published catalogs in COSMOS and SXDS, so the users can be aware of the systematic differences in these datasets.
\begin{figure*}[ht!]
\epsscale{0.7}
\plotone{Fig11-small-eps-converted-to.pdf}
\caption{Comparisons of photometry between MUSUBI and COSMOS2015 (a), COSMOS2020 (b), and SPLASH-SXDF (c). The comparisons are made on $3\arcsec$ aperture photometry derived from objects with magnitude errors less than 0.05. The median differences for objects with $u^\ast=17$--23 (the two vertical dotted lines) are given in each panel and plotted as the horizontal dashed lines. The histograms show the distributions for $u^\ast=17$--23 objects.
\label{fig_cat_compare}}
\end{figure*}
\subsubsection{COSMOS2015}
The COSMOS2015 multi-band catalog \citep{laigle16} has been the gold standard for photometry and photo-$z$ in the COSMOS field. All the CFHT $u^\ast$-band data included in COSMOS2015 are also included in MUSUBI, which additionally contains our new data. We compared the $3\arcsec$ aperture magnitudes in COSMOS2015 and our reference catalog. This aperture is larger than the optimal aperture for detecting the faintest compact objects. It is chosen because it is less sensitive to the small PSF size difference between the two datasets. With this aperture size, the median 5-$\sigma$ limiting magnitudes for COSMOS2015 and MUSUBI are 25.63 and 27.17, respectively. The numbers of 5-$\sigma$ detected objects are $2.69\times10^5$ and $8.86\times10^5$, respectively. The areas covered by $u^\ast$-band detected objects are 2.62 and 3.25 deg$^2$, respectively. These differences are mostly caused by the new data obtained after the compilation of COSMOS2015. In Figure~\ref{fig_cat_compare} (a), we compare the $3\arcsec$ photometry on sources whose magnitude errors are less than 0.05 in both catalogs. The calibrations of the two catalogs are highly consistent, as the median magnitude difference for objects with $u^\ast = 17$--23 is 0.0005. At the bright end of $u^\ast < 17$, the catalogs start to suffer from saturation effects.
\subsubsection{COSMOS2020}
COSMOS2020 \citep{weaver22} is a new compilation of multi-band data for COSMOS, including Subaru HSC data.
It includes all the available $u^\ast$ data, like MUSUBI, but with its own data reduction. However, the 5-$\sigma$ limiting magnitude with $3\arcsec$ apertures in COSMOS2020 is 27.11, slightly shallower than ours. There are $5.90\times10^5$ objects detected at 5 $\sigma$, covering an area of 3.37 deg$^2$. Their area coverage is comparable to ours, but they contain about 30\% fewer detected objects, likely caused by the differences in target selection criteria. In Figure~\ref{fig_cat_compare} (b), we compare the $3\arcsec$ photometry on sources with magnitude errors less than 0.05. There is a $-0.053$ magnitude offset between the two catalogs. This offset should be caused by the different calibration strategies: COSMOS2020 uses SDSS as the calibration reference while we use SNLS. If we take this offset into account, the difference in limiting magnitudes becomes even larger, i.e., MUSUBI is 0.11 magnitude deeper than COSMOS2020. This should also contribute partially to the larger number of detected objects in the MUSUBI catalog.
\subsubsection{SPLASH-SXDF}
SPLASH (Spitzer Large Area Survey with Hyper-Suprime-Cam, \citealp{steinhardt14}) is a warm-Spitzer imaging program for both COSMOS and SXDS (aka.\ SXDF). The multi-band catalog for SPLASH-SXDF was published by \citet{mehta18}, including CFHT $u^\ast$-band photometry from MUSUBI. Their $u^\ast$-band photometry was derived from an earlier version of our reduced image; the only differences between that early version and the present version are the photometric and astrometric references. So the two catalogs should be highly consistent, in principle. However, the SPLASH-SXDF photometry is derived from images with PSF homogenization across the optical, near-IR, and Spitzer IRAC bands. As a result, a direct comparison between the photometry in the SPLASH-SXDF and MUSUBI catalogs is not meaningful. In Figure~\ref{fig_cat_compare} (c), we show the comparison between the $3\arcsec$ photometry on sources with magnitude errors less than 0.05 in both catalogs. There is a 0.177 magnitude offset between the two. If we switch to SExtractor auto-magnitudes, the difference between the two catalogs reduces by $\sim50\%$ but is still quite significant. Such differences are likely caused by the PSF-homogenization process. On the other hand, the scatter of the differences is much narrower and flatter, compared to the cases in Figure~\ref{fig_cat_compare} (a) and (b). This reflects the fact that the SPLASH-SXDF and MUSUBI catalogs are derived from images based on identical datasets
and very similar reductions conducted by us.
\section{Application Examples}
\subsection{Photometric Redshifts}\label{photoz}
To demonstrate the value of our MUSUBI $u^\ast$-band data, we compare photo-$z$ derived with and without the $u^\ast$ data. We use the empirical machine-learning code, Direct Empirical Photometric code \citep[DEmP,][]{hsieh14}, to derive the photo-$z$. We combined the SExtractor $u^\ast$-band AUTO photometry in the MUSUBI catalog and the HSC second public data release $grizy$ afterburner photometry \citep[HSC PDR2,][]{aihara19} to compile a $u^{\ast}grizy$ multi-band photometric catalog. For the COSMOS field, the training sets are generated by matching the MUSUBI/HSC ${u^\ast}grizy$ multi-band photometric catalog to the redshifts in the COSMOS2020 catalog \citep{weaver22}. The COSMOS2020 catalog has two versions, CLASSIC and FARMER, which are derived using different photometric techniques. We used $lp\_ZBEST$, derived using the LePHARE photo-$z$ code \citep{arnouts02,ilb06}, in the FARMER catalog as the training references for photo-$z$. For the SXDS field, we repeated the same procedure to generate the training sets by matching the MUSUBI/HSC multi-band catalog to the redshifts in the SPLASH SXDF catalog \citep{mehta18}. We used $Z\_BEST$ in the SPLASH SXDF catalog as the training references for photo-$z$, which is also derived using LePHARE. We emphasize that the $u^\ast$-band photometry is derived using a different technique from that used to derive the HSC $grizy$ photometry. Therefore any analyses that need accurate $u^\ast - [g,r,i,z,y]$ colors can be seriously affected, such as template SED fitting for photo-$z$. However, because the same photometry/color offsets exist in both the training set and the target set, the conversion between the photometry and the derived quantity (e.g., photo-$z$) should be identical for the training set and the target set. Therefore the effect typically does not impact the results from an empirical machine-learning code like DEmP.
Because the training set completely overlaps with the target set, we run the DEmP code in the ``leave-one-out'' mode to prevent overfitting. DEmP always generates a dedicated subset of the training set for each target object. In the leave-one-out mode, DEmP excludes the training object with the identical ID to the target object in the dedicated subset of the training set. With the leave-one-out technique, we are able to compute accurate statistics of the derived quantities (e.g., photo-$z$, or stellar mass, see Section~\ref{sec_green_valley}) for the whole sample.
\begin{deluxetable*}{lrrrrrrr}[h!]
\tablecolumns{8}
\tablewidth{0pt}
\tablecaption{Photometric Redshift Performance for $u^\ast \leq 27.0$\label{qphotoz}}
\tablehead{
\multicolumn{1}{c}{Samples} &
\multicolumn{3}{c}{ALL} & &
\multicolumn{3}{c}{$z>2.0$} \\
\cline{2-4} \cline{6-8}
& scatter \tablenotemark{a} &
bias \tablenotemark{b} &
f$_{out}$ \tablenotemark{c} & &
scatter \tablenotemark{a} &
bias \tablenotemark{b} &
f$_{out}$ \tablenotemark{c}}
\startdata
COSMOS $grizy$ & 0.045 & $-0.0008$ & 19.7\% & &
0.110 & $-0.059$ & 30.1\% \\
COSMOS${u^\ast}grizy$ & 0.043 & $0.0001$ & 16.8\% & &
0.088 & $-0.050$ & 24.0\% \\
SXDS $grizy$ & 0.067 & $-0.0037$ & 28.2\% & &
0.234 & $-0.132$ & 49.2\% \\
SXDS ${u^\ast}grizy$ & 0.061 & $-0.0016$ & 24.6\% & &
0.153 & $-0.081$ & 40.5\% \\
\enddata
\tablenotetext{a}{$1.48\times$Median Absolute Deviation (MAD) of $\Delta{z}$,
where $\Delta{z} = \frac{\mathit{photo\mhyphen z} - \mathit{reference\mhyphen z}}{1 + \mathit{reference\mhyphen z}} $}
\tablenotetext{b}{Median of $\Delta{z}$}
\tablenotetext{c}{Outlier fraction: fraction of objects with
$\left | \Delta{z} \right | > 0.15$}
\end{deluxetable*}
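As an illustration, the statistics defined in the notes of Table~\ref{qphotoz} can be computed as in the following sketch. This is our own illustrative code, not part of the DEmP pipeline, and taking the MAD about the median of $\Delta z$ is an implementation assumption.

```python
import numpy as np

def photoz_metrics(z_photo, z_ref, outlier_cut=0.15):
    """Scatter (1.48 x MAD), bias (median), and outlier fraction of
    dz = (photo-z - reference-z) / (1 + reference-z).
    The MAD is taken about the median of dz (an assumption)."""
    z_photo = np.asarray(z_photo, dtype=float)
    z_ref = np.asarray(z_ref, dtype=float)
    dz = (z_photo - z_ref) / (1.0 + z_ref)
    bias = np.median(dz)
    scatter = 1.48 * np.median(np.abs(dz - bias))
    f_out = np.mean(np.abs(dz) > outlier_cut)
    return scatter, bias, f_out
```

A run over the matched catalog's photo-$z$ and reference-$z$ columns would reproduce the scatter, bias, and $f_{out}$ entries of the table.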
\begin{figure*}[ht!]
\begin{center}
\includegraphics[width=0.9\textwidth,clip]{Fig12.pdf}
\end{center}
\caption{Photometric redshift performance of adding MUSUBI $u^\ast$-band data to HSC $grizy$ data for objects with $u^\ast\leq27.0$. The left panels are for the COSMOS field and the right panels are for the SXDS field. The upper panels are the results derived using the HSC filters alone, while the lower panels are the results derived using the HSC filters and the MUSUBI $u^\ast$-band. Adding the MUSUBI $u^\ast$-band improves the photo-$z$ quality in terms of scatter, bias, as well as outlier fraction.}
\label{fig:photoz}
\end{figure*}
The results are shown in Table~\ref{qphotoz} and Figure~\ref{fig:photoz}. The left panels of Figure~\ref{fig:photoz} are for the COSMOS field, while the right panels are for the SXDS field. All the statistics are calculated for objects with $u^\ast\leq27.0$. For the COSMOS field, the scatter, bias, and outlier ($\left | \Delta{z} \right | > 0.15$) fraction for the results using the HSC filters alone are 0.045, $-0.0008$, and 19.7\%, respectively. Those with the MUSUBI $u^\ast$-band are 0.043, $0.0001$, and 16.8\%. Adding the $u^\ast$ band thus improves the scatter, bias, and outlier fraction. For the SXDS field, the scatter, bias, and outlier fraction for the results using the HSC filters alone are 0.067, $-0.0037$, and 28.2\%, respectively. With the $u^\ast$ band, these values are 0.061, $-0.0016$, and 24.6\%. The scatter, bias, and outlier fraction are all improved by adding the $u^\ast$-band data, similar to what we find in the COSMOS field.
At $z>2$, adding the $u^\ast$ band improves the photo-$z$ performance even further, including the outlier fractions. The photo-$z$ performance mainly relies on strong features of the galaxy spectrum, such as the Lyman break and the 4000~\AA\ break. The HSC filter with the longest effective wavelength is $y$, which is not able to sample the 4000~\AA\ break for galaxies at $z > 1.44$. Therefore the photo-$z$ scatter increases dramatically at $z > 1.5$. However, adding the MUSUBI $u^\ast$-band data can help sample the Lyman break ($\lambda_{\rm rest}=1216$~\AA) for galaxies at $z > 2.0$ and the Lyman limit ($\lambda_{\rm rest}=912$~\AA) for galaxies at $z > 3.1$. Therefore, the photo-$z$ performance at $z > 2.0$ can be significantly improved. For galaxies at $z > 2.0$ in the COSMOS field, the photo-$z$ scatter, bias, and outlier fraction are 0.110, $-0.059$, and 30.1\%, respectively, when using only the HSC filters. These values are improved to 0.088, $-0.050$, and 24.0\% after adding the $u^\ast$-band data. For the SXDS field, the scatter, bias, and outlier fraction for galaxies at $z > 2.0$ are 0.234, $-0.132$, and 49.2\%, respectively, with the HSC filters alone. They are 0.153, $-0.081$, and 40.5\% after adding the $u^\ast$-band data. The improvements in the scatter, bias, and outlier fraction are quite substantial. We conclude that adding $u^\ast$-band data to $grizy$ can significantly improve the photo-$z$ in the HSC UDS fields.
We note that this test is just to demonstrate the improvement of photo-$z$ quality produced by adding the MUSUBI $u^\ast$-band data; the result does not represent the optimal absolute photo-$z$ performance that can be derived using the MUSUBI/HSC catalog.
\subsection{Green-Valley Galaxies at $0.4 < z < 0.6$}\label{sec_green_valley}
We further demonstrate the power of the combination of our $u^\ast$-band data and the HSC UDS $grizy$ data with a mini study of galaxies in the ``green valley'' at $z=0.4$--0.6 down to $10^{9.1}~M_\sun$ for a $u^{\ast} \leq 25$ sample. The green valley is a sparse region between the blue cloud and the red sequence, and is often thought of as the transition zone in which galaxies are in the process of migrating from an active star-forming phase to a quiescent phase \citep{wyd07,mar07,sal14}. The low galaxy density in the green valley, if firmly established, can have profound implications for galaxy evolution. For example, the inferred timescale of galaxy transition cannot be very long, otherwise one would expect to see a continuous distribution extending from the green valley to the red color space \citep[e.g.,][]{sal14}. Some other studies, however, have suggested that the quenching timescales of green-valley galaxies vary with morphology and environment \citep{sal12,sch14,sme15,jia20}. Therefore, quantifying the fraction of green-valley galaxies and studying their properties provide crucial insights into the quenching mechanisms.
Among various color combinations, the rest-frame $NUV-R$ color has been suggested to be efficient in selecting galaxies with intermediate specific star formation rates between the star-forming and quiescent populations \citep{wyd07,sal14,coe18}. We focus on $z\sim0.5$, where the MUSUBI $u^\ast$ band directly samples the rest-frame $NUV$ wavelength, which is sensitive to ongoing star formation. When combined with the HSC UDS $grizy$ photometry, we are able to accurately characterize galaxies in the $NUV-R$ versus stellar mass space and quantify the frequency of galaxies in different populations. \citet{hsieh14} demonstrated that physical quantities besides redshift (e.g., stellar mass) can also be derived from photometry directly. To derive the stellar mass and rest-frame $NUV$ and $R$-band luminosities for the green-valley analysis, we repeated the procedure described in Section~\ref{photoz}. For the COSMOS field, the training sets were generated using $lp\_mass\_best$, $lp\_MNUV$, and $lp\_MR$ in the COSMOS2020 FARMER catalog for stellar mass, $NUV$ absolute magnitude, and $R$-band absolute magnitude, respectively. For the SXDS field, $MASS\_BEST$, $LUM\_NUV\_BEST$, and $LUM\_R\_BEST$ in the SPLASH SXDF catalog were used to generate the training sets for stellar mass, $NUV$ luminosity, and $R$-band luminosity, respectively.
We select galaxies with $u^{\ast} \leq 25$. The limiting magnitudes ($3\arcsec$ aperture; 5 $\sigma$) in the HSC $grizy$ bands are significantly deeper (27.5, 27.2, 27.0, 26.6, and 25.9 in $g$, $r$, $i$, $z$, and $y$, respectively) than 25.0. Since the observed $u^\ast - [g,r,i,z,y]$ colors are nearly all greater than $-1$ in the redshift range used in this analysis, the $u^{\ast} \leq 25$ selection ensures that the majority of galaxies are also detected in the HSC bands, with non-detection rates in the HSC bands between 0.01\% and 0.05\%.
\begin{figure}[t!]
\epsscale{0.8}
\plotone{Fig13-small-eps-converted-to.pdf}
\caption{Rest-frame $NUV-R$ color vs.\ stellar mass for HSC UDS + MUSUBI galaxies at $0.4 < z < 0.6$. The color scale is logarithmic. The region between the two dark green lines is the green-valley zone. Solid and dashed black vertical lines denote the mass completeness limits for star-forming and quiescent galaxies, respectively.}
\label{fig:nuvsm}
\end{figure}
\begin{figure}[t!]
\epsscale{0.7}
\plotone{Fig14.pdf}
\caption{Fractions of star-forming (blue), quiescent (red), and green valley galaxies (green) as functions of galaxy stellar mass at $0.4 < z < 0.6$ using the HSC UDS + MUSUBI data. The color bands are the standard deviations estimated using the bootstrap resampling method from 2000 trials. }
\label{fig:gv}
\end{figure}
Figure~\ref{fig:nuvsm} displays the distribution of galaxies of our HSC UDS sample in the redshift range of $0.4 < z < 0.6$. While there are various definitions of the green-valley zone in the literature, it may not be straightforward to apply those selections to our dataset directly due to possible systematics in the measurements of color and stellar mass. Therefore, we choose to separate galaxies into three populations, i.e., star-forming, quiescent, and green-valley galaxies, by following an iterative procedure similar to the one described in \cite{jia20}. In short, we first adopt a constant $NUV-R = 3.5$ on the $NUV-R$ vs.\ $M_\star$ plane to divide galaxies into two broad groups, the blue and red populations, and find the median values of the $NUV-R$ color of the two groups in each stellar mass bin, separately. We proceed to fit the median $NUV-R$ versus $\log M_\star$ distributions with a linear relation for the two sequences, where the log mass ranges used for the fitting are between 8.7 and 9.5 and between 9.7 and 11.1 for the blue and red populations, respectively. Next, we define the midpoints of the two sequences as the green valley line, and the green valley zone is then defined as the region within $\pm$0.5 in color from the green valley line. The quiescent and star-forming galaxies are referred to as those located above and below the upper and lower boundaries of the green valley, respectively. Once the star-forming and quiescent populations are defined, we fit again the linear relations for the two populations, redefine the green valley zone, and iterate this process until the green valley converges. The final mass-dependent color criterion for the green valley is as follows:
\begin{equation}\label{eq1}
NUV-R = 0.446 \times \log_{10}(M_\star/M_{\sun}) - 1.348 \pm 0.5.
\end{equation}
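For illustration, classifying a galaxy against this mass-dependent criterion amounts to the following minimal sketch. The function and label names are ours and not part of the paper's pipeline; only the coefficients of Equation~(\ref{eq1}) come from the text.

```python
import numpy as np

def classify_green_valley(nuv_r, log_mass):
    """Classify galaxies with the criterion of Eq. (1):
    green-valley line NUV-R = 0.446*log10(M*/Msun) - 1.348, width +/-0.5."""
    nuv_r = np.asarray(nuv_r, dtype=float)
    line = 0.446 * np.asarray(log_mass, dtype=float) - 1.348
    labels = np.full(nuv_r.shape, "green-valley", dtype=object)
    labels[nuv_r > line + 0.5] = "quiescent"     # redder than the upper boundary
    labels[nuv_r < line - 0.5] = "star-forming"  # bluer than the lower boundary
    return labels
```

For example, at $\log M_\star = 10$ the green-valley line sits at $NUV-R = 3.112$, so colors above 3.612 are quiescent and those below 2.612 are star-forming.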
In Figure~\ref{fig:gv}, we show the fractions of star-forming (blue), quiescent (red), and green-valley (green) galaxies as functions of $M_\star$, where the sum of the three fractions is unity. The errors are estimated using bootstrap resampling with 2000 runs. The stellar mass completeness limit is estimated using a method similar to that described in \cite{ilb10}. In the redshift range of $0.4 < z < 0.6$, we compare the stellar mass distributions of the $u^{\ast} \leq 25$ and $u^{\ast}\leq 27$ samples, assuming that the $u^{\ast}\leq 27$ sample is complete for the stellar mass range of our interest. We compute the fraction of galaxies with $u^{\ast} \leq 25$ in the complete sample ($u^{\ast}\leq27$) as a function of stellar mass. We then define the lower limit of the stellar mass as the mass at which 30$\%$ of the galaxies are fainter than $u^{\ast} = 25$. Because of the exceptional depth of the HSC and MUSUBI datasets used in this study, the stellar mass completeness limit in our sample reaches down to 10$^{9.1}$ $M_{\sun}$ for quiescent galaxies and 10$^{8.7}$ $M_{\sun}$ for star-forming galaxies, almost one order of magnitude lower than that in \cite{coe18}. We then choose the mass limit of quiescent galaxies to represent the mass limit for the whole sample, to ensure that at this mass limit, star-forming and green-valley galaxies are also complete. It can be seen from Figure \ref{fig:gv} that the fraction of star-forming (quiescent) galaxies is a strong function of stellar mass, decreasing (increasing) rapidly with increasing mass. In contrast, the fraction of green-valley galaxies is roughly constant ($\sim$25$\%$) in the stellar mass range between 10$^{9.8}$ and 10$^{10.8}$ $M_{\sun}$, but gradually declines to $\sim$12\% as the stellar mass decreases to 10$^{9.1}$ $M_{\sun}$. The roughly constant green valley fraction at the high-mass end is in broad agreement with the results obtained by \citet{coe18}.
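The mass-completeness estimate described above can be sketched as follows. This is a simplified illustration with our own function names and bin choices, not the exact implementation used for the paper.

```python
import numpy as np

def mass_completeness_limit(log_mass, u_mag, bins,
                            bright_cut=25.0, deep_cut=27.0, frac_thresh=0.70):
    """In each mass bin, compute the fraction of u*<=27 galaxies that also
    satisfy u*<=25; return the lowest bin edge where at least 70% do
    (i.e., no more than 30% are fainter than u*=25)."""
    log_mass = np.asarray(log_mass, dtype=float)
    u_mag = np.asarray(u_mag, dtype=float)
    deep = u_mag <= deep_cut            # the assumed-complete sample
    log_mass, u_mag = log_mass[deep], u_mag[deep]
    idx = np.digitize(log_mass, bins)   # bin membership by stellar mass
    complete_edges = []
    for i in range(1, len(bins)):
        in_bin = idx == i
        if not in_bin.any():
            continue
        frac_bright = np.mean(u_mag[in_bin] <= bright_cut)
        if frac_bright >= frac_thresh:
            complete_edges.append(bins[i - 1])
    return min(complete_edges) if complete_edges else None
```

In practice the fraction rises with mass, so the lowest bin edge satisfying the threshold approximates the crossing mass.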
On the other hand, the small green-valley fraction for low-mass galaxies revealed in this study suggests that quenching is inefficient in low-mass ($M_\star < 10^{10} M_{\sun}$) galaxies. This result supports the finding of \citet{lin14}, who addressed stellar-mass-dependent quenching with a different approach and showed that the quenching efficiency strongly increases with stellar mass. \citet{lin14} found that stellar mass quenching becomes dominant over environmental quenching only for galaxies more massive than $10^{10} M_{\sun}$. We therefore speculate that the low green-valley fraction at the low-mass end seen in this work might be due to the lack of stellar mass quenching below $10^{10} M_{\sun}$.
\section{Summary}
We conducted extremely deep $u^\ast$-band imaging with CFHT MegaCam in the COSMOS and SXDS fields, named ``MUSUBI,'' to sample the rest-frame UV of galaxies at $z\lesssim3$ and to complement the Subaru HSC UDS $grizy$ imaging in these two fields. Our deep imaging covers $\gtrsim1$~deg$^2$ in each field. By combining with shallower $u^\ast$ data in the CFHT archive, we reach 5-$\sigma$ limiting magnitudes of $u^\ast=28.1$ and 28.4 on faint galaxies in the deepest areas of our COSMOS and SXDS maps, respectively. In the central 1~deg$^2$ regions, which are more representative for the survey, the limiting magnitudes are 27.7 and 27.8 for COSMOS and SXDS, respectively. The image quality is quite uniform, with FWHM of $0\farcs88$--$0\farcs95$ measured on stars in the 1~deg$^2$ regions in the two fields. Our photometry is calibrated to the highly accurate CFHT SNLS $u^\ast$ photometry. We estimate that the uncertainties of the calibration are 0.01 magnitude for COSMOS and 0.02 magnitude for SXDS. We tied our astrometry to Gaia DR2. The astrometric uncertainties of our data are 70 mas for COSMOS and 60 mas for SXDS. Using a machine-learning photo-$z$ code, DEmP, we show that adding our $u^\ast$-band data to the HSC $grizy$ data can significantly improve the photo-$z$ scatter and bias from $z=0$ to $z\sim3$, and can also mildly improve the photo-$z$ outlier fractions at $z>2$. We also demonstrate that combining the $u^\ast$ and $grizy$ data enables the identification of green-valley galaxies at $z=0.4$--0.6 down to $10^{9.1}~M_\sun$. This allows us to study their evolution as a function of stellar mass and their fraction relative to star-forming and quiescent galaxies. We publicly release our reduced and calibrated $u^\ast$ images for COSMOS and SXDS, as well as reference SExtractor catalogs that are science-ready.
\acknowledgments
We thank the referee for comments that greatly improve the manuscript.
We thank the CFHT staff for the observational support, in particular for making the legacy $u^\ast$ filter available to us when MegaCam was migrating to the new filter system. MegaCam is a joint project of CFHT and CEA/DAPNIA, at the Canada-France-Hawaii Telescope (CFHT) which is operated by the National Research Council (NRC) of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique (CNRS) of France, and the University of Hawaii. The observations at the Canada-France-Hawaii Telescope were performed with care and respect from the summit of Maunakea which is a significant cultural and historic site. We are most fortunate to have the opportunity to conduct observations from this mountain. We gratefully acknowledge support from the Ministry of Science and Technology of Taiwan through grants 110-2112-M-001-006- (W.H.W.), 108-2628-M-001-001-MY3 (L.L.\ and H.Y.J.), and 110-2112-M-001-004 and 109-2112-M-001-005 (Y.T.L.), and from Academia Sinica through the Career Development Awards CDA-107-M03 (L.L.\ and H.Y.J.) and CDA-106-M01 (Y.T.L.). This work was conducted partially when W.H.W.\ was visiting CFHT as a resident astronomer. W.H.W.\ is grateful to the hospitality of the CFHT ohana.
\section{Introduction}
Fruit constitutes an indispensable component of our daily nourishment
and a significant fraction of export income for many countries (for example, $\approx$ \euro 15-20 billion
for EU member states \cite{EU_fruit_vegetable_sector_2020}). However,
during budding and/or flowering periods, the buds or flowers of many fruit species are vulnerable to low temperatures, especially to temperatures below $0 ^{\circ}$C,
and hence for them frost occurrence can cause a significant yield loss.
As an example, the frost occurrence on March 31, 2014 in Malatya, Turkey
(which is the world capital of dry apricot production) caused
a yield loss of around 95\%, with an economic value of US\$1.2 billion \cite{Malatya_IGTHM_2016}.
Active heaters which blow hot air on trees (to be called hot air blowing active
heaters (HABAHs) in the rest of the paper) during frost periods
can be a good choice for frost prevention in large-scale
orchards \cite{Atam_Arteconi_2017,Atam_et_al_2020}.
However, a significant barrier for widespread use of such systems
is the installation and operation costs. Adoption of renewable
or hybrid energy solutions (combination of renewable and non-renewable energies) to create hot
air can reduce these costs up to some level. However, independent of the used energy source,
two important design problems for active heater-based frost prevention systems are (i) the optimal distribution of a given number of
active heaters inside a large-scale orchard so that maximum protection
against frost episodes will be achieved and (ii) if hot air is distributed to HABAHs through a piping network, then optimization of the distribution pipe network to reduce pipeline cost and thermal energy losses.
A simplified schematic of the proposed hybrid energy-based
frost prevention system where HABAHs are used is given in Figure \ref{fig:rost_prev_energy_system_new}.
This system is called a hybrid energy system because a combination
of renewable and non-renewable energy sources is used.
The working principle of the system is as follows.
Solar energy through solar collectors will be used to heat water and store it inside an insulated pool. The stored hot water
will be used to heat air via a water-air heat exchanger
and the hot air will be distributed through a pipe network
to a number of blowers inside the orchard which
will blow hot air on trees during frost periods. When necessary, additional
energy from the grid will also be used to heat water in the pool (option 1),
or to directly heat air via an array of electrical air heaters (option 2).
\begin{figure*}[h!]
\centering
\includegraphics[scale=0.29]{figures/frost_prev_energy_system_new.eps}
\caption{Schematic of a hybrid energy-based frost prevention system integrating HABAHs for large-scale fruit orchards.}
\label{fig:rost_prev_energy_system_new}
\end{figure*}
The use of a solar thermal system can provide significant advantages for regions with rich solar radiation,
and for such regions, the extra energy that may be needed from the electricity grid to further heat the water or air will be a small fraction of the total energy used for frost prevention.
As a result, the proposed hybrid solution will be an economically feasible and cost-effective solution. The economic feasibility analysis of the proposed hybrid frost prevention system (such as calculation of investment/operational costs and payback time)
is not in the scope of the current paper and this will be studied in a future paper.
Note that the use of solar thermal collectors and the different insulation designs have already
been considered for heating swimming pools
with promising results \cite{Francey_et_al_1980, Dongellini_et_al_2015}.
In the proposed hybrid energy solution, the purpose of the pool is to store solar thermal energy
for a different application, and the pool will be completely covered on all sides with the most effective insulation material
to minimize energy losses. As a result, the insulation and hence the
storage efficiency requirements of the solar thermal-driven pool system
of this study are more demanding.
Next, we developed a multi-objective robust optimization-based approach for optimal placement
of a number of HABAHs inside a given large-scale orchard
for effective frost prevention and optimal design of the energy distribution network of the proposed framework. The basic building blocks of the developed optimization scheme
consist of (i) assuming a physically reasonable function for the spatial variation of the heating effect of a heater,
(ii) modeling the optimal heating of the orchard against frost
as an optimization constraint,
(iii) constructing a k-node minimum spanning tree (k-MST)
from a large undirected graph with unknown edge weights
for optimal design of the layout of the energy distribution pipe network to reduce installation and energy loss costs. The resulting problem is a large-scale
multi-objective robust mixed-integer nonlinear programming
problem where we use a discretization scheme
to approximate the problem with a mixed-integer linear programming (MILP) problem.
Moreover, we developed a MILP-based k-MST formulation which is very useful for multi-objective optimization
problems for which k-MST is a part. The k-MST is known to be NP-complete~\cite{Fischetti1994}. Furthermore,
suboptimal k-MST heuristics developed in the literature,
such as \cite{Arya_Ramesh_1998, Arora_Karakostas_2006, Garg_1996, Garg_2005},
were found unsuitable for the considered application since
multiple objective functions are optimized simultaneously, in addition to other non-traditional constraints that must be satisfied.
This paper is the
first attempt in the open literature to propose
a novel hybrid energy-based solution to the frost prevention problem
in large-scale fruit orchards. This is also a pioneering work in
proposing a multi-objective robust optimization-based approach,
including a novel MILP-based k-MST formulation to tackle the challenging, but important inherent optimal design problems
in applications based on a k-MST.
The rest of the paper is organized as follows.
In Section~\ref{sec:Spatial heater power effect variation modeling},
an empirical, but physically reasonable model for the spatial
variation of the heating effect of a HABAH is given.
The development of the multi-objective robust optimization formulation
for the considered application is described in detail in Section \ref{sec:Energy distribution network optimization formulation},
which includes the proposed discretization scheme for optimization approximation, the k-MST and robust counterpart formulations, each of which is a part of the overall optimization problem.
A case study is given in Section \ref{sec:A case study} to demonstrate the
savings achieved by the optimization-based design compared to a heuristic-based design.
Finally, the main findings of this study along with some
future research directions are given in Section \ref{sec:Conclusions}.
\section{Spatial heater power effect variation modeling}
\label{sec:Spatial heater power effect variation modeling}
The heating effect of any hot air blower-type active heater decreases
with distance and this effect depends on a number of factors
such as the installation configuration of the air blower,
the mass flow rate of blown air and its temperature.
In this paper, we assume the following representative empirical
function for the spatial variation of the heating effect of a given HABAH:
\begin{align}
P_{x_i, y_i}(x, y)=& P_0 k_i^u\underbrace{e^{-k_{tun}\sqrt{(x-x_i)^2+(y-y_i)^2}}}_{\triangleq f(x,y;x_i,y_i)} \nonumber \\
=& P_0 k_i^u f(x,y;x_i, y_i)
\end{align}
\noindent where $f(x,y;x_i,y_i)$ is the ``effective heating power'' of the $i$-th active heater, which is centered at $(x_i, y_i)$, at the point $(x, y)$. Basically, $f(x,y;x_i, y_i)$ reflects the fraction of the maximum heating power $P_0$ transferred to the point $(x,y)$ in the orchard.
The parameter $k_{tun}$ is a tuning parameter to vary the heating effect
and the parameter $k_i^u$ is an uncertain parameter,
lying in the interval $[\underline{k_i^u}, \, \overline{k_i^u}]$,
to account for uncertainty in $P_{x_i, y_i}(x, y)$.
For a specific type of hot air blowing heater, a function similar to $P_{x_i, y_i}(x, y)$ can be used, and hence the developed general optimization framework
in the next section can be used for any hot air blower.
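As an illustration, the heating-effect model above can be sketched in a few lines of Python (a hypothetical sketch; the function names and the default values $P_0=1$, $k_{tun}=0.01$ are illustrative, the latter taken from the case study in Section~\ref{sec:A case study}):

```python
import math

def effective_heating_fraction(x, y, xi, yi, k_tun=0.01):
    """Fraction f(x,y; xi,yi) of the maximum heating power P0 that
    reaches point (x, y) from a heater centered at (xi, yi)."""
    dist = math.hypot(x - xi, y - yi)
    return math.exp(-k_tun * dist)

def heating_power(x, y, xi, yi, P0=1.0, k_u=1.0, k_tun=0.01):
    """P_{xi,yi}(x, y) = P0 * k_u * f(x, y; xi, yi)."""
    return P0 * k_u * effective_heating_fraction(x, y, xi, yi, k_tun)
```

The fraction equals 1 at the heater center and decays exponentially with the Euclidean distance, as in the empirical model above.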
\section{Energy distribution network optimization formulation}
\label{sec:Energy distribution network optimization formulation}
\subsection{The optimization problem}
The motivation behind the use of optimization for the presented application is as follows. (i) For a given orchard, we want to locate $k$ HABAHs
inside the orchard in order to
heat the given orchard optimally in a balanced way. (ii)
The heaters are connected through a pipe network in such a way that
the length of the energy distribution pipeline network
is minimized. Minimizing this length has a
twofold benefit: first, the installation cost is reduced
and, second, the energy losses from the pipe network
during energy circulation are reduced (the shorter the total pipe length, the smaller the thermal energy loss, because heat loss increases linearly with total pipe length).
As constraints of the optimization problem, the following conditions should be satisfied:
\begin{align}
& \underline{f}-\mu_{s}^l \le \displaystyle \sum_{i=1}^{k}k_i^uf(x_s^{cp},y_s^{cp}; x_i,y_i) \le \overline{f}+\mu_{s}^u,\, s=1,\cdots,n_{cp} \label{power_fr_const} \\
& (x_i-x_j^t)^2+ (y_i-y_j^t)^2 \ge d_{ht}^2, \quad i=1,\cdots,k,\, j=1,\cdots,n_t
\label{distance_from_heater_to_tree_constraint}
\end{align}
Here, \eqref{power_fr_const} is used to enforce the condition
that at each selected discrete check point
$(x_s^{cp},y_s^{cp})$
in the orchard (``cp'' standing for check point) the sum
of power fractions from all heaters should be in the range $[\underline{f}, \overline{f}]$
whenever possible (if not possible, then minimum violations $\mu_{s}^l, \mu_{s}^u$
are allowed);
\eqref{distance_from_heater_to_tree_constraint} is used to ensure that the distance between a heater and
the root of a tree is at least $d_{ht}$ meters because heaters should not be installed in
areas occupied by tree stems and branches (assuming that the area occupied by a tree is approximately
a circular area with center $(x_j^t, y_j^t)$ and radius $d_{ht}$).
Note that the constraints in \eqref{power_fr_const} are uncertain
constraints since $k_i^u$s are uncertain parameters,
lying in the interval $[\underline{k_i^u}, \, \overline{k_i^u}]$.
The cost function to be minimized is
\begin{align} \label{cost_function}
\sum_{(i,j) \in V \times V,\, j > i}q_{ij}\sqrt{(x_i-x_j)^2+(y_i-y_j)^2}+\alpha\sum_{s=1}^{n_{cp}}(\mu_s^l+\mu_s^u)
\end{align}
where $V=\{1, 2, \cdots, k\}$,
$q_{ij}$ is a binary variable indicating whether the energy pipe network contains a ``direct'' pipe branch ($q_{ij}=1$) or not ($q_{ij}=0$) from heater $i$ to heater~$j$.
The objective function consists of the sum of two terms
where the first one (the summation term) denotes the length of the minimum spanning tree consisting of $k$ nodes (k-MST) and
the second term is used to penalize power range violations at check points.
Note that the above optimization problem consisting of
the cost function \eqref{cost_function}, constraints
\eqref{power_fr_const}-\eqref{distance_from_heater_to_tree_constraint}
and k-MST constraints (which we did not write at this point since
they will be developed later in detail) is a multi-objective
robust nonlinear programming problem.
\subsection{Discretization of orchard domain}
\label{subsection:Approach 2: discretization of nonlinear terms}
In this section, we propose a discretization-based
approach to be used in solving approximately the multi-objective
robust nonlinear programming problem given in the previous section.
In this approach we create a set of uniform discrete points inside the orchard
satisfying the constraints in \eqref{distance_from_heater_to_tree_constraint}
as candidate heater location points (ch-lps) to place the heaters and we denote this set by
$\mathcal{V}$ with $|\mathcal{V}|=n_{ch-lps} \gg k$. Next, we construct an undirected weighted graph $\mathcal{G}=(\mathcal{V},\mathcal{E})$
where $\mathcal{E}$ is the set of weighted edges
between each pair of candidate heater location points, and the edge weights $d_{e}^{ch-lps}$,
$e \in \mathcal{E}$, are the distances between the candidate heater location points.
The advantage of using this discretization
technique is that the nonlinear constraints in
\eqref{distance_from_heater_to_tree_constraint}
will be eliminated from the optimization problem, and we will be able to replace the nonlinear terms in \eqref{power_fr_const} and \eqref{cost_function}
with linear terms as shown later.
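The discretization step described above can be sketched as follows (a hypothetical illustration; the function names and the uniform-grid construction are assumptions for exposition, not the actual implementation):

```python
import math
import itertools

def candidate_heater_points(L, W, step, trees, d_ht):
    """Uniform grid of candidate heater location points (ch-lps) inside
    an L x W orchard, dropping points within d_ht of any tree root so
    that the constraints in (2) hold by construction."""
    pts = []
    x = 0.0
    while x <= L:
        y = 0.0
        while y <= W:
            if all(math.hypot(x - tx, y - ty) >= d_ht for tx, ty in trees):
                pts.append((x, y))
            y += step
        x += step
    return pts

def edge_weights(pts):
    """Euclidean distances between every pair of candidate points,
    i.e., the edge weights of the undirected graph G = (V, E)."""
    return {(i, j): math.hypot(pts[i][0] - pts[j][0], pts[i][1] - pts[j][1])
            for i, j in itertools.combinations(range(len(pts)), 2)}
```

Once the heaters are restricted to these grid points, both the distance constraints and the nonlinear heating terms reduce to precomputable constants, as stated above.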
Since k-MST is a part of the considered multi-objective robust optimization
problem, next we develop a mixed-integer linear programming
formulation for the k-MST problem.
\subsection{MILP-based formulation of k-MST problem}
In this section, we extend the original Miller-Tucker-Zemlin (MTZ) MILP model developed for the travelling salesman problem~\cite{MTZ1960} to the k-MST problem.
\subsubsection{Model structure and main variables}
Consider a generic undirected graph $\mathcal{G}=(\mathcal{V},\mathcal{E})$ where
$\mathcal{V}$ is the node set and $\mathcal{E}$ is the set of weighted undirected edges. For every edge $e = \{i,j\} \in \mathcal{E}$, where $i, j \in \mathcal{V}$, a binary decision variable $z_e$ is defined which represents the edge's inclusion (value of 1)/exclusion (value of 0) in the k-MST. Furthermore, for every node $i$, we define a binary variable $\ell_i$ that equals 1 if node~$i$ is included in the k-MST and zero otherwise. It is required to construct a tree with exactly $k$ nodes to achieve the stated objectives. Next, we develop the necessary constraints of the model.
\subsubsection{Constraints}
The MTZ formulation is based on defining a pair of binary variables
for each edge that suit the directed traveling salesman tour, denoted as $w_{(i,j)}$ and $w_{(j,i)}$. The following set of constraints represent their relationships with $z_e=z_{\{i,j\}}$:
\begin{equation}
z_e = w_{(i,j)} + w_{(j,i)} \;\;\;\; \forall e = \{i,j\} \in \mathcal{E} \label{KMST_1}
\end{equation}
The relationships provided in~\eqref{KMST_1} enable smooth translations between the decisions of including/excluding the hypothetical directed arcs $(i,j)$ and $(j,i)$ and the inclusion/exclusion of the undirected edge $\{i,j\}$.
In similar minimum spanning tree formulations, a node in $\mathcal{V}$ is arbitrarily selected and labeled as the terminal node. All resultant directed paths using the hypothetical directed arcs should end at that terminal node as part of the restrictions that lead to the formation of a tree~\cite{Abdelmaguid2018}. In the case of k-MST, since not all the nodes will be included in the tree, that terminal node cannot be chosen from the nodes in $\mathcal{V}$. Therefore, we add a dummy node $\tau$ to represent the terminal node in the current formulation. We also define a set of dummy edges $D = \{\{i,\tau\}: i \in \mathcal{V}\}$. The lengths of the edges in $D$ do not affect the objective functions, and therefore, their values are not of concern. For every dummy edge $e \in D$, a binary decision variable $z_e$ is augmented to the model, as well as pairs of $w_{(i,j)}$ and $w_{(j,i)}$ binary variables associated with its corresponding hypothetical directed arcs. Accordingly, similar to constraints~\eqref{KMST_1}, the following constraints are added to the model.
\begin{equation}
z_e = w_{(i,\tau)} + w_{(\tau,i)} \;\;\;\; \forall e = \{i,\tau\} \in D
\label{KMST_4}
\end{equation}
In the MTZ formulation, there should be exactly one arc directed out of node $i$, as well as exactly one arc directed into it in order to complete the travelling salesman tour. This restriction is not suitable for trees, since a node in a tree can have more than two edges connecting it to more than two nodes. As demonstrated in~\cite{Abdelmaguid2018}, this can be circumvented in the minimum spanning tree formulation by allowing only one restriction. That is, having exactly one arc directed out of a node, except the terminal node $\tau$. In the k-MST formulation, this has to be governed by the condition of whether this node is included in the tree or not. The following constraints, represent these conditions:
\begin{align}
& \sum\limits_{j \in \mathcal{V} \cup \{\tau\}, j \neq i}{w_{(i, j)}} = \ell_i & \forall i \in \mathcal{V} \label{KMST_2} \\
& \sum\limits_{j \in \mathcal{V}, j \neq i}{w_{(j, i)}} \leq (k-1) \ell_i & \forall i \in \mathcal{V}
\label{KMST_3}
\end{align}
Here, the constraints in \eqref{KMST_2} restrict the number of selected outgoing arcs starting at node $i$ to be exactly 1 if it is included in the k-MST, and to be zero otherwise, for all nodes $i \in \mathcal{V}$. Meanwhile, the constraints in \eqref{KMST_3} make sure that node $i$ will be connected by incoming arc(s) only when it is selected to be included in the k-MST.
Constraints~\eqref{KMST_2} and~\eqref{KMST_3} will result in a set of paths that start at a subset of nodes and can intersect at intermediate nodes. In the current model, all such paths should end at the dummy terminal node ($\tau$). To achieve that, the following two constraints are added:
\begin{equation}
\sum\limits_{j \in \mathcal{V}}{w_{(\tau, j)}} = 0
\label{KMST_5}
\end{equation}
\begin{equation}
\sum\limits_{i \in \mathcal{V}}{w_{(i, \tau)}} = 1
\label{KMST_6}
\end{equation}
Constraints~\eqref{KMST_5} and~\eqref{KMST_6} make sure that only one edge connecting node $\tau$ will appear in the final solution. This restriction is necessary to make sure that all resultant $k$ nodes will be connected. The only edge that connects node $\tau$ to one of the other $k$ nodes can then be excluded when the final MILP solution is interpreted.
Subtours in the MTZ formulation are eliminated by introducing continuous variables $u_i$ for each node $i \in \mathcal{V} \cup \{\tau\}$. The elimination is done by allowing a directed arc $(i,j)$ to appear in the solution only when $u_i > u_j$. The following constraints maintain this logic:
\begin{equation}
u_i \geq u_j + w_{(i, j)} - k (1 - w_{(i, j)}) \;\;\; \forall i \in \mathcal{V}-\{j\} \;\; \forall j \in \mathcal{V} \cup \{\tau\}
\label{KMST_7}
\end{equation}
The range of values that can be assigned to the $u_i$ variables are defined by the following constraints:
\begin{align}
& u_{\tau} = 0 & \label{KMST_8} \\
& u_i \leq (k - 1) \ell_i & \forall i \in \mathcal{V} \label{KMST_9} \\
& u_i \geq \ell_i & \forall i \in \mathcal{V} \label{KMST_10}
\end{align}
Finally, the constraint that specifies the number of selected nodes to be exactly $k$ and the domain constraints are defined as
\begin{align}
& \sum\limits_{i \in \mathcal{V}}{\ell_i} = k & \label{MST_11} \\
& \ell_i \in \{0, 1\} & \forall i \in \mathcal{V} \label{MST_12} \\
& w_{(i, j)}, z_{\{i,j\}} \in \{0, 1\} & \forall i,j \in \mathcal{V} \cup \{\tau\} \label{KMST_13}
\end{align}
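For very small instances, the MILP formulation above can be cross-checked against an exhaustive reference solver that enumerates all $k$-node subsets and computes the MST of each (an illustrative sketch only; this brute-force approach is exponential in the graph size and is not part of the proposed method):

```python
import itertools
import math

def brute_force_kmst(points, k):
    """Reference k-MST by exhaustion (tiny instances only): try every
    k-node subset, build its MST with Prim's algorithm, and keep the
    shortest one. Returns the minimum total edge length."""
    def mst_length(idx):
        # Prim's algorithm restricted to the chosen subset of nodes.
        in_tree = {idx[0]}
        rest = set(idx[1:])
        total = 0.0
        while rest:
            d, nxt = min(
                (math.hypot(points[a][0] - points[b][0],
                            points[a][1] - points[b][1]), b)
                for a in in_tree for b in rest)
            total += d
            in_tree.add(nxt)
            rest.remove(nxt)
        return total
    return min(mst_length(list(s))
               for s in itertools.combinations(range(len(points)), k))
```

Such a reference is useful for unit-testing the MTZ-based MILP model on toy graphs before deploying it on the full orchard instance.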
\subsection{Robust counterpart formulation for uncertain constraints}
\label{subsec:Robust counterpart formulation}
As mentioned before, the constraints in \eqref{power_fr_const} are uncertain
constraints since the $k_i^u$ are uncertain parameters
lying in the interval $[\underline{k_i^u}, \, \overline{k_i^u}]$.
To develop a deterministic MILP problem corresponding to the robust MILP, which is called the ``robust counterpart'' in the robust optimization
literature \cite{Bertsimas_Sim_2004, Li_et_al_2011},
we need to express the uncertain constraints \eqref{power_fr_const}
in a deterministic form such that they are satisfied for all realizations of the uncertain parameters $k_i^u,\, i=1,\cdots, k$, in their range $\underline{k_i^u}\le k_i^u \le \overline{k_i^u}$. This happens
if in \eqref{power_fr_const} we replace the $k_i^u$ with their lower bounds $\underline{k_i^u}$ in the lower-bound constraint and with their upper bounds $\overline{k_i^u}$ in the upper-bound constraint.
Using this replacement combined with
the discretization scheme, we obtain the ``robust counterpart constraints''
as
\begin{subequations}
\begin{align}
& \underline{f}-\mu_{s}^l \le \displaystyle \sum_{i=1}^{n_{ch-lps}}\ell_i\underline{k_i^u}h_{is} & s=1,\cdots,n_{cp}\\
& \displaystyle \sum_{i=1}^{n_{ch-lps}}\ell_i\overline{k_i^u}h_{is} \le \bar{f}+\mu_{s}^u & s=1,\cdots,n_{cp}
\end{align}
\end{subequations}
where $h_{is} \triangleq f(x_s^{cp},y_s^{cp}; x_i^{ch-lp},y_i^{ch-lp})$ denotes
heat influence of a heater located at the $i$-th candidate heater location point with coordinates $(x_i^{ch-lp},y_i^{ch-lp})$ at the
$s$-th check point with coordinates $(x_s^{cp},y_s^{cp})$.
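As a sanity check of the above bound replacement, the worst-case total power fraction at a check point over all realizations of the $k_i^u$ can be computed directly (a hypothetical sketch; the function names are illustrative, and it assumes the heat influences $h_{is}$ are non-negative, which holds since $f \ge 0$):

```python
def worst_case_power_fractions(h_s, k_lo, k_hi):
    """Worst-case total power fraction at one check point over all
    realizations k_i in [k_lo_i, k_hi_i]: with h_is >= 0, the minimum
    is attained at the lower bounds and the maximum at the upper bounds."""
    lo = sum(kl * h for kl, h in zip(k_lo, h_s))
    hi = sum(kh * h for kh, h in zip(k_hi, h_s))
    return lo, hi

def robust_feasible(h_s, k_lo, k_hi, f_lo, f_hi, mu_l=0.0, mu_u=0.0):
    """True iff the power-fraction constraint holds for ALL realizations
    of the uncertain parameters (i.e., the robust counterpart holds)."""
    lo, hi = worst_case_power_fractions(h_s, k_lo, k_hi)
    return f_lo - mu_l <= lo and hi <= f_hi + mu_u
```

This mirrors the deterministic constraints above: checking the two extreme realizations suffices because the constraint is monotone in each $k_i^u$.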
\subsection{The overall approximated optimization problem}
Collecting all the previous developments, we can express
the overall multi-objective robust optimization problem as in
\eqref{approx_MILP}. The
parameters $\beta_1^{nor},\beta_2^{nor}$
are used to normalize each part of the
cost function to lie in the interval $[0,1]$.
For a given large-scale orchard, it is important that a proper number of candidate heater location points
and check points is created so that a good trade-off is achieved between
the optimality of the resulting approximated MILP problem and the number of
resulting associated binary variables, which affect the solvability and
the time required for the solution of the approximate MILP.
\begin{figure*}[h!]
\hrule
\vspace{0.1cm}
Approximate MILP:
\vspace{0.1cm}
\hrule
\begin{subequations} \label{approx_MILP}
\begin{eqnarray}
&\hspace{-1cm} \displaystyle \min_{z_{\{i,j\}}, w_{(i,j)}, w_{(j,i)}, w_{(i,\tau)}, w_{(\tau,i)}, \ell_i, u_i, \mu_s^l, \mu_s^u}\left\{\frac{1}{\beta_1^{nor}}\sum_{\{i,j\} \in \mathcal{E} }z_{\{i,j\}}d_{i,j}^{ch-lp}+\frac{\alpha}{\beta_2^{nor}}\sum_{s=1}^{n_{cp}}(\mu_s^l+\mu_s^u) \right\} & \label{approx_MILP_obj_func}\\
& \eqref{KMST_1}-\eqref{KMST_13} & (\text{k-MST constraints}) \label{approx_MILP_kMST_cons} \\
& \underline{f}-\mu_{s}^l \le \displaystyle \sum_{i=1}^{n_{ch-lps}}\ell_i\underline{k_i^u}h_{is} & s=1,\cdots,n_{cp} \label{approx_MILP_thermal_effect_lower_cons}\\
& \displaystyle \sum_{i=1}^{n_{ch-lps}}\ell_i\overline{k_i^u}h_{is} \le \bar{f}+\mu_{s}^u & s=1,\cdots,n_{cp} \label{approx_MILP_thermal_effect_upper_cons}\\
& \mu_s^l, \mu_s^u \ge 0 & s=1,\cdots,n_{cp} \label{approx_MILP_mu positivity cons}
\end{eqnarray}
\end{subequations}
\hrule
\end{figure*}
\section{A case study}
\label{sec:A case study}
In this section we will consider a case study and
compare the results from a heuristic-based design
with the multi-objective robust optimization results.
The considered large-scale orchard together
with candidate heater location points and check points are given in Figure~\ref{fig:case_study_ch_loc_cp}
and its parameters are given in
Table~\ref{table:Orchard and optimization parameters}.
The heuristic-based design
consists of two stages. In the
first stage, we divide the orchard into $k$
parts having
the same area and put a heater at the center of each part. If a heater is put too close to a tree (within $d_{ht}$ meters), it is pushed from the tree in the same direction so that the distance between the tree and the heater is $d_{ht}$ m. By doing so, we ensure that no heater is located too close to the trees. In the second stage, we find pairwise
distances between the centers
to construct an undirected graph and then
a MST from the resulting graph using
Kruskal's algorithm.
The heater locations and pipe network using this
heuristic are given in Figure \ref{fig:Intuition-based heater locations and pipe network}.
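The second stage of the heuristic can be sketched as follows (an illustrative implementation of Kruskal's algorithm with a path-compressing union-find; function names are assumptions, not those of the actual code):

```python
import math

def kruskal_mst(points):
    """Minimum spanning tree over the heater centers (Kruskal's
    algorithm). Returns the selected edges and the total pipe length."""
    n = len(points)
    # All pairwise Euclidean distances, sorted in increasing order.
    edges = sorted(
        (math.hypot(points[i][0] - points[j][0],
                    points[i][1] - points[j][1]), i, j)
        for i in range(n) for j in range(i + 1, n))
    parent = list(range(n))
    def find(a):
        # Union-find root lookup with path compression.
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    tree, total = [], 0.0
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:            # accept edge only if it joins two components
            parent[ri] = rj
            tree.append((i, j))
            total += d
    return tree, total
```

The resulting total length is the quantity $\text{obj}_\text{part1}$ reported for the heuristic in the comparison table below.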
\begin{figure*}[h!]
\subfigure[]{%
\includegraphics[width=0.5\textwidth]{figures/trees_cand_heaters.eps} \label{fig:trees_cand_heaters}
}
\quad
\subfigure[]{%
\includegraphics[width=0.5\textwidth]{figures/trees_check_points.eps} \label{fig:trees_check_point}
}
\caption{Distribution of trees, candidate heater locations and check points: (a) tree and candidate heater locations, (b) tree and check point locations.}
\label{fig:case_study_ch_loc_cp}
\end{figure*}
\begin{table}
\centering
\caption{Orchard and optimization parameters}
\label{table:Orchard and optimization parameters}
\begin{tabular}{|p{1.5cm}||p{5cm}||l|}
\hline
\bf{parameter} & \bf{description} &\bf{value} \\ \hline \hline
$L$ & orchard length (m) & 180 \\ \hline
$W$ & orchard width (m) & 120 \\ \hline
$n_{t}$ & number of trees & 216 \\ \hline
$k$ & number of heaters & 21 \\ \hline
$n_{ch-lps}$ & number of candidate heater location points & 187 \\ \hline
$n_{cp}$ & number of check points & 216 \\ \hline
$d_{ht}$ & distance between the root of a tree and center of a heater (m) & 3 \\ \hline
$\underline{f}$ & minimum total power fraction at check points & 0.5 \\ \hline
$\bar{f}$ & maximum total power fraction at check points & 1 \\ \hline
$k_i^u$ & parameter to represent the uncertainty in the heating effect of a heater
at a point & $[0.8, \, 1]$ \\ \hline
$\underline{k_i^u}$ & lower bound of $k_i^u$ & 0.8 \\ \hline
$\overline{k_i^u}$ & upper bound of $k_i^u$ & 1 \\ \hline
$k_{tun}$ & tuning variable for spatial variation of the heating effect of heaters & 0.01 \\ \hline
$\Delta_{t}$ & horizontal and vertical distance between the roots of adjacent trees in the orchard (m) & 10 \\ \hline
$\Delta_{ch-lps}$ & horizontal and vertical distance between adjacent candidate heater locations in the orchard (m) & 10 \\ \hline
$\Delta_{cp}$ & horizontal and vertical distance between adjacent check points in the orchard (m) & 10 \\ \hline
$\beta_1^{nor}$ & normalization parameter for the length of k-MST & 600 \\ \hline
$\beta_2^{nor}$ & normalization parameter for
the sum of power range violations & 240 \\ \hline
\end{tabular}
\end{table}
\begin{figure*}[h!]
\centering
\includegraphics[scale=0.55]{figures/heuristic_results.eps}
\caption{Heuristic-based heater locations and pipe network.}
\label{fig:Intuition-based heater locations and pipe network}
\end{figure*}
The multi-objective robust optimization results were obtained using Gurobi~\cite{gurobi_2021} as a solver on a laptop with an Intel(R) Core(TM) i7-8550U CPU @ 1.80 GHz and 8 GB of RAM. The solver time limit was set to 3 hours
to find a solution close to optimal.
The optimization results are given
in Table \ref{table:multi-objective optimization and intuition-based results} where the first column
includes a series of values for the multi-objective optimization parameter $\alpha$, the second column gives
the optimality gaps, and the last two columns
denote
\begin{align*}
& \text{obj}_\text{part1} \triangleq \sum_{e=\{i,j\} \in \mathcal{E} }z_{\{i,j\}}d_{i,j}^{ch-lp} \\
& \text{obj}_\text{part2} \triangleq \frac{1}{n_{cp}}\sum_{s=1}^{n_{cp}}(\mu_s^l+\mu_s^u)
\end{align*}
which are the total pipe length and average power
fraction range violation, respectively.
The Pareto curve of
the multi-objective robust optimization and
the corresponding optimal heater locations and pipe network
for each case of the considered $\alpha$ value
are given in Figure~\ref{fig:pareto_curve} and Figure~\ref{fig:multi-objective optimization configs}, respectively,
from which we conclude that, among the considered
$\alpha$ values, $\alpha=5$ seems a good choice with
$\text{obj}_\text{part1}=411.87$ (m) and $\text{obj}_\text{part2}=0.149$.
In the Pareto curve we notice that the point
corresponding to $\alpha=1$ is higher than the
point corresponding to $\alpha=0.1$. Normally,
the opposite should be the case. The cause of this
anomaly is the fact that the optimality gap corresponding to $\alpha=1$
($35.9 \%$)
is considerably higher than
the optimality gap corresponding to $\alpha=0.1$
($6.23 \%$)
obtained after a 3-hour run of the optimization
algorithm (see Table \ref{table:multi-objective optimization and intuition-based results}).
\begin{table}
\centering
\caption{Multi-objective optimization and heuristic-based results}
\label{table:multi-objective optimization and intuition-based results}
\begin{tabular}{|l|l|l|l|}
\hline
$\alpha$ & $ \text{optimality gap} (\%)$ & $\text{obj}_\text{part1}$ (m) & $\text{obj}_\text{part2} (-)$ \\ \hline\hline
$0.1 $ & 6.23 & 200 & 0.379 \\ \hline
$1 $ & 35.9 & 208.28 & 0.394 \\ \hline
$\bf{5} $ & \bf{31.94} & \bf{411.87} &\bf{0.149} \\ \hline
$10$ & 24.56 & 513.31 & 0.123 \\ \hline
$100$ & 25.40 & 583.64 & 0.116 \\ \hline
$1000$ & 30.50 & 593.43 & 0.114 \\ \hline \hline
\bf{Heuristic-based} & ---- & \bf{542.85} & \bf{0.326} \\ \hline
\end{tabular}
\end{table}
Table \ref{table:multi-objective optimization and intuition-based results} also shows the
heuristic-based design results. When calculating
the power fraction range violations in the heuristic
case, the $k_i^u$ value for each heater was generated
as a random number in the range $[0.8, \, 1]$. When the
multi-objective robust optimization results corresponding
to the best case $\alpha=5$ are compared to
the heuristic-based results,
we have a 24.13\% reduction in the total pipe length
and a 54.29\% reduction in power fraction range violations, which clearly show the savings provided
by the developed multi-objective
robust optimization approach.
\begin{figure*}[h!]
\centering
\includegraphics[scale=0.5]{figures/pareto_curve.eps}
\caption{Pareto plot for the multi-objective robust optimization problem.}
\label{fig:pareto_curve}
\end{figure*}
\begin{figure*}[ht!]
\centering
\subfigure[]{%
\includegraphics[width=0.475\textwidth]{figures/fig_alpha_01.eps}
}
\quad
\subfigure[]{%
\includegraphics[width=0.475\textwidth]{figures/fig_alpha_1.eps}
}
\quad
\subfigure[]{%
\includegraphics[width=0.475\textwidth]{figures/fig_alpha_5.eps}
}
\quad
\subfigure[]{%
\includegraphics[width=0.475\textwidth]{figures/fig_alpha_10.eps}
}
\quad
\subfigure[]{%
\includegraphics[width=0.475\textwidth]{figures/fig_alpha_100.eps}
}
\quad
\subfigure[]{%
\includegraphics[width=0.485\textwidth]{figures/fig_alpha_1000.eps}
}
\caption{Optimal heater locations and pipe network using multi-objective robust optimization-based design for a set of $\alpha$ values.}
\label{fig:multi-objective optimization configs}
\end{figure*}
\newpage
\section{Conclusions}
\label{sec:Conclusions}
Cost-effective frost prevention in large-scale orchards is a challenging problem, and the development of
any cheap and widely usable technology for it will have a huge societal impact. Being aware of the importance of this problem, in this paper we first presented a renewable energy-integrated (solar thermal) solution for frost prevention in horticulture, thus providing an important potential application for renewable energy. Second, we developed a multi-objective robust MILP formulation
(i) to optimally determine the locations of hot air blowers to heat the orchard
in a balanced way to achieve maximum frost protection, and (ii) to optimize the layout of the
hot air distribution pipe network to minimize the investment cost and energy losses.
The proposed optimization method involved, as part of it,
the development of a MILP-based k-MST formulation suitable for multi-objective optimization problems.
The proposed optimization method was tested on a case study with the dimensions $L \times W =180 \text{ m} \times 120 \text{ m}$. The optimization results
corresponding to the optimal value of
$\alpha$ obtained from the Pareto plot
and a 3-hour run of the algorithm
were compared with those of a heuristic-based design to quantify the savings: a 24.13\% reduction in the total pipe length
and a 54.29\% reduction in power fraction range violations.
These savings clearly illustrate the importance of the use of
a numerical optimization framework in optimal design of the suggested hybrid energy frost prevention system. Although here we focused on frost prevention
for horticulture, the proposed framework can be used with minor modifications for other agricultural applications as well.
Examples of future works are
economic feasibility analysis of the proposed hybrid energy solution,
determination of the minimal number of heaters to be used for a given
large-scale orchard and integration of a decomposition
scheme for very large-scale optimization problems corresponding to very large-scale orchards.
\bibliographystyle{IEEEtran}
\label{sec:intro}
The mathematical modelling and simulation of biochemical systems is of paramount importance in several areas, \textit{e.g., } computational and systems biology, model-based pharmacology, chemistry.
The current \emph{de facto} standard for \emph{modelling} biochemical systems is \emph{Systems Biology Markup Language} (SBML, \citealp{hucka-etal:2003:bioinf}, \mbox{\url{http://www.sbml.org}}), an XML-based markup language allowing the definition of biochemical models in terms of reactions, species, compartments, and parameters.
SBML allows the quantitative modelling of various kinds of biological phenomena, including metabolic networks, cell signalling pathways, regulatory networks, infectious diseases, just to mention a few.
\emph{Simulation} of SBML models of practical relevance is crucial for their analysis, as they are often too large or intricate for being analysed statically.
Indeed, many third-party simulators of SBML models have been developed and are currently publicly available (see, \textit{e.g., } the SBML web-site).
\subsection{Motivations}
\label{sec:motivations}
Available SBML simulators do not fully support the integration, within open-standard simulation ecosystems, of SBML models with models defined using \emph{other} languages.
This severely hinders the possibility of \emph{co-simulating} and \emph{integrating} SBML models within large \emph{model networks} comprising biochemical as well as other kinds of models, possibly at different levels of abstraction (\emph{multi-scale} model networks, see, \textit{e.g., } \citealp{debono-etal:2012:biotech}), and of applying standard systems engineering approaches to the model-based analysis of such heterogeneous model networks.
For example, the interconnection of quantitative models of the human physiology (\textit{e.g., } Physiomodel, \citealp{matejak-etal:2015:embc}), drugs pharmacokinetics/pharmacodynamics (\textit{e.g., } Open Systems Pharmacology Suite, \citealp{eissing-etal:2011:frontiers}), (possibly semi-autonomous) biomedical devices, pharmacological protocol guidelines or treatment schemes, enables the set-up of \emph{in silico clinical trials} for the (model-based) safety and efficacy pre-clinical assessment of such drugs, protocols, treatments, devices, using \emph{standard system engineering approaches} to perform their simulation-based analysis at system level (see, \textit{e.g., } \citealp{kanade-etal:2009:cav,mancini-etal:2013:cav,mancini-etal:2014:pdp,zuliani-etal:2013:fmsd,zuliani:2015:jttt,mancini-etal:2016:micpro,mancini-etal:2017:ipl}).
Works in this direction include, \textit{e.g., } \cite{schaller-etal:2016:mpc,messori-etal:2018:individualized}, where a model-based verification activity of a sensor-augmented insulin pump is conducted against a model of the human glucose metabolism in patients with diabetes mellitus,
\cite{madec-etal:2019:lab-on-chips}, where a model of a penicillin bio-sensor (integrating biochemistry, electrochemistry, and electronics models) is simulated to compute a first dimensioning of the sensor,
and \cite{mancini-etal:2014:fmcad,mancini-etal:2015:iwbbio}, where representative populations of virtual patients are generated from parametric models of the human physiology, a key step to enable \emph{in silico} clinical trials (see, e.g., \citealp{mancini-etal:2018:rcra-treatment}).
One of the most widely adopted open-standard languages for modelling dynamical systems is Modelica (\url{http://www.modelica.org}), a general-purpose fully-fledged language based on ordinary differential as well as algebraic equations plus procedural snippets.
The language supports object-orientation and allows the definition of complex systems as networks of smaller subsystems.
Modelica is widespread in application domains as diverse as mechanical, electrical, electronic, hydraulic, thermal, control, and electric power engineering, but also physiology and pharmacology (see, \textit{e.g., } \citealp{matejak-etal:2015:embc}), and several efficient and highly-configurable simulators are currently available: proprietary (\textit{e.g., } Dymola
and Wolfram System Modeler)
as well as open-source (\textit{e.g., } OpenModelica and JModelica).
A Modelica model can also be easily exported into a Functional Mock-Up Unit (FMU), an executable \emph{opaque} (binary) object implementing the Functional Mock-Up Interface (FMI, \url{http://fmi-standard.org}), one of the currently most widespread open standards for model exchange, integration and co-simulation.
Being \emph{black-box}, FMU models can be shared or integrated within larger model networks while protecting their \emph{intellectual property} (see, \textit{e.g., } \citealp{mancini-etal:2016:fundam}). This is crucial when sharing, integrating, or co-simulating models coming from different providers (\textit{e.g., } pharma companies, or manufacturers of novel biomedical devices).
The FMI/FMU standard is currently supported by more than 100 simulators for \emph{virtually all} application domains, making it the largest open-standard ecosystem for (language-independent) model exchange, integration, and co-simulation.
\subsection{Contributions}
\label{sec:contributions}
In this paper we present SBML2\-Mod\-el\-ica\xspace, a software system that translates SBML models into well-modularised user-intelligible \emph{Modelica} code, which preserves both the structure and the documentation of the input SBML models.
The generated Modelica models can then be easily modified, integrated within other models, and readily run using \emph{any} available Modelica simulator. Furthermore, the generated Modelica models can be easily exported into FMUs, thus allowing their seamless co-simulation and integration into model networks within open-standard language-independent simulation ecosystems (a helper tool is provided in the SBML2\-Mod\-el\-ica\xspace repository which generates an FMU directly from an SBML model by leveraging the JModelica API).
SBML2\-Mod\-el\-ica\xspace complies with the \emph{latest} SBML standard (\SBMLLV{3}{2}, \citealp{hucka-etal:2018:sbml-l3v2}) and succeeds on 96.47\%\xspace of the SBML Test Suite Core v3.3.0 (see Section~\ref{sec:results:correctness}), with a few rare, intricate, and easily avoidable combinations of constructs (see Section~\ref{sec:discussion}) unsupported and cleanly signalled to the user.
Furthermore, our experimental campaign on 613\xspace models from the BioModels database (with up to 5438\xspace variables) shows that the major open-source (\emph{general-purpose}) Modelica and FMU simulators (OpenModelica and JModelica, the latter of which converts the input Modelica model into an FMU and then simulates such an FMU), when used in their default configurations, achieve performance comparable to state-of-the-art \emph{specialised} SBML simulators (see Section~\ref{sec:results:performance}).
SBML2\-Mod\-el\-ica\xspace can be freely downloaded for non-commercial uses.
The system has been implemented in Java and can be executed on any platform for which a Java Virtual Machine is available, \textit{i.e., } on most computer operating systems.
\subsection{Available SBML simulators}
\label{sec:soa}
A plethora of systems for the simulation of models written in SBML are currently available (most of them are listed in the SBML web-site),
and a comprehensive review of them is out of the scope of this paper.
We note, however, that both their functionalities and their compliance with the SBML standard are highly variable.
In particular, a certified report of compliance with the official SBML Test Suite Core is (at the time of writing) publicly available for only six systems (see the SBML web-site).
Some of these systems (namely: libRoadRunner, \citealp{somogyi-etal:2015:libroadrunner}; libSBMLSim, \citealp{takizawa-etal:2013:libsbmlsim}; and Simulation Core Library, \citealp{keller-etal:2013:simcorelib}) are pure SBML simulators, allowing the user to numerically simulate the SBML model given as input. All of them support a previous SBML standard (\SBMLLV{3}{1}), while SBML2\-Mod\-el\-ica\xspace supports the \emph{latest} standard (\SBMLLV{3}{2}) with only a few minor limitations (see Section~\ref{sec:discussion}).
The other systems (namely: BioUML, \citealp{kolpakov:2019:biouml}; iBioSim, \citealp{myers-etal:2009:ibiosim}; and COmplex PAthways SImulator --COPASI, \citealp{lee-etal:2006:copasi}) are more general platforms that, beyond model simulation, allow the user to modify, extend, and connect different SBML models together. Of them, only BioUML supports \SBMLLV{3}{2}, as iBioSim and COPASI only support the older \SBMLLV{3}{1}.
By translating SBML models into an open-standard general-purpose widely-adopted simulation language as Modelica (preserving both the structure and the documentation of the input models), SBML2\-Mod\-el\-ica\xspace not only allows simulation of the generated models (using any Modelica simulator), but also opens up a wealth of new possibilities to integrate SBML biochemical models with models of other kinds of systems (see Section~\ref{sec:motivations}) written in languages different than SBML.
Enabling interoperability and integration of biochemical models into cross-domain model networks has been strongly advocated.
Attempts in this direction include, \textit{e.g., } \cite{madec-etal:2017:bbspice}, where biochemical models are converted into Spice, a standard integrated electronic circuit simulator, for the model-based design of bio-sensors and labs-on-chip. Model conversion is performed exploiting clever analogies between the behaviour of biochemical systems and electronic circuits and between molecular diffusion and heat diffusion \cite{gendrault-etal:2014:ieee-tbe,madec-etal:2019:lab-on-chips}.
SBML2\-Mod\-el\-ica\xspace acts at a higher level, by translating SBML models into a genuinely general-purpose cross-domain open system modelling language (Modelica), hence enabling seamless integration and co-simulation of SBML models with models of virtually \emph{all} application domains, without the need to exploit cross-domain analogies, hence fully preserving model readability and extensibility.
The possibility to export Modelica models into FMUs is one step further, allowing model integration and co-simulation in a language-independent way.
Previous approaches to translate SBML models into \emph{general-purpose} simulation platforms include
the SimBiology Matlab toolbox and
Wolfram SystemModeler (a proprietary simulator accepting the Modelica language) with the BioChem plug-in \cite{larsdotter-etal:2003:biochem,fritzson-etal:2007:biochem}.
In particular, by providing a user-friendly interface and high-level library abstractions, the BioChem SystemModeler plug-in allows the definition of visually appealing biochemical networks in some Modelica editors.
Unlike SBML2\-Mod\-el\-ica\xspace, both SimBiology and Syst\-emMod\-eler+BioChem are based on commercial simulators and support only subsets of older SBML standards (\SBMLLV{3}{1} and \SBMLLV{2}{4}, respectively), with several major limitations, including lack of support for delayed and prioritised events.
Also, no compliance assessment against the SBML Test Suite Core is available for either of them.
\begin{methods}
\section{Materials and methods}
\label{sec:methods}
In the following, we briefly outline the main structure of an SBML (Section~\ref{sec:methods:sbml}) and of a Modelica model (Section~\ref{sec:methods:modelica}), before sketching (Section~\ref{sec:methods:encoding}) how SBML2\-Mod\-el\-ica\xspace generates a structured Modelica model from an input SBML model.
\subsection{High-level view of an SBML model}
\label{sec:methods:sbml}
Here we recall the main constructs of SBML, namely: parameters, compartments, species, and events.
The reader interested in a more in-depth description is referred to the official SBML web-site (\url{http://www.sbml.org}) for the full language specification.
\emph{Parameters}
denote quantities with a symbolic name. Such quantities can be either constant or varying during model evolution.
\emph{Compartments}
denote containers of a particular type and positive \emph{size} (possibly varying during model evolution).
\emph{Species}
represent model entities (\textit{e.g., } biochemical substances), whose \emph{amount} may vary during model evolution.
Each species belongs to a compartment.
Species may take part in \emph{reactions}. At any time, the \emph{concentration} of a species in its compartment is defined as $\frac{\Fun{amount}}{\Fun{size}}$, where \Fun{size} is the size of the species compartment at that time.
Model parameters, species, and compartment sizes are defined by means of \emph{model variables} and can be assigned values.
An \emph{initial assignment} defines the value of a model variable at time 0.
\emph{Reactions}
are statements describing any transformation, transport or binding process that changes the amount of one or more species.
A reaction of the form $\alpha \to \beta$ (where $\alpha$ and $\beta$ are \emph{mixtures}, defined as linear combinations of species, \textit{e.g., } $\alpha = s_1 + 2 s_2$, $\beta = s_3$, where $s_1, s_2, s_3$ are species) describes how (and how much of) certain species (those in $\alpha$, called reactants) are transformed into certain other species (those in $\beta$, called products). Reactions have associated \emph{kinetic rate expressions} that describe how quickly they take place.
According to the SBML specification, for any species $s$, the set of reactions $R_1, \ldots, R_n$ in which $s$ occurs (together with their associated kinetic rate expressions $k_{R_1}, \ldots, k_{R_n}$) collectively define the time derivative of the available amount of $s$ as:
\begin{equation}
\label{eq:methods:der_species}
\derive{s} =
\sum_{i=1}^{n} k_{R_i} \times \Stoich{s, R_i}
\end{equation}
where $\Stoich{s, R_i}$ is the sum of the coefficients that multiply the occurrences of $s$ in reaction $R_i$. Coefficients occurring on the left side of $R_i$ are multiplied by $-1$ in order to model species consumption, while those occurring on the right side are taken as they are in order to model species production (see forthcoming Example~\ref{ex:methods:example}).
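As an illustration of Eq.~\eqref{eq:methods:der_species}, the following Python sketch (purely illustrative; the function and variable names are ours, this is not SBML2\-Mod\-el\-ica\xspace code) accumulates the species derivatives from a list of reactions, instantiated on reaction $R$ of the forthcoming running example:

```python
# Illustrative sketch of Eq. (1): der(s) = sum_i k_{R_i} * Stoich(s, R_i),
# where reactant coefficients are negated (consumption) and product
# coefficients are kept as they are (production).

def species_derivatives(reactions):
    """reactions: list of (reactants, products, rate), where reactants
    and products map each species name to its coefficient."""
    der = {}
    for reactants, products, rate in reactions:
        for s, coeff in reactants.items():
            der[s] = der.get(s, 0.0) - coeff * rate   # consumption
        for s, coeff in products.items():
            der[s] = der.get(s, 0.0) + coeff * rate   # production
    return der

# Reaction R: s1 + 2 s2 -> s3, kinetic rate k_R = p2 * s1 * s2
p2, s1, s2 = 1.0, 1e-3, 1e-3
k_R = p2 * s1 * s2
R = ({"s1": 1, "s2": 2}, {"s3": 1}, k_R)
print(species_derivatives([R]))
# der(s1) = -k_R, der(s2) = -2 k_R, der(s3) = +k_R
```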
\emph{Events}
represent \emph{instantaneous} and \emph{discontinuous} changes in the value of some quantities (\textit{e.g., } amounts of species, parameters, size of compartments) of the model.
An event is defined in terms of a \emph{trigger condition} (a Boolean formula), and a set of \emph{assignments}, which update some model variables when the event \emph{occurs} (\textit{i.e., } when the trigger condition switches from false to true).
Optionally, events can be \emph{delayed} by a certain time interval, whose length could change during model evolution.
To disambiguate the case in which two events occurring at the same time assign different values to the same variable, a \emph{priority} expression (over the model variables) can be defined for events. Event priority expressions, evaluated when events occur, define the order in which concurrent events must be handled.
SBML events can be either persistent or non-persistent.
Let $e$ be an event, $t$ be the time instant when the trigger condition of $e$ becomes true, and $d$ be the event delay.
\emph{Non-persistent} (respectively, \emph{persistent}) event $e$ must be executed at time instant $t+d$ \emph{only if} (respectively, \emph{regardless of whether}) the trigger condition remains true during the whole delay period (\textit{i.e., } from time $t$ to time $t+d$).
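The distinction can be illustrated with a small Python sketch (ours, not SBML2\-Mod\-el\-ica\xspace code; the sampled trigger check is a simplification of the exact continuous-time semantics):

```python
# Illustrative sketch of the SBML firing rule for delayed events.
# trigger(t) is the (time-dependent) trigger condition; the event
# becomes pending when the trigger switches from false to true at t0.

def fires(trigger, t0, delay, persistent, dt=1e-3):
    """Return True iff a pending event (triggered at time t0, with the
    given delay) is actually executed at time t0 + delay."""
    if persistent:
        return True          # executed regardless of the trigger value
    # non-persistent: trigger must stay true on the whole [t0, t0+delay]
    t = t0
    while t <= t0 + delay:
        if not trigger(t):
            return False     # trigger fell back to false: cancelled
        t += dt
    return True

# Trigger true on [1.0, 1.5) only; event triggered at t0 = 1.0, delay 1.0
trig = lambda t: 1.0 <= t < 1.5
print(fires(trig, 1.0, 1.0, persistent=True))    # True
print(fires(trig, 1.0, 1.0, persistent=False))   # False
```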
\emph{Rules} provide additional means to define the values of variables in a model in ways that cannot be expressed using reactions or initial assignments. The following three types of rules are provided (below, $\vecvar{V}$ is a set of --possibly all-- model variables).
\emph{Algebraic rules} are of the form $f(\vecvar{V}) = 0$.
\emph{Assignment rules} are of the form: $x = f(\vecvar{V} - \{x\})$.
Finally, \emph{rate rules} define the rate of change of a model variable and are of the form: $\derive{x} = f(\vecvar{V})$.
Example~\ref{ex:methods:example} shows a simple SBML model that will be used as a running example when outlining how SBML2\-Mod\-el\-ica\xspace works (Section~\ref{sec:methods:encoding}). Although the model is clearly artificial and does not necessarily correspond to a known biochemical mechanism, it has the merit of compactly showing all the most important SBML constructs that we will address in the remainder of the paper.
\begin{example}[Running example]
\label{ex:methods:example}
\newcommand{\Header}[1]{\par\smallskip\noindent\textup{\textbf{#1}}}
Our SBML model (whose code is available in the SBML2\-Mod\-el\-ica\xspace repository) consists of the following elements:
\Header{Parameters:}
$p_1$ with constant value
$10^{-6}~[\text{l}~\text{sec}^{-1}]$;
$p_2$ whose value is initially set to
$1~[\text{mol}^{-1} \text{sec}^{-1}]$;
$p_3$ with constant value
$10^{-3}~[\text{mol}]$;
$p_4$ whose initial value is set to
$300~[\text{sec}]$.
\Header{Species}~(all in $[\text{mol}]$):
$s_1$, $s_2$, $s_3$ (initially set to $10^{-3}$),
and
$s_4$.
\Header{Compartments:} One compartment $c$ containing all the species.
\Header{Reactions:} Reaction
$R : s_1 + 2 s_2 \longrightarrow s_3$
with kinetic rate expression
$k_R = p_2 s_1 s_2~[\text{mol}~\text{sec}^{-1}]$.
\Header{Events:}
\begin{itemize}
\item $e_1$ with trigger condition $s_1 s_2 \leq 10^{-7} \lor s_3 s_4 \leq 10^{-7}$ and priority equal to $s_4$. When the event is triggered, parameter $p_2$ is set to $0$.
\item $e_2$ with the same trigger condition as $e_1$, but with a delay of $p_4$ and priority equal to $s_2$. When triggered, parameter $p_2$ is set to $-1$ and parameter $p_4$ is set to $0$.
\end{itemize}
\Header{Rules:}
\begin{itemize}
\item Rate rule $r_1$, defining $\derive{p_2} = 0$;
\item Assignment rule $r_2$, which sets the $\Fun{size}$ of compartment $c$ to $1 + p_1 \times t~[\text{l}]$, where $t$ is the value of the current time-instant;
\item Algebraic rule $r_3$, which imposes that constraint
$\frac{s_2}{s_1} - \frac{s_3}{s_4} = 0$
holds at all time points.
\end{itemize}
\Header{Initial assignment} which sets the value of $s_2$ to $p_3$ at time zero.
\end{example}
The model in Example~\ref{ex:methods:example} comprises 4 species ($s_1, \ldots, s_4$) all belonging to a single compartment (whose size is constantly increasing as dictated by assignment rule $r_2$). The time-evolution of the available amount of $s_1$, $s_2$, and $s_3$ is governed by reaction $R$, while that of $s_4$ is determined by algebraic rule $r_3$ (hence, $s_4$ is always equal to $\frac{s_1 s_3}{s_2}$).
The kinetic rate expression $k_R$ of reaction $R$ dictates how quickly the reaction takes place.
In our example, reaction $R$ defines the following time-derivatives for the available amounts of the involved species, according to Eq.~\eqref{eq:methods:der_species}:
$\derive{s_1} = -k_R$, $\derive{s_2} = -2 k_R$, $\derive{s_3} = k_R$.
However, $k_R$ is not constant in time and, moreover, depends on parameter $p_2$, whose value is affected by the two events $e_1$ and $e_2$. This makes our model non-trivial.
Figure~\ref{fig:methods:example} shows the time evolution of the available amount of each species of our model, when starting from the initial state, where the available amount of each species is $10^{-3}$, $p_1 = 10^{-6}$, $p_2 = 1$, $p_3 = 10^{-3}$, $p_4 = 300$.
With $p_2 = 1$, $k_R$ is positive, hence $R$ defines a reaction where $s_1$ and $s_2$ are consumed in favour of the production of $s_3$.
When, at time $1233.10~[\text{sec}]$, $s_1 s_2$ becomes $\leq 10^{-7}$, both events $e_1$ and $e_2$ are triggered. Given that $e_2$ has a \emph{delay} of $p_4 = 300~[\text{sec}]$, $e_1$ is processed immediately, while $e_2$ is processed only after $300$ more seconds.
This implies that $p_2$ is immediately set to 0 (as dictated by $e_1$). The kinetic rate expression $k_R$ of reaction $R$ is thus set to $0$ and the system stabilises.
When, at time $1533.10~[\text{sec}]$, also $e_2$ is processed, parameter $p_2$ is set to $-1$ and $p_4$ to $0$.
The new value for $p_2$ makes $k_R$ negative, hence reaction $R$ turns into modelling the consumption of $s_3$ in favour of the production of $s_1$ and $s_2$.
This behaviour continues until time $3078.48~[\text{sec}]$, when $s_3 s_4$ becomes $\leq 10^{-7}$, thus triggering again both $e_1$ and $e_2$. This time, however, $e_2$ has a delay of $p_4 = 0~[\text{sec}]$, hence $e_1$ and $e_2$ are now triggered and processed \emph{simultaneously}. Since the priority of $e_2$ (\textit{i.e., } $s_2$) is higher than the priority of $e_1$ (\textit{i.e., } $s_4$), the SBML specification stipulates that the two events are processed in the order $e_2, e_1$. This implies that $p_2$ is first set to $-1$ and then (at the \emph{same} time point) to $0$.
With $p_2$ being set to $0$, also $k_R$ becomes $0$ and the system stabilises again.
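The first phase of this behaviour can be reproduced with a few lines of forward-Euler integration (a rough, purely illustrative Python sketch: it ignores the growing compartment size and the algebraic rule, so the detected trigger time only approximates the $1233.10~[\text{sec}]$ reported above):

```python
# Rough forward-Euler sketch of the first phase of the running example.
# Illustrative only: the compartment-size rule and the algebraic rule
# are ignored, so the trigger time is only approximate.

def simulate_first_phase(dt=0.01, t_max=2000.0):
    s1 = s2 = s3 = 1e-3          # initial amounts [mol]
    p2 = 1.0                     # [mol^-1 sec^-1]
    t = 0.0
    while t < t_max:
        if s1 * s2 <= 1e-7:      # trigger condition of events e1 and e2
            return t, s1, s2, s3
        k = p2 * s1 * s2         # kinetic rate k_R
        s1 += dt * (-k)          # Stoich(s1, R) = -1
        s2 += dt * (-2 * k)      # Stoich(s2, R) = -2
        s3 += dt * (+k)          # Stoich(s3, R) = +1
        t += dt
    return t, s1, s2, s3

t_e, s1, s2, s3 = simulate_first_phase()
# t_e falls near the first trigger time of the full model (~1.23e3 sec);
# reaction R exactly conserves both s2 - 2*s1 and s1 + s3.
print(t_e, s2 - 2 * s1, s1 + s3)
```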
\begin{figure}
\begin{center}
\includegraphics[width=0.9\linewidth]{running-example.pdf}
\caption{Time evolution of the SBML model of Example~\ref{ex:methods:example}.
}
\label{fig:methods:example}
\end{center}
\end{figure}
\subsection{High-level structure of a Modelica model}
\label{sec:methods:modelica}
Modelica is an object-oriented language for the definition of systems of differential-algebraic equations. Below, we briefly recall the general structure of a Modelica model.
The reader interested in a more in-depth description is referred to the official Modelica web-site (\url{http://www.modelica.org}) for the full language specification.
A Modelica model is a network of \emph{objects}.
Each object defines a set of \emph{variables}, initial assignments as well as differential and algebraic \emph{equations} for them, \emph{events} and \emph{algorithmic} sections.
At any time point, the state of a model is the value of the variables belonging to all its objects.
Variables belonging to an object can be referenced from other objects via proper \emph{connections}.
\subsection{Modelica code generation}
\label{sec:methods:encoding}
Unlike the BioChem plug-in of Wolfram SystemModeler (the only other Modelica-based SBML simulator available), SBML2\-Mod\-el\-ica\xspace does not rely on library abstractions, but generates stand-alone yet well-structured and human-intelligible Modelica code.
In particular, the Modelica model generated by SBML2\-Mod\-el\-ica\xspace from the input SBML model is a network of objects belonging to five different classes (whose code is stored in separate files), following the structure shown in Figure~\ref{fig:methods:encoding:structure}.
This structure ensures full portability and extensibility of the generated Modelica code (no plug-ins are required), and enables easy modifications at the level of each basic component (as no library classes are involved).
\begin{figure}
\begin{center}
\includegraphics[width=0.9\linewidth]{ModelicaModelStructure.pdf}
\caption{
UML class diagram of the Modelica models generated by SBML2\-Mod\-el\-ica\xspace.
}
\label{fig:methods:encoding:structure}
\end{center}
\end{figure}
\subsubsection{The Model object}
\label{sec:methods:encoding:model}
The Model object acts as an orchestrator, by holding links to the other model objects and defining, via proper connections, the inter-object visibility of model variables.
Furthermore, the Model object defines the algebraic equations encoding the algebraic rules occurring in the input SBML model, as they may constrain variables belonging to different Modelica objects.
For instance, the Model object generated from Example~\ref{ex:methods:example} would define an algebraic equation encoding algebraic rule $r_3$.
Finally, the Model object hosts a set of auxiliary functions (also generated by SBML2\-Mod\-el\-ica\xspace) needed to handle conflicting assignments from simultaneous events (see Section~\ref{sec:methods:encoding:event}).
\subsubsection{The Compartment objects}
\label{sec:methods:encoding:compartment}
SBML2\-Mod\-el\-ica\xspace defines a Modelica \emph{Comp\-art\-ment} object for each SBML model compartment. Such objects are then linked with the Model object.
The object variables define the compartment size and the amounts and concentrations of all the species belonging to the compartment.
For example, the object associated to compartment $c$ of Example~\ref{ex:methods:example} would define variables
\ModelicaCompartmentSize{c},
\ModelicaSpeciesAmount{s_1},
\ModelicaSpeciesConc{s_1},
\ldots,
\ModelicaSpeciesAmount{s_4},
\ModelicaSpeciesConc{s_4}.
Initial assignments to the object variables (if defined within the input SBML model, as happens in Example~\ref{ex:methods:example}) and differential/algebraic equations for the species belonging to the compartment, as well as for the compartment size, are encoded using information from SBML initial assignments, reactions, and rules.
For instance, the Modelica code generated from Example~\ref{ex:methods:example} would define
the time-derivative of the amount of each species involved in reaction $R$ (\textit{i.e., } of variables \ModelicaSpeciesAmount{s_1}, \ModelicaSpeciesAmount{s_2}, and \ModelicaSpeciesAmount{s_3})
as stipulated by Eq.~\eqref{eq:methods:der_species}, referencing variables (which store data on the reaction) belonging to the Reactions object (see below).
Hence, we would have differential equations
$\derive{\ModelicaSpeciesAmount{s_i}} =
\Stoich{s_i, R} \times k_R
$, for $i \in [1,3]$.
Also, algebraic equation
$\ModelicaCompartmentSize{c} = 1 + p_1 \times t$
(where $t$ refers to the current time instant) would be generated to encode assignment rule $r_2$.
The time-derivative of the amount of any species involved in a rate rule (there are none in Example~\ref{ex:methods:example}) would instead be encoded using its associated rule.
Conversely, equations for species whose amount is defined by means of an SBML algebraic rule (like $s_4$ in Example~\ref{ex:methods:example}) are defined within the Model object, as they represent constraints whose scope may span several Modelica objects.
Finally, variables representing the concentrations of all species are defined from their amounts.
So, for compartment $c$ of Example~\ref{ex:methods:example}, we would have
variable assignments
$
\ModelicaSpeciesConc{s_i} \gets
\frac
{\ModelicaSpeciesAmount{s_i}}
{\ModelicaCompartmentSize{c}}
$
($i \in [1,4]$).
Suitable assertions are injected into the Modelica code to ensure that variables referring to compartment sizes are always strictly positive (as dictated by the SBML semantics), hence guaranteeing that species concentration variables are always defined.
\subsubsection{The Reactions object}
\label{sec:methods:encoding:reactions}
SBML2\-Mod\-el\-ica\xspace defines a single \emph{Reactions} Modelica object storing data for all reactions defined in the input SBML model.
Such an object is then linked with the Model object.
Object variables hold the kinetic rate expression for each reaction $R$ in the model, as well as the coefficients $\Stoich{s,R}$ for each species $s$ occurring in reaction $R$ (see Eq.~\eqref{eq:methods:der_species}).
Thus, as for the single reaction $R$ of Example~\ref{ex:methods:example}, we would have variable
$k_R$ defining the kinetic rate expression of $R$ via the algebraic equation $k_R = p_2 \times \ModelicaSpeciesAmount{s_1} \times \ModelicaSpeciesAmount{s_2}$, plus variables $\Stoich{s_1,R}$, $\Stoich{s_2,R}$, and $\Stoich{s_3,R}$ set to constant values $-1$, $-2$, and $+1$ respectively.
\subsubsection{The Event objects}
\label{sec:methods:encoding:event}
An \emph{Event} Modelica object is defined for each event $e$ defined in the input SBML model, in order to represent the event trigger condition, the event priority and delay (if any). All such objects are linked with the Model object.
In order to properly handle simultaneous events with conflicting assignments, the management of event assignments is split into three parts.
\emph{First}, an auxiliary variable is defined in the Event object encoding $e$ for each SBML model variable $v$ assigned by $e$.
When $e$ occurs (and after the event delay, if any), each such auxiliary variable is set to the new value to be assigned to its associated model variable $v$, as stipulated by $e$.
\emph{Second}, conflicting assignments stemming from simultaneous events are resolved (within the Model object, Section~\ref{sec:methods:encoding:model}) by means of auxiliary functions, which also take into account the priorities of the competing events.
\emph{Third}, the objects owning the variables to be assigned after the occurring event(s) are informed of the final required changes and take care of actually performing the assignments.
Depending on whether each event is persistent or non-persistent, SBML2\-Mod\-el\-ica\xspace generates different code. Specialised and more efficient code is also generated for the common case of events with no delay.
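The priority-based resolution of conflicting assignments (cf.\ the simultaneous processing of $e_1$ and $e_2$ in Example~\ref{ex:methods:example}) can be sketched in Python as follows (illustrative only; the actual auxiliary functions generated by SBML2\-Mod\-el\-ica\xspace are Modelica code, and the priority values below are made up for the example):

```python
# Illustrative sketch of priority-based conflict resolution among
# simultaneous events. Per the SBML semantics, simultaneous events are
# executed in order of decreasing priority, so assignments executed
# later (by lower-priority events) overwrite earlier ones.

def resolve_simultaneous(events, state):
    """events: list of (priority, assignments) for events occurring at
    the same time point; assignments map variable name -> new value."""
    new_state = dict(state)
    for priority, assignments in sorted(events, key=lambda e: -e[0]):
        new_state.update(assignments)   # applied in execution order
    return new_state

# At time 3078.48 of the running example: e1 (priority s4) sets p2 = 0,
# while e2 (priority s2 > s4) sets p2 = -1 and p4 = 0. Execution order
# is e2, e1, hence p2 ends up 0 (e1's value) and p4 ends up 0.
s4_pri, s2_pri = 1.0, 2.0               # illustrative values, s2 > s4
e1 = (s4_pri, {"p2": 0.0})
e2 = (s2_pri, {"p2": -1.0, "p4": 0.0})
print(resolve_simultaneous([e1, e2], {"p2": -1.0, "p4": 0.0}))
```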
\subsubsection{The Parameters object}
\label{sec:methods:encoding:parameters}
A single \emph{Parameters} Modelica object is defined to encode all the parameters of the input SBML model, together with their associated initial assignments and the differential/algebraic equations stemming from SBML rate or assignment rules.
Hence, the Parameters object for Example~\ref{ex:methods:example} (which is linked with the Model object) would define and properly initialise variables $p_1$, $p_2$, $p_3$, $p_4$, and encode differential equations $\derive{p_2} = 0$ (stemming from rate rule $r_1$) and $\derive{p_4} = 0$ (stemming from the fact that the value of $p_4$ changes only upon events).
\end{methods}
\section{Results}
\label{sec:results}
Section~\ref{sec:results:correctness} below shows the results of our experiments aimed at assessing the correctness of SBML2\-Mod\-el\-ica\xspace against the SBML Test Suite Core, while Section~\ref{sec:results:performance} compares the performance of general-purpose Modelica and FMU simulators when running the Modelica code generated by SBML2\-Mod\-el\-ica\xspace against the other SBML simulators for which an SBML Test Suite Core report is publicly available.
\subsection{Compliance to the SBML Test Suite Core}
\label{sec:results:correctness}
In order to assess the correctness of SBML2\-Mod\-el\-ica\xspace, we ran it against the test cases provided by SBML Test Suite Core v3.3.0, available in the SBML web-site.
As SBML2\-Mod\-el\-ica\xspace aims at supporting the \emph{latest} SBML standard (\SBMLLV{3}{2}), we ignored the test cases involving \emph{deprecated} and \emph{discouraged} constructs such as \emph{fast reactions}.
Hence, we ran SBML2\-Mod\-el\-ica\xspace against the remaining 1588\xspace out of the overall 1623\xspace test cases and simulated the generated Modelica code with the two major open-source Modelica implementations, namely OpenModelica (we used v1.32.2\xspace) and JModelica (we used v2.4\xspace), with the latter converting the input Modelica code into an FMU and then simulating such an FMU.
SBML2\-Mod\-el\-ica\xspace achieves very high marks: the output (always identical between OpenModelica and JModelica) is exactly as expected on 1532\xspace out of 1588\xspace test cases (96.47\%\xspace).
Figure~\ref{fig:test-suite} compares the test cases of the SBML Test Suite Core v3.3.0 successfully simulated by OpenModelica/JModelica (on input provided by SBML2\-Mod\-el\-ica\xspace) to the results \emph{declared} by the other six systems for which a certified public report is available in the SBML web-site.
Namely, the figure shows, for each system, a series of 1588\xspace thin vertical bars, one per test case (sorted by their identifier in ascending order from left to right).
Each vertical bar is coloured in green (respectively, grey) if the simulator output is (respectively, is not) exactly as expected by the SBML Test Suite Core upon numerical simulation of that test case.
\begin{figure}
\begin{center}
\resizebox{0.9\linewidth}{!}{
\begin{minipage}{0.4\linewidth}
\renewcommand{\baselinestretch}{1.4}
\setlength{\parindent}{0pt}
\small
\textbf{SBML2\-Mod\-el\-ica\xspace (96.47\%\xspace)}
\par
BioUML v2018.2 (100\%)
\par
COPASI v4.23.189 (74.26\%)
\par
Simul.\ Core Lib.\ v1.5 (70.04\%)
\par
iBioSIM v3.0.0 (70.04\%)
\par
libRoadRunner v1.4.18 (63.62\%)
\par
libSBMLSim v1.4.0 (61.30\%)
\end{minipage}
%
\begin{minipage}{0.5\linewidth}
\includegraphics[width=\linewidth]{test-suite.pdf}
\end{minipage}
}%
\end{center}
\caption{Compliance of SBML2\-Mod\-el\-ica\xspace-generated code (simulated by OpenModelica and JModelica)
to the SBML Test Suite Core v3.3.0, compared to other SBML simulators with certified public reports (test cases with deprecated SBML constructs are ignored).
}
\label{fig:test-suite}
\end{figure}
The figure shows that SBML2\-Mod\-el\-ica\xspace{+}OpenModelica/JModelica rank among the top-compliant SBML simulators, being second only to BioUML.
Note that libSBMLSim, iBioSim, Simulation Core Library and libRoadRunner fail on a large number of test cases located in the right part of their plots. This is because the test cases exercising the constructs introduced in the latest SBML standard (which they do not support) have the highest identifiers.
As for the 56\xspace test cases for which the output computed by SBML2\-Mod\-el\-ica\xspace{+}OpenModelica/JModelica differs from the output expected by the test suite,
in 8\xspace cases the difference is \emph{semantically meaningless}. In particular, the time series for the model variables computed by our Modelica simulators contain one line more than the SBML Test Suite Core expected output, reporting the value of the model variables immediately before each event (even if such events do not occur at time points that are multiples of the requested sampling time).
This is an intended behaviour of OpenModelica and JModelica, aimed at better showing the discontinuities in the values of the model variables that arise when events occur.
By ignoring such additional lines, our output is exactly as expected.
The other 48\xspace test cases where the output of SBML2\-Mod\-el\-ica\xspace{+}OpenModelica/JModelica differs from the output expected by the test suite are due to combinations of SBML constructs \emph{unsupported} by SBML2\-Mod\-el\-ica\xspace.
Such combinations are discussed in Section~\ref{sec:discussion}.
However, we anticipate that they are very rare in practice, semantically intricate, and easily avoidable.
Although some more involved/cryptic Modelica code could in principle be generated to handle them, we chose to keep our output Modelica models as structured and human-intelligible as possible, in order to ease their extension and integration within larger model networks.
In any case, there is no risk of accidentally generating flawed Modelica code, since any problematic combination of constructs is \emph{statically detected} by SBML2\-Mod\-el\-ica\xspace, which warns the user during model translation about possible issues.
The user can then act directly on the generated Modelica code to fix any raised issue.
Also, to further assist the user, suitable \emph{assertions} are injected in the generated Modelica code that would raise proper exceptions during simulation in case the Modelica model (generated \emph{with} warnings) actually behaves in a way not fully compliant to the SBML semantics.
\subsection{Model simulation performance}
\label{sec:results:performance}
In this section we aim at assessing to what extent translating SBML models into an open-standard general-purpose modelling language such as Modelica and into an open-standard general-purpose simulation ecosystem such as FMI/FMU introduces an overhead in simulation performance, when compared to simulation algorithms specialised to biochemical models.
To this end, we consider the major open-source (\emph{general-purpose}) Modelica and FMU simulators (OpenModelica and JModelica, respectively). Note that, while OpenModelica simulates the input model directly, JModelica works by translating the input Modelica model into an FMU and then by actually simulating such an FMU using the FMI API.
The performance of OpenModelica and JModelica/FMI in simulating SBML models translated by SBML2\-Mod\-el\-ica\xspace is compared against that of the \emph{specialised} SBML simulators reported in Section~\ref{sec:results:correctness} plus SystemModeler+BioChem, the only other available Modelica-based simulator for SBML models.
Although our results are by \emph{no means} to be intended as the outcome of a competition among SBML simulators, they clearly show that the advantages of using SBML2\-Mod\-el\-ica\xspace{+}OpenModelica and SBML2\-Mod\-el\-ica\xspace{+}JModelica/FMI \emph{do not generally come at any significant performance overhead}.
\subsubsection{Benchmarks}
\label{sec:results:performance:benchmarks}
We used the BioModels Database \cite{lenovere-etal:2006:biomodels}, a well-known repository of mathematical models of biological and biomedical systems taken from the scientific literature.
Models manually reviewed to guarantee reproducibility of results belong to the set of \emph{manually curated} models. This set of models is widely used as a benchmark for SBML interpreters and simulators.
We selected the subset of manually curated models of the BioModels Database (as of December 2018) that are \emph{accepted} by SBML2\-Mod\-el\-ica\xspace (\textit{i.e., } do not contain deprecated constructs or the unsupported combinations of constructs described in Section~\ref{sec:discussion}).
As a result, our benchmark set consists of 613\xspace models (out of the 641\xspace manually curated models of the BioModels Database), which have from 6\xspace to 5438\xspace variables.
\subsubsection{Experimental setting}
\label{sec:results:performance:setting}
The computational complexity of a model simulation is affected by many factors, which go well beyond the mere number of variables. For example: the number and structure of the differential as well as algebraic equations; the number and frequency of occurrence of events and the complexity of their trigger conditions; the algorithm and parameters used by the simulation engine.
Setting up a methodologically-sound competition among OpenModelica, JModelica/FMI, and specialised SBML simulators is a complex task, which
is clearly out of the scope of this paper.
So is the choice of the optimal simulation algorithm and configuration for a given model (in particular both OpenModelica and JModelica/FMI offer a wide portfolio of highly-configurable integrators to choose from).
Given our goals (see Section~\ref{sec:results:performance}), in our analysis we proceeded as follows.
\emph{First}, for each system we measured the \emph{core simulation time}, i.e., the time of simulating the given model from its (system-specific) \emph{internal representation}.
This is consistent with the most demanding use-cases, such as parameter identification or estimation procedures, where simulator initialisation \emph{and} model preparation are performed only once, while a large number of simulations (with different parameter assignments) takes place, among which such initialisation costs are amortised.
As for OpenModelica and JModelica/FMI, this means that we ignored SBML2\-Mod\-el\-ica\xspace translation time, which, anyway, always takes \emph{less than 6\% of the simulation time}.
For the same reason, as for JModelica/FMI we also ignored the time to generate the FMU (which is also \emph{negligible}).
\emph{Second}, we used all systems with their \emph{default} integrators and settings.
\emph{Third}, we fixed the simulation horizon and the maximum time step to, respectively, 100\xspace and 0.01\xspace (model) time units.
This last choice allowed us to extrapolate a clear performance trend of each system on the basis of the number of model variables.
All simulations were performed on a commodity computer (AMD A12-9720P CPU, 12 GB RAM, SSD, standard Linux environment), with a time-out of 360 seconds\xspace.
\subsubsection{Experimental results}
\label{sec:results:performance:results}
\begin{figure*}
\begin{center}
\includegraphics[width=\linewidth]{performance.pdf}
\caption{%
Performance trend (log-log plot) of general-purpose OpenModelica (v1.32.2\xspace) and JModelica/FMI (v2.4\xspace) simulators on our benchmark models (from BioModels) translated with SBML2\-Mod\-el\-ica\xspace, compared with that of specialised SBML simulators.
(Due to the need of manual GUI interaction, results of Wolfram SystemModeler+BioChem and iBioSim are reported only for models with at least 250 variables.)
}
\label{fig:performance}
\end{center}
\end{figure*}
The scatter (log-log) plot in Figure~\ref{fig:performance} shows a dot for each model in our benchmark set and each system (if that system terminated within our time-out).
For each dot in position $(x,y)$, $x$ denotes the number of variables of the associated SBML model, while $y$ denotes the time (in seconds) required by the system associated to the dot colour to terminate.
SBML model import in Wolfram SystemModeler+BioChem and iBioSim requires manual user interaction via GUI and could not be automated. We overcame this issue by manually launching such systems on all models with \emph{at least} 250\xspace variables.
Figure~\ref{fig:performance} shows that OpenModelica and JMod\-el\-ica/FMI are \emph{competitive} on most of the benchmark set, although, on the two largest models, there is a visible performance gap with respect to some of the other systems, with OpenModelica and JModelica going in time-out for, respectively, one and both of them.
Also, OpenModelica appears generally faster than JModelica/FMI, even if it seems to slow down a bit when simulating models with frequently-occurring events (as shown by the bimodal behaviour on models having a similar number of variables). However, also in these cases, its overall performance remains aligned with that of several specialised SBML simulators. Conversely, the performance of JModelica/FMI is more stable, regardless of the event occurrence frequency.
Besides the above peculiarities, the \emph{overall trend} emerging from Figure~\ref{fig:performance} is that the benefits enabled by a translation into a \emph{general-purpose} open-standard modelling language for dynamical systems such as Modelica and, as for JModelica/FMI, into a general-purpose open-standard simulation ecosystem such as FMI/FMU, generally come at \emph{no significant overhead}, when compared against performance of \emph{specialised} SBML simulators.
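The straight-line trend visible in such a log-log plot corresponds to a power law $t \approx c \cdot n^k$, whose exponent $k$ is the slope of a least-squares fit of $\log t$ against $\log n$. The sketch below illustrates this way of reading the plot; the $(n, t)$ pairs are synthetic placeholders, not our measured benchmark results.

```python
import math

# Synthetic (variables, seconds) pairs, NOT the measured benchmark data.
data = [(10, 0.02), (100, 0.21), (1000, 2.0), (5000, 9.8)]

# Ordinary least squares on the logarithms: slope = exponent k of t = c * n**k.
xs = [math.log(n) for n, _ in data]
ys = [math.log(t) for _, t in data]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
k = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
    / sum((x - mx) ** 2 for x in xs)
```

For the synthetic data above the fitted exponent is close to 1, i.e., roughly linear scaling of simulation time with model size.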
\section{Discussion}
\label{sec:discussion}
Section~\ref{sec:results} shows that, by translating SBML models into Modelica (and in turn, as for JModelica/FMI, into FMUs), SBML2\-Mod\-el\-ica\xspace effectively enables \emph{seamless integration} of SBML models into larger heterogeneous model networks without (in most cases) sacrificing simulation performance.
Below we analyse the 48\xspace test cases in the SBML Test Suite Core containing those combinations of SBML constructs not supported by SBML2\-Mod\-el\-ica\xspace (see Section~\ref{sec:results:correctness}).
Note that such construct combinations are \emph{rare} in real-world models, \emph{semantically intricate}, and can be \emph{easily circumvented}.
SBML2\-Mod\-el\-ica\xspace always \emph{detects} such cases, and \emph{warns} the user accordingly, hence there is \emph{no risk} of running flawed Modelica code.
\subsection{Events recomputation}
\label{sec:discussion:unsupported:recomputation}
If, at a certain time instant $t$, \emph{multiple} events are \emph{simultaneously} triggered, such events are ordered by their priorities and the event $e$ with the \emph{highest} priority is executed first (Example~\ref{ex:methods:example} shows one such case).
However, the execution of $e$ might change the value of variables that occur in the priority expression or in the trigger condition of some of the other events waiting to be executed.
In such situations (which can easily be circumvented by modelling the underlying mechanisms of such interfering events more explicitly), SBML2\-Mod\-el\-ica\xspace cannot generate correct Modelica code.
To avoid generating a flawed translation, SBML2\-Mod\-el\-ica\xspace statically detects whether such interferences might occur between events and warns the user.
\subsection{Nested triggers}
\label{sec:discussion:unsupported:nested}
Assume that a persistent event $e$ is triggered for the first time at time $t_1$ and that it is requested to be delayed by duration $d_1$.
If, for some reason, event $e$ is triggered again at time $t_2$ such that $t_1 < t_2 < t_1 + d_1$ and is requested to be delayed by duration $d_2$, the trigger must be recorded and $e$ must execute at time $t_2 + d_2$.
As we cannot statically detect how many times the trigger of a persistent event can be nested, SBML2\-Mod\-el\-ica\xspace raises a warning during model translation, informing the user that a second trigger for a persistent model event might potentially occur when a previous trigger for the same event is on hold because of a delay.
Also, SBML2\-Mod\-el\-ica\xspace injects an assertion into the generated Modelica code which stops the simulation in case this situation \emph{actually} occurs at run-time (making the model behaviour not compliant with the SBML semantics).
Again, such situations can be easily circumvented by modelling the underlying causes of event delays in more explicit ways.
\subsection{Negative time}
\label{sec:discussion:unsupported:negative-time}
SBML allows the definition of quantities also at \emph{negative} time-points. This may cause issues.
For example, the value of a model variable $x$ at time 0 could be re-assigned by an event trigger (processed at time 0) using an expression such as $\left(\pre{x}\right)(0)$. In this case, the SBML semantics stipulates that $\left(\pre{x}\right)(0)$ must be determined \emph{somewhere else} in the model (otherwise, a semantic error occurs).
Another example of negative time is the occurrence of $f(t - d)$, where $f$ is a function and $d > 0$ is the duration of a \emph{delay}.
According to the SBML specifications, the value of $f(t - d)$ is defined also when $t < d$, by assuming that $f$ is defined also for negative time-points.
Conversely, the Modelica language specification forbids negative time-points, as time-point 0 is assumed to be the \emph{initial time-point} for simulation.
Although, in principle, additional Modelica code could be generated by SBML2\-Mod\-el\-ica\xspace in order to properly handle such cases (\textit{e.g., } by computing a suitable time-offset for all the model variables, and by artificially shifting in time the evolution of the entire model), we decided not to support this possibility, in order to keep the generated Modelica code well-structured and fully readable.
Hence, when the above situations are detected, SBML2\-Mod\-el\-ica\xspace issues a warning. The user interested in supporting such cases, can take full responsibility by acting directly on the generated Modelica code.
\subsection{Unsupported math}
\label{sec:discussion:unsupported:math}
SBML allows users to explicitly assign value \texttt{NaN} to variables. When such values are found in the input SBML model, SBML2\-Mod\-el\-ica\xspace notifies the user, since numerical simulation of the model is clearly not possible.
\section{Conclusions}
\label{sec:conclusions}
In this paper we presented SBML2\-Mod\-el\-ica\xspace, an \SBMLLV{3}{2}--compliant software system that translates SBML models into well-structured, user-intelligible, easily modifiable \emph{Modelica} code, an open-standard general-purpose modelling language for which several efficient simulators (both commercial and open-source) are available.
Modelica models can also be exported into black-box language-independent FMUs, an open standard supported by more than 100 simulators from virtually all application domains.
All this paves the way to the seamless integration (without lack of simulation performance) of SBML models within open-standard ecosystems, where biochemical models can be used as components of large heterogeneous model networks (together with models of, \textit{e.g., } human physiology, clinical protocol guidelines, treatment schemes, biomedical devices), and where standard system engineering approaches can be employed to perform their simulation-based analysis at system level.
\paragraph*{Acknowledgements.}
This work was partially supported by:
Italian Ministry of University and Research under grant ``Dipartimenti di Eccellenza 2018--2022'' of the Department of Computer Science of Sapienza University of Rome;
EC FP7 project PAEON (Model Driven Computation of Treatments for Infertility Related Endocrinological Diseases, 600773);
INdAM ``GNCS Project 2019'';
Sapienza University 2018 project RG11816436BD4F21 ``Computing Complete Cohorts of Virtual Phenotypes for In Silico Clinical Trials and Model-Based Precision Medicine''.
Authors are very grateful to the anonymous reviewers for their valuable comments.
\section{\label{intro} Introduction}
Shear layers with spatially variable fluid physical properties occur in a variety of industrial and natural systems. The variations in density and/or viscosity may occur due to temperature gradients, as in the case of a plasma torch \citep{Duan2002}, static mixers \citep{Cao2003}, reacting flows \citep{Pathikonda2021}, or due to species concentration gradients, such as salinity gradients set up when an estuary enters an ocean. While most situations will feature both density and viscosity effects, the fluid dynamics of variable density jets have been extensively studied, both theoretically \citep{Huerre1985, Yu1990, Lesshafft2006} and experimentally \citep{Kyle1993, Yu1993, Hallberg2006}. In particular, low-density jets have been shown to be members of a class of globally unstable flows \citep{Huerre1990}, which are characterized by the sudden onset of a regime with enhanced mixing, self-sustained oscillations and insensitivity to external forcing. The frequency of the global modes in the near-field of low density jets has been linked to the existence of local profiles over a finite streamwise extent that are absolutely unstable in the framework of local spatio-temporal linear stability analysis \citep{Chomaz2005, Pier2001}. While the primary mechanism of breakdown of the flow is inviscid, arising from the baroclinic torque established by gradients in density and pressure, there are some indications \citep{Hallberg2006} that viscosity does modify the onset of global modes, as well as their frequency. \citet{Hallberg2006} conducted experiments with multiple nozzle geometries, thereby independently studying the effects of shear layer thickness, density ratio and jet Reynolds number, and found a weak but perceptible effect of jet Reynolds number on the global mode frequency.
The linear stability calculations of \citet{Lesshafft2006} and \citet{Srinivasan2010} also suggest that the frequency and transition boundary between convective and absolute instability are affected by the viscosity in the form of the Reynolds number. It is therefore natural to inquire into the effects of variations in viscosity between the jet and ambient, which is the focus of this study.
Strong gradients in viscosity are unlikely to be established in gas flows, and we look to other situations where the role of viscosity gradients has been more extensively investigated. In fact, in contrast to free shear flow, an extensive body of literature on variable viscosity flows addresses pressure-driven internal flows of high Schmidt number fluids \citep{Govindarajan2014b}. While viscosity is instinctively assumed to have a stabilizing influence on the growth of disturbances, it is responsible for altering the base state of a flow, often creating sharp velocity gradients through the no-slip condition and therefore serving as a source of disturbance kinetic energy. It has long been known \citep{Yih1967} that a jump in viscosity across a sharp interface can lead to long-wave instabilities at any Reynolds number or short-wave instabilities at low Re \citep{Hooper1983}. Here we focus on flows of miscible fluids; the immiscible situation is covered in reviews by \citet{Joseph1997} and more recently, \citet{Govindarajan2014b}. Mention should also be made of the extensive work done in the context of liquid atomization, on planar shear layers with gas-liquid streams \citep{ Matas2011, Otto2013, Fuster2013, Ling2019, Bozonnet2022}. Together, these studies have shown that viscous stability calculations are required to match theory with experiments on gas-liquid shear layers; that absolute instability of co-flowing gas/liquid streams is supported when velocity defects immediately downstream of a splitter plate are considered, and match experimentally observed frequencies; and that confinement and the finite thickness of the gas stream play an important role in destabilization. However, viscosity ratios of the two streams, when considered, were always extremely small, and the effects of this ratio were rarely isolated.
\citet{Ern2003} showed the destabilizing effects of a finite thickness interface marked by gradients in velocity and viscosity, and demonstrated that for certain parameter ranges, the instability could be stronger than that of the corresponding sharp-interface configuration. Sharp gradients in the velocity profile, combined with variations in viscosity, lead to additional source terms in the equation for disturbance kinetic energy, which drive the growth of instabilities near the diffuse interface. \citet{Ranganathan2001} performed a temporal stability analysis of the effects of diffusion in channel flow of two fluids in a three-layer configuration, and found that when the critical layer (the region where the wave speed matches the mean velocity profile) overlapped with the viscosity-stratified layer, the flow was strongly stabilized or destabilized depending on whether the more viscous fluid was adjacent to the wall or in the interior. For the pipe geometry, Selvam et al. \citep{Selvam2007,Selvam2009} performed a linear stability analysis that predicted the onset of absolute instability for low Reynolds numbers when the viscosity contrast is sufficiently high, and the diffuse interface is located in a certain range of radial locations with respect to the pipe radius. They find that when the core fluid is more viscous, the flow can be at best unstable over a certain Reynolds number range, with the axisymmetric mode being dominant. When the less viscous liquid is in the core, helical modes are favored, and can lead to absolute instability. These calculations were supported by Direct Numerical Simulation and a global linear stability analysis. They partially reproduced the experimental observations of \citet{DOlce2008}, who reported pearl- and mushroom-shaped instabilities in the Reynolds number range of 2-60, with no sharp transitions in either wavelength or frequency that would provide strong evidence of a global mode.
The helical modes observed by \citet{Cao2003} for injection of a low-viscosity fluid into a static mixer are in line with the theoretical results discussed above.
Virtually no experimental data are available for miscible free shear layer flows with strong viscosity gradients, in either planar or cylindrical geometries. A notable situation that features shear layers with significant viscosity variation, accompanied by density variation is that of the buoyant jet \citep{Subbarao1992, Bharadwaj2017}. This has been confirmed to be an absolute instability \citep{Chakravarthy2018} when realistic viscosity profiles are included and either the density ratio between the ambient and the jet is greater than 2, or when the Richardson number is greater than 1; however viscosity ratios are not large.
It is difficult to translate insights from confined flows to unconfined shear layers such as jet flows, due to the fundamental differences in velocity profiles, characterized by inflection points in the case of jets. Any insights from pressure-driven flow studies have to be interpreted with caution, since confinement is known to play both stabilizing and destabilizing roles in other situations involving absolute instability of single phase \citep{Juniper2006a, Healey2009, Yang2021} or two-phase flows \citep{Bozonnet2022}. Further, the seminal works of Rayleigh [] and Tomotika \cite{Tomotika1935} considered capillary flows of liquid filaments in another viscous medium in the limit of negligible Reynolds number and are not relevant to the present work, which is focused on large Reynolds numbers. However, it is interesting to note that the helical mode is found to arise from an inviscid mechanism based on the Rayleigh criterion for the base profiles used. This would be expected to further favor the establishment of such modes in free shear flows. \citet{Sahu2014} considered a planar shear layer configuration, and the emergence of an overlap mode when the gradients in velocity and viscosity occur in the same region. Destabilization was enhanced when these layers overlapped, and with decreasing thickness of either of the gradient regions. In line with inviscid theory, the configuration was found to be absolutely unstable when countercurrent velocity profiles were used for the base state. More recently, \citet{Yang2022} carried out a linear stability analysis of base profiles corresponding to the near-field of a jet emerging into an ambient with a different viscosity.
Their base profiles reflected modifications to the standard tanh profile typically used in the analysis of jet flows \citep{Mattingly1974}, such as an inward radial shift due to the decelerating effects of a more viscous ambient, and concentration gradient regions that were much thinner than the momentum thickness. As is typical of jet flows, the axisymmetric and helical modes had nearly equal temporal growth rates over a wide range of conditions specified by the jet Reynolds number, ambient-to-jet viscosity ratio, momentum and concentration layer thicknesses. However, beyond a critical value of viscosity ratio that was Reynolds number-dependent, absolute instability of the flow was supported, with the helical mode being strongly favored over the axisymmetric mode. A more systematic study of the transition boundary between convective and absolute instability as a function of the operating parameters is currently underway.
With the above preliminary results in mind, we carry out a study that seeks to isolate the effects of large viscosity contrast between a jet and its surroundings. The goals of the present study are to characterize the near-field of a low-viscosity jet at moderate Reynolds numbers ($1500 < Re < 3500$) for ambient-to-jet viscosity ratios ranging from 1 to 45, and to examine the flow field for any evidence of global modes. This article is structured as follows. Section 2 describes the experimental facility used to achieve a neutrally buoyant jet with high viscosity contrast. Section 3 describes the flow visualization and the observation of disturbance modes. Section 4 describes identification of the dominant modes using a Proper Orthogonal Decomposition (POD)-based technique applied to the images from visualization. Section 5 provides a summary and conclusions.
\section{Experiments}
The study of the effects of viscosity gradients alone on the development of instabilities in the near-field of a jet requires the elimination of density effects, as well as a quiet facility with a minimum amount of external disturbances. Accordingly, experiments were carried out in a jet facility shown in Fig. \ref{fig:jetfacility} that utilizes gravity to attain the required flow rates. The experiments were performed in the vertical configuration. A large overhead reservoir delivered fluid to a nozzle located in a test section of square cross-section through a flow meter, a diffuser section and a flow straightener composed of laminar flow elements. The nozzle has a fifth-order polynomial profile with zero slope and curvature at its inlet and exit planes, and imposes an area contraction of 87 on the entering flow. The nozzle exit diameter $D$ is 6 mm.
\begin{figure}
\centering
\includegraphics[height=3in]{figures/hotwire_setup.jpg}
\caption{Sketch of the jet facility used to produce a low-viscosity jet using gravity-driven flow}
\label{fig:jetfacility}
\end{figure}
Jet Reynolds numbers are defined based on the nozzle exit diameter $D$ and the average velocity $\bar{U}$ of the flow, as inferred from the volumetric flow rate measured by the flowmeter (Dwyer ****, accuracy of 2\%),
\begin{equation}
Re = \frac{\rho \bar{U}D}{\mu _j}
\end{equation}
where $\mu _j$ is the viscosity of the injected fluid.
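In practice, $\bar{U}$ follows from the volumetric flow rate $Q$ through the nozzle exit area, $\bar{U} = 4Q/(\pi D^2)$. A minimal sketch of the computation follows; the flow-rate value is a hypothetical flowmeter reading, not a reported operating point, and the fluid properties are the nominal salt-water values given below.

```python
import math

def jet_reynolds(Q, D, rho, mu):
    """Re = rho * U_bar * D / mu, with U_bar = Q / (pi * D**2 / 4)."""
    u_bar = Q / (math.pi * D ** 2 / 4.0)
    return rho * u_bar * D / mu

D = 6e-3       # m, nozzle exit diameter
Q = 8.0e-6     # m^3/s, hypothetical flowmeter reading
Re = jet_reynolds(Q, D, rho=1042.0, mu=1.0e-3)   # salt-water properties
```

The quoted 2\% flowmeter accuracy propagates linearly into $Re$, since $Re$ is proportional to $Q$.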
The requirement of having a wide range of viscosity contrast defined in terms of the ambient-to-jet viscosity ratio requires the use of liquids. Salt water of nominal density 1042 $kg/m^3$ is chosen as the test fluid, in order to facilitate density-matching as explained below. Reynolds numbers up to 4000 can be attained using this reservoir/nozzle combination.
The jet exhausts into the test section, which is made of transparent polycarbonate and has a square cross-section with inner dimensions $240\times 240$ mm. Overflow ports near the top of the tank enable maintenance of a constant fluid height in the test section during operation. The top of the tank is open to allow direct mounting of a hot-film anemometry system. The fluid in the tank creates the desired viscosity ratio, which is defined as
\begin{equation}
M = \frac{\mu _\infty}{\mu _j}
\end{equation}
where the subscript $\infty$ refers to test section conditions. For this study, propylene glycol and salt water were used as the two fluids. Propylene glycol in its pure form has a viscosity of 42 $mPa\cdot s$, approximately 45 times that of water, and has a density of 1036 $kg/m^3$, which is only a few percent above the density of water. Industrial-grade propylene glycol used in this work was often found to have even higher viscosity values, and therefore each batch of glycol was measured for its density and dynamic viscosity. A salt water solution was then prepared in order to match the density to within a tenth of a percent ($\frac{\Delta \rho}{\rho} = |\frac{\rho _j -\rho _\infty}{\rho _j}| < 0.0005$). These fluids are Newtonian over the range of strain rates imposed and are fully miscible with each other, eliminating surface tension as a relevant parameter. Nevertheless, as we shall see, the interface has no time to thicken diffusively and essentially remains sharp in the near-field of the jet.
\subsection{Constant Property Jet Profiles}
Hot-film anemometry was used to first characterize the jet facility to establish the base flow for a constant property jet. For a water jet issuing into a water ambient, anemometry was used to characterize the mean velocity profiles and background noise level, as well as the shear layer thickness of the jet at the nozzle exit plane. Figure \ref{fig:meanprofiles}(a) shows velocity profiles emerging from the jet for multiple Reynolds numbers. The profiles are mostly top-hat, characterized by a steep decrease in magnitude in the shear layer towards the quiescent ambient fluid. A two-dimensional trace of voltage (Fig. \ref{fig:meanprofiles}b) at the exit plane (z/D=0.1) confirms the axisymmetric nature of these profiles. The momentum thickness of the shear layer was evaluated as a function of Reynolds number by integrating radially from the centerline to a location where the velocity decreased to 10\% of the centerline value; further radial measurements were avoided as the hot film responds unreliably to the low velocities in the entrained flow. The momentum thickness is evaluated as
\begin{equation}
\theta = \int_{0}^{\infty} \frac{U(r)-U_{\infty}}{U_c-U_{\infty}}\left[1-\frac{U(r)-U_{\infty}}{U_c-U_{\infty}}\right]dr
\end{equation}
The laminar nature of the jet boundary layer at the exit plane is confirmed (Fig.~\ref{fig:meanprofiles}(c)) by the linear relationship between $D/\theta$ and $\sqrt{Re}$. The constants in the fit are unique to the nozzle geometry, reflecting the acceleration imposed by the area contraction and the resultant thinning of the boundary layer entering the nozzle. Profiles at multiple downstream locations within the first half-diameter can be well-represented by an equation of the form used by Mattingly and Chang \cite{Mattingly1974}:
\begin{equation}
\frac{u}{U_c} = 0.5\left(1+\tanh\left(\frac{D}{8\theta}\left(\frac{1}{r}-r\right)\right)\right)
\end{equation}
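As a consistency check, numerically integrating $u(1-u)$ for this tanh profile should recover the momentum thickness used in its argument. A short sketch follows; the value of $\theta$ below is an assumed input (a few percent of $D$, as typical of a laminar nozzle boundary layer), not a measurement, and $r$ is the radius normalised by the jet radius $D/2$.

```python
import math

def momentum_thickness(theta_in, D=6e-3, n=20000):
    """Riemann-sum integral of u*(1-u) over the radius for the tanh profile."""
    R = D / 2.0
    h = 3.0 * R / n              # integrate radially out to y = 3R
    total = 0.0
    for i in range(1, n):        # start at i = 1 to avoid r = 0
        r = i * h / R            # radius normalised by the jet radius
        u = 0.5 * (1.0 + math.tanh(D / (8.0 * theta_in) * (1.0 / r - r)))
        total += u * (1.0 - u) * h
    return total

theta_in = 1.2e-4                # assumed shear-layer thickness, 0.02 D
theta_out = momentum_thickness(theta_in)
```

The recovered value agrees with the input to within a few percent, the small discrepancy coming from the curvature term $1/r - r$ in the profile argument.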
We now turn to the fluctuations in the jet velocity at the exit plane, quantified by the turbulence intensity normalized by the average velocity. The profiles are shown in Fig.~\ref{fig:turb_base}(a) alongside profiles measured by Todde et al.
(2009) in their work on low Reynolds number free jets. It can be seen that the turbulence intensity profile has a comparable trend, although the current work has much lower centerline turbulence intensity. This
speaks to the benefit of having a gravity-fed jet, free from any disturbances downstream from pumps or fans.
Lastly, we examine the spectral content of the flow at the exit plane in Fig.~\ref{fig:turb_base}(b), and find no discrete peaks in the frequency spectrum, assuring that the jet is a low-turbulence system with little ambient noise.
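The spectral check amounts to computing a periodogram of the anemometer voltage record and looking for coherent peaks. The sketch below applies a discrete periodogram to a synthetic signal (a 40 Hz tone plus weak noise) purely to illustrate the analysis; it is not measured data, and in the experiment no such discrete peak is present.

```python
import cmath
import math
import random

def periodogram(x, fs):
    """Plain DFT periodogram: list of (frequency, power) pairs up to fs/2."""
    n = len(x)
    mean = sum(x) / n
    x = [v - mean for v in x]            # remove the DC component
    spec = []
    for k in range(n // 2):
        s = sum(x[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n))
        spec.append((k * fs / n, abs(s) ** 2 / n))
    return spec

fs, n = 500.0, 512                        # 500 samples/s, as for the camera
random.seed(0)
signal = [math.sin(2 * math.pi * 40.0 * j / fs) + 0.05 * random.gauss(0, 1)
          for j in range(n)]
peak_freq = max(periodogram(signal, fs), key=lambda p: p[1])[0]
```

For the synthetic tone the periodogram peaks at the bin nearest 40 Hz; a flat spectrum, as observed at the exit plane, would instead show no dominant bin.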
\begin{figure}
\centering
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{selfsimilar.jpg}
\end{subfigure}
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[width=0.75\textwidth]{Ian_3D_vel_profile.jpg}
\end{subfigure}\\
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{figures/theta_vs_Re.png}
\end{subfigure}
\caption{(a) Velocity profiles at multiple Reynolds numbers (b) Two-dimensional trace of velocity at z/D=0.1, showing symmetry of profile about the axis (c) Shear layer thickness as a function of Reynolds number }
\label{fig:meanprofiles}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{figures/toddeRe2500_1.png}
\end{subfigure}
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{M=1_spectrum.jpg}
\end{subfigure}\\
\caption{(a) Radial profile of turbulence intensity at Re = 1688, M=1 (b) Frequency spectrum of voltage fluctuations } \label{fig:turb_base}
\end{figure}
\subsection{Test Procedure}
For experiments with propylene glycol as the ambient fluid, each set of experiments is conducted at a constant Reynolds number governed by the flow rate. Over the course of several test runs, the viscosity of the tank (and hence the value of M) decreases due to mixing with the injected salt water. This makes the test runs inherently quasi-transient. Therefore, a careful procedure was followed to minimize the effects of variation of M during each test run. We first estimate the variation of M during a typical test run. At a salt water-glycol interface, diffusion acts to thicken the interface to yield a concentration thickness given by $\sqrt{\gamma t} $ where $\gamma$ is the binary diffusion coefficient of propylene glycol into water ($1.1 \times 10^{-9} \hspace{0.2em}m^2/s$). For a one-minute long trial run, this yields a diffusion length of the order of 0.01D. In practice, test runs were much shorter, typically lasting 20-30 s after the initial starting vortex had passed out of the field of view. During this period, high-speed images were acquired with a digital camera operating at 500 fps and at $1024\times 1024$ px resolution. After the image acquisition was completed, the flow was turned off, and the tank was stirred with a mixer and allowed to settle and become quiescent again before the next trial (typically 30 minutes). A sample of tank fluid was taken for subsequent viscosity and density measurement for determining the value of M for each trial. In this way, at each Reynolds number, values of M starting from 50 and descending down to 15 were attained.
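The diffusive-thickening estimate above can be sketched in a few lines. Using the quoted diffusivity, a one-minute run thickens the interface to a few hundredths of $D$, consistent with the order-of-magnitude estimate in the text; the run duration is the only assumed input.

```python
import math

gamma = 1.1e-9    # m^2/s, binary diffusion coefficient of glycol into water
D = 6e-3          # m, nozzle exit diameter

def diffusion_length(t):
    """Concentration thickness of the interface after time t: sqrt(gamma * t)."""
    return math.sqrt(gamma * t)

delta = diffusion_length(60.0)   # one-minute trial run
```

Since actual runs last only 20-30 s, the interface remains even thinner, supporting the sharp-interface picture of the near-field.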
It is reasonable to expect that, since the salt water and propylene glycol were initially density-matched to within 0.1\% before the start of the experiments, the density would remain the same throughout the experiments, even as the bulk viscosity in the tank decreased. This expectation proved incorrect: aqueous solutions of propylene glycol undergo a slight, concentration-dependent contraction in volume. Figure \ref{fig:density_M_vs_trial}(a) shows the variation of density and viscosity in the test section after a series of trial runs. It is apparent that the specific gravity varies between the 1.036 of pure propylene glycol and a maximum of 1.051, or about 1.44\%. The injected salt water jet continues to have a specific gravity of 1.036, raising the prospect of confounding buoyancy effects. The importance of buoyant forces relative to jet inertia is assessed by evaluating the Richardson number,
\begin{equation}
Ri = \frac{g\Delta \rho D}{\rho _mU_j^2}
\end{equation}
This is plotted in Fig. \ref{fig:density_M_vs_trial}(b) against the Reynolds number, assuming an average density difference of 0.7\% over all runs. For Reynolds numbers greater than 1500, Ri is less than 0.1 and can be ignored.
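A sketch of this estimate follows, using the 0.7\% average density difference and the nozzle diameter from the text; the jet viscosity used to convert Re into an exit velocity is an assumed, nominal salt-water value:

```python
g = 9.81          # gravitational acceleration, m/s^2
rho = 1042.0      # nominal salt-water density, kg/m^3
D = 6e-3          # nozzle exit diameter, m
drho_rho = 0.007  # average density difference of 0.7% over all runs
mu_j = 1.0e-3     # assumed jet (salt water) viscosity, Pa s -- illustrative

def richardson(Re):
    """Ri = g*(drho/rho)*D/U^2, with U inferred from the Reynolds number."""
    U = Re * mu_j / (rho * D)
    return g * drho_rho * D / U**2

for Re in (1000, 1500, 2000, 3000):
    print(Re, round(richardson(Re), 4))
```

Consistent with the figure, Ri stays below 0.1 for Re above 1500.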
\begin{figure}
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{Ian_rho_mu.jpg}
\end{subfigure}
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{figures/Ri_Re.png}
\end{subfigure}
\caption{(a) Variation of density and viscosity in test section during a sequence of runs (b) Richardson number as a function of Re}
\label{fig:density_M_vs_trial}
\end{figure}
\section{Results}
\subsection{Flow Visualization}
Preliminary images for M=1 (water jet into water ambient) were acquired with an 18MP camera whose lens was equipped with an orange filter. The jet fluid was dyed with Rhodamine 6G, and the tank volume was illuminated with a blue LED light. The emission by rhodamine in the orange part of the spectrum was captured, and shows the breakdown of the jet. Figure \ref{fig:flowviz} shows the axisymmetric nature of the instabilities dominating the breakdown process, which develop from an initially nearly parallel near-field region. As Re increases, this development distance becomes palpably shorter. Unlike the observations of \citet{Mattingly1974}, no evidence of an eventual competition between the axisymmetric mode and a growing helical mode in the far-field is observed. On the other hand, when the jet emerges into an ambient medium of propylene glycol (M=45), helical instabilities are observed over a range of Re from approximately 1600 to 2600. Figure \ref{fig:flowviz}(b) shows a sequence of images taken at M=45 over a range of Reynolds numbers. Of note is the disappearance of the parallel flow region in the near-field, with the helical mode developing almost instantaneously at the exit. Some discrete bright spots visible in every image are artifacts due to bubbles being introduced in the test section during the stirring process, which remain suspended due to the high fluid viscosity. We also note that the wavelength of the disturbances appears substantially lower than that of the axisymmetric instability at M=1.
These two sets of images suggest that there must exist a transition value (or range) of M for every fixed value of Re, at which the dominant mode changes from axisymmetric to helical, and experiments were conducted to elucidate this transition behavior. Figure~\ref{fig:const_Re_varying_M} shows a sequence of images captured for Re = 2009. The transition of the dominant instability from helical to axisymmetric, as M decreases from * to *, is clearly evident. Nevertheless, it is difficult to assign a precise value for the transition value of M with confidence in all cases. Inspection of multiple images allows us to assign a transition value of M close to ** in this instance. However, in some cases, the images appear to show both axisymmetric and helical features, with two distinct frequency peaks in the hot film spectrum (discussed subsequently), and no clear transition is evident, especially at lower Re. Due to the nature of the experiment, involving discrete steps in M, a fine-grained transition value could not be determined in all cases. Nevertheless, observations clearly indicate that this transition value of M is Reynolds number-dependent. Figure~\ref{fig:M_Re_plane} shows our estimate of the transition value of M as a function of Reynolds number. Below Re = 1600, the instability was weak, and it was difficult to distinguish the nature of the mode. For Re $>$ 1600, the transition value of M increases with Re, and at Re = 2800 (not shown in Fig. \ref{fig:M_Re_plane}), the first trial showed a helical mode before switching to an axisymmetric mode, suggesting that the critical viscosity required is higher.
\begin{figure}
\centering
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{M=1_flowviz.jpg}
\caption{}
\end{subfigure}
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{M=45_flowviz.jpg}
\caption{}
\end{subfigure}
\caption{(Top) Growth of axisymmetric instabilities for M=1 and multiple Re: (a) Re=428 (b) Re=1036 (c) Re=1545 (d) Re=2009 (e) Re=2539 (f) Re = 3009 (Bottom) Helical modes observed at M=45, Re = 1332, 1676, 2013, 2339.}
\label{fig:flowviz}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{constant_Re_varying_M.jpg}
\caption{Images showing the transition from helical to axisymmetric modes as M is decreased, at a fixed Reynolds number of 2013. From left to right, the values of M are 45, 33, 28, 23 and 20. Here M=28 appears to be closest to the transition between modes.}
\label{fig:const_Re_varying_M}
\end{figure}
\begin{figure}
\centering
\includegraphics[height=3in]{M_Re.jpg}
\caption{Transition boundary in (M, Re) space between helical and axisymmetric modes.}
\label{fig:M_Re_plane}
\end{figure}
\clearpage
\subsection{Hot Film Anemometry}
Hot film anemometry was used to characterize the flow for values of M greater than unity. With the mixing of the two fluids and the change in Prandtl number, the velocity calibration for the hot film could no longer be used, so the raw voltage response is presented instead. Here we are interested in the spectral content of the velocity fluctuations at different downstream distances, as well as the rate of growth of the disturbance relative to the constant property jet. Figures \ref{fig:HW_FFT_vs_z}(a) and (b) show the evolution of the spectrum along the centerline and in the shear layer for M= * and Re=1682. A distinct frequency peak is visible at all locations in the near field. The strength of voltage fluctuations (Fig. \ref{fig:vol_fluc}) for M=1 shows a relatively gentle increase downstream; for large M, the strength shows a sharp increase within one jet diameter, appearing to saturate within a few diameters.
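The fluctuation strength at each station is the RMS of the mean-subtracted voltage record; a minimal sketch on a synthetic record (the disturbance frequency and 0.1 V amplitude are illustrative values, not measurements):

```python
import numpy as np

def rms_fluctuation(v):
    """RMS of the fluctuating part of a voltage record (mean removed)."""
    v = np.asarray(v, dtype=float)
    return float(np.sqrt(np.mean((v - v.mean())**2)))

# Synthetic hot-film record: a mean voltage plus a single-frequency oscillation.
fs, f0 = 500.0, 10.0                    # sampling rate, Hz; disturbance frequency, Hz
t = np.arange(0.0, 2.0, 1.0/fs)
v = 2.0 + 0.1*np.sin(2.0*np.pi*f0*t)    # illustrative 0.1 V oscillation amplitude

print(rms_fluctuation(v))   # amplitude/sqrt(2) for a pure sinusoid
```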
The dependence of the dominant frequency on the viscosity ratio, as detected by hot film anemometry, is plotted in Fig. \ref{fig:HW_f_vs_M}. Following the experimental sequence and moving from high values of M to low values, one sees an increasing trend in frequency while the helical mode remains dominant.
\begin{figure}[hb]
\centering
\includegraphics[width=\textwidth]{fft_re_2013m_39.jpg}
\caption{Evolution of velocity spectra in the downstream direction for M=39, Re=2013. Top: centerline variation. Bottom: spectra in the shear layer.}
\label{fig:HW_FFT_vs_z}
\end{figure}
\begin{figure}
\centering
\includegraphics[height=2.25in]{vol_fluc_re_2000.jpg}
\caption{Root mean square value of voltage fluctuations along the centerline and shear layer for M=1 and M=45 at Re=2000}
\label{fig:vol_fluc}
\end{figure}
\begin{figure}
\centering
\includegraphics[height=2.25in]{figures/Ian_St_vs_M.png}
\caption{Strouhal numbers corresponding to the dominant frequency, as identified by hot film anemometry, as a function of M for Re=1682.}
\label{fig:HW_f_vs_M}
\end{figure}
\subsection{Image Analysis}
The hot film measurements strongly indicate the existence of a single dominant mode that saturates in intensity in the first few diameters downstream of the jet exit. However, the increased conductivity of the liquid due to the dissolved salts resulted in increased contamination and pickup of electrical line noise despite probe shielding; together with the occasional air bubbles introduced into the tank during mixing, which would stick to the hot film, this resulted in a very low rate at which meaningful data were acquired. As a result, image data were chosen as a means of investigating the growth of unstable modes. The orange filter on the camera lens ensured that the jet could be strongly distinguished against the background, by isolating the emission from the Rhodamine dye under blue illumination. Applying a threshold intensity to the grayscale images allows determination of the jet boundary; the diameter of the jet determined in this way was only weakly sensitive to the threshold value, but we are primarily interested in the frequency, which is unaffected by the choice of threshold. To study the spatial evolution of the oscillations of the interface, we examine the jet width at 4 locations downstream of the jet exit, as plotted in Fig. \ref{fig:jet_width_oscillations}. The amplitude of oscillations shows a non-linear increase, and in Fig. \ref{fig:interface_FFT} we further examine the amplitude and frequency of these oscillations. Figure \ref{fig:interface_FFT}(a) shows the sharp peak in the power spectrum that might be surmised from the oscillatory waveform in Fig. \ref{fig:jet_width_oscillations}. Figure \ref{fig:interface_FFT}(b) shows the variation of the amplitude of the dominant frequency in the downstream direction. Again, as with the anemometry measurements, the oscillations show an exponential increase in disturbance amplitude, before saturating at z/D=* .
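The threshold-based width extraction can be sketched as below, on a synthetic intensity row (threshold and intensity levels are illustrative). As in the experiment, the recovered width is insensitive to the exact threshold as long as it falls between the background and dye intensity levels:

```python
import numpy as np

def jet_width(row, threshold):
    """Width in pixels of the bright (dyed) jet region in one image row."""
    bright = np.where(row > threshold)[0]
    if bright.size == 0:
        return 0
    return int(bright[-1] - bright[0] + 1)

# Synthetic grayscale row: dark background with a bright dyed-jet core.
row = np.zeros(64)
row[20:40] = 200.0

print(jet_width(row, 50), jet_width(row, 150))   # same width for both thresholds
```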
\begin{figure}[b]
\centering
\includegraphics[width=\textwidth]{figures/Jet_width_global.png}
\caption{(a) Time variation of the jet width in pixels at different downstream distances (b) Square of the amplitude of the coefficients of the Fourier transform, $A^2(f)$ }
\label{fig:jet_width_oscillations}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=0.95\textwidth]{Power_spectrum_A2.png}
\caption{}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=0.95\textwidth]{A2_growth_zbyD.png}
\caption{}
\end{subfigure}
\caption{(a) Power spectrum of oscillation of interface at z/D= for M=38, Re=2400. (b) Growth of the square of the amplitude of the Fourier coefficient of the dominant mode in the downstream direction}
\label{fig:interface_FFT}
\end{figure}
To ascertain the nature of this instability, which develops much faster than the axisymmetric instability of the constant property jet, we verified that the frequencies at the different downstream stations shown in Fig. \ref{fig:jet_width_oscillations} are identical. Another way of assessing the spatially invariant `global' nature of this frequency is to examine the intensity records at single pixels in the shear layer. Figure \ref{fig:pxfluc} shows the frequency spectrum of 4 pixels at two different downstream locations, on either side of the jet. The frequencies are identical, and provide further circumstantial evidence that the instability observed at large M is a global mode, corresponding to absolute instability of the near-field profiles. This putative global mode has a frequency that depends on the parameters that define the flow, such as the inlet Reynolds number, the viscosity ratio M, and the inlet velocity profile, specified by the momentum thickness $\theta$. In the experiment, the values of Re and $\theta$ are conjoined through the specific geometry of the nozzle. Further, it is experimentally difficult to conduct trials at constant M while changing Re. Therefore, we present the global mode frequency at constant Re (and $\theta$) as a function of M. Figure \ref{fig:f_lambda_vs_M}(a) shows that the waves developing on the interface have a frequency that decreases as the ambient viscosity is reduced from a starting value of M$\approx$45. For this data set taken at Re=2000, there is a sharp increase in frequency near the observed transition from helical to axisymmetric mode. We interpret this as further evidence of the helical mode being driven by an absolute instability of the near-field profiles. After the transition to the axisymmetric mode, there is a further decrease in frequency, before the curve displays asymptotic behavior as M is further reduced.
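The pixel-wise frequency check can be sketched as follows: extract the dominant spectral peak from the intensity record at each pixel and compare across stations. A minimal sketch with synthetic records (the 12.5 Hz mode frequency is a hypothetical placeholder; the 500 fps rate matches the camera rate quoted earlier):

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Frequency of the largest spectral peak, excluding the mean."""
    sig = np.asarray(signal, dtype=float)
    sig = sig - sig.mean()
    spec = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(sig.size, d=1.0/fs)
    return freqs[np.argmax(spec)]

fs = 500.0                      # camera frame rate, fps
t = np.arange(0.0, 4.0, 1.0/fs)
f_global = 12.5                 # hypothetical global-mode frequency, Hz

# Two "pixel" records at different stations: same frequency, different phase/amplitude.
p1 = 0.5*np.sin(2.0*np.pi*f_global*t)
p2 = 1.5*np.sin(2.0*np.pi*f_global*t + 1.0)

print(dominant_frequency(p1, fs), dominant_frequency(p2, fs))   # identical peaks
```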
The corresponding wavelengths, as determined from inspection of the images, are shown in Figure \ref{fig:f_lambda_vs_M}(b). Knowledge of the wavelength and frequency allows us to calculate the phase velocity of the dominant mode; this is plotted in Figure \ref{fig:f_lambda_vs_M}(c).
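Given a measured frequency and wavelength, the phase velocity follows directly as $c = f\lambda$, and the corresponding Strouhal number as $St = fD/\bar{U}$; a short numerical sketch (the frequency, wavelength and velocity are illustrative placeholders, not values read off the figures):

```python
# Phase velocity and Strouhal number of the dominant mode.
f = 12.0          # dominant frequency, Hz (hypothetical)
lam = 4.0e-3      # wavelength from image inspection, m (hypothetical)
D = 6e-3          # nozzle exit diameter, m (from the Experiments section)
U = 0.35          # bulk exit velocity, m/s (hypothetical)

c = f * lam       # phase velocity, m/s
St = f * D / U    # Strouhal number based on D and U

print(c, St)
```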
\begin{figure}
\centering
\includegraphics[width=\textwidth]{pxfluc_FFT.png}
\caption{Spectrum of pixel intensity fluctuation in shear layer at four locations in the near-field of the jet.}
\label{fig:pxfluc}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=0.9\textwidth]{figures/Image_St_vs_M.png}
\caption{}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=0.9\textwidth]{figures/Image_lambda_vs_M.png}
\caption{}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=0.9\textwidth]{figures/Image_phasevel_vs_M.png}
\caption{}
\end{subfigure}
\caption{Variation of instability characteristics along the constant (Re,$\theta$) curve, as M decreases during the experiment. (a) Frequency (b) Wavelength (c) Phase velocity }
\label{fig:f_lambda_vs_M}
\end{figure}
\section{Summary and Discussion}
We have presented experiments characterizing the flow behavior for the specific configuration of a low-viscosity jet with a laminar boundary layer emerging into an ambient medium of relatively higher viscosity. The fluids, while perfectly miscible, have a low binary diffusion coefficient, with the result that the diffusion region is expected to be extremely thin, approaching a sharp interface with zero surface tension. The flow visualization results clearly indicate the onset of a helical mode at or close to the nozzle exit plane when the viscosity ratio M is increased substantially beyond unity, with enhanced mixing relative to the M=1 case. For that baseline case, it is apparent that the jet can stay coherent and nearly parallel for a significant distance downstream, indicating a slowly evolving spatial instability that saturates in the far-field. For large M, hot film anemometry confirms the presence of a self-sustained oscillation with a discrete peak in the spectrum, with the disturbance energy rapidly growing and saturating within the first few diameters downstream. Further, we have characterized this dominant frequency as a function of M and Re over a limited range, showing that this frequency is an increasing function of Re and a decreasing function of M. However, over the range of M studied, the anemometry does not display any sharp jumps in the frequency, leaving open the question of the convective/absolute nature of the instability. To resolve issues with anemometry in conducting fluids, a larger range of Reynolds numbers was examined using image analysis. With this technique, the initial linear growth, followed by an exponential rise in the amplitude of oscillations, is more clearly evident. We are able to confirm the frequency trends observed with anemometry, and additionally resolve a discrete jump in frequency at a certain value of M, as well as the spatial invariance of this frequency.
Turning to the convective/absolute nature of the instability, at first glance, one cannot rule out the possibility that these observations reflect a rapidly growing convective instability that happens to be clearly visible against a low-noise background established by pump-less flow. However, there are several features of the flow, as discussed above, that suggest otherwise. It is generally accepted that for an instability to be considered a global mode that arises from a linear, absolutely unstable mechanism (as understood in the context of spatio-temporal linear stability analysis), it should display certain hallmarks. These are: the presence of a single self-sustained oscillation that can be detected everywhere in the domain; the sharp onset of a regime of enhanced mixing in which this instability appears, as determined by the appropriate control parameters; and the insensitivity to external forcing. The first criterion is found to be amply satisfied in the present experiment. The image analysis does show a sharp jump in frequency and wavelength as the viscosity ratio is decreased, and the transition value of M is reasonably consistent with visual observations. Further evidence for the sharp onset is afforded by an examination of the singular values obtained through Principal Component Analysis, which suggest that near the critical value of M corresponding to the frequency jump, there is always only one dominant mode, implying no mode competition close to the transition point. 
We also note that, in line with the theoretical understanding that the development length for the global mode is in principle infinite at the transition boundary for onset of the global mode (in terms of the controlling parameters) \citep{Couairon1997}, but shortens as one moves away from the boundary into the regime of absolute instability, the disappearance of a region of parallel flow for large M, compared to M=1, is an indicator of the instability being controlled by inlet conditions.
The question of sensitivity (or lack thereof) to external noise remains to be explored. Since the flow is gravity-fed, pump-driven oscillations such as those used by \citet{dOlce2009} are difficult to implement, and current work focuses on applying vibrations to the diffuser section upstream of the nozzle. A strong response by the system, in terms of enhanced amplitude of oscillations, to forcing frequencies near the natural frequency of the instability would provide further strong support for the idea that the observed helical mode corresponds to the absolutely unstable mode found in the companion computational paper by \citet{Yang2021}. However, it should be noted that this may not necessarily constitute clinching evidence; indeed, \citet{Hallberg2008} have shown that the global mode in a low-density jet can be overwhelmed by sufficiently strong forcing. We also note that the decrease in the Strouhal number with increasing M, as exemplified in Figs. \ref{fig:HW_f_vs_M} and \ref{fig:f_lambda_vs_M}(a), is closely aligned with the behavior of the absolutely unstable frequency predicted by spatio-temporal linear stability analysis for a specific tanh-type profile (see the corresponding figure of \citet{Yang2022}).
Lastly, it should be noted that global mode characteristics are closely linked to inlet profiles, and therefore the influence of the boundary layer thickness needs to be explicitly addressed. In the present study, this thickness is directly linked to the Reynolds number through the nozzle contraction profile, and future studies will require disentangling the relative contributions of these two parameters to the evolution of the flow.
\textbf{Acknowledgement} We are grateful for useful discussions with David Forliti and Paul Strykowski during the preparation of this manuscript. We also acknowledge the assistance from Justin Chen in acquiring some of the images.
\bibliographystyle{jfm}
\section{\label{intro} INTRODUCTION}
Shear layers with spatially variable fluid physical properties occur in a variety of industrial and natural systems. The variations in density and/or viscosity may occur due to temperature gradients, as in the case of a plasma torch \cite{Duan2002}, static mixers \cite{Cao2003}, and reacting flows \cite{Pathikonda2021}, or due to species concentration gradients, such as the salinity gradients set up when an estuary enters an ocean. While most situations will feature both density and viscosity effects, the fluid dynamics of variable-density jets have been extensively studied, both theoretically \cite{Huerre1985, Yu1990, Lesshafft2006} and experimentally \cite{Kyle1993, Yu1993, Hallberg2006}. In particular, low-density jets have been shown to be members of a class of globally unstable flows \cite{Huerre1990}, which are characterized by the sudden onset of a regime with enhanced mixing, self-sustained oscillations, and insensitivity to external forcing. The frequency of the global modes in the near-field of low-density jets has been linked to the existence of local profiles over a finite streamwise extent that are absolutely unstable in the framework of local spatio-temporal linear stability analysis \cite{Chomaz2005, Pier2001}. While the primary mechanism of breakdown of the flow is inviscid, arising from the baroclinic torque established by gradients in density and pressure, there are strong indications \cite{Hallberg2006} that viscosity does modify the onset of global modes, as well as their frequency. Hallberg and Strykowski [] conducted experiments with multiple nozzle geometries, thereby independently studying the effects of shear layer thickness, density ratio and jet Reynolds number, and found a weak but perceptible effect of jet Reynolds number on the global mode frequency. The linear stability calculations of Lesshafft \cite{Lesshafft2006} and Srinivasan et al.
\cite{Srinivasan2010} also suggest that the frequency and transition boundary between convective and absolute instability are affected by the viscosity in the form of the Reynolds number. It is therefore natural to inquire into the effects of variations in viscosity between the jet and ambient, which is the focus of this study. \\
Strong gradients in viscosity are unlikely to be established in gas flows, and we look to other situations where the role of viscosity gradients has been more extensively investigated. In fact, in contrast to free shear flow, an extensive body of literature on variable viscosity flows addresses pressure-driven internal flows of high Schmidt number fluids \cite{Govindarajan2014b}. While viscosity is instinctively assumed to have a stabilizing influence on the growth of disturbances, it is responsible for altering the base state of a flow, often creating sharp velocity gradients through the no-slip condition and therefore serving as a source of disturbance kinetic energy. It has long been known that a jump in viscosity across a sharp interface can lead to long-wave instabilities at any Reynolds number \cite{Yih1967} or short-wave instabilities at low Re \cite{Hooper1983}. Here we focus on flows of miscible fluids; the immiscible situation is covered in reviews by Joseph et al. \cite{Joseph1997} and, more recently, Govindarajan and Sahu \cite{Govindarajan2014b}. Mention should also be made of the extensive work done in the context of liquid atomization, on planar shear layers with gas--liquid streams \cite{Matas2011, Otto2013, Fuster2013, Ling2019, Bozonnet2022}. Together, these studies have shown that viscous stability calculations are required to match theory with experiment; that absolute instability of co-flowing gas/liquid streams is supported when velocity defects immediately downstream of a splitter plate are considered, with predicted frequencies matching experimental observations; and that confinement and the finite thickness of the gas stream play an important role in destabilization. However, the viscosity ratios of the two streams, when considered, were always extremely small, and the effects of this ratio were rarely isolated.
Ern et al. \cite{Ern2003} showed the destabilizing effects of a finite-thickness interface marked by gradients in velocity and viscosity, and demonstrated that for certain parameter ranges, the instability could be stronger than that of the corresponding sharp-interface configuration. Sharp gradients in the velocity profile, combined with variations in viscosity, lead to additional source terms in the equation for disturbance kinetic energy, which drive the growth of instabilities near the diffuse interface. Ranganathan and Govindarajan \cite{Ranganathan2001} performed a temporal stability analysis of the effects of diffusion in channel flow of two fluids in a three-layer configuration, and found that when the critical layer (the region where the wave speed matches the mean velocity) overlapped the viscosity-stratified layer, the flow was strongly stabilized or destabilized, depending on whether the more viscous fluid was adjacent to the wall or in the interior. For the pipe geometry, Selvam et al. \cite{Selvam2007,Selvam2009} performed a linear stability analysis that predicted the onset of absolute instability at low Reynolds numbers when the viscosity contrast is sufficiently high and the diffuse interface is located in a certain range of radial locations with respect to the pipe radius. They found that when the core fluid is more viscous, the flow can at best be convectively unstable over a certain Reynolds number range, with the axisymmetric mode being dominant. When the less viscous liquid is in the core, helical modes are favored, and can lead to absolute instability. These calculations were supported by direct numerical simulation and a global linear stability analysis. The calculations partially reproduced the experimental observations of \cite{DOlce2008}, who reported pearl- and mushroom-shaped instabilities in the Reynolds number range of 2--60, with no sharp transitions in either wavelength or frequency that would provide strong evidence of a global mode.
The helical modes observed by \cite{Cao2003} for injection of a low-viscosity fluid into a static mixer are in line with the theoretical results discussed above.\\
Virtually no experimental data are available for miscible free shear layer flows with strong viscosity gradients, in either planar or cylindrical geometries. A notable situation that features shear layers with significant viscosity variation, accompanied by density variation, is that of the buoyant jet \cite{Subbarao1992, Bharadwaj2017}. This has been confirmed to be an absolute instability \cite{Chakravarthy2018} when realistic viscosity profiles are included and either the density ratio between the ambient and the jet is greater than 2, or the Richardson number is greater than 1; however, the viscosity ratios involved are not large.
It is difficult to translate insights from confined flows to unconfined shear layers such as jet flows, due to the fundamental differences in velocity profiles, characterized by inflection points in the case of jets. Any insights from pressure-driven flow studies have to be interpreted with caution, since confinement is known to play both stabilizing and destabilizing roles in other situations involving absolute instability of single-phase \cite{Juniper2006a, Healey2009, Yang2021} or two-phase flows \cite{Bozonnet2022}. Further, the seminal works of Rayleigh [] and Tomotika \cite{Tomotika1935} considered capillary flows of liquid filaments in another viscous medium in the limit of negligible Reynolds number, and are not relevant to the present work, which is focused on large Reynolds numbers. However, it is interesting to note that the helical mode is found to arise from an inviscid mechanism based on the Rayleigh criterion for the base profiles used. This would be expected to further favor the establishment of such modes in free shear flows. Sahu and Govindarajan \cite{Sahu2014} considered a planar shear layer configuration, and examined the emergence of an overlap mode when the gradients in velocity and viscosity occur in the same region. Destabilization was enhanced when these layers overlapped, and with decreasing thickness of either of the gradient regions. In line with inviscid theory, the configuration was found to be absolutely unstable when countercurrent velocity profiles were used for the base state. More recently, Yang and Srinivasan \cite{Yang2022} carried out a linear stability analysis of base profiles corresponding to the near-field of a jet emerging into an ambient with a different viscosity.
Their base profiles reflected modifications to the standard tanh profile typically used in the analysis of jet flows \cite{Mattingly1974}, such as an inward radial shift due to the decelerating effects of a more viscous ambient, and concentration gradient regions that were much thinner than the momentum thickness. As is typical of jet flows, the axisymmetric and helical modes had nearly equal temporal growth rates over a wide range of conditions specified by the jet Reynolds number, ambient-to-jet viscosity ratio, and momentum and concentration layer thicknesses. However, beyond a critical value of the viscosity ratio that was Reynolds number-dependent, absolute instability of the flow was supported, with the helical mode being strongly favored over the axisymmetric mode. A more systematic study of the transition boundary between convective and absolute instability as a function of the operating parameters is currently underway. \\
With the above preliminary results in mind, we carry out a study that seeks to isolate the effects of large viscosity contrast between a jet and its surroundings. The goals of the present study are to characterize the near-field of a low-viscosity jet at moderate Reynolds numbers ($1500 < Re < 3500$) for ambient-to-jet viscosity ratios ranging from 1 to 45, and to examine the flow field for any evidence of global modes. This article is structured as follows. Section 2 describes the experimental facility used to achieve a neutrally buoyant jet with high viscosity contrast. Section 3 describes the flow visualization and the observation of disturbance modes. Section 5 describes the identification of the dominant modes using a Proper Orthogonal Decomposition (POD)-based technique applied to the images from visualization. Section 6 provides a summary and conclusions.
\section{Experiments}
The study of the effects of viscosity gradients alone on the development of instabilities in the near-field of a jet requires the elimination of density effects, as well as a quiet facility with a minimum of external disturbances. Accordingly, experiments were carried out in the jet facility shown in Fig. \ref{fig:jetfacility}, which utilizes gravity to attain the required flow rates. The experiments were performed in the vertical configuration. A large overhead reservoir delivered fluid to a nozzle located in a test section of square cross-section, through a flow meter, a diffuser section and a flow straightener composed of laminar flow elements. The nozzle has a fifth-order polynomial profile with zero slope and curvature at its inlet and exit planes, and imposes an area contraction ratio of 87 on the entering flow. The nozzle exit diameter $D$ is 6 mm. \\
Jet Reynolds numbers are defined based on the nozzle exit diameter D and the average velocity $\bar{U}$ of the flow, as inferred from measurement of the volumetric flow rate from the flowmeter (Dwyer ****, accuracy of 2\%),
\begin{equation}
Re = \frac{\rho \bar{U}D}{\mu _j}
\end{equation}
where $\mu _j$ is the viscosity of the injected fluid.
The requirement of having a wide range of viscosity contrast, defined in terms of the ambient-to-jet viscosity ratio, requires the use of liquids. Salt water (nominal density 1042 $kg/m^3$) is chosen as the test fluid, in order to facilitate density-matching as explained below. Reynolds numbers up to 4000 can be attained using this reservoir/nozzle combination. \\
\begin{figure}
\centering
\includegraphics[height=4in]{figures/hotwire_setup.jpg}
\caption{Sketch of the jet facility used to produce a low-viscosity jet using gravity-driven flow}
\label{fig:jetfacility}
\end{figure}
The jet exhausts into the test section, which is made of transparent polycarbonate and has a square cross-section with inner dimensions of $240\times 240$ mm. Overflow ports near the top of the tank enable maintenance of a constant fluid height in the test section during operation. The top of the tank is open to allow direct mounting of a hot-film anemometry system. The fluid in the tank creates the desired viscosity ratio, which is defined as
\begin{equation}
M = \frac{\mu _\infty}{\mu _j}
\end{equation}
where the subscript $\infty$ refers to test section conditions. For this study, propylene glycol and salt water were used as the two fluids. Propylene glycol in its pure form has a viscosity of 42 $mPa\,s$, approximately 45 times that of water, and a density of 1036 $kg/m^3$, which is only a few percent above that of water. Industrial-grade propylene glycol used in this work was often found to have even higher viscosity values, and therefore each batch of glycol was measured for its density and dynamic viscosity. A salt water solution was then prepared in order to match the density to within a tenth of a percent ($\frac{\Delta \rho}{\rho} = |\frac{\rho _j -\rho _\infty}{\rho _j}| < 0.0005$). These fluids are Newtonian over the range of strain rates imposed and are fully miscible with each other, eliminating surface tension as a relevant parameter. Nevertheless, as we shall see, the interface has no time to thicken diffusively and remains essentially sharp in the near-field of the jet.
\subsection{Constant Property Jet Profiles}
Hot-film anemometry was used to first characterize the jet facility to establish the base flow for a constant property jet. For a water jet issuing into a water ambient, anemometry was used to characterize the mean velocity profiles and background noise level, as well as the shear layer thickness of the jet at the nozzle exit plane. Figure \ref{fig:meanprofiles}(a) shows velocity profiles emerging from the jet for multiple Reynolds numbers. The profiles are mostly top-hat, characterized by a steep decrease in magnitude in the shear layer towards the quiescent ambient fluid. A two-dimensional trace of voltage (Fig. \ref{fig:meanprofiles}b) at the exit plane (z/D=0.1) confirms the axisymmetric nature of these profiles. The momentum thickness of the shear layer was evaluated as a function of Reynolds number by integrating radially from the centerline to a location where the velocity decreased to 10\% of the centerline value; further radial measurements were avoided as the hot film responds unreliably to the low velocities in the entrained flow. The momentum thickness is evaluated as
\begin{equation}
\theta = \int_{0}^{\infty} \frac{U(r)-U_{\infty}}{U_c-U_{\infty}}\left[1-\frac{U(r)-U_{\infty}}{U_c-U_{\infty}}\right]dr
\end{equation}
The laminar nature of the jet boundary layer at the exit plane is checked (Fig. ~\ref{fig:meanprofiles}(c)) by observing a linear relationship between $D/\theta$ and $\sqrt{Re}$. The constants in the fit are unique to the nozzle geometry, reflecting the acceleration imposed by the area contraction and the resultant thinning of the boundary layer entering the nozzle. Profiles at multiple downstream locations within the first half-diameter can be well-represented by an equation of the form used by Mattingly and Chang \cite{Mattingly1974}:
\begin{equation}
\frac{u}{U_c} = 0.5\left[1+\tanh\left(\frac{D}{8\theta}\left(\frac{1}{r}-r\right)\right)\right]
\end{equation}
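As a consistency check between the two expressions above, numerically integrating the tanh profile through the momentum-thickness definition should recover the $\theta$ used to construct it when the shear layer is thin. The sketch below assumes $r$ in the profile is normalized by the nozzle radius; the chosen $\theta$ is illustrative.

```python
import numpy as np

D = 6e-3                  # nozzle exit diameter [m]
R = D / 2
theta = 0.05 * R          # illustrative momentum thickness (thin shear layer)

r = np.linspace(1e-6, 3 * R, 200001)
dr = r[1] - r[0]
r_t = r / R               # radius normalized by the nozzle radius (assumed)
u = 0.5 * (1 + np.tanh((D / (8 * theta)) * (1 / r_t - r_t)))

# Momentum thickness from its definition, with a quiescent ambient
# (U_inf = 0) and velocity normalized by the centerline value U_c:
theta_num = np.sum(u * (1 - u)) * dr
print(theta_num / theta)  # close to 1 when theta << R
```

The recovered thickness agrees with the imposed one to within a few percent, the residual being a curvature correction of order $(\theta/R)^2$.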
We now turn to the fluctuations in the jet velocity at the exit plane, with the turbulence intensity normalized by the average velocity. The profiles are shown in Fig. \ref{fig:turb_base}(a) alongside profiles measured by Todde et al.
(2009) in their work on low Reynolds number free jets. The turbulence intensity profile shows a comparable trend, although the current work has a much lower centerline turbulence intensity. This
speaks to the benefit of having a gravity-fed jet, free from any disturbances generated by pumps or fans.
Lastly, we examine the spectral content of the flow at the exit plane in Fig. \ref{fig:turb_base}(b), and find no discrete peaks in the frequency spectrum, confirming that the jet is a low-turbulence system with little ambient noise.
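The spectral estimates here amount to a discrete Fourier transform of the mean-subtracted anemometer record. A minimal sketch, with a hypothetical sampling rate and a synthetic trace standing in for the hot-film voltage (a weak tone buried in noise, so that peak identification is visible):

```python
import numpy as np

fs = 2000.0                         # assumed sampling rate [Hz]
t = np.arange(0, 10.0, 1.0 / fs)
rng = np.random.default_rng(0)

# Hypothetical voltage trace: a weak 100 Hz oscillation plus noise.
v = 0.05 * np.sin(2 * np.pi * 100.0 * t) + 0.02 * rng.standard_normal(t.size)

V = np.fft.rfft(v - v.mean())       # subtract mean, one-sided FFT
f = np.fft.rfftfreq(t.size, d=1.0 / fs)
psd = np.abs(V) ** 2

print(f[np.argmax(psd)])            # dominant frequency of the record
```

For the quiescent base flow of Fig. \ref{fig:turb_base}(b) the same procedure yields no such discrete peak, only broadband noise.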
\begin{figure}
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{selfsimilar.jpg}
\end{subfigure}
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{Ian_3D_vel_profile.jpg}
\end{subfigure}\\
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{D_Theta.png}
\end{subfigure}
\caption{(a) Velocity profiles at multiple Reynolds numbers (b) Two-dimensional trace of velocity at z/D=0.1, showing symmetry of profile about the axis (c) Shear layer thickness as a function of Reynolds number }
\label{fig:meanprofiles}
\end{figure}
\begin{figure}
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{vel_fluc.jpg}
\end{subfigure}
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{M=1_spectrum.jpg}
\end{subfigure}\\
\caption{(a) Radial profile of turbulence intensity at Re = 1688, M=1 (b) Frequency spectrum of voltage fluctuations } \label{fig:turb_base}
\end{figure}
\subsection{Test Procedure}
For experiments with propylene glycol as the ambient fluid, each set of experiments is conducted at a constant Reynolds number governed by the flow rate. Over the course of several test runs, the viscosity of the tank (and hence the value of M) decreases due to mixing with the injected salt water. This makes the test runs inherently quasi-transient. Therefore, a careful procedure was followed to minimize the effects of variation of M during each test run. We first estimate the variation of M during a typical test run. At a salt water-glycol interface, diffusion acts to thicken the interface to yield a concentration thickness given by $\sqrt{\gamma t}$, where $\gamma$ is the binary diffusion coefficient of propylene glycol into water ($1.1 \times 10^{-9} \hspace{0.2em}m^2/s$). For a one-minute long trial run, this yields a diffusion length of the order of 0.01D. In practice, test runs were much shorter, typically lasting 20-30 s after the initial starting vortex had passed out of the field of view. During this period, high-speed images were acquired with a digital camera operating at 500 fps and $1024\times 1024$ px resolution. After the image acquisition was completed, the flow was turned off, and the tank was stirred with a mixer and allowed to settle and become quiescent again before the next trial (typically 30 minutes). A sample of tank fluid was taken for subsequent viscosity and density measurement to determine the value of M for each trial. In this way, at each Reynolds number, values of M starting from 50 and descending to 15 were attained. \\
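The diffusion-length estimate above is a one-line calculation using the quoted diffusion coefficient and nozzle diameter:

```python
import math

gamma = 1.1e-9   # binary diffusion coefficient of propylene glycol in water [m^2/s]
D = 6e-3         # nozzle exit diameter [m]
t_run = 60.0     # one-minute trial run [s]

delta = math.sqrt(gamma * t_run)   # diffusive interface thickness [m]
print(delta / D)                   # a few hundredths of D
```

The ratio comes out to a few hundredths of $D$, i.e. of order $10^{-2}D$, consistent with the estimate above; the shorter 20-30 s runs used in practice give a thinner interface still.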
It is reasonable to expect that, since the salt water and propylene glycol were initially density-matched to within 0.1\% before the start of the experiments, the density would remain the same through the experiments, even as the bulk viscosity in the tank decreased. This expectation was somewhat belied: aqueous solutions of propylene glycol undergo a slight contraction in volume that is concentration-dependent. Figure \ref{fig:density_M_vs_trial}(a) shows the variation of density and viscosity in the test section after a series of trial runs. It is apparent that the specific gravity varies between the 1.036 of pure propylene glycol and a maximum of 1.051, a change of about 1.44\%. The injected salt water jet continues to have a specific gravity of 1.036, raising the prospect of confounding buoyancy effects. The importance of buoyant forces relative to jet inertia is assessed by evaluating the Richardson number,
\begin{equation}
Ri = \frac{g\Delta \rho D}{\rho _mU_j^2}
\end{equation}
This is plotted in Fig. \ref{fig:density_M_vs_trial}(b) against the Reynolds number, assuming an average density difference of 0.7\% over all runs. For Reynolds numbers greater than 1500, Ri is less than 0.1 and can be ignored.
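With the density difference and nozzle diameter fixed, the Richardson number falls off as $Re^{-2}$ along the facility's operating curve, since the jet velocity scales linearly with Re. A sketch, assuming an illustrative jet viscosity (the actual salt-water viscosity was measured for each batch):

```python
g = 9.81
D = 6e-3              # nozzle exit diameter [m]
rho = 1042.0          # jet density [kg/m^3]
mu_j = 1.1e-3         # assumed jet viscosity [Pa s]
drho_ratio = 0.007    # average density difference over all runs (0.7 %)

def richardson(Re):
    U = Re * mu_j / (rho * D)           # jet velocity implied by Re
    return g * drho_ratio * D / U**2    # Ri = g*(drho/rho_m)*D / U^2

for Re in (1500, 2000, 3000):
    print(Re, richardson(Re))
```

For these values Ri stays well below the 0.1 threshold for all Re above 1500, consistent with neglecting buoyancy in that range.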
\begin{figure}
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{Ian_rho_mu.jpg}
\end{subfigure}
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{Ri_Re.jpg}
\end{subfigure}
\caption{(a) Variation of density and viscosity in test section during a sequence of runs (b) Richardson number as a function of Re}
\label{fig:density_M_vs_trial}
\end{figure}
\section{Results}
\subsection{Flow Visualization}
Preliminary images for M=1 (water jet into water ambient) were acquired with an 18 MP camera whose lens was equipped with an orange filter. The jet fluid was dyed with Rhodamine 6G, and the tank volume was illuminated with a blue LED light. The emission by rhodamine in the orange part of the spectrum was captured and shows the breakdown of the jet. Figure \ref{fig:flowviz} shows the axisymmetric nature of the instabilities dominating the breakdown process, after developing from an initially nearly parallel near-field region. As Re increases, this distance becomes palpably shorter. Unlike the observations of Mattingly \& Chang \cite{Mattingly1974}, no evidence of an eventual competition between the axisymmetric mode and a growing helical mode in the far-field is observed. On the other hand, when the jet emerges into an ambient medium of propylene glycol (M=45), helical instabilities are observed over a range of Re from approximately 1600 to 2600. Figure \ref{fig:flowviz}(b) shows a sequence of images taken at M=45 and a Reynolds number of 2009. Of note is the disappearance of the parallel flow region in the near-field, with the helical mode developing almost instantaneously at the exit. Some discrete bright spots visible in every image are artifacts due to bubbles introduced into the test section during the stirring process, which remain suspended due to the high fluid viscosity. We also note that the wavelength of the disturbances appears substantially shorter than that of the axisymmetric instability at M=1.
These two sets of images suggest that there must exist a transition value (or range) of M for every fixed value of Re, where the dominant mode changes from axisymmetric to helical, and experiments were conducted to elucidate the transition behavior. Figure~\ref{fig:const_Re_varying_M} shows a sequence of images captured for Re=2009. The transition of the dominant instability from helical to axisymmetric as M decreases from * to * is clearly evident. Nevertheless, it is difficult to assign a precise value for the transition value of M with confidence in all cases. Inspection of multiple images allows us to assign a transition value of M close to ** in this instance. However, in some cases, the images appear to show both axisymmetric and helical features, with two distinct frequency peaks in the hot film spectrum (discussed subsequently), and no clear transition is evident, especially at lower Re. Due to the nature of the experiment, involving discrete steps in M, a fine-grained transition value could not be determined in all cases. Nevertheless, observations clearly indicate that this transition value of M is Reynolds number-dependent. Figure~\ref{fig:M_Re_plane} shows our estimate for the transition value of M as a function of Reynolds number. Below Re=1600, the instability was weak, and it was difficult to distinguish the nature of the mode; for Re$>$1600, the transition value of M increases with Re. At Re=2800 (not shown in Fig. \ref{fig:M_Re_plane}), the first trial showed a helical mode before switching into an axisymmetric mode, suggesting that the critical viscosity ratio required is higher.
\begin{figure}
\centering
\begin{subfigure}{\textwidth}
\includegraphics[width=0.98\textwidth]{M=1_flowviz.jpg}
\caption{}
\end{subfigure}
\begin{subfigure}{\textwidth}
\includegraphics[height=3.5in]{M=45_flowviz.jpg}
\caption{}
\end{subfigure}
\caption{(a) Growth of axisymmetric instabilities for M=1 and multiple Re, (b) Helical modes observed at M=45, Re = }
\label{fig:flowviz}
\end{figure}
\begin{figure}
\centering
\includegraphics[height=4in]{constant_Re_varying_M.jpg}
\caption{Images showing the transition from helical to axisymmetric modes as M is decreased}
\label{fig:const_Re_varying_M}
\end{figure}
\begin{figure}
\centering
\includegraphics[height=3in]{M_Re.jpg}
\caption{Transition boundary in (M,Re) space from helical to axisymmetric modes. Both modes appear to be present at low Re.}
\label{fig:M_Re_plane}
\end{figure}
\clearpage
\subsection{Hot Film Anemometry}
Hot film anemometry was used to characterize the flow for values of M greater than unity. With the mixing of the two fluids and the change in Prandtl number, the calibration for the hot film could no longer be used, and so the voltage response is presented instead. Here we are interested in the spectral content of the velocity fluctuations at different downstream distances, as well as the rate of growth of the disturbance relative to the constant property jet. Figures \ref{fig:HW_FFT_vs_z}(a) and (b) show the evolution of the spectrum along the centerline and in the shear layer for M= * and Re=1682. A distinct frequency peak is visible at all locations in the near field. The strength of voltage fluctuations (Fig. \ref{fig:vol_fluc}) for M=1 shows a relatively gentle increase downstream; for large M, the strength shows a sharp increase within one jet diameter, appearing to saturate within a few diameters.
The dependence of the dominant frequency on the viscosity ratio, as detected by hot film anemometry, is plotted in Fig. \ref{fig:HW_f_vs_M}. Following the experimental sequence and moving from high values of M to low values, one sees an increasing trend while the helical mode remains dominant.
\begin{figure}
\centering
\vspace{3in}
\caption{Evolution of velocity spectra in the downstream direction for M=39, Re= on the jet axis and in the shear layer. Top: centerline variation for Re=2013. Bottom: spectra in the shear layer. }
\label{fig:HW_FFT_vs_z}
\end{figure}
\begin{figure}
\centering
\vspace{3in}
\caption{Root mean square value of voltage fluctuations along the centerline and shear layer for M=1 and M=45 at Re=2000}
\label{fig:vol_fluc}
\end{figure}
\begin{figure}
\centering
\vspace{3in}
\caption{Dominant frequency, as identified by hot film anemometry, as a function of M for Re=1682.}
\label{fig:HW_f_vs_M}
\end{figure}
\clearpage
\subsection{Image Analysis}
The hot film measurements strongly indicate the existence of a single dominant mode that saturates in intensity in the first few diameters downstream of the jet exit. However, the increased conductivity of the liquid due to the dissolved salts resulted in increased contamination and pickup of electrical line noise despite probe shielding; together with the occasional air bubbles introduced into the tank during mixing that would stick to the hot film, this resulted in a very low rate at which meaningful data were acquired. As a result, image data were chosen as a means of investigating the growth of unstable modes. The orange filter on the camera lens ensured that the jet could be strongly distinguished against the background, by isolating the emission from the Rhodamine dye under blue illumination. Applying a threshold intensity to the grayscale images allows determination of the jet boundary; the diameter of the jet as determined from the result was only weakly sensitive to the threshold value, but we are primarily interested in the frequency, which is unaffected by the choice of threshold. To study the spatial evolution of the oscillations of the interface, we examine the jet width at four locations downstream of the jet exit, as plotted in Fig. \ref{fig:jet_width_oscillations}. The amplitude of oscillations shows a non-linear increase, and in Fig. \ref{fig:interface_FFT} we further examine the amplitude and frequency of these oscillations. Figure \ref{fig:interface_FFT}(a) shows the sharp peak in the power spectrum that might be surmised from the oscillatory waveform in Fig. \ref{fig:jet_width_oscillations}. Figure \ref{fig:interface_FFT}(b) shows the variation of the amplitude of the dominant frequency in the downstream direction. Again, as with the anemometry measurements, the oscillations show an exponential increase in the disturbance amplitude, before saturating at z/D=* .
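The width-extraction procedure (threshold each frame, count pixels above threshold, Fourier-transform the resulting width record) can be sketched on a synthetic image sequence. The frame rate matches the 500 fps used here, but the oscillation frequency, image size and intensities are invented for illustration.

```python
import numpy as np

fps = 500.0                       # camera frame rate used in the experiments
n_frames, n_px = 1000, 200
f0 = 40.0                         # hypothetical interface oscillation frequency [Hz]
t = np.arange(n_frames) / fps
x = np.arange(n_px)

# Synthetic image rows: a bright jet core whose half-width oscillates in time.
half_w = 30 + 4 * np.sin(2 * np.pi * f0 * t)
rows = (np.abs(x[None, :] - n_px / 2) < half_w[:, None]).astype(float) * 200.0

threshold = 100.0                 # grayscale threshold separating jet from background
width = (rows > threshold).sum(axis=1)    # jet width in pixels, one value per frame

W = np.fft.rfft(width - width.mean())
f = np.fft.rfftfreq(n_frames, d=1.0 / fps)
print(f[np.argmax(np.abs(W) ** 2)])       # recovers the imposed frequency
```

The pixel-level quantization of the width record adds harmonics but leaves the fundamental peak dominant, which is why the extracted frequency is insensitive to the threshold choice.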
\begin{figure}[b]
\centering
\vspace{3in}
\caption{(a) Time variation of the jet width in pixels at different downstream distance (b) Square of the amplitude of the coefficients of the Fourier transform, $A^2(f)$ }
\label{fig:jet_width_oscillations}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=0.95\textwidth]{Power_spectrum_A2.png}
\caption{}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=0.95\textwidth]{A2_growth_zbyD.png}
\caption{}
\end{subfigure}
\caption{(a) Power spectrum of oscillation of interface at z/D= for M=38, Re=2400. (b) Growth of the square of the amplitude of the Fourier coefficient of the dominant mode in the downstream direction}
\label{fig:interface_FFT}
\end{figure}
To ascertain the nature of this instability, which develops much faster than the axisymmetric instability of the constant property jet, we verified that the frequencies at the different downstream stations shown in Fig. \ref{fig:jet_width_oscillations} are identical. Another way of assessing the spatially invariant `global' nature of this frequency is to examine the intensity records at single pixels in the shear layer. Figure \ref{fig:pxfluc} shows the frequency spectrum of four pixels at two different downstream locations, on either side of the jet. The frequencies are identical, and provide further circumstantial evidence that the instability observed at large M is a global mode, corresponding to absolute instability of the near-field profiles. This putative global mode has a frequency that depends on the parameters that define the flow, such as the inlet Reynolds number, viscosity ratio M, and the inlet velocity profile, specified by the momentum thickness $\theta$. In the experiment, the values of Re and $\theta$ are conjoined through the specific geometry of the nozzle. Further, it is experimentally difficult to conduct trials at constant M while changing Re. Therefore, we present the global mode frequency at constant Re (and $\theta$) as a function of M. Figure \ref{fig:f_lambda_vs_M}(a) shows that the waves developing on the interface have a frequency that decreases as the ambient viscosity is reduced from a starting value of M$\approx$ 45. For this data set taken at Re= 2000, there is a sharp increase in frequency near the observed transition from helical to axisymmetric mode. We interpret this as further evidence of the helical mode being driven by an absolute instability of near-field profiles. After the transition to the axisymmetric mode, there is a further decrease in frequency, before the curve displays an asymptotic behavior as M is further reduced.
The corresponding wavelengths, as determined from inspection of the images, are shown in Figure \ref{fig:f_lambda_vs_M}(b). Knowledge of the wavelength and frequency allows us to calculate the phase velocity of the dominant mode; this is plotted in Figure \ref{fig:f_lambda_vs_M}(c).
\begin{figure}
\centering
\includegraphics[width=\textwidth]{pxfluc_FFT.png}
\caption{Spectrum of pixel intensity fluctuation in shear layer at four locations in the near-field of the jet.}
\label{fig:pxfluc}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=0.9\textwidth]{f_vs_M.png}
\caption{}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=0.9\textwidth]{lambda_vs_M.png}
\caption{}
\end{subfigure}
\caption{Variation of instability characteristics along the constant (Re,$\theta$) curve, as M decreases during the experiment. (a) Frequency (b) Wavelength (c) Phase velocity }
\label{fig:f_lambda_vs_M}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=4in]{SVD_values.jpg}
\caption{The first twenty singular values of the spatial modes, obtained from a sequence of images at Re=2400 and M=39.}
\label{fig:SVD_values}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}{0.32\textwidth}
\includegraphics[width=\textwidth]{SVD_Mode_1.eps}
\caption{}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\includegraphics[width=\textwidth]{SVD_Mode_2.eps}
\caption{}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\includegraphics[width=\textwidth]{SVD_Mode_3.eps}
\caption{}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\includegraphics[width=\textwidth]{SVD_Mode_4.eps}
\caption{}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\includegraphics[width=\textwidth]{SVD_Mode_5.eps}
\caption{}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\includegraphics[width=\textwidth]{SVD_Mode_6.eps}
\caption{}
\end{subfigure}
\caption{The first 6 spatial modes, obtained using Singular Value Decomposition performed on 4000 images acquired for a helical mode at Re=2009, M=39}
\label{fig:SVD_Modes}
\end{figure}
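The spatial modes in Fig. \ref{fig:SVD_Modes} follow from a snapshot-POD of the image stack: subtract the temporal mean, reshape each frame into a row vector, and take the SVD of the resulting matrix. A minimal sketch on a synthetic traveling wave (standing in for the actual images, which are not reproduced here) shows the characteristic pairing of modes that a propagating disturbance produces:

```python
import numpy as np

# Synthetic stack of "images": a traveling wave (one coherent structure)
# plus pixel noise.
rng = np.random.default_rng(1)
nt, ny, nx = 400, 32, 32
t = np.linspace(0, 4 * np.pi, nt)
y, x = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
wave = np.sin(0.4 * x[None] - t[:, None, None]) * np.exp(-((y[None] - 16) / 8) ** 2)
frames = wave + 0.1 * rng.standard_normal((nt, ny, nx))

# Snapshot POD: subtract the temporal mean, reshape to (time, pixels), SVD.
X = frames.reshape(nt, -1)
X = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)

energy = s**2 / np.sum(s**2)     # fractional energy per mode
print(energy[:4])                # a traveling wave appears as a dominant pair
```

The rows of `Vt`, reshaped to the image dimensions, are the spatial modes; a traveling wave splits into two modes of nearly equal energy that are in spatial quadrature, which is the pattern seen in the leading singular values of Fig. \ref{fig:SVD_values}.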
\textbf{Acknowledgement} We are grateful for useful discussions with David Forliti and Paul Strykowski during the preparation of this manuscript. We also acknowledge the assistance from Akash Dhotre and Justin Chen in acquiring some of the images.
\clearpage
\bibliographystyle{unsrt}
\section{\label{intro} Introduction}
Shear layers with spatially variable fluid physical properties occur in a variety of industrial and natural systems. The variations in density and/or viscosity may occur due to temperature gradients, as in the case of a plasma torch \citep{Duan2002}, static mixers \citep{Cao2003}, reacting flows \citep{Pathikonda2021}, or due to species concentration gradients, such as salinity gradients set up when an estuary enters an ocean. While most situations will feature both density and viscosity effects, the fluid dynamics of variable density jets have been extensively studied, both theoretically \citep{Huerre1985, Yu1990, Lesshafft2006} and experimentally \citep{Kyle1993, Yu1993, Hallberg2006}. In particular, low-density jets have been shown to be a member of a class of globally unstable flows \citep{Huerre1990}, which are characterized by the sudden onset of a regime with enhanced mixing, self-sustained oscillations and insensitivity to external forcing. The frequency of the global modes in the near-field of low density jets has been linked to the existence of local profiles over a finite streamwise extent that are absolutely unstable in the framework of local spatio-temporal linear stability analysis \citep{Chomaz2005, Pier2001}. While the primary mechanism of breakdown of the flow is inviscid, arising from the baroclinic torque established by gradients in density and pressure, there are some indications \citep{Hallberg2006} that viscosity does modify the onset of global modes, as well as their frequency. \citet{Hallberg2006} conducted experiments with multiple nozzle geometries, thereby independently studying the effects of shear layer thickness, density ratio and jet Reynolds number, and found a weak but perceptible effect of jet Reynolds number on the global mode frequency.
The linear stability calculations of \citet{Lesshafft2006} and \citet{Srinivasan2010} also suggest that the frequency and transition boundary between convective and absolute instability are affected by the viscosity in the form of the Reynolds number. It is therefore natural to inquire into the effects of variations in viscosity between the jet and ambient, which is the focus of this study.
Strong gradients in viscosity are unlikely to be established in gas flows, and we look to other situations where the role of viscosity gradients has been more extensively investigated. In fact, in contrast to free shear flow, an extensive body of literature on variable viscosity flows addresses pressure-driven internal flows of high Schmidt number fluids \citep{Govindarajan2014b}. While viscosity is instinctively assumed to have a stabilizing influence on the growth of disturbances, it is responsible for altering the base state of a flow, often creating sharp velocity gradients through the no-slip condition and therefore serving as a source of disturbance kinetic energy. It has long been known \citep{Yih1967} that a jump in viscosity across a sharp interface can lead to long-wave instabilities at any Reynolds number or short-wave instabilities at low Re \citep{Hooper1983}. Here we focus on flows of miscible fluids; the immiscible situation is covered in reviews by \citet{Joseph1997} and, more recently, \citet{Govindarajan2014b}. Mention should also be made of the extensive work done in the context of liquid atomization, on planar shear layers with gas-liquid streams \citep{Matas2011, Otto2013, Fuster2013, Ling2019, Bozonnet2022}. Together, these studies have shown that viscous stability calculations are required to match theory with experiments on gas-liquid shear layers; that absolute instability of co-flowing gas/liquid streams is supported when velocity defects immediately downstream of a splitter plate are considered, and match experimentally observed frequencies; and that confinement and the finite thickness of the gas stream play an important role in destabilization. However, the viscosity ratios of the two streams, when considered, were always extremely small, and the effects of this ratio were rarely isolated.
\citet{Ern2003} showed the destabilizing effects of a finite thickness interface marked by gradients in velocity and viscosity, and demonstrated that for certain parameter ranges, the instability could be stronger than that of the corresponding sharp-interface configuration. Sharp gradients in the velocity profile, combined with variations in viscosity, lead to additional source terms in the equation for disturbance kinetic energy, which drive the growth of instabilities near the diffuse interface. \citet{Ranganathan2001} performed a temporal stability analysis of the effects of diffusion in channel flow of two fluids in a three-layer configuration, and found that when the critical layer (the region where the wave speed matches the mean velocity) overlapped the viscosity-stratified layer, the flow was strongly stabilized or destabilized, depending on whether the more viscous fluid was adjacent to the wall or in the interior. For the pipe geometry, Selvam et al. \citep{Selvam2007,Selvam2009} performed a linear stability analysis that predicted the onset of absolute instability for low Reynolds numbers when the viscosity contrast is sufficiently high, and the diffuse interface is located in a certain range of radial locations with respect to the pipe radius. They find that when the core fluid is more viscous, the flow is at best convectively unstable over a certain Reynolds number range, with the axisymmetric mode being dominant. When the less viscous liquid is in the core, helical modes are favored, and can lead to absolute instability. These calculations were supported by Direct Numerical Simulation and a global linear stability analysis. They partially reproduced the experimental observations of \citet{DOlce2008}, who reported pearl- and mushroom-shaped instabilities in the Reynolds number range of 2-60, with no sharp transitions in either wavelength or frequency that would provide strong evidence of a global mode.
The helical modes observed by \citet{Cao2003} for injection of a low-viscosity fluid into a static mixer are in line with the theoretical results discussed above.
Virtually no experimental data are available for miscible free shear layer flows with strong viscosity gradients, in either planar or cylindrical geometries. A notable situation that features shear layers with significant viscosity variation, accompanied by density variation, is that of the buoyant jet \citep{Subbarao1992, Bharadwaj2017}. This has been confirmed to be an absolute instability \citep{Chakravarthy2018} when realistic viscosity profiles are included and either the density ratio between the ambient and the jet is greater than 2, or the Richardson number is greater than 1; however, the viscosity ratios involved are not large.
It is difficult to translate insights from confined flows to unconfined shear layers such as jet flows, due to the fundamental differences in velocity profiles, characterized by inflection points in the case of jets. Any insights from pressure-driven flow studies have to be interpreted with caution, since confinement is known to play both stabilizing and destabilizing roles in other situations involving absolute instability of single phase \citep{Juniper2006a, Healey2009, Yang2021} or two-phase flows \citep{Bozonnet2022}. Further, the seminal works of Rayleigh and Tomotika \cite{Tomotika1935} considered capillary flows of liquid filaments in another viscous medium in the limit of negligible Reynolds number and are not relevant to the present work, which is focused on large Reynolds numbers. However, it is interesting to note that the helical mode is found to arise from an inviscid mechanism based on the Rayleigh criterion for the base profiles used. This would be expected to further favor the establishment of such modes in free shear flows. \citet{Sahu2014} considered a planar shear layer configuration, and studied the emergence of an overlap mode when the gradients in velocity and viscosity occur in the same region. Destabilization was enhanced when these layers overlapped, and with decreasing thickness of either of the gradient regions. In line with inviscid theory, the configuration was found to be absolutely unstable when countercurrent velocity profiles were used for the base state. More recently, \citet{Yang2022} carried out a linear stability analysis of base profiles corresponding to the near-field of a jet emerging into an ambient with a different viscosity.
Their base profiles reflected modifications to the standard tanh profile typically used in the analysis of jet flows \citep{Mattingly1974}, such as an inward radial shift due to the decelerating effects of a more viscous ambient, and concentration gradient regions that were much thinner than the momentum thickness. As is typical of jet flows, the axisymmetric and helical modes had nearly equal temporal growth rates over a wide range of conditions specified by the jet Reynolds number, ambient-to-jet viscosity ratio, and momentum and concentration layer thicknesses. However, beyond a critical value of viscosity ratio that was Reynolds number-dependent, absolute instability of the flow was supported, with the helical mode being strongly favored over the axisymmetric mode. A more systematic study of the transition boundary between convective and absolute instability as a function of the operating parameters is currently underway.
With the above preliminary results in mind, we carry out a study that seeks to isolate the effects of large viscosity contrast between a jet and its surroundings. The goals of the present study are to characterize the near-field of a low-viscosity jet at moderate Reynolds numbers ($1500 < Re < 3500$) for ambient-to-jet viscosity ratios ranging from 1 to 45, and to examine the flow field for any evidence of global modes. This article is structured as follows. Section 2 describes the experimental facility used to achieve a neutrally buoyant jet with high viscosity contrast. Section 3 describes the flow visualization and the observation of disturbance modes. Section 4 describes identification of the dominant modes using a Proper Orthogonal Decomposition (POD)-based technique applied to the images from visualization. Section 5 provides a summary and conclusions.
\section{Experiments}
The study of the effects of viscosity gradients alone on the development of instabilities in the near-field of a jet requires the elimination of density effects, as well as a quiet facility with a minimum amount of external disturbances. Accordingly, experiments were carried out in a jet facility shown in Fig. \ref{fig:jetfacility} that utilizes gravity to attain the required flow rates. The experiments were performed in the vertical configuration. A large overhead reservoir delivered fluid to a nozzle located in a test section of square cross-section through a flow meter, a diffuser section and a flow straightener composed of laminar flow elements. The nozzle has a fifth-order polynomial profile with zero slope and curvature at its inlet and exit planes, and imposes an area contraction of 87 on the entering flow. The nozzle exit diameter $D$ is 6 mm.
\begin{figure}
\centering
\includegraphics[height=3in]{figures/hotwire_setup.jpg}
\caption{Sketch of the jet facility used to produce a low-viscosity jet using gravity-driven flow}
\label{fig:jetfacility}
\end{figure}
Jet Reynolds numbers are defined based on the nozzle exit diameter D and the average velocity $\bar{U}$ of the flow, as inferred from measurement of the volumetric flow rate from the flowmeter (Dwyer ****, accuracy of 2\%),
\begin{equation}
Re = \frac{\rho \bar{U}D}{\mu _j}
\end{equation}
where $\mu _j$ is the viscosity of the injected fluid.
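As a quick numerical illustration of Eq. (1), the sketch below recovers $Re$ from a volumetric flowmeter reading; the flow rate, density and viscosity values are illustrative assumptions, not the calibrated values of the facility.

```python
import math

# Hypothetical helper: Re from a flowmeter reading, per Eq. (1).
# D, rho and mu below are assumed nominal values, not facility calibrations.
def jet_reynolds(Q_lpm, D=0.006, rho=1042.0, mu=1.04e-3):
    Q = Q_lpm / 1000.0 / 60.0           # L/min -> m^3/s
    U_bar = Q / (math.pi * D**2 / 4.0)  # bulk (average) exit velocity
    return rho * U_bar * D / mu

# A 0.5 L/min reading gives U ~ 0.29 m/s and Re ~ 1.8e3 for these values.
```
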
The requirement of having a wide range of viscosity contrast, defined in terms of the ambient-to-jet viscosity ratio, requires the use of liquids. Salt water of nominal density 1042 $kg/m^3$ is chosen as the test fluid, in order to facilitate density-matching as explained below. Reynolds numbers up to 4000 can be attained using this reservoir/nozzle combination.
The jet exhausts into the test section, which is made of transparent polycarbonate and has a square cross-section with inner dimensions of $240\times 240$ mm. Overflow ports near the top of the tank enable maintenance of a constant fluid height in the test section during operation. The top of the tank is open to allow direct mounting of a hot-film anemometry system. The fluid in the tank creates the desired viscosity ratio, which is defined as
\begin{equation}
M = \frac{\mu _\infty}{\mu _j}
\end{equation}
where the subscript $\infty$ refers to test section conditions. For this study, propylene glycol and salt water were used as the two fluids. Propylene glycol in its pure form has a viscosity of 42 $mPa \cdot s$, approximately 45 times that of water, and a density of 1036 $kg/m^3$, which is only a few percent above the density of water. The industrial-grade propylene glycol used in this work was often found to have even higher viscosity values, and therefore each batch of glycol was measured for its density and dynamic viscosity. A salt water solution was then prepared in order to match the density to better than a tenth of a percent ($\frac{\Delta \rho}{\rho} = |\frac{\rho _j -\rho _\infty}{\rho _j}| < 0.0005$). These fluids are Newtonian over the range of strain rates imposed and are fully miscible with each other, eliminating surface tension as a relevant parameter. Nevertheless, as we shall see, the interface has no time to thicken diffusively and essentially remains sharp in the near-field of the jet.
\subsection{Constant Property Jet Profiles}
Hot-film anemometry was used to first characterize the jet facility to establish the base flow for a constant property jet. For a water jet issuing into a water ambient, anemometry was used to characterize the mean velocity profiles and background noise level, as well as the shear layer thickness of the jet at the nozzle exit plane. Figure \ref{fig:meanprofiles}(a) shows velocity profiles emerging from the jet for multiple Reynolds numbers. The profiles are mostly top-hat, characterized by a steep decrease in magnitude in the shear layer towards the quiescent ambient fluid. A two-dimensional trace of voltage (Fig. \ref{fig:meanprofiles}b) at the exit plane (z/D=0.1) confirms the axisymmetric nature of these profiles. The momentum thickness of the shear layer was evaluated as a function of Reynolds number by integrating radially from the centerline to a location where the velocity decreased to 10\% of the centerline value; further radial measurements were avoided as the hot film responds unreliably to the low velocities in the entrained flow. The momentum thickness is evaluated as
\begin{equation}
\theta = \int_{0}^{\infty} \frac{U(r)-U_{\infty}}{U_c-U_{\infty}}\left[1-\frac{U(r)-U_{\infty}}{U_c-U_{\infty}}\right]dr
\end{equation}
The laminar nature of the jet boundary layer at the exit plane is checked (Fig. ~\ref{fig:meanprofiles}(c)) by observing a linear relationship between $D/\theta$ and $\sqrt{Re}$. The constants in the fit are unique to the nozzle geometry, reflecting the acceleration imposed by the area contraction and the resultant thinning of the boundary layer entering the nozzle. Profiles at multiple downstream locations within the first half-diameter can be well-represented by an equation of the form used by Mattingly and Chang \cite{Mattingly1974}:
\begin{equation}
\frac{u}{U_c} = \frac{1}{2}\left[1+\tanh\left(\frac{D}{8\theta}\left(\frac{1}{r}-r\right)\right)\right]
\end{equation}
where $r$ is normalized by the nozzle radius.
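As a consistency check, substituting this tanh profile into the momentum-thickness integral should recover the prescribed $\theta$ when the shear layer is thin. A minimal numerical sketch, with $r$ in units of the nozzle radius, a quiescent ambient ($U_\infty = 0$), and an illustrative (assumed) value of $D/\theta$:

```python
import numpy as np

# Tanh profile of the Mattingly & Chang form; r in units of the nozzle radius.
def tanh_profile(r, D_over_theta):
    return 0.5 * (1.0 + np.tanh((D_over_theta / 8.0) * (1.0 / r - r)))

D_over_theta = 80.0                   # assumed thin laminar shear layer
r = np.linspace(0.01, 3.0, 60001)
u = tanh_profile(r, D_over_theta)
f = u * (1.0 - u)                     # integrand of the momentum thickness
theta = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r)))  # trapezoid rule
# theta ~ 2 / D_over_theta = 0.025 nozzle radii, recovering D/theta = 80
```
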
We now turn to the fluctuations in the jet velocity at the exit plane, expressed as a turbulence intensity normalized by the average velocity. The profiles are shown in Fig. \ref{fig:turb_base}(a) alongside profiles measured by Todde et al. (2009) in their work on low Reynolds number free jets. The turbulence intensity profile has a comparable trend, although the present work has a much lower centerline turbulence intensity. This speaks to the benefit of a gravity-fed jet, free from disturbances originating in pumps or fans.
Lastly, we examine the spectral content of the flow at the exit plane in Fig. \ref{fig:turb_base}(b), and find no discrete peaks in the frequency spectrum, confirming that the jet is a low-turbulence system with little ambient noise.
\begin{figure}
\centering
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{selfsimilar.jpg}
\end{subfigure}
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[width=0.75\textwidth]{Ian_3D_vel_profile.jpg}
\end{subfigure}\\
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{figures/theta_vs_Re.png}
\end{subfigure}
\caption{(a) Velocity profiles at multiple Reynolds numbers (b) Two-dimensional trace of velocity at z/D=0.1, showing symmetry of the profile about the axis (c) Shear layer thickness as a function of Reynolds number }
\label{fig:meanprofiles}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{figures/toddeRe2500_1.png}
\end{subfigure}
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{M=1_spectrum.jpg}
\end{subfigure}\\
\caption{(a) Radial profile of turbulence intensity at Re = 1688, M=1 (b) Frequency spectrum of voltage fluctuations } \label{fig:turb_base}
\end{figure}
\subsection{Test Procedure}
For experiments with propylene glycol as the ambient fluid, each set of experiments is conducted at a constant Reynolds number governed by the flow rate. Over the course of several test runs, the viscosity of the tank fluid (and hence the value of M) decreases due to mixing with the injected salt water. This makes the test runs inherently quasi-transient. Therefore, a careful procedure was followed to minimize the effects of variation of M during each test run. We first estimate the variation of M during a typical test run. At a salt water-glycol interface, diffusion acts to thicken the interface to yield a concentration thickness given by $\sqrt{\gamma t}$, where $\gamma$ is the binary diffusion coefficient of propylene glycol into water ($1.1 \times 10^{-9} \hspace{0.2em}m^2/s$). For a one-minute-long trial run, this yields a diffusion length of the order of 0.01D. In practice, test runs were much shorter, typically lasting 20-30 s after the initial starting vortex had passed out of the field of view. During this period, high-speed images were acquired with a digital camera operating at 500 fps and at $1024\times 1024$ px resolution. After the image acquisition was completed, the flow was turned off, and the tank was stirred with a mixer and allowed to settle and become quiescent again before the next trial (typically 30 minutes). A sample of tank fluid was taken for subsequent viscosity and density measurement to determine the value of M for each trial. In this way, at each Reynolds number, values of M starting from 50 and descending to 15 were attained.
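The diffusion-length estimate above is easily checked. The sketch below evaluates $\sqrt{\gamma t}$ for run durations bracketing those used; the bare scaling without the usual O(1) prefactor is an assumption consistent with the text.

```python
import math

gamma = 1.1e-9   # m^2/s, binary diffusivity of propylene glycol in water
D = 0.006        # m, nozzle exit diameter

for t in (30.0, 60.0):               # typical run durations, s
    delta = math.sqrt(gamma * t)     # diffusive interface thickness
    print(f"t = {t:4.0f} s: delta = {delta*1e6:5.1f} um = {delta/D:.3f} D")
```

The output confirms that the interface thickens by only a few hundred microns, i.e. of order $10^{-2} D$, over a run.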
Since the salt water and propylene glycol were initially density-matched to within 0.1\% before the start of the experiments, it is reasonable to expect that the densities would remain matched throughout, even as the bulk viscosity in the tank decreased. This expectation was not fully borne out: aqueous solutions of propylene glycol undergo a slight, concentration-dependent contraction in volume. Figure \ref{fig:density_M_vs_trial}(a) shows the variation of density and viscosity in the test section after a series of trial runs. It is apparent that the specific gravity varies between the 1.036 of pure propylene glycol and a maximum of 1.051, or by about 1.44\%. The injected salt water jet continues to have a specific gravity of 1.036, raising the prospect of confounding buoyancy effects. The importance of buoyant forces relative to jet inertia is assessed by evaluating the Richardson number,
\begin{equation}
Ri = \frac{g\Delta \rho D}{\rho _mU_j^2}
\end{equation}
This is plotted in Fig. \ref{fig:density_M_vs_trial}(b) against the Reynolds number, assuming an average density difference of 0.7\% over all runs. For Reynolds numbers greater than 1500, Ri is less than 0.1 and can be ignored.
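This estimate can be reproduced in a few lines; the density difference follows the text, while the salt-water viscosity used to recover $U_j$ from $Re$ is an assumed nominal value.

```python
g, D = 9.81, 0.006                  # m/s^2, m
rho, mu = 1042.0, 1.04e-3           # assumed salt-water properties
drho_over_rho = 0.007               # average density difference over runs

def richardson(Re):
    """Ri = g * drho * D / (rho * U^2), with U recovered from Re."""
    U = Re * mu / (rho * D)         # jet exit velocity implied by Re
    return g * drho_over_rho * D / U**2

# Ri falls as 1/Re^2; for Re = 1500 it is already below 0.01.
```
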
\begin{figure}
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{Ian_rho_mu.jpg}
\end{subfigure}
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{figures/Ri_Re.png}
\end{subfigure}
\caption{(a) Variation of density and viscosity in test section during a sequence of runs (b) Richardson number as a function of Re}
\label{fig:density_M_vs_trial}
\end{figure}
\section{Results}
\subsection{Flow Visualization}
Preliminary images for M=1 (a water jet into a water ambient) were acquired with an 18 MP camera whose lens was equipped with an orange filter. The jet fluid was dyed with Rhodamine 6G, and the tank volume was illuminated with a blue LED light. The emission by rhodamine in the orange part of the spectrum was captured and shows the breakdown of the jet. Figure \ref{fig:flowviz} shows the axisymmetric nature of the instabilities dominating the breakdown process, after developing from an initially nearly parallel near-field region. As Re increases, this distance becomes palpably shorter. Unlike the observations of \citet{Mattingly1974}, no evidence of an eventual competition between the axisymmetric mode and a growing helical mode in the far-field is observed. On the other hand, when the jet emerges into an ambient medium of propylene glycol (M=45), helical instabilities are observed over a range of Re from approximately 1600 to 2600. Figure \ref{fig:flowviz}(b) shows a sequence of images taken at M=45 over a range of Reynolds numbers. Of note is the disappearance of the parallel flow region in the near-field, with the helical mode developing almost instantaneously at the exit. Some discrete bright spots visible in every image are artifacts due to bubbles introduced into the test section during the stirring process, which remain suspended due to the high fluid viscosity. We also note that the wavelength of the disturbances appears substantially shorter than that of the axisymmetric instability at M=1.
These two sets of images suggest that there must exist a transition value (or range) of M for every fixed value of Re, where the dominant mode changes from axisymmetric to helical, and experiments were conducted to elucidate the transition behavior. Figure~\ref{fig:const_Re_varying_M} shows a sequence of images captured for Re= 2009. The transition of the dominant instability from helical to axisymmetric, as M decreases from * to *, is clearly evident. Nevertheless, it is difficult to assign a precise value for the transition value of M with confidence in all cases. Inspection of multiple images allows us to assign a transition value of M close to ** in this instance. However, in some cases, the images appear to show both axisymmetric and helical features, with two distinct frequency peaks in the hot film spectrum (discussed subsequently), and no clear transition is evident, especially at lower Re. Due to the nature of the experiment, involving discrete steps in M, a fine-grained transition value could not be determined in all cases. Nevertheless, observations clearly indicate that this transition value of M is Reynolds number-dependent. Figure~\ref{fig:M_Re_plane} shows our estimate for the transition value of M as a function of Reynolds number. For Re$>$ 1600, the transition M increases as a function of Re; below Re=1600, the instability was weak, and it was difficult to distinguish the nature of the mode. At Re=2800 (not shown in Fig. \ref{fig:M_Re_plane}), the first trial showed a helical mode before switching into an axisymmetric mode, suggesting that the critical viscosity required is higher.
\begin{figure}
\centering
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{M=1_flowviz.jpg}
\caption{}
\end{subfigure}
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{M=45_flowviz.jpg}
\caption{}
\end{subfigure}
\caption{(Top) Growth of axisymmetric instabilities for M=1 and multiple Re: (a) Re=428 (b) Re=1036 (c) Re=1545 (d) Re=2009 (e) Re=2539 (f) Re = 3009 (Bottom) Helical modes observed at M=45, Re = 1332, 1676, 2013, 2339.}
\label{fig:flowviz}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{constant_Re_varying_M.jpg}
\caption{Images showing the transition from helical to axisymmetric modes as M is decreased at a fixed Reynolds number of 2013. From left to right, the values of M are 45, 33, 28, 23 and 20; M=28 appears to be closest to the transition between modes.}
\label{fig:const_Re_varying_M}
\end{figure}
\begin{figure}
\centering
\includegraphics[height=3in]{M_Re.jpg}
\caption{Transition boundary in (M, Re) space between helical and axisymmetric modes.}
\label{fig:M_Re_plane}
\end{figure}
\clearpage
\subsection{Hot Film Anemometry}
Hot film anemometry was used to characterize the flow for values of M greater than unity. With the mixing of the two fluids and the change in Prandtl number, the calibration for the hot film could no longer be used, and the raw voltage response is presented instead. Here we are interested in the spectral content of the velocity fluctuations at different downstream distances, as well as the rate of growth of the disturbance relative to the constant property jet. Figures \ref{fig:HW_FFT_vs_z}(a) and (b) show the evolution of the spectrum along the centerline and in the shear layer for M= * and Re=1682. A distinct frequency peak is visible at all locations in the near field. The strength of the voltage fluctuations (Fig. \ref{fig:vol_fluc}) for M=1 shows a relatively gentle increase downstream; for large M the strength shows a sharp increase within one jet diameter, appearing to saturate within a few diameters.
The dependence of the dominant frequency on the viscosity ratio, as detected by hot film anemometry, is plotted in Fig. \ref{fig:HW_f_vs_M}. Following the experimental sequence and moving from high values of M to low values, one sees an increasing trend in the dominant frequency while the helical mode remains dominant.
\begin{figure}[hb]
\centering
\includegraphics[width=\textwidth]{fft_re_2013m_39.jpg}
\caption{Evolution of velocity spectra in the downstream direction for M=39, Re=2013. Top: spectra on the jet centerline. Bottom: spectra in the shear layer. }
\label{fig:HW_FFT_vs_z}
\end{figure}
\begin{figure}
\centering
\includegraphics[height=2.25in]{vol_fluc_re_2000.jpg}
\caption{Root mean square value of voltage fluctuations along the centerline and shear layer for M=1 and M=45 at Re=2000}
\label{fig:vol_fluc}
\end{figure}
\begin{figure}
\centering
\includegraphics[height=2.25in]{figures/Ian_St_vs_M.png}
\caption{Strouhal numbers corresponding to the dominant frequency, as identified by hot film anemometry, as a function of M for Re=1682.}
\label{fig:HW_f_vs_M}
\end{figure}
\subsection{Image Analysis}
The hot film measurements strongly indicate the existence of a single dominant mode that saturates in intensity in the first few diameters downstream of the jet exit. However, the increased conductivity of the liquid due to the dissolved salts led to increased contamination and pickup of electrical line noise despite probe shielding, and occasional air bubbles introduced into the tank during mixing would stick to the hot film; together, these resulted in a very low rate at which meaningful data were acquired. As a result, the flow visualization images were chosen as a means of investigating the growth of unstable modes. The orange filter on the camera lens ensured that the jet could be strongly distinguished against the background, by isolating the emission from the Rhodamine dye under blue illumination. Applying a threshold intensity to the grayscale images allows determination of the jet boundary; the diameter of the jet so determined was only weakly sensitive to the threshold value, and we are in any case primarily interested in the frequency, which is unaffected by the choice of threshold. To study the spatial evolution of the oscillations of the interface, we examine the jet width at 4 locations downstream of the jet exit, as plotted in Fig. \ref{fig:jet_width_oscillations}. The amplitude of oscillations shows a non-linear increase, and in Fig. \ref{fig:interface_FFT} we further examine the amplitude and frequency of these oscillations. Figure \ref{fig:interface_FFT}(a) shows the sharp peak in the power spectrum that might be surmised from the oscillatory waveform in Fig. \ref{fig:jet_width_oscillations}. Figure \ref{fig:interface_FFT}(b) shows the variation of the amplitude of the dominant frequency in the downstream direction. Again, as with the anemometry measurements, the oscillations show an exponential increase in the disturbance amplitude, before saturating at z/D=* .
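The thresholding-plus-FFT chain described above can be sketched as follows. The frame rate matches the stated 500 fps, while the synthetic width signal, the test frequency and the threshold value are stand-ins for the actual recordings.

```python
import numpy as np

# Hypothetical stand-in for the image records: a jet-width time series
# oscillating at f0, sampled at the camera frame rate.
fps, nframes, f0 = 500.0, 512, 40.0
t = np.arange(nframes) / fps
width_px = 60.0 + 8.0 * np.sin(2.0 * np.pi * f0 * t)

def measure_width(row, threshold):
    """Jet width along one pixel row: count of pixels above the
    dye-intensity threshold (only weakly sensitive to its exact value)."""
    return int(np.count_nonzero(row > threshold))

# Dominant oscillation frequency from the width record.
spectrum = np.abs(np.fft.rfft(width_px - width_px.mean()))**2
freqs = np.fft.rfftfreq(nframes, d=1.0 / fps)
f_dominant = float(freqs[np.argmax(spectrum)])   # close to f0
```
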
\begin{figure}[b]
\centering
\includegraphics[width=\textwidth]{figures/Jet_width_global.png}
\caption{(a) Time variation of the jet width in pixels at different downstream distance (b) Square of the amplitude of the coefficients of the Fourier transform, $A^2(f)$ }
\label{fig:jet_width_oscillations}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=0.95\textwidth]{Power_spectrum_A2.png}
\caption{}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=0.95\textwidth]{A2_growth_zbyD.png}
\caption{}
\end{subfigure}
\caption{(a) Power spectrum of oscillation of interface at z/D= for M=38, Re=2400. (b) Growth of the square of the amplitude of the Fourier coefficient of the dominant mode in the downstream direction}
\label{fig:interface_FFT}
\end{figure}
To ascertain the nature of this instability, which develops much faster than the axisymmetric instability of the constant property jet, we verified that the frequencies at the different downstream stations shown in Fig. \ref{fig:jet_width_oscillations} are identical. Another way of assessing the spatially invariant `global' nature of this frequency is to examine the intensity records at single pixels in the shear layer. Figure \ref{fig:pxfluc} shows the frequency spectrum at 4 pixels at two different downstream locations, on either side of the jet. The frequencies are identical, and provide further circumstantial evidence that the instability observed at large M is a global mode, corresponding to absolute instability of the near-field profiles. This putative global mode has a frequency that depends on the parameters that define the flow, such as the inlet Reynolds number, the viscosity ratio M, and the inlet velocity profile, specified by the momentum thickness $\theta$. In the experiment, the values of Re and $\theta$ are conjoined through the specific geometry of the nozzle. Further, it is experimentally difficult to conduct trials at constant M while changing Re. Therefore, we present the global mode frequency at constant Re (and $\theta$) as a function of M. Figure \ref{fig:f_lambda_vs_M}(a) shows that the waves developing on the interface have a frequency that decreases as the ambient viscosity is reduced from a starting value of M$\approx$ 45. For this data set, taken at Re= 2000, there is a sharp increase in frequency near the observed transition from helical to axisymmetric mode. We interpret this as further evidence of the helical mode being driven by an absolute instability of the near-field profiles. After the transition to the axisymmetric mode, there is a further decrease in frequency, before the curve displays an asymptotic behavior as M is further reduced.
The corresponding wavelengths, as determined from inspection of the images, are shown in Figure \ref{fig:f_lambda_vs_M}(b). Knowledge of the wavelength and frequency allows us to calculate the phase velocity of the dominant mode; this is plotted in Figure \ref{fig:f_lambda_vs_M}(c).
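The conversions involved are elementary; for reference, a sketch with illustrative (not measured) numbers:

```python
def phase_velocity(f_hz, wavelength_m):
    """Phase velocity c = f * lambda of the dominant mode."""
    return f_hz * wavelength_m

def strouhal(f_hz, D=0.006, U=0.35):
    """St = f D / U; D and U here are assumed nominal values."""
    return f_hz * D / U

# e.g. a 40 Hz mode with a 4 mm wavelength: c = 0.16 m/s, St ~ 0.69
```
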
\begin{figure}
\centering
\includegraphics[width=\textwidth]{pxfluc_FFT.png}
\caption{Spectrum of pixel intensity fluctuation in shear layer at four locations in the near-field of the jet.}
\label{fig:pxfluc}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=0.9\textwidth]{figures/Image_St_vs_M.png}
\caption{}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=0.9\textwidth]{figures/Image_lambda_vs_M.png}
\caption{}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=0.9\textwidth]{figures/Image_phasevel_vs_M.png}
\caption{}
\end{subfigure}
\caption{Variation of instability characteristics along the constant (Re,$\theta$) curve, as M decreases during the experiment. (a) Frequency (b) Wavelength (c) Phase velocity }
\label{fig:f_lambda_vs_M}
\end{figure}
\section{Summary and Discussion}
We have presented experiments characterizing the flow behavior for the specific configuration of a low-viscosity jet with a laminar boundary layer emerging into an ambient medium of relatively higher viscosity. The fluids, while perfectly miscible, have a low binary diffusion coefficient, with the result that the diffusion region is expected to be extremely thin, approaching a sharp interface with zero surface tension. The flow visualization results clearly indicate the onset of a helical mode at or close to the nozzle exit plane when the viscosity ratio M is increased substantially beyond unity, with enhanced mixing relative to the M=1 case. For that baseline case, it is apparent that the jet can stay coherent and nearly parallel for a significant distance downstream, indicating a slowly evolving spatial instability that saturates in the far-field. For large M, hot film anemometry confirms the presence of a self-sustained oscillation with a discrete peak in the spectrum, with the disturbance energy rapidly growing and saturating within the first few diameters downstream. Further, we have characterized this dominant frequency as a function of M and Re over a limited range, showing that this frequency is an increasing function of Re and a decreasing function of M. However, over the range of M studied, the anemometry does not display any sharp jumps in the frequency, leaving open the question of the convective/absolute nature of the instability. To resolve issues with anemometry in conducting fluids, a larger range of Reynolds numbers was examined using image analysis. With this technique, the initial linear growth, followed by an exponential rise in the amplitude of oscillations, is more clearly evident. We are able to confirm the frequency trends observed with anemometry, and additionally observe a discrete jump in frequency at a certain value of M, as well as the spatial invariance of this frequency.
Turning to the convective/absolute nature of the instability, at first glance, one cannot rule out the possibility that these observations reflect a rapidly growing convective instability that happens to be clearly visible against a low-noise background established by pump-less flow. However, there are several features of the flow, as discussed above, that suggest otherwise. It is generally accepted that for an instability to be considered a global mode that arises from a linear, absolutely unstable mechanism (as understood in the context of spatio-temporal linear stability analysis), it should display certain hallmarks. These are: the presence of a single self-sustained oscillation that can be detected everywhere in the domain; the sharp onset of a regime of enhanced mixing in which this instability appears, as determined by the appropriate control parameters; and the insensitivity to external forcing. The first criterion is found to be amply satisfied in the present experiment. The image analysis does show a sharp jump in frequency and wavelength as the viscosity ratio is decreased, and the transition value of M is reasonably consistent with visual observations. Further evidence for the sharp onset is afforded by an examination of the singular values obtained through Principal Component Analysis, which suggest that near the critical value of M corresponding to the frequency jump, there is always only one dominant mode, implying no mode competition close to the transition point. 
We also note that, in line with the theoretical understanding that the development length for the global mode is, in principle, infinite at the transition boundary for onset of the global mode (in terms of the controlling parameters) \citep{Couairon1997} but shortens as one moves into the regime of absolute instability, the disappearance of a region of parallel flow for large M, compared to M=1, is an indicator of the instability being controlled by inlet conditions.
The question of sensitivity (or lack thereof) to external noise remains to be explored. Since the flow is gravity-fed, pump-driven oscillations such as those used by \citet{dOlce2009} are difficult to implement, and current work focuses on applying vibrations to the diffuser section upstream of the nozzle. A strong response by the system, in terms of enhanced amplitude of oscillations, to forcing frequencies near the natural frequency of the instability would provide further strong support for the idea that the observed helical mode corresponds to the absolutely unstable mode found in the companion computational paper by \citet{Yang2021}. However, it should be noted that this may not necessarily constitute clinching evidence; indeed, \citet{Hallberg2008} have shown that the global mode in a low-density jet can be overwhelmed by sufficiently strong forcing. We also note that the decrease in the Strouhal number with increasing M, as exemplified in Figs. \ref{fig:HW_f_vs_M} and \ref{fig:f_lambda_vs_M}(a), is closely aligned with the behavior of the absolutely unstable frequency predicted by spatio-temporal linear stability analysis for a specific tanh-type profile \citep{Yang2022}.
Lastly, it should be noted that global mode characteristics are closely linked to inlet profiles, and therefore the influence of the boundary layer thickness needs to be explicitly addressed. In the present study, this thickness is directly linked to the Reynolds number through the nozzle contraction profile, and future studies will require disentangling the relative contributions of these two parameters to the evolution of the flow.
\textbf{Acknowledgement} We are grateful for useful discussions with David Forliti and Paul Strykowski during the preparation of this manuscript. We also acknowledge the assistance from Justin Chen in acquiring some of the images.
\bibliographystyle{jfm}
\section{\label{intro} INTRODUCTION}
Shear layers with spatially variable fluid physical properties occur in a variety of industrial and natural systems. The variations in density and/or viscosity may occur due to temperature gradients, as in the case of a plasma torch \cite{Duan2002}, static mixers \cite{Cao2003} and reacting flows \cite{Pathikonda2021}, or due to species concentration gradients, such as salinity gradients set up when an estuary enters an ocean. While most situations will feature both density and viscosity effects, the fluid dynamics of variable density jets has been extensively studied, both theoretically \cite{Huerre1985, Yu1990, Lesshafft2006} and experimentally \cite{Kyle1993, Yu1993, Hallberg2006}. In particular, low-density jets have been shown to be members of a class of globally unstable flows \cite{Huerre1990}, which are characterized by the sudden onset of a regime with enhanced mixing, self-sustained oscillations and insensitivity to external forcing. The frequency of the global modes in the near-field of low density jets has been linked to the existence of local profiles over a finite streamwise extent that are absolutely unstable in the framework of local spatio-temporal linear stability analysis \cite{Chomaz2005, Pier2001}. While the primary mechanism of breakdown of the flow is inviscid, arising from the baroclinic torque established by gradients in density and pressure, there are strong indications \cite{Hallberg2006} that viscosity does modify the onset of global modes, as well as their frequency. Hallberg and Strykowski [] conducted experiments with multiple nozzle geometries, thereby independently studying the effects of shear layer thickness, density ratio and jet Reynolds number, and found a weak but perceptible effect of jet Reynolds number on the global mode frequency. The linear stability calculations of Lesshafft \cite{Lesshafft2006} and Srinivasan et al. \cite{Srinivasan2010} also suggest that the frequency and the transition boundary between convective and absolute instability are affected by viscosity in the form of the Reynolds number. It is therefore natural to inquire into the effects of variations in viscosity between the jet and the ambient, which is the focus of this study. \\
Strong gradients in viscosity are unlikely to be established in gas flows, and we look to other situations where the role of viscosity gradients has been more extensively investigated. In fact, in contrast to free shear flow, an extensive body of literature on variable viscosity flows addresses pressure-driven internal flows of high Schmidt number fluids \cite{Govindarajan2014b}. While viscosity is instinctively assumed to have a stabilizing influence on the growth of disturbances, it is responsible for altering the base state of a flow, often creating sharp velocity gradients through the no-slip condition and therefore serving as a source of disturbance kinetic energy. It has long been known that a jump in viscosity across a sharp interface can lead to long-wave instabilities at any Reynolds number \cite{Yih1967} or short-wave instabilities at low Re \cite{Hooper1983}. Here we focus on flows of miscible fluids; the immiscible situation is covered in reviews by Joseph et al. \cite{Joseph1997} and, more recently, Govindarajan and Sahu \cite{Govindarajan2014b}. Mention should also be made of the extensive work done in the context of liquid atomization, on planar shear layers with gas-liquid streams \cite{Matas2011, Otto2013, Fuster2013, Ling2019, Bozonnet2022}. Together, these studies have shown that viscous stability calculations are required to match theory with experiment; that absolute instability of co-flowing gas/liquid streams is supported when velocity defects immediately downstream of a splitter plate are considered, and matches experimentally observed frequencies; and that confinement and the finite thickness of the gas stream play an important role in destabilization. However, the viscosity ratios of the two streams, when considered, were always extremely small, and the effects of this ratio were rarely isolated.
Ern et al. \cite{Ern2003} showed the destabilizing effects of a finite-thickness interface marked by gradients in velocity and viscosity, and demonstrated that for certain parameter ranges the instability could be stronger than that of the corresponding sharp-interface configuration. Sharp gradients in the velocity profile, combined with variations in viscosity, lead to additional source terms in the equation for disturbance kinetic energy, which drive the growth of instabilities near the diffuse interface. Ranganathan and Govindarajan \cite{Ranganathan2001} performed a temporal stability analysis of the effects of diffusion in channel flow of two fluids in a three-layer configuration, and found that when the critical layer (the region where the wave speed matches the mean velocity) overlaps the viscosity gradient region, the flow is strongly stabilized or destabilized, depending on whether the more viscous fluid is adjacent to the wall or in the interior. For the pipe geometry, Selvam et al. \cite{Selvam2007,Selvam2009} performed a linear stability analysis that predicted the onset of absolute instability at low Reynolds numbers when the viscosity contrast is sufficiently high and the diffuse interface is located in a certain range of radial locations with respect to the pipe radius. They found that when the core fluid is more viscous, the flow can be at best convectively unstable over a certain Reynolds number range, with the axisymmetric mode being dominant. When the less viscous liquid is in the core, helical modes are favored, and can lead to absolute instability. These calculations were supported by direct numerical simulation and a global linear stability analysis. They partially reproduced the experimental observations of \cite{DOlce2008}, who reported pearl- and mushroom-shaped instabilities in the Reynolds number range of 2-60, with no sharp transitions in either wavelength or frequency that would provide strong evidence of a global mode.
The helical modes observed by \cite{Cao2003} for injection of a low-viscosity fluid into a static mixer are in line with the theoretical results discussed above.\\
Virtually no experimental data are available for miscible free shear layer flows with strong viscosity gradients, in either planar or cylindrical geometries. A notable situation that features shear layers with significant viscosity variation, accompanied by density variation, is that of the buoyant jet \cite{Subbarao1992, Bharadwaj2017}. This has been confirmed to be an absolute instability \cite{Chakravarthy2018} when realistic viscosity profiles are included and either the density ratio between the ambient and the jet is greater than 2, or the Richardson number is greater than 1; however, the viscosity ratios involved are not large.
It is difficult to translate insights from confined flows to unconfined shear layers such as jet flows, due to the fundamental differences in velocity profiles, characterized by inflection points in the case of jets. Any insights from pressure-driven flow studies have to be interpreted with caution, since confinement is known to play both stabilizing and destabilizing roles in other situations involving absolute instability of single-phase \cite{Juniper2006a, Healey2009, Yang2021} or two-phase flows \cite{Bozonnet2022}. Further, the seminal works of Rayleigh [] and Tomotika \cite{Tomotika1935} considered capillary flows of liquid filaments in another viscous medium in the limit of negligible Reynolds number, and are not relevant to the present work, which is focused on large Reynolds numbers. However, it is interesting to note that the helical mode is found to arise from an inviscid mechanism based on the Rayleigh criterion for the base profiles used. This would be expected to further favor the establishment of such modes in free shear flows. Sahu and Govindarajan \cite{Sahu2014} considered a planar shear layer configuration, and demonstrated the emergence of an overlap mode when the gradients in velocity and viscosity occur in the same region. Destabilization was enhanced when these layers overlapped, and with decreasing thickness of either of the gradient regions. In line with inviscid theory, the configuration was found to be absolutely unstable when countercurrent velocity profiles were used for the base state. More recently, Yang and Srinivasan \cite{Yang2022} carried out a linear stability analysis of base profiles corresponding to the near-field of a jet emerging into an ambient with a different viscosity. 
Their base profiles reflected modifications to the standard tanh profile typically used in the analysis of jet flows \cite{Mattingly1974}, such as an inward radial shift due to the decelerating effects of a more viscous ambient, and concentration gradient regions that were much thinner than the momentum thickness. As is typical of jet flows, the axisymmetric and helical modes had nearly equal temporal growth rates over a wide range of conditions specified by the jet Reynolds number, ambient-to-jet viscosity ratio, and momentum and concentration layer thicknesses. However, beyond a critical value of viscosity ratio that was Reynolds number-dependent, absolute instability of the flow was supported, with the helical mode being strongly favored over the axisymmetric mode. A more systematic study of the transition boundary between convective and absolute instability as a function of the operating parameters is currently underway. \\
With the above preliminary results in mind, we carry out a study that seeks to isolate the effects of large viscosity contrast between a jet and its surroundings. The goals of the present study are to characterize the near-field of a low-viscosity jet at moderate Reynolds numbers ($1500 < Re < 3500$) for ambient-to-jet viscosity ratios ranging from 1 to 45, and to examine the flow field for any evidence of global modes. This article is structured as follows. Section 2 describes the experimental facility used to achieve a neutrally buoyant jet with high viscosity contrast. Section 3 describes the flow visualization and the observation of disturbance modes. Section 4 describes the identification of the dominant modes using a Proper Orthogonal Decomposition (POD)-based technique applied to the visualization images. Section 5 provides a summary and conclusions.
\section{Experiments}
The study of the effects of viscosity gradients alone on the development of instabilities in the near-field of a jet requires the elimination of density effects, as well as a quiet facility with a minimum amount of external disturbances. Accordingly, experiments were carried out in a jet facility shown in Fig. \ref{fig:jetfacility} that utilizes gravity to attain the required flow rates. The experiments were performed in the vertical configuration. A large overhead reservoir delivered fluid to a nozzle located in a test section of square cross-section through a flow meter, a diffuser section and a flow straightener composed of laminar flow elements. The nozzle has a fifth-order polynomial profile with zero slope and curvature at its inlet and exit planes, and imposes an area contraction of 87 on the entering flow. The nozzle exit diameter $D$ is 6 mm. \\
Jet Reynolds numbers are defined based on the nozzle exit diameter D and the average velocity $\bar{U}$ of the flow, as inferred from measurement of the volumetric flow rate from the flowmeter (Dwyer ****, accuracy of 2\%),
\begin{equation}
Re = \frac{\rho \bar{U}D}{\mu _j}
\end{equation}
where $\mu _j$ is the viscosity of the injected fluid.
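For orientation, the conversion from the measured volumetric flow rate to Re can be sketched as follows; the flow rate, density and viscosity values here are hypothetical examples, not measured data:

```python
import math

def reynolds_number(Q_lpm, D, rho, mu):
    """Jet Reynolds number from the volumetric flow rate.

    Q_lpm : volumetric flow rate in litres per minute (flowmeter reading)
    D     : nozzle exit diameter in metres
    rho   : fluid density in kg/m^3
    mu    : dynamic viscosity of the injected fluid in Pa.s
    """
    Q = Q_lpm * 1.0e-3 / 60.0            # convert L/min to m^3/s
    U_bar = Q / (math.pi * D**2 / 4.0)   # average exit velocity
    return rho * U_bar * D / mu

# Salt water (rho ~ 1042 kg/m^3, mu ~ 1e-3 Pa.s) through the 6 mm nozzle:
Re = reynolds_number(0.5, 0.006, 1042.0, 1.0e-3)
```

Since Re is linear in the flow rate, the quoted 2\% flowmeter accuracy propagates directly into a 2\% uncertainty in Re.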
The requirement of a wide range of viscosity contrast, defined in terms of the ambient-to-jet viscosity ratio, necessitates the use of liquids. Salt water of nominal density 1042 $kg/m^3$ is chosen as the test fluid, in order to facilitate density-matching as explained below. Reynolds numbers up to 4000 can be attained using this reservoir/nozzle combination. \\
\begin{figure}
\centering
\includegraphics[height=4in]{figures/hotwire_setup.jpg}
\caption{Sketch of the jet facility used to produce a low-viscosity jet using gravity-driven flow}
\label{fig:jetfacility}
\end{figure}
The jet exhausts into the test section, which is made of transparent polycarbonate and has a square cross-section with inner dimensions $240\times 240$ mm. Overflow ports near the top of the tank enable maintenance of a constant fluid height in the test section during operation. The top of the tank is open to allow direct mounting of a hot-film anemometry system. The fluid in the tank creates the desired viscosity ratio, which is defined as
\begin{equation}
M = \frac{\mu _\infty}{\mu _j}
\end{equation}
where the subscript $\infty$ refers to test section conditions. For this study, propylene glycol and salt water were used as the two fluids. Propylene glycol in its pure form has a viscosity of 42 mPa$\cdot$s, approximately 45 times that of water, and has a density of 1036 $kg/m^3$, which is only a few percent above the density of water. Industrial-grade propylene glycol used in this work was often found to have even higher viscosity values, and therefore each batch of glycol was measured for its density and dynamic viscosity. A salt water solution was then prepared in order to match the density to within a tenth of a percent ($\frac{\Delta \rho}{\rho} = |\frac{\rho _j -\rho _\infty}{\rho _j}| < 0.0005$). These fluids are Newtonian over the range of strain rates imposed and are fully miscible with each other, eliminating surface tension as a relevant parameter. Nevertheless, as we shall see, the interface has no time to thicken diffusively and remains essentially sharp in the near-field of the jet.
\subsection{Constant Property Jet Profiles}
Hot-film anemometry was used to first characterize the jet facility and establish the base flow for a constant property jet. For a water jet issuing into a water ambient, anemometry was used to characterize the mean velocity profiles and background noise level, as well as the shear layer thickness of the jet at the nozzle exit plane. Figure \ref{fig:meanprofiles}(a) shows velocity profiles emerging from the jet for multiple Reynolds numbers. The profiles are mostly top-hat, characterized by a steep decrease in magnitude in the shear layer towards the quiescent ambient fluid. A two-dimensional trace of voltage (Fig. \ref{fig:meanprofiles}b) at the exit plane (z/D=0.1) confirms the axisymmetric nature of these profiles. The momentum thickness of the shear layer was evaluated as a function of Reynolds number by integrating radially from the centerline to a location where the velocity decreased to 10\% of the centerline value; further radial measurements were avoided as the hot film responds unreliably to the low velocities in the entrained flow. The momentum thickness is evaluated as
\begin{equation}
\theta = \int_{0}^{\infty} \frac{U(r)-U_{\infty}}{U_c-U_{\infty}}\left[1-\frac{U(r)-U_{\infty}}{U_c-U_{\infty}}\right]dr
\end{equation}
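As a numerical sanity check of the momentum-thickness integral, one can evaluate it on a synthetic top-hat profile with a tanh shear layer; the integral should recover the nominal layer thickness. The profile parameters below are illustrative, not measured values:

```python
import numpy as np

# Synthetic top-hat jet with a tanh shear layer of nominal momentum
# thickness theta0 (all lengths hypothetical, not measured data).
R, theta0 = 3.0e-3, 1.5e-4              # jet radius, nominal thickness [m]
r = np.linspace(0.0, 2.0 * R, 20001)
Uc, Uinf = 0.30, 0.0                    # centerline and ambient velocities
U = Uinf + (Uc - Uinf) * 0.5 * (1.0 + np.tanh((R - r) / (2.0 * theta0)))

f = (U - Uinf) / (Uc - Uinf)
g = f * (1.0 - f)                       # integrand of the theta integral
theta = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(r))   # trapezoidal rule
```

For this profile the integral evaluates analytically to $\theta_0$, so the numerical result is a direct check of the discretization.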
The laminar nature of the jet boundary layer at the exit plane is checked (Fig. ~\ref{fig:meanprofiles}(c)) by observing a linear relationship between $D/\theta$ and $\sqrt{Re}$. The constants in the fit are unique to the nozzle geometry, reflecting the acceleration imposed by the area contraction and the resultant thinning of the boundary layer entering the nozzle. Profiles at multiple downstream locations within the first half-diameter can be well-represented by an equation of the form used by Mattingly and Chang \cite{Mattingly1974}:
\begin{equation}
\frac{u}{U_c} = \frac{1}{2}\left[1+\tanh\left(\frac{D}{8\theta}\left(\frac{1}{\tilde{r}}-\tilde{r}\right)\right)\right]
\end{equation}
where $\tilde{r} = 2r/D$ is the radial coordinate normalized by the jet radius.
We now turn to the fluctuations in the jet velocity at the exit plane, quantified by the turbulence intensity normalized by the average velocity. The profiles are shown in figure \ref{fig:turb_base}(a) alongside profiles measured by Todde et al. (2009) in their work on low Reynolds number free jets. It can be seen that the turbulence intensity profile has a comparable trend, although the current work has a much lower centerline turbulence intensity. This speaks to the benefit of having a gravity-fed jet, free from any disturbances downstream of pumps or fans.
Lastly, we examine the spectral content of the flow at the exit plane in Fig. \ref{fig:turb_base}(b), and find no discrete peaks in the frequency spectrum, assuring that the jet is a low-turbulence system with little ambient noise.
\begin{figure}
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{selfsimilar.jpg}
\end{subfigure}
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{Ian_3D_vel_profile.jpg}
\end{subfigure}\\
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{D_Theta.png}
\end{subfigure}
\caption{(a) Velocity profiles at multiple Reynolds numbers (b) Two-dimensional trace of velocity at z/D=0.1, showing symmetry of profile about the axis (c) Shear layer thickness as a function of Reynolds number } \label{fig:baseflow1}
\label{fig:meanprofiles}
\end{figure}
\begin{figure}
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{vel_fluc.jpg}
\end{subfigure}
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{M=1_spectrum.jpg}
\end{subfigure}\\
\caption{(a) Radial profile of turbulence intensity at Re = 1688, M=1 (b) Frequency spectrum of voltage fluctuations } \label{fig:turb_base}
\end{figure}
\subsection{Test Procedure}
For experiments with propylene glycol as the ambient fluid, each set of experiments is conducted at a constant Reynolds number governed by the flow rate. Over the course of several test runs, the viscosity of the tank (and hence the value of M) decreases due to mixing with the injected salt water. This makes the test runs inherently quasi-transient, and a careful procedure was therefore followed to minimize the effects of variation of M during each test run. We first estimate the variation of M during a typical test run. At a salt water-glycol interface, diffusion acts to thicken the interface to yield a concentration thickness of order $\sqrt{\gamma t}$, where $\gamma$ is the binary diffusion coefficient of propylene glycol into water ($1.1 \times 10^{-9} \hspace{0.2em}m^2/s$). For a one-minute-long trial run, this yields a diffusion length of the order of 0.01D. In practice, test runs were much shorter, typically lasting 20-30 s after the initial starting vortex had passed out of the field of view. During this period, high-speed images were acquired with a digital camera operating at 500 fps and $1024\times 1024$ px resolution. After the image acquisition was completed, the flow was turned off, and the tank was stirred with a mixer and allowed to settle and become quiescent again before the next trial (typically 30 minutes). A sample of tank fluid was taken for subsequent viscosity and density measurement, to determine the value of M for each trial. In this way, at each Reynolds number, values of M starting from 50 and descending to 15 were attained. \\
It is reasonable to expect that, since the salt water and propylene glycol were initially density-matched to within 0.1\% before the start of the experiments, the density would remain matched throughout the experiments, even as the bulk viscosity in the tank decreased. This expectation was somewhat belied: aqueous solutions of propylene glycol undergo a slight, concentration-dependent contraction in volume. Figure \ref{fig:density_M_vs_trial}(a) shows the variation of density and viscosity in the test section after a series of trial runs. It is apparent that the specific gravity varies between the 1.036 of pure propylene glycol and a maximum of 1.051, or about 1.44\%. The injected salt water jet continues to have a specific gravity of 1.036, raising the prospect of confounding buoyancy effects. The importance of buoyant forces relative to jet inertia is assessed by evaluating the Richardson number,
\begin{equation}
Ri = \frac{g\Delta \rho D}{\rho _mU_j^2}
\end{equation}
This is plotted in Fig. \ref{fig:density_M_vs_trial}(b) against the Reynolds number, assuming an average density difference of 0.7\% over all runs. For Reynolds numbers greater than 1500, Ri is less than 0.1 and can be ignored.
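The Richardson-number estimate can be reproduced with a short calculation: the jet velocity is inferred from Re, the 0.7\% density difference is the run-averaged value quoted above, and the jet viscosity is an assumed nominal value:

```python
def richardson(Re, drho_frac=0.007, D=0.006, mu=1.0e-3, rho=1042.0, g=9.81):
    """Ri = g*(drho/rho)*D / U^2, with U inferred from the jet Re.

    drho_frac : run-averaged fractional density difference (0.7%)
    mu        : assumed jet (salt water) viscosity in Pa.s
    """
    U = Re * mu / (rho * D)        # average exit velocity back-solved from Re
    return g * drho_frac * D / U**2

Ri_1500 = richardson(1500)         # below the Ri = 0.1 threshold
Ri_3000 = richardson(3000)         # Ri falls as 1/Re^2 at fixed drho
```

Since $U \propto Re$ at fixed fluid properties, Ri decreases quadratically with Re, which is why buoyancy is only a concern at the low end of the Reynolds number range.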
\begin{figure}
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{Ian_rho_mu.jpg}
\end{subfigure}
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{Ri_Re.jpg}
\end{subfigure}
\caption{(a) Variation of density and viscosity in test section during a sequence of runs (b) Richardson number as a function of Re}
\label{fig:density_M_vs_trial}
\end{figure}
\section{Results}
\subsection{Flow Visualization}
Preliminary images for M=1 (water jet into water ambient) were acquired with an 18MP camera whose lens was equipped with an orange filter. The jet fluid was dyed with Rhodamine 6G, and the tank volume was illuminated with a blue LED light. The emission by rhodamine in the orange part of the spectrum was captured and shows the breakdown of the jet. Figure \ref{fig:flowviz}(a) shows the axisymmetric nature of the instabilities dominating the breakdown process, after developing from an initially nearly parallel near-field region. As Re increases, this distance becomes noticeably shorter. Unlike the observations of Mattingly \& Chang \cite{Mattingly1974}, no evidence of an eventual competition between the axisymmetric mode and a growing helical mode in the far-field is observed. On the other hand, when the jet emerges into an ambient medium of propylene glycol (M=45), helical instabilities are observed over a range of Re from approximately 1600 to 2600. Figure \ref{fig:flowviz}(b) shows a sequence of images taken at M=45 and a Reynolds number of 2009. Of note is the disappearance of the parallel flow region in the near-field, with the helical mode almost instantaneously developing at the exit. Some discrete bright spots visible in every image are artifacts due to bubbles being introduced into the test section during the stirring process, which remain suspended due to the high fluid viscosity. We also note that the wavelength of the disturbances appears substantially shorter than that of the axisymmetric instability at M=1.
These two sets of images suggest that there must exist a transition value (or range) of M for every fixed value of Re, where the dominant mode changes from axisymmetric to helical, and experiments were conducted to elucidate the transition behavior. Figure~\ref{fig:const_Re_varying_M} shows a sequence of images captured at Re = 2009. The transition of the dominant instability from helical to axisymmetric as M decreases from * to * is clearly evident. Nevertheless, it is difficult to assign a precise transition value of M with confidence in all cases. Inspection of multiple images allows us to assign a transition value of M close to ** in this instance. However, in some cases the images appear to show both axisymmetric and helical features, with two distinct frequency peaks in the hot film spectrum (discussed subsequently), and no clear transition is evident, especially at lower Re. Due to the nature of the experiment, involving discrete steps in M, a fine-grained transition value could not be determined in all cases. Nevertheless, observations clearly indicate that this transition value of M is Reynolds number-dependent. Figure~\ref{fig:M_Re_plane} shows our estimate for the transition value of M as a function of Reynolds number. Below Re = 1600, the instability was weak, and it was difficult to distinguish the nature of the mode. For Re $>$ 1600, the transition value of M increases with Re, and at Re = 2800 (not shown in Fig. \ref{fig:M_Re_plane}) the first trial showed a helical mode before switching to an axisymmetric mode, suggesting that the critical viscosity ratio required is higher.
\begin{figure}
\centering
\begin{subfigure}{\textwidth}
\includegraphics[width=0.98\textwidth]{M=1_flowviz.jpg}
\caption{}
\end{subfigure}
\begin{subfigure}{\textwidth}
\includegraphics[height=3.5in]{M=45_flowviz.jpg}
\caption{}
\end{subfigure}
\caption{(a) Growth of axisymmetric instabilities for M=1 and multiple Re, (b) Helical modes observed at M=45, Re = }
\label{fig:flowviz}
\end{figure}
\begin{figure}
\centering
\includegraphics[height=4in]{constant_Re_varying_M.jpg}
\caption{Images showing the transition from helical to axisymmetric modes as M is decreased}
\label{fig:const_Re_varying_M}
\end{figure}
\begin{figure}
\centering
\includegraphics[height=3in]{M_Re.jpg}
\caption{Transition boundary in (M,Re) space from helical to axisymmetric modes. Both modes appear to be present at low Re.}
\label{fig:M_Re_plane}
\end{figure}
\clearpage
\section{Hot Film Anemometry}
Hot film anemometry was used to characterize the flow for values of M greater than unity. With the mixing of the two fluids and the change in Prandtl number, the calibration for the hot film could no longer be used, and the voltage response is presented. Here we are interested in the spectral content of the velocity fluctuations at different downstream distances, as well as the rate of growth of the disturbance relative to the constant property jet. Figure \ref{fig:HW_FFT_vs_z}(a) and (b) show the evolution of the spectrum along the centerline and in the shear layer for M= * and Re=1682. A distinct frequency peak is visible at all locations in the near field. The strength of voltage fluctuations (Fig. \ref{fig:vol_fluc}) for M=1 shows a relatively gentle increase downstream; for large M the strength shows a sharp increase within one jet diameter, appearing to saturate within a few diameters.
The dependence of the dominant frequency on the viscosity ratio, as detected by hot film anemometry, is plotted in Fig. \ref{fig:HW_f_vs_M}. Following the experimental sequence and moving from high values of M to low values, one sees an increasing trend while the helical mode remains dominant.
\begin{figure}
\centering
\vspace{3in}
\caption{Evolution of velocity spectra in the downstream direction for M=39, Re= on the jet axis and in the shear layer. Top: centerline variation for Re=2013. Bottom: spectra in the shear layer. }
\label{fig:HW_FFT_vs_z}
\end{figure}
\begin{figure}
\centering
\vspace{3in}
\caption{Root mean square value of voltage fluctuations along the centerline and shear layer for M=1 and M=45 at Re=2000}
\label{fig:vol_fluc}
\end{figure}
\begin{figure}
\centering
\vspace{3in}
\caption{Dominant frequency, as identified by hot film anemometry, as a function of M for Re=1682.}
\label{fig:HW_f_vs_M}
\end{figure}
\clearpage
\subsection{Image Analysis}
The hot film measurements strongly indicate the existence of a single dominant mode that saturates in intensity within the first few diameters downstream of the jet exit. However, the increased conductivity of the liquid due to the dissolved salts resulted in increased contamination and pickup of electrical line noise despite probe shielding, and occasional air bubbles introduced into the tank during mixing would stick to the hot film; together, these resulted in a very low rate at which meaningful data were acquired. As a result, image data were chosen as the means of investigating the growth of unstable modes. The orange filter on the camera lens ensured that the jet could be strongly distinguished against the background, by isolating the emission from the Rhodamine dye under blue illumination. Applying a threshold intensity to the grayscale images allows determination of the jet boundary; the jet diameter determined in this way was only weakly sensitive to the threshold value, and we are primarily interested in the frequency, which is unaffected by the choice of threshold. To study the spatial evolution of the oscillations of the interface, we examine the jet width at 4 locations downstream of the jet exit, as plotted in Fig. \ref{fig:jet_width_oscillations}. The amplitude of oscillations shows a non-linear increase, and in Fig. \ref{fig:interface_FFT} we further examine the amplitude and frequency of these oscillations. Figure \ref{fig:interface_FFT}(a) shows the sharp peak in the power spectrum that might be surmised from the oscillatory waveform in Fig. \ref{fig:jet_width_oscillations}. Figure \ref{fig:interface_FFT}(b) shows the variation of the amplitude of the dominant frequency in the downstream direction. Again, as with the anemometry measurements, the oscillations show an exponential increase in the disturbance amplitude, before saturating at z/D = *.
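The width-tracking analysis can be sketched on a synthetic oscillation standing in for the thresholded edge positions; the frame rate matches the 500 fps acquisition, while the 30 Hz mode frequency, amplitude and noise level are made up for illustration:

```python
import numpy as np

np.random.seed(0)                       # reproducible synthetic noise
fps, n = 500.0, 4096                    # frame rate [Hz], number of frames
t = np.arange(n) / fps
f0 = 30.0                               # hypothetical mode frequency [Hz]
width = (100.0 + 3.0 * np.sin(2 * np.pi * f0 * t)
         + 0.5 * np.random.randn(n))    # jet width in pixels vs. time

A = np.abs(np.fft.rfft(width - width.mean()))   # amplitude spectrum
freqs = np.fft.rfftfreq(n, d=1.0 / fps)
f_dom = freqs[np.argmax(A)]             # dominant interface frequency
```

The frequency resolution is fps/n $\approx$ 0.12 Hz here, so the dominant peak is recovered to within a fraction of a hertz even with noise comparable to the oscillation amplitude.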
\begin{figure}[b]
\centering
\vspace{3in}
\caption{(a) Time variation of the jet width in pixels at different downstream distance (b) Square of the amplitude of the coefficients of the Fourier transform, $A^2(f)$ }
\label{fig:jet_width_oscillations}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=0.95\textwidth]{Power_spectrum_A2.png}
\caption{}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=0.95\textwidth]{A2_growth_zbyD.png}
\caption{}
\end{subfigure}
\caption{(a) Power spectrum of oscillation of interface at z/D= for M=38, Re=2400. (b) Growth of the square of the amplitude of the Fourier coefficient of the dominant mode in the downstream direction}
\label{fig:interface_FFT}
\end{figure}
To ascertain the nature of this instability, which develops much faster than the axisymmetric instability of the constant property jet, we verified that the frequencies at the different downstream stations shown in Fig. \ref{fig:jet_width_oscillations} are identical. Another way of assessing the spatially invariant `global' nature of this frequency is to examine the intensity records at single pixels in the shear layer. Figure \ref{fig:pxfluc} shows the frequency spectra of four pixels at two different downstream locations, on either side of the jet. The frequencies are identical, and provide further circumstantial evidence that the instability observed at large M is a global mode, corresponding to absolute instability of the near-field profiles. This putative global mode has a frequency that depends on the parameters that define the flow, such as the inlet Reynolds number, the viscosity ratio M, and the inlet velocity profile, specified by the momentum thickness $\theta$. In the experiment, the values of Re and $\theta$ are conjoined through the specific geometry of the nozzle. Further, it is experimentally difficult to conduct trials at constant M while changing Re. Therefore, we present the global mode frequency at constant Re (and $\theta$) as a function of M. Figure \ref{fig:f_lambda_vs_M}(a) shows that the waves developing on the interface have a frequency that decreases as the ambient viscosity is reduced from a starting value of M$\approx$ 45. For this data set taken at Re= 2000, there is a sharp increase in frequency near the observed transition from helical to axisymmetric mode. We interpret this as further evidence of the helical mode being driven by an absolute instability of near-field profiles. After the transition to the axisymmetric mode, there is a further decrease in frequency, before the curve displays an asymptotic behavior as M is further reduced.
The corresponding wavelengths, as determined from inspection of the images, are shown in Figure \ref{fig:f_lambda_vs_M}(b). Knowledge of the wavelength and frequency allows us to calculate the phase velocity of the dominant mode; this is plotted in Figure \ref{fig:f_lambda_vs_M}(c).
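Given a measured frequency and wavelength, the phase velocity follows directly; the numbers below are placeholders for illustration, not the measured data of Fig. \ref{fig:f_lambda_vs_M}:

```python
# Illustrative values only; the measured (f, lambda) pairs are in the figure.
f, lam = 30.0, 0.004          # frequency [Hz], wavelength [m]
D, U_bar = 0.006, 0.30        # nozzle diameter [m], mean exit velocity [m/s]

c_phase = f * lam             # phase velocity of the dominant mode [m/s]
St = f * D / U_bar            # Strouhal number, a common normalization
```

Normalizing $c$ by the mean exit velocity and $f$ as a Strouhal number makes the comparison across Re and M more direct.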
\begin{figure}
\centering
\includegraphics[width=\textwidth]{pxfluc_FFT.png}
\caption{Spectrum of pixel intensity fluctuation in shear layer at four locations in the near-field of the jet.}
\label{fig:pxfluc}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=0.9\textwidth]{f_vs_M.png}
\caption{}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=0.9\textwidth]{lambda_vs_M.png}
\caption{}
\end{subfigure}
\caption{Variation of instability characteristics along the constant (Re,$\theta$) curve, as M decreases during the experiment. (a) Frequency (b) Wavelength (c) Phase velocity }
\label{fig:f_lambda_vs_M}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=4in]{SVD_values.jpg}
\caption{The first twenty singular values of the spatial modes, obtained from a sequence of images at Re=2400 and M=39.}
\label{fig:SVD_values}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}{0.32\textwidth}
\includegraphics[width=\textwidth]{SVD_Mode_1.eps}
\caption{}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\includegraphics[width=\textwidth]{SVD_Mode_2.eps}
\caption{}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\includegraphics[width=\textwidth]{SVD_Mode_3.eps}
\caption{}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\includegraphics[width=\textwidth]{SVD_Mode_4.eps}
\caption{}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\includegraphics[width=\textwidth]{SVD_Mode_5.eps}
\caption{}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\includegraphics[width=\textwidth]{SVD_Mode_6.eps}
\caption{}
\end{subfigure}
\caption{The first 6 spatial modes, obtained using Singular Value Decomposition performed on 4000 images acquired for a helical mode at Re=2009, M=39}
\label{fig:SVD_Modes}
\end{figure}
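The POD/SVD procedure applied to the image sequences amounts to the following, shown here on a synthetic sequence containing one dominant oscillating spatial pattern; frame counts and image sizes are reduced for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_pix, n_frames = 64 * 64, 200          # flattened image size, frame count
t = np.arange(n_frames)
pattern = rng.standard_normal(n_pix)    # one dominant spatial structure
frames = np.outer(pattern, np.sin(2 * np.pi * 0.05 * t))  # oscillating mode
frames += 0.01 * rng.standard_normal(frames.shape)        # camera noise

X = frames - frames.mean(axis=1, keepdims=True)  # subtract temporal mean
U, S, Vt = np.linalg.svd(X, full_matrices=False) # columns of U: POD modes
energy = S**2 / np.sum(S**2)            # fractional energy per mode
```

The singular-value spectrum (cf. Fig. \ref{fig:SVD_values}) identifies how much fluctuation energy each spatial mode carries; the rows of $V^T$ give the corresponding temporal coefficients, whose spectra reveal the mode frequency.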
\textbf{Acknowledgement} We are grateful for useful discussions with David Forliti and Paul Strykowski during the preparation of this manuscript. We also acknowledge the assistance from Akash Dhotre and Justin Chen in acquiring some of the images.
\clearpage
\bibliographystyle{unsrt}
\section{Framework}
We start by recalling the set-up of our computation. We consider the usual action of Higgs inflation, ignoring the potential in order to focus on the effects of the non-minimal coupling
\begin{equation}
\mathcal{S} = \int d^4x \sqrt{-g}\left(\frac{1}{2}M_P^2\Omega^2 R - |\partial H|^2\right) \hspace{\baselineskip}\text{with}\hspace{\baselineskip}\Omega^2 = 1 + \frac{2\xi|H|^2}{M_P^2}
\end{equation}
Calculations with this action are simpler in the Einstein frame to which we go with the transformation:
\begin{equation}\label{conformal_transformation}
g_{\mu\nu} \rightarrow \Omega^{-2}g_{\mu\nu}, \hspace{\baselineskip} g^{\mu\nu} \rightarrow \Omega^{2}g^{\mu\nu}, \hspace{\baselineskip} \sqrt{-g}\rightarrow \Omega^{-4}\sqrt{-g}, \hspace{\baselineskip}R \rightarrow \Omega^2 R - \underline{3/2\Omega^{-2}(\partial\Omega^2)^2}
\end{equation}
The underlined term in the transformation of $R$ is present in the metric formalism but not in the Palatini one. From now on, we will only work in the Einstein frame. The action then becomes
\begin{equation}\label{Einstein_frame_action}
\mathcal{S} = \int d^4x \sqrt{-g}\left(\frac{1}{2}M_P^2 R - \frac{1}{\Omega^2}|\partial H|^2 - \underline{\frac{3M_P^2}{4}\frac{(\partial\Omega^2)^2}{\Omega^4}}\right)
\end{equation}
In \cite{Antoniadis} we considered the Higgs to be a complex singlet $H = (\phi_1 + i\phi_2)/\sqrt{2}$. Although not ``realistic'', this seemed enough to make our point on the ultraviolet behaviour. Then, we introduced by hand a large, inflationary background $2\bar H^2 \gg M_P^2/\xi$, taken without loss of generality in the direction of $\phi_1$, writing $\phi_1 = \bar\phi_1 + \phi_1'$. In this notation $\phi_1'$ is the physical Higgs and $\phi_2$ is a Goldstone boson. After doing so, we expanded the different terms of the action to find the interactions between $\phi_1'$ and $\phi_2$, using $\bar\phi_1^2 \gg M_P^2/\xi$, which results in several simplifications. For instance, in the Palatini formalism we obtain
the following Lagrangian terms, in a self-explanatory notation:
\begin{equation}\label{interactions_previous_computation}
\mathcal{L}_{\chi_1'^3} = \frac{\sqrt{\xi}}{M_P}\chi_1'(\partial\chi_1')^2, \hspace{\baselineskip}
\mathcal{L}_{\chi_1'\chi_2^2} = \frac{\sqrt{\xi}}{M_P}\chi_1'(\partial\chi_2)^2, \hspace{\baselineskip}
\mathcal{L}_{\chi_1'^2\chi_2^2} = -\frac{3\xi}{2M_P^2}\chi_1'^2(\partial\chi_2)^2 + \frac{\xi}{2M_P^2}\chi_2^2(\partial\chi_1')^2
\end{equation}
Here, $\chi_1' = M_P\phi_1'/(\sqrt{\xi}\bar\phi_1)$ and $\chi_2 = M_P\phi_2/(\sqrt{\xi}\bar\phi_1)$ are the canonically normalized counterparts of $\phi_1', \phi_2$. From these interactions we deduce the relevant vertices. For instance, the vertex coming from $\mathcal{L}_{\chi_1'^3}$ is
\begin{equation}
V_{\chi_1'^3} = -\frac{2i\sqrt{\xi}}{M_P}\left(p_{\chi_1', 1}\cdot p_{\chi_1', 2} + p_{\chi_1', 1}\cdot p_{\chi_1', 3} + p_{\chi_1', 2}\cdot p_{\chi_1', 3}\right)\,,
\end{equation}
where the $p_{\chi_1', i}$ are the momenta entering the vertex. Here, the minus sign is an $i^2$ coming from the two derivatives. \\
Using the above vertices and the propagator $G_{\chi_i}(p) = -{i}/{p^2}$, we proceed to compute the $\chi_1'\chi_2 \rightarrow \chi_1'\chi_2$ amplitude (the $\chi_1'\chi_1' \rightarrow \chi_1'\chi_1'$ vanishes due to crossing symmetry, and the same for $\chi_2$). Ignoring graviton exchange that does not depend on $\xi$ in the Einstein frame, there are four graphs corresponding to this amplitude at tree level: one for the quartic vertex, one for $\chi_1'$ exchange in the $t$ channel, two for $\chi_2$ exchange in the $s$ and $u$ channels. We denote the corresponding subamplitudes $\mathcal{M}^4, \mathcal{M}^t_{\chi_1'}, \mathcal{M}^s_{\chi_2}, \mathcal{M}_{\chi_2}^u$. Computing them yields:
\begin{equation}\label{subamplitudes_previous_computation}
\mathcal{M}^4 = -\frac{2\xi t}{M_P^2}, \hspace{\baselineskip} \mathcal{M}^t_{\chi_1'} = \frac{\xi t}{M_P^2}, \hspace{\baselineskip} \mathcal{M}^s_{\chi_2} = -\frac{\xi s}{M_P^2}, \hspace{\baselineskip} \mathcal{M}^u_{\chi_2} = -\frac{\xi u}{M_P^2}
\end{equation}
Here, $s = -(p_1+p_2)^2$, $t = -(p_1-p_3)^2$, $u=-(p_1-p_4)^2$ are the Mandelstam variables in the mostly-plus convention (in \cite{Antoniadis} we used a different definition, without the minus sign, but this makes no material difference), where $p_1, p_2$ (resp. $p_3, p_4$) are the momenta of the incoming (resp. outgoing) $\chi_1',\chi_2$. Correcting a sign error in the scalar propagator made in \cite{Antoniadis}, we obtain a vanishing amplitude
\begin{equation}
\mathcal{M}(\chi_1'\chi_2 \rightarrow \chi_1'\chi_2) = \mathcal{M}^4 + \mathcal{M}^t_{\chi_1'} + \mathcal{M}^s_{\chi_2} + \mathcal{M}_{\chi_2}^u = 0
\end{equation}
This computation was made in the Palatini formulation, but the same cancellation occurs in the metric formalism. As a consequence, demanding tree-level unitarity, it seems that the usual cutoff $\Lambda\sim M_P/\sqrt{\xi}$ of Higgs inflation in a large background should be lifted up to $M_P$, where graviton interactions start to cause trouble. But before claiming this, we should make the behavior in the $\bar\phi_1 \gg M_P/\sqrt{\xi}$ limit more precise, and check what happens if we consider the Higgs to be a complex doublet, as it is in the Standard Model. The next two sections are devoted to these two questions.
\section{Extension to general background}
Let us now extend the previous calculation to a general background $\bar\phi_1$, that is, without assuming $\bar\phi_1^2 \gg M_P^2/\xi$. The only requirement is that the background varies slowly in time and space, so that it can be considered constant over the region and timescale of the interactions between $\phi_1'$ and $\phi_2$. Setting
\begin{equation}
\bar\Omega^2 = 1 + \frac{\xi\bar\phi_1^2}{M_P^2} \hspace{\baselineskip}\text{and}\hspace{\baselineskip} x^2 = \frac{\xi \bar\phi_1^2}{M_P^2\bar\Omega^2}\,,
\end{equation}
we proceed to the computation just as before, first in the Palatini and then in the metric formalism.
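Note for later use that these definitions imply the exact identity
\[
1 - x^2 = \frac{M_P^2\bar\Omega^2 - \xi\bar\phi_1^2}{M_P^2\bar\Omega^2} = \frac{1}{\bar\Omega^2}\,,
\]
so that $x^2 = 1 - 1/\bar\Omega^2 \simeq 1 - M_P^2/(\xi\bar\phi_1^2)$ when $\bar\phi_1^2 \gg M_P^2/\xi$; this is the expansion used repeatedly below.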
\subsection{Palatini formalism}
In the Palatini formalism, after expanding (\ref{Einstein_frame_action}), we extract the interactions of the canonically normalized fields relevant for the process $\chi_1'\chi_2 \rightarrow \chi_1'\chi_2$:
\begin{equation}\label{interactions_Palatini_background}
\mathcal{L}_{\chi_1'^3} = \frac{x\sqrt{\xi}}{M_P}\chi_1'(\partial\chi_1')^2, \hspace{\baselineskip}
\mathcal{L}_{\chi_1'\chi_2^2} = \frac{x\sqrt{\xi}}{M_P}\chi_1'(\partial\chi_2)^2, \hspace{\baselineskip}
\mathcal{L}_{\chi_1'^2\chi_2^2} = \frac{(1-4x^2)\xi}{2M_P^2}\chi_1'^2(\partial\chi_2)^2 + \frac{\xi}{2M_P^2}\chi_2^2(\partial\chi_1')^2
\end{equation}
When $\bar\phi_1^2 \gg M_P^2/\xi$ we have $x \rightarrow 1$, and we indeed recover (\ref{interactions_previous_computation}). Here, as before, the $\chi_i = \phi_i/\bar\Omega$ are the canonically normalized counterparts of the $\phi_i$. From these interactions we deduce the corresponding vertices and, using the same notation as in equation (\ref{subamplitudes_previous_computation}), we compute the following subamplitudes
\begin{equation}
\mathcal{M}^4 = \frac{2(1-2x^2)\xi t}{M_P^2}, \hspace{\baselineskip}
\mathcal{M}^t_{\chi_1'} = \frac{\xi x^2 t}{M_P^2}, \hspace{\baselineskip}
\mathcal{M}^s_{\chi_2} = -\frac{\xi x^2 s}{M_P^2}, \hspace{\baselineskip}
\mathcal{M}^u_{\chi_2} = -\frac{\xi x^2 u}{M_P^2}
\end{equation}
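Before adding them, it is convenient to note that for massless external particles $s+t+u=0$, so the $s$- and $u$-channel pieces combine into
\[
\mathcal{M}^s_{\chi_2} + \mathcal{M}^u_{\chi_2}
= -\frac{\xi x^2 (s+u)}{M_P^2}
= \frac{\xi x^2 t}{M_P^2}\,,
\]
so that, in units of $\xi t/M_P^2$, the total amplitude is proportional to $2(1-2x^2)+x^2+x^2 = 2(1-x^2)$.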
Adding them together yields:
\begin{equation}
\mathcal{M}(\chi_1'\chi_2 \rightarrow \chi_1'\chi_2) = \frac{2(1-x^2)\xi t}{M_P^2}\,,
\end{equation}
which vanishes as $x \rightarrow 1$. More precisely, we can expand $x^2 \sim 1 - M_P^2/(\xi\bar\phi_1^2)$ when $\bar\phi_1^2 \gg M_P^2/\xi$, so that:
\begin{equation}\label{amplitude_Palatini_background}
\mathcal{M}(\chi_1'\chi_2 \rightarrow \chi_1'\chi_2) \simeq \frac{2t}{\bar\phi_1^2} \sim \frac{E^2}{\bar\phi_1^2}\hspace{\baselineskip}\mathrm{when}\hspace{\baselineskip}\bar\phi_1^2 \gg M_P^2/\xi
\end{equation}
It then turns out that in this limit, the energy at which $\mathcal{M} \sim 1$, identified with the cutoff of the effective theory, is $\Lambda \sim \bar\phi_1$ in the Einstein frame. This is above the usual $M_P/\sqrt{\xi}$; we will come back to this point in the discussion.
\subsection{Metric formalism}
In the metric formalism, after expanding (\ref{Einstein_frame_action}), the relevant interactions for $\chi_1'\chi_2 \rightarrow \chi_1'\chi_2$ are:
\begin{eqnarray}
\mathcal{L}_{\chi_1'^3} &=& \frac{(1-6\xi+12\xi x^2)x\sqrt{\xi}}{(1+6\xi x^2)^{3/2}M_P}\chi_1'(\partial\chi_1')^2\\
\mathcal{L}_{\chi_1'\chi_2^2} &=& \frac{x\sqrt{\xi}}{(1+6\xi x^2)^{1/2}M_P}\chi_1'(\partial\chi_2)^2
- \frac{6\xi^{3/2}x}{(1+6\xi x^2)^{1/2}M_P}\chi_2(\partial\chi_1'\cdot\partial\chi_2)\label{interactions_metric_background}
\end{eqnarray}
and
\begin{equation}
\mathcal{L}_{\chi_1'^2\chi_2^2} =
\frac{(1-4x^2)\xi}{2(1+6\xi x^2)M_P^2}\chi_1'^2(\partial\chi_2)^2
+ \frac{(1+12\xi x^2)\xi}{2(1+6\xi x^2)M_P^2}\chi_2^2(\partial\chi_1')^2
- \frac{6(1 - 4x^2)\xi^2}{(1+6\xi x^2)M_P^2}\chi_1'\chi_2(\partial\chi_1'\cdot\partial\chi_2)\,,
\end{equation}
where $\chi_1' = (1+6\xi x^2)^{1/2}\phi_1'/\bar\Omega$ and $\chi_2 = \phi_2/\bar\Omega$ are the canonically normalized counterparts of $\phi_1'$ and $\phi_2$, respectively.
From these interaction terms, using the same notation as in equation (\ref{subamplitudes_previous_computation}), we compute the following subamplitudes
\begin{eqnarray}
&&\mathcal{M}^4 = \frac{2(1-2x^2)(1+3\xi)}{1+6\xi x^2}\frac{\xi t}{M_P^2}, \hspace{\baselineskip}
\mathcal{M}^t_{\chi_1'} = \frac{(1+6\xi)(1-6\xi+12\xi x^2)}{(1+6\xi x^2)^2}\frac{\xi x^2 t}{M_P^2}\\
&&\mathcal{M}^s_{\chi_2} = -\frac{\xi x^2 s}{(1+6\xi x^2)M_P^2}, \hspace{\baselineskip}
\mathcal{M}^u_{\chi_2} = -\frac{\xi x^2 u}{(1+6\xi x^2)M_P^2}
\end{eqnarray}
Summing them yields
\begin{equation}\label{amplitude_metric_background_exact}
\mathcal{M}(\chi_1'\chi_2 \rightarrow \chi_1'\chi_2) = \frac{2((1-x^2)+3\xi(1-x^4))\xi t}{(1+6\xi x^2)^2M_P^2}\,.
\end{equation}
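As a check of this expression, note that $s+u=-t$ gives $\mathcal{M}^s_{\chi_2}+\mathcal{M}^u_{\chi_2} = \xi x^2 t/\bigl((1+6\xi x^2)M_P^2\bigr)$, and over the common denominator $(1+6\xi x^2)^2M_P^2$ the numerator of the sum reads
\[
2(1-2x^2)(1+3\xi)(1+6\xi x^2) + (1+6\xi)(1-6\xi+12\xi x^2)x^2 + x^2(1+6\xi x^2)
= 2(1-x^2) + 6\xi(1-x^4)\,,
\]
the terms of order $\xi^2$ cancelling.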
A first observation is that the terms proportional to $\xi^3$ cancel in the numerator, so that for fixed $x$, in the limit $\xi \gg 1$, we have $\mathcal{M}\sim E^2/M_P^2$, giving a cutoff $\Lambda \sim M_P$, above $M_P/\sqrt{\xi}$. Moreover, $\mathcal{M}$ vanishes as $x \rightarrow 1$. More precisely, expanding $x^2 \sim 1 - M_P^2/(\xi\bar\phi_1^2)$ when $\bar\phi_1^2 \gg M_P^2/\xi$, we obtain
\begin{equation}\label{amplitude_metric_background}
\mathcal{M}(\chi_1'\chi_2 \rightarrow \chi_1'\chi_2) \simeq \frac{2t}{(1+6\xi)\bar\phi_1^2} \sim \frac{E^2}{\xi\bar\phi_1^2}\hspace{\baselineskip}\mathrm{when}\hspace{\baselineskip}\bar\phi_1^2 \gg M_P^2/\xi
\hspace{\baselineskip}\mathrm{and}\hspace{\baselineskip}\xi \gg 1\,\,.
\end{equation}
It then turns out that in this limit the Einstein frame cutoff of the effective theory is $\Lambda \sim \bar\phi_1\sqrt{\xi}$. Thus, in both the Palatini and metric formalisms, cancellations in the amplitude of $\chi_1'\chi_2 \rightarrow \chi_1'\chi_2$ scattering between the canonically normalized physical Higgs and Goldstone boson suggest a cutoff scale $\Lambda\sim\bar\phi_1$ or $\Lambda\sim\bar\phi_1\sqrt{\xi}$, which can be much higher than the usual $M_P/\sqrt{\xi}$ in a large background. We will now check whether this result holds if we consider the scalar to be an $SU(2)$ doublet, as the Higgs is in the Standard Model.
\section{Extension to a complex Higgs doublet}
When $H$ is a complex $SU(2)$ doublet, we can parameterize it as
\begin{equation}
H = \frac{1}{\sqrt{2}}\left(\begin{matrix}\phi_1 + i\phi_2\\\phi_3 + i\phi_4\end{matrix}\right)
\end{equation}
Then we may, without loss of generality, introduce a background in the direction of $\phi_1$ as before. In this case, the Goldstone bosons $\phi_2, \phi_3, \phi_4$ play exactly the same role and are interchangeable, so that the amplitudes of e.g. $\chi_1'\chi_3 \rightarrow \chi_1'\chi_3$ are the same as in equations (\ref{amplitude_Palatini_background}) and (\ref{amplitude_metric_background}). Note however that there are now new possible scattering processes between different Goldstone bosons; without loss of generality we shall consider $\chi_2\chi_3 \rightarrow \chi_2\chi_3$.
\subsection{Palatini formalism}
In this case, the relevant interactions obtained after expanding the action (\ref{Einstein_frame_action}) are $\mathcal{L}_{\chi_1'\chi_3^2}$, which is the same as $\mathcal{L}_{\chi_1'\chi_2^2}$ in (\ref{interactions_Palatini_background}), and
\begin{equation}
\mathcal{L}_{\chi_2^2\chi_3^2} = \frac{\xi}{2M_P^2}\chi_2^2(\partial\chi_3)^2 + \frac{\xi}{2M_P^2}\chi_3^2(\partial\chi_2)^2\,.
\end{equation}
The corresponding vertices follow as previously. As before, ignoring graviton exchange, there are now two tree-level graphs contributing to the $\chi_2\chi_3 \rightarrow \chi_2\chi_3$ amplitude: one for the quartic vertex and one for $\chi_1'$ exchange in the $t$ channel. We denote the corresponding subamplitudes $\mathcal{M}^4$ and $\mathcal{M}_{\chi_1'}^t$, respectively. We get
\begin{equation}\label{amplitude_Palatini_doublet}
\mathcal{M}^4 = \frac{2\xi t}{M_P^2} \hspace{\baselineskip}\text{and}\hspace{\baselineskip}
\mathcal{M}_{\chi_1'}^t = -\frac{\xi x^2t}{M_P^2}\hspace{\baselineskip}\text{so that}\hspace{\baselineskip}
\mathcal{M}(\chi_2\chi_3 \rightarrow \chi_2\chi_3) = \frac{(2-x^2)\xi t}{M_P^2}
\end{equation}
This time, the amplitude does not vanish when $x \rightarrow 1$, so $\mathcal{M}\sim \xi E^2/M_P^2$, giving an Einstein frame cutoff at $\Lambda\sim M_P/\sqrt{\xi}$.
\subsection{Metric formalism}
In this case, the relevant interactions obtained after expanding the action (\ref{Einstein_frame_action}) are $\mathcal{L}_{\chi_1'\chi_3^2}$, which is the same as $\mathcal{L}_{\chi_1'\chi_2^2}$ in (\ref{interactions_metric_background}), and
\begin{equation}
\mathcal{L}_{\chi_2^2\chi_3^2} = \frac{\xi}{2M_P^2}\chi_2^2(\partial\chi_3)^2 + \frac{\xi}{2M_P^2}\chi_3^2(\partial\chi_2)^2 - \frac{6\xi^2}{M_P^2}\chi_2\chi_3(\partial\chi_2\cdot\partial\chi_3)
\end{equation}
Using the same notations as in (\ref{amplitude_Palatini_doublet}), we compute the following subamplitudes
\begin{equation}
\mathcal{M}^4 = \frac{2(1+3\xi)\xi t}{M_P^2} \hspace{\baselineskip}\text{and}\hspace{\baselineskip}
\mathcal{M}_{\chi_1'}^t = -\frac{(1+6\xi)^2\xi x^2 t}{(1+6\xi x^2)M_P^2}\,.
\end{equation}
Summing them yields
\begin{equation}\label{amplitude_metric_doublet}
\mathcal{M}(\chi_2\chi_3\rightarrow\chi_2\chi_3) = \frac{2-x^2 + 6\xi}{1+6\xi x^2}\frac{\xi t}{M_P^2}\,.
\end{equation}
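Explicitly, over the common denominator $(1+6\xi x^2)M_P^2$, the numerator of the sum is
\[
2(1+3\xi)(1+6\xi x^2) - (1+6\xi)^2x^2
= 2 + 6\xi + 12\xi x^2 + 36\xi^2x^2 - x^2 - 12\xi x^2 - 36\xi^2x^2
= 2 - x^2 + 6\xi\,,
\]
the $\xi^2$ terms cancelling here as well.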
Again, this does not vanish in the large background limit $x \rightarrow 1$ but goes to $\mathcal{M}\sim\xi t/M_P^2$. Therefore, in this limit or in the limit where $\xi \gg 1$, we have $\mathcal{M}\sim \xi E^2/M_P^2$, giving the usual cutoff at $\Lambda\sim M_P/\sqrt{\xi}$.
\section{Discussion and conclusion}
Let us collect the results (\ref{amplitude_Palatini_background}), (\ref{amplitude_metric_background}), (\ref{amplitude_Palatini_doublet}), (\ref{amplitude_metric_doublet}) at leading order when $\xi \gg 1$ and $\bar\phi_1^2\gg M_P^2/\xi$:
\begin{center}\begin{tabular}{c||c|c||c|c||c|c}
\multirow{2}{1pt}{}& \multicolumn{2}{c||}{amplitude $\mathcal{M}$} & \multicolumn{2}{c||}{cutoff $\Lambda$ (Einstein)} & \multicolumn{2}{c}{cutoff $\Lambda$ (Jordan)}\\
& Palatini & metric & Palatini & metric & Palatini & metric \\
\hline\hline
& & & & & &\\
$\chi_1'\chi_2 \rightarrow \chi_1'\chi_2$ & $E^2/\bar\phi_1^2$ & $E^2/(\xi\bar\phi_1^2)$ & $\bar\phi_1$ & $\bar\phi_1\sqrt{\xi}$ & $\bar\phi_1^2\sqrt{\xi}/M_P$ & $\xi\bar\phi_1^2/M_P$\\
& & & & & &\\\hline& & & & & &\\
$\chi_2\chi_3 \rightarrow \chi_2\chi_3$ & $\xi E^2/M_P^2$ & $\xi E^2/M_P^2$ & $M_P/\sqrt{\xi}$ & $M_P/\sqrt{\xi}$ & $\bar\phi_1$ & $\bar\phi_1$\\
& & & & & &
\end{tabular}\end{center}
Here, in addition to the amplitudes and associated cutoffs in the Einstein frame, in which we have worked throughout, we report the cutoffs in the Jordan frame. The two are simply linked by the conformal transformation (\ref{conformal_transformation}):
\begin{equation}
\Lambda^{(J)} = \bar\Omega\Lambda^{(E)} \hspace{\baselineskip}\text{and}\hspace{\baselineskip} \bar\Omega\simeq \frac{\bar\phi_1\sqrt{\xi}}{M_P} \hspace{\baselineskip}\text{when}\hspace{\baselineskip}\bar\phi_1^2 \gg \frac{M_P^2}{\xi}\,.
\end{equation}
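For instance, for the $\chi_1'\chi_2$ line of the table in the Palatini case one gets $\Lambda^{(J)} = \bar\Omega\,\bar\phi_1 \simeq \sqrt{\xi}\,\bar\phi_1^2/M_P$, while for the Goldstone--Goldstone processes $\Lambda^{(J)} = \bar\Omega\,M_P/\sqrt{\xi} \simeq \bar\phi_1$ in both formalisms.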
This helps the comparison with \cite{Antoniadis}, where it was pointed out that the background $\bar{\phi}_1$ is the effective Jordan frame cutoff in all cases, in both the Palatini and metric formalisms. In what follows, however, we stick to the Einstein frame.
In \cite{Antoniadis} we investigated a simplified model where the Higgs was a singlet. In this case, only the first line of the table is relevant, and the cutoff of this simplified model can be much higher, at $\bar\phi_1$ (Palatini) or $\bar\phi_1\sqrt{\xi}$ (metric). When the Higgs is, rightfully, considered to be a doublet, there are new nonvanishing Goldstone--Goldstone amplitudes (second line of the table) for which no cancellation occurs, yielding the usual cutoff at $M_P/\sqrt{\xi}$, as in \cite{Ito}. Therefore, the singlet model has a softer high-energy behaviour than the more realistic doublet model.
\section*{Acknowledgments}
We thank Asuka Ito, Wafaa Khater and Syksy Räsänen for communications allowing us to find a sign error.
\bibliographystyle{unsrt}
\section{Introduction}
Given a polarized surface $(S,L)$ and an integer $\delta \geq 0$,
the \emph{Severi variety} $\sev L \delta$ is the parameter
space for irreducible, $\delta$-nodal curves in the linear system
$|L|$ (see \S~\ref{s:severi}).
This text is dedicated to the proof of the following result:
\begin{thm}
\label{t:main}
Let $(S,L)$ be a primitively polarized $K3$ surface of genus $p \geq
11$ such that $\mathop{\rm Pic}(S)=\ensuremath{\mathbf{Z}} [L]$,
and $\delta$ a non-negative integer such that $4\delta-3 \leq p$.
The Severi variety $\sev L \delta$ is irreducible.
\end{thm}
It was already proven by Keilen \cite{keilen} that, in the
situation of Theorem~\ref{t:main}, for every integer $k\geq 1$ the
Severi variety $\sev {kL} \delta$ is irreducible if
\begin{equation*}
\delta <
\frac {6(2p-2)+8}
{\bigl(11(2p-2)+12\bigr)^2}
\cdot k^2 \cdot (2p-2)^2
\qquad \left(
\sim_{p \to \infty} \frac {12}{121}\cdot k^2\cdot p
\right),
\end{equation*}
and later by Kemeny \cite{kemeny} that the same holds if
$\delta \leq \frac 1 6 \bigl(2+k(p-1)\bigr)$.
Our result is valid only in the case $k=1$, i.e., for curves in the
\emph{primitive class}, but in this case our condition is better.
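For illustration, at $p=11$, the smallest genus covered by Theorem~\ref{t:main}, our condition $4\delta-3 \leq p$ allows $\delta \leq 3$; Kemeny's bound with $k=1$ allows $\delta \leq \frac16(p+1) = 2$; and Keilen's bound with $k=1$ evaluates to $\delta < \frac{128}{232^2}\cdot 400 \approx 0.95$, i.e., $\delta = 0$.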
In a slightly different direction, we proved some time ago in
\cite{CD} that
the \emph{universal families} of the $\sev L \delta$'s are irreducible
for all $\delta$ ($\delta = p$ included) if $3 \leq p \leq 11$ and
$p\neq 10$.
Kemeny's result is based on the observation that for any smooth
polarized surface $(S,L)$, the Severi variety $\sev L \delta$ is
somehow trivially irreducible if $L$ is $(3\delta-1)$-very ample:
Indeed, in this case the curves in $|L|$ with nodes at
$p_1,\ldots,p_\delta$ form a dense subset of a projective space of
constant dimension
for \emph{any} set of pairwise distinct points
$p_1,\ldots,p_\delta$.
Kemeny then applies a numerical criterion for $n$-very ampleness
on $K3$ surfaces due to Knutsen \cite{knutsen}.
The central idea of the present article is close in spirit to Kemeny's
observation, to the effect that provided $\dim |L| \geq 3\delta$,
the curves in $|L|$ with nodes at
$p_1,\ldots,p_\delta$ should form in nice circumstances a dense subset
of a projective space of constant dimension
for a \emph{general} choice of $\delta$ pairwise disjoint points.
It is indeed so for curves in the primitive class of
a $K3$ surface, thanks to a result of Chiantini and the first-named
author, see Proposition~\ref{prop:dk3}.
One thus gets a distinguished irreducible component of the Severi
variety $\sev L \delta$ which we call its \emph{standard component}.
For any other irreducible component $V$, the nodes of the members of
$V$ sweep out a locus of positive codimension $h_V$ in the Hilbert
scheme $S^{[\delta]}$, see Section~\ref{S:standard};
we call $h_V$ the excess of $V$.
Our applications then rely on the observation
that, in the $K3$ situation of Theorem~\ref{t:main}, for all $C \in V$ the preimage of the nodes defines a linear
series of type $g^h_{2\delta}$ on the normalisation of $C$
(see Lemma~\ref{l:dim-deltatilde}),
together with some recent results in \cite{CK} and \cite{KLM}
(Theorems~\ref{t:CK} and \ref{t:KLM} respectively)
which give some control on the families
of linear series that may exist on the normalisations of primitive
curves on $K3$ surfaces.
The latter results hold only for curves in the primitive class, and
this is the main obstruction to carry out our approach in the
non-primitive situation.
One may for instance give a two-line proof of irreducibility in the
range $p \geq 5\delta-3$, as follows.
Assume by contradiction that there is a non-standard irreducible
component $V$ of the Severi variety $\sev L \delta$. Then for all $C
\in V$ the normalisation of $C$ has a $g^1_{2\delta}$.
By \cite{KLM} this implies
$\dim(V) = p-\delta \leq 4\delta-2$,
which is impossible in the range under consideration.
We obtain the better bound in Theorem~\ref{t:main} by proving the
estimate $h_V >2$ for all non-standard components of $\sev L \delta$.
This is done in Section~\ref{S:estim-h} by a careful study of the
singularities of curves in the intersection of the standard component
with a hypothetical non-standard component, which we are again able to
control thanks to Brill--Noether theoretic results for singular curves
on $K3$ surfaces.
\medskip
This work originates from the Oberwolfach Mini-Workshop: Singular
Curves on $K3$ Surfaces and Hyperkähler Manifolds. We thank all the
participants for the friendly atmosphere and stimulating discussions.
\section{Preliminaries}
\subsection {Severi varieties}
\label{s:severi}
We work over $\ensuremath{\mathbf{C}}$ throughout the text.
We denote by $\mathcal K_p$ the irreducible, 19-dimensional stack of
primitively polarized $K3$ surfaces $(S,L)$ of genus $p\geqslant 2$,
i.e., $S$ is a compact, complex surface with $h^ 1(S,\mathcal O_S)=0$
and $\omega_S\cong \mathcal O_S$, and $L$ a big and nef, primitive
line bundle on $S$ with $L^ 2=2p-2$, hence $\dim(|L|)=p$. The
\emph{arithmetic genus} of the curves $C\in |L|$ is $p_a(C)=p$.
In this paper we will often assume that $\mathop{\rm Pic} (S)=\ensuremath{\mathbf{Z}} [L]$, which is
the case if $(S,L)\in \mathcal K_p$ is very general, so that $L$ is
globally generated and ample, and very ample if $p\geqslant 3$.
For any non-negative integer $g\leqslant p$, we consider the locally
closed subset $\gsev L g$ of $|L|$ consisting of curves $C\in |L|$ of
\emph{geometric genus} $p_g(C)=g$, i.e., curves $C$ whose
normalization has genus $g$ (see \cite [\S~1.2] {DeSe}). We will set
$\delta=p-g$, which is usually called the \emph{$\delta$-invariant} of
the curve.
\begin{prop}[see {\cite[Proposition~4.5]{DeSe}}]
\label{prop:dim}
Every irreducible component of $\gsev L g$ has dimension $g$.
\end{prop}
For every non-negative integer $\delta\leqslant p$, we will denote by
$\sev L \delta$ the \emph{Severi variety}, i.e., the locally closed
subset of $|L|$ consisting of curves with $\delta$ nodes and no other
singularities, whose geometric genus is $g=p-\delta$. The following is
classical:
\begin {prop} [see {\cite[\S 3--4]{DeSe}}]
\label {prop:sevv}
The Severi variety $\sev L \delta$, if not empty, is smooth and pure of
dimension $g$. More precisely, if $C\in \sev L \delta$, and $\Delta$ is
the set of nodes of $C$, then the projective tangent space to
$\sev L \delta$ at $C$ in $|L|$ is the $g$-dimensional linear system
$|L(-\Delta)|:=\P (\H^ 0(S, L \otimes \mathcal I_{\Delta,S}))$ of
curves in $|L|$ containing $\Delta$.
\end{prop}
\noindent
It is indeed true that the Severi varieties of a general primitively
polarized $K3$ surface are non-empty.
\begin{prop}[see \cite {XCh99}]
If $(S,L)\in \mathcal K_p$ is general,
then $\sev L \delta$ is not empty for every non-negative integer
$\delta\leqslant p$.
\end{prop}
By Propositions \ref {prop:dim} and \ref {prop:sevv}, each irreducible
component of $\sev L \delta$ is dense in a component of $V^ L_{g}$.
Xi Chen \cite{XCh-pre16} has shown that moreover if $g>0$, then
$\sev L \delta$ is dense in $\gsev L g$ for general $(S,L) \in \mathcal{K}_p$.
We shall need the following weaker result, in which however the
generality assumption is explicit.\footnote
{Actually, the assumption in \cite[Proposition~4.8] {DeSe} is that
$(S,L)$ be very general; it is straightforward to check that the
condition $\mathop{\rm Pic}(S)=L$ is indeed sufficient for the proof in
\cite{DeSe}.}
\begin{prop}[{\cite[Proposition~4.8] {DeSe}}]
\label{prop:sev}
Let $(S,L)\in\mathcal K_p$ be such that $\mathop{\rm Pic} (S)=\ensuremath{\mathbf{Z}} [L]$. If
$2\delta <p$, then $\sev L \delta$ is dense in $\gsev L g$.
\end{prop}
\subsection{Local structure of Severi varieties}
The following is a restatement of the well-known fact that the nodes
of a nodal curve on a $K3$ surface may be smoothed independently.
It is a consequence of Proposition~\ref{prop:sevv}.
\begin{prop}
\label{prop:ext}
Let $(S,L)\in \mathcal K_p$,
$\delta < \epsilon$ be two non-negative integers,
and $V$ be an irreducible component of
$\sev L \epsilon$. Consider a curve $C\in V$, and let
$\{p_1,\ldots,p_\epsilon\}$ be the set of its nodes.
Then:\\
\begin{inparaenum}
\item [(i)] the Zariski closure $\csev L \delta$ of $\sev L
\delta$ contains $V$;\\
\item [(ii)] locally around $C$, $\csev L \delta$ consists of
$\binom \epsilon \delta$ analytic sheets $\mathcal V_\mathfrak d$, which
are in $1:1$ correspondence with the subsets
$\mathfrak d \subset \{p_1,\ldots,p_\epsilon\}$ of order $\delta$,
and such that when the general point $C'$ of $\mathcal
V_\mathfrak d$ specializes at $C$, the set of $\delta$ nodes of
$C'$ specializes at $\mathfrak d$;\\
\item [(iii)] for each such $\mathfrak d$, the sheet $\mathcal
V_\mathfrak d$ is smooth at $C$ of dimension $p-\delta$, relatively
transverse to all other similar sheets.\footnote
{in the sense that for all $\mathfrak d'$ of cardinality $\delta$,
the sheets $\mathcal V_{\mathfrak d}$ and $\mathcal V_{\mathfrak d'}$
intersect exactly along the local sheet
$V_{\mathfrak d \cup \mathfrak d'}$ of
$\csev L {|{\mathfrak d \cup \mathfrak d'}|}$ at $C$,
and their respective tangent spaces at $C$ intersect exactly along the
tangent space of
$V_{\mathfrak d \cup \mathfrak d'}$ at $C$.}
\end{inparaenum}
\end{prop}
\noindent
As an immediate consequence, we have:
\begin{cor}\label{cor:incl}
Let $(S,L)\in \mathcal K_p$ and let $V$, $V'$ be irreducible
components of $\sev L \delta$ and $\sev L {\delta'}$, with
$\delta\leqslant \delta'$. If $V'$ intersects the Zariski closure
$\overline V$ of $V$, then $V'\subset \overline V$.
\end{cor}
\subsection{Brill--Noether theory of curves on K3 surfaces}
We will use the following results.
\begin{thm}[{\cite[Theorem~5.3 and Remark~5.6]{KLM}}]
\label{t:KLM}
Let $(S,L)$ be such that $\mathop{\rm Pic}(S)=\ensuremath{\mathbf{Z}}[L]$, and
$V \subset \gsev L g$ a non-empty reduced scheme.
Let $k$ be a positive integer.
Assume that for all $C \in V$, there exists a $g^1_k$ on the
normalisation $\tilde C$ of $C$.
Then one has
\begin{equation*}
\dim(V) + \dim \bigl( G^1_k(\tilde C) \bigr)
\leq 2k-2
\end{equation*}
for general $C \in V$.
\end{thm}
\begin{thm}[{\cite[Theorem~3.1]{CK}}]
\label{t:CK}
Let $(S,L)\in \mathcal K_p$ be such that $\mathop{\rm Pic}(S)= \ensuremath{\mathbf{Z}}[L]$,
and $C \in \gsev L g$; let $\delta=p-g$.
Let $r,d$ be nonnegative integers.
If there exists a $g^r_d$ on the normalization of $C$, then
\[
\delta \geq
\alpha \bigl( rg - (d-r)(\alpha r+1)
\bigr),
\quad \text{where} \quad
\alpha =
\left\lfloor \frac {gr + (d-r)(r-1)}
{2r(d-r)}
\right\rfloor.
\]
\end{thm}
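For instance, in the case $r=1$, which is the one relevant for the pencils considered below, the statement specializes to
\[
\delta \geq \alpha\bigl(g-(d-1)(\alpha+1)\bigr),
\quad \text{where} \quad
\alpha = \left\lfloor \frac{g}{2(d-1)} \right\rfloor.
\]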
\begin{thm}[{\cite{L,GL,BFT,Gomez}}]
\label{t:GL-BFT}
Let $(S,L)\in \mathcal K_p$ be such that $\mathop{\rm Pic}(S)= \ensuremath{\mathbf{Z}}[L]$,
and $C\in |L|$.
The Clifford index of $C$, computed with sections of rank one torsion
free sheaves on $C$ (see
\cite [p.~202]{DeSe} or \cite {BFT}), equals
$\lfloor \frac {p-1} 2 \rfloor$.
\end{thm}
\section{Standard components}
\label{S:standard}
\subsection{The nodal map}
Let $(S,L)\in \mathcal K_p$. For any positive integer $n$, we denote
by $S^{[n]}$ the Hilbert scheme of $0$-dimensional subschemes of $S$ of
length $n$. Recall that $S^{[n]}$ is smooth of dimension $2n$ (see \cite
{Fog}).
Consider the morphism
\[
\phi_{L,\delta}: \sev L \delta\longrightarrow S^{[\delta]},
\]
called the \emph{nodal map}, which maps a curve $C\in \sev L \delta$ to
the scheme $\Delta$ of its nodes, indeed $0$-dimensional of length
$\delta$. We
set $\Phi_{L,\delta}:={\rm Im}(\phi_{L,\delta})$. If $V$ is an
irreducible component of $\sev L \delta$, we set
\[
\phi_V:=
\restr {\phi_{L,\delta}} V\,\,\, \text{and}\,\,\,\,
\Phi_V:= {\rm Im}(\phi_{V}).
\]
Let $\Delta$ be a general point in $\Phi_{V}$. Then $\phi_{V}^
{-1}(\Delta)$ is an open subset of the linear system
$|L(-2\Delta)|:=\P (\H^ 0(S, L \otimes \mathcal I^ 2_{\Delta,S}))$ of
curves in $|L|$ singular at $\Delta$. We set
\begin{equation*}\label{eq:dim1}
\dim (|L(-2\Delta)|)=p-3\delta+h_V,
\end{equation*}
which defines the non-negative integer $h_V$, called the \emph{excess}
of $V$. By Proposition \ref {prop:sevv}, one has
\begin{equation}\label{eq:dim2}
\dim(\Phi_{V})=2\delta-h_V.
\end{equation}
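As a consistency check, for $\Delta\in\Phi_V$ general the fibre $\phi_V^{-1}(\Delta)$ has dimension $p-3\delta+h_V$, so that
\[
\dim(V) = \dim(\Phi_V) + (p-3\delta+h_V) = (2\delta-h_V)+(p-3\delta+h_V) = p-\delta = g,
\]
in accordance with Propositions \ref{prop:dim} and \ref{prop:sevv}.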
The following is immediate:
\begin{lem}
\label{lem:im}
Let $(S,L)\in \mathcal K_p$, and let $V_1, V_2$ be two distinct
irreducible components of $\sev L \delta$. Then $\Phi_{V_1}$ and
$\Phi_{V_2}$ have distinct Zariski closures in $S^{[\delta]}$.
\end{lem}
\subsection{A useful lemma}
Let $C \in |L|$ be a reduced curve, and consider the conductor ideal
$A \subset \O_C$ of the normalization $\nu: \tilde C \to C$.
There exists a divisor $\tilde\Delta$ on $\tilde C$ such that
$A=\nu_*\O_{\tilde C}(-\tilde\Delta)$,
and one has
$\omega _{\tilde C} = \nu^* \omega_C \otimes \O_{\tilde
C}(-\tilde\Delta)$.
It is a classical result that
$\nu^* |L\otimes A| = |\omega_{\tilde C}|$,
see \cite[Lemma~3.1]{DeSe}.
The same argument proves that
$\nu^* |L\otimes A^{\otimes 2}| = |\omega_{\tilde C}(-\tilde\Delta)|$.
Consider the particular case when $C$ has ordinary cusps
$p_1,\ldots,p_k$ and nodes $p_{k+1},\ldots,p_\delta$ as its only
singularities. Denote by $p_1,\ldots,p_k \in \tilde C$
the respective preimages of $p_1,\ldots,p_k \in C$ by the
normalisation $\nu$, abusing notation,
and by $p_i'$ and $p_i''$ the two preimages
of $p_i$ for $i=k+1,\ldots,\delta$.
Then $A$ is the product of the maximal ideals of
$p_{1},\ldots,p_\delta$,
i.e., $A={\ensuremath{\mathcal{I}}}_{\Delta,S} \otimes \O_C$ with
$\Delta=\{p_1,\ldots,p_\delta\}$,
and
\[
\tilde\Delta = 2 \sum\nolimits _{i=1} ^k p_i
+ \sum\nolimits _{i=k+1} ^\delta (p_{i}' + p_{i}'').
\]
The previous identity $\nu^* |L\otimes A^{\otimes 2}| =
|\omega_{\tilde C}(-\tilde\Delta)|$ readily implies the following.
\begin{lem}
\label{l:indep-adj^2}
Let $j$ be the closed immersion $C \hookrightarrow S$.
One has
\[
(j\circ \nu)^* \bigl(\,
|L(-2\Delta)| \,\bigr)
= |\omega _{\tilde C} (-\tilde \Delta)|,
\]
and therefore
$
\dim \bigl(\, |L(-2\Delta)| \,\bigr)
= h^0 \bigl( \omega _{\tilde C} (-\tilde \Delta) \bigr) - 1.
$
\end{lem}
\subsection{Standard components}
Let $V$ be an irreducible component of $\sev L \delta$. We call $V$
\emph{standard} if $h_V=0$. If $V$ is standard and $\Delta\in \Phi_V$
is general, then
\[
0\leqslant \dim(\phi_V^ {-1}(\Delta))=\dim
(|L(-2\Delta)|)=p-3\delta,
\] hence $p\geqslant 3\delta$. Moreover if
$V$ is standard, then $\dim(\Phi_V)=2\delta$, hence $\Phi_V$ is dense
in $S^{[\delta]}$. We will prove in Proposition \ref {prop:stand}
below that if $p\geqslant 3\delta$ and if ${\rm Pic}(S)=\ensuremath{\mathbf{Z}}[L]$, then
there is a unique standard component of $\sev L \delta$. To do this, we
need to recall some basic facts from \cite {CC}.
Let $Y\subset \P^ N$ be an irreducible, $n$-dimensional,
non-degenerate, projective variety. Let $\mathcal H$ be the linear
system cut out on $Y$ by the hyperplanes of $\P^ N$, i.e.,
\[
\mathcal H=\P({\rm Im}(r))
\quad \text{where}\quad
r: \H^ 0(\P^ N,\O_{\P^ N}(1))\to
\H^ 0(Y,\O_Y\otimes \O_{\P^ N}(1))
\]
is the restriction map.
Let $k$ be a non-negative integer. The variety $Y$ is said to be
$k$-\emph{weakly defective} if given $p_0,\ldots, p_k\in Y$ general
points, the general element of
$\mathcal H(-2p_0-\ldots -2p_k)$
has a positive dimensional singular locus,
where $\mathcal H(-2p_0-\ldots -2p_k)$ denotes
the linear system of divisors in $\mathcal H$ singular at
$p_0,\ldots, p_k$.
\begin{prop}[{\cite[Theorem~1.4] {CC}}]
\label{prop:wd}
Let $Y\subset\P^ N$ be an irreducible, $n$-dimensional,
non-degenerate, projective variety. Let $k$ be a non-negative integer
such that $N\geq (n+1)(k+1)$.
If $Y$ is not $k$-weakly defective, then given $p_0,...,p_{k}$
general points on $Y$, one has:\\
\begin{inparaenum}
\item [(i)] $\dim({\mathcal H}(-2p_0-...-2p_{k}))=N-(n+1)(k+1)$;\\
\item [(ii)] the general divisor $H\in {\mathcal H}(-2p_0-...-2p_{k})$
has \emph{ordinary} double points at $p_0,...,p_{k}$, i.e., double
points with tangent cone of maximal rank $n$, and no other
singularity.
\end{inparaenum}
\end{prop}
In \cite [Theorem 1.3]{CC} one finds the classification of $k$-weakly
defective surfaces. After an inspection which we leave to the reader,
one sees that:
\begin{prop}
\label{prop:dk3}
Let $(S,L)\in \mathcal K_p$ be such that $\mathop{\rm Pic}(S)=\ensuremath{\mathbf{Z}}[L]$, and assume
$p\geq 3$. Consider $S$ embedded in $\P^ p$ via the morphism
determined by $|L|$. Then $S$ is not $k$-weakly defective for any
non-negative integer $k$.
\end{prop}
We can therefore apply Proposition \ref {prop:wd} and conclude that:
\begin{prop}\label{cor:dk3}
Maintain the assumptions of Proposition~\ref{prop:dk3},
and let $\delta$ be a non-negative integer such that
$3 \delta \leq p$.
Then given $\Delta\in S^{[\delta]}$ general,
one has $\dim (|L(-2\Delta)|)=p-3\delta$ and the general curve in
$|L(-2\Delta)|$ has nodes at $\Delta$ and no other singularities.
\end{prop}
As a consequence we have:
\begin{prop}\label{prop:stand}
Under the assumptions of Proposition~\ref{cor:dk3},
there is a unique standard component
$\stsev L \delta$
of $\sev L \delta$, which is the unique irreducible component $V$ of
$\sev L \delta$ such that $\phi_V: V\to S^{[\delta]}$ is dominant.
\end{prop}
\begin{proof}
Proposition \ref {cor:dk3} implies that there is a standard component $V$
of $\sev L \delta$ such that $\phi_V: V\to S^{[\delta]}$ is
dominant. By Lemma \ref {lem:im}, it is the unique standard
component. \end{proof}
\section{A lower bound on the excess}
\label{S:estim-h}
\noindent
This section is entirely devoted to the proof of the following:
\begin{prop}
\label{lem:main1}
Let $p \geq 11$ and $\delta >1$, $(p,\delta) \neq (12,4)$,
be integers such that $3 \delta \leq p$.
We consider $(S,L) \in \mathcal{K}_p$ such that $\mathop{\rm Pic}(S) = \ensuremath{\mathbf{Z}}[L]$.
For every non-standard component $V$ of
$\sev L \delta$, one has $h_V\geq 3$.
\end{prop}
Let $V$ be a non-standard component of $\sev L \delta$ as above. One
has $h_V >0$ by definition, and we shall proceed by contradiction to
show that $h_V$ may neither equal $1$ nor $2$.
\subsection{Proof that $h_V \neq 1$}
In the setup of Proposition~\ref{lem:main1}, we assume by
contradiction that $h_V=1$.
Then the closure of $\Phi_V$ is an irreducible divisor in
$S^{[\delta]}$. Let $\Delta\in \Phi_{V}$ be a general point.
It can be seen as the limit of a general 1-dimensional family
$\{\Delta_t\}_{t\in \ensuremath{\mathbf{D}}}$, where $\ensuremath{\mathbf{D}}$ is a complex disk, and $\Delta_t$ is
general in $S^{[\delta]}$ for $t\neq 0$.
In particular, we may assume
$\dim(\phi^{-1}_{L,\delta}(\Delta_t))=p-3\delta$ for $t\in \ensuremath{\mathbf{D}}-\{0\}$.
We define the limit $\mathcal L_\Delta$ of $\phi^
{-1}_{L,\delta}(\Delta_t)$ as $t\to 0$ as the fibre over $0 \in \ensuremath{\mathbf{D}}$
of the closure of
$\bigcup _{t\neq 0} \bigl( \phi^ {-1}_{L,\delta}(\Delta_t) \bigr)$
inside $|L| \times \ensuremath{\mathbf{D}}$.
Then:\\
\begin{inparaenum}
\item $\mathcal L_\Delta$ is a
$(p-3\delta)$-dimensional sublinear system of $|L(-2\Delta)|$;\\
\item $\mathcal L_\Delta$ is contained in
$\overline V \cap \cstsev L \delta$;\\
\item since $\sev L \delta$ is smooth, by (ii) the general
curve in $\mathcal L_\Delta$ does not belong to $\sev L \delta$, i.e.,
it has singularities worse than only nodes at the points of
$\Delta$;\\
\item as $\Delta$ moves in a suitable dense open subset $U$ of
$\Phi_V$, the union $\bigcup _{\Delta\in U} \mathcal L_\Delta$ describes a
locally closed subset of dimension
\[
\dim(\Phi_V)+(p-3\delta)=(2\delta-1)+(p-3\delta)=g-1,
\]
which is dense in an irreducible component $W$ of
$\overline V\cap \cstsev L \delta$,
where $g=p-\delta$ as usual.
\end{inparaenum}
Let $C$ be the general curve in $W$, which belongs to $\mathcal
L_\Delta$ for some general $\Delta\in \Phi_V$. By (i) and (iii) above, $C$
is singular at $\Delta$ but it is not $\delta$-nodal. By Proposition
\ref {prop:dim} one has $p_g(C)\geqslant g-1$, hence $g-1\leqslant
p_g(C)\leqslant g$. We will show that each of these two possible
values leads to a contradiction, thus proving that $h_V\neq 1$.
\subsubsection{Case $p_g(C)=g-1$}
Since $\dim(W)=g-1$, it follows from Proposition \ref {prop:sev} that
$W$ is dense in the closure of a component of $\sev L {\delta+1}$, i.e.,
$C$ is a $(\delta+1)$-nodal curve, with only one extra node
$p_{\delta+1}\not\in \Delta$. By Proposition \ref {prop:ext}, locally
around $C$ there is only one smooth branch $\mathcal V$ of $\csev L
\delta$ containing $W$ and such that when the general point
$\tilde C$ of $\mathcal V$ specializes at $C$, the set of $\delta$
nodes of $\tilde C$ specializes at $\Delta$. This is a
contradiction, because both $\overline V$ and $\cstsev L \delta$
contain $W$. Therefore, it is impossible that
$p_g(C)=g-1$.
\subsubsection{Case $p_g(C)= g$}
Since $C$ is singular at $\Delta=p_1+\ldots+p_\delta$, it is singular
only there, and has only nodes and (simple) cusps (with local equation
$x^ 2=y^ 3$); it must have at least one cusp by (iii).
\begin{claim}\label{claim:1} $C$ has only one cusp. \end{claim}
\begin{proof} [Proof of the Claim]
Suppose that $C$ has cusps at $p_1,\ldots, p_k$ and nodes at
$p_{k+1},\ldots,p_\delta$, with $k\geqslant 1$. The tangent space to
the equisingular deformations of $C$ in $S$ is $\H^ 0(C, L\otimes
\mathcal I\otimes \mathcal O_C)$, where
$\mathcal I$ is the ideal sheaf associated
to the \emph{equisingular ideal} (see \cite [\S~3] {DeSe})
$I=\prod_{i=1}^ \delta I_{p_i}$, where:\\
\begin{inparaitem}
\item $I_{p_i}=(x, y^ 2)$, if the local equation of $C$ around $p_i$
is $x^ 2=y^ 3$, for $i=1,\ldots,k$;\\
\item $I_{p_i}$ is the maximal ideal at $p_i$, for $i=k+1,\ldots,\delta$.
\end{inparaitem}
Let $\nu: \tilde C\to C$ be the normalization.
We abuse notation and denote by
$p_1,\ldots, p_k$ their preimages under $\nu$, whereas we denote by
$p_{i}'$ and $p_i''$ the two points of $\tilde C$ in the preimage of
$p_i$ by $\nu$, for $i=k+1,\ldots,\delta$.
By pulling back by $\nu$ the sections of
$\H^ 0(C, L\otimes {\ensuremath{\mathcal{I}}}\otimes \mathcal O_C)$
and dividing by sections vanishing at the fixed divisor
$2\sum_{i=1}^ k p_i+\sum_{i=k+1}^ \delta(p_i'+p_i'')$
(see \cite [\S 3.3] {DeSe}), we find an isomorphism
\[
\nu^ *: \H^ 0(C, L\otimes {\ensuremath{\mathcal{I}}}\otimes \mathcal O_C)
\cong
\H^0(\tilde C,\omega_{\tilde C}(-p_1-\ldots- p_k)),
\]
hence
\begin{equation}\label{eq:in}
h^ 0(\tilde C,\omega_{\tilde C}(-p_1-\ldots- p_k))
= h^ 0(C, L\otimes {\ensuremath{\mathcal{I}}}\otimes \mathcal O_C)
\geqslant \dim(W)=g-1.
\end{equation}
This implies that the points $p_1,\ldots,p_k$ are all identified by
the canonical map of $\tilde C$,
which is possible only if either $k=1$, or $k=2$ and
$\dim(|p_1+p_2|)=1$. We now prove that
$\tilde C$ may not be hyperelliptic,
hence the latter case does not occur.
By Theorem~\ref{t:KLM}, if $\tilde C$ is hyperelliptic then
$\dim(W) = g-1 \leq 2$. This contradicts our assumptions that $3\delta
\leq p$ and $p \geq 11$: indeed, as $g=p-\delta$ they imply that $g>3$.
Hence the only possibility left is that
$k=1$,
which proves the claim.
\end{proof}
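For the reader's convenience, we spell out the numerical dichotomy used
in the proof (a routine Riemann--Roch and Clifford argument). By
Riemann--Roch on $\tilde C$, the inequality \eqref{eq:in} is equivalent to
\[
h^ 0(\tilde C, p_1+\ldots+p_k)\geqslant k,
\quad\text{i.e.}\quad
\dim(|p_1+\ldots+p_k|)\geqslant k-1.
\]
Since $p_1+\ldots+p_k$ is a special divisor of degree $k$ (indeed
$h^ 0(\tilde C,\omega_{\tilde C}(-p_1-\ldots-p_k))\geqslant g-1>0$),
Clifford's theorem gives $\dim(|p_1+\ldots+p_k|)\leqslant k/2$, hence
$k-1\leqslant k/2$, i.e.\ $k\leqslant 2$, with
$\dim(|p_1+p_2|)=1$ when $k=2$.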
Note moreover that since $k=1$, equality holds in \eqref{eq:in}.
Let $N_{C/S}\cong \restr L C$ be the normal bundle of $C$ in $S$.
We have the exact sequence
\[
\textstyle
0\to N'_{C/S} \to N_{C/S} \to T^ 1_C\cong
\mathcal O_{p_1}^2 \oplus \bigoplus_{i=2}^ \delta \mathcal O_{p_i} \to 0
\]
where $N'_{C/S}$ is the \emph{equisingular normal sheaf} of $C$ in
$S$, and one has
$
N'_{C/S}\cong N_{C/S}\otimes \mathcal I
$.
So $\H^ 0(C,N'_{C/S})=\H^ 0(C, L\otimes \mathcal I\otimes \mathcal O_C)$
is the tangent space to the equisingular deformations of $C$ in $S$.
We have $h^ 0(C,N_{C/S})=p$ and, as we saw, $h^
0(C,N'_{C/S})=g-1=p-\delta-1$. Thus the map
\begin{equation}\label{eq:T}
\H^ 0(C,N_{C/S})\to T^ 1_C
\end{equation}
is surjective, and
$\H^1(C,N'_{C/S})\cong \H^ 1(C,N_{C/S})\cong \ensuremath{\mathbf{C}}$.
Moreover the obstruction space to deformations of $C$ in $S$,
contained in $\H^ 1(C,N_{C/S})$, is zero as is well-known (see, e.g.,
\cite[\S~4.2]{DeSe}).
This implies that, locally around $C$, $\csev L \delta$ is the
product of the equigeneric deformation spaces inside the versal
deformation spaces of the singularities of $C$. By looking at the
versal deformation space of a cusp (see, e.g., \cite [p. 98] {HM}), we
deduce that $\csev L \delta$ has a double point at $C$ with a
single cuspidal sheet. This is a contradiction, because we assumed
that both $\overline V$ and $\cstsev L \delta$ contain
$C$. This contradiction proves that $p_g(C)=g$ cannot occur.\medskip
In conclusion we have proved that if $h_V= 1$ then
$p_g(C)$ equals either $g-1$ or $g$, but both these possibilities lead
to contradictions, hence $h_V \neq 1$.
\subsection{Proof that $h_V \neq 2$}
Still in the setup of Proposition~\ref{lem:main1}, we now assume by
contradiction that $h_V=2$.
Then $\dim(\Phi_V)=2\delta-2$. Let $\Delta\in \Phi_{V}$ be a general
point. Again $\Delta$ can be seen as the limit of general
1-dimensional families $\{\Delta_t\}_{t\in \ensuremath{\mathbf{D}}}$, where $\ensuremath{\mathbf{D}}$ is a disk,
and $\Delta_t$ is general in $S^{[\delta]}$ for $t\neq 0$. We consider
the closure $\mathcal L_\Delta$ of the union of all
$(p-3\delta)$-dimensional sublinear systems
$\lim _{t \to 0} \bigl( \phi^ {-1}_{L,\delta}(\Delta_t) \bigr) \subset
|L(-2\Delta)|$
as $\{\Delta_t\}_{t\in \ensuremath{\mathbf{D}}}$ varies among all families as above.
Similarly to the case $h_V=1$, we have:\\
\begin{inparaenum}
\item [(i)] $\mathcal L_\Delta$ is contained in $\overline V\cap
\cstsev L \delta$ and $\dim(\mathcal L_\Delta)=
p-3\delta+\epsilon$, with $0\leqslant \epsilon\leqslant 1$;\\
\item [(ii)] the general curve in $\mathcal L_\Delta$ is singular at
$\Delta$ but has singularities worse than only nodes at the points of
$\Delta$;\\
\item [(iii)] as $\Delta$ moves in a suitable dense open subset $U$ of
$\Phi_V$, the union $\bigcup _{\Delta\in U} \mathcal L_\Delta$ describes a
locally closed subset of dimension
\[
\dim(\Phi_V)+\dim(\mathcal L_\Delta)=g-2+\epsilon,
\]
which is dense in an irreducible component $W$ of
$\overline V\cap \cstsev L \delta$.
\end{inparaenum}
If $\epsilon=1$, then $\dim(W)=g-1$ and the discussion goes as in the
case $h_V=1$. So we assume $\epsilon=0$, hence $\dim(W)=g-2$. Let $C$
be the general curve in $W$. By Proposition \ref {prop:dim}, we have
$g-2\leqslant p_g(C)\leqslant g$.
We will prove that this cannot happen, thus proving that
$h_V\neq 2$. The proof parallels the one for $h_V\neq 1$.
\subsubsection{Case $p_g(C)=g-2$}
By Proposition~\ref {prop:sev},
$C$ is a $(\delta+2)$-nodal curve, with two extra nodes $p_{\delta+1},
p_{\delta+2}\not\in \Delta$ and $W$ is dense in the closure of a
component of $\sev L {\delta+2}$. By Proposition~\ref {prop:ext}, locally
around $C$ there is only one smooth branch $\mathcal V$ of $\csev L
\delta$ containing $W$ and such that when the general point
$C'$ of $\mathcal V$ specializes at $C$, the set of $\delta$
nodes of $C'$ specializes at $\Delta$. This is a contradiction,
because both $\overline V$ and $\cstsev L \delta$
contain $W$. Hence $p_g(C)=g-2$ cannot happen.
\subsubsection{Case $p_g(C)=g-1$}
\label{g-1}
In this case we have the two following disjoint possibilities for $C$:\\
\begin{inparaenum}[(a)]
\item $C$ has precisely one more singularity $p_0$ besides the ones in
$\Delta$;\\
\item $C$ has no singularities besides the ones in $\Delta$;
it has either an ordinary tacnode or a ramphoid cusp
(with local equation $x^2=y^{4+\epsilon}$,
$\epsilon =0$ or $1$ respectively) at one of the points
of $\Delta$,
and nodes or ordinary cusps at the other points of $\Delta$.
\end{inparaenum}
\medskip
\par\noindent {\itshape Subcase (a)}.
The points $p_0,\ldots, p_\delta$ are either nodes or cusps.
Arguing as for Claim~\ref{claim:1},
we see that at most one of these points can be a cusp.
If $C$ is $(\delta+1)$-nodal, then $W$ sits in an irreducible
component of $\csev L {\delta+1}$, and we get a contradiction as
in the proof of case $p_g(C)=g-1$ for $h_V=1$.
If $C$ is $\delta$-nodal and 1-cuspidal, then again the map \eqref
{eq:T} is surjective and the deformation space of $C$ is locally the
product of the versal deformation spaces at $p_0,\ldots, p_\delta$.
We then have the two following possibilities.
If $p_0$ is a node, then $W$ sits in a $(g-1)$-dimensional irreducible
variety $W'$ parametrizing curves which are $(\delta-1)$-nodal and
1-cuspidal, such that when the general member of $W'$ tends to $C$, its
singularities tend to $\Delta$. Moreover the map \eqref {eq:T}
is surjective for the general member of $W'$.
Then $W'$ should be contained in both $\overline V$ and
$\cstsev L \delta$. On the other hand, as usual by now, $\csev L
\delta$ should be unibranched along $W'$, a contradiction.
If $p_0$ is the cusp, then $W$ sits in a $(g-1)$-dimensional
irreducible component $W'$ of $\csev L {\delta+1}$, such that
when the general member of $W'$ tends to $C$, its singularities
tend to $p_0,\ldots, p_\delta$. By Corollary \ref {cor:incl}, $W'$
should be contained in both $\overline V$ and $\cstsev L \delta$,
leading again to a contradiction. \medskip
\medskip
\par\noindent {\itshape Subcase (b)}.
Suppose the tacnode
or ramphoid cusp
is located at $p_1$, that $p_2,\ldots, p_k$ are
cusps, and $p_{k+1},\ldots,p_\delta$
are nodes:
one has $1\leqslant k\leqslant \delta$, and $k=1$ (resp.\ $\delta$)
means that there is no cusp (resp.\ no node).
If $C$ has local equation
$x^ 2=y^ {4+\epsilon}$
around
$p_1$, then the equisingular ideal $I_{p_1}$ at $p_1$ is
$(x,y^ {3+\epsilon})$
(see \cite [\S 3] {DeSe}). As usual set $I=\prod_{i=1}^ \delta
I_{p_i}$ and let $\mathcal I$ be the corresponding ideal sheaf.
We have
\begin{equation}\label{eq:gen}
h^ 0(C,N'_{C/S})=h^ 0(C, N_{C/S}\otimes \mathcal I)\geqslant \dim(W)=g-2.
\end{equation}
Now we can look at $\H^ 0(C,N'_{C/S})$ as defining a linear series of
\emph{generalized divisors} on the singular curve $C$ (see \cite
{Hart} and \cite [\S 3.4] {DeSe}). Then $N'_{C/S}= N_{C/S}\otimes
\mathcal I\cong \omega_C(-E)$ where $E$ is the effective generalized
divisor on $C$ defined by the ideal sheaf $\mathcal I$ and \eqref
{eq:gen} reads
\begin{equation}\label{eq:cc}
h^ 0(C, \omega_C(-E))\geqslant g-2.
\end{equation}
The subscheme of $C$ defined by $\mathcal I$ has length
$3+\epsilon$ at $p_1$ (the tacnode or ramphoid cusp), length $2$ at
each cusp and length $1$ at each node, so that
\[
\mathop{\rm deg}(E)=3 +\epsilon +2(k-1)+\delta-k=\delta+k+1
+\epsilon.
\]
By Riemann--Roch and Serre duality
\cite[Theorems~1.3 and 1.4]{Hart}, one has
\begin{equation}\label {eq:bb}
h^ 0(C, \omega_C(-E))=h^ 1(C, \mathcal O_C(E))=h^ 0(C, \mathcal
O_C(E))-\mathop{\rm deg}(E)+p-1=h^ 0(C, \mathcal O_C(E))+g-k-2
-\epsilon.
\end{equation}
Next we argue as in the proof of \cite [Prop.~4.8] {DeSe}.
If $h^1(C, \mathcal O_C(E))<2$, then by \eqref {eq:cc} we have
$g\leq 3$,
which contradicts our assumptions that $3\delta \leq p$ and $\delta
>1$.
If on the other hand $h^ 0(C, \mathcal O_C(E))<2$, then by \eqref
{eq:cc} and \eqref {eq:bb} we have
\[
g-2\leqslant h^ 1(C, \mathcal O_C(E))\leqslant g-k-1
-\epsilon,
\]
hence $\epsilon=0$ and
$k=1$, i.e., the singularities of $C$ are precisely one ordinary
tacnode and $\delta-1$ nodes.
There is then equality in both
\eqref {eq:gen} and \eqref {eq:cc}, hence once more \eqref {eq:T} is
surjective and the deformation space of $C$ is locally the product of
the versal deformation spaces at $p_1,\ldots, p_\delta$. By looking at
the versal deformation space of a tacnode
(see \cite [p.~181]{CH}) we
see that $W$ is contained in $\csev L \delta$ which should be
unibranched along $W$, a contradiction.
So one has necessarily that $h^ i(C, \mathcal O_C(E))\geqslant 2$, for
$i=0,1$.
Then, since ${\rm Cliff}(C)=\lfloor\frac {p-1} 2\rfloor$ by
Theorem~\ref{t:GL-BFT}, one has
\[
\textstyle
p+1-h^ 0(C, \mathcal O_C(E))-h^ 1(C, \mathcal O_C(E))
=\mathop{\rm deg}(E)-2h^ 0(C, \mathcal O_C(E))+2
\geq \lfloor\frac {p-1} 2\rfloor
\]
hence
\[
\textstyle
g-2\leqslant h^ 1(C, \mathcal O_C(E))
\leqslant p+1 -\lfloor\frac {p-1} 2\rfloor
- h^ 0(C, \mathcal O_C(E))
\leqslant p-1- \lfloor\frac {p-1} 2\rfloor
=\lceil \frac {p-1} 2\rceil.
\]
Plugging in the inequality $3\delta \leq p$, one finds
\begin{equation}
\label{ineq:cliff}
\frac 2 3 p -2
\leq p-\delta-2
=g-2
\leq \lceil \frac {p-1} 2\rceil
\leq \frac p 2
\end{equation}
which implies $p \leq 12$ (comparing the two extremes gives
$\frac p 6 \leq 2$), hence $p=11$ or $12$.
Case $p=11$ is impossible by \eqref{ineq:cliff}, since there is no
integer between the two extremes $\frac {16} 3$ and $5$ of
\eqref{ineq:cliff}. If $p=12$,
then \eqref{ineq:cliff} implies $g=8$, hence $\delta=4$, which is
excluded by assumption. Hence subcase~(b) is impossible. This
concludes the proof that $p_g(C) \neq g-1$.
\subsubsection{Case $p_g(C)=g$}
As in the case $h_V=1$, $C$ is
singular only at $\Delta=p_1+\ldots+p_\delta$, having only nodes and
simple cusps, and it must have at least one cusp.
\begin{claim}\label{claim:2} $C$ has at most two cusps. \end{claim}
\begin{proof} [Proof of the Claim] The proof goes as the one of
Claim~\ref {claim:1}, from which we keep the notation. If $C$ has
cusps at $p_1,\ldots, p_k$, we have
\begin{equation}\label{eq:cusp}
h^ 0(\tilde{C},\omega_{\tilde{C}}(-p_1-\ldots- p_k))\geqslant \dim(W)=g-2.
\end{equation}
We argue by contradiction and assume $k\geqslant 3$. As in the proof
of Claim~\ref {claim:1}, we see that $\tilde{C}$ is not
hyperelliptic:
this would imply by Theorem~\ref{t:KLM} that
$g-2 =\dim(W) \leq 2$, hence $p=6$ and $g=4$; but in this case
$\delta=2$ and since $k \leq \delta$ we are out of the range $k
\geqslant 3$.
The only other possibility is that $\tilde{C}$ is
trigonal, $k=3$, and $\dim (|p_1+p_2+p_3|)=1$. In this case, one would
have $g-2 =\dim(W) \leq 4$ by Theorem~\ref{t:KLM}, which
together with the inequality $p \geq 3\delta$ implies that $p \leq 9$:
This is in contradiction with our assumptions. It is thus impossible
that $k \geq 3$, and the Claim is proved.
\end{proof}
By Claim~\ref {claim:2}, we have only the following two mutually
disjoint possibilities:\\
\begin{inparaenum}[(a)]
\item $C$ has precisely one cusp at $p_1$, and
$h^0(\tilde{C},\omega_{\tilde{C}}(-p_1))=g-1> g-2 = \dim(W)$;\\
\item $C$ has precisely two cusps at $p_1$ and $p_2$, and
$h^ 0(\tilde{C},\omega_{\tilde{C}}(-p_1- p_2))=g-2= \dim(W)$.
\end{inparaenum}
\medskip
\par\noindent {\itshape Subcase (a)}.
We have $h^ 0(C,N'_{C/S})=h^
0(\tilde{C},\omega_{\tilde{C}}(-p_1))=g-1$, hence the map \eqref
{eq:T} is surjective.
This implies as in the case $h_V=1$ and $p_g=g$
that $W$ is contained in a
subvariety $W'$ of dimension $g-1$ contained in
$\csev L \delta$, whose general point corresponds to a curve which has
$\delta-1$ nodes and one cusp, and, as in the proof of case $h_V=1$,
$\csev L \delta$ is unibranched locally at any point of $W'$
corresponding to such a curve for which the map \eqref {eq:T} is
surjective. This contradicts the fact that
$W$ is an irreducible component of
$\overline V\cap \cstsev L \delta$.
\medskip
\par\noindent {\itshape Subcase (b)}.
In this case $W$ is dense in the equisingular deformation locus of $C$
and again the map \eqref {eq:T} is surjective. This again implies that
$\csev L \delta$ is unibranched locally around $C$, which
leads to a contradiction.
\par\medskip
This concludes the proof that $h_V \neq 2$, hence also the proof of
Proposition~\ref{lem:main1}.
\section{Proof of Irreducibility if $p>4\delta -4$}
\label{S:conclusion}
In this section we conclude the proof of Theorem~\ref{t:main}.
So let $(S,L)$ be a primitively polarized $K3$ surface of genus $p
\geq 11$ such that $\mathop{\rm Pic}(S)=\ensuremath{\mathbf{Z}} [L]$, and $\delta$ be a non-negative
integer such that $4\delta-3 \leq p$.
These assumptions imply that $p \geq 3\delta$, so that the notion of
standard
component makes sense, and the Severi variety $\sev L \delta$ has a
unique standard component by Proposition~\ref{prop:stand}.
We assume by contradiction that $\sev L \delta$ is not irreducible:
this means that there exists a non-standard
component $V$ of the Severi variety $\sev L \delta$, and we shall see
that this contradicts the inequality $p>4\delta -4$.
Let $h=h_V$. If $\delta \leq 1$, then Theorem~\ref{t:main} is
trivial; otherwise we are in the range of application of
Proposition~\ref{lem:main1}
(note that the case $(p,\delta)=(12,4)$ is excluded by the hypothesis
$p\geq 4\delta-3$),
hence $h \geq 3$.
Consider a general member $C \in V$, and let
$\Delta = \{p_1, \ldots, p_\delta \}$
be the set of its nodes. Let
$\nu: \tilde C \to C$ be the normalization map, and for all
$i=1,\ldots,\delta$, $p_i'$ and $p_i''$ the two preimages of $p_i$ under
$\nu$. We consider the divisor
$\tilde \Delta = \sum _{i=1} ^\delta
(p_i'+p_i'')$
on $\tilde C$.
\begin{lem}
\label{l:dim-deltatilde}
The complete linear series $|\tilde \Delta|$ is a
$g^h _{2\delta}$.
\end{lem}
\begin{proof}
One has $h^1(\tilde \Delta)=p-3\delta+h$ by Lemma~\ref{l:indep-adj^2},
and then the result follows from the Riemann--Roch formula.
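Explicitly, since $\deg(\tilde \Delta)=2\delta$ and $\tilde C$ has genus
$g=p-\delta$, Riemann--Roch gives
\[
h^ 0(\tilde \Delta)=\deg(\tilde \Delta)-g+1+h^ 1(\tilde \Delta)
=2\delta-(p-\delta)+1+(p-3\delta+h)=h+1,
\]
so that $\dim(|\tilde \Delta|)=h$.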
\end{proof}
\begin{proof}[Conclusion of the proof of Theorem~\ref{t:main}]
We maintain the above setup.
We first apply Theorem~\ref{t:CK}:
Let $g=p-\delta$ denote the geometric genus of $C$, and set
\begin{equation*}
\alpha =
\left\lfloor
\frac
{gh + (2\delta-h)(h-1)}
{2h(2\delta-h)}
\right\rfloor
=
\left\lfloor
\frac g {2(2\delta-h)}
+ \frac {h-1} {2h}
\right\rfloor ;
\end{equation*}
the existence of a $g^h_{2\delta}$ on $\tilde C$ implies the
inequality
\begin{align}
\label{eq:CK}
\alpha hg +\alpha h(\alpha h +1)
& \leq
\delta (2\alpha^2 h + 2\alpha +1).
\end{align}
Let us also apply
Theorem~\ref{t:KLM}:
The existence of a
$g^h_{2\delta}$ on $\tilde C$ induces the existence of a family
of dimension $2(h-1)$ of
$g^1_{2\delta}$'s on $\tilde C$, parametrizing the lines in the
$g^h_{2\delta}$, so it holds that
\begin{equation*}
\dim (V) + \dim \bigl( G^1_{2\delta}(\tilde C) \bigr) \geq g + 2(h-1),
\end{equation*}
which implies by Theorem~\ref{t:KLM} that
\begin{align}
g &\leq 2(2\delta-h).
\label{ineq:KLM}
\end{align}
Inequality~\eqref{ineq:KLM} implies that
\begin{equation*}
\alpha
= \left\lfloor
\frac g {2(2\delta-h)}
+ \frac {h-1} {2h}
\right\rfloor
\leq \left\lfloor
1+\frac 1 2
\right\rfloor
= 1.
\end{equation*}
Let us now show by contradiction that $\alpha=1$. If $\alpha \leq 0$,
then
\begin{equation*}
\frac
{gh + (2\delta-h)(h-1)}
{2h(2\delta-h)}
<1
\iff
g <
(2\delta-h)(1+\frac 1 h)
\iff
p <
\delta (3+ \frac 2 h)-h-1;
\end{equation*}
plugging in the inequality $h \geq 3$, we get that $\alpha\leq 0$ implies
$p <
\frac {11} 3 \delta -4$,
in contradiction with our assumption that $p>4\delta-4$. Hence
$\alpha=1$.
Therefore, \eqref{eq:CK} gives the inequalities
\begin{equation*}
hg + h( h +1)
\leq
\delta (2 h + 3)
\iff
p \leq
\delta (3 + \frac 3 h)
-h-1.
\end{equation*}
Taking into account the fact that $h \geq 3$,
this implies that $p \leq 4\delta-4$.
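Indeed, the right-hand side $\delta (3+\frac 3 h) -h-1$ is decreasing
in $h$, so its value at $h=3$ bounds it from above:
\[
\textstyle
p \leq \delta \bigl(3+\frac 3 3\bigr) -3-1 = 4\delta-4.
\]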
In conclusion, the existence of a non-standard component of
$\sev L \delta$ is in contradiction with the inequality
$p > 4\delta-4$.
\end{proof}
\section{Introduction}\label{sec1}
Young solar-type stars typically have strong magnetic fields with complex morphologies, like the closed loops surrounding active regions on the Sun \citep{Garraffo2018}. After about 50~Myr, the underlying stellar dynamo mechanism apparently becomes efficient at organizing the magnetic field on larger scales. The emergence of this large-scale organization has important consequences for the strong coupling between rotation and magnetic activity during the first half of stellar main-sequence lifetimes \citep{Skumanich1972}. The physical mechanism that produces this coupling is known as magnetic braking. Charged particles in a stellar wind are entrained in the magnetic field out to a critical distance known as the Alfv\'en radius, carrying away stellar angular momentum in the process. Most of the angular momentum that is lost from magnetic braking can be attributed to the largest scale components of the field, which have a longer effective lever-arm and more open field lines where the stellar wind can escape \citep{Reville2015, Garraffo2016, See2019}.
Middle-aged stars often have some of the clearest stellar activity cycles \citep{Brandenburg2017}. This may be a consequence of their slower rotation rates, which either fail to excite a second dynamo in the near surface shear layer \citep{BohmVitense2007}, or yield activity cycle periods that are much longer than the currently available data sets \citep{Baliunas1995}. Not long after rotation becomes slow enough to produce monoperiodic activity cycles ($P_{\rm rot}\sim 20$~days for solar analogs), it becomes too slow to imprint substantial Coriolis forces on the global convective patterns \citep{Featherstone2016}. This leads to a disruption of the solar-like pattern of differential rotation (i.e.\ faster at the equator and slower at the poles), and a gradual loss of shear to drive the organization of large-scale field by the global dynamo. The observational consequences of this mid-life transition include nearly uniform rotation in older stars \citep{Benomar2018}, weakened magnetic braking that temporarily stalls the rotational evolution \citep{vanSaders2016, Hall2021}, and a gradual decline in stellar activity until the cycles disappear entirely \citep{Metcalfe2016, Metcalfe2017}.
\cite{Metcalfe2019} recently tested this new understanding of magnetic stellar evolution using spectropolarimetric measurements of two stars with activity levels on opposite sides of the proposed mid-life transition. The more active star 88~Leo has a rotation period near 14 days and exhibits clear activity cycles, while the less active star $\rho$~CrB has a rotation period near 17 days and shows constant activity over several decades of monitoring \citep{Baliunas1995, Baliunas1996}. The snapshot observations with the Potsdam Echelle Polarimetric and Spectroscopic Instrument \citep[PEPSI,][]{Strassmeier2015} on the Large Binocular Telescope (LBT) appeared to confirm the predicted loss of large-scale magnetic field. The data produced a clear detection of a nonaxisymmetric dipole field in 88~Leo, and an upper limit on the dipole field strength in $\rho$~CrB that was well below what would be expected from its relative activity level---suggesting that most of the field was concentrated in smaller spatial scales. The age of $\rho$~CrB from gyrochronology (a lower limit on the actual age with weakened magnetic braking) was reported to be $2.5\pm0.4$~Gyr by \cite{Barnes2007}, while other age indicators suggested that it is substantially more evolved \citep{Valenti2005, Mamajek2008}.
In this paper, we aim to characterize the proposed magnetic transition by combining archival stellar activity data from the Mount Wilson Observatory (MWO) with asteroseismology from the {\it Transiting Exoplanet Survey Satellite} \citep[TESS,][]{Ricker2014}. In Section~\ref{sec2}, we reanalyze the complete MWO data sets for $\rho$~CrB and 88~Leo to assess their mean activity levels and rotation periods, we use TESS photometry to search for solar-like oscillations, we obtain X-ray luminosities to help constrain mass-loss rates, and we adopt additional constraints on the stellar properties using published spectroscopy, photometry, and astrometry. In Section~\ref{sec3}, we detect a signature of solar-like oscillations in $\rho$~CrB, and we derive precise stellar properties from asteroseismic modeling. In Section~\ref{sec4}, we assess the compatibility of the observations with an activity-age relation for solar analogs \citep{LorenzoOliveira2018}, and we estimate the magnetic braking torque using a simple wind modeling prescription. In Section~\ref{sec5}, we attempt to match the observations with rotational evolution models that assume either standard spin-down or weakened magnetic braking. Finally, we summarize and discuss our results in Section~\ref{sec6}, concluding that the asteroseismic age of $\rho$~CrB is consistent with the expected evolution of its mean activity level, and that weakened braking models can more readily explain its relatively fast rotation rate.
\begin{figure*}[p]
\centering\includegraphics[width=5.5in]{fig1a.pdf}
\hspace*{2pt}\centering\includegraphics[width=5.35in]{fig1b.pdf}\vspace*{12pt}
\includegraphics[width=5.5in]{fig1c.pdf}
\hspace*{2pt}\centering\includegraphics[width=5.35in]{fig1d.pdf}
\caption{Time series and Lomb-Scargle periodograms for $\rho$~CrB (top two panels) and 88~Leo (bottom two panels) showing the recovered rotation signals of 20.3 days and 15.0 days, respectively. Red lines in the periodograms indicate the 5\% false alarm probability calculated from a Monte Carlo process. (The data used to create this figure are available).\label{fig1}}
\end{figure*}
\section{Observations}\label{sec2}
\subsection{Mount Wilson HK data}\label{sec2.1}
Both $\rho$~CrB and 88~Leo have synoptic $S$-index time series from the MWO HK Project, ranging from near the beginning of the program in 1966 to its termination in 2003 (see Figure~\ref{fig1}). The MWO $S$-index measures the ratio of emission from 1~\AA{} cores of the Ca {\sc ii} H \& K lines to the sum of two nearby 20~\AA{} pseudo-continuum bandpasses \citep{Vaughan1978}. Such measurements are routinely used in studies of magnetic activity cycles and stellar rotation \citep[e.g.][]{Baliunas1996, Donahue1996}. Our analysis of the complete MWO time series gives a mean $S$-index of 0.1508 for $\rho$~CrB and 0.1655 for 88~Leo, in agreement with previous averages from the subset of data analyzed in \citet{Baliunas1995}. Adopting the spectroscopic temperatures from \cite{Brewer2016} and the activity scale from \cite{LorenzoOliveira2018}, we find $\log R'_{\rm HK}[T_{\rm eff}] = -5.177 \pm 0.015$ for $\rho$~CrB and $-4.958 \pm 0.015$ for 88~Leo (see Section~\ref{sec4}).
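Schematically, the $S$-index computation reduces to a band-flux ratio. The sketch below is illustrative only: the function name and inputs are ours, and the calibration factor $\alpha\approx2.4$ often quoted for the MWO HKP-2 photometer is adopted here as an assumption.

```python
def s_index(h_core, k_core, r_cont, v_cont, alpha=2.4):
    """MWO-style S-index: summed counts in the 1-Angstrom H and K line
    cores divided by the summed counts in the two 20-Angstrom
    pseudo-continuum bands (R: redward, V: violetward), scaled by an
    instrumental calibration factor alpha (assumed value here)."""
    return alpha * (h_core + k_core) / (r_cont + v_cont)
```

For example, with equal core counts of 1.0 and continuum counts of 16.0 in each band, `s_index(1.0, 1.0, 16.0, 16.0)` returns 0.15.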
We applied the Lomb-Scargle periodogram to the entire time series as well as seasonal bins in order to search for rotational signals. We took signals with a false alarm probability (FAP) less than 5\% to be statistically significant. The FAP is defined as the probability that a peak in the periodogram is due to Gaussian noise \citep{Horne1986}, and we have calculated the FAP using that definition explicitly in a Monte Carlo simulation of 100,000 trials. In each trial, synthetic data of the same sampling cadence and standard deviation as the observational data are randomly drawn from the Gaussian distribution, and the Lomb-Scargle periodogram is computed. The fraction of random trials generating periodogram peaks higher than the one obtained from the observational data is the FAP. The uncertainty in the period is found by a similar Monte Carlo process where the observational data are moved within their 1\% uncertainty \citep{Baliunas1995} and the standard deviation of peak periods is computed. Using this method, we find a rotation period of $20.3 \pm 1.8$ days for $\rho$~CrB (FAP\,=\,4.2\%) and $15.0 \pm 0.3$ days for 88~Leo (FAP\,=\,1.2\%) from the complete time series. Figure~\ref{fig1} shows the time series and Lomb-Scargle periodograms for both stars, with the 5\% FAP line computed from the Monte Carlo shown as a red line. Single season analyses returned no significant peaks for $\rho$~CrB \citep[which is not unusual for ``flat activity'' stars,][]{Donahue1996}, and one season with a significant peak for 88~Leo, giving a rotation period of $14.3 \pm 0.8$ days (FAP\,=\,1.4\%) and confirming the global result.
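The Monte Carlo FAP procedure described above can be sketched as follows. This is an illustrative re-implementation, not the code used for the analysis: `scipy.signal.lombscargle` stands in for the periodogram implementation, and the frequency grid, random seeds, and trial counts are our own choices.

```python
import numpy as np
from scipy.signal import lombscargle

def mc_fap(t, y, freqs, n_trials=1000, seed=0):
    """Monte Carlo false-alarm probability: the fraction of pure-noise
    realizations (same time sampling and standard deviation as the data)
    whose highest periodogram peak meets or exceeds the observed peak."""
    rng = np.random.default_rng(seed)
    obs_peak = lombscargle(t, y - y.mean(), freqs).max()
    sigma = y.std()
    n_exceed = 0
    for _ in range(n_trials):
        noise = rng.normal(0.0, sigma, size=t.size)
        if lombscargle(t, noise - noise.mean(), freqs).max() >= obs_peak:
            n_exceed += 1
    return n_exceed / n_trials

# Synthetic S-index-like series: a 20.3-day signal sampled unevenly
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 3000.0, 400))         # observation times [days]
y = 0.01 * np.sin(2 * np.pi * t / 20.3) + rng.normal(0.0, 0.01, t.size)
freqs = 2 * np.pi / np.linspace(10.0, 40.0, 2000)  # angular frequencies
fap = mc_fap(t, y, freqs, n_trials=200)
```

With a genuine periodic signal well above the noise, essentially no pure-noise trial reaches the observed peak height and the estimated FAP is near zero; for weaker signals the fraction of exceeding trials grows accordingly.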
Our rotation period for $\rho$~CrB is $\sim$2$\sigma$ longer than the 17~days found by \citet{Baliunas1996}, who used a subset of the MWO data and did not provide an uncertainty. However, our result agrees with \citet{Henry2000} who used a longer subset of the MWO data ($\langle P_{\rm rot} \rangle = 19 \pm 2$~days, with seasonal values between 17--20~days), and with \citet{Fulton2016} who found 18.5~days from Keck observations. For 88~Leo, we find good agreement with the 14~day rotation period determined by \citet{Baliunas1996} and the 14.32~day period determined by \citet{Olah2009} from the complete MWO time series.
\subsection{TESS photometry}\label{sec2.2}
TESS observed $\rho$~CrB in 2-minute cadence for a total of approximately 52 days during Sectors 24 and 25 of Cycle~2 (2020~Apr~15 – 2020~Jun~08). We downloaded the PDC-MAP SPOC light curve \citep{Jenkins2016}, but also derived our own light curve following the procedure described in \citet{Nielsen2020} and \citet{Buzasi2015} in hopes of improving on the noise level in the SPOC product. We treated sectors individually, masking cadences with nonzero quality flags. We then built a collection of single-pixel light curves for each pixel in the $25 \times 25$ pixel postage stamp. Our figure of merit for the quality of a light curve was the sum of the absolute values of the first-differenced light curve, generally a good proxy for high-frequency noise \citep{Nason2006}. Starting from the brightest pixel, we then added pixels one at a time to the light curve, choosing in each case the pixel that produced the largest decrease in our noise figure of merit, and continuing until light curve quality no longer improved. This process resulted in a somewhat larger aperture than that derived by the SPOC (114 pixels vs.\ 59 for Sector 24 and 108 pixels vs.\ 51 for Sector 25). The resulting light curves were then detrended of instrumental effects by fitting a second-order polynomial in $x$ and $y$ pixel location. We compared the resulting light curve to the SPOC product; improvement was modest but noticeable ($\sim$6\% decreased noise) at frequencies above 1~mHz, so we chose to use our light curve for the asteroseismic analysis in Section~\ref{sec3.1}.
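The greedy aperture-building step can be sketched as follows, using synthetic pixel data in place of real TESS target-pixel files. The figure of merit here is evaluated on the mean-normalized sum (so that adding pixels containing stellar flux can reduce the relative noise), a detail on which the published procedure may differ.

```python
import numpy as np

def noise_fom(lc):
    """Figure of merit: sum of absolute first differences of the
    mean-normalized light curve (a proxy for high-frequency noise)."""
    return np.abs(np.diff(lc / lc.mean())).sum()

def build_aperture(pixels):
    """Greedy aperture assembly: `pixels` maps (row, col) -> flux series.
    Start from the brightest pixel and repeatedly add the candidate that
    most decreases the noise figure of merit of the summed light curve,
    stopping as soon as no candidate improves it."""
    remaining = dict(pixels)
    start = max(remaining, key=lambda k: remaining[k].sum())
    aperture = [start]
    lc = remaining.pop(start).copy()
    fom = noise_fom(lc)
    improved = True
    while improved and remaining:
        improved = False
        best = min(remaining, key=lambda k: noise_fom(lc + remaining[k]))
        if noise_fom(lc + remaining[best]) < fom:
            lc = lc + remaining.pop(best)
            fom = noise_fom(lc)
            aperture.append(best)
            improved = True
    return aperture, lc

# Synthetic postage stamp: two pixels carry most of the stellar flux,
# one carries very little, and one is noisy background
rng = np.random.default_rng(0)
n = 500
star = 1000.0 + rng.normal(0.0, 1.0, n)
pixels = {(0, 0): 0.60 * star + rng.normal(0.0, 1.0, n),
          (0, 1): 0.35 * star + rng.normal(0.0, 1.0, n),
          (1, 0): 0.05 * star + rng.normal(0.0, 1.0, n),
          (5, 5): 10.0 + rng.normal(0.0, 5.0, n)}
aperture, lc = build_aperture(pixels)
```

In this toy example the two bright stellar pixels are aggregated, while the background pixel is rejected because it raises the relative noise of the summed light curve.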
We applied a similar photometric reduction algorithm to 88~Leo. TESS observed this star in 2-minute cadence for a total of approximately 27 days during Sector~22 of Cycle~2 (2020~Feb~18 – 2020~Mar~18). Once again, the process resulted in a somewhat larger aperture than that derived by the TESS SPOC (71 pixels vs.\ 36). After extraction and detrending, the noise level was lowered by approximately 15\% above 1~mHz.
\subsection{Spectral Energy Distribution}\label{sec2.3}
In order to provide an initial, empirical constraint on the stellar luminosities and radii, we performed an analysis of the broadband spectral energy distributions (SEDs) together with the {\it Gaia\/} EDR3 parallaxes following the procedures described in \citet{Stassun:2016} and \citet{Stassun:2017,Stassun:2018}. We pulled the FUV and NUV fluxes from {\it GALEX}, the $UBV$ magnitudes from \citet{Mermilliod:2006}, the Str\"omgren $uvby$ magnitudes from \citet{Paunzen:2015}, the $JHK_S$ magnitudes from {\it 2MASS}, the W1--W4 magnitudes from {\it WISE}, and the $G\,G_{BP}\,G_{RP}$ magnitudes from {\it Gaia}. Together, the available photometry spans the full stellar SED over the wavelength range 0.2--22~$\mu$m (see Figure~\ref{fig:sed}).
\begin{figure}
\centering\includegraphics[width=\columnwidth,trim=150 70 80 70,clip]{fig2a.pdf}
\centering\includegraphics[width=\columnwidth,trim=150 70 80 70,clip]{fig2b.pdf}
\caption{Spectral energy distributions for $\rho$~CrB (top) and 88~Leo (bottom). Red symbols are the observed photometric measurements, where the horizontal bars represent the effective width of the passband. Blue symbols are the model fluxes from the best-fit Kurucz atmosphere model (black).\label{fig:sed}}
\end{figure}
We performed a fit using Kurucz stellar atmosphere models \citep{Castelli2004}, adopting the effective temperature ($T_{\rm eff}$) and metallicity ([M/H]) from the spectroscopically determined values of \citet{Brewer2016}. Uncertainties were inflated to account for a realistic systematic noise floor: $T_{\rm eff} = 5833 \pm 78$~K, $\text{[M/H]} = -0.18 \pm 0.07$~dex for $\rho$~CrB, and $T_{\rm eff} = 6002 \pm 78$~K, $\text{[M/H]} = +0.04 \pm 0.07$~dex for 88~Leo. The extinction ($A_V$) was fixed at zero due to the proximity of the stars to Earth. The resulting fits (Figure~\ref{fig:sed}) have a reduced $\chi^2$ between 1--2 for both stars. Integrating the (unreddened) model SED gives the bolometric flux at Earth ($F_{\rm bol}$). Taking this $F_{\rm bol}$ together with the {\it Gaia\/} EDR3 parallax, with no systematic adjustment \citep[e.g., see][]{StassunTorres:2021}, yields bolometric luminosities for $\rho$~CrB and 88~Leo of $L_{\rm bol} = 1.746 \pm 0.041\ L_\odot$ and $L_{\rm bol} = 1.482 \pm 0.088\ L_\odot$, respectively. In addition, the $L_{\rm bol}$ together with the $T_{\rm eff}$ yields stellar radii for $\rho$~CrB and 88~Leo of $R = 1.295 \pm 0.025\ R_\odot$ and $R = 1.127 \pm 0.037\ R_\odot$, respectively. Finally, we can estimate the stellar mass using the empirical eclipsing-binary based relations of \citet{Torres:2010}, which gives $M = 1.09 \pm 0.07\ M_\odot$ and $M = 1.14 \pm 0.07\ M_\odot$ for $\rho$~CrB and 88~Leo, respectively.
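As a consistency check, the quoted radii follow directly from $L_{\rm bol}$ and $T_{\rm eff}$ via the Stefan-Boltzmann law. The short sketch below reproduces them using IAU nominal solar values (those constants are an assumption of this check, not part of the SED analysis):

```python
import math

SIGMA = 5.670374419e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26            # IAU nominal solar luminosity, W
R_SUN = 6.957e8             # IAU nominal solar radius, m

def radius_from_L_teff(L_over_Lsun, teff):
    """R/Rsun implied by L = 4 pi R^2 sigma Teff^4."""
    L = L_over_Lsun * L_SUN
    return math.sqrt(L / (4 * math.pi * SIGMA * teff**4)) / R_SUN

r_rho_crb = radius_from_L_teff(1.746, 5833)   # -> ~1.29 Rsun
r_88_leo = radius_from_L_teff(1.482, 6002)    # -> ~1.13 Rsun
```

Both values agree with the SED-based radii of $1.295\ R_\odot$ and $1.127\ R_\odot$ quoted above.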
\subsection{X-ray data}\label{sec2.4}
We obtained a {\it Chandra} observation of $\rho$~CrB using the High Resolution Camera imaging detector (HRC-I) on 2020~Apr~19 starting at UT\,14:59 for a net exposure time of 11870\,s. This instrument was chosen because it has the best available low-energy sensitivity for imaging observations. An earlier observation of $\rho$~CrB had been obtained (PI:~S.~Saar) on 2012~Jan~17 beginning at UT\,13:12 using the Advanced CCD Imaging Spectrometer spectroscopic array (ACIS-S) on the back-illuminated CCD (``s3'') for a net exposure of 9835\,s.
Both observations were downloaded from the {\it Chandra} archive and reprocessed using the {\it Chandra} Interactive Analysis of Observations (CIAO) software version 4.13 and calibration database version 4.9.4. While the ACIS-S data in principle have energy information for each photon from which a low-resolution X-ray spectrum can be derived, the $\rho$~CrB data contained only a handful of photon counts. The HRC-I data have no useful energy resolution. Analysis for both detectors therefore proceeded similarly, by examining the photon counts attributable to $\rho$~CrB and using the instrument effective area to infer the implications for the X-ray flux. A summary of the observational results is presented in Table~\ref{tab1}.
\begin{table}[b]
\setlength{\tabcolsep}{4pt}
\centering
\caption{Summary of {\it Chandra} results for $\rho$~CrB}
\begin{tabular}{lcc}
\hline\hline
Parameter & HRC-I & ACIS-S\\
\hline
Chandra ObsID & 22308 & 12396 \\
Net exposure (s) & 11870 & 9835 \\
$\rho$~CrB count rate (count~ks$^{-1}$) & $2.85\pm 0.51$ & $0.77\pm 0.31$ \\
\cline{2-3}
Isothermal plasma temperature &
\multicolumn{2}{l}{ $(1.58\pm 0.32) \times 10^6$~K} \\
X-ray Luminosity $L_X^a$ & \multicolumn{2}{l}{$(9.1\pm 1.9)\times 10^{26}$ erg s$^{-1}$}\\
\hline
\end{tabular}
\label{tab1} \\
{\footnotesize $^a$Best estimate of the X-ray luminosity assuming an isothermal optically-thin plasma radiative loss model with a solar mixture of abundances scaled by a metallicity [M/H]\,$=-0.18$, and an interstellar absorbing column of $1.95\times 10^{18}$~cm$^{-2}$.\vspace*{-12pt}}
\end{table}
In order to provide insight into the source X-ray luminosity giving rise to the HRC-I and ACIS-S signals, we used the {\tt PIMMS} software\footnote{\url{https://heasarc.gsfc.nasa.gov/docs/software/tools/pimms.html}} version 4.11 to convert the observed HRC-I and ACIS-S count rates to the incident \mbox{X-ray} flux. Since the ACIS-S data contain too few counts to estimate a coronal temperature, we made the flux conversion for a range of isothermal plasma temperatures. We adopted the APEC optically-thin plasma radiative loss model \citep{Foster2012}, the metallicity of [M/H]=$-$0.18 from \citet{Brewer2016}, and the solar abundance mixture of \citet{Asplund2009}, together with an intervening hydrogen column density of $1.95\times 10^{18}$~cm$^{-2}$. This column density was estimated by interpolation within the compilations of column density measurements of \citet{Gudennavar2012} and \citet{Linsky2019}, for the {\it Gaia} EDR3 distance of 17.51~pc.
The X-ray luminosities in the ROSAT 0.1--2.4\,keV band corresponding to the observed HRC-I and ACIS-S count rates are illustrated as a function of isothermal plasma temperature in Figure~\ref{fig3}. Shaded regions illustrate the range of uncertainties based on the uncertainties in the extracted count rates. Sensitivity of the results to the adopted absorbing column was determined by repeating the luminosity calculations for lower and higher values of $N_\mathrm{H}$ by a factor of two. Sensitivity to metallicity was also checked in a similar way, by varying metallicity by a factor of two, and found to be negligible.
Table~\ref{tab1} summarizes the {\it Chandra} results for coronal luminosity and plasma temperature under the isothermal approximation. Final values were determined by the intersection of the HRC-I and ACIS-S $L_X$--$T$ loci and uncertainty ranges. By far the largest uncertainty is in the estimate of the X-ray count rates. The final estimate of the X-ray luminosity for $\rho$~CrB, $(9.1 \pm 1.9)\times 10^{26}$~erg~s$^{-1}$, is very similar to that of the quiet Sun \citep[e.g.][]{Judge2003}, while the temperature is similar to the peak of the quiet Sun emission measure distribution \citep[e.g.][]{Brosius96}.
\begin{figure}
\centering\includegraphics[width=\columnwidth]{fig3.pdf}
\caption{The X-ray luminosity of $\rho$~CrB for isothermal, collision-dominated, optically-thin plasma radiative loss as a function of plasma temperature implied by the observed {\it Chandra} HRC-I and ACIS-S count rates. Shaded regions indicate the uncertainties arising from the count rate measurements and a 100\% error in the assessment of the intervening neutral hydrogen column density.
\label{fig3}}
\end{figure}
For 88~Leo, we start with the X-ray luminosity $\log L_X = 27.77$ from \citet{Wright+2011}, which was derived from ROSAT PSPC data. This value was computed using the observed count rate \citep[$0.0306\pm0.0102$~counts~s$^{-1}$,][]{Voges+2000} and the hardness ratio (HR\,=\,$-1$) calibration of \citet{Schmitt+1995} with a distance of $d = 23.0$~pc and $L_{\rm bol} = 1.50\ L_{\odot}$. Adjusting for the updated properties from Section~\ref{sec2.3}, we find $L_X\,=\,(6.1 \pm 2.1)\times 10^{27}$~erg~s$^{-1}$ and $\log L_X/L_{\rm bol}\,=\,-5.96^{+0.14}_{-0.19}$. Adopting the SED radius from Section~\ref{sec2.3}, the surface flux is $\log F_X\,=\,4.90^{+0.14}_{-0.19}$. As a check, we used {\tt PIMMS} with the above count rate, a column of $N_H\,=\,10^{18}$~cm$^{-2}$ \citep[based on the star's presence in the NGP cloud,][]{Linsky2019}, solar abundance and APEC models. We find $\log F_X\,=\,4.92$ at $T_X\,=\,1.04^{+0.75}_{-0.35}\times 10^6$~K, which is reasonable given the hardness ratio. The ROSAT observations were obtained in late 1990, when the Ca~HK emission was slightly below the average level, so the \mbox{X-ray} observations should represent below average coronal conditions for 88~Leo.
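The surface-flux bookkeeping in this section is a one-line computation, $F_X = L_X/(4\pi R^2)$. The sketch below reproduces the quoted $\log F_X$ for 88~Leo from its $L_X$ and SED radius; the corresponding value for $\rho$~CrB follows from the same arithmetic (it is not quoted above, but is the input to the mass-loss scaling in Section~\ref{sec4}):

```python
import math

R_SUN_CM = 6.957e10  # IAU nominal solar radius, cm

def log_surface_flux(L_x_erg_s, R_over_Rsun):
    """log10 of the X-ray surface flux F_X = L_X / (4 pi R^2), erg s^-1 cm^-2."""
    R = R_over_Rsun * R_SUN_CM
    return math.log10(L_x_erg_s / (4 * math.pi * R**2))

logFx_88leo = log_surface_flux(6.1e27, 1.127)    # -> ~4.90, as quoted
logFx_rhocrb = log_surface_flux(9.1e26, 1.304)   # -> ~3.94
```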
\section{Asteroseismology of \texorpdfstring{$\rho$~C\lowercase{r}B}{rho CrB}}\label{sec3}
\subsection{Global oscillation parameters}\label{sec3.1}
The expected frequency of maximum power ($\nu_{\rm max}$) for $\rho$~CrB based on the TESS Asteroseismic Target List \citep[ATL,][]{schofield19} is $\approx$\,2000\,$\mu$Hz, with a detection probability of $\approx$\,65\%. The top panel of Figure~\ref{fig4} shows a power spectrum of the TESS light curve from Section~\ref{sec2.2}. The spectrum displays a low signal-to-noise power excess around 1800\,$\mu$Hz, which is consistent with the ATL given uncertainties in predicted $\nu_{\rm max}$ values.
To test whether the power excess is consistent with solar-like oscillations, we calculated an autocorrelation of the power spectrum between $\approx$\,1400--2100\,$\mu$Hz (inset in the top panel of Figure~\ref{fig4}). The autocorrelation shows a peak at $\approx$\,89\,$\mu$Hz, close to the expected value for the characteristic large frequency separation ($\Delta\nu$) for solar-like oscillations in this frequency range \citep{stello09c}. We furthermore calculated an \'{e}chelle diagram (bottom panel of Figure~\ref{fig4}) by dividing the power spectrum into equal segments with length $\Delta\nu$ and stacking one above the other, so that modes with a given spherical degree align vertically in ridges \citep{grec83}. The offset of the visible ridge in the \'{e}chelle diagram, which is sensitive to the properties of the near-surface layers of the star \citep[e.g.][]{jcd14}, is consistent with expectations for a ridge of dipole ($l=1$) modes based on \textit{Kepler} measurements of stars with similar $\Delta\nu$ and $T_{\rm eff}$ \citep{white11}.
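The two diagnostics used here, the power-spectrum autocorrelation and the \'{e}chelle diagram, can be sketched in a few lines. The example below operates on a synthetic comb of peaks with a spacing of 89.3\,$\mu$Hz rather than on the TESS data, and omits the smoothing and background treatment of the real pipelines:

```python
import numpy as np

def echelle(freq, power, dnu, fmin, fmax):
    """Stack a power spectrum into segments of width dnu (one per row);
    modes of a given spherical degree then align in vertical ridges."""
    df = freq[1] - freq[0]
    n_bins = int(round(dnu / df))
    start = np.searchsorted(freq, fmin)
    n_rows = int((fmax - fmin) // dnu)
    rows = [power[start + i * n_bins: start + (i + 1) * n_bins]
            for i in range(n_rows)]
    return np.vstack(rows)

# Synthetic spectrum: comb of peaks spaced by 89.3 muHz on a noise background
rng = np.random.default_rng(0)
df = 0.2                                   # frequency resolution, muHz
freq = np.arange(1000.0, 2500.0, df)
dnu_true = 89.3
power = rng.chisquare(2, freq.size)
for nu0 in np.arange(1400.0, 2100.0, dnu_true):
    power += 50 * np.exp(-0.5 * ((freq - nu0) / 1.0) ** 2)

# The autocorrelation of the spectrum peaks at lags of ~dnu
seg = power[(freq >= 1400) & (freq < 2100)]
seg = seg - seg.mean()
acf = np.correlate(seg, seg, mode="full")[seg.size - 1:]
lag = np.arange(acf.size) * df
dnu_est = lag[50 + np.argmax(acf[50:])]    # skip the zero-lag peak
img = echelle(freq, power, dnu_est, 1400.0, 2100.0)
```

The recovered spacing matches the input comb spacing to within the frequency resolution, and the stacked image contains one ridge per folded segment.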
\begin{figure}
\centering\includegraphics[width=\columnwidth]{fig4.pdf}
\caption{Top panel: Power density spectrum of the TESS light curve for $\rho$~CrB. The inset shows an autocorrelation of the power spectrum, with the expected large frequency separation marked by a vertical dashed line. The red shaded area marks the measured large separation. Bottom panel: \'{E}chelle diagram of the power spectrum in the top panel.\label{fig4}}
\end{figure}
We used several independent methods \citep{huber09,mosser09,mathur10b,campante18} to extract global oscillation parameters from the power spectrum, which yielded broadly consistent results. Estimates of $\nu_{\rm max}$ showed a large spread, as expected for a low signal-to-noise detection \citep{chaplin14}, so we did not adopt a constraint for our subsequent analysis. We adopted $\Delta\nu = 89.3 \pm 1.1\ \mu$Hz as measured by the SYD pipeline, which was consistent with measurements from other methods.
We also searched for oscillations in 88~Leo, which yielded a null detection. The star is not included in the ATL, but based on the stellar properties from Section~\ref{sec2} it is less evolved than $\rho$~CrB with an expected $\nu_{\rm max}$ of $\approx 2700$\,$\mu$Hz and a detection probability $\approx$\,26\%. Given the low S/N detection in $\rho$~CrB, the fainter apparent magnitude of 88~Leo, and the fact that oscillation amplitudes decrease with increasing $\nu_{\rm max}$ and with higher activity \citep{Garcia2010, Chaplin2011, Mathur2019}, we conclude that the null detection is consistent with expectations.
\subsection{Grid-based modeling}
Grid-based modeling of $\rho$~CrB was performed using the Yale-Birmingham pipeline \citep{Basu2010, Basu2012, Gai2011} with $\Delta\nu$, [M/H], $T_{\rm eff}$, and luminosity as inputs, and the results are listed in Table~\ref{tab2}. The search was conducted on a grid containing two sub-grids---one with the solar-calibrated mixing length parameter, and the second using a metallicity-dependent mixing length, with the dependence given by \citet{Viani2018}. Both sub-grids assume a linear relation $\Delta Y/\Delta Z \approx 1.5$ that was obtained from a calibrated solar model with a primordial helium abundance of $0.248$ \citep{Steigman2010}. The grids have models with masses between 0.7\,$M_\odot$ and 3.3\,$M_\odot$ in intervals of 0.025\,$M_\odot$, evolved from the zero-age main-sequence to nearly the tip of the red-giant branch. The models were constructed with metallicities ranging from [M/H]=$-2.4$ to $+0.5$. The metallicity grid has a spacing of 0.1~dex between $-2.0$ and $+0.5$, and a spacing of 0.2~dex at lower metallicity. The metallicity scale is that of \citet{Grevesse1998}, i.e., $\text{[M/H]}=0$ corresponds to $Z/X=0.023$.
The grids were constructed using the Yale Stellar Evolution Code \citep[YREC,][]{Demarque2008} for consistency with the rotational evolution modeling in Section~\ref{sec5}. The models were constructed using OPAL opacities \citep{Iglesias1996} supplemented with low temperature opacities from \citet{Ferguson2005}. The OPAL equation of state \citep{Rogers2002} was used. All nuclear reaction rates were obtained from \citet{Adelberger1998}, except for that of the $^{14}N(p,\gamma)^{15}O$ reaction, for which we adopted the rate of \citet{Formicola2004}. All models included gravitational settling of helium and heavy elements using the formulation of \citet{Thoul1994}, with the diffusion coefficients smoothly decreased for stars more massive than $1.25\ M_\odot$. The large separation $\Delta\nu$ for the models was calculated from the frequencies of their radial modes, which were in turn computed with the code of \citet{Antia1994}. The large separations were corrected for the surface term by applying the correction obtained by \citet{Viani2019}.
\begin{deluxetable*}{lccccc}
\tablecaption{Stellar Properties of $\rho$~CrB and 88~Leo\label{tab2}}
\tablehead{ & \multicolumn{2}{c}{$\rho$~CrB} & & & \\
\cline{2-3}
\colhead{} & \colhead{Asteroseismic} & \colhead{Other} & \colhead{~~~~~} & \colhead{88~Leo} & \colhead{Source}}
\startdata
$\log R'_{\rm HK}[T_{\rm eff}]$ (dex) & $\cdots$ & $-5.177 \pm 0.015$ & & $-4.958 \pm 0.015$ & (1) \\
$P_{\rm rot}$ (days) & $\cdots$ & $20.3 \pm 1.8$ & & $15.0 \pm 0.3$ & (1) \\
$T_{\rm eff}$ (K) & $5817^{+32}_{-33}$ & $5833 \pm 78$ & & $6002 \pm 78$ & (2) \\
$[$M/H$]$ (dex) & $-0.19 \pm 0.06$ & $-0.18 \pm 0.07$ & & $+0.04 \pm 0.07$ & (2) \\
$\log g$ (dex) & $4.190 \pm 0.008$ & $4.29 \pm 0.08$ & & $4.38 \pm 0.08$ & (2) \\
Radius ($R_\odot$) & $1.304 \pm 0.012$ & $1.295 \pm 0.025$ & & $1.127 \pm 0.037$ & (3) \\
Luminosity ($L_\odot$) & $1.749^{+0.036}_{-0.040}$ & $1.746 \pm 0.041$ & & $1.482 \pm 0.088$ & (3) \\
Mass ($M_\odot$) & $0.96 \pm 0.02$ & $1.09 \pm 0.07$ & & $1.14 \pm 0.07$ & (3) \\
Age (Gyr) & $9.8^{+0.7}_{-0.5}$ & $3.5 \pm 0.6$ & & $2.4 \pm 0.4$ & (4) \\
$L_X$ ($10^{27}$~erg~s$^{-1}$) & $\cdots$ & $0.91 \pm 0.19$ & & $6.1 \pm 2.1$ & (5) \\
\enddata
\tablerefs{(1) \S\,\ref{sec2.1}; (2) \cite{Brewer2016}; (3) \S\,\ref{sec2.3}; (4) \cite{Barnes2007}; (5) \S\,\ref{sec2.4}}
\end{deluxetable*}
\vspace*{-12pt}
\section{Magnetic Evolution}\label{sec4}
\subsection{Activity-age relation}
\begin{figure}
\centering\includegraphics[width=\columnwidth,trim=0 0 30 40,clip]{fig5.pdf}
\caption{Chromospheric activity versus stellar age for a sample of spectroscopic
solar twins from \cite{LorenzoOliveira2018} with ages determined from isochrone
fitting (gray circles). The asteroseismic age of $\rho$~CrB from TESS is
overplotted as a solid square, while updated ages from gyrochronology for both
stars are shown with open squares.\label{fig5}}
\end{figure}
Considering the stellar properties determined above, we can now evaluate how the age expected from the chromospheric activity level compares to other age indicators. To facilitate this comparison, in Figure~\ref{fig5} we show the activity-age relation for a sample of spectroscopic solar twins from \cite{LorenzoOliveira2018}. The ages for this sample (gray circles) were determined from isochrone fitting, and the chromospheric activity scale was calibrated using $T_{\rm eff}$ rather than B$-$V color. The derived activity-age relation with uncertainties (gray lines) should be applicable to stars that have a mass and metallicity similar to the Sun. We can place other stars on this same activity scale using their spectroscopic $T_{\rm eff}$ and average $S$-index, with a small correction for non-solar metallicity \citep[{0.213$\times$[M/H]},][]{SaarTesta2012}. The horizontal error bars indicate the age uncertainty, while the vertical error bars reflect the uncertainties in $T_{\rm eff}$ and [M/H].
Using these procedures to place $\rho$~CrB and 88~Leo on the chromospheric activity scale for solar twins, we can evaluate their ages from asteroseismology and gyrochronology. The asteroseismic age for $\rho$~CrB from Table~\ref{tab2} is shown as a solid square in Figure~\ref{fig5}, which falls directly on the activity-age relation for solar twins. Although we were unable to determine an asteroseismic age for 88~Leo, the age from gyrochronology should be reliable for this star because it is not yet below the critical activity level where weakened magnetic braking is inferred \citep{vanSaders2016, Brandenburg2017}. The updated gyrochronology ages for both stars are indicated with open squares \citep{Barnes2007}, showing a reasonable agreement for 88~Leo considering its higher mass \citep[$M_{\rm iso}=1.10\ M_\odot$,][]{Valenti2005} but revealing a strong disagreement for $\rho$~CrB. In Section~\ref{sec5}, we examine this tension in greater detail.
\subsection{Magnetic Braking Torque}
We can estimate the strength of magnetic braking for $\rho$~CrB and 88~Leo by combining the wind modeling prescription of \cite{Finley2017, Finley2018} with the constraints on magnetic morphology from \cite{Metcalfe2019}. Given the polar strengths of an axisymmetric dipole, quadrupole, and/or octupole magnetic field, along with the mass-loss rate, rotation period, stellar mass and radius, this prescription yields an estimate of the magnetic braking torque based on analytical fits to a set of detailed magnetohydrodynamic wind simulations. Although 88~Leo exhibits a nonaxisymmetric polarization profile, the amplitude of the signal can be reproduced with an axisymmetric dipole having a polar field strength $B_{\rm d}=-5$~G. For $\rho$~CrB, \cite{Metcalfe2019} cite upper limits on the polar field strength assuming a pure axisymmetric dipole ($|B_{\rm d}|\le0.7$~G) or quadrupole field ($|B_{\rm q}|\le2.4$~G), with the latter being larger due to geometric cancellation effects. An identical analysis of the same LBT data yields an upper limit on a pure axisymmetric octupole field of $|B_{\rm o}|\le19.6$~G. However, the LBT observations also showed that the disk-integrated line-of-sight magnetic field in $\rho$~CrB is about 64\% as strong as in 88~Leo, which agrees well with the relative chromospheric activity levels listed in Table~\ref{tab2}. Given the upper limit on the dipole component, the global field of $\rho$~CrB appears to be dominated by quadrupolar and higher-order components to account for its relative line-of-sight field and activity level.
Observationally, the mass-loss rate is one of the least certain quantities required by the wind modeling prescription. If we initially fix the mass-loss rate to the solar value for both stars ($\dot{M}_\odot=2\times10^{-14}\ M_\odot/\text{yr}$) and adopt the stellar properties from Table~\ref{tab2}, we find that the magnetic braking torque for $\rho$~CrB is $\la$20\% as strong as for 88~Leo. This estimate does not depend strongly on whether we adopt the asteroseismic or other estimates of radius and mass for $\rho$~CrB, so we adopt the asteroseismic properties for further analysis. The mass-loss rate generally decreases with stellar age, so we might expect it to be larger than the solar value at the updated gyrochronology age of 88~Leo (2.4~Gyr), and smaller by the asteroseismic age of $\rho$~CrB (9.8~Gyr).
If we adopt the scaling relation $\dot{M}\propto F_X^{0.77}$ from \cite{Wood2021} and calculate the X-ray fluxes from the luminosities in Section~\ref{sec2.4}, the mass-loss rate changes from 2.0\,$\dot{M}_\odot$ to 0.36\,$\dot{M}_\odot$ between the ages of these two stars\footnote{If we adopt the steeper scaling relation $\dot{M}\propto F_X^{1.29}$ of \cite{Wood2018} derived from GK dwarfs only, the mass-loss rate estimates become 3.1\,$\dot{M}_\odot$ for 88~Leo and 0.18\,$\dot{M}_\odot$ for $\rho$~CrB.}, and the magnetic braking torque for $\rho$~CrB becomes $\la$8\% as strong as for 88~Leo. We can estimate the relative contributions to this total reduction in magnetic braking torque by changing the parameters of the 88~Leo wind model one at a time to the values in the $\rho$~CrB model.
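Because the absolute mass-loss normalization also depends on the adopted solar X-ray surface flux, the ratio of the two stellar mass-loss rates is the more robust quantity, and it follows directly from the surface fluxes ($F_X = L_X/(4\pi R^2)$, evaluated from the values in Sections~\ref{sec2.3} and \ref{sec2.4}):

```python
# Relative mass-loss rates from the Mdot ~ F_X^0.77 scaling of Wood et al.
# The log surface fluxes follow from the quoted L_X and R values; the absolute
# normalization (in units of the solar Mdot) would additionally require the
# solar X-ray flux, so only the ratio is computed here.
logFx_88leo, logFx_rhocrb = 4.90, 3.94     # log erg s^-1 cm^-2
ratio = 10 ** (0.77 * (logFx_88leo - logFx_rhocrb))
# ratio ~ 5.5, consistent with the quoted 2.0/0.36 ~ 5.6
```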
The largest factor that contributes to the reduction in magnetic braking torque is the shift in morphology towards quadrupolar and higher-order fields ($-$67\% from shifting the field from pure dipole to pure quadrupole), followed by the evolutionary change in mass-loss rate ($-$60\%), with smaller contributions from the weaker magnetic field (up to $-$34\% from changing the strength of a quadrupole field from 5~G to 2.4~G) and slower rotation ($-$26\%). The slightly lower mass (+4\%) and evolutionary change in the radius (+58\%) actually increase the relative magnetic braking torque, masking some of the other effects.
\vspace*{12pt}
\section{Rotational Evolution}\label{sec5}
We modeled the rotational evolution of $\rho$~CrB using the methodology laid out in \citet{Metcalfe2020}. We assumed solid body rotation, and used the \texttt{rotevol} \citep{Somers2017} tracer code to track the angular momentum evolution as a function of time, given a set of YREC evolutionary tracks and interpolation tools in \texttt{kiauhoku} \citep{Claytor2020}. We used the same model grid as that in \citet{Metcalfe2020} and adopted the same braking law parameters, with two minor changes that we describe here. First, we scaled the critical Rossby number, Ro$_{\rm crit}$, in terms of the solar value, since the \citet{vanSaders2016} model grid and our current grid have slightly different solar calibrations due to differing input physics; we adopted Ro$_{\rm crit} = 0.92~\textrm{Ro}_{\odot}$ as estimated in \citet{vanSaders2019}. Second, although unimportant for the late-time rotational evolution, we chose a constant specific angular momentum (cm$^2$~s$^{-1}$) of $\log{j_{spec}} = 16.3$ dex at 10~Myr \citep{Somers2017} as our initial condition.
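The qualitative difference between the two braking laws can be illustrated with a toy solid-body model. This is emphatically not the \texttt{rotevol}/YREC calculation used here: it ignores structural evolution and early spin-up, and the convective turnover time (40~d) and the Skumanich-like solar calibration are assumed values chosen only for illustration.

```python
import math

def spin_down(p0_days, age_gyr, tau_cz_days, ro_crit=None, n_steps=20000):
    """Toy solid-body spin-down: dOmega/dt = -k*Omega^3 (Skumanich-like),
    with the wind torque switched off once Ro = P/tau_cz exceeds ro_crit
    (weakened magnetic braking).  k is calibrated so a Sun-like star reaches
    P = 25.4 d at 4.57 Gyr under standard braking."""
    t_sun = 4.57e9 * 365.25                    # days
    omega_sun = 2.0 * math.pi / 25.4           # rad / day
    k = 1.0 / (2.0 * t_sun * omega_sun**2)     # asymptotic Skumanich calibration
    omega = 2.0 * math.pi / p0_days
    dt = age_gyr * 1e9 * 365.25 / n_steps
    for _ in range(n_steps):
        period = 2.0 * math.pi / omega
        if ro_crit is None or period / tau_cz_days < ro_crit:
            # exact update for dOmega/dt = -k*Omega^3 over one step
            omega /= math.sqrt(1.0 + 2.0 * k * omega**2 * dt)
    return 2.0 * math.pi / omega

# Illustrative only: tau_cz = 40 d and Ro_crit = 0.92 Ro_sun are assumptions
ro_sun = 25.4 / 40.0
p_standard = spin_down(5.0, 9.8, tau_cz_days=40.0)
p_weakened = spin_down(5.0, 9.8, tau_cz_days=40.0, ro_crit=0.92 * ro_sun)
```

Even in this toy, standard braking continues to lengthen the period all the way to 9.8~Gyr, while weakened braking stalls the spin-down once the critical Rossby number is crossed, yielding a substantially shorter predicted period.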
We utilized the same Monte Carlo approach as in \citet{Metcalfe2020} in which the mass, initial metallicity, age, and mixing length are parameters of the model, with the asteroseismic radius and the spectroscopic surface [M/H] and $T_{\rm eff}$ as the observables. We adopted strict Gaussian priors on the mass ($0.96\pm0.02~M_{\odot}$) and age ($9.8\pm0.8$ Gyr) from the asteroseismic analysis, and a broader prior on the mixing length ($1.8\pm0.3$). In both cases, the rotation period is a prediction of the model, rather than a parameter we use in the fit itself. We used 8 walkers, each running for 100,000 steps.
The standard spin-down model predicts a rotation period of $52 \pm 5$ days for $\rho$~CrB, while the weakened braking model with Ro$_{\rm crit} = 0.92~\textrm{Ro}_{\odot}$ predicts a rotation period of $28 \pm 2$ days. We show in Figure~\ref{fig6} the posteriors on the predicted rotation distributions for both the standard spin-down and weakened magnetic braking cases in comparison to the observed period: both models predict longer periods.
We verified that changing the initial angular momentum is insufficient to relieve the tension, as in \citet{Metcalfe2020}. Similarly, allowing the model to deviate from purely solid body rotation is also unlikely to result in more rapid rotation: in both the Sun and stars with asteroseismic measurements, the internal rotation profile is consistent with solid-body rotation \citep{Deheuvels2020}. The convection zone of $\rho$~CrB has not yet begun to deepen at its current position in the HR diagram, and it is unlikely to be dredging up higher angular momentum material from a differentially rotating interior, even if such radial shear exists. Furthermore, when the core and envelope are allowed to decouple rotationally \citep{MacGregor1991} the surface rotation rate tends to be \textit{slower} than a solid body model, because wind-driven loss drains angular momentum from the smaller, decoupled reservoir of the convective envelope. This star is also still hot enough ($\sim$5800~K) that assumptions about the convective mixing length have a comparatively mild effect on the predicted period.
\begin{figure}
\centering\includegraphics[width=\columnwidth,trim=7 7 0 0,clip]{fig6.pdf}
\caption{Predictions from a standard spin-down model (purple), weakened braking model (orange), and weakened braking model with a mass prior of $1.09\pm0.07~M_{\odot}$ (pink) for the rotation period of $\rho$~CrB. The observed rotation period from Section~\ref{sec2.1} is shown with black vertical lines.\label{fig6}}
\end{figure}
An underestimated stellar mass would result in predicted rotation periods that are too long, and indeed there is moderate tension between the asteroseismic mass and the empirical mass scale from eclipsing binaries. If we instead adopt a mass prior of $1.09 \pm 0.07~M_\odot$ (while also adopting an uninformative age prior) we predict a period of $23^{+5}_{-4}$ days for the weakened magnetic braking case. The inferred mass is $1.00\pm0.03\ M_\odot$ \citep[consistent with the isochrone mass,][]{Valenti2005,Brewer2016}, with predicted properties within 1$\sigma$ of the observed $L$, $R$, $T_{\rm eff}$, and surface [M/H]. The increased mass does require an age younger by about 2~Gyr (also consistent with isochrone estimates), but this is unsurprising: on the subgiant branch (SGB) near the turnoff, the age is tightly correlated with model mass. The ages of such stars are essentially equal to the main-sequence lifetime, and their rotational evolution shifts from being strongly dependent on time to strongly dependent on the structural evolution across the SGB.
We applied the same modeling techniques to 88~Leo, and find excellent agreement with the observed rotation period in both standard and weakened braking prescriptions. The predicted period is $15\pm2$ days for both models: they do not differ significantly because the Rossby number of 88~Leo is approximately equal to our adopted critical Rossby number ($\textrm{Ro} = 1.0\pm0.1\ \textrm{Ro}_{\rm crit}$). Both the standard and weakened braking models are identical until the critical Rossby number is exceeded, and thus both predict the same rotation period for 88~Leo.
\section{Summary and Discussion}\label{sec6}
By combining archival stellar activity data from MWO with asteroseismology from TESS, we have probed the nature of the transition that appears to decouple the evolution of rotation and magnetism in middle-aged stars. We characterized two stars ($\rho$~CrB and 88~Leo) with activity levels on opposite sides of the proposed mid-life transition---verifying their mean activity levels and rotation periods (Section~\ref{sec2.1}), quantifying their X-ray luminosities to estimate mass-loss rates (Section~\ref{sec2.4}), and deriving precise asteroseismic properties for the post-transition star $\rho$~CrB (Section~\ref{sec3}). Analysis of the resulting observational constraints reveals that the asteroseismic age of $\rho$~CrB agrees with the expected evolution of its mean activity level, while the age from gyrochronology does not (Figure~\ref{fig5}). No such tension exists for 88~Leo, suggesting a divergence in the evolution of rotation and magnetism between 2.4 and 3.5~Gyr for stars with shallower convection zones than the Sun.
Using a simple wind modeling prescription with previously published spectropolarimetric constraints on the global magnetic fields \citep{Metcalfe2019}, we find that the magnetic braking torque for $\rho$~CrB is more than an order of magnitude smaller than for 88~Leo, primarily due to a shift in morphology toward smaller spatial scales but reinforced by the evolutionary change in mass-loss rate and other properties (Section~\ref{sec4}). Rotational evolution models adopting standard spin-down can match the observational constraints for 88~Leo, but they fail for $\rho$~CrB. By contrast, models with weakened magnetic braking can more readily explain the fast rotation of $\rho$~CrB, particularly if the asteroseismic properties are slightly biased from the relatively low S/N detection (Figure~\ref{fig6}).
Future TESS observations may allow refinement of the stellar properties for $\rho$~CrB, and could yield an asteroseismic detection for 88~Leo. Both targets will be observed with 20-second cadence during Cycle~4, which yields a 20\% longer effective integration time due to the absence of onboard cosmic ray rejection, and also avoids significant attenuation of signals near the Nyquist frequency of 2-minute sampling \citep{Huber2021}. The latter is particularly important for 88~Leo ($\nu_{\rm max} \approx 2700~\mu$Hz), which will be observed during Sectors~45--46 (2021 Nov/Dec) and Sector~49 (2022 Mar), further improving the detection probability. Although 88~Leo has a K-dwarf companion separated by 15$\farcs$5, it only dilutes the signal from the primary by $\sim$10\%, and any solar-like oscillations in the K-dwarf are expected at a higher frequency and much lower amplitude. Additional observations of $\rho$~CrB will be obtained during Sector~51 (2022 May), and they can be combined with the Cycle~2 data to improve the S/N of the detection, potentially yielding a more precise value of $\Delta\nu$, a secure determination of $\nu_{\rm max}$, and perhaps some individual oscillation frequencies for detailed modeling. This may allow us to resolve the tension between the asteroseismic properties derived in Section~\ref{sec3} and the eclipsing binary mass scale, and possibly probe the impact of the observed non-solar abundance mixture for this star \citep{Brewer2016}.
Additional spectropolarimetic observations will provide new opportunities to test the mid-life transition hypothesis across a range of spectral types. Data recently obtained from the LBT include Stokes~V measurements of 18~Sco, 16~Cyg~A~\&~B, $\lambda$~Ser, and HD\,126053. The latter appears to be a transitional star like $\alpha$~Cen~A \citep{Metcalfe2017}, but with a rotation period and activity cycle very similar to the Sun. Such targets may offer the best constraints on the timescale for a shift in magnetic morphology, which must play out relatively quickly to explain the sudden reduction in magnetic braking torque suggested by observations \citep{vanSaders2016}. By contrast, evolutionary changes in the mass-loss rate, mean activity level, and rotation period (as a star expands on the main-sequence) should take place more gradually. Aside from 18~Sco \citep[which has a ground-based asteroseismic detection,][]{Bazot2011}, all of these targets will be observed by TESS with 20-second cadence in Cycle~4, and most of them have well-defined X-ray fluxes to constrain the mass-loss rates. Consequently, we should be able to extend the methodology applied above to a well-characterized sample of solar-type stars in the near future.
\vspace*{24pt}
The authors would like to thank Steven Cranmer, B.~J.\ Fulton, Sean Matt, Marc Pinsonneault, and Kaspar von~Braun for helpful exchanges.
T.S.M.\ acknowledges support from NSF grant AST-1812634, NASA grant 80NSSC20K0458, and Chandra award GO0-21005X. Computational time at the Texas Advanced Computing Center was provided through XSEDE allocation TG-AST090107.
J.v.S.\ acknowledges support from NASA grant 80NSSC21K0246.
D.B.\ acknowledges support from NASA through the Living With A Star Program (NNX16AB76G) and from the TESS GI Program under awards 80NSSC18K1585 and 80NSSC19K0385.
J.J.D.\ was supported by NASA contract NAS8-03060 to the {\it Chandra X-ray Center} and thanks the Director, Pat Slane, for continuing advice and support.
R.E.\ acknowledges NCAR for their support. The National Center for Atmospheric Research is sponsored by the National Science Foundation.
D.H.\ acknowledges support from the Alfred P. Sloan Foundation, NASA grant 80NSSC21K0652, and NSF grant AST-1717000.
S.H.S.\ is grateful for support from NASA Heliophysics LWS grant NNX16AB79G, and HST grant HST-GO-15991.002-A.
W.H.B.\ acknowledges support from the UK Space Agency.
T.L.C.\ is supported by Funda\c c\~ao para a Ci\^encia e a Tecnologia (FCT) in the form of a work contract (CEECIND/00476/2018).
A.J.F.\ is supported by the ERC Synergy grant ``Whole Sun'', \#810218.
O.K.\ acknowledges support by the Swedish Research Council, the Royal Swedish Academy of Sciences, and the Swedish National Space Agency.
S.M.\ acknowledges support from the Spanish Ministry of Science and Innovation with the Ramon y Cajal fellowship number RYC-2015-17697 and the grant number PID2019-107187GB-I00.
T.R.\ acknowledges support from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No.\ 715947).
V.S.\ acknowledges funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No.\ 682393 AWESoMeStars).
This work benefited from discussions within the international team ``The Solar and Stellar Wind Connection: Heating processes and angular momentum loss'' at the International Space Science Institute (ISSI).
\software{CIAO \citep[v4.13;][]{Fruscione2006}, PIMMS \citep[v4.11;][]{Mukai1993}, SYD \citep{huber09}, YREC \citep{Demarque2008}, rotevol \citep{Somers2017}, kiauhoku \citep{Claytor2020}}
\bibliographystyle{aasjournal}
\section{Introduction\label{sec:intro}}
The main focus of reduced density matrix functional theory (RDMFT) \cite{G1975}, a framework where the one-body reduced density matrix (1RDM) plays the role of the fundamental variable, has been the proper description of ground-state singlet states. Little has been done on developing a RDMFT treatment of doublet or triplet states \cite{LHG2005, Piris1, RP2011, Piris_Leiva_spin}. The extension of the theory for such states can follow two different directions which can also be combined. The first concerns the development of approximate functionals or the extension of existing ones to describe such states. The second is the derivation of additional conditions to restrict the minimization of the existing functionals to the domain of 1RDMs that correspond to a prescribed spin state. The present work is a step in the second direction.
Since the many-electron problem, in general, cannot be solved exactly, several approximations were introduced where the total electronic energy is expressed as a functional of a density or density matrix. In this way, one switches from calculating the many-body state to calculating quantities like the density in density functional theory (DFT) \cite{HK1964} or the 1RDM in RDMFT.
RDMFT \cite{G1975,Pernal2016} got significant attention in the last 20 years as an alternative to DFT. Several approximations have been introduced \cite{M1984,GU1998,BB2002,GPB2005,LHZG2009,P2014,KP2014,RPGB2008,ML2008,SDLG2008,SSSLG2015,LSDEMG2009,P2006,PMLU2010,PLRMU2011,S1999,R2002,RX2007,LHRG2014}
with promising results in cases like molecular dissociation \cite{BB2002,GPB2005,LHZG2009,P2014,KP2014,MPRLU2011} or the gaps of periodic systems \cite{HLAG2007,SDLG2008,Letc2010,SSSLG2015}, where the results of basic DFT functionals are not satisfactory. Among these approximations, a central position is held by the M\"uller functional \cite{M1984,BB2002}, which was found to overcorrelate substantially. Its inaccurate reconstruction of the 2RDM in terms of the 1RDM is manifested by the violation of the positivity of the 2-particle density, which was already recognized by M\"uller himself. However, the M\"uller functional served as a starting point for further improvements. Several other functionals were introduced \cite{Pernal2016} aiming to correct its overcorrelation: the functional of Goedecker and Umrigar \cite{GU1998}, the BBC$n$ ($n$ = 1,2,3) \cite{GPB2005,RPGB2008,LHZG2009}, the approximation of Marques and Lathiotakis \cite{ML2008}, and the Power functional \cite{SDLG2008,LSDEMG2009,Constraints_Power,SSSLG2015}. In a different fashion, PNOF$n$ approximations, $n$ = 1,$\cdots$6 \cite{P2006,PMLU2010,PLRMU2011,P2014}, and the theory of the antisymmetrized product of strongly orthogonal geminals (APSG) \cite{S1999,R2002,RX2007,KP2014} were developed focusing on improving the reconstruction of the 2RDM.
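To make the preceding discussion concrete, we recall the form of the M\"uller functional (written here, as an illustration, for spin-resolved occupation numbers and spin-independent natural orbitals; $h_{jj}$, $J_{jk}$, and $K_{jk}$ denote the usual one-electron, Coulomb, and exchange integrals over the natural orbitals):
\begin{equation}
E^{\mathrm{M}}\left[\gamma\right]=\sum_{j\sigma}n_{j\sigma}h_{jj}
+\frac{1}{2}\sum_{jk\sigma\sigma'}n_{j\sigma}n_{k\sigma'}J_{jk}
-\frac{1}{2}\sum_{jk\sigma}\sqrt{n_{j\sigma}n_{k\sigma}}\,K_{jk}.
\end{equation}
Its characteristic feature, and the origin of the overcorrelation, is the replacement of the Hartree-Fock product $n_{j\sigma}n_{k\sigma}$ by the larger quantity $\sqrt{n_{j\sigma}n_{k\sigma}}$ in the exchange term.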
In any RDMFT calculation, one searches for the 1RDM that minimizes the total energy functional. However, the search has to be restricted in the domain of functions (trial 1RDMs) that satisfy certain constraints, known as $N$-representability conditions, which guarantee that the optimal 1RDM corresponds to a fermionic system. Given the exact ground-state total energy functional, the ensemble $N$-representability conditions of Coleman \cite{Coleman_1963} are sufficient to ensure that one finds the 1RDM that corresponds to the nondegenerate ground-state wave function. The reason is that any ensemble of pure states would always include excited states and would lead to a higher total energy. For approximate functionals, on the other hand, the ensemble conditions do not guarantee that the minimizing 1RDM could be obtained from a many-body fermionic wave function. The class of spin-compensated systems with time-reversal symmetry is a notable exception, since, in that case, the conditions for pure-state $N$-representability collapse to the ensemble conditions \cite{S1966}. The necessary and sufficient conditions for pure-state $N$-representability, also called generalized Pauli constraints, have only recently been discussed and explicitly expressed for systems with a small number of particles and specific finite sizes of the Hilbert space \cite{ThesisAltunbulak, Klyachko_Math, Borland-Dennis, PRL_pure_cond,Carlos1, Carlos2, Schilling_2015}.
Recently, it has been demonstrated that, when enforcing only the ensemble conditions in a RDMFT calculation for open-shell systems, the pure-state conditions are violated for many functionals of the 1RDM \cite{TLMH2015}. Hence, the enforcement of the pure-state conditions leads to a different solution. We should mention that the number of pure-state conditions explodes as the number of electrons and the dimension of the Hilbert space increase, and their consideration in a minimization procedure becomes a very difficult task.
In many cases, the many-body Hamiltonian commutes with the total spin and the spin projection in any particular direction. As a result, one can choose the solutions of the many-body Schr\"odinger equation to be eigenstates of the Hamiltonian, $\hat{\mathbf{S}}^2$, and $\hat{S}_z$ simultaneously. Typical cases where the Hamiltonian does not commute with the spin operators include the application of nonuniform magnetic fields to the system or the inclusion of spin-orbit coupling. However, we are not considering such cases in this work. Thus, for the cases discussed here, it is desirable that the optimal 1RDMs correspond to eigenstates of the spin operators as well. While the pure-state conditions ensure that there exists a many-body wave function corresponding to a given 1RDM, there is generally no guarantee that this many-body state is an eigenstate of any spin operator or even corresponds to a specific expectation value of it. Hence, the question arises if one can find constraints such that the solutions preserve the symmetries of the original many-body Hamiltonian, i.e., whether the optimal 1RDM which one finds in a RDMFT calculation corresponds to a many-body state with a prescribed, specific total spin and $S_z$. For the $z$-component of the spin, one typically constrains separately the number of up and down electrons in the system \cite{LHG2005} to the correct integer values.
The expectation value of $\hat{S}_z$ is then given by $(N_\uparrow-N_\downarrow)/2$ (atomic units are used throughout this paper unless explicitly stated otherwise). However, this constraint guarantees only that the expectation value of $\hat{S}_z$ has the correct, prescribed value. Therefore, it is a necessary condition for the 1RDM to correspond to an eigenstate of $\hat{S}_z$ but not a sufficient one.
The situation is even more complicated for the total spin. Contrary to $\hat{S}_z$ the total spin $\hat{\mathbf{S}}^2$ is not a single-particle operator. Its expectation value is, therefore, not a trivial functional of the 1RDM. Consequently, restricting the 1RDMs to have a specific expectation value for $\hat{\mathbf{S}}^2$ is nontrivial as well since it requires the knowledge of the expectation value of $\hat{\mathbf{S}}^2$ as a functional of the 1RDM. Since this expectation value can easily be written as a functional of the 2-body reduced density matrix (2RDM), $\Gamma^{(2)}$, as \cite{L1955}
\begin{eqnarray}
\label{ssquare}
\langle \hat{\mathbf{S}}^2 \rangle &=& -\frac{N(N-4)}{4}\\
\nonumber
&& + \sum_{\sigma_1,\sigma_2}\int d^3r_1 d^3r_2
\Gamma^{(2)} (\br_1 \sigma_1,\br_2 \sigma_2 |\br_1 \sigma_2,\br_2 \sigma_1),
\end{eqnarray}
several attempts have been made to apply constraints on the 2RDM \cite{AV2005, Piris1, Piris2, Piris_Leiva_spin} which then also affect the 1RDM. The same problem arises in DFT because the density functional to calculate $\langle \hat{\mathbf{S}}^2\rangle$ is unknown. As an approximate functional, one then usually evaluates $\langle\hat{\mathbf{S}}^2\rangle$ using the Kohn-Sham Slater determinant. However, this, in general, does not yield the correct value of $\langle \hat{\mathbf{S}}^2\rangle$ of the interacting system \cite{Handy_spin_dft, spin_dft_reiher, spin_becke}.
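To make Eq.~(\ref{ssquare}) concrete in a finite orthonormal orbital basis, where the spatial integrals reduce to sums over orbital indices, one can evaluate $\langle\hat{\mathbf{S}}^2\rangle$ directly from a 2RDM tensor. The following Python sketch (our own illustration, not part of any of the codes used in this work; all function names are ours) does so for the simplest 2-electron triplet and singlet determinants, for which $\Gamma^{(2)}=\Psi\Psi^{*}$ is normalized to $N(N-1)/2=1$:

```python
import numpy as np

def so(p, sigma):
    # combined spin-orbital index for spatial orbital p, spin sigma (0=up, 1=down)
    return 2 * p + sigma

def s_squared(gamma2, n_elec, n_spat):
    # discrete analogue of the S^2 formula in the text:
    # <S^2> = -N(N-4)/4 + sum_{s1,s2} sum_{p,q} Gamma(p s1, q s2 | p s2, q s1)
    val = -n_elec * (n_elec - 4) / 4.0
    for p in range(n_spat):
        for q in range(n_spat):
            for s1 in (0, 1):
                for s2 in (0, 1):
                    val += gamma2[so(p, s1), so(q, s2), so(p, s2), so(q, s1)]
    return val

def det2(a, b, dim):
    # antisymmetrized 2-electron Slater determinant |a b> over spin orbitals
    t = np.zeros((dim, dim))
    t[a, b], t[b, a] = 1 / np.sqrt(2), -1 / np.sqrt(2)
    return t

n_spat, dim = 2, 4
psi_trip = det2(so(0, 0), so(1, 0), dim)   # |1up 2up>, a triplet
psi_sing = det2(so(0, 0), so(0, 1), dim)   # |1up 1down>, a singlet
for psi in (psi_trip, psi_sing):
    gamma2 = np.einsum('pq,rs->pqrs', psi, psi.conj())
    print(s_squared(gamma2, 2, n_spat))    # ~2 for the triplet, ~0 for the singlet
```

As expected, the fully polarized determinant yields $\langle\hat{\mathbf{S}}^2\rangle=S(S+1)=2$, while the closed-shell determinant yields $0$.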
In this work, we discuss some necessary conditions for the 1RDM of a 2-electron system to correspond to a triplet configuration. These conditions can also be derived from symmetry considerations of the triplet wave function \cite{LSH1956}. In analogy to the pure-state conditions, for systems with a triplet ground state, the exact functional would find the corresponding 1RDM in the energy minimization without applying additional constraints. As we see, the conditions are generally violated by the 1RDMs obtained from the three approximate functionals considered here, namely, the M\"uller, the BBC3, and the Power functionals, when they are not explicitly enforced. Moreover, using the necessary conditions, we show that the BBC3 and the Power functionals break the energy degeneracy between the highly polarized triplet state ($S_z=1$) and the $S_z=0$ one, which is a clear deficiency of these approximations. We also apply these conditions, in an approximate way, to systems with an even number of electrons that is larger than two. In this case, we assume that $N-2$ natural orbitals form a singlet configuration and only two active electrons form the triplet. We apply the triplet conditions to various small systems, which have a singlet ground state, to calculate the first excited triplet state. We show that, by imposing these constraints, the results for the optimal 1RDMs are closer to the exact ones than the results obtained without imposing them.
In most cases, the total energies of the first excited triplet states also improve when the constraints are applied.
For the M\"uller functional we also report values of $\langle \hat{\mathbf{S}}^2\rangle$ since this functional provides an approximation for the whole 2RDM in terms of the 1RDM. For the other two functionals, BBC3 and Power, only the energy functional is available and of course expectation values of $\hat{\mathbf{S}}^2$ cannot be reported.
This paper is organized as follows: In Section \ref{sec:spinconst}, we present the necessary conditions which we consider for the triplet state of 2-electron systems and their generalization in order to be applicable to more electrons. Our results are presented in Section \ref{sec:results}, where we assess the inclusion of these constraints in RDMFT calculations as far as the optimal 1RDM and the total energy of the lowest triplet state are concerned. Finally, our conclusions are included in Section \ref{sec:conc}.
\section{Spin Constraints}\label{sec:spinconst}
Writing the 1RDM, $\gamma(\v r, \v r')$ in its spectral representation
\begin{equation}
\gamma(\v r, \v r')=\sum_{j=1}^\infty n_j\varphi_j^*(\v r')\varphi_j(\v r)
\end{equation}
with the occupation numbers $n_j$ and the natural orbitals $\varphi_j(\v r)$ one can easily express the ensemble $N$-representability conditions \cite{Coleman_1963} as
\begin{equation}
\sum_{j=1}^\infty n_j=N, \quad 0\leq n_j \leq 1.
\end{equation}
These two conditions ensure that the 1RDM corresponds to a system of $N$ fermions but not necessarily to a pure $N$-particle state. Obviously, in any practical calculation the number of natural orbitals is restricted to a finite number $M$ with $M>N$, which is a valid approximation since the occupation numbers $n_j$ fall off rapidly for $j>N$. Restricting the system not only to fermionic ensembles but to actual $N$-particle states requires additional constraints which increase rapidly in number with the number of particles $N$ and the number of orbitals $M$. For small $N$ and $M$ they can be given explicitly \cite{ThesisAltunbulak, Klyachko_Math, Borland-Dennis}.
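For a finite orbital set, the ensemble conditions are trivial to verify numerically. The following sketch (our own illustration, with hypothetical occupation vectors) checks them for a given set of occupations:

```python
import numpy as np

def is_ensemble_n_representable(occ, n_elec, tol=1e-8):
    # Coleman's ensemble N-representability: 0 <= n_j <= 1 and sum_j n_j = N
    occ = np.asarray(occ, dtype=float)
    in_range = np.all(occ > -tol) and np.all(occ < 1.0 + tol)
    return bool(in_range and abs(occ.sum() - n_elec) < tol)

print(is_ensemble_n_representable([0.98, 0.95, 0.05, 0.02], 2))  # True
print(is_ensemble_n_representable([1.10, 0.85, 0.05, 0.00], 2))  # False: n_1 > 1
```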
In the present work,
we allow for spin-dependent density matrices and occupation numbers; the natural orbitals, however, remain spin independent \cite{LHG2005}, i.e.,
\begin{equation}
\gamma_\sigma(\v r, \v r')=\sum_{j=1}^\infty n_{j\sigma}\varphi_j^*(\v r')\varphi_j(\v r)
\end{equation}
and
\begin{equation}
\gamma(\v r, \v r')=\sum_{\sigma=\uparrow, \downarrow} \gamma_\sigma(\v r, \v r').
\end{equation}
Note that the choice of having the same set of spatial orbitals for both spin
channels is not related to describing density matrices with a specific $S_z$ (which can also be achieved with different sets of spin orbitals \cite{RP2011}) but
facilitates the description of $\gamma$ with a specific expectation value of $\hat{\mathbf{S}}^2$. In approximations which use a single Slater determinant, the introduction of spatial orbitals that are different for the two spins induces the so-called spin contamination problem (see, for example, Ref.~\cite{spin_contamination}). This problem is completely avoided when the orbitals are spin independent. In analogy, in RDMFT, as we see in the present work, the assumption of spin-independent spatial orbitals leads to necessary conditions for fixing the correct expectation value of $\hat{\mathbf{S}}^2$ for two electrons which take a simple form and involve only the spin-dependent occupation numbers. From now on, the notation $S_z$ means, in general, the expectation value $\langle\hat{S}_z\rangle$.
In order to describe a system with a specific $S_z$ one requires that
\begin{equation}
\label{repres_spin}
\sum_{j=1}^\infty n_{j\sigma}=N_\sigma, \quad \sum_\sigma N_\sigma=N.
\end{equation}
However, while this guarantees that the expectation value of $\hat{S}_z$ is given by $(N_\uparrow-N_\downarrow)/2$, it is only a necessary but not a sufficient condition for the 1RDM to correspond to an eigenstate of $\hat{S}_z$. For example, the state
\begin{equation}
\Psi(\v r_1\sigma_1,\v r_2\sigma_2) = \frac{1}{\sqrt{2}}
\left(|1^\uparrow 2^\uparrow\rangle + |3^\downarrow 4^\downarrow\rangle\right)
\end{equation}
is a linear combination of an $S_z=1$ and an $S_z=-1$ eigenstate. Here, $1^\uparrow$ denotes the natural orbital $\varphi_1(\v r)$ occupied with a spin-up electron, and $\mid \:\:\rangle$ denotes a Slater determinant. The nonzero occupation numbers of this state are given by
\begin{equation}
n_{1\uparrow}=n_{2\uparrow}=n_{3\downarrow}=n_{4\downarrow}=\frac{1}{2},
\end{equation}
and the sum of the occupation numbers in each spin channel is $N_\uparrow=N_\downarrow=1$.
Thus, even when both $N_\uparrow$ and $N_\downarrow$ are fixed to integer values, the 1RDM does not need to correspond to an eigenstate of the $\hat{S}_z$ operator. Exceptions are the maximally polarized states, i.e., for fixed $N_\uparrow=N$ and $N_\downarrow=0$ or vice versa. In these cases, one is guaranteed to find an $\hat{S}_z$ eigenstate with $S_z=\pm N/2$.
Furthermore, for these states there exists only one $\hat{\mathbf{S}}^2$ eigenstate. Therefore, in this specific situation, enforcing a certain value for $S_z$ ensures that the 1RDM corresponds to an eigenstate of both $\hat{S}_z$ and $\hat{\mathbf{S}}^2$ with the latter having the eigenvalue $S(S+1)=(N/2)(N/2+1)$, provided one enforces pure-state $N$-representability. If pure-state $N$-representability is not enforced, the calculation will generally yield an ensemble of states with $S_z=\pm N/2$. In other words, each of the states in the ensemble will be an eigenstate of $\hat{S}_z$ and $\hat{\mathbf{S}}^2$ and the expectation values of the whole ensemble will be $\pm N/2$ and $(N/2)(N/2+1)$, respectively. The pure-state $N$-representability can be ensured by simply transferring the known pure-state $N$-representability conditions \cite{ThesisAltunbulak, Klyachko_Math, Borland-Dennis, Carlos1, Carlos2} to the occupation numbers of the spin channel that is occupied in the system.
For $N=2$, there are only two possible configurations for the total spin, $S=1$ or $S=0$. The fully polarized states correspond to $S_z=\pm 1$ and, as discussed above, are easy to distinguish in RDMFT from the $S_z=0$ states. The necessary and sufficient conditions for pure-state $N$-representability for $N=2$ only require a double degeneracy of the occupation numbers \cite{LSH1956}. Hence, enforcing all occupation numbers of the up (down) spin channel to be doubly degenerate yields a triplet eigenstate with $S_z=1$ ($S_z=-1$). The question remains how to distinguish between the two $S_z=0$ states, i.e., the triplet state with $S_z=0$ and the singlet state. We can construct the wave function for the triplet state with $S_z=0$ starting from the fully polarized state
\begin{equation}\label{eq:trips=1}
\left|S=1, S_z=1\right\rangle=a_1|1^\uparrow 2^\uparrow\rangle+a_2|3^\uparrow 4^\uparrow\rangle+a_3|5^\uparrow 6^\uparrow\rangle \cdots\: .
\end{equation}
Note that one needs an even number of natural orbitals $M$ since only doubly excited Slater determinants are allowed in the expansion. Including a determinant which is a single excitation of any other determinant in the expansion would lead to off-diagonal terms in the 1RDM, which is forbidden since we are constructing the Slater determinants from natural orbitals. As one can see, the occupation numbers for such a state are pairwise degenerate with
\begin{equation}\label{eq:2fold}
n_{1\uparrow}=n_{2\uparrow}=|a_1|^2, n_{3\uparrow}=n_{4\uparrow}=|a_2|^2 \cdots\,,
\end{equation}
i.e., the pure-state constraint is satisfied. Applying $\hat{S}_-$ to the state (\ref{eq:trips=1}), we obtain
\begin{eqnarray}\label{eq:trips=0}
|S=1, S_z=0\rangle&=&\frac{1}{\sqrt{2}} \bigl(a_1[|1^\downarrow 2^\uparrow\rangle+|1^\uparrow 2^\downarrow\rangle]\\
\nonumber
&&\quad\:\: +a_2[|3^\downarrow 4^\uparrow\rangle+|3^\uparrow 4^\downarrow\rangle]+\cdots\bigr),
\end{eqnarray}
where $1^\uparrow$ denotes that the natural orbital $\varphi_1$ is occupied with an up electron in the Slater determinant, and the spatial dependence of $1^\uparrow$ and $1^\downarrow$ is identical. The corresponding occupation numbers are 4-fold degenerate with
\begin{eqnarray}\label{eq:4fold}
&&n_{1\uparrow}=n_{1\downarrow}=n_{2\uparrow}=n_{2\downarrow}=|a_1|^2/2,\\\nonumber
&&n_{3\uparrow}=n_{3\downarrow}=n_{4\uparrow}=n_{4\downarrow}=|a_2|^2/2,\\
\nonumber
&&\cdots
\end{eqnarray}
We also see from Eqs.\ (\ref{eq:trips=1}) and (\ref{eq:trips=0}) that the spatial parts of the natural orbitals of the two spin channels are the same since the spin operator only acts on the spin parts. Corresponding to the triplet $S_z=0$ state there also exists a singlet state
\begin{eqnarray}\label{eq:sing}
|S=0, S_z=0\rangle &=&\frac{1}{\sqrt{2}} \bigl(a_1[|1^\downarrow 2^\uparrow\rangle-|1^\uparrow 2^\downarrow\rangle]\\
\nonumber
&&\quad\:\: +a_2[|3^\downarrow 4^\uparrow\rangle-|3^\uparrow 4^\downarrow\rangle]+\cdots\bigr)
\end{eqnarray}
which also has the occupation numbers given by Eq.\ (\ref{eq:4fold}). In other words, the 4-fold degeneracy of the occupation numbers is a necessary but not sufficient condition for the 1RDM to belong to a triplet state with $S_z=0$. As a result, contrary to the fully polarized states, enforcing a 4-fold degeneracy on the occupation numbers might still yield a singlet state rather than the $S_z=0$ triplet state. However, this is not the most general singlet state. One can also construct a singlet as
\begin{equation}
|S=0, S_z=0\rangle=c_1|1^\downarrow 1^\uparrow\rangle
+c_2|2^\downarrow 2^\uparrow\rangle+\cdots
\label{eq:gensing}
\end{equation}
which leads to occupation numbers that are doubly degenerate only. The double degeneracy in the occupation numbers of the singlet 1RDM and the spatial part of the triplet 1RDM were derived from symmetry considerations of the corresponding wave functions by L\"owdin and Shull \cite{LSH1956}.
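That the 4-fold degeneracy cannot decide between the states (\ref{eq:trips=0}) and (\ref{eq:sing}) can be made explicit numerically. The following sketch (our own illustration; all names in it are ours) builds both states with a single amplitude $a_1=1$ and confirms that their 1RDMs have identical, 4-fold degenerate occupation spectra:

```python
import numpy as np

def det2(a, b, dim=4):
    # antisymmetrized 2-electron Slater determinant |a b> over spin orbitals,
    # indexed as 2*p + spin (spin: 0 = up, 1 = down)
    t = np.zeros((dim, dim))
    t[a, b], t[b, a] = 1 / np.sqrt(2), -1 / np.sqrt(2)
    return t

up1, dn1, up2, dn2 = 0, 1, 2, 3
# S_z = 0 triplet: (|1dn 2up> + |1up 2dn>) / sqrt(2), with a_1 = 1
psi_trip = (det2(dn1, up2) + det2(up1, dn2)) / np.sqrt(2)
# corresponding singlet: (|1dn 2up> - |1up 2dn>) / sqrt(2)
psi_sing = (det2(dn1, up2) - det2(up1, dn2)) / np.sqrt(2)

def occupations(psi, n_elec=2):
    # 1RDM gamma = N Psi Psi^dagger; its eigenvalues are the occupations
    return np.linalg.eigvalsh(n_elec * psi @ psi.conj().T)

print(occupations(psi_trip))  # four occupations equal to 1/2: 4-fold degenerate
print(occupations(psi_sing))  # identical spectrum: degeneracy alone cannot decide
```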
Running a RDMFT calculation with $N_\uparrow=N_\downarrow=1$ without enforcing extra constraints, we typically find the double degeneracy of the occupations that corresponds to the general singlet configuration, Eq.\ (\ref{eq:gensing}). This is true not only for approximate functionals but also for the exact one, which is known for $N=2$ \cite{LSH1956}. This is not surprising since the occupation numbers have more variational freedom than for the states (\ref{eq:trips=0}) and (\ref{eq:sing}) where a 4-fold degeneracy is required. Even a linear combination of the general singlet state (\ref{eq:gensing}) and the triplet state (\ref{eq:trips=0}) yields some occupation numbers which are 4-fold degenerate. For example, the state
\begin{equation}\label{eq:lincomb}
c_1|1^\downarrow 1^\uparrow\rangle +
c_2\left(|2^\downarrow 3^\uparrow\rangle+|2^\uparrow 3^\downarrow\rangle\right),
\end{equation}
where the first part is a singlet state while the second one is a $S_z=0$ triplet state, corresponds to occupation numbers
\begin{eqnarray}\label{eq:lincombocc1}
n_{1\uparrow}&=&n_{1\downarrow}=|c_1|^2,\\
\label{eq:lincombocc2}
n_{2\uparrow}&=&n_{2\downarrow}=n_{3\uparrow}=n_{3\downarrow}=|c_2|^2.
\end{eqnarray}
In other words, the occupation numbers coming from the triplet part are again 4-fold degenerate. Note that due to the fact that we are expanding in natural orbitals, an orbital from the singlet part of Eq.\ (\ref{eq:lincomb}) cannot be used again in the triplet part since this would introduce determinants which are single excitations of each other.
Energetically, unless the Hamiltonian contains a magnetic field or any other spin-specific terms, the two states (\ref{eq:trips=1}) and (\ref{eq:trips=0}) are degenerate. Hence, for calculating the triplet energy it should be irrelevant which state is calculated. However, many RDMFT functionals do not satisfy this degeneracy. An exception is the M\"uller functional for which one can show that the states (\ref{eq:trips=1}) and (\ref{eq:trips=0}) have the same energy (see Appendix \ref{app:mueller}). In those cases where the degeneracy is broken, one can calculate the $S_z=0$ triplet state by enforcing the 4-fold degeneracy of the occupation numbers. This prevents the minimization from finding the general singlet state (\ref{eq:gensing}) which is lower in energy. The lowest singlet state with 4-fold degeneracy (\ref{eq:sing}) has a higher energy than the corresponding triplet state (\ref{eq:trips=0}) \cite{Atkins}.
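In a numerical implementation, such a degeneracy constraint can be imposed on the occupation numbers by a simple projection (the sketch below is our own illustration with made-up occupations; practical codes often parametrize the occupations instead): averaging within consecutive groups of $k$ enforces $k$-fold degeneracy while preserving the particle number.

```python
import numpy as np

def enforce_degeneracy(occ, k):
    # Enforce k-fold degeneracy of the occupation numbers by averaging within
    # consecutive groups of k (k = 2 for the fully polarized triplet, k = 4
    # for the S_z = 0 case); the total particle number is preserved.
    groups = np.asarray(occ, dtype=float).reshape(-1, k)
    return np.repeat(groups.mean(axis=1), k)

occ = [0.46, 0.54, 0.52, 0.48, 0.02, 0.01, 0.00, 0.01]  # violates 4-fold degeneracy
proj = enforce_degeneracy(occ, 4)
print(proj)                  # ~[0.5, 0.5, 0.5, 0.5, 0.01, 0.01, 0.01, 0.01]
print(proj.sum(), sum(occ))  # particle number unchanged
```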
For more than two electrons, one often encounters cases where only the two outer electrons are important for describing the correct physics. Within any multiconfiguration wave function approach, this corresponds to working with only two active electrons.
For example, for four electrons, one writes the wave function as
\begin{eqnarray}\nonumber
|S=1, S_z=1\rangle &=& a_1|1^\uparrow1^\downarrow 2^\uparrow 3^\uparrow\rangle+a_2|1^\uparrow1^\downarrow 4^\uparrow 5^\uparrow\rangle\\
\label{eq:M=1_pinned}
&+& a_3|1^\uparrow1^\downarrow 6^\uparrow 7^\uparrow\rangle + \ldots
\end{eqnarray}
which leads to the following occupation numbers
\begin{eqnarray}
\label{eq:occM=1_pinned}
\nonumber
n_{1\uparrow}&=& n_{1\downarrow}=1,\\
\nonumber
n_{2\uparrow}=n_{3\uparrow},&& n_{4\uparrow}=n_{5\uparrow},\\
\nonumber
n_{6\uparrow}=n_{7\uparrow},&& \ldots\: .\\
\end{eqnarray}
Acting with $\hat{S}_{-}$ on the state of Eq.~(\ref{eq:M=1_pinned}) we obtain
\begin{eqnarray}
\nonumber
|S=1,S_z=0\rangle &=& \frac{a_1}{\sqrt{2}}\left(|1^\uparrow 1^\downarrow 2^\uparrow 3^\downarrow \rangle+|1^\uparrow 1^\downarrow 2^\downarrow 3^\uparrow\rangle\right)\\
\nonumber
&+&\frac{a_2}{\sqrt{2}}\left(|1^\uparrow 1^\downarrow 4^\uparrow 5^\downarrow\rangle + |1^\uparrow 1^\downarrow 4^\downarrow 5^\uparrow\rangle\right)\\
\nonumber
&+&\frac{a_3}{\sqrt{2}}\left(|1^\uparrow 1^\downarrow 6^\uparrow 7^\downarrow\rangle + |1^\uparrow 1^\downarrow 6^\downarrow 7^\uparrow\rangle\right)\\
&+&\ldots
\label{eq:M=0_pinned}
\end{eqnarray}
with occupation numbers
\begin{eqnarray}
\label{eq:occM=0_pinned}
\nonumber
n_{1\uparrow}&=& n_{1\downarrow}=1,\\
\nonumber
n_{2\uparrow}=n_{2\downarrow}&=&n_{3\uparrow}=n_{3\downarrow},\\
\nonumber
n_{4\uparrow}=n_{4\downarrow}&=&n_{5\uparrow}=n_{5\downarrow},\\
\nonumber
n_{6\uparrow}=n_{6\downarrow}&=&n_{7\uparrow}=n_{7\downarrow},\\
\vdots && \vdots \: .
\end{eqnarray}
However, for the state $|S=1,S_{z}=1\rangle$, the choice of forcing all the inner orbitals to have occupation numbers equal to one is very restrictive and leaves no variational freedom in the spin-down channel. There are cases, as we see later, where imposing the corresponding constraints in an energy minimization leads to an overestimation of the energy of the triplet state. As an alternative, we propose to assume that the inner orbitals are equally, but not necessarily fully, occupied in the two spin channels, such that these orbitals give an $S=0$ contribution to the total spin. Hence, we enforce the following conditions on the occupation numbers for $S_z=1$
\begin{eqnarray}
\label{eq:occM=1}
\nonumber
n_{1\uparrow}&=& n_{1\downarrow},\\
\nonumber
n_{2\uparrow}=n_{3\uparrow},&& n_{4\uparrow}=n_{5\uparrow},\\
\nonumber
n_{6\uparrow}=n_{7\uparrow}\\
\vdots && \vdots\:,
\end{eqnarray}
and leave the occupation numbers of the down channel, except for $n_{1\downarrow}$, unconstrained.
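As a small numerical illustration of these relaxed conditions (the sketch and the sample occupations below are ours and purely hypothetical), their satisfaction can be monitored as a single violation measure during the minimization:

```python
import numpy as np

def triplet_constraint_violation(n_up, n_dn):
    # Largest violation of the relaxed S_z = 1 conditions of the text:
    # the innermost orbital is equally occupied in both channels, and the
    # remaining spin-up occupations are pairwise degenerate; the spin-down
    # occupations beyond the first are left unconstrained.
    n_up = np.asarray(n_up, dtype=float)
    rest = n_up[1:]
    pair_dev = np.max(np.abs(rest[0::2] - rest[1::2]))
    return max(abs(n_up[0] - n_dn[0]), pair_dev)

n_up = [0.98, 0.90, 0.90, 0.10, 0.10]  # core orbital plus two degenerate pairs
n_dn = [0.98, 0.70, 0.20, 0.08, 0.04]  # unconstrained beyond the core orbital
print(triplet_constraint_violation(n_up, n_dn))  # 0.0: conditions satisfied
```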
The conditions for four electrons of Eqs.~(\ref{eq:occM=1_pinned}), (\ref{eq:occM=0_pinned}), and (\ref{eq:occM=1}) can be extended to any even number of electrons. One can then apply them to calculate, for example, approximate total energies for the lowest triplet state. We expect these total energies to be accurate for the lowest triplet state because the assumption that this triplet state is built entirely from the two outer electrons is a good approximation in this case.
We would like to point out that with the constraint (\ref{eq:occM=1_pinned}), the system is effectively reduced to a 2-particle system with $S_z=1$. Hence, the double degeneracy of the fractional occupation numbers is a necessary and sufficient condition to find expectation values $S=1$ and $S_z=1$. Using instead the constraint (\ref{eq:occM=1}), the calculated 1RDM will not correspond exactly to the correct expectation value of $\hat{\mathbf{S}}^2$ due to the additional freedom, especially in the outer occupation numbers of the spin-down channel. Nevertheless, this constraint will improve the description of a triplet state in general compared with no constraint at all. The effect of the constraints on the description of triplets can be tested for the M\"uller functional, which offers an ansatz for the full 2RDM, so that the expectation value of $\hat{\mathbf{S}}^2$ can be calculated.
We note that enforcing the pinning of some occupation numbers to one, which is generally an approximation, reduces dramatically the number of pure-state $N$-representability conditions and makes it easier to apply them in practical implementations. As the number of electrons and the dimension of the Hilbert space increase, the number of the exact generalized Pauli constraints explodes and their consideration in the minimization is extremely difficult. Pinning an occupation number means that it has no influence in the question of pure-state or ensemble $N$-representability. From Eqs.\ (\ref{eq:M=1_pinned})-(\ref{eq:occM=0_pinned}), we can see that the orbitals which correspond to pinned occupation numbers appear in every Slater determinant in a pure state. For an ensemble, such an orbital would appear in every Slater determinant of every term that contributes to the ensemble. Consequently, for those systems where pinning the occupation numbers for all but two electrons is a valid approximation, we could simply consider the generalized Pauli constraints for two electrons and for those occupation numbers which are not pinned. For a 2-electron triplet with $S_z=\pm 1$ the double degeneracy constraint (\ref{eq:2fold}) coincides with the generalized Pauli constraint. For the triplet with $S_z=0$, the 4-fold degeneracy constraint (\ref{eq:4fold}) is stricter but fulfills the generalized Pauli constraints.
So far, we have discussed constraints on the triplet states. In order to calculate the excitation energy from a singlet ground state to the first excited triplet state, we also need to calculate the energy of the singlet. Since many approximations were derived aiming at the correct description of singlet ground states, we expect to obtain quantitatively correct results for the singlet ground-state energies by just enforcing ${S}_z=0$, despite the fact that we cannot exclude that our density matrix might be contaminated by contributions from states with total spin $S$ larger than zero. Note that for systems with an even number of electrons, a nondegenerate ground state, and a Hamiltonian which has time-reversal symmetry, by enforcing ${S}_z=0$ through the constraint $n_{j\uparrow}=n_{j\downarrow}$, we also satisfy the generalized Pauli constraints \cite{S1966}.
Let us point out that even when the generalized Pauli constraints are satisfied, this does not mean that the 1RDM at hand necessarily corresponds to a pure state; it could also correspond to an ensemble. However, by satisfying these constraints we exclude the case that a given 1RDM corresponds to ensembles only and cannot correspond to a pure state.
\section{Results\label{sec:results}}
We now apply the constraints discussed in the last section to
the energy minimization in RDMFT calculations. We employed three different approximations for the total energy within RDMFT, namely, the M\"uller, the BBC3, and the Power functionals \cite{M1984, GPB2005, SDLG2008}. We first compare the 1RDMs obtained from the constrained calculations to those obtained without these constraints and to the ``exact'' 1RDM from an MCSCF calculation. The inclusion of the extra constraints on the occupation numbers adds only an insignificant amount of computational cost since the bottleneck of the RDMFT minimization is the optimization of the natural orbitals. We refer to calculations without the constraints related to $\langle\hat{\mathbf{S}}^2\rangle$ discussed in this work as minimizations without constraint. Despite this name, we still impose the constraint (\ref{repres_spin}), which fixes the expectation value of $\hat{S}_z$ and the correct number of electrons, in all our calculations.
As test systems, we considered helium, H$_2$, Be, BH, H$_2$O, and Mg, for which, due to their small size, the MCSCF calculations are feasible. For helium, H$_2$, Be, and BH, we used the cc-pVTZ basis set, and the energy minimization for all methods was performed using $10$, $14$, $35$, and $24$ natural orbitals, respectively. For H$_2$O and Mg we used the cc-pVDZ with $20$ and $17$ natural orbitals, respectively. In all cases, we used fewer natural orbitals than the basis sets would allow since, for small systems, we obtained very small occupation numbers which cause numerical problems in the convergence of the MCSCF calculations. In addition, for larger systems the demands in memory become prohibitive for the MCSCF calculation because we want to compare to the exact 1RDM and, therefore, cannot pin occupations to one. The MCSCF triplet as well as the one- and 2-electron integral calculations were performed using the Gamess US code \cite{GAMESS}. The RDMFT calculations were performed with the HIPPO computer code \cite{code}.
\subsection{Quality of 1-RDMs Using $\langle \hat{\mathbf{S}}^2 \rangle$ Constraints}
As discussed in the previous section, a 2-electron wave function with $S_z=1$ expressed in terms of natural orbitals has the form (\ref{eq:trips=1}), which results in the restrictions (\ref{eq:2fold}). Thus, we tested whether these exact conditions are satisfied by the functionals considered here, for the helium atom and the hydrogen molecule at different internuclear separations, by calculating the ground state for $N_\uparrow=2$ and $N_\downarrow=0$. We find that the conditions are violated in all cases, i.e., the occupations are not pairwise equal but show differences of up to $0.09$ within the pairs. The conditions are, as expected, satisfied by the MCSCF calculations and, as can be shown analytically, by the exact 2-electron RDMFT functional (LSH) \cite{LSH1956}. We then perform the RDMFT calculations using Eq.\ (\ref{eq:2fold}) as an additional constraint during the optimization of the occupation numbers.
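As an aside, the 2-fold degeneracy of Eq.\ (\ref{eq:2fold}) can be imposed during the occupation-number optimization by a simple projection that averages the occupations within each pair. The sketch below is illustrative only and assumes a hypothetical pairing of consecutive natural orbitals; by construction it preserves the total particle number.

```python
import numpy as np

def enforce_pairwise_degeneracy(n_up):
    """Project spin-up occupations onto the manifold where consecutive
    natural orbitals are pairwise equal (the 2-fold degeneracy of the
    triplet conditions); averaging within each pair preserves N."""
    pairs = np.asarray(n_up, dtype=float).reshape(-1, 2)
    means = pairs.mean(axis=1)
    return np.repeat(means, 2)
```

For example, occupations $(0.95, 0.86, 0.12, 0.07)$, with pair differences of the size observed for the unconstrained functionals, are mapped to $(0.905, 0.905, 0.095, 0.095)$ while the particle number stays equal to two.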
We also perform RDMFT minimizations for larger systems with an even number of particles, either with the approximate constraint (\ref{eq:occM=1_pinned}), where we assume that the triplet is formed from the two outermost electrons and the inner orbitals have occupations pinned to one, or with the constraint (\ref{eq:occM=1}), where we allow the inner occupations to be less than one. We compare the occupation numbers from these RDMFT minimizations, with and without enforcing the constraints, to occupation numbers from ``exact'' MCSCF calculations, to check whether the constraints help to obtain a density matrix closer to the exact one.
As the constraints concern the occupation numbers, an important criterion for the quality of the calculated 1RDMs is the squared deviation
\begin{equation}\label{eq:delta}
\Delta=\frac{1}{N}\sum_{j=1}^M \left(\sum_{\sigma=\uparrow,\downarrow}\left(n_{j\sigma}^\mathrm{RDMFT}-n_{j\sigma}^\mathrm{MCSCF}\right)\right)^2
\end{equation}
of the obtained RDMFT occupations from the exact ones, which we show in Table \ref{tab:dev_occ}. In Eq.\ (\ref{eq:delta}), $M$ denotes the number of natural orbitals included in the calculation and $N$ the total number of electrons. We show results for $\Delta$ without imposing the additional constraints (w/o) and from calculations with the spin constraint (\ref{eq:2fold}) for two electrons (cons.) in the top half of the Table. For more than two electrons, we impose the constraints of Eqs.~(\ref{eq:occM=1_pinned}) (cons.\ pin.) and (\ref{eq:occM=1}) (cons.), and the results are shown in the bottom half. For the 2-electron systems, imposing the exact constraint (\ref{eq:2fold}) in calculations with the approximate 1RDM functionals we adopted results in optimal occupation numbers closer to the exact ones. Moreover, for more than two electrons, both approximate constraints, Eqs.\ (\ref{eq:occM=1_pinned}) and (\ref{eq:occM=1}), improve the 1RDM significantly, as the occupation numbers are much closer to the exact ones than the occupations from the energy minimization without additional constraints.
We should emphasize that, although the constraint (\ref{eq:occM=1}) is expected to lead to a larger deviation from the correct $\langle \hat{\mathbf{S}}^2\rangle$ than the constraint (\ref{eq:occM=1_pinned}), it leads to occupations closer to the exact ones.
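For concreteness, the deviation of Eq.\ (\ref{eq:delta}) can be evaluated directly from the spin-resolved occupations; note that the spin sum is taken before squaring. A minimal Python sketch with made-up occupation numbers:

```python
import numpy as np

def delta(n_rdmft, n_mcscf, n_electrons):
    """Mean-square deviation of the spin-summed RDMFT occupations
    from the MCSCF reference.  Inputs have shape (M, 2), the columns
    being the (up, down) occupations of each natural orbital."""
    diff = np.asarray(n_rdmft, float) - np.asarray(n_mcscf, float)
    per_orbital = diff.sum(axis=1)      # sum over sigma first
    return (per_orbital ** 2).sum() / n_electrons

# illustrative numbers only
n_rdmft = [[1.0, 0.9], [0.1, 0.0]]
n_mcscf = [[1.0, 1.0], [0.0, 0.0]]
```

Here `delta(n_rdmft, n_mcscf, 2)` gives $((-0.1)^2 + (0.1)^2)/2 = 0.01$.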
\begin{table*}
\setlength{\tabcolsep}{1.0pt}
\begin{tabular}{|l|ccc|ccc|ccc|} \hline\hline
& \multicolumn{3}{c|}{M\"uller} & \multicolumn{3}{c|}{BBC3} & \multicolumn{3}{c|}{Power} \\
& w/o & \multicolumn{2}{c|}{cons.} & w/o & \multicolumn{2}{c|}{cons.} & w/o & \multicolumn{2}{c|}{cons.} \\[0.1ex]\hline
He & 0.001505 & \multicolumn{2}{c|}{0.000063} & 0.000108 & \multicolumn{2}{c|}{0.000014} & 0.000109 & \multicolumn{2}{c|}{0.000005}\\
H$_2$ (1.4 au) & 0.019184 & \multicolumn{2}{c|}{0.000855} & 0.000743 & \multicolumn{2}{c|}{0.000006} & 0.002157 & \multicolumn{2}{c|}{0.000060} \\
H$_2$ (2.5 au) & 0.004083 & \multicolumn{2}{c|}{0.000853} & 0.000312 & \multicolumn{2}{c|}{0.000064} & 0.000377 & \multicolumn{2}{c|}{0.000063} \\
H$_2$ (5.0 au) & 0.001690 & \multicolumn{2}{c|}{0.000808} & 0.000294 & \multicolumn{2}{c|}{0.000089} & 0.000161 & \multicolumn{2}{c|}{0.000075} \\\hline
average $\Delta$ & 0.00662 & \multicolumn{2}{c|}{0.00064} & 0.00036 & \multicolumn{2}{c|}{0.00004} & 0.00070 & \multicolumn{2}{c|}{0.00005} \\\hline\hline
& w/o & cons. pin. & cons. & w/o & cons. pin. & cons. & w/o & cons. pin.& cons. \\[0.1ex] \hline
Be & 0.207842 & 0.090246 & 0.068044 & 0.001813 & 0.000302 & 0.000166 & 0.164014 & 0.022162 & 0.010776 \\
BH & 0.081441 & 0.023143 &0.0091545& 0.028337& 0.000731 & 0.000274 & 0.078891 & 0.004478& 0.001066 \\
H$_2$O & 0.034401 & 0.003178 &0.0077090& 0.001668 & 0.000476 &0.000152 & 0.011851 & 0.000708 & 0.000877 \\
Mg & 0.053385 & 0.034422 &0.0162205& 0.011861 & 0.001604 &0.000128 & 0.053019 & 0.003068 & 0.003617\\\hline
average $\Delta$& 0.09427 &0.03775 &0.02528&0.01092&0.00078&0.00018&0.07694&0.00760 & 0.00408\\
\hline\hline
\end{tabular}
\caption{\label{tab:dev_occ}{\bf For the $S_z=1$ State, deviation of the calculated occupation numbers from the exact.} For two-electron systems (top): Deviation $\Delta$ (see Eq.\ (\ref{eq:delta})) of the occupation numbers from the exact occupations (MCSCF) calculated with different RDMFT functionals, without (w/o) enforcing the additional exact spin constraint (\ref{eq:2fold}) and with the constraint (cons.). For systems with more than two electrons (bottom): The same deviation without any constraint (w/o), with the constraint (\ref{eq:occM=1_pinned}) (cons. pin.), or using Eq.\ (\ref{eq:occM=1}) (cons.). For each system we used the same number of natural orbitals and the same basis set for the RDMFT and MCSCF calculations.}
\end{table*}
\begin{table*}
\setlength{\tabcolsep}{1.0pt}
\begin{tabular}{|l|ccc|ccc|ccc|c|} \hline\hline
& \multicolumn{3}{c|}{M\"uller} & \multicolumn{3}{c|}{BBC3} & \multicolumn{3}{c|}{Power}&MCSCF \\
& w/o & \multicolumn{2}{c|}{cons.} & w/o & \multicolumn{2}{c|}{cons.} & w/o & \multicolumn{2}{c|}{cons.} & \\[0.1ex] \hline
He & 4.9$\cdot 10^{-2}$ & \multicolumn{2}{c|}{1.3$\cdot 10^{-2}$} & 1.3$\cdot 10^{-2}$ & \multicolumn{2}{c|}{6.3$\cdot 10^{-3}$} &1.3$\cdot 10^{-2}$ & \multicolumn{2}{c|}{3.8$\cdot 10^{-3}$} &1.9$\cdot 10^{-4}$ \\
H$_2$ (1.4 au) & 1.0$\cdot 10^{-1}$ & \multicolumn{2}{c|}{5.5$\cdot 10^{-2}$} &3.8$\cdot 10^{-2}$ & \multicolumn{2}{c|}{1.7$\cdot 10^{-2}$} & 6.1$\cdot 10^{-2}$ & \multicolumn{2}{c|}{1.9$\cdot 10^{-2}$} & 5.7$\cdot 10^{-3}$ \\
H$_2$ (2.5 au) & 8.5$\cdot 10^{-2}$ & \multicolumn{2}{c|}{5.4$\cdot 10^{-2}$} &2.5$\cdot 10^{-2}$ & \multicolumn{2}{c|}{1.7$\cdot 10^{-2}$} & 2.7$\cdot 10^{-2}$ & \multicolumn{2}{c|}{1.7$\cdot 10^{-2}$} & 3.0$\cdot 10^{-3}$ \\
H$_2$ (5.0 au) & 5.4$\cdot 10^{-2}$ & \multicolumn{2}{c|}{5.1$\cdot 10^{-2}$} &2.3$\cdot 10^{-2}$ & \multicolumn{2}{c|}{1.7$\cdot 10^{-2}$} & 1.7$\cdot 10^{-2}$ & \multicolumn{2}{c|}{1.5$\cdot 10^{-4}$} & 2.5$\cdot 10^{-4}$\\\hline
average $\Delta_w$ & 7.0$\cdot 10^{-2}$ &\multicolumn{2}{c|}{4.1$\cdot10^{-2}$}&2.3$\cdot 10^{-2}$&\multicolumn{2}{c|}{1.2$\cdot10^{-2}$}&2.7$\cdot10^{-2}$&\multicolumn{2}{c|}{1.1$\cdot 10^{-2}$} & -\\\hline \hline
& w/o & cons. pin. & cons. & w/o & cons. pin. & cons. & w/o & cons. pin.& cons.& \\[0.1ex] \hline
Be & 6.8$\cdot 10^{-1}$ & 6.8$\cdot 10^{-1}$ &5.9$\cdot 10^{-1}$& 8.2$\cdot 10^{-2}$ & 5.5$\cdot 10^{-2}$ &4.3$\cdot 10^{-2}$& 1.0 & 3.3$\cdot 10^{-1}$ &2.3$\cdot 10^{-1}$& 1.5$\cdot 10^{-2}$\\
BH & 1.1& 4.5$\cdot 10^{-1}$ &3.8$\cdot 10^{-1}$& 3.4$\cdot 10^{-1}$ & 7.2$\cdot 10^{-2}$ &5.5$\cdot 10^{-2}$& 1.0 & 2.0$\cdot 10^{-1}$ & 1.3$\cdot 10^{-1}$&7.7$\cdot 10^{-2}$ \\
H$_2$O & 4.1$\cdot 10^{-1}$ & 2.3$\cdot 10^{-1}$&2.0$\cdot 10^{-1}$& 1.1$\cdot 10^{-1}$ & 8.6$\cdot 10^{-2}$ &6.7$\cdot 10^{-2}$ & 2.3$\cdot 10^{-1}$ & 1.1$\cdot 10^{-1}$&7.9$\cdot 10^{-2}$ & 8.1$\cdot 10^{-2}$ \\
Mg & 1.0 &7.1$\cdot 10^{-1}$ &5.0$\cdot 10^{-1}$& 3.0$\cdot 10^{-1}$ &1.6$\cdot 10^{-1}$ &5.6$\cdot 10^{-2}$ & 1.0 &2.3$\cdot 10^{-1}$& 2.3$\cdot 10^{-1}$ & 1.3$\cdot 10^{-2}$\\\hline
average $\Delta_w$& 7.5$\cdot10^{-1}$& 4.7$\cdot 10^{-1}$&3.3$\cdot 10^{-1}$&1.6$\cdot 10^{-1}$&5.0$\cdot10^{-2}$&1.5$\cdot10^{-2}$&7.7$\cdot10^{-1}$&7.2$\cdot10^{-1}$&1.2$\cdot10^{-1}$&-\\
\hline\hline
\end{tabular}
\caption{\label{tab:weak_occ} {\bf Same as Table \ref{tab:dev_occ} but for the sum of the occupation numbers of the weakly occupied orbitals, $w$ (see Eq.\ (\ref{eq:wcorr})).} Here $\Delta_w$ denotes the absolute deviation from the ``exact'' MCSCF results which is then averaged over all systems in each part of the table.}
\end{table*}
Correlations in RDMFT are manifested by fractional occupation numbers. A measure for the correlation is the total electronic charge of ``weakly'' occupied orbitals, i.e., those with occupations smaller than $1/2$, defined as
\begin{equation}\label{eq:wcorr}
w=\sum_{n_{j\sigma} <\frac{1}{2}} n_{j\sigma}.
\end{equation}
We again compare to the results from an MCSCF calculation using
\begin{eqnarray}
\Delta_w=\left|w-w^{MCSCF}\right|
\end{eqnarray}
which is then averaged over all the systems considered.
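Both measures can be evaluated from a plain list of spin-orbital occupations; a minimal sketch with illustrative numbers only:

```python
import numpy as np

def weak_charge(occs):
    """Total charge in the weakly occupied spin-orbitals,
    i.e. those with occupation below 1/2 (the quantity w)."""
    occs = np.asarray(occs, float)
    return occs[occs < 0.5].sum()

def delta_w(occs_rdmft, occs_mcscf):
    """Absolute deviation of w from the MCSCF reference."""
    return abs(weak_charge(occs_rdmft) - weak_charge(occs_mcscf))
```

For instance, an overcorrelating functional with occupations $(0.96, 0.90, 0.08, 0.06)$ gives $w = 0.14$, against a hypothetical MCSCF reference $(0.98, 0.95, 0.04, 0.03)$ with $w = 0.07$, so $\Delta_w = 0.07$.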
A known deficiency of many approximate functionals, which can also be seen in Table \ref{tab:weak_occ}, is that they typically overestimate $w$. For 2-electron systems, imposing the exact constraint (\ref{eq:2fold}) for $S=1$ and $S_z=1$ lowers $w$ to values closer to the exact ones. For systems with more than two electrons, both the constraints (\ref{eq:occM=1_pinned}) and (\ref{eq:occM=1}) reduce $w$ to values closer to the ``exact'' MCSCF result.
Results with the constraint (\ref{eq:occM=1}), i.e., without pinning the inner occupation numbers, are the closest to MCSCF.
So far, we have assessed the quality of the optimal 1RDMs from our calculations by comparing them to the results from MCSCF calculations. However, the goal was to calculate 1RDMs with a specific expectation value for the total spin, which requires expressing $\langle \hat{\mathbf{S}}^2\rangle$ as a functional of the 1RDM. For the M\"uller functional, the energy functional was derived using an ansatz for the second-order reduced density matrix $\Gamma^{(2)}$ in terms of the 1RDM \cite{M1984}. Using this ansatz for $\Gamma^{(2)}$, we can calculate $\langle \hat{\mathbf{S}}^2\rangle$ from Eq.\ (\ref{ssquare}). The resulting expression reads
\begin{eqnarray}\nonumber
\langle \hat{\mathbf{S}}^2 \rangle_{\mathrm{M\ddot{u}ller}}& = &\frac{(N_\uparrow-N_\downarrow)^2}{4}
+ (N_\uparrow + N_\downarrow) \\
&&-\frac{1}{2} \sum_{j=1}^{\infty} \sum_{\sigma\sigma^\prime = \uparrow,\downarrow}
\sqrt{(n_{j\sigma}\,n_{j\sigma^\prime})}.
\label{eq:Stot_Mueller}
\end{eqnarray}
The correct expectation value of the triplet state is $\langle \hat{\mathbf{S}}^2 \rangle=2 $. Thus, we calculate the difference
\begin{eqnarray}
\Delta \mathbf{S}^2=2-\langle \hat{\mathbf{S}}^2 \rangle_{\mathrm{M\ddot{u}ller}}
\end{eqnarray}
to examine whether imposing the constraint (\ref{eq:occM=1}) improves
$\langle \hat{\mathbf{S}}^2 \rangle$ or not. The results we obtained are shown in Fig.\ \ref{fig:spin_violation}. It is apparent that in all cases the
considered approximate constraint improves the values of $\langle \hat{\mathbf{S}}^2 \rangle$ obtained with Eq.~(\ref{eq:Stot_Mueller}). Note that with the constraint (\ref{eq:occM=1_pinned}), where we pin the inner occupation numbers to one, and for 2-electron systems, where
Eq.\ (\ref{eq:2fold}) is the exact constraint, $\Delta\mathbf{S}^2$
is zero and therefore not included in Fig.\ \ref{fig:spin_violation}.
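Eq.\ (\ref{eq:Stot_Mueller}) is straightforward to evaluate from the occupation numbers alone. The Python sketch below (with illustrative occupations) also makes explicit why the pinned and 2-electron cases are exact: for $N_\downarrow=0$, and likewise for a pinned core, the expression returns exactly $2$ for any fractional occupations, so $\Delta\mathbf{S}^2$ arises only from the $\uparrow\downarrow$ cross terms of orbitals whose occupations differ between the two spin channels.

```python
import numpy as np

def mueller_s2(n_up, n_dn):
    """<S^2> evaluated with the Mueller ansatz for the 2-RDM,
    given the natural-orbital occupations of each spin channel."""
    n_up, n_dn = np.asarray(n_up, float), np.asarray(n_dn, float)
    N_up, N_dn = n_up.sum(), n_dn.sum()
    # sum over sigma, sigma' of sqrt(n_{j sigma} n_{j sigma'})
    cross = (n_up + n_dn + 2.0 * np.sqrt(n_up * n_dn)).sum()
    return (N_up - N_dn) ** 2 / 4.0 + (N_up + N_dn) - 0.5 * cross
```

For a fully polarized pair, e.g.\ up-channel occupations $(1.9, 0.1)$, the result is exactly $2$; with a pinned core, e.g.\ $n_\uparrow=(1, 1.8, 0.2)$, $n_\downarrow=(1, 0, 0)$, it is again $2$; fractional occupations that differ between the channels give $\Delta\mathbf{S}^2 > 0$.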
\begin{figure}
\includegraphics[width=0.4\textwidth, clip]{spin_viol.eps}
\caption{\label{fig:spin_violation} Difference between $\langle\hat{\mathbf{S}}^2 \rangle$ calculated from Eq.\ (\ref{eq:Stot_Mueller}) with the M\"uller functional and the exact $\langle\hat{\mathbf{S}}^2 \rangle=2$ for a triplet state. The triplet is calculated without any constraints for $\hat{\mathbf{S}}^2$ (green shaded) and with the constraint Eq.\ (\ref{eq:occM=1}) (red full).}
\end{figure}
Let us point out that the M\"uller ansatz for $\Gamma^{(2)}$ is not exact, therefore, the calculated value $\langle \hat{\mathbf{S}}^2 \rangle_{\mathrm{M\ddot{u}ller}}$ is also not exact. However, the value is consistent with the energy functional which was used in the minimization procedure. Unfortunately, for the other approximations that we employed, the functionals for $\langle \hat{\mathbf{S}}^2 \rangle$ that are consistent with the energy functionals are not available.
\subsection{{Energy of Excited Triplet States}\label{sec:sing-trip}}
In this subsection, we discuss the effect of imposing the constraints for $\langle \hat{\mathbf{S}}^2\rangle$ on the total energies of excited triplet states. By imposing the constraint (\ref{eq:2fold}) we calculate the lowest lying triplet energy of 2-electron systems with $S_z=1$ and compare it with the energy that we get using the constraint (\ref{eq:4fold}) for the triplet with $S_z=0$. As we show in Appendix \ref{app:mueller}, the M\"uller functional respects the degeneracy between the $S_z=0$ triplet state and the fully polarized triplet states. This is not the case for the other approximations we employed in this work.
\begin{figure}
\includegraphics[width=0.45\textwidth, clip]{H2_singlet_triplet.eps}
\caption{\label{fig:h2diss} Excitation energy, ground-state singlet to the lowest triplet, of the hydrogen molecule as a function of the internuclear distance, using the BBC3 functional. The triplet is calculated either by just imposing $S_z=1$ with no constraint for $\langle \hat{\mathbf{S}}^2\rangle$ or by additionally imposing the constraint for $S=1$, $S_z=1$ (Eq. (\ref{eq:2fold})) or for $S=1$, $S_z=0$ (Eq. (\ref{eq:4fold})).}
\end{figure}
In Fig.\ \ref{fig:h2diss}, we plot the energy difference between the first excited triplet and the ground-state singlet of the H$_2$ molecule, i.e., the first singlet-to-triplet excitation energy, as a function of the internuclear distance, using the BBC3 functional. The unconstrained calculation for $S_z=1$ agrees only qualitatively with the MCSCF results. Enforcing the constraint (\ref{eq:2fold}) for $S=1, S_z=1$ slightly improves the excitation energies. The best results are obtained by enforcing the constraint (\ref{eq:4fold}) for $S=1, S_z=0$. The difference between the two constrained calculations arises from the fact that the degeneracy of $S=1, S_z=1$ and $S=1, S_z=0$ is broken by the functional, although it should not be, since the Hamiltonian is spin-independent. In Table \ref{tab:sing-tripl} (top), we show the lowest triplet total energies of the helium atom and the hydrogen molecule at two different interatomic distances, the equilibrium distance, $R$=1.4~au, and a larger one, $R$=2.5~au, for all functionals that we used. For the M\"uller functional, in all systems considered, the constraints for $\langle \hat{\mathbf{S}}^2\rangle$ improve the corresponding energies compared to the unconstrained $S_z=1$ calculation. For the BBC3 functional, the $S=1$, $S_z=1$ and $S=1$, $S_z=0$ constraints give different results. Although both improve the triplet energies, the second performs better. The Power functional also breaks the energy degeneracy, but only the $S=1$, $S_z=1$ constraint improves the total energies of the triplets, while the $S=1$, $S_z=0$ constraint deteriorates them.
As a measure for the quality of the functional in calculating triplet energies we include the average, absolute, relative deviation from the MCSCF energies, i.e.,
\begin{equation}
\label{eq:diff}
\delta = \frac{1}{N_{\rm sys}} \sum_i \frac{|E_i-E_i^{\rm MCSCF}|}{|E_{i}|}\,,
\end{equation}
where $E_i$ is the RDMFT energy of system $i$, $E_i^{\rm MCSCF}$ the corresponding MCSCF energy, and $N_{\rm sys}$ the number of cases. This quantity is included in Table \ref{tab:sing-tripl}. In the same Table, we also include $\delta_{\rm ex}$, defined similarly to $\delta$ in Eq.~(\ref{eq:diff}), for the energy differences between the ground-state singlets and the first excited triplet states, as well as, for completeness, the total energies of the ground-state singlets. As one can see, the errors of the M\"uller and the BBC3 functionals in calculating these excitations are mainly due to the bad description of the triplets, as the total energies of the singlets are very accurate. In all cases, the conditions for the triplet improve the singlet-to-lowest-triplet excitation energies, although with the Power functional and the $S=1$, $S_z=0$ constraint this is only achieved due to a cancellation of errors.
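The error measure of Eq.~(\ref{eq:diff}) is a plain average over the systems considered; a minimal sketch with purely hypothetical energies (in Ha):

```python
def avg_rel_dev(E_rdmft, E_mcscf):
    """Average absolute relative deviation of RDMFT total energies
    from the MCSCF reference, Eq. (eq:diff).  The same expression,
    applied to excitation energies, yields delta_ex."""
    return sum(abs(e - e_ref) / abs(e)
               for e, e_ref in zip(E_rdmft, E_mcscf)) / len(E_rdmft)
```

For example, `avg_rel_dev([-2.0, -1.0], [-1.9, -0.95])` gives $(0.1/2.0 + 0.05/1.0)/2 = 0.05$.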
\begin{table*}
\setlength{\tabcolsep}{2.0pt}
{\footnotesize\begin{tabular}{|ll|c|c|c|c|} \hline
& & M\"uller & BBC3 & Power & MCSCF \\ \hline\hline
He & S=1 $S_z=1 $ w/o & -1.9809 & -1.9623 & -1.9475 & -1.9364 \\
& S=1 $S_z=1 $ cons & -1.9645 & -1.9565 & -1.9428 & \\
& S=1 $S_z=0 $ cons & -1.9645 & -1.9515 & -1.8501 & \\
& S=0 & -2.9062 & -2.8971 & -2.9022 & -2.8989 \\\hline
H$_2$ 1.4 au& S=1 $S_z=1$ w/o & -0.8489 & -0.8131 & -0.7972 & -0.7794 \\
& S=1 $S_z=1$ cons & -0.8226 & -0.8017 & -0.7881 & \\
& S=1 $S_z=0$ cons & -0.8226 & -0.7824 & -0.7201 & \\
& S=0 & -1.1870 & -1.1701 & -1.1464 & -1.1716 \\\hline
H$_2$ 2.5 au& S=1 $S_z=1$ w/o & -0.9968 & -0.9726 & -0.9575 & -0.9445 \\
& S=1 $S_z=1$ cons & -0.9873 & -0.9684 & -0.9544 & \\
& S=1 $S_z=0$ cons & -0.9873 & -0.9534 & -0.8857 & \\
& S=0 & -1.1185 & -1.0936 & -1.0614 & -1.0915 \\ \hline
$\delta$ & S=1 $S_z=1$ w/o & 0.056 & 0.029 & 0.014 & \\
& S=1 $S_z=1$ cons & 0.038 & 0.021 & 0.008 & \\
& S=1 $S_z=0$ cons & 0.038 & 0.007 & 0.061 & \\
& S=0 & 0.013 & 0.001 & 0.017 & \\\hline
$\delta_{\rm ex}$ & S=1 $S_z=1$ w/o & 0.12 & 0.10 & 0.15 & \\
& S=1 $S_z=1$ cons & 0.07 & 0.08 & 0.13 & \\
& S=1 $S_z=0$ cons & 0.07 & 0.03 & 0.11 & \\
\hline
\multicolumn{6}{c}{ } \\
\hline
Be &S=1 $S_z=1$ w/o & -14.6966 & -14.5685 & -14.5853 & -14.5327\\
&S=1 $S_z=1$ cons. pin. & -14.6513 & -14.5544 & -14.5514 & \\
&S=1 $S_z=1$ cons. & -14.6548 & -14.5582 & -14.5530 & \\
&S=1 $S_z=0$ cons. & -14.6958 & -14.5524 & -14.5225 & \\
&S=0 & -14.7471 & -14.6491 & -14.6170 & -14.6331 \\\hline
BH &S=1 $S_z=1$ w/o & -25.3748 & -25.2098 & -25.2058 & -25.1901 \\
&S=1 $S_z=1$ cons. pin. & -25.2710 & -25.1811 & -25.1720 & \\
&S=1 $S_z=1$ cons. & -25.3109 & -25.1997 & -25.1617 & \\
&S=1 $S_z=0$ cons. & -24.4076 & -25.1793 & -25.1824 & \\
&S=0 & -25.4504 & -25.2512 & -25.2350 & -25.2385 \\\hline
H$_2$O &S=1 $S_z=1$ w/o & -76.1195 & -75.9048 & -75.8544 & -75.8749 \\
&S=1 $S_z=1$ cons. pin. & -75.8729 & -75.8061 & -75.7558 & \\
&S=1 $S_z=1$ cons. & -76.0106 & -75.8704 & -75.8015 & \\
&S=1 $S_z=0$ cons. & -76.1468 & -75.8616 & -75.8586 & \\
&S=0 & -76.2046 & -76.3443 & -76.1097 & -76.1732 \\\hline
Mg &S=1 $S_z=1$ w/o & -199.6822 & -199.5835 & -199.6077 & -199.5500 \\
&S=1 $S_z=1$ cons. pin. & -199.6822 & -199.5835 & -199.6077 & \\
&S=1 $S_z=1$ cons. & -199.6336 & -199.5606 & -199.5711 & \\
& S=1 $S_z=0$ cons. & -199.6436 & -199.5771 & -199.5776 & \\
& S=0 & -199.7506 & -199.6603 & -199.6553 & -199.6378 \\\hline
$\delta$ & S=1 $S_z=1$ w/o & 0.0055 & 0.0010 & 0.0012 & \\
& S=1 $S_z=1$ cons. pin. & 0.0029 & 0.0008 & 0.0010 & \\
& S=1 $S_z=1$ cons. & 0.0038 & 0.0006 & 0.0009 & \\
& S=1 $S_z=0$ cons. & 0.0115 & 0.0006 & 0.0004 & \\
& S=0 & 0.0043 & 0.0010 & 0.0005 & \\\hline
$\delta_{\rm ex}$ & S=1 $S_z=1$ w/o & 0.38 & 0.12 & 0.42 & \\
& S=1 $S_z=1$ cons. pin. & 0.92 & 0.24 & 0.27 & \\
& S=1 $S_z=1$ cons. & 0.57 & 0.08 & 0.20 & \\
& S=1 $S_z=0$ cons. & 0.34 & 0.17 & 0.09 & \\ \hline
\end{tabular}}
\caption{\label{tab:sing-tripl} {\bf Energies of Lowest Triplet States and Singlet Ground States (in Ha) for Different RDMFT Functionals Calculated with or without Additional Constraints for $\langle \hat{\mathbf{S}}^2\rangle$.} For 2-electron systems (top), the triplet is calculated either by imposing only $S_z=1$ without additional constraints (first line for each system), or by imposing the constraint (\ref{eq:2fold}) for $S=1$, $S_z=1$ (second line for each system), or the constraint (\ref{eq:4fold}) for $S=1$, $S_z=0$ (third line). Energies for the ground-state singlet are also included (fourth line). For systems with more than two electrons (bottom), the triplet is calculated with $S_z=1$ without any additional spin constraint (first line), with $S_z=1$ by imposing the constraint (\ref{eq:occM=1_pinned}), i.e., pinning the inner occupations to one (second line), and with $S_z=1$ by imposing the constraint (\ref{eq:occM=1}) (third line). For $S_z=0$, we impose the constraint (\ref{eq:occM=0_pinned}) (fourth line). The exact energies obtained with MCSCF using the same basis set and the same number of active orbitals are also given for comparison. The average absolute relative deviations from MCSCF (Eq.~(\ref{eq:diff})), for the total energies, $\delta$, and the singlet-triplet excitation energies, $\delta_{\rm ex}$, are also included.}
\end{table*}
In Table \ref{tab:sing-tripl} (bottom), we include results for systems with more than two electrons. These systems are chosen because one can assume that their first excited triplet is formed by the two outermost electrons only.
For the three functionals we considered, the second line for each system gives the lowest triplet energy calculated using the constraint (\ref{eq:occM=1_pinned}), i.e., by pinning the occupations of all core orbitals to one and letting only the outer orbitals of the majority spin be fractionally occupied with two electrons. This guarantees that the core electrons do not contribute to the total spin. We then loosen the restriction on the inner occupations by enforcing only the constraint (\ref{eq:occM=1}); the results are given in the third line for each system. As shown in Fig.\ \ref{fig:spin_violation}, for the M\"uller functional, this leads to a deviation from the correct $\langle \hat{\mathbf{S}}^2\rangle$ but is still closer to the exact $\langle \hat{\mathbf{S}}^2\rangle$ than imposing no additional restriction.
On average, the total energies of the first excited triplet states are improved by imposing the constraints considered here, with the exception of the M\"uller functional with the $S=1$, $S_z=0$ constraint. However, the singlet-to-first-triplet excitation energies worsen in many cases, since error cancellations favor the unconstrained calculations with some functionals.
The small number of systems that we considered does not allow us to draw a decisive conclusion on the effect of the constraints on the excitation energies. The excitation energies (with or without the additional constraints) show a large error compared to those from MCSCF for all the functionals we employed. This is, at least partially, due to the fact that functionals in RDMFT are typically devised and tuned to reproduce the energies of ground-state singlets. In some cases, they even fail to identify that there is a triplet with lower energy. For example, the M\"uller functional yields a singlet as the ground state for the oxygen and carbon atoms instead of the correct triplet states.
As we discussed before, with the restriction (\ref{eq:occM=1}) we cannot guarantee that we get a triplet, as there exists a singlet with the same occupations. However, if the triplet is lower in energy than the corresponding singlet, then the minimization will find it. The advantage of this restriction is that it can be applied to functionals that are devised to treat systems with the same number of up and down electrons.
\section{Conclusion\label{sec:conc}}
We have considered necessary conditions for the one-body-reduced density matrix of a system of two electrons to correspond to a triplet state. There are separate conditions for the fully polarized triplet states $S_z=\pm 1$ and for the $S_z=0$ state. In a spin-restricted description, i.e., assuming the same spatial dependence of the natural orbitals in the two spin channels, the conditions for $S_z=\pm 1$ restrict the occupation numbers to be doubly degenerate. For $S_z=0$, on the other hand, a 4-fold degeneracy of the occupation numbers was found.
We first tested if the conditions are satisfied for the fully polarized, $S_z=\pm 1$, triplet states of prototype two electron systems, namely, the helium atom and the H$_2$ molecule, using typical approximate RDMFT functionals and found that they are violated significantly. They are, however, satisfied by the exact functional for two electrons, as can be shown analytically, and in MCSCF calculations. Since the conditions only affect the degeneracy of the occupation numbers they can easily be enforced in RDMFT calculations as additional constraints in the energy minimization. Thus, we applied the conditions for $S_z=1$ to calculate the lowest excited triplet states of the aforementioned 2-electron systems. We found that, with the employed approximations, the optimal occupation numbers improve significantly compared to ``exact'' MCSCF results. We also calculated the total energies of the lowest triplet states when
the conditions for $S_z=1$ and $S_z=0$ are applied and we found, in most cases, an improvement of these energies.
We also evaluated the idea of applying the aforementioned conditions, which are exact for two-electron systems, to systems with more than two electrons. For $S_z=\pm 1$, we employed two different approximate constraints: In the first, all electrons from the minority spin channel and all but two electrons from the majority spin channel occupy pinned natural orbitals, leaving only two electrons from the majority spin channel to lead to fractional occupation numbers. In the second, spin-up and spin-down core natural orbitals have equal occupancies, which are not necessarily pinned to one, and the 2-fold degeneracy is assumed only for the weakly occupied natural orbitals which accommodate the two additional electrons of the majority spin. We evaluated the approximate constraints that we propose by imposing them as additional constraints in RDMFT minimizations of some atoms and molecules and found that, in all cases, we get occupation numbers closer to the exact ones than without imposing them. For $S_z=0$ triplets, the extension we considered assumes that the core natural orbitals, accommodating all but two electrons, form a singlet state and are pinned, while the rest of the orbitals, which accommodate the two remaining electrons, follow the 4-fold degeneracy as in the case of only two electrons. As for the 2-electron systems, we applied the constraints both for $S_z=\pm 1$ and $S_z=0$ in RDMFT minimizations with different functionals to calculate first excited triplet states.
On average, the constraints considered here improved the energy of the first excited triplet state, with the exception of the M\"uller functional with the $S=1$, $S_z=0$ constraint. This improvement, however, for both two and more than two electrons, does not necessarily carry over to the corresponding singlet-triplet excitation energies. We found that, for the excitation energies, error cancellations favor the unconstrained calculations, and the additional constraints might deteriorate the agreement of these energies with the exact ones. This effect is partly due to the fact that 1RDM functionals might not treat singlet and triplet states at the same level of accuracy. For the majority of present-day approximations, the main focus has been the accurate description of singlet states, and there is no guarantee of the quality of their results when extended to triplet states. For example, most functionals break the degeneracy between the fully polarized and the $S_z=0$ triplet states.
The present work is a significant step in the description of high-spin states using reduced density matrix functional theory. Our findings motivate the development of approximations which could offer a better description for triplet states by following the proposed recipe. A benchmark for these approximations would be the prediction of the triplet ground states of atomic and molecular systems. Finally, with the proposed methodology, it becomes feasible to access the $S_z=0$ triplet state in RDMFT by applying the appropriate necessary conditions. Consequently, for any new functional one could test the degeneracy between the fully polarized and the $S_z=0$ triplet states. In the future, with the improvement of available approximations, it will be possible for RDMFT to study cases of broken degeneracy of the triplet states, e.g.\ when magnetic fields are applied.
\section{Acknowledgments}
IT and NH acknowledge support from an Emmy Noether grant of the Deutsche Forschungsgemeinschaft. NNL acknowledges support from the Greek Ministry of Education (E$\Sigma\Pi$A program), GSRT action $\rm KPH\Pi I\Sigma$, project ``New multifunctional Nanostructured Materials and Devices - POLYNANO'' (No. 447963). IT would like to thank Dr. S. Thanos for useful discussions on the manuscript.
\begin{appendix}
\section{Degeneracies in the M\"uller Functional}\label{app:mueller}
In this Appendix, we show that for 2-electron systems, the M\"uller functional
respects the energy degeneracy of the ${S}_z=1$ and ${S}_z=0$ triplet states. The states (\ref{eq:trips=1}) and (\ref{eq:trips=0}) have the same natural orbitals but differ in their occupation. If the occupation numbers of the ${S}_z=1$ state are denoted by $n_{j\uparrow}$ (the down channel is empty) then the occupation numbers for the ${S}_z=0$ state, $\tilde{n}_{j\sigma}$, are given by
\begin{equation}\label{eq:occtrips=0}
\tilde{n}_{j\uparrow}=\tilde{n}_{j\downarrow}=n_{j\uparrow}/2.
\end{equation}
Starting from the solution of the fully polarized state, we know that the total energy is given by
\begin{eqnarray}
&&E = \sum_{j=1}^\infty n_{j\uparrow}\int d^3r \varphi_j^*(\v r)\left(-\frac{\nabla^2}{2}+v_\mathrm{ext}(\v r)\right)\varphi_j(\v r) \nonumber\\
&&+\frac{1}{2}\sum_{j,k=1}^\infty n_{j\uparrow}n_{k\uparrow} J_{jk}-\frac{1}{2}\sum_{j,k=1}^\infty \sqrt{n_{j\uparrow}n_{k\uparrow}} K_{jk}
\label{eq:energytrips=1}
\end{eqnarray}
with
\begin{eqnarray}
J_{jk}&=&\iint d^3r d^3r'\frac{|\varphi_j(\v r)|^2|\varphi_k(\v r')|^2}{|\v r-\v r'|},\\
K_{jk}&=&\iint d^3r d^3r'\frac{\varphi_j^*(\v r)\varphi_k^*(\v r')\varphi_k(\v r)\varphi_j(\v r')}{|\v r-\v r'|}.
\end{eqnarray}
The derivative of the total energy with respect to the occupation number $n_{j\uparrow}$ reads as
\begin{eqnarray}
\frac{\partial E}{\partial n_{j\uparrow}}&=&\int d^3r \varphi_j^*(\v r)\left(-\frac{\nabla^2}{2}+v_\mathrm{ext}(\v r)\right)\varphi_j(\v r)\nonumber \\
&+&\sum_{k=1}^\infty n_{k\uparrow} J_{jk}-\sum_{k=1}^\infty \frac{\sqrt{n_{k\uparrow}}}{2\sqrt{n_{j\uparrow}}} K_{jk}.
\label{eq:derivs=1}
\end{eqnarray}
As we are at the solution point, the derivatives with respect to all fractional occupation numbers satisfy
\begin{equation}
\frac{\partial E}{\partial n_{j\uparrow}}=\mu,
\end{equation}
where $\mu$ denotes the chemical potential of the system.
Using the occupation numbers $\tilde{n}_{j\sigma}$ of the $\langle\hat{S}_z\rangle=0$ state instead, the total energy is given as
\begin{eqnarray}
E &=& \sum_{\sigma}\sum_{j=1}^\infty \tilde{n}_{j\sigma}\int d^3r \varphi_j^*(\v r)\left(-\frac{\nabla^2}{2}+v_\mathrm{ext}(\v r)\right)\varphi_j(\v r) \nonumber\\
&+&\frac{1}{2}\sum_{\sigma\sigma'}\sum_{j,k=1}^\infty \tilde{n}_{j\sigma}\tilde{n}_{k\sigma'} J_{jk}\nonumber\\
&-&\frac{1}{2}\sum_\sigma\sum_{j,k=1}^\infty \sqrt{\tilde{n}_{j\sigma}\tilde{n}_{k\sigma}} K_{jk}.
\end{eqnarray}
Making the spin sums explicit this can be rewritten as
\begin{eqnarray}
E &=& \sum_{j=1}^\infty \left(\tilde{n}_{j\uparrow}+\tilde{n}_{j\downarrow}\right)
\int d^3r \varphi_j^*(\v r)\left(-\frac{\nabla^2}{2}+v_\mathrm{ext}(\v r)\right)\varphi_j(\v r) \nonumber\\
&+&\frac{1}{2}\sum_{j,k=1}^\infty
\left(\tilde{n}_{j\uparrow}+\tilde{n}_{j\downarrow}\right)\left(\tilde{n}_{k\uparrow}+\tilde{n}_{k\downarrow}\right) J_{jk} \nonumber\\
&-&\frac{1}{2}\sum_{j,k=1}^\infty
\left(\sqrt{\tilde{n}_{j\uparrow}\tilde{n}_{k\uparrow}}+\sqrt{\tilde{n}_{j\downarrow}\tilde{n}_{k\downarrow}}\right) K_{jk}
\end{eqnarray}
which, using Eq.\ (\ref{eq:occtrips=0}) is identical to the energy of the $\langle\hat{S}_z\rangle=1$ state, Eq.\ (\ref{eq:energytrips=1}).
However, we still need to show that this energy is also an extremum. For the derivative with respect to the occupation numbers we obtain
\begin{eqnarray}
\frac{\partial E}{\partial \tilde{n}_{j\sigma}}&=&\int d^3r \varphi_j^*(\v r)\left(-\frac{\nabla^2}{2}+v_\mathrm{ext}(\v r)\right)\varphi_j(\v r)\nonumber \\
&+&\sum_{\sigma'}\sum_{k=1}^\infty \tilde{n}_{k\sigma'} J_{jk}-\sum_{k=1}^\infty \frac{\sqrt{\tilde{n}_{k\sigma}}}{2\sqrt{\tilde{n}_{j\sigma}}} K_{jk}.
\end{eqnarray}
Again making the spin sum explicit in the second term and using $\tilde{n}_{k\sigma}/\tilde{n}_{j\sigma}=n_{k\uparrow}/n_{j\uparrow}$ we find that the derivative is the same as in Eq.\ (\ref{eq:derivs=1}). Hence, if the occupation numbers $n_{j\uparrow}$ minimize the total energy for the triplet ${S}_z=1$ state then the occupation numbers $\tilde{n}_{j\sigma}$ defined in Eq.\ (\ref{eq:occtrips=0}) form the minimum for the triplet ${S}_z=0$ state.
We note that this derivation crucially depends on the square root dependence in the exchange term. For a general power $\alpha$, i.e., $(n_{j\sigma}n_{k\sigma})^\alpha$ in the exchange energy, one finds factors of $2/2^{2\alpha}$ and $1/2^{2\alpha-1}$ in the exchange energy and its derivative, respectively, when comparing the terms for $n_{j\uparrow}$ and $\tilde{n}_{j\sigma}$. In other words, the terms are only the same for $\alpha=1/2$.
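The special role of the square root can be illustrated numerically on a toy model, with random occupation numbers and a random symmetric matrix standing in for the exchange integrals $K_{jk}$ (all names and data below are illustrative assumptions, not actual Coulomb integrals): the exchange energies of the two occupation patterns agree only at $\alpha=1/2$.

```python
import random

random.seed(0)
m = 4                                                   # toy number of orbitals
n_up = [random.random() for _ in range(m)]              # occupations of the S_z = 1 state
K = [[random.random() for _ in range(m)] for _ in range(m)]
K = [[(K[j][k] + K[k][j]) / 2 for k in range(m)] for j in range(m)]  # symmetrize K_jk

def exchange_energy(occ_by_spin, alpha):
    # E_x = -(1/2) * sum_sigma sum_{j,k} (n_{j sigma} n_{k sigma})^alpha K_{jk}
    return -0.5 * sum((nj * nk) ** alpha * K[j][k]
                      for occ in occ_by_spin
                      for j, nj in enumerate(occ)
                      for k, nk in enumerate(occ))

half = [n / 2 for n in n_up]
for alpha in (0.4, 0.5, 0.6):
    e_sz1 = exchange_energy([n_up], alpha)        # S_z = 1: only the up channel occupied
    e_sz0 = exchange_energy([half, half], alpha)  # S_z = 0: n_j/2 in each spin channel
    print(alpha, abs(e_sz1 - e_sz0) < 1e-12)      # True only for alpha = 1/2
```

The two energies differ by the factor $2^{1-2\alpha}$ discussed above, which equals one only for $\alpha=1/2$.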
We also emphasize that the degeneracy holds only if the two sets of orbitals are identical. Since the two states (\ref{eq:trips=1}) and (\ref{eq:trips=0}) are connected by $\hat{S}_\pm$, which only acts on the spin degrees of freedom, this is satisfied. However, if one determines the orbitals from an energy minimization it can happen that one finds different minima for the orbitals in the two cases resulting in a broken degeneracy. In the systems that we have tested, we found that the degeneracy is satisfied with an accuracy of 6 decimal digits which corresponds to the convergence of the overall calculation.
As we have seen, for the M\"uller functional the sums are the same because all the terms in the sums are identical. For the other functionals considered here, there might be specific sets of occupation numbers for which the triplet states are degenerate, i.e., for which the sums are equal; however, this degeneracy does not hold in general.
\end{appendix}
\bibliographystyle{apsrev}
\section{Introduction}
Let $G$ be a simple connected graph with $n = |V|$ vertices.
For a vertex $v \in V (G)$\,, $deg (v)$ denotes the degree of $v$\,.
For vertices $v, u \in V$\,, the distance $d (v, u)$ is defined as
the length of a shortest path between $v$ and $u$ in $G$\,.
The eccentricity $\varepsilon (v)$ of a vertex $v$ is the maximum
distance from $v$ to any other vertex. \vspace{0.2cm}
Sharma, Goswami and Madan \cite{ShGoMa97} introduced a distance--based molecular
structure descriptor, which they named ``{\it eccentric connectivity index\/}''
and which they defined as
$$
\xi^c = \xi^c (G) = \sum_{v \in V (G)} deg (v) \cdot \varepsilon (v) \ .
$$
The index $\xi^c$ was successfully used for mathematical modeling of
biological activities of diverse nature \cite{DuGuMa08,GuSiMa02,KuSaMa04,SaMa00,SaMa03}.
Some mathematical properties of $\xi^c$ were recently reported in \cite{ZhDu09}.
Chemical trees (trees with maximum vertex degree at most four) provide the graph
representation of alkanes \cite{GuPo86}. It is therefore a natural problem to
study trees with bounded maximum degree.
Denote by $\Delta = \Delta(T)$ the maximum vertex degree of a tree $T$\,. The
path $P_n$ is the unique $n$-vertex tree with $\Delta = 2$\,, while the star $S_n$ is
the unique tree with $\Delta = n-1$\,. Therefore, we can assume that
$3 \leq \Delta \leq n - 2$\,.
For an arbitrary tree $T$ on $n$ vertices \cite{ZhDu09},
$$
\left \lfloor \frac{3 (n - 1)^2 + 1}{2} \right \rfloor = \xi^c (P_n)
\geq \xi^c (T) \geq \xi^c (S_n) = 3 (n - 1) \ .
$$
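Both boundary values are easy to confirm computationally. The following brute-force sketch (the helper names `xi_c`, `path`, and `star` are ours, not taken from the cited works) computes $\xi^c$ by one breadth-first search per vertex and checks the two closed forms:

```python
from collections import deque

def xi_c(adj):
    """Brute-force eccentric connectivity index: sum of deg(v) * ecc(v),
    with eccentricities obtained from one BFS per vertex, O(n^2) overall."""
    total = 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        total += len(adj[s]) * max(dist.values())
    return total

def path(n):
    return {v: [w for w in (v - 1, v + 1) if 0 <= w < n] for v in range(n)}

def star(n):
    return {0: list(range(1, n)), **{v: [0] for v in range(1, n)}}

for n in range(3, 12):
    assert xi_c(path(n)) == (3 * (n - 1) ** 2 + 1) // 2   # upper bound, attained by P_n
    assert xi_c(star(n)) == 3 * (n - 1)                   # lower bound, attained by S_n
print("bounds for P_n and S_n verified for n = 3..11")
```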
\vspace{5mm}
\section{Chemical trees with maximum eccentric connectivity index}
The broom $B_{n, \Delta}$ is a tree consisting of a star $S_{\Delta
+ 1}$ and a path of length $n - \Delta - 1$ attached to a
pendent vertex of the star. It is proven
in \cite{LiGu07} that among trees with maximum vertex degree equal to
$\Delta$\,, the broom $B_{n, \Delta}$ uniquely minimizes the largest eigenvalue of the
adjacency matrix. Further, within the same class of trees, the broom has minimum
Wiener index and Laplacian-energy like invariant \cite{St09}. In \cite{YaYe05} and
\cite{YuLv06} it was demonstrated that the broom has minimum energy among trees with,
respectively, fixed diameter and fixed number of pendent vertices.
\vspace{0.2cm}
\begin{figure}[ht]
\center
\includegraphics [width = 9cm]{broom.eps}
\caption { \textit{ The broom $B_{11, 6}$\,. } }
\end{figure}
The $\Delta$-starlike tree $T(n_1,n_2,\ldots,n_\Delta)$ is a
tree composed of the root $v$\,, and the paths $P_{n_1}$\,,
$P_{n_2}$\,, \ldots, $P_{n_\Delta}$\,, attached to $v$\,.
The number of vertices of $T(n_1,n_2,\ldots,n_{\Delta})$ is
thus equal to $n_1 + n_2 + \cdots + n_{\Delta} +
1$\,. Notice that the broom $B_{n, \Delta}$ is
a $\Delta$-starlike tree, $B_{n, \Delta} \cong T(n-\Delta,1,1,\ldots,1)$\,.
\begin{thm}
\label{thm-pi} Let $w$ be a vertex of a nontrivial connected graph
$G$\,. For nonnegative integers $p$ and $q$\,, let $G (p, q)$ denote
the graph obtained from $G$ by attaching to the vertex $w$ pendent paths
$P = w v_1 v_2 \ldots v_p$ and $Q = w u_1 u_2 \dots u_q$ of lengths $p$
and~$q$\,, respectively. If $p \geq q \geq 1$\,, then
$$
\label{eq-pi} \xi^c (G (p, q)) < \xi^c (G (p + 1, q - 1)) \ .
$$
\end{thm}
\begin{proof}
The degrees of vertices $u_{q - 1}$ and $v_p$ are changed, while all other vertices have the same
degree in $G (p + 1, q - 1)$ as in $G (p, q)$\,. Since this transformation extends
the longer path, the eccentricities of the vertices of $G$ either remain
the same or increase by one. We will consider three cases based on the longest path
from the vertex $w$ in the graph $G$\,. Denote by $deg' (v)$ and $\varepsilon'(v)$
the degree and eccentricity of vertex $v$ in $G(p+1,q-1)$\,.
\noindent
{\bf Case 1. } The length of the longest path from the vertex $w$ in $G$ is greater than $p$\,.
This means that the pendent vertex of $G$ most distant from $w$ is the most
distant vertex from all vertices of $P$ and $Q$\,. It follows that
$\varepsilon_{G (p + 1, q - 1)} (v) = \varepsilon_{G (p, q)} (v)$ for all vertices
$w, v_1, v_2, \ldots, v_p, u_1, u_2, \ldots, u_{q - 1}$\,, while the eccentricity
of $u_q$ increased by $p + 1 - q$\,.
\begin{eqnarray*}
\xi^c (G (p + 1, q - 1)) - \xi^c (G (p, q)) &\geq&
\left[ deg' (u_{q - 1})\,\varepsilon' (u_{q - 1})
+ deg' (u_{q})\,\varepsilon'\,(u_{q}) + deg' (v_{p})\,\varepsilon' (v_{p}) \right] \\
&-& \left[ deg (u_{q - 1})\,\varepsilon (u_{q - 1}) + deg (u_{q})\,\varepsilon (u_{q}) +
deg (v_{p})\,\varepsilon (v_{p}) \right] \\
&=& - \varepsilon (u_{q - 1}) + (p - q + 1) + \varepsilon (v_p) > 0 \ .
\end{eqnarray*}
\noindent {\bf Case 2. } The length of the longest path from the vertex $w$ in $G$ is less than or
equal to $p$ and greater than $q$\,. This means that either the vertex of $G$ that is most distant
from $w$ or the vertex $v_p$ is the most distant vertex for all vertices of $P$\,, while for
vertices $w, u_1, u_2, \ldots, u_q$ the most distant vertex is $v_p$\,. It follows that
$\varepsilon_{G (p + 1, q - 1)} (v) = \varepsilon_{G (p, q)} (v)$ for vertices $v_1, v_2, \ldots,
v_p$\,, while $\varepsilon_{G (p + 1, q - 1)} (v) = \varepsilon_{G (p, q)} (v) + 1$ for vertices
$w, u_1, u_2, \ldots, u_{q - 1}$\,. The eccentricity of $u_q$ increased by at least $1$\,.
\begin{eqnarray*}
\xi^c (G (p + 1, q - 1)) - \xi^c (G (p, q)) &\geq& deg' (w)\,\varepsilon' (w) +
deg' (v_{p})\,\varepsilon' (v_{p}) + \sum_{j = 1}^q deg' (u_{j})\,\varepsilon' (u_{j})\\
&-& deg (w)\,\varepsilon (w) - deg (v_{p}) \,\varepsilon (v_{p}) -
\sum_{j = 1}^q deg (u_{j})\,\varepsilon (u_{j})\\
&\geq& q + \left[ \varepsilon (u_{q - 1}) + 1 \right]
\left[ deg (u_{q - 1}) - 1 \right] -
\varepsilon (u_{q - 1})\,deg (u_{q - 1}) + \varepsilon (v_p)\\
&>& \varepsilon (v_p) - \varepsilon (u_{q - 1}) > 0 \ .
\end{eqnarray*}
\noindent
{\bf Case 3. } The length of the longest path from the vertex $w$ in $G$ is
less than or equal to $q$\,. This means that the pendent vertex most distant from
the vertices of $P$ and $Q$ is either $v_p$ or $u_q$\,, depending on the position.
Using the formula for eccentric connectivity index of a path, we have
\begin{eqnarray*}
\xi^c (G (p + 1, q - 1)) - \xi^c (G (p, q)) &>& \xi^c (P_{p + q + 1}) +
[deg (w) - 2]\,\varepsilon' (w) \\
&-& \xi^c (P_{p + q + 1}) - [deg (w) - 2]\,\varepsilon (w) \\
&=& deg (w) - 2 \geq 0 \ .
\end{eqnarray*}
Since $G$ is a nontrivial connected graph, the vertex $w$ has at least one neighbor in $G$\,, so $deg (w) \geq 3$ and the inequality is strict.
This completes the proof.
\end{proof}
\begin{thm}
\label{thm-broom} Let $T \not \cong B_{n, \Delta}$ be an arbitrary
tree on $n$ vertices with maximum vertex degree $\Delta$\,. Then
$$
\xi^c (B_{n, \Delta}) > \xi^c (T) \ .
$$
\end{thm}
\begin{proof}
Fix a vertex $v$ of degree $\Delta$ as a root and let
$T_1, T_2, \ldots, T_{\Delta}$ be the trees
attached at~$v$\,. We can repeatedly apply the transformation described
in Theorem \ref{thm-pi} to any vertex of degree at least three with
greatest eccentricity from the root in every tree~$T_i$\,, as long as
$T_i$ does not become a path. When all trees $T_{1} ,T_{2},\dots, T_{\Delta}$
turn into paths, we can again apply the transformation from Theorem~\ref{thm-pi}
at the vertex~$v$ as long as there exist at least two paths of length greater
than one, at each step increasing the eccentric connectivity index. Finally, we arrive
at the broom $B_{n, \Delta}$ as the unique tree with maximum eccentric
connectivity index.
\end{proof}
By direct verification, it holds that
$$
\xi^c (B_{n, \Delta}) = \left \lfloor \frac{3n^2 - 2\Delta n - 2n -
\Delta^2 +4\Delta}{2} \right \rfloor .
$$
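This closed formula can be double-checked against a brute-force computation of $\xi^c$ over explicitly constructed brooms (the helper names below are ours, used only for illustration):

```python
from collections import deque

def broom(n, delta):
    """Adjacency list of B_{n,delta} = T(n-delta,1,...,1): vertex 0 has degree
    delta, vertices 1..delta-1 are its pendent neighbors, and vertices
    delta..n-1 form the attached path of length n-delta."""
    adj = {v: [] for v in range(n)}
    def edge(u, v):
        adj[u].append(v)
        adj[v].append(u)
    for leaf in range(1, delta):
        edge(0, leaf)
    prev = 0
    for v in range(delta, n):
        edge(prev, v)
        prev = v
    return adj

def xi_c(adj):
    """Brute-force sum of deg(v) * ecc(v), one BFS per vertex."""
    total = 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        total += len(adj[s]) * max(dist.values())
    return total

for n in range(6, 15):
    for delta in range(3, n - 1):
        closed = (3 * n * n - 2 * delta * n - 2 * n - delta * delta + 4 * delta) // 2
        assert xi_c(broom(n, delta)) == closed
print("broom formula verified for n = 6..14")
```

For instance, for the broom $B_{11,6}$ of the figure the brute-force value and the closed formula both give $98$.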
From the above proof, we also get that
$B'_{n,\Delta} = T(n-\Delta-1,2,1,\ldots,1)$ has the second maximal
$\xi^c$ among trees with maximum vertex degree $\Delta$\,.
It was proven in \cite{ZhDu09} that the path $P_n$ has maximum and the
star $S_n$ minimum $\xi^c$-value among connected graphs on $n$ vertices.
From Theorem~\ref{thm-broom} we know that the maximum eccentric connectivity
index among trees on $n$~vertices is achieved for one of the brooms
$B_{n,\Delta}$\,. If $\Delta>2$\,, we
can apply the transformation from Theorem~\ref{thm-pi} at the
vertex of degree~$\Delta$ in $B_{n, \Delta}$ and obtain
$B_{n, \Delta-1}$\,. Thus, it follows
$$
\xi^c (S_{n}) = \xi^c (B_{n,n-1}) < \xi^c (B_{n,n-2}) < \cdots < \xi^c (B_{n,3})<
\xi^c (B_{n,2}) = \xi^c (P_{n}) \ .
$$
Also, it follows that $B_{n, 3}$ has the second maximum eccentric connectivity
index among trees on $n$ vertices.
\section{The minimum eccentric connectivity index of trees with fixed \\radius}
Vertices of minimum eccentricity form the center. A tree has exactly one
or two adjacent center vertices; in this latter case one speaks of a
bicenter. In what follows, if a tree has a bicenter, then our considerations
apply to any of its center vertices.
For a tree $T$ with radius $r(T)$\,,
$$
\label{tree_center} d (T) = \left\{
\begin{array}{l l}
2\,r(T) - 1 & \quad \mbox{if $T$ has a bicenter }\\[3mm]
2\,r (T) & \quad \mbox{if $T$ has a center. }\\
\end{array} \right.
$$
Let $T_{(n, d)}$ be the set of $n$-vertex trees obtained from the
path $P_{d + 1} = v_0 v_1 \ldots v_d$ by attaching
$n - d - 1$ pendent vertices to $v_{\lfloor d/2 \rfloor}$ and/or
$v_{\lceil d/2 \rceil}$\,, where $2 \leq d \leq n - 2$\,.
Zhou and Du in \cite{ZhDu09} proved that for arbitrary tree $T$ on $n$
vertices and diameter $d$\,,
$$
\xi^c (T) \geq \xi^c (T^*)\ , \quad T^* \in T_{(n, d)}
$$
with equality if and only if $T \in T_{(n, d)}$\,.
Using the transformation from Theorem \ref{thm-pi} and applying it to a
center vertex, it follows that $\xi^c (T') < \xi^c (T'')$ for $T' \in T_{(n, 2r-1)}$
and $T'' \in T_{(n, 2r)}$\,.
\begin{cor}
Let $T$ be an arbitrary tree on $n$ vertices with radius $r$\,. Then
$$
\xi^c (T) \geq 6r (r - 1) + 2 + (n - 2r)(2r + 1)
$$
with equality if and only if $T \in T_{(n, 2r-1)}$\,.
\end{cor}
\vspace{5mm}
\section{The maximum eccentric connectivity index of trees with \\ perfect matchings}
A graph possessing perfect matchings must have an even number of vertices.
Therefore throughout this section we assume that $n$ is even.
It is well known that if a tree $T$ has a perfect matching,
then this perfect matching $M$ is unique:
namely, a pendent vertex $v$ has to be matched with its unique neighbor $w$\,,
and then $M-\{vw\}$ forms the perfect matching of $T-v-w$\,.
Let $A_{n, \Delta}$ be the $\Delta$-starlike tree $T(n-2\,\Delta+2,2,2,\ldots,2,1)$
consisting of a central vertex $v$\,,
a pendent vertex, a pendent path on $n-2\,\Delta+2$ vertices, and
$\Delta - 2$ pendent paths on two vertices each, all attached to $v$\,.
\begin{thm}
The tree $A_{n,\Delta}$ has maximum eccentric connectivity index among trees
with perfect matching and maximum vertex degree $\Delta$\,.
\end{thm}
\begin{proof}
Let $T$ be an arbitrary tree with perfect matching and let
$v$ be a vertex of degree $\Delta$\,,
with neighbors $v_1, v_2, \ldots, v_{\Delta}$\,.
Let $T_1, T_2, \ldots, T_{\Delta}$ be the maximal subtrees
rooted at $v_1, v_2, \ldots, v_{\Delta}$\,, respectively.
Then at most one of the numbers $|T_1|, |T_2|, \ldots, |T_{\Delta}|$
can be odd (if $T_i$ and $T_j$ have odd number of vertices, then their
roots $v_i$ and $v_j$ will be unmatched). Since the number of vertices
of $T$ is even, there exists exactly one among
$T_1, T_2, \ldots,T_{\Delta}$ with odd number of vertices.
Using Theorem \ref{thm-pi}, we may transform each $T_i$ into a path
attached to $v$ -- while simultaneously increasing $\xi^c$ and keeping the
existence of a perfect matching.
Assume that $T_{\Delta}$ has odd number of vertices,
while the remaining trees have even number of vertices.
We apply a transformation similar to the one in Theorem \ref{thm-pi},
but instead of moving one vertex, we move two vertices
in order to keep the existence of a perfect matching.
Thus, if $p \geq q \geq 2$ then
$$
\xi^c (G (p, q)) < \xi^c (G (p + 2, q - 2)) \ .
$$
Using this transformation we may reduce $T_{\Delta}$ to one vertex,
the trees $T_2, \ldots, T_{\Delta - 1}$ to two vertices,
leaving $T_1$ with $n - 2\Delta + 2$ vertices, and thus obtaining $A_{n,\Delta}$\,.
Since at each step we strictly increased $\xi^c$\,,
we conclude that $A_{n, \Delta}$ has maximum eccentric connectivity index
among the trees with perfect matching and maximum vertex degree $\Delta$\,.
\end{proof}
The path $P_n \cong A_{n,2}$ has maximum, while $A_{n,n/2}$ has minimum
eccentric connectivity index among trees with perfect matchings.
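For small $n$ such extremal claims can be verified exhaustively, e.g., by enumerating all labeled trees through Pr\"ufer sequences, filtering those with a perfect matching, and comparing $\xi^c$ values. In the sketch below (all helper names are ours) the only trees on six vertices with a perfect matching turn out to be $P_6$, with $\xi^c = 38$\,, and the spider $T(2,2,1)$\,, with $\xi^c = 29$\,.

```python
from collections import deque
from itertools import product
import heapq

def prufer_to_edges(seq, n):
    """Decode a Pruefer sequence into the edge list of a labeled tree on 0..n-1."""
    deg = [1] * n
    for s in seq:
        deg[s] += 1
    leaves = [v for v in range(n) if deg[v] == 1]
    heapq.heapify(leaves)
    edges = []
    for s in seq:
        edges.append((heapq.heappop(leaves), s))
        deg[s] -= 1
        if deg[s] == 1:
            heapq.heappush(leaves, s)
    edges.append((heapq.heappop(leaves), heapq.heappop(leaves)))
    return edges

def has_perfect_matching(edges, n):
    """In a tree the perfect matching is forced: repeatedly match a leaf to its neighbor."""
    nbr = {v: set() for v in range(n)}
    for u, v in edges:
        nbr[u].add(v)
        nbr[v].add(u)
    alive = set(range(n))
    while alive:
        leaf = next((v for v in alive if len(nbr[v]) == 1), None)
        if leaf is None:
            return False                 # an unmatched isolated vertex remains
        mate = nbr[leaf].pop()
        for gone in (leaf, mate):
            alive.discard(gone)
            for w in nbr[gone]:
                nbr[w].discard(gone)
            nbr[gone] = set()
    return True

def xi_c(edges, n):
    """Brute-force sum of deg(v) * ecc(v), one BFS per vertex."""
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    total = 0
    for s in range(n):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        total += len(adj[s]) * max(dist.values())
    return total

n = 6
vals = []
for seq in product(range(n), repeat=n - 2):
    edges = prufer_to_edges(seq, n)
    if has_perfect_matching(edges, n):
        vals.append(xi_c(edges, n))
print(max(vals), min(vals))   # 38 = xi^c(P_6), 29 = xi^c(T(2,2,1))
```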
\vspace{5mm}
\section{The minimum eccentric connectivity index of trees \\
with fixed number of pendent vertices}
In \cite{ZhDu09} the authors determined the $n$-vertex trees with $p$ pendent vertices,
$2 \leq p \leq n - 1$\,, having the maximum eccentric connectivity index, and, consequently, the extremal trees
with the maximum, second-maximum and third-maximum eccentric connectivity index for $n \geq 6$\,.
For completeness, here we determine the $n$-vertex trees with $2 \leq p \leq n - 1$ pendent vertices
that have minimum eccentric connectivity index.
\newpage
\begin{de}
Let $v$ be a vertex of a tree $T$ of degree $m + 1$\,. Suppose that $P_1, P_2, \ldots, P_m$ are
pendent paths incident with $v$\,, with lengths $1 \leq n_1 \leq n_2 \leq \ldots \leq n_m$\,,
and let $v_1, v_2, \ldots, v_m$ be their starting vertices, respectively. Let
$w$ be the neighbor of $v$ distinct from $v_1, v_2, \ldots, v_m$\,.
We form a tree $T'$ by removing the edges $v v_1, v v_2, \ldots, v
v_{m - 1}$ from $T$ and adding $m - 1$ new edges $w v_1, w v_2, \ldots, w v_{m - 1}$ incident with
$w$\,. We say that $T'$ is a $\delta$-transform of $T$ and write $T' = \delta (T, v)$\,.
\end{de}
\begin{thm}
\label{thm-delta} Let $T' = \delta (T, v)$ be a $\delta$-transform of a tree $T$ of order $n$\,.
Let $v$ be a non-central vertex, furthest from the root among all branching vertices (with degree
greater than~$2$). Then
$$
\xi^c (T) > \xi^c (T')\,.
$$
\end{thm}
\begin{proof}
The degrees of vertices $v$ and $w$ have changed -- namely, $deg (v) - deg' (v) = deg' (w) - deg
(w) = m - 1$\,. Since the furthest vertex from $v$ does not belong to $P_1, P_2, \ldots, P_m$ and
$n_m \geq n_i$ for $i = 1, 2, \ldots, m - 1$\,, it follows that the eccentricities of all vertices
not belonging to $P_1, P_2, \ldots, P_{m - 1}, P_m$ do not change after the $\delta$ transformation. The
eccentricities of vertices from $P_m$ also remain the same, while the eccentricities of vertices
from $P_1, P_2, \ldots, P_{m - 1}$ decrease by one. Using the equality $\varepsilon (v) =
\varepsilon (w) + 1$\,, it follows that
\begin{eqnarray*}
\xi^c (T) - \xi^c (T') &=& \sum_{i = 1}^{m - 1} (1 + 2(n_i - 1)) + (m - 1) \cdot \varepsilon (v) - (m - 1) \cdot \varepsilon (w) \\
&=& 2 \left ( n_1 + n_2 + \ldots + n_{m - 1} \right) - (m - 1) + (m - 1)(\varepsilon (v) - \varepsilon (w)) \\
&=& 2 \left ( n_1 + n_2 + \ldots + n_{m - 1} \right) > 0\,.
\end{eqnarray*}
This completes the proof.
\end{proof}
The $p$-starlike tree $SB_{n, p} = T(n_1, n_2, \ldots, n_p)$ is {\it balanced\/} if all paths have
almost equal lengths, i.e., $|n_i - n_j| \leq 1$ for every $1 \leq i \leq j \leq p$\,.
\begin{thm}
The balanced $p$-starlike tree $SB_{n, p}$ has minimum eccentric connectivity index among trees
with $p$ pendent vertices, $2 < p < n - 1$\,.
\end{thm}
\begin{proof}
Let $T$ be a rooted $n$-vertex tree with $p$ pendent vertices. If
$T$ contains only one vertex of degree greater than two, we can
apply Theorem \ref{thm-pi} in order to arrive at the balanced starlike
tree $SB_{n, p}$\,, without changing the
number of pendent vertices. If $T$ has several vertices of degree
greater than $2$\,, such that there are only pendent paths attached
below them, then we take the one most distant from the center vertex of $T$\,.
By repeated application of the $\delta$ transformation and by
balancing pendent paths, the eccentric connectivity index decreases.
Assume that we arrived at a tree with two centers $C = \{v, w\}$ with
only pendent paths attached at both centers.
If all pendent paths have equal lengths, then $n = k p + 2$\,. Since we
can reattach $p - 2$ pendent paths at any central vertex
without changing $\xi^c (T)$\,, it follows that there are exactly
$\lfloor p/2 \rfloor$ extremal trees with minimum
eccentric connectivity index in this special case.
Now, let $R$ be the path with length $r = r (T) - 1$ attached to
$v$ and let $Q$ be the shortest path of length $q$ attached to
$w$\,. After applying the $\delta$ transformation at vertex $v$\,, the
eccentric connectivity index remains the same. If we apply the transformation
from Theorem \ref{thm-pi} to two pendent paths of lengths $r + 1$ and $q$ attached at $w$\,,
we will strictly decrease the eccentric connectivity index. Finally, we conclude
that $SB_{n, p}$ is the unique extremal tree that minimizes $\xi^c$ among
$n$-vertex trees with $p$ pendent vertices for $n \not \equiv 2 \pmod p$\,.
\end{proof}
\section{Chemical trees with minimal eccentric connectivity index}
\begin{thm}
\label{thm-rot}
Let $T$ be a rooted tree, with a center vertex $c$ as root. Let $u$ be the vertex
closest to the root vertex, such that $deg (u) < \Delta$\,. Let $w$ be the pendent
vertex most distant from the root, adjacent to vertex $v$\,, such that
$\varepsilon (v) > \varepsilon (u)$\,. Construct a tree $T'$ by deleting the edge $vw$ and
inserting the new edge $uw$\,. Then
$$
\xi^c (T) > \xi^c (T') \ .
$$
\end{thm}
\begin{proof}
In the transformation $T \to T'$ the degrees of vertices other than $u$
and $v$ remain the same, while $deg' (u) = deg (u) + 1$ and $deg' (v) = deg (v) - 1$\,.
Since the tree is rooted at the center vertex, the radius of $T$ is equal
to $r (T) = d (c, w)$\,. Furthermore, there exists a vertex $w'$ in a different
subtree attached to the center vertex, such that
$d (c, w') = r (T)$ or $d (c, w') = r (T) - 1$\,.
From the condition $\varepsilon (v) > \varepsilon (u)$\,, it follows that
$d (c, w') > d (c, u)$ and $w' \neq u$\,.
By rotating the edge $vw$ to $uw$\,, the eccentricities of vertices other than $w$ decrease if and
only if $w$ is the only vertex at distance $r (T)$ from the center vertex. Otherwise, the
eccentricities remain the same. In both cases, we have
\begin{eqnarray*}
\xi^c (T) - \xi^c (T') &\geq& deg (v)\,\varepsilon (v) + deg (w)\,\varepsilon (w)
+ deg (u)\,\varepsilon (u)\\
&-& \left[ deg' (v)\,\varepsilon' (v) + deg' (w)\,\varepsilon' (w) +
deg' (u)\,\varepsilon' (u) \right] \\
&\geq& \varepsilon (v) + (\varepsilon (v) - \varepsilon (u)) - \varepsilon (u)
= 2 (\varepsilon (v) - \varepsilon (u)) > 0 \ .
\end{eqnarray*}
This completes the proof.
\end{proof}
The Volkmann tree $VT (n, \Delta)$ is a tree on $n$ vertices and
maximum vertex degree $\Delta$\,, defined as follows \cite{770,FiHo02}.
Start with the root having $\Delta$ children. Every vertex different
from the root, which is not in one of the last two levels, has exactly
$\Delta -1$ children. In the last level, while not all vertices need
to exist, the vertices that do exist fill the level consecutively.
Thus, at most one vertex on the level second to last has its degree
different from $\Delta$ and $1$\,. For more details on Volkmann trees
see \cite{770,FiHo02,GuFMG07}. In \cite{770,FiHo02} it was shown that
among trees with fixed $n$ and $\Delta$\,, the Volkmann tree has
minimum Wiener index. Volkmann trees have also other extremal
properties among trees with fixed $n$ and $\Delta$
\cite{GuFMG07,790,SiTo05,YuLu08}.
\begin{figure}[ht]
\center
\includegraphics [width = 6cm]{volkmann.eps}
\caption { \textit{ The Volkmann tree $VT (21, 4)$\,. } }
\end{figure}
\begin{thm}
\label{thm-volkman} Let $T$ be an arbitrary
tree on $n$ vertices with maximum vertex degree~$\Delta$\,. Then
$$
\xi^c (T) \geq \xi^c (VT (n, \Delta))\ .
$$
\end{thm}
\begin{proof}
Among $n$-vertex trees with maximum degree $\Delta$\,, let $T^*$ be the extremal tree with minimum
eccentric connectivity index. Assume that $u$ is a vertex closest to the root vertex $c$\,, with
$deg (u) < \Delta$ and let $w$ be the pendent vertex most distant from the root, adjacent to vertex
$v$\,. Also, let $k$ be the greatest integer, such that
$$
n \geq 1 + \Delta + \Delta (\Delta - 1) + \Delta (\Delta - 1)^2 +
\cdots + \Delta (\Delta - 1)^{k - 1} \ .
$$
First, we will show that the radius of $T^*$ has to be less than or equal to $k + 1$\,.
Assume that $r (T^*) = d (c, w) > k + 1$\,. Since the distance from the
center vertex to $u$ is less than or equal to $k$\,, it follows that
$$
\varepsilon (v) \geq 2 r (T^*) - 2 \geq k + r (T^*) \geq \varepsilon (u) \ .
$$
If strict inequality holds, then we can apply Theorem \ref{thm-rot} and decrease the eccentric
connectivity index -- which contradicts the assumption that $T^*$ is the tree with minimum
$\xi^c$\,. Therefore, $\varepsilon (v) = \varepsilon (u)$ and after performing the transformation
from Theorem \ref{thm-rot}, the eccentric connectivity index does not change. According to the
definition of the number $k$\,, after finitely many transformations, the vertex~$w$ will be the
only vertex at distance $r(T)$ from the center vertex and we will strictly decrease $\xi^c
(T^*)$\,. Also, this means that for the case $n = 1 + \Delta + \Delta (\Delta - 1) + \Delta (\Delta
- 1)^2 + \cdots + \Delta (\Delta - 1)^{k - 1}$\,, the Volkmann tree is the unique tree with minimum
eccentric connectivity index.
Now, we can assume that the radius of $T^*$ is equal to $k + 1$\,.
If the distance $d (c, u)$ is less than $k - 1$\,, it follows again
that $\varepsilon (v) > \varepsilon (u)$\,, which is
impossible. Therefore, the levels $1, 2, \ldots, k - 1$ are full
(level $i$ contains exactly $\Delta (\Delta - 1)^{i - 1}$ vertices),
while the $k$-th and $(k + 1)$-th levels contain
$$
L = n - \left[ 1 + \Delta + \Delta (\Delta - 1) + \Delta (\Delta - 1)^2 +
\cdots + \Delta (\Delta - 1)^{k - 1} \right]
$$
vertices.
Assume that $T^*$ has only one center vertex -- then $d (c, w) = k + 1$ and
$\varepsilon (v) = 2 r (T^*) - 1$\,.
If $d (c, u) = k - 1$\,, we can apply the transformation from Theorem \ref{thm-rot}
and strictly decrease $\xi^c$\,. Thus,
for $L > (\Delta - 1)^k$\,, the $k$-th level is also full and the pendent vertices in
the $(k + 1)$-th level can be
arbitrarily assigned. Using the same argument, for $L \leq (\Delta - 1)^k$\,, the extremal
trees are bicentral. By completing the $k$-th level,
we do not change the eccentric connectivity index -- since $\varepsilon (v) = \varepsilon (u)$\,.
Finally, $\xi^c (T^*) = \xi^c (VT (n, \Delta))$ and the result follows.
\end{proof}
In Table 1 we give the minimum value of the eccentric connectivity index
among $n$-vertex trees with maximum vertex degree $\Delta$\,, together
with the number of such extremal trees (of which one is the Volkmann tree).
Note that for $n \leq 2 \Delta$ the number of extremal trees is $1$\,, and that for
$\Delta > 2$ it holds that $\xi^c (VT (n, \Delta - 1)) \geq \xi^c (VT (n, \Delta))$\,.
\vspace{5mm}
\section{A linear algorithm for calculating the eccentric connectivity index of a tree}
Let $T$ be a rooted tree, with a center vertex as root.
Let $c_1, c_2, \ldots, c_k$ be the
neighbors of the center vertex $c$\,, and $T_1, T_2, \ldots, T_k$ be the
corresponding rooted subtrees. Let $r_i$ be the length of the longest path
from $c_i$ in the subtree $T_i$\,, $i = 1, 2, \ldots, k$\,.
\begin{lemma}
\label{le-ecc}
The eccentricity of the vertex $v \in V (T_i)$ equals
$$
\varepsilon (v) = d (v, c) + 1 + \max_{k \neq i} r_k \ .
$$
\end{lemma}
\vspace{5mm}
\begin{tabular} {||c|ccccccccc||} \hline
$n$ & $\Delta=2$ & $\Delta=3$ & $\Delta=4$ & $\Delta=5$ & $\Delta=6$ & $\Delta=7$ & $\Delta=8$
& $\Delta=9$ & $\Delta=10$ \\[2mm] \hline
11 & 150\,;\,1 & 79\,;\,3 & 62\,;\,5 & 60\,;\,6 & 49\,;\,1 & 49\,;\,1 & 49\,;\,1 & 49\,;\,1 & 30\,;\,1 \\[1mm]
12 & 182\,;\,1 & 88\,;\,3 & 69\,;\,4 & 67\,;\,8 & 54\,;\,1 & 54\,;\,1 & 54\,;\,1 & 54\,;\,1 & 54\,;\,1 \\[1mm]
13 & 216\,;\,1 & 97\,;\,1 & 76\,;\,4 & 74\,;\,9 & 72\,;\,10 &59\,;\,1 & 59\,;\,1 & 59\,;\,1 & 59\,;\,1 \\[1mm]
14 & 254\,;\,1 & 106\,;\,1 & 83\,;\,3 & 81\,;\,11 & 79\,;\,12 & 64\,;\,1 & 64\,;\,1 & 64\,;\,1 & 64\,;\,1 \\[1mm]
15 & 294\,;\,1 & 130\,;\,7 & 90\,;\,2 & 88\,;\,11 & 86\,;\,16 & 84\,;\,14 & 69\,;\,1 & 69\,;\,1 & 69\,;\,1 \\[1mm]
16 & 338\,;\,1 & 141\,;\,10 & 97\,;\,1 & 95\,;\,12 & 93\,;\,19 & 91\,;\,19 & 74\,;\,1 & 74\,;\,1 & 74\,;\,1 \\[1mm]
17 & 384\,;\,1 & 152\,;\,7 & 104\,;\,1 & 102\,;\,11 & 100\,;\,23 & 98\,;\,24 & 96\,;\,21 & 79\,;\,1 & 79\,;\,1 \\[1mm]
18 & 434\,;\,1 & 163\,;\,7 & 138\,;\,24 & 109\,;\,11 & 107\,;\,25 & 105\,;\,31 & 103\,;\,27 & 84\,;\,1 & 84\,;\,1 \\[1mm]
19 & 486\,;\,1 & 174\,;\,4 & 147\,;\,20 & 116\,;\,9 & 114\,;\,29 & 112\,;\,37 & 110\,;\,36 & 108\,;\,29 & 89\,;\,1 \\[1mm]
20 & 542\,;\,1 & 185\,;\,3 & 156\,;\,18 & 123\,;\,8 & 121\,;\,30 & 119\,;\,46 & 117\,;\,45 & 115\,;\,39 & 94\,;\,1 \\[1mm]
&&&&&&&&& \\ \hline \hline
$n$ & $\Delta=11$ & $\Delta=12$ & $\Delta=13$ & $\Delta=14$ & $\Delta=15$ & $\Delta=16$ &
$\Delta=17$ & $\Delta=18$ & $\Delta=19$ \\[2mm] \hline
11 & & & & & & & & & \\[1mm]
12 & 33\,;\,1 & & & & & & & & \\[1mm]
13 & 59\,;\,1 & 36\,;\,1 & & & & & & & \\[1mm]
14 & 64\,;\,1 & 64\,;\,1 & 39\,;\,1 & & & & & & \\[1mm]
15 & 69\,;\,1 & 69\,;\,1 & 69\,;\,1 & 42\,;\,1 & & & & & \\[1mm]
16 & 74\,;\,1 & 74\,;\,1 & 74\,;\,1 & 74\,;\,1 & 45\,;\,1 & & & & \\[1mm]
17 & 79\,;\,1 & 79\,;\,1 & 79\,;\,1 & 79\,;\,1 & 79\,;\,1 & 48\,;\,1 & & & \\[1mm]
18 & 84\,;\,1 & 84\,;\,1 & 84\,;\,1 & 84\,;\,1 & 84\,;\,1 & 84\,;\,1 & 51\,;\,1 & & \\[1mm]
19 & 89\,;\,1 & 89\,;\,1 & 89\,;\,1 & 89\,;\,1 & 89\,;\,1 & 89\,;\,1 & 89\,;\,1 & 54\,;\,1 & \\[1mm]
20 & 94\,;\,1 & 94\,;\,1 & 94\,;\,1 & 94\,;\,1 & 94\,;\,1 & 94\,;\,1 & 94\,;\,1 & 94\,;\,1 & 57\,;\,1 \\[1mm]
\hline \hline
\end{tabular}
\vspace{5mm}
\baselineskip=0.20in
\noindent
{\bf Table 1.} The minimal value of the eccentric connectivity index of trees
with $n$ vertices and maximum vertex degree $\Delta$\,, and the number of such extremal trees.
\vspace{15mm}
\baselineskip=0.30in
\begin{proof}
We show that the longest path starting at vertex $v$ has to traverse the center vertex $c$\,.
This means that the eccentricity of $v$ is equal to the sum of $d (v, c)$ and the
longest path starting at $c$ and not contained in $T_i$\,. Assume that the longest path
$P$ from $v$ stays in the subtree $T_i$\,, and let $w$ be the
vertex from $P$ at the smallest distance from the root $c$\,. Then
$d (v, c) \geq d (v, w) + 1$\,. Since the root vertex is a center of $T$\,, we have
$\max\limits_{k \neq i} r_k + 1 \geq r_i$ and consequently
$$
d (v, c) + \max_{k \neq i} r_k \geq d (v, w) + r_i \geq |P| \ .
$$
This means that the path from $v$ through $c$\,, of length $d (v, c) + 1 + \max\limits_{k \neq i} r_k$\,, is strictly longer
than $P$\,, contradicting the assumption that $P$ is a longest path starting at $v$\,.
\end{proof}
We now present a simple linear algorithm for calculating the eccentric connectivity index of a
tree~$T$\,. First, find a center vertex of a tree -- this can be done in time $O (n)$ (see
\cite{CoLRS01} for details). For every vertex $v$\,, we have to find the length of the longest path
from $v$ in the subtree rooted at $v$\,. This can be done inductively using depth--first search,
also in time $O (n)$\,. If $r [v]$ represents the length of the longest path in the subtree rooted
at $v$\,, then
$$
r [v] = 1 + \max_{(v, w) \in E (T), \ w \neq p [v]} r [w]
$$
where $p [v]$ denotes the parent of vertex $v$ in $T$\,, and $r [v] = 0$ when $v$ is a leaf.
For every neighbor $c_i$ of the center vertex $c$\,, we can calculate the maximum
$\max\limits_{j \neq i} r [c_j]$\,. Finally, for every vertex $v$ we calculate the
eccentricity $\varepsilon (v)$ in $O (1)$ using Lemma \ref{le-ecc},
and sum $deg (v) \cdot \varepsilon (v)$\,.
The time complexity of the algorithm is linear $O (n)$\,, and the memory used
is $O (n)$\,, since we need three additional arrays of length $n$\,.
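The steps above can be written out as follows -- an illustrative Python transcription with our own identifiers, assuming the tree is given as an adjacency list (the two smallest trees, which the lemma does not cover, are handled separately):

```python
from collections import deque

def eccentric_connectivity_index(adj):
    """Linear-time xi^c of a tree; adj maps each vertex to a list of neighbors."""
    n = len(adj)
    if n == 1:
        return 0
    if n == 2:
        return 2                        # two pendent vertices of eccentricity 1

    # Step 1: find a center vertex by peeling leaves layer by layer, O(n).
    deg = {v: len(adj[v]) for v in adj}
    layer = [v for v in adj if deg[v] == 1]
    seen = len(layer)
    while seen < n:
        nxt = []
        for v in layer:
            for w in adj[v]:
                deg[w] -= 1
                if deg[w] == 1:
                    nxt.append(w)
        seen += len(nxt)
        layer = nxt
    c = layer[0]                        # one of the (at most two) center vertices

    # Step 2: BFS from c gives d(v, c); a reverse sweep gives subtree heights r[v].
    parent, depth, order = {c: None}, {c: 0}, [c]
    q = deque([c])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in depth:
                parent[w], depth[w] = u, depth[u] + 1
                order.append(w)
                q.append(w)
    r = {v: 0 for v in adj}
    for v in reversed(order):           # children are processed before parents
        if parent[v] is not None:
            r[parent[v]] = max(r[parent[v]], r[v] + 1)

    # Step 3: for every child c_i of c, the largest height among the other subtrees.
    kids = adj[c]
    best = second = None
    for k in kids:
        if best is None or r[k] > r[best]:
            best, second = k, best
        elif second is None or r[k] > r[second]:
            second = k
    other = {k: (r[second] if k == best else r[best]) for k in kids}

    # Step 4: the lemma gives ecc(v) = d(v, c) + 1 + max_{k != i} r[c_k] for v in T_i.
    branch = {}
    xi = len(kids) * (1 + r[best])      # ecc(c) equals the radius 1 + max_k r[c_k]
    for v in order[1:]:
        branch[v] = v if parent[v] == c else branch[parent[v]]
        xi += len(adj[v]) * (depth[v] + 1 + other[branch[v]])
    return xi

# Demo on the broom B_{11,6}: five pendent vertices and a path of length 5
# attached to vertex 0.
edges = [(0, i) for i in range(1, 6)] + [(0, 6), (6, 7), (7, 8), (8, 9), (9, 10)]
adj = {v: [] for v in range(11)}
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)
print(eccentric_connectivity_index(adj))   # 98
```

The top-two scan in step 3 keeps the whole procedure linear even when the center has large degree.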
\vspace{5mm}
\noindent
{\it Acknowledgement.\/} This work was supported by the
research grants 144015G and 144007 of the Serbian Ministry of Science
and Technological Development.
\vspace{5mm}
\section{Introduction}
In 1972 Montgomery \cite{kn:mont73} and Dyson \cite{kn:dyson}
discovered that pairs of zeros of the Riemann zeta-function are
distributed like pairs of eigenvalues of random unitary matrices.
Part of this discovery could be proven, under the assumption of
the Riemann Hypothesis, and part relies on a heuristic based on
the Hardy-Littlewood conjectures for the distribution of prime
pairs.
Odlyzko \cite{kn:odlyzko89} in the 1980s carried out a
substantial numerical test of Montgomery's conjecture, which
provided stunning visual substantiation.
Subsequently Rudnick and Sarnak \cite{kn:rudsar} showed that the
limit high on the critical line of the $n$-correlation of the
zeros of the Riemann zeta-function agreed with that of unitary
matrices, provided that the test function had a Fourier transform
with limited support.
At the same time, Bogomolny and Keating \cite{kn:bogkea96} showed
how the Hardy-Littlewood conjectures could be used to derive the
asymptotic limit of the $n$-correlation.
Bogomolny and Keating \cite{kn:bk96} also investigated the
difference between Montgomery's limiting pair-correlation
conjecture and the data of Odlyzko. A close examination of
Odlyzko's data revealed that lower order terms, likely to be of an
arithmetic nature, were present. They derived formulae for these
lower order terms, initially using the Hardy-Littlewood
conjectures, but subsequently developing a method whose point of
departure was the trace formula of Gutzwiller. They gave full
details for the lower order terms for the 2-point correlation, as
well as numerics showing the goodness of fit, whereas for three
and higher correlations, they outlined several methods which lead
to these lower order terms.
In this paper, we present a different approach to obtaining these
lower order terms for $n$-correlation. Our approach is based on
the `ratios conjecture' of Conrey, Farmer, and Zirnbauer
\cite{kn:cfz1,kn:cfz2} (see also \cite{kn:consna06}). Assuming the
ratios conjecture we prove a formula which explicitly gives all of
the lower order terms in any order correlation. (In the final
section we write down the first four correlations.)
Our method works equally well for random matrix theory. An
interesting feature of this work is the new formula for the
$n$-correlation of random matrix theory that arises by this method
(see Theorem \ref{theo:main}). It is a far less elegant formula
than the usual determinantal expression, but it allows for direct
comparison with the number theoretical result, illustrating the
identical structure of the $n$-point correlations of Riemann zeros
and random matrix eigenvalues. In fact, in the scaling limit (when
the variables in the test function are multiplied by $\log T/2\pi$
and $T$, the height up the critical line, becomes large) then all
of the arithmetic features of the formula for the $n$-correlation
of the Riemann zeros disappear and it exactly matches our new
formula for the $n$-correlation of eigenvalues of unitary
matrices in the equivalent limit. This identification allows us to
prove that in the scaling limit the leading order terms for our
$n$-correlation of the Riemann zeros have the expected
determinantal form. See \cite{kn:consna07} for the explicit
derivation of the asymptotic limit in the case of the triple
correlation of Riemann zeros. The higher correlations follow in
exactly the same way.
This point is significant in view of the difficulty
in making this identification in other works on $n$-correlation
and $n$-level density.
In Rudnick and Sarnak this identification is proven in the
case of test functions restricted to $[-1,1]$; the proof is quite involved and
makes serious use of the restriction on the support of the test
function.
Indeed this point forms a difficulty which shows up, for example,
in the work of Gao \cite{kn:gao05} on $n$-level density for zeros
of quadratic $L$-functions. Rubinstein \cite{kn:rub98} had
evaluated this for test functions whose total support was
contained in $[-1,1]$ and verified the determinantal form for
functions restricted to this class, analogous to what Rudnick and
Sarnak did. Gao extended the range of support to $[-2,2]$ but for
these test functions was unable to derive the determinantal form,
due to combinatorial complexities. Here we handle the case in full
generality without any mention of the test function. It is
possible that our method will shed light on this difficulty that
arises in these other works.
This paper extends the calculation of the triple correlation of
Riemann zeros \cite{kn:consna07}. An anticipated application of
this current work is to the determination of the lower order terms
in the nearest neighbor spacing for zeta-zeros.
Throughout this paper we assume the truth of the Riemann
Hypothesis.
\section{Background and notation}
\subsection{The Riemann zeta-function}
The Riemann zeta-function is defined by
\begin{eqnarray}\zeta(s)=\sum_{n=1}^\infty \frac{1}{n^s}
\end{eqnarray} for $s=\sigma+it$ with $\sigma >1.$ It has a
meromorphic continuation to the whole complex plane with its only
singularity a simple pole at $s=1$ with residue 1. It satisfies a
functional equation which, in its symmetric form, reads
\begin{eqnarray}
\pi^{-\frac s 2}\Gamma\bigg(\frac s 2 \bigg) \zeta(s) = \pi^{
\frac {s-1} 2}\Gamma\bigg(\frac {1-s} 2 \bigg) \zeta(1-s)
\end{eqnarray}
and in its asymmetric form is
\begin{eqnarray}
\zeta(s)=\chi(s)\zeta(1-s)\end{eqnarray} where
\begin{eqnarray}
\chi(1-s)=\chi(s)^{-1}=2(2\pi)^{-s}\Gamma(s)\cos \frac{\pi s}{2}.
\end{eqnarray}
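The reflection property $\chi(1-s)=\chi(s)^{-1}$ can be checked numerically from the closed form above by evaluating it at $s$ and at $1-s$ on the real interval $(0,1)$, where $\Gamma$ is real; a minimal Python sketch (an illustration only, not part of the argument):

```python
import math

# chi_one_minus(s) returns chi(1-s) = 2 (2 pi)^{-s} Gamma(s) cos(pi s / 2),
# exactly the closed form stated in the text.  Evaluating the same formula
# at s and at 1-s must give reciprocal values, i.e. chi(1-s) chi(s) = 1.

def chi_one_minus(s):
    return 2.0 * (2.0 * math.pi) ** (-s) * math.gamma(s) * math.cos(math.pi * s / 2.0)

for s in (0.25, 0.3, 0.7, 0.9):
    product = chi_one_minus(s) * chi_one_minus(1.0 - s)  # = chi(1-s) * chi(s)
    assert abs(product - 1.0) < 1e-12, (s, product)
```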
The product formula discovered by Euler is
\begin{eqnarray}
\zeta(s)=\prod_p \bigg( 1-\frac{1}{p^s}\bigg)^{-1}
\end{eqnarray}
for $\sigma>1$ where the product is over the prime numbers $p$.
The complex zeros of the Riemann zeta-function are denoted by
$\rho=\beta+i\gamma$ where it is known that $0<\beta<1.$
The Riemann Hypothesis asserts that $\beta=1/2$ for all zeros $\rho$. We assume
this is true and denote the zeros as $1/2+i\gamma_j$, where
$0<\gamma_1\le \gamma_2\le \dots.$ The number of $\gamma$ with
$0<\gamma\le T$ is given by
\begin{eqnarray}
N(T)=\#\{\gamma: 0<\gamma\le T\}=\frac{T}{2\pi}\log \frac{T}{2\pi e} +O(\log
T)
\end{eqnarray}
so that the average distance from one $\gamma$ to the next is
$\sim2\pi/\log \gamma$.
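The main term of the counting formula is already accurate at modest heights; for instance, it is a classical numerical fact (quoted here purely for illustration) that there are exactly $29$ zeros with $0<\gamma\le 100$. A short Python check:

```python
import math

# Main term of N(T) = (T/2 pi) log(T/(2 pi e)) + O(log T), as in the text.
def N_main_term(T):
    return T / (2.0 * math.pi) * math.log(T / (2.0 * math.pi * math.e))

# The true count of zeros with 0 < gamma <= 100 is 29 (a standard value,
# assumed here as an external fact); the main term is about 28.1,
# well within the O(log T) error.
approx = N_main_term(100.0)
assert abs(approx - 29) < math.log(100.0)
```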
The family $\{\zeta(1/2+it)| t>0\}$ parametrized by real numbers
$t$ can be modeled by characteristic polynomials of unitary
matrices.
\subsection{Unitary matrices}
If $X$ is an $N\times N$ matrix with complex entries $X=(x_{jk})$,
we let $X^*$ be its conjugate transpose, i.e. $X^*=(y_{jk})$ where
$y_{jk}=\overline{x_{kj}}.$ $X$ is said to be unitary if $XX^*=I$.
We let $U(N)$ denote the group of all $N\times N$ unitary
matrices. This is a compact Lie group and has a Haar measure which
allows us to do analysis.
All of the eigenvalues of $X\in U(N)$ have absolute value 1; we
write them as
\begin{eqnarray}e^{i\theta_1}, e^{i\theta_2}, \dots , e^{i\theta_N}.\end{eqnarray}
The eigenvalues of $X^*$ are $e^{-i\theta_1},\dots,
e^{-i\theta_N}$. Clearly, the determinant, $\det X=\prod_{n=1}^N
e^{i\theta_n}$ of
a unitary matrix is a complex number
with absolute value equal to 1.
The average distance from one $\theta$ to the next is $2\pi/N$. To
obtain a sequence of numbers with average spacing 1 we let
\begin{eqnarray}
\tilde{\theta_j}=\frac{N\theta_j}{2\pi}.
\end{eqnarray}
For any sequence of $N$ points on the unit circle there are
matrices in $U(N)$ with these points as eigenvalues. The
collection of all matrices with the same set of eigenvalues
constitutes a conjugacy class in $U(N)$. Thus, the set of
conjugacy classes can be identified with the collection of sequences
of $N$ points on the unit circle.
We are interested in computing various statistics about these
eigenvalues. Consequently, we identify all matrices in $U(N)$ that
have the same set of eigenvalues.
Weyl's integration formula gives a simple way to perform averages over $U(N)$
for functions $f$ that are constant on conjugacy classes.
Such functions are called `class functions'.
Weyl's formula asserts that for such an $f$,
\begin{eqnarray}\int_{U(N)} f(X) ~d\mbox{Haar}=\int_{[0,2\pi]^N}
f(\theta_1,\dots,\theta_N)dX_N,\end{eqnarray} where
\begin{eqnarray} dX_N &=&\prod_{1\le j<k\le
N}\big|e^{i\theta_k}-e^{i\theta_j}\big|^2 ~\frac{d\theta_1 \dots
d\theta_N}{N! (2\pi)^N}.\end{eqnarray} Since $N$ will be fixed in
this paper, we will usually write $dX$ in place of $dX_N$. The Haar
measure can be expressed in terms of the Vandermonde determinant
\begin{eqnarray}
\Delta(w_1,\dots,w_R)=\det_{R\times R}\big(
w_i^{j-1}\big)=\prod_{1\le j < k\le R}(w_k-w_j).
\end{eqnarray}
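The equality of the determinant and the product can be verified numerically for a small $R$; a Python sketch (illustration only) computing the determinant from the Leibniz permutation expansion:

```python
import itertools

# det(M) via the Leibniz expansion: sum over permutations of
# sign(perm) * prod_i M[i][perm[i]], with the sign from the inversion count.
def det(M):
    n = len(M)
    total = 0.0
    for perm in itertools.permutations(range(n)):
        sign = 1
        for a in range(n):
            for b in range(a + 1, n):
                if perm[a] > perm[b]:
                    sign = -sign
        prod = 1.0
        for i in range(n):
            prod *= M[i][perm[i]]
        total += sign * prod
    return total

w = [0.3, 1.1, 2.0, 3.7]                      # arbitrary sample points, R = 4
M = [[wi ** j for j in range(len(w))] for wi in w]   # entries w_i^{j-1}
lhs = det(M)
rhs = 1.0
for j in range(len(w)):
    for k in range(j + 1, len(w)):
        rhs *= w[k] - w[j]                    # prod_{j<k} (w_k - w_j)
assert abs(lhs - rhs) < 1e-9 * abs(rhs)
```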
The characteristic polynomial of a matrix $X$ is denoted
$\Lambda_X(s)$ and is defined by
\begin{eqnarray}\Lambda_X(s)=\det(I-sX^*)=\prod_{n=1}^N(1-se^{-i\theta_n}).
\end{eqnarray}
The roots of $\Lambda_X(s)$ are the eigenvalues of $X$ and are on
the unit circle. The characteristic polynomial satisfies the
functional equation
\begin{eqnarray} \Lambda_X(s)&=&(-s)^N\prod_{n=1}^N e^{-i\theta_n}\prod_{n=1}^N
(1-e^{i\theta_n}/s)\\
&=&(-1)^N \det X^* ~s^N~\Lambda_{X^*}(1/s).\end{eqnarray} Note that
\begin{eqnarray} \label{eqn:fe}
s\frac{\Lambda_X'}{\Lambda_X}(s)+\frac 1 s
\frac{\Lambda_{X^*}'}{\Lambda_{X^*}}\big(\frac 1s\big)=N.
\end{eqnarray}
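The identity (\ref{eqn:fe}) can be confirmed numerically on a diagonal unitary matrix with random eigenangles (every unitary matrix is conjugate to such a matrix, and both sides depend only on the eigenvalues); a Python sketch, for illustration only:

```python
import cmath
import math
import random

# Check: s Lambda_X'/Lambda_X(s) + (1/s) Lambda_{X*}'/Lambda_{X*}(1/s) = N.
random.seed(1)
N = 5
thetas = [random.uniform(0.0, 2.0 * math.pi) for _ in range(N)]
s = 0.4 + 0.3j

def log_deriv(s, eps):
    # logarithmic derivative (in s) of prod_n (1 - s e^{eps * i theta_n});
    # eps = -1 gives Lambda_X, eps = +1 gives Lambda_{X*}
    return sum(-cmath.exp(eps * 1j * t) / (1.0 - s * cmath.exp(eps * 1j * t))
               for t in thetas)

total = s * log_deriv(s, -1) + (1.0 / s) * log_deriv(1.0 / s, +1)
assert abs(total - N) < 1e-10
```

In fact the identity holds term by term: writing $u=se^{-i\theta_n}$, the $n$-th summands are $-u/(1-u)$ and $1/(1-u)$, which sum to $1$.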
These characteristic polynomials have value distributions similar
to that of the Riemann zeta-function and form the basis of Random
Matrix models which predict behavior for the Riemann zeta-function
based on what can be proven about the $\Lambda$. Some care has to
be taken in making these comparisons because we are used to
thinking about the zeta-function in a half-plane whereas the
$\Lambda$ are naturally studied in a circle. The translation is
that the 1/2-line corresponds to the unit circle; the half-plane
to the right of the 1/2-line corresponds to the inside of the unit
circle. Note that $\Lambda_X(0)=1$ is the analogue of
$\lim_{\sigma\to \infty}\zeta(\sigma+it)=1$.
We let
\begin{eqnarray}\label{eq:z}
z(x)=\frac{1}{1-e^{-x}}.
\end{eqnarray}
In our formulas for averages of characteristic polynomials the
function $z(x)$ plays the role for random matrix theory that
$\zeta(1+x)$ plays in the theory of moments of the Riemann
zeta-function.
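Near $x=0$ one has $z(x)=\frac 1x+\frac 12+\frac{x}{12}+O(x^3)$, $\frac{z'}{z}(x)=-\frac 1x+\frac 12+O(x)$, and $z(x)z(-x)=-\frac{1}{x^2}+\frac{1}{12}+O(x^2)$, expansions which recur in the residue computations later in the paper. A quick numerical confirmation (illustration only):

```python
import math

# z(x) = 1/(1 - e^{-x}); expm1 avoids cancellation for small x.
def z(x):
    return 1.0 / (-math.expm1(-x))

def z_log_deriv(x):
    return -1.0 / math.expm1(x)     # z'/z (x) = -1/(e^x - 1)

x = 1e-3
assert abs(z(x) - 1.0 / x - 0.5) < 1e-3            # z(x) = 1/x + 1/2 + x/12 + ...
assert abs(z_log_deriv(x) + 1.0 / x - 0.5) < 1e-3  # z'/z(x) = -1/x + 1/2 + ...
assert abs(z(x) * z(-x) + 1.0 / x ** 2 - 1.0 / 12.0) < 1e-4
```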
We want an accurate formula for
\begin{eqnarray}
\sideset{}{^*}\sum_{0<\gamma_{j_1},\dots,\gamma_{j_n}<T}f(\gamma_{j_1},\dots,\gamma_{j_n}),
\end{eqnarray}
for suitable functions $f$, to be described later, where the sum
is for distinct indices $j$; the desired formula should be
analogous to the RMT theorem which we state in the next section.
\section{Eigenvalue correlations }
Here is a statement for $n$-correlation of eigenvalues of random
unitary matrices of size $N$:
\begin{theorem}
Let $f:[0,2\pi]^n\to \mathbb C$ be a continuous function of
$n$-variables. Then
\begin{eqnarray*}
\int_{U(N)}\sideset{}{^*}\sum_{1\le j_1,\dots , j_n\le N}
f(\theta_{j_1},\dots ,\theta_{j_n}) dX_N = \frac{1}{(2\pi)^n}
\int_{[0,2\pi]^n} f(\theta_1,\dots,\theta_n)
\det_{n\times n} S_N(\theta_k-\theta_j)~d\theta_1\dots
~d\theta_n,
\end{eqnarray*}
where $\sideset{}{^*}\sum$ indicates that the sum is for distinct
indices and where
\begin{eqnarray*}
S_N(\theta)=\frac{\sin \frac {N\theta}{2}}{\sin \frac \theta 2 }.
\end{eqnarray*}
\end{theorem}
This theorem is a well-known consequence of Gaudin's Lemma, see
\cite{kn:conrey04}.
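For the smallest nontrivial case $N=n=2$ the theorem can be confirmed by direct numerical integration, since the Weyl density $|e^{i\theta_1}-e^{i\theta_2}|^2$ coincides with the $2\times 2$ sine-kernel determinant; a Python sketch (an illustration with an arbitrarily chosen smooth periodic test function):

```python
import math

N = 2    # matrix size, n = 2 correlation
M = 80   # grid points per angle, midpoint rule on the periodic square

def S(theta):
    # sine kernel S_N(theta) = sin(N theta/2)/sin(theta/2), with S_N(0) = N
    s = math.sin(theta / 2.0)
    if abs(s) < 1e-12:
        return float(N)
    return math.sin(N * theta / 2.0) / s

def f(t1, t2):
    # an arbitrary smooth 2*pi-periodic test function
    return math.cos(t1) * math.cos(t1 - 2.0 * t2) + 2.0

h = 2.0 * math.pi / M
lhs = rhs = 0.0
for i in range(M):
    t1 = (i + 0.5) * h
    for j in range(M):
        t2 = (j + 0.5) * h
        vandermonde = abs(complex(math.cos(t1), math.sin(t1))
                          - complex(math.cos(t2), math.sin(t2))) ** 2
        # Weyl side: the sum over distinct indices is f(t1,t2) + f(t2,t1)
        lhs += (f(t1, t2) + f(t2, t1)) * vandermonde * h * h / (2.0 * (2.0 * math.pi) ** 2)
        # determinantal side with the sine kernel
        det = S(0.0) ** 2 - S(t1 - t2) * S(t2 - t1)
        rhs += f(t1, t2) * det * h * h / (2.0 * math.pi) ** 2
assert abs(lhs - rhs) < 1e-8
```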
In the following sections we present a new proof of this theorem,
for periodic, holomorphic test functions $f$, based on a formula
for averaging ratios of characteristic polynomials of unitary
matrices. There are many proofs of this formula, see
\cite{kn:cfz1,kn:cfs05,kn:bumgam06}. The point of this approach is
that it has a natural analog in the theory of $L$-functions.
\subsection{Averages of ratios of characteristic polynomials}
\label{sect:ratiolambda}
The statement of the ratios theorem is slightly complicated. We
attempt to make it easier to comprehend by eliminating subscripts.
So, let there be given finite sets $A, B, C$ and $D$ and consider
\begin{eqnarray} \label{eq:R}
&&\mathcal R(A,B;C,D):=\int_{U(N)}\frac{\prod_{\alpha\in A}
\Lambda_X(e^{-\alpha})\prod_{\beta\in B}
\Lambda_{X^*}(e^{-\beta})} {\prod_{\gamma\in
C}\Lambda_X(e^{-\gamma}) \prod_{\delta\in D}
\Lambda_{X^*}(e^{-\delta})}dX,
\end{eqnarray}
with $\Re \gamma>0, \Re \delta >0$. Theorem \ref{theo:rat}, the
ratios theorem, is written in an equivalent but slightly different
form to previous work, where we express $\mathcal R(A,B;C,D)$ as a
sum over subsets $S\subset A$ and $T\subset B$ with $|S|=|T|$. Each
term in this sum essentially has the same structure, except that the
elements of $S$ effectively exchange places with those in $T$. In
addition we let $\overline{S}=A-S$ and $\overline{T}=B-T$. We will
let $\hat \alpha$ denote a generic member of $S$ and $\hat \beta$
denote a generic member of $T$; we will use $\alpha$ and $\beta$ for
generic members of $A$ and $B$ or of $\overline{S}$ and
$\overline{T}$, according to the context. Also $S^-=\{-\hat\alpha:
\hat\alpha\in S\}$, and similarly for $T^-$. The Ratios Theorem is
most easily stated in terms of
\begin{eqnarray}
Z(A,B):=\prod_{\alpha\in A\atop\beta\in B}z(\alpha+\beta),
\end{eqnarray}
where $z(x)=\frac{1}{1-e^{-x}}$, and
\begin{eqnarray}\label{eq:Z}Z(A,B;C,D):=
\frac{\prod_{\alpha\in A\atop \beta\in B}
z(\alpha+\beta)\prod_{\gamma\in C\atop \delta\in
D}z(\gamma+\delta)} {\prod_{\alpha\in A\atop \delta\in D}
z(\alpha+\delta) \prod_{\beta\in B\atop \gamma\in C}
z(\beta+\gamma)}=\frac{Z(A,B)Z(C,D)}{Z(A,D)Z(B,C)}.\end{eqnarray}
\begin{theorem}[Ratios Theorem \cite{kn:cfz1,kn:cfs05}] \label{theo:rat}
With $\Re \gamma>0, \Re \delta >0$ for $\gamma\in C$ and $\delta
\in D$, $|C|\leq|A|+N$ and $|D|\leq|B|+N$, we have
\begin{eqnarray*}
&&\mathcal R(A,B;C,D)=\sum_{S\subset A,T\subset B\atop
|S|=|T|}e^{-N(\sum_{\hat\alpha\in S} \hat \alpha +\sum_{\hat\beta
\in T}\hat\beta)} Z(\overline{S}+ T^-,\overline{T}+ S^-;C,D),
\end{eqnarray*}
where $A=S+\overline{S}$, $B=T+\overline{T}$ and $Z$ is defined at
(\ref{eq:Z}).
\end{theorem}
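For orientation, consider the simplest case $A=\{\alpha\}$, $B=\{\beta\}$, $C=\{\gamma\}$, $D=\{\delta\}$. The only admissible subsets are $S=T=\emptyset$ and $S=\{\alpha\}$, $T=\{\beta\}$, and the theorem reads
\begin{eqnarray}
\mathcal R(A,B;C,D)=\frac{z(\alpha+\beta)z(\gamma+\delta)}{z(\alpha+\delta)z(\gamma+\beta)}
+e^{-N(\alpha+\beta)}\frac{z(-\alpha-\beta)z(\gamma+\delta)}{z(\gamma-\alpha)z(\delta-\beta)}.
\end{eqnarray}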
\subsection{Averages of logarithmic derivatives of characteristic
polynomials}
For use in determining multiple correlation we differentiate the
Ratios Theorem to obtain a theorem about averages of logarithmic
derivatives:
\begin{theorem}
\label{theo:J} If $\Re \alpha_j>0$ and $ \Re \beta_j
>0$ for $\alpha_j\in A$ and $\beta_j\in B$, then $J(A;B)=J^*(A;B)$ where
\begin{eqnarray}
J(A;B)&:=& \int_{U(N)}\prod_{\alpha\in A}
(-e^{-\alpha})\frac{\Lambda_X'}{\Lambda_X}(e^{-\alpha})\prod_{\beta\in
B} (-e^{-\beta})\frac{\Lambda_{X^*}'}{\Lambda_{X^*}}(e^{-\beta})~ dX
,
\end{eqnarray}
\begin{eqnarray} &&J^*(A;B):= \nonumber \\
&&\qquad\qquad\sum_{S\subset A,T\subset B\atop
|S|=|T|}e^{-N(\sum_{\hat \alpha\in S} \hat \alpha
+\sum_{\hat{\beta}\in T}\hat\beta)} \frac{Z(S,T)Z(S^-,T^-)} {
Z^{\dagger}(S,S^-)Z^{\dagger}(T,T^-)} \sum_{{(A-S)+ (B-T)\atop =
U_1+\dots + U_R}\atop |U_r|\le 2}\prod_{r=1}^R H_{S,T}(U_r),
\end{eqnarray}
and
\begin{equation}\label{eqn:Hrmt}
H_{S,T}(W)=\left\{\begin{array}{ll} \sum_{\hat \alpha\in
S}\frac{z'}{z}(\alpha-\hat{\alpha})-\sum_{\hat\beta\in T}
\frac{z'}{z}(\alpha +\hat \beta) &\mbox{ if $W=\{\alpha\}\subset
A-S$}
\\
\sum_{\hat\beta\in T}\frac{z'}{z}(\beta-\hat
\beta)-\sum_{\hat\alpha\in S} \frac{z'}{z}
(\beta+\hat\alpha) &\mbox{ if $W=\{\beta\}\subset B-T$}\\
\left(\frac{z'}{z}\right)'(\alpha+\beta) & \mbox{ if
$W=\{\alpha,\beta\}$ with $
{\alpha \in A-S, \atop \beta\in B-T}$}\\
0&\mbox{ otherwise}.
\end{array}\right.
\end{equation}
Also, $Z(A,B)=\prod_{\alpha\in A\atop\beta\in B}z(\alpha+\beta)$,
with the dagger on $Z^\dagger(S,S^-)$ imposing the additional
restriction that a factor $z(x)$ is omitted if its argument is
zero.
\end{theorem}
\begin{remark}The definitions of $J(A;B)$ and $J^*(A;B)$ make sense
without the restriction that $\Re \alpha_j>0,$ and $ \Re \beta_j
>0$. However, the two are not equal without these
restrictions.\end{remark}
\begin{remark}Note that $J^*(A;B)$ has a pole when an $\alpha\in A$ is
equal to $-\beta$, for some $\beta \in B$. It also appears to have
a pole when two $\alpha$'s are equal, say $\alpha_1=\alpha_2$,
occurring when $\alpha_1\in S$ and $\alpha_2\notin S$, as seen in
the term $\frac{z'}{z}(\alpha-\hat\alpha)$ of (\ref{eqn:Ha}).
However, this is cancelled by a pole with residue of the opposite
sign when $S$ is replaced by $S-\{\alpha_1\}+ \{\alpha_2\}$. The
same phenomenon occurs when two $\beta$'s are equal, as can be
seen in the concrete examples given in (\ref{eqn:Jabb}) and
(\ref{eqn:Jabbb}).\end{remark}
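For example, when $A=\{\alpha\}$ and $B=\{\beta\}$ only two terms survive: for $S=T=\emptyset$ the singleton sums in (\ref{eqn:Hrmt}) are empty, so only the partition $U_1=\{\alpha,\beta\}$ contributes, while for $S=\{\alpha\}$, $T=\{\beta\}$ the partition sum is empty and contributes $1$. Thus
\begin{eqnarray}
J(\{\alpha\};\{\beta\})=\left(\frac{z'}{z}\right)'(\alpha+\beta)
+e^{-N(\alpha+\beta)}z(\alpha+\beta)z(-\alpha-\beta).
\end{eqnarray}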
\begin{proof} By (\ref{eq:R}), we have
\begin{eqnarray} \label{eq:Jasderiv}
J(A;B) =\left.\prod_{\alpha\in A \atop \beta \in
B}\frac{d}{d\alpha}\frac{d}{d\beta} \mathcal
R(A,B;C,D)\right|_{C=A \atop D=B}.
\end{eqnarray}
Of course, in this situation $|C|=|A|$ and $|D|=|B|$; so that we
may think of $A=\{\alpha_1,\dots ,\alpha_k\}$ and
$C=\{\gamma_1,\dots,\gamma_k\}$ and then the substitution
``$C=A$'' means the substitution $\gamma_i=\alpha_i$ for
$i=1,2,\dots, k$, and similarly for $D$ and $B$.
Recall from Theorem \ref{theo:rat} that $\mathcal R$ is expressed
as a sum of $Z$ over subsets $S$ and $T$. In performing the
differentiations in (\ref{eq:Jasderiv}) we will find that the
derivatives with respect to the variables in $S$ and $T$ are fairly
simple to perform (as we will show below, culminating in
(\ref{eq:JSTderiv})), but we will need Lemma \ref{lemma:diff} to
differentiate with respect to the remaining variables. Hence we
first
rewrite $Z$ so as to separate these variable types.
Note first that $Z(A,B)=Z(B,A)$ and that
$Z$ behaves nicely with respect to unions:
\begin{eqnarray}\label{eqn:Zunion}
Z(A_1+ A_2,B)=Z(A_1,B)Z(A_2,B).
\end{eqnarray}
Recall that $A=S+ \overline{S}$ and $B=T+ \overline{T}$ and put
$C=C_S+ C_{\overline{S}}$ and $D=D_T+ D_{\overline{T}}$ where we
think of $C_S$, for example, as being the set that will be
substituted by $S$ when eventually $C$ is substituted by $A$. Then
using (\ref{eqn:Zunion}) repeatedly, we have
\begin{eqnarray}\label{eq:Zswitched}
&&Z(\overline{S}+ T^-,\overline{T}+ S^-;C,D)\\\nonumber
&&\qquad\qquad=\frac{Z(\overline{S},\overline{T})Z(\overline{S},S^-)Z(T^-,\overline{T})Z(T^-,S^-)Z(C,D)}{Z(\overline{S},D)Z(\overline{T},C)Z(S^-,C_{\overline{S}})Z(T^-,D_{\overline{T}})Z(S^-,C_S)Z(T^-,D_T)}.
\end{eqnarray}
This simplifies further if we make the substitution for $C$ and $D$:
\begin{eqnarray} \label{eqn:nonsense}
Z(\overline{S}+ T^-,\overline{T}+
S^-;C,D)\big|_{C=A\atop
D=B}=\frac{Z(S,T)Z(S^-,T^-)}{Z(S,S^-)Z(T,T^-)}=Z(S,T;T^-,S^-).
\end{eqnarray}
Note that since $z(x)$ has a pole at $x=0$, the resulting
expression is 0 unless both $S$ and $T$ are empty.
We now differentiate (\ref{eq:Zswitched}) with respect
to the variables in $S$ and $T$; these derivatives are easy to
calculate because, anticipating the substitution of each
$\gamma\in C_S$ by an $\hat \alpha$ we see that in differentiating
with respect to $\hat \alpha$ the expression $z(\gamma-\hat
\alpha)$ in the denominator
(one of the factors of
$Z(S^-,C_S)$) must be differentiated; if not it makes the whole
expression 0 after the substitution is made because $z(x)$ has a
pole at $x=0$. Using the notation
\begin{eqnarray}
Z^{\dagger}(X,Y)=\prod_{x\in X,y\in Y\atop x+y\ne 0}z(x+y),
\end{eqnarray}
and noting that
\begin{eqnarray} \frac{d}{d\hat\alpha}
\frac{1}{z(\gamma-\hat\alpha)} = -e^{\hat\alpha-\gamma},
\end{eqnarray}
we have, for example,
\begin{eqnarray}
\prod_{\hat\alpha\in S}\frac{d}{d\hat\alpha}
\frac{1}{Z(S^-,C_S)}\bigg|_{C_S=S}=\frac{(-1)^{|S|}}{Z^{\dagger}(S^-,S)}.
\end{eqnarray}
In this way we obtain
\begin{eqnarray}\label{eq:JSTderiv}
&& J(A;B)= \sum_{S,T\atop |S|=|T|}e^{-N(\sum \hat \alpha
+\sum\hat\beta)}
\frac{Z(S,T)Z(S^-,T^-)}{Z^{\dagger}(S,S^-)Z^{\dagger}(T,T^-)}
\\&&\qquad \times \left. \nonumber \prod_{\alpha\in \overline{S}\atop \beta\in
\overline{T}}\frac{d}{d\alpha}\frac{d}{d\beta}
\left(\frac{Z(\overline{S},\overline{T})Z(\overline{S},S^-)Z(\overline{T},T^-)Z(C,D)}
{Z(S,T)Z(C_{\overline{S}},S^-)Z(D_{\overline{T}},T^-)
Z(\overline{S},D)Z(\overline{T},C)
}\right)\right|_{C=A\atop D=B}.\end{eqnarray}
Note that the sets $C_{\overline{S}}$ and $D_{\overline{T}}$ vary
from term to term in the sum over $S$ and $T$ since the division of
$C$ into the union of sets $C_S$ and $C_{\overline{S}}$ mimics the
form of $A=S+\overline{S}$, and similarly for $D$. Also observe
that
\begin{eqnarray}
\frac{Z(\overline{S},\overline{T})Z(\overline{S},S^-)Z(\overline{T},T^-)Z(C,D)}
{Z(S,T)Z(C_{\overline{S}},S^-)Z(D_{\overline{T}},T^-)
Z(\overline{S},D)Z(\overline{T},C)
}\bigg|_{C=A\atop D=B}=1.
\end{eqnarray}
To perform the differentiations in (\ref{eq:JSTderiv}) we use a
form of logarithmic differentiation expressed in the following.
\begin{lemma}
\label{lemma:diff} Let $H$ be a differentiable function of $w\in
W$. Then
\begin{eqnarray}
\left(\prod_{w\in W}\frac{d}{dw}\right)e^H= e^{H }
\sum_{W=W_1+\dots+W_r}H(W_1)\dots H(W_r)
\end{eqnarray}
where
\begin{eqnarray}
H(W)=\left(\prod_{w\in W} \frac{d}{dw}\right)H.
\end{eqnarray}
The sum is over all set partitions of $W$ into disjoint sets
$W_j$.
\end{lemma}
In words this Lemma says that to perform a derivative with respect
to each variable once, we form all of the set partitions of the
complete set of variables and add up over these set partitions the
product of the partial derivatives of the exponent $H$ with
respect to each variable in each subset of the partition. This
lemma is obvious upon working a few examples.
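For two variables the lemma asserts $\frac{\partial^2}{\partial w_1\,\partial w_2}\,e^H=e^H\big(H(\{w_1,w_2\})+H(\{w_1\})H(\{w_2\})\big)$, corresponding to the two set partitions of $\{w_1,w_2\}$. A finite-difference check with a concrete, arbitrarily chosen $H$ (illustration only):

```python
import math

# H(w1, w2) = w1*w2 + sin(w1), chosen arbitrarily for the check.
def H(w1, w2):
    return w1 * w2 + math.sin(w1)

w1, w2, h = 0.3, 0.7, 1e-4
F = lambda a, b: math.exp(H(a, b))

# central second mixed difference of e^H
mixed = (F(w1 + h, w2 + h) - F(w1 + h, w2 - h)
         - F(w1 - h, w2 + h) + F(w1 - h, w2 - h)) / (4.0 * h * h)

H1 = w2 + math.cos(w1)   # dH/dw1, i.e. H({w1})
H2 = w1                  # dH/dw2, i.e. H({w2})
H12 = 1.0                # d^2 H/dw1 dw2, i.e. H({w1, w2})
exact = F(w1, w2) * (H12 + H1 * H2)
assert abs(mixed - exact) < 1e-6
```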
We apply this lemma with
\begin{eqnarray} \label{eqn:Hdef} &&
H=H^{A,B,C,D}_{S,T}:=\sum_{\alpha\in \overline{S}\atop \beta\in
\overline{T}} \log z(\alpha+\beta)+\sum_{\alpha\in \overline{S}\atop
\hat \alpha\in S} \log z(\alpha-\hat\alpha) + \sum_{\beta\in
\overline{T}\atop \hat \beta\in T}
\log z(\beta-\hat\beta)\\
&& \qquad \qquad \qquad \qquad
-
\sum_{\alpha\in \overline{S}\atop \delta\in D} \log
z(\alpha+\delta)-\sum_{\beta\in \overline{T}\atop \gamma\in C} \log
z(\beta+\gamma)\nonumber
\end{eqnarray}
and so obtain, with
$H_{S,T}(W):=H^{A,B,C,D}_{S,T}(W)\big|_{C=A\atop
D=B}=\left(\left(\prod_{w\in W}
\frac{d}{dw}\right)H^{A,B,C,D}_{S,T}\right)\big|_{C=A\atop D=B}$,
\begin{eqnarray}
&& J(A;B)= \sum_{S,T\atop |S|=|T|}e^{-N(\sum \hat \alpha
+\sum\hat\beta)}
\frac{Z(S,T)Z(S^-,T^-)}{Z^{\dagger}(S,S^-)Z^{\dagger}(T,T^-)}\sum_{\overline{S}+
\overline{T}\atop = W_1+\dots +W_R}\prod_{r=1}^R H_{S,T}(W_r).
\end{eqnarray}
Strictly speaking, $H_{S,T}(W)$ depends on $A$ and $B$, but from now
on use of $H_{S,T}(W)$ will refer always to the expressions in
(\ref{eqn:Ha})-(\ref{eq:H3}) and these can be used without
specifically referring to $A$ and $B$.
By consideration of (\ref{eqn:Hdef}) it is clear that we can
restrict the subsets $W_r$ to be singletons or else pairs which
have precisely one $\alpha$ and one $\beta$. This follows from
some easy calculations. Since
\begin{eqnarray}
H^{A,B,C,D}_{S,T}(\{\alpha\})=\sum_{\beta\in \overline{T}}
\frac{z'}{z}(\alpha+\beta)+\sum_{\hat \alpha \in S}
\frac{z'}{z}(\alpha-\hat\alpha) -\sum_{\delta \in D}
\frac{z'}{z}(\alpha+\delta),
\end{eqnarray}
we have
\begin{eqnarray} \label{eqn:Ha}
H_{S,T}(\{\alpha\})&=& \sum_{\hat \alpha\in S}
\frac{z'}{z}(\alpha-\hat\alpha) -\sum_{\hat \beta\in
T}\frac{z'}{z}(\alpha+\hat \beta), \;\;\; \alpha\notin S.
\end{eqnarray}
Similarly,
\begin{eqnarray} \label{eqn:Hb}
H_{S,T}(\{\beta\})&=& \sum_{\hat \beta\in T}
\frac{z'}{z}(\beta-\hat\beta) -\sum_{\hat \alpha\in
S}\frac{z'}{z}(\beta+\hat \alpha),\;\;\; \beta \notin T.
\end{eqnarray}
In addition
\begin{eqnarray} \label{eqn:Hab}
H_{S,T}(\{\alpha,\beta \})=
\left(\frac{z'}{z}\right)'(\alpha+\beta),\;\;\; \alpha,\beta \notin
S {\rm \;or\;} T.
\end{eqnarray}
Also, \begin{equation}\label{eq:H0} H_{S,T}(\emptyset)=1,
\end{equation}
and
\begin{equation}\label{eq:Haa}
H_{S,T}(\{\alpha,\alpha'\})=H_{S,T}(\{\beta,\beta'\})=0
\end{equation}
and
\begin{equation}\label{eq:H3}
H_{S,T}(W)=0,\;\;\; {\rm \;if\;} |W|\ge 3.
\end{equation}
\end{proof}
\subsection{Residue identity}\label{sect:res}
A key ingredient of the proof of $n$-correlation will be the
following residue identity for $J^*(A;B)$:
\begin{lemma} \label{lem:residue} Suppose that $\alpha^*\in A$ and $\beta^*\in B$. Let $A'=A-\{\alpha^*\}$
and $B'=B-\{\beta^*\}.$ Then $J^*(A;B)$ has a simple pole at
$\alpha^*=-\beta^*$ with
\begin{eqnarray}
\operatornamewithlimits{Res}_{\alpha^*=-\beta^*} J^*(A;B)
=NJ^*(A';B')+J^*(A';B)+J^*(A' +\{-\beta^*\};B').
\end{eqnarray}
\end{lemma}
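In the simplest instance $A=\{\alpha^*\}$, $B=\{\beta^*\}$, the lemma can be seen directly. Theorem \ref{theo:J} gives
$J^*(A;B)=\left(\frac{z'}{z}\right)'(\alpha^*+\beta^*)+e^{-N(\alpha^*+\beta^*)}z(\alpha^*+\beta^*)z(-\alpha^*-\beta^*)$,
and since $\left(\frac{z'}{z}\right)'(x)=\frac{1}{x^2}+O(1)$ is even while
$e^{-Nx}z(x)z(-x)=-\frac{1}{x^2}+\frac{N}{x}+O(1)$, the residue at $\alpha^*=-\beta^*$ equals $N$. This matches the right-hand side of the lemma, as $J^*(\emptyset;\emptyset)=1$ while $J^*(\emptyset;B)=J^*(\{-\beta^*\};\emptyset)=0$.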
\begin{proof}
By Theorem \ref{theo:J} we have
\begin{eqnarray}
J^*(A;B)=\sum_{{S\subset A\atop T\subset B}\atop |S|=|T|}
D_{S,T}(\overline{S},\overline{T})
\end{eqnarray}
where throughout this proof (and this paper) $A=\overline{S}+S$,
$B=\overline{T}+T$, and
\begin{eqnarray}
D_{S,T}(\overline{S},\overline{T})=Q(S,T)\sum_{\overline{S}+ \overline{T} =\sum W_r}\prod_{r} H_{S,T}(W_r).
\end{eqnarray}
Here the sum is over any collection of non-empty sets $W_1,W_2,\dots
$ which form a partition of $\overline{S}+ \overline{T}$,
\begin{eqnarray}
Q(S,T)=e^{-N(\sum_{\hat \alpha\in S} \hat \alpha +\sum_{\hat{\beta}\in T}\hat\beta)}
\frac{Z(S,T)Z(S^-,T^-)}{Z^{\dagger}(S,S^-)Z^{\dagger}(T,T^-)}
\end{eqnarray}
and $H_{S,T}(W)$ is defined in (\ref{eqn:Ha})-(\ref{eq:H3}). We
claim that $D$, $Q$ and $H$ have the following properties:
\begin{description}
\item[P1]If $\alpha^*\in \overline{S}$ and $\beta^*\in \overline{T}$, then $Q(S,T)$ is
independent of $\alpha^*$ and $\beta^*$ and
\begin{eqnarray}
H_{S,T}(W)=\left\{ \begin{array}{ll}
\frac{1}{(\alpha^*+\beta^*)^2}+O(1)
& \mbox{ if $W=\{\alpha^*,\beta^*\}$ }\\
O(1) & \mbox{ otherwise }
\end{array} \right.
\end{eqnarray}
\item[P2] If $\alpha^*\in S$ and $\beta^*\in \overline{T}$, then $Q(S,T)$ is
regular when $\alpha^*=-\beta^* $ and
\begin{eqnarray}
H_{S,T}(W)=\left\{ \begin{array}{ll}
\frac{1}{\alpha^*+\beta^*}+O(1)
& \mbox{ if $W=\{\beta^*\}$ }\\
O(1) & \mbox{ otherwise }
\end{array} \right.
\end{eqnarray}
\item[P3] If $\alpha^*\in \overline{S}$ and $\beta^*\in T$, then $Q(S,T)$ is
regular when $\alpha^*=-\beta^*$ and
\begin{eqnarray}
H_{S,T}(W)=\left\{ \begin{array}{ll}
\frac{1}{\alpha^*+\beta^*}+O(1)
& \mbox{ if $W=\{\alpha^*\}$ }\\
O(1) & \mbox{ otherwise }
\end{array} \right.
\end{eqnarray}
\item[P4] If $\alpha^*\in S$ and $\beta^*\in T$ and
$S'=S-\{\alpha^*\}$ and $T'=T-\{\beta^*\}$, then
$Q(S,T)=\big(\frac{-1}{(\alpha^*+\beta^*)^2}+O(1)\big)Q_1(S,T)$
where
\begin{eqnarray}
&&Q_1(S,T)=Q(S',T')\big(1-(\alpha^*+\beta^*)\big(N+
H_{S',T'}(\{\alpha^*\})|_{\alpha^*=-\beta^*}+
H_{S',T'}(\{\beta^*\})\big)\nonumber
\\
&&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad
\qquad\qquad+O(|\alpha^*+\beta^*|^2)\big)
\end{eqnarray}
and
\begin{eqnarray}
&&H_{S,T}(W)=H_{S',T'}(W)-(\alpha^*+\beta^*)\big(H_{S',T'}(W+\{\alpha^*\})_{\alpha^*=-\beta^*}
+H_{S',T'} (W+\{\beta^*\})\big)\nonumber
\\
&&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad
\qquad\qquad\quad+O(|\alpha^*+\beta^*|^2).
\end{eqnarray}
\end{description}
We show that the lemma follows from these four properties and then
prove that these properties hold in this situation. (We will later
demonstrate a proof along very similar lines when we treat the
$n$-correlation of the zeta-zeros.)
From these four properties we obtain four Laurent or Taylor
expansions of $D_{S,T}(\overline{S},\overline{T})$ as a function of
$\alpha^*$ in a neighborhood of $-\beta^*$:
\begin{itemize}
\item
If $\alpha^*\in \overline{S}$ and $\beta^*\in \overline{T}$, then (with $\overline{S}'=\overline{S}-\{\alpha^*\}$
and $\overline{T}'=\overline{T}-\{\beta^*\}$)
\begin{eqnarray}\label{eq:item1}
D_{S,T}(\overline{S},\overline{T})&=&
\left(\frac{1}{(\alpha^*+\beta^*)^2}+O(1)\right)
Q(S,T) \sum_{\overline{S}'+ \overline{T}' =\sum W_r}\prod_{r}
H_{S,T}(W_r)\nonumber\\
&=& \left(\frac{1}{(\alpha^*+\beta^*)^2}+O(1)\right)D_{S,T}(\overline{S}',\overline{T}');
\end{eqnarray}
consequently,
$\operatornamewithlimits{Res}_{\alpha^*=-\beta^*}D_{S,T}(\overline{S},\overline{T})=0$.
\item If $\alpha^*\in S$ and $\beta^*\in \overline{T}$, then
\begin{eqnarray}
\operatornamewithlimits{Res}_{\alpha^*=-\beta^*}D_{S,T}(\overline{S},\overline{T})&=&Q(S'+\{-\beta^*\},T)\sum_{\overline{S}+
\overline{T}' =\sum W_r}\prod_{r} H_{S'+\{-\beta^*\},T}(W_r)
\\&=&D_{S'+\{-\beta^*\},T}(\overline{S},\overline{T}').\nonumber
\end{eqnarray}
\item If $\alpha^*\in \overline{S}$ and $\beta^*\in T$, then
\begin{eqnarray}
\operatornamewithlimits{Res}_{\alpha^*=-\beta^*}D_{S,T}(\overline{S},\overline{T})&=&Q(S,T)\sum_{\overline{S}'+
\overline{T} =\sum W_r}\prod_{r} H_{S,T}(W_r)
\\
&=&D_{S,T}(\overline{S}',\overline{T}).\nonumber
\end{eqnarray}
\item If $\alpha^*\in S$ and $\beta^*\in T$, then
\begin{eqnarray}\label{eq:item4} &&
Q_1(S,T)\sum_{\overline{S}+ \overline{T} =\sum W_r}\prod_r
H_{S,T}(W_r)=Q(S',T') \sum_{\overline{S}+ \overline{T} =\sum
W_r}\prod_r H_{S',T'}(W_r)\\
&&\quad\times\bigg(1-(\alpha^*+\beta^*)\Big(N+
H_{S',T'}(\{\alpha^*\})|_{\alpha^*=-\beta^*}+
H_{S',T'}(\{\beta^*\})\nonumber
\\
&&\qquad\qquad+\sum_r\frac{H_{S',T'}(W_r+\{\alpha^*\})|_{\alpha^*=-\beta^*}
+H_{S',T'}(W_r+\{\beta^*\})}{H_{S',T'}(W_r)}\Big)\nonumber\\
&&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad
+O(|\alpha^*+\beta^*|^2)\bigg).\nonumber
\end{eqnarray}
Therefore in this final case,
\begin{eqnarray}
\operatornamewithlimits{Res}_{\alpha^*=-\beta^*}D_{S,T}(\overline{S},\overline{T})=N
D_{S',T'}(\overline{S},\overline{T})+D_{S',T'}(\overline{S}+
\{-\beta^*\},\overline{T})+D_{S',T'}(\overline{S},\overline{T}+
\{\beta^*\}).
\end{eqnarray}
Note that (\ref{eq:item4}) can be written as
\begin{eqnarray}\label{eq:DDDD}
D_{S',T'}(\overline{S},\overline{T})\Big(1+O\big(|\alpha^*+\beta^*|\big)\Big).
\end{eqnarray}
\end{itemize}
By (\ref{eq:item1}) and (\ref{eq:item4}) the double poles in P1
and P4 cancel because
\begin{eqnarray}
\sum_{S\subset A, T\subset B\atop {|S|=|T|\atop \{\alpha^*\}\in S,
\{\beta^*\}\in T}}D_{S',T'}(\overline{S},\overline{T}) =
\sum_{S\subset A, T\subset B\atop {|S|=|T|\atop \{\alpha^*\}\notin
S, \{\beta^*\}\notin T}}D_{S,T}(\overline{S}',\overline{T}')
=J^*(A';B');
\end{eqnarray}
therefore, the pole at $\alpha^*=-\beta^*$ is simple.
Combining the four bullet-points above, we have (where as usual
the primed notation means that $\alpha^*$ or $\beta^*$ has been
removed from that set)
\begin{eqnarray}
&&\operatornamewithlimits{Res}_{\alpha^*=-\beta^*}
J^*(A;B)\nonumber
\\
&&\qquad=\sum_{S\subset A, T\subset B\atop {|S|=|T|\atop
\{\alpha^*\}\in S, \{\beta^*\}\in T}} N D_{S',T'}
(\overline{S},\overline{T})+D_{S',T'}(\overline{S}+
\{-\beta^*\},\overline{T})+D_{S',T'}(\overline{S},\overline{T}+
\{\beta^*\})
\\
&&\qquad \qquad\qquad+\sum_{S\subset A, T\subset B\atop
{|S|=|T|\atop \{\alpha^*\}\notin S, \{\beta^*\}\in T}}
D_{S,T}(\overline{S}',\overline{T})+ \sum_{S\subset A, T\subset
B\atop {|S|=|T|\atop \{\alpha^*\}\in S, \{\beta^*\}\notin T}}
D_{S'+\{-\beta^*\},T}(\overline{S},\overline{T}')\nonumber
\end{eqnarray}
Note that since $\alpha^*$ doesn't appear in any of the summands,
with the temporary convention that $A'=R+\overline{R}$, we can
relabel two of the sums as follows:
\begin{eqnarray}
&&\sum_{S\subset A, T\subset B\atop {|S|=|T|\atop \{\alpha^*\}\in S,
\{\beta^*\}\in T}}D_{S',T'}(\overline{S},\overline{T}+ \{\beta^*\})
+ \sum_{S\subset A, T\subset B\atop {|S|=|T|\atop \{\alpha^*\}\notin
S,
\{\beta^*\}\in T}} D_{S,T}(\overline{S}',\overline{T})\nonumber \\
&&\quad \qquad=\sum_{R\subset A', T\subset B\atop {|R|=|T|\atop
\{\beta^*\}\notin T}}D_{R,T}(\overline{R},\overline{T}) +
\sum_{R\subset A', T\subset B\atop {|R|=|T|\atop
\{\beta^*\}\in T}} D_{R,T}(\overline{R},\overline{T})\nonumber \\
&& \quad \qquad= J^*(A';B).
\end{eqnarray}
Similarly,
\begin{eqnarray}
&&\Bigg(\sum_{S\subset A, T\subset B\atop {|S|=|T|\atop
\{\alpha^*\}\in S, \{\beta^*\}\in T}}D_{S',T'}(\overline{S}+
\{\alpha^*\},\overline{T}) +\sum_{S\subset A, T\subset B\atop
{|S|=|T|\atop \{\alpha^*\}\in S, \{\beta^*\}\notin T}}
D_{S,T}(\overline{S},\overline{T}')\Bigg) \Bigg|_{\alpha^*=-\beta^*}
\nonumber
\\
&&\quad\qquad\quad=J^*(A;B')\big|_{\alpha^*=-\beta^*} =
J^*(A'+\{-\beta^*\};B').
\end{eqnarray}
Thus we have arrived at
\begin{eqnarray}
\operatornamewithlimits{Res}_{\alpha^*=-\beta^*} J^*(A;B)
=NJ^*(A';B')+J^*(A';B)+J^*(A' +\{-\beta^*\};B'),
\end{eqnarray}
which is the statement of the lemma.
Now we verify that properties P1 through P4 are satisfied in the
random matrix situation where we have
\begin{eqnarray}
Q(S,T)=e^{-N(\sum_{\hat \alpha\in S} \hat \alpha +\sum_{\hat{\beta}\in T}\hat\beta)}
\frac{Z(S,T)Z(S^-,T^-)}{Z^{\dagger}(S,S^-)Z^{\dagger}(T,T^-)}
\end{eqnarray}
and
\begin{equation}\label{eqn:Heval}
H_{S,T}(W)=\left\{\begin{array}{ll} \sum_{\hat{\alpha}\in
S}\frac{z'}{z}(\alpha-\hat \alpha)-\sum_{\hat \beta\in T}
\frac{z'}{z}(\alpha+\hat \beta)
&\mbox{ if $W=\{\alpha\}\subset A-S$}\\
\sum_{\hat \beta \in T}\frac{z'}{z}(\beta-\hat \beta )-\sum_{\hat
\alpha \in S} \frac{z'}{z}(\beta+\hat \alpha)
&\mbox{ if $W=\{\beta\}\subset B-T$}\\
\left(\frac{z'}{z}\right)'(\alpha +\beta) & {\rm \;if\;} W=\{\alpha,\beta\} {\rm \;with\;}
{{\alpha\in A-S}\atop {\beta\in B-T}}\\
0&\mbox{ otherwise}
\end{array}
\right.
\end{equation}
We will
start with the simplest case that $\alpha^*\in \overline{S}$,
$\beta^*\in \overline{T}$. The only polar term from
$\alpha^*=-\beta^*$ arises from a situation when one of the
partition parts $W_r=\{\alpha^*,\beta^*\}$ and there is a pole from
$H_{S,T}(W_r)=\left(\frac{z'}{z}\right)'(\alpha^*+\beta^*)$. Since
$\left(\frac{z'}{z}\right)'(x)=1/x^2+O(1)$ and $\alpha^*$ and
$\beta^*$ don't appear in $Q(S,T)$, this completes the proof of P1.
Next, suppose that $\alpha^*\in
S$ and $\beta^*\in \overline{T}$. The only pole in
$D_{S,T}(\overline{S},\overline{T})$ occurs in the product of the
$H$ for $H_{S,T}(W_r)$ when $W_r=\{\beta^*\}$. We have
\begin{eqnarray}\label{eqn:innotin}
H_{S,T}(\{\beta^*\})=\sum_{\hat\beta \in T}\frac{z'}{z}(\hat
\beta-\beta^*)- \sum_{\hat \alpha\in S}\frac{z'}{z}(\beta^*+\hat
\alpha)
\end{eqnarray}
for which, when $\hat \alpha=\alpha^*$, the term
$-\frac{z'}{z}(\beta^*+\alpha^*)$ has a simple pole at $
\alpha^*=-\beta^*$ with residue 1. $Q(S,T)$ is clearly regular at
$\alpha^*=-\beta^*$.
Next, when $\alpha^*\in \overline{S}$ and $\beta^*\in T$, the only
pole in the product of the $H$ occurs for $H_{S,T}(\{\alpha^*\})$.
We have
\begin{eqnarray}\label{eqn:notinin}
H_{S,T}(\{\alpha^*\})=\sum_{\hat\alpha \in S}\frac{z'}{z}(\hat
\alpha-\alpha^*)- \sum_{\hat \beta\in T}\frac{z'}{z}(\alpha^*+\hat
\beta)
\end{eqnarray}
for which, when $\hat \beta =\beta^*$, the term
$-\frac{z'}{z}(\alpha^*+\beta^*)$ has a simple pole at
$\alpha^*=-\beta^*$ with residue 1. $Q(S,T)$ does not depend on
$\alpha^*$.
Finally, we consider the case
$\alpha^*\in S$ and $\beta^*\in T$. We have
\begin{eqnarray}
Q(S,T)=z(\alpha^*+\beta^*)z(-\alpha^*-\beta^*) Q_1(S,T)
\end{eqnarray}
where
\begin{eqnarray}
Q_1(S,T)&=&Q(S',T')\nonumber
\\
&&\qquad \times e^{-N(\alpha^*+\beta^*)}\frac{\prod_{\hat \beta\in
T'}z(\alpha^*+\hat \beta)z(-\alpha^*-\hat \beta)\prod_{\hat
\alpha\in S'} z(\hat \alpha+\beta^*)z(-\hat
\alpha-\beta^*)}{\prod_{\hat \alpha\in S'}z(\alpha^*-\hat
\alpha)z(\hat \alpha-\alpha^*)\prod_{\hat \beta\in T'}
z(\beta^*-\hat \beta)z(\hat \beta-\beta^*)}.
\end{eqnarray}
Note that
\begin{eqnarray}
Q_1(S,T)\big|_{\alpha^*=-\beta^*}=Q(S',T').
\end{eqnarray}
We are well on our way to verifying property P4 because
$z(\alpha^*+\beta^*)z(-\alpha^*-\beta^*)
=\frac{-1}{(\alpha^*+\beta^*)^2}+\frac{1}{12}+O(|\alpha^*+\beta^*|)$
and we have an expansion for $Q_1(S,T)$ in the neighborhood of
$\alpha^*=-\beta^*$:
\begin{eqnarray} \label{eqn:Q} \nonumber
Q_1(S,T)&=&
Q(S',T')\big(1-N(\alpha^*+\beta^*)+O(|\alpha^*+\beta^*|^2)\big)\\
&&\qquad \times \bigg(1+(\alpha^*+\beta^*)\bigg( \sum_{\hat\alpha
\in S'}\Big( \frac{z'}{z}(\hat \alpha+\beta^*)-
\frac{z'}{z}(-\beta^*-\hat\alpha)\Big)\nonumber
\\
&&\qquad \qquad +\sum_{\hat \beta\in T'}
\Big(\frac{z'}{z}(-\beta^*+\hat \beta)-\frac{z'}{z}(\beta^*-\hat \beta)\Big)\bigg)
+O(|\alpha^*+\beta^*|^2)\bigg)\nonumber\\
&=&Q(S',T')\bigg(1-(\alpha^*+\beta^*)\big(N+H_{S',T'}(\{\alpha^*\})|_{\alpha^*=-\beta^*}\\
&&\qquad \qquad\qquad+
H_{S',T'}(\{\beta^*\})\big)+O(|\alpha^*+\beta^*|^2)\bigg).\nonumber
\end{eqnarray}
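The expansion $z(\alpha^*+\beta^*)z(-\alpha^*-\beta^*)=\frac{-1}{(\alpha^*+\beta^*)^2}+\frac{1}{12}+O(|\alpha^*+\beta^*|)$ used here is easy to confirm symbolically; as before we assume $z(x)=(1-e^{-x})^{-1}$:

```python
import sympy as sp

x = sp.symbols('x')
# z(x) z(-x) with the assumed z(x) = (1 - e^{-x})^{-1}
zz = 1 / ((1 - sp.exp(-x)) * (1 - sp.exp(x)))

# z(x) z(-x) = -1/x^2 + 1/12 + O(x^2)
print(sp.limit(x**2 * zz, x, 0))     # coefficient of the 1/x^2 term
print(sp.limit(zz + 1/x**2, x, 0))   # constant term of the expansion
```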
Now we obtain an expansion for $H_{S,T}(W)$, where we remember that
$W\subset \overline{S}+\overline{T}$ and so $W$ does not contain
$\alpha^*$ or $\beta^*$. By (\ref{eqn:Heval}) we have that
\begin{eqnarray}
H_{S,T}(\{\alpha\})&=&\sum_{\hat{\alpha}\in
S}\frac{z'}{z}(\alpha-\hat \alpha)-\sum_{\hat \beta\in T}
\frac{z'}{z}(\alpha+\hat \beta)\nonumber\\
&=& H_{S',T'}(\{\alpha\})+ \frac{z'}{z}(\alpha-\alpha^*)-
\frac{z'}{z}(\alpha+ \beta^*);
\end{eqnarray}
\begin{eqnarray}
H_{S,T}(\{\beta\})&=&\sum_{\hat \beta \in
T}\frac{z'}{z}(\beta-\hat \beta )-
\sum_{\hat \alpha \in S} \frac{z'}{z}(\beta+\hat \alpha)\nonumber\\
&=& H_{S',T'}(\{\beta\})+ \frac{z'}{z}(\beta-\beta^*)-
\frac{z'}{z}(\beta+ \alpha^*);
\end{eqnarray}
and
\begin{eqnarray}
H_{S,T}(\{\alpha,\beta\})=\left(\frac{z'}{z}\right)'(\alpha
+\beta)=H_{S',T'}(\{\alpha,\beta\}).
\end{eqnarray}
Thus,
\begin{eqnarray}
H_{S,T}(W)\bigg|_{\alpha^*=-\beta^*}=H_{S',T'}(W)
\end{eqnarray}
and
\begin{equation}
\frac{d}{d\alpha^*} H_{S,T}(W)\bigg|_{\alpha^*=-\beta^*}=\left\{
\begin{array}{ll}
-\left(\frac{z'}{z}\right)'(\alpha+\beta^*)
& \mbox{ if $W=\{\alpha\}$ }\\
-\left(\frac{z'}{z}\right)'(\beta-\beta^*)& \mbox{ if $W=\{\beta\}$} \\
0&\mbox{ otherwise}
\end{array}\right.
\end{equation}
Note that we can write this as
\begin{equation}
\frac{d}{d\alpha^*}
H_{S,T}(W)\bigg|_{\alpha^*=-\beta^*}=-H_{S',T'}(W+\{\alpha^*\})\big|_{\alpha^*=-\beta^*}
-H_{S',T'}(W+\{\beta^*\}),
\end{equation}
where one or both of the terms will be zero. From this the
expansion of $H_{S,T}(W)$ in P4 follows.
This concludes the proof of Lemma \ref{lem:residue}.
\end{proof}
\subsection{$n$-correlation via the ratios theorem}
In this section we will prove the following expression for the
$n$-correlation.
\begin{theorem} \label{theo:offtheline}
Let $\mathcal C_-$ denote the path from $-\delta+\pi i$ down to
$-\delta-\pi i$ and let $\mathcal C_+$ denote the path from
$\delta-\pi i$ up to $\delta+\pi i$ and let $f$ be a
$2\pi$-periodic, holomorphic function of $n$ variables. Using the
notation $J(A;B)$ from Theorem \ref{theo:J},
\begin{eqnarray}\label{eq:offtheline}&&\int_{U(N)}\sum_{j_1,\dots ,j_n=1}^N
f(\theta_{j_1},\dots,\theta_{j_n})dX\nonumber
\\
&&\qquad =\frac{1}{(2\pi i)^n} \sum_{K+L+M=
\{1,\dots,n\}}(-1)^{|L|+|M|} N^{|M|} \\
&&\qquad\qquad\qquad \times\int_{\mathcal {C_+}^K} \int_{\mathcal
{C_-}^{L+ M}}J(z_K;-z_L) f(iz_1,\dots,iz_n)~dz_1\dots
~dz_n\nonumber
\end{eqnarray}
where
$z_K=\{z_k:k\in K\}$, $-z_L=\{-z_\ell:\ell\in L\}$ and $\int_{\mathcal {C_+}^K} \int_{\mathcal {C_-}^{L+ M}}$
means that we are integrating all of the variables in $z_K$ along the $\mathcal C_+$
path and all of the variables in $z_{L}$ or $z_{M}$ along the $\mathcal C_-$
path.
\end{theorem}
\begin{proof}
Since
\begin{eqnarray}
g(z)=\Lambda_X(e^z)=\prod_{j=1}^N\left(1-e^ze^{-i\theta_j}\right)
\end{eqnarray}
has zeros at $z=i\theta_j+2\pi i m$, $m\in\mathbb Z$, by Cauchy's
theorem we can express a sum
\begin{eqnarray}
\sum_{j=1}^Nf(\theta_j)=\frac{1}{2\pi i}\int_{\mathcal
C}\frac{g'}{g}(z)f(z/i)~dz =\frac{1}{2\pi i}\int_{\mathcal C}e^z
\frac{\Lambda_X'}{\Lambda_X}(e^z)f(z/i)~dz
\end{eqnarray}
where $\mathcal C$ is a positively oriented contour which encloses
a subinterval of the imaginary axis of length $2\pi$. We choose a
specific path $\mathcal C$ to be the positively oriented rectangle
that has vertices $\delta-\pi i,\delta+\pi i, -\delta+\pi i,
-\delta-\pi i$ where $\delta $ is a small positive number. More
generally, we have
\begin{eqnarray}
&&\sum_{j_1,\dots
,j_n=1}^Nf(\theta_{j_1},\dots,\theta_{j_n})\nonumber
\\
&&\qquad\qquad=\frac{1}{(2\pi i)^n} \int_{\mathcal C}\dots
\int_{\mathcal C} \prod_{j=1}^n e^{z_j}
\frac{\Lambda_X'}{\Lambda_X}(e^{z_j})
f(z_1/i,\dots,z_n/i)~dz_1\dots dz_n.
\end{eqnarray}
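As a numerical illustration (not part of the proof), the one-variable case of this contour identity can be verified for a fixed spectrum; the eigenangles, the test function $f(\theta)=e^{i\theta}$ and the width $\delta=1/2$ below are all arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.uniform(-2.5, 2.5, 5)   # eigenangles, kept away from the horizontal sides

def log_deriv(z):
    # g'/g for g(z) = prod_j (1 - e^z e^{-i theta_j}); simple zeros at z = i theta_j
    w = np.exp(np.subtract.outer(z, 1j * theta))
    return (-w / (1 - w)).sum(axis=-1)

f = lambda t: np.exp(1j * t)        # a 2*pi-periodic holomorphic test function
delta, M = 0.5, 40001

def leg(z0, z1):
    # directed composite-trapezoid integral of (g'/g)(z) f(z/i) from z0 to z1
    u = np.linspace(0, 1, M)
    z = z0 + (z1 - z0) * u
    h = log_deriv(z) * f(z / 1j)
    return (h[:-1] + h[1:]).sum() / (2 * (M - 1)) * (z1 - z0)

# positively oriented rectangle with vertices delta -/+ pi i, -delta +/- pi i
c = [delta - 1j*np.pi, delta + 1j*np.pi, -delta + 1j*np.pi, -delta - 1j*np.pi]
total = sum(leg(c[k], c[(k + 1) % 4]) for k in range(4))
print(abs(total / (2j * np.pi) - f(theta).sum()))   # the contour integral recovers the sum
```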
We average this equation over $X\in U(N)$ and, after a change of
variables $z_j\to -z_j$, we obtain
\begin{eqnarray} \label{eqn:basic}
&&\int_{U(N)}\sum_{j_1,\dots
,j_n=1}^Nf(\theta_{j_1},\dots,\theta_{j_n})dX\nonumber\\
&&\qquad\qquad=\frac{1}{(2\pi i)^n} \int_{\mathcal
C^n}J(z_1,\dots,z_n;) f(iz_1,\dots,iz_n)~dz_1\dots dz_n.
\end{eqnarray}
Let $\mathcal C_-$ denote the path along the left side of
$\mathcal C$ from $-\delta+\pi i$ down to $-\delta-\pi i$ and let
$\mathcal C_+$ denote the path along the right side of $\mathcal
C$ from $\delta-\pi i$ up to $\delta+\pi i$. Since the
periodicity of the function $f$ implies that the horizontal
segments of the contours cancel, each variable $z_j$ is on one or
the other of these two vertical paths. Thus, our expression is a
sum of $2^n$ terms, each term being an $n$-fold integral with each
integral on a vertical line segment, either $\mathcal C_-$ or
$\mathcal C_+.$ For each variable $z_j$ which is on $\mathcal C_-$
we use the functional equation (\ref{eqn:fe}) to replace $
e^{-z_j}\frac{\Lambda_X'}{\Lambda_X}(e^{-z_j}) $ by
$N-e^{z_j}\frac{\Lambda_{X^*}'}{\Lambda_{X^*}}(e^{z_j}). $ In this
way we find that
\begin{eqnarray}&&
\frac{1}{(2\pi i)^n}\int_{\mathcal C^n}J(z_1,\dots,z_n;)
f(iz_1,\dots,iz_n)~dz_1\dots dz_n\nonumber\\
&& \qquad = \frac{1}{(2\pi
i)^n}\sum_{\epsilon_j\in\{-1,+1\}}\int_{\mathcal
C_{\epsilon_1}}\dots \int_{\mathcal C_{\epsilon_n}}\int_{U(N)}
\;(-1)^n\; \prod_{j=1}^n \left(\frac{1-\epsilon_j}{2}N+\epsilon_j
e^{-\epsilon_j z_j}\frac{\Lambda_{X^{\epsilon_j}}'}
{\Lambda_{X^{\epsilon_j}}}
(e^{-\epsilon_j z_j})\right)\\
&&\qquad \qquad \times f(iz_1,\dots,iz_n)~dXdz_1\dots
dz_n.\nonumber
\end{eqnarray}
Another way to write this equation is
\begin{eqnarray}&&
\frac{1}{(2\pi i)^n}\int_{\mathcal C^n}J(z_1,\dots,z_n;)
f(iz_1,\dots,iz_n)~dz_1\dots dz_n\nonumber\\
&& \qquad= \frac{1}{(2\pi i)^n}\int_{U(N)}\;(-1)^n
\sum_{K\subset\{1,\dots,n\}} \prod_{j\in K} \int_{\mathcal
C_+}e^{-z_j}\frac{\Lambda_X'} {\Lambda_X} (e^{- z_j})
\prod_{j\notin K}\int_{\mathcal C_-}
\left(N-e^{z_j}\frac{\Lambda_{X^*}'}
{\Lambda_{X^*}}
(e^{ z_j})\right)\\
&& \qquad \qquad \times f(iz_1,\dots,iz_n)dz_1\dots
dz_n~dX.\nonumber
\end{eqnarray}
The expansion of the product over $j\notin K$ can be easily
expressed as a sum over further subsets of $\{1,\ldots,n\}$. We
have
\begin{eqnarray}&&
\frac{1}{(2\pi i)^n}\int_{\mathcal C^n}J(z_1,\dots,z_n;)
f(iz_1,\dots,iz_n)~dz_1\dots dz_n\nonumber\\
&& \qquad= \frac{1}{(2\pi i)^n}\int_{U(N)}\;(-1)^n
\sum_{K+L+M=\{1,\dots,n\}} \prod_{j\in K} \int_{\mathcal
C_+}e^{-z_j}\frac{\Lambda_X'} {\Lambda_X} (e^{- z_j})\prod_{j\in
L}\int_{\mathcal C_-}
(-1)e^{z_j}\frac{\Lambda_{X^*}'}
{\Lambda_{X^*}} (e^{ z_j})
\\
&& \qquad \qquad \times \prod_{j\in M}\int_{\mathcal C_-}N
f(iz_1,\dots,iz_n)~dz_1\dots ~dz_n~dX.\nonumber
\end{eqnarray}
Using this last equation, (\ref{eqn:basic}) and the definition of
$J(A;B)$ from Theorem \ref{theo:J}, we have the statement of the
Theorem.
\end{proof}
\subsection{$n$-correlation theorem}
We will now prove our main theorem.
\begin{theorem} Let $J^*$ be as defined in Theorem \ref{theo:J}. Then
\label{theo:main}
\begin{eqnarray}&&\int_{U(N)}\sideset{}{^*}\sum_{1\le j_1,\dots , j_n\le N} f(\theta_{j_1},\dots,\theta_{j_n})
dX_N\nonumber
\\
&&\qquad =\frac{1}{(2\pi )^n}\int_{[0,2\pi]^n} \sum_{K+L+M=
\{1,\dots,n\}} N^{|M|}
J^*(-i\theta_K;i\theta_L)
f(\theta_1,\dots,\theta_n)~d\theta_1\dots ~d\theta_n
\end{eqnarray}
where $i\theta_L=\{i\theta_\ell:\ell\in L\}$,
$-i\theta_K=\{-i\theta_k:k\in K\}$ and the star on the sum
indicates summation over distinct indices. Moreover, the
integrand has no poles on the path of integration.
\end{theorem}
Note the similar forms of Theorem \ref{theo:offtheline} and
Theorem \ref{theo:main}. In the former the sum is over all
indices and the integrals are on paths slightly shifted away from
the imaginary axis and in the latter the sum is over distinct
indices and the integration is along the imaginary axis. Moving
the integrals onto the imaginary axis results in some principal
value terms, and surprisingly these cancel exactly with extra
terms in the sum in Theorem \ref{theo:offtheline}.
We actually prove a more general theorem (Theorem \ref{theo:main1}
below). We start with a little notation: For a given $n$ and $0\leq
R\leq n$, let the sum $\sum_{j_1,\ldots,j_n=1}^N$ with the
additional condition that $j_m\neq j_{\ell}$ if $m>R$ {\em and}
$\ell>R$ be denoted by $\sum^{n,R}$. If we additionally fix three
disjoint sets $K$, $L$ and $M$ whose union is $\{1,2,\ldots,n\}$,
then we introduce the following notation for the familiar integral
\begin{eqnarray}\label{eq:Jfdef}
&&\int_{-\pi i}^{\pi i} \cdots \int_{-\pi i}^{\pi i}
\int_{C_+^{K\cap \{1,\ldots,R\}}} \int_{C_-^{(L + M)\cap
\{1,\ldots,R\}}} J^*(z_K;-z_L)f(iz_1,\ldots,iz_n) dz_1\cdots
dz_R\;dz_{R+1} \ldots dz_{n}\nonumber \\
&&\qquad =:I_{f;K,L,M}^{n,R}.
\end{eqnarray}
Once again, the integrals on the imaginary axis are principal
value integrals.
We have already derived equation (\ref{eq:offtheline}). In the
new notation this is written as
\begin{eqnarray}
\label{eq:offtheline2} &&(2\pi i)^n \int_{U(N)} \sum\!^{n,n}
f(\theta_{j_1},\ldots,\theta_{j_n}) dX = \sum_{K+L+M=\{1,\ldots,n\}}
(-1)^{|L + M|}N^{|M|}I^{n,n}_{f;K,L,M}.
\end{eqnarray}
Note that (\ref{eq:offtheline}) features $J$ whereas
$I^{n,R}_{f;K,L,M}$ is defined in terms of $J^*$. However, when
$R=n$ (that is, all the integrals are off the imaginary axis)
Theorem \ref{theo:J} says that $J$ and $J^*$ are equal.
With the help of Lemma \ref{lem:residue} we will prove the
following:
\begin{theorem}
\label{theo:main1} Using the notation of (\ref{eq:Jfdef}) and the
preceding paragraph, with $0\leq R \leq n$,
\begin{eqnarray}
\label{eq:main1} &&(2\pi i)^n \int_{U(N)} \sum\!^{n,R}
f(\theta_{j_1},\ldots,\theta_{j_n}) dX = \sum_{K+L+M=\{1,\ldots,n\}}
(-1)^{|(L + M)\cap\{1,\ldots,R\}|}N^{|M|}I^{n,R}_{f;K,L,M}.
\end{eqnarray}
\end{theorem}
\begin{proof} We will prove this by induction. Assume that Theorem
\ref{theo:main1} holds for $n-1$ and any $0\leq R\leq n-1$.
We start with the right side of (\ref{eq:main1}) and move the
$z_R$ integral onto the imaginary axis, resulting in a principal
value integral and a residue at $z_R=z_t$, for $t>R$, in any term
where $R\in K$, $t\in L$ {\em or} $R\in L, t\in K$. A close
inspection of the integral and the form of $J^*(z_K;-z_L)$ reveals
that there is no pole unless $R$ and $t$ are in one of these two
configurations (see the comment in the final paragraph of Section
\ref{sect:ratiolambda}). Also, if $t<R$ then the contour on which
$z_t$ is integrated has not yet been moved and so it remains on
the far side of the imaginary axis from the $z_R$ contour and
hence does not yield a pole. Each residue contribution comes in
the form of the three terms in Lemma \ref{lem:residue}, multiplied
by $\pi i$. (It is $\pi i$ rather than $2\pi i$ because the $z_R$
contour is moving precisely onto the imaginary axis, where $z_t$
lies, yielding half the contribution of a contour completely
encircling the pole.) Thus
\begin{eqnarray}
\label{eq:indproof1} &&\sum_{K+L+M=\{1,\ldots,n\}} (-1)^{|(L+
M)\cap\{1,\ldots,R\}|}N^{|M|}I^{n,R}_{f;K,L,M}\nonumber \\
&&= \sum_{K+L+M=\{1,\ldots,n\}} (-1)^{|(L+
M)\cap\{1,\ldots,R-1\}|}N^{|M|}I^{n,R-1}_{f;K,L,M}\\
&&\qquad+2\times\sum_{t=R+1}^n \pi i \Big[
\sum_{K'+L'+M=\{1,\ldots,n\}-\{R,t\}} (-1)^{|(L' +
M)\cap\{1,\ldots,R-1\}|}N^{|M|} \nonumber \\
&&\qquad\qquad\times\int_{-\pi i}^{\pi i} \cdots \int_{-\pi
i}^{\pi i} \int_{C_+^{K'\cap \{1,\ldots,R-1\}}} \int_{C_-^{(L'+
M)\cap \{1,\ldots,R-1\}}}
\Big(J^*(z_{K'+\{t\}};-z_{L'})\nonumber\\
&&\qquad\qquad\qquad\qquad+J^*(z_{K'};-z_{L'+\{t\}})+NJ^*(z_{K'};-z_{L'})\Big)\nonumber\\
&&\qquad\qquad\times
f(iz_1,\ldots,iz_{R-1},iz_t,iz_{R+1},\ldots,iz_n) dz_1\cdots
dz_{R-1}\;dz_{R+1} \ldots dz_{n}\Big]\nonumber
\end{eqnarray}
The final sum above contains the two identical contributions from
the case $R\in K, t\in L$ and the case $R\in L, t\in K$. To
confirm the sign of each term, if $R\in K$, the residue is
multiplied by $+i\pi$ because the contour of integration moves in
from the right of the imaginary axis (skirting the pole in the
positive direction) and the argument $z_R$ in $J^*(z_K;-z_L)$
occurs with a plus sign. Note that $(-1)^{|(L'+
M)\cap\{1,\ldots,R-1\}|}=(-1)^{|(L+ M)\cap\{1,\ldots,R\}|}$ if
$R\in K$ and $L=L'+\{t\}$. On the other hand, if $R\in L$ then
the $z_R$ contour comes from the left of the imaginary axis, but
as $C_-$ is directed downwards, the pole is still circled in the
positive direction. However, $z_R$ appears in $J^*(z_K;-z_L)$
with a minus sign, so the residue acquires an extra minus sign,
which is captured above because $(-1)^{|(L'+
M)\cap\{1,\ldots,R-1\}|}=(-1)\times(-1)^{|(L+
M)\cap\{1,\ldots,R\}|}$ if $L=L'+\{R\}$.
In the integrals in the final sum above we now relabel the
integration variables \linebreak
$z_1,z_2,\ldots,z_{R-1},z_{R+1},\ldots,z_n$ by
$z_1,z_2,\ldots,z_{n-1}$ so that
$f(z_1,\ldots,z_{R-1},z_t,z_{R+1},\ldots,z_t,\ldots,z_n)$ is
replaced by
$f(z_1,\ldots,z_{R-1},z_{t-1},z_R,\ldots,z_{t-1},\ldots,z_{n-1})$
$=:g_t(z_1,\ldots,z_{n-1})$. In addition, for some function $h$ of
sets $K,L$ and $M$,
\begin{eqnarray}
&&\sum_{K+L+M=\{1,\ldots,m-1\}}
(h(K+\{m\},L,M)+h(K,L+\{m\},M)+h(K,L,M+\{m\}))\nonumber
\\
&&\qquad\qquad\qquad=\sum_{K+L+M=\{1,\ldots,m\}} h(K,L,M),
\end{eqnarray}
so we now rewrite the three $J^*$ terms in the final sum in
(\ref{eq:indproof1}) as a sum over partitions of
$\{1,\ldots,n-1\}$. Thus (\ref{eq:indproof1}) equals
\begin{eqnarray}
\label{eq:indproof2} && \sum_{K+L+M=\{1,\ldots,n\}} (-1)^{|(L+
M)\cap\{1,\ldots,R-1\}|}N^{|M|}I^{n,R-1}_{f;K,L,M} \nonumber\\
&&\quad+2\pi i\sum_{t=R+1}^n \sum_{K+L+M=\{1,\ldots,n-1\}}
(-1)^{|(L+ M)\cap\{1,\ldots,R-1\}|}N^{|M|}I_{g_t;K,L,M}^{n-1,R-1}.
\end{eqnarray}
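The set identity displayed above simply records that a partition of $\{1,\ldots,m\}$ into three labelled blocks arises from a partition of $\{1,\ldots,m-1\}$ by placing $m$ into one of the blocks; a brute-force check for small $m$ (the integer-valued test function $h$ is an arbitrary choice):

```python
import itertools, random

def partitions3(elements):
    # every ordered partition of `elements` into three labelled, possibly empty blocks
    for labels in itertools.product('KLM', repeat=len(elements)):
        yield tuple(frozenset(e for e, l in zip(elements, labels) if l == b)
                    for b in 'KLM')

random.seed(1)
cache = {}
def h(K, L, M):
    # an arbitrary function of the three sets, cached so each triple has a fixed value
    return cache.setdefault((K, L, M), random.randrange(10**6))

m = 4
lhs = sum(h(K | {m}, L, M) + h(K, L | {m}, M) + h(K, L, M | {m})
          for K, L, M in partitions3(range(1, m)))
rhs = sum(h(K, L, M) for K, L, M in partitions3(range(1, m + 1)))
print(lhs == rhs)   # True
```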
By the induction hypothesis, this equals
\begin{eqnarray}
&& \sum_{K+L+M=\{1,\ldots,n\}} (-1)^{|(L+
M)\cap\{1,\ldots,R-1\}|}N^{|M|}I^{n,R-1}_{f;K,L,M} \nonumber\\
&&\quad+2\pi i\sum_{t=R+1}^n (2\pi i)^{n-1}\int_{U(N)}
\sum\!^{n-1,R-1} g_t(\theta_{j_1},\ldots,\theta_{j_{n-1}})dX.
\end{eqnarray}
Note that the left side of (\ref{eq:main1}) can be written as
\begin{eqnarray}
&&(2\pi i)^n \int_{U(N)}\sum\!^{n,R-1}
f(\theta_{j_1},\ldots,\theta_{j_n})dX \nonumber\\
&&\qquad\qquad+ (2\pi i)^n\int_{U(N)}\sum_{t=R+1}^n
\sum\!^{n-1,R-1} g_t(\theta_{j_1},\ldots,\theta_{j_{n-1}})dX,
\end{eqnarray}
where the second sum incorporates all the terms where
$\theta_{j_R}=\theta_{j_t}$, $t>R$, and then uses the same
relabelling of the variables $\theta_{j_1},\theta_{j_2},\ldots,
\theta_{j_{R-1}},\theta_{j_{R+1}},\ldots,\theta_{j_n}$ and the
definition of $g_t$ as described before (\ref{eq:indproof2}).
Therefore
\begin{eqnarray}
&&(2\pi i)^n \int_{U(N)} \sum\!^{n,R-1}
f(\theta_{j_1},\ldots,\theta_{j_n}) dX \nonumber \\
&&\qquad\qquad= \sum_{K+L+M=\{1,\ldots,n\}} (-1)^{|(L+
M)\cap\{1,\ldots,R-1\}|}N^{|M|}I^{n,R-1}_{f;K,L,M}
\end{eqnarray}
and so, using the induction hypothesis, we have used
(\ref{eq:main1}) for a given $n$ and $R$ to deduce the same
expression for $n$ and $R-1$. Since in (\ref{eq:offtheline2}) we
have derived the expression for $R=n$ for any $n$, we have shown
that if (\ref{eq:main1}) is true for $n-1$, it is also true for $n$.
To justify the induction hypothesis in $n$, we consider $n=1$.
Equation (\ref{eq:offtheline}) states
\begin{equation}
2\pi i\int_{U(N)} \sum\!^{1,1}f(\theta_{j_1}) dX=
\sum_{K+L+M=\{1\}}(-1)^{|(L+ M)\cap \{1\}|} N^{|M|}
I^{1,1}_{f;K,L,M}=-NI_{f;\emptyset,\emptyset,\{1\}}^{1,1}.
\end{equation}
The final step above follows by remembering that
$J^*(A;\emptyset)=J^*(\emptyset;B)=0$ for any nonempty sets $A$ and
$B$. Since $\sum^{1,1}=\sum^{1,0}$ and
$I_{f;\emptyset,\emptyset,\{1\}}^{1,1}=-I_{f;\emptyset,\emptyset,\{1\}}^{1,0}$,
it is immediate that
\begin{equation}
2\pi i\int_{U(N)} \sum\!^{1,0}f(\theta_{j_1}) dX=
\sum_{K+L+M=\{1\}}(-1)^{|(L+ M)\cap \emptyset|} N^{|M|}
I^{1,0}_{f;K,L,M}.
\end{equation}
This completes the proof of Theorem \ref{theo:main1}.
\end{proof}
It remains to verify that the integrand in Theorem \ref{theo:main}
has no poles on the path of integration. We have already
confirmed in Lemma \ref{lem:residue} that each
$J^*(-i\theta_K;i\theta_L)$ has only a simple pole at
$\theta_k=\theta_\ell$ for $\theta_k\in \theta_K$ and
$\theta_\ell\in \theta_L$.
We check that
\begin{eqnarray}\label{eq:singset}
\sum_{K+L+M=\{1,2,\ldots,n\}} N^{|M|} J^*(-i\theta_K;i\theta_L)
\end{eqnarray}
has no pole at $\theta_1=\theta_2$ for generic values of the
remaining variables. A given $J^*(-i\theta_K;i\theta_L)$ only has
a pole when $\theta_1\in \theta_L$ and $\theta_2 \in \theta_K$, or
vice versa, so
\begin{eqnarray}
&&\operatornamewithlimits{Res}_{\theta_1=\theta_2}
\sum_{K+L+M=\{1,2,\ldots,n\}} N^{|M|}
J^*(-i\theta_K;i\theta_L) \nonumber \\
&&\qquad= \sum_{K+L+M=\{3,\ldots,n\}} N^{|M|}
\operatornamewithlimits{Res}_{\theta_1=\theta_2}\Big(J^*(-i\theta_K+\{-i\theta_1\}
;\{i\theta_2\}+i\theta_L) \nonumber \\
&&\qquad\qquad\qquad\qquad+ J^*(-i\theta_K+\{-i\theta_2\}
;\{i\theta_1\}+i\theta_L)\Big)=0;
\end{eqnarray}
this is zero because $\operatornamewithlimits{Res}_{s=x}f(s,x)=-
\operatornamewithlimits{Res}_{s=x}f(x,s)$. Thus if
(\ref{eq:singset}) had a singular set it would be of complex
dimension less than $n-1$ and this implies that there is no
singular set (see for example \cite{kn:krantz}, Corollary 7.3.2).
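The residue antisymmetry invoked here can be seen on a toy example with a simple pole on the diagonal (the function $\cos(sx)/(s-x)$ is purely illustrative):

```python
import sympy as sp

s, x = sp.symbols('s x')
f = sp.cos(s * x) / (s - x)   # simple pole on the diagonal s = x

r1 = sp.residue(f, s, x)                                        # Res_{s=x} f(s,x)
r2 = sp.residue(f.subs({s: x, x: s}, simultaneous=True), s, x)  # Res_{s=x} f(x,s)
print(sp.simplify(r1 + r2))   # the two residues cancel
```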
Our new proof of $n$-correlation in the case of random matrix
theory is now complete.
\section{Correlations of the Riemann zeros}
Now we turn to the Riemann zeta-function. The goal is to obtain a
precise conjecture for the $n$-correlation of its zeros and we do
this following the method of the previous section for the random
matrix case.
\subsection{The ratios conjecture for the zeta-function}
We derive our formula rigorously from the ratios
conjecture for the zeta-function, which we now state.
\begin{conjecture}[Ratios Conjecture \cite{kn:cfz2}]\label{conj:ratiozeta}
Let $Z_\zeta(A,B)=\prod_{\alpha\in A\atop\beta\in B}
\zeta(1+\alpha+\beta)$ and
\begin{eqnarray}Z_\zeta(A,B;C,D):=
\frac{Z_\zeta(A,B)Z_\zeta(C,D)}
{Z_\zeta(A,D)Z_\zeta(B,C)}.\end{eqnarray}
Further, let
\begin{eqnarray}\label{eq:Azeta}
\mathcal{A}_\zeta(A,B;C,D)=\prod_p Z_{p}(A,B;C,D)\int_0^1\mathcal{A}_{p,\theta}(A,B;C,D)~d\theta
\end{eqnarray}
where
$z_p(x):=(1-p^{-x})^{-1}$, $Z_p(A,B)=\prod_{\alpha\in
A\atop\beta\in B} z_p(1+\alpha+\beta)^{-1}$ and
\begin{eqnarray}Z_p(A,B;C,D):=
\frac{Z_p(A,B)Z_p(C,D)} {Z_p(A,D)Z_p(B,C)}\end{eqnarray}
and
\begin{eqnarray}
\label{eq:Aptheta} \mathcal{A}_{p,\theta}(A,B;C,D):= \frac
{\prod_{\alpha\in A} z_{p,-\theta}(\frac 12 +\alpha)
\prod_{\beta\in B}z_{p,\theta}(\frac 12 +\beta)}{ \prod_{\gamma\in
C}z_{p,-\theta}(\frac 12 +\gamma) \prod_{\delta\in
D}z_{p,\theta}(\frac 12 +\delta)}
\end{eqnarray}
with $z_{p,\theta}(x):=(1-e(\theta)p^{-x})^{-1}$. Then, provided
that $-\frac{1}{4}<\Re \alpha,\Re \beta<\frac{1}{4}$,
$\frac{1}{\log T} \ll \Re \gamma,\Re \delta<\frac{1}{4}$ and $\Im
\alpha,\Im\beta,\Im\gamma,\Im\delta\ll T$, we conjecture that,
with $s=\frac12+it$, for any interval $I\subset [-T,T]$,
\begin{eqnarray}
&&\int_I \frac{\prod_{\alpha\in A}\zeta(s+\alpha)\prod_{\beta\in
B} \zeta(1-s+\beta)}{\prod_{\gamma\in C}\zeta(s+\gamma)
\prod_{\delta\in D}\zeta(1-s+\delta)} ~dt = \int_I \mathcal
R_{\zeta,t}(A,B;C,D) ~dt +O(|I|^{1/2+\epsilon})
\end{eqnarray}
where
\begin{eqnarray}
\mathcal R_{\zeta,t}(A,B;C,D)=\sum_{S\subset A,T\subset B\atop
|S|=|T|} X_t(S,T) Z_\zeta \mathcal{A}_\zeta(\overline{S}+
T^-,\overline{T}+ S^-;C,D).
\end{eqnarray}
Here $T^-$ means the set of all of the negatives of elements of
$T$ (i.e. $T^-:=\{-t:t\in T\}$), $A=S+\overline{S}$,
$B=T+\overline{T}$ and
\begin{eqnarray}
X_t(S,T)=\prod_{\hat{\alpha}\in
S}\chi(s+\hat{\alpha})\prod_{\hat{\beta}\in
T}\chi(1-s+\hat{\beta}),
\end{eqnarray}
where $\chi(1-s)=\chi(s)^{-1}=2(2\pi)^{-s}\Gamma(s)\cos \frac {\pi
s}2$ is the factor from the functional equation
$\zeta(s)=\chi(s)\zeta(1-s)$.
\end{conjecture}
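As a numerical check on the stated form of the functional-equation factor (the evaluation point is an arbitrary choice, and mpmath is assumed to be available):

```python
import mpmath as mp

s = mp.mpc(0.3, 7.0)    # an arbitrary point off the critical line
chi_1ms = 2 * (2 * mp.pi)**(-s) * mp.gamma(s) * mp.cos(mp.pi * s / 2)

# zeta(s) = chi(s) zeta(1-s) is equivalent to zeta(1-s) = chi(1-s) zeta(s)
print(abs(mp.zeta(1 - s) - chi_1ms * mp.zeta(s)))   # essentially zero
```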
\begin{remark}Note
that since $|S|=|T|$, for small shifts $\hat{\alpha}$ and
$\hat{\beta}$ we have
\begin{eqnarray}\label{eq:chiapprox}
X_t(S,T)= e^{-\ell(\sum_{\hat\alpha\in S} \hat \alpha +\sum_{\hat
\beta\in T}\hat\beta)}\bigg(1+O\big(1/(1+|t|)\big)\bigg),
\end{eqnarray}
where $\ell =\log\frac{t}{2\pi}$, which can sometimes be used to
simplify formulae.
\end{remark}
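The approximation in the remark reflects the standard asymptotic $\frac{\chi'}{\chi}(\frac12+it)\approx-\log\frac{t}{2\pi}=-\ell$; a short calculation from the displayed form of $\chi(1-s)$ gives $\frac{\chi'}{\chi}(s)=\log 2\pi-\psi(s)+\frac{\pi}{2}\tan\frac{\pi s}{2}$, which can be illustrated numerically (the height $t=100$ is an arbitrary choice; mpmath assumed):

```python
import mpmath as mp

t = mp.mpf(100)
s = mp.mpc('0.5', t)

# chi(s) = [2 (2 pi)^{-s} Gamma(s) cos(pi s/2)]^{-1}, so
# (chi'/chi)(s) = log(2 pi) - psi(s) + (pi/2) tan(pi s/2)
chi_ld = mp.log(2 * mp.pi) - mp.digamma(s) + (mp.pi / 2) * mp.tan(mp.pi * s / 2)

print(abs(chi_ld + mp.log(t / (2 * mp.pi))))   # small: chi'/chi(1/2+it) ~ -log(t/(2 pi))
```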
The method for constructing the ratios conjecture is detailed in
\cite{kn:cfz2} and is based on the same principles as the recipe
for generating conjectures for moments (see \cite{kn:cfkrs}) of
zeta and $L$-functions. (Moments cover just the case where
$C=\emptyset$ and $D=\emptyset$.)
\begin{corollary}\label{cor:chis}
With the same conditions on $\alpha,\beta,\gamma$ and $\delta$ as
in Conjecture \ref{conj:ratiozeta}, and with conditions on $\mu$
the same as those on $\alpha$ and $\beta$, we have
\begin{eqnarray}
&&\int_I \frac{\prod_{\alpha\in A}\zeta(s+\alpha)\prod_{\beta\in
B} \zeta(1-s+\beta)}{\prod_{\gamma\in C}\zeta(s+\gamma)
\prod_{\delta\in D}\zeta(1-s+\delta)} \prod_{\mu \in U}
\frac{\chi'}{\chi}(s+\mu)~dt \nonumber \\
&&\qquad= \int_I \mathcal R_{\zeta,t}(A,B;C,D)\prod_{\mu \in U}
\frac{\chi'}{\chi}(s+\mu) ~dt +O(|I|^{1/2+\epsilon})
\end{eqnarray}
as a consequence of Conjecture \ref{conj:ratiozeta}.
\end{corollary}
\begin{proof}
This follows immediately by integration by parts using the fact
that
\begin{eqnarray}
&&\frac{\chi'}{\chi} (s) \ll \log(2+|s|)\quad \mbox{ and } \quad
\frac{d}{ds} \frac{\chi'}{\chi}(s)\ll \frac{1}{1+|s|}.
\end{eqnarray}
\end{proof}
\subsection{Averages of logarithmic derivatives of the Riemann
zeta function}
To determine the correlations of the Riemann zeros, we will need a
result about averaging logarithmic derivatives of the zeta
function:
\begin{theorem} \label{theo:Jzeta} Assuming the Ratios Conjecture, if $\Re \alpha_i,\Re
\beta_j>0$ for $\alpha_i\in A$ and $\beta_j \in B$ then
$J_{\zeta,I}(A;B;U)=J^*_{\zeta,I}(A;B;U)+O(|I|^{1/2+\epsilon})$ where for an interval $I$
\begin{eqnarray} \label{eq:defJzeta}
&&J_{\zeta,I}(A;B;U)\\&&\qquad :=\int_I \prod_{\alpha \in A}
\frac{\zeta'}{\zeta}(\tfrac 12 +it+\alpha)\:\prod_{\beta\in
B}\frac{\zeta'}{\zeta}(\tfrac 12 -it+\beta)\prod_{\mu\in
U}\Big(-\frac{\chi'}{\chi}(\tfrac 12 +it + \mu)\Big)~dt\nonumber
\end{eqnarray}
and
\begin{eqnarray}
&& J_{\zeta,I}^*(A;B;U) :=\int_I J_{\zeta,t}^*(A;B;U)~dt,
\end{eqnarray}
where
\begin{eqnarray}
&&J_{\zeta,t}^*(A;B;U):=\sum_{S\subset A,T\subset B\atop
|S|=|T|}X_t(S,T) \nonumber
\frac{Z_\zeta(S,T)Z_\zeta(S^-,T^-)}{{Z_\zeta}^{\dagger}(S,S^-){Z_\zeta}^{\dagger}(T,T^-)}\mathcal{A}_\zeta(T^-,S^-;S,T)
\\&& \qquad \qquad \qquad\qquad \times
\sum_{{\overline{S}+ \overline{T}\atop = W_1+\dots + W_R}
}\prod_{r=1}^R \mathcal{H}_{S,T}(W_r) \times \prod_{\mu\in
U}\Big(-\frac{\chi'}{\chi}(1/2+it+\mu)\Big).
\end{eqnarray}
Here we use the notation $Z_{\zeta}$ as in Conjecture
\ref{conj:ratiozeta} and $Z^{\dagger}_\zeta(A,B)=\prod_{{\alpha\in
A\atop\beta\in B} \atop{\alpha+\beta\neq 0}}
\zeta(1+\alpha+\beta)$. In addition,
$T^-:=\{-t:t\in T\}$, $A=S+\overline{S}$, $B=T+\overline{T}$ and
\begin{eqnarray}\label{eq:HHHH}
\mathcal H_{S,T} (W_r)= H_{\zeta;S,T}(W_r)-\sum_p
H_{p,1;S,T}(W_r)+\sum_p H_{p,2;S,T}(W_r).
\end{eqnarray}
Further, we have
\begin{equation}\label{eqn:H}
H_{\zeta;S,T}(W)=\left\{\begin{array}{ll} \sum_{\hat \alpha\in
S}\frac{\zeta'}{\zeta}(1+\alpha-\hat{\alpha})-\sum_{\hat\beta\in
T} \frac{\zeta'}{\zeta}(1+\alpha +\hat \beta) &\mbox{ if
$W=\{\alpha\}\subset \overline{S}$}
\\
\sum_{\hat\beta\in T}\frac{\zeta'}{\zeta}(1+\beta-\hat
\beta)-\sum_{\hat\alpha\in S} \frac{\zeta'}{\zeta}
(1+\beta+\hat\alpha) &\mbox{ if $W=\{\beta\}\subset \overline{T}$}\\
\left(\frac{\zeta'}{\zeta}\right)'(1+\alpha+\beta) & \mbox{ if
$W=\{\alpha,\beta\}$ with $
{\alpha \in \overline{S}, \atop \beta\in \overline{T}}$}\\
0&\mbox{ otherwise};
\end{array}
\right.
\end{equation}
\begin{equation}\label{eqn:HH}
H_{p,1;S,T}(W)=\left\{\begin{array}{ll} \sum_{\hat \alpha\in
S}\frac{z_p'}{z_p}(1+\alpha-\hat{\alpha})-\sum_{\hat\beta\in T}
\frac{z_p'}{z_p} (1+\alpha+\hat \beta) &\mbox{ if
$W=\{\alpha\}\subset \overline{S}$}
\\
\sum_{\hat\beta\in T}\frac{z_p'}{z_p}(1+\beta-\hat
\beta)-\sum_{\hat\alpha\in S} \frac{z_p'}{z_p}
(1+\beta+\hat\alpha) &\mbox{ if $W=\{\beta\}\subset \overline{T}$}\\
\left(\frac{z_p'}{z_p}\right)'(1+\alpha+\beta) & \mbox{ if
$W=\{\alpha,\beta\}$ with $
{\alpha \in \overline{S}, \atop \beta\in \overline{T}}$}\\
0&\mbox{ otherwise};
\end{array}
\right.
\end{equation}
and
\begin{eqnarray}
H_{p,2;S,T}(W)
&=&\sum_{W =\sum_{j=1}^J X_j}(-1)^{J-1}(J-1)!\prod_{j=1}^J
c_{S,T}(X_j)
\end{eqnarray}
with
\begin{eqnarray}
c_{S,T}(X) := \frac{\int_0^1 \mathcal
A_{p,\theta}(S,T)\prod_{\alpha\in \overline{S}\cap
X}\frac{z_{p,-\theta}'}{z_{p,-\theta}}(\frac
12+\alpha)\prod_{\beta\in \overline{T}\cap X}
\frac{z_{p,\theta}'}{z_{p,\theta}}(\frac 12+ \beta)
~d\theta}{\int_0^1 \mathcal A_{p,\theta}(S,T)~d\theta}
\end{eqnarray}
and the notation
\begin{eqnarray}
\mathcal A_{p,\theta}(S,T):=\mathcal A_{p,\theta}(T^-,S^-;S,T),
\end{eqnarray}
with $\mathcal A_{p,\theta}(A,B;C,D)$ as in Conjecture
\ref{conj:ratiozeta}.
\end{theorem}
To prove this we want to differentiate $\mathcal R_{\zeta,t}$ with
respect to all of the $\alpha\in A$ and $\beta\in B$ and then
replace each $\gamma$ by an $\alpha$ and each $\delta$ by a
$\beta$. In what follows $A$ and $C$ have the same cardinality, as
do $B$ and $D$. Thus, after differentiation, when we want to set
the $\gamma$ equal to the $\alpha$ in some order, and the $\delta$
equal to the $\beta$ in some order, we can abbreviate this by
$C=A$ and $D=B$. With the definition of $J_{\zeta,I}(A;B;U)$ as in
(\ref{eq:defJzeta}), and using Corollary \ref{cor:chis}, we have
\begin{eqnarray}
&&J_{\zeta,I}(A;B;U)\\&&\qquad=\int_I\prod_{\alpha\in A \atop
\beta \in B}\frac{d}{d\alpha}\frac{d}{d\beta} \mathcal
R_{\zeta,t}(A,B;C,D)\bigg|_{C=A\atop D=B}\prod_{\mu\in
U}\Big(-\frac{\chi'}{\chi}(1/2+it+\mu)\Big)~dt+O(|I|^{1/2+\epsilon}).\nonumber
\end{eqnarray}
The situation is much as in the random matrix theory case, except
that now we have to understand how to include the arithmetical
factor $\mathcal A_\zeta$. For a start, we can differentiate with
respect to the $\hat\alpha\in S$ and $\hat \beta\in T$ as before.
$C_{\overline{S}}$ and $D_{\overline{T}}$ are defined as in the
proof of Theorem \ref{theo:J}. We have
\begin{eqnarray}\label{eq:halfderiv}
&& J_{\zeta,I}(A;B;U) = \int_I \prod_{\mu\in
U}\Big(-\frac{\chi'}{\chi}(1/2+it+\mu)\Big)\sum_{S\subset
A,T\subset B\atop |S|=|T|}X_t(S,T)
\frac{Z_\zeta(S,T)Z_\zeta(S^-,T^-)}{{Z_\zeta}^{\dagger}(S,S^-){Z_\zeta}^{\dagger}(T,T^-)}
\\&& \qquad \times \nonumber
\prod_{\alpha\in \overline{S}\atop \beta\in
\overline{T}}\frac{d}{d\alpha}\frac{d}{d\beta}\Bigg(
\frac{Z_\zeta(\overline{S},\overline{T})Z_\zeta(\overline{S},S^-)Z_\zeta(\overline{T},T^-)Z_\zeta(C,D)}
{Z_\zeta(S,T)Z_\zeta(C_{\overline{S}},S^-)Z_\zeta(D_{\overline{T}},T^-)
Z_\zeta(\overline{S},D)Z_\zeta(\overline{T},C)
}\nonumber \\
&&\qquad\qquad\qquad\qquad\qquad\times\mathcal{A}_\zeta(\overline{S}+ T^-,\overline{T}+ S^-;C,D)
\Bigg)\bigg|_{C=A\atop D=B}~dt +O(|I|^{1/2+\epsilon}) .\nonumber\end{eqnarray}
In anticipation of applying Lemma \ref{lemma:diff}, we note that a
brief calculation shows that
\begin{eqnarray} \label{eqn:Aid} \nonumber
&&\frac{Z_\zeta(\overline{S},\overline{T})Z_\zeta(\overline{S},S^-)
Z_\zeta(\overline{T},T^-)Z_\zeta(C,D)}
{Z_\zeta(S,T)Z_\zeta(C_{\overline{S}},S^-)Z_\zeta(D_{\overline{T}},T^-)
Z_\zeta(\overline{S},D)Z_\zeta(\overline{T},C)
}\mathcal{A}_\zeta(\overline{S}+ T^-,\overline{T}+ S^-;C,D)\bigg|_{C=A\atop
D=B}\\&& \qquad \qquad
=\mathcal{A}_\zeta(T^-,S^-;S,T).
\end{eqnarray}
The remainder of the proof of Theorem \ref{theo:Jzeta} consists of
applying Lemma \ref{lemma:diff} with (keeping just the factors
from the big brackets in (\ref{eq:halfderiv}) that depend on
$\overline{S}$ and $\overline{T}$)
\begin{eqnarray}
H&=&\log Z_\zeta(\overline{S},\overline{T})+\log
Z_\zeta(\overline{S},S^-)+\log Z_\zeta(\overline{T},T^-)-\log
Z_\zeta(\overline{S},D) -\log Z_\zeta(\overline{T},C) \nonumber
\\
&&+\sum_p \big(\log Z_p(\overline{S},\overline{T})+\log
Z_p(\overline{S},S^-)+\log Z_p(\overline{T},T^-)-\log
Z_p(\overline{S},D) -\log Z_p(\overline{T},C) \\
&&\qquad\qquad+ \log \int_0^1 \mathcal
A_{p,\theta}(\overline{S}+T^-,\overline{T}+S^{-};C,D)~d\theta\big)
.\nonumber
\end{eqnarray}
Note the exponent $-1$ in the definition of $Z_p(A,B)$ in
Conjecture \ref{conj:ratiozeta}, which accounts for the minus sign
in front of $H_{p,1;S,T}(W_r)$ in the definition (\ref{eq:HHHH}) of
$\mathcal H_{S,T}(W_r)$.
Now it just remains to prove:
\begin{lemma} \label{lem:lemma3}
Let $W\subset \overline{S}+ \overline{T}$. Then
\begin{eqnarray}\label{eq:hp2}
\nonumber H_{p,2;S,T}(W)&:=& \left.\prod_{w\in W}\frac{d}{dw} \log\left(
\int_0^1\mathcal A_{p,\theta}(\overline{S}+ T^-,\overline{T}+ S^-
;C,D)~d\theta\right)\right|_{C=\overline{S}+ S
\atop D=\overline{T}+ T}\\
& =&\sum_{W =\sum_{j=1}^J X_j}(-1)^{J-1}(J-1)!\prod_{j=1}^J
c_{S,T}(X_j);
\end{eqnarray}
here
\begin{eqnarray}\label{eq:cST}
c_{S,T}(X) := \frac{\int_0^1 \mathcal
A_{p,\theta}(S,T)\prod_{\alpha\in \overline{S}\cap
X}\frac{z_{p,-\theta}'}{z_{p,-\theta}}(\frac
12+\alpha)\prod_{\beta\in \overline{T}\cap X}
\frac{z_{p,\theta}'}{z_{p,\theta}}(\frac 12+ \beta)
~d\theta}{\int_0^1 \mathcal A_{p,\theta}(S,T)~d\theta}
\end{eqnarray}
where we have adopted the notation
\begin{eqnarray}
\mathcal A_{p,\theta}(S,T):=\mathcal A_{p,\theta}(T^-,S^-;S,T).
\end{eqnarray}
Further, we have, for $\alpha^*\in S$, $\beta^*\in T$,
$S'=S-\{\alpha^*\}$, $T'=T-\{\beta^*\}$ and $W\subset
\overline{S}+\overline{T}$,
\begin{eqnarray}&&\label{eq:cSTderiv}
\nonumber\frac{d}{d\alpha^*}c_{S,T}(W)\bigg|_{\alpha^*=-\beta^*}=-c_{S',T'}(W+\{\alpha^*\})|_{\alpha^*=-\beta^*}
-c_{S',T'}(W+
\{\beta^*\})\\
&&\quad \qquad
+c_{S',T'}(W)c_{S',T'}(\{\alpha^*\})|_{\alpha^*=-\beta^*}+c_{S',T'}(W)c_{S',T'}(\{\beta^*\})
\end{eqnarray}
and
\begin{eqnarray} \label{eqn:dHp2}
&& \frac{d}{d\alpha^*}H_{p,2;S,T}(W)\bigg|_{\alpha^*=-\beta^*}= \nonumber \\
&&\qquad\qquad-H_{p,2,S',T'}(W+
\{\alpha^*\})|_{\alpha^*=-\beta^*}-H_{p,2,S',T'}(W+ \{\beta^*\}).
\end{eqnarray}
\end{lemma}
\begin{proof}
The proof is simple, involving only differentiation. The first
line, (\ref{eq:hp2}), follows from logarithmic differentiation of
the integral of $\mathcal A_{p,\theta}$, where each variable $w\in
W$ appears in just one place. Equation (\ref{eq:cSTderiv})
also follows immediately from the rules of differentiation.
Note that $\alpha^*$ appears in $\mathcal A_{p,\theta}(S,T)$ in
the numerator of (\ref{eq:Aptheta}) in a factor
$z_{p,\theta}(\frac 12 -\alpha^*)$ and in the denominator in a
factor of type $z_{p,-\theta}(\frac 12 +\alpha^*)$; note also that
$\alpha^*\notin W$. The first term in (\ref{eq:cSTderiv}) comes
from the $z_{p,-\theta}(\frac 12 +\alpha^*)$ factor in the
numerator of (\ref{eq:cST}), the second term in
(\ref{eq:cSTderiv}) from the $z_{p,\theta}(\frac 12 -\alpha^*)$
and the final two terms in (\ref{eq:cSTderiv}) from the integral
in the denominator of (\ref{eq:cST}). To obtain (\ref{eqn:dHp2})
note that the $\alpha^*$ appears in each factor of $c_{S,T}(X_j)$,
$j=1,\ldots,J$, in (\ref{eq:hp2}). Using the product rule on each
term in (\ref{eq:hp2}) we differentiate each $c_{S,T}(X_j)$ in
turn and sum the results. Note that
$c_{S,T}(X_j)\big|_{\alpha^*=-\beta^*}= c_{S',T'}(X_j)$. Using
(\ref{eq:cSTderiv}) and careful combinatorial accounting,
(\ref{eqn:dHp2}) can be obtained.
\end{proof}
\begin{remark} We have allowed here some loose use of notation in
writing $H_{p,2,S',T'}(W+\{\alpha^*\})$. The lemma starts out by
defining $H_{p,2,S,T}(W)$ where
$W\subset\overline{S}+\overline{T}$, but of course
$W+\{\alpha^*\}\not\subset \overline{S}+\overline{T}$ when $\alpha^*\in
S$. However to understand the notation
$H_{p,2,S',T'}(W+\{\alpha^*\})$ simply replace $S$ with $S'$ and
$T$ with $T'$ in the definition of $H_{p,2,S,T}$ and replace
$\overline{S}$ with $A-S'$ and $\overline{T}$ with $B-T'$. A
similar comment applies to $c_{S',T'}(W+\{\alpha^*\})$ in
(\ref{eq:cSTderiv}).
\end{remark}
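To illustrate the partition sum in (\ref{eq:hp2}), take a
two-element set $W=\{w_1,w_2\}$. The only set partitions are
$\{w_1,w_2\}$ itself and $\{w_1\}+\{w_2\}$, so the formula reduces
to the familiar second-cumulant identity
\begin{eqnarray}
H_{p,2,S,T}(\{w_1,w_2\})=c_{S,T}(\{w_1,w_2\})-c_{S,T}(\{w_1\})\,c_{S,T}(\{w_2\}),
\end{eqnarray}
the $J=2$ term carrying the factor $(-1)^{J-1}(J-1)!=-1$.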
\subsection{Residue identity revisited}
Using Theorem \ref{theo:Jzeta}, which is an application of Lemma
\ref{lemma:diff} along much the same lines as Theorem
\ref{theo:J}, we now prove the analogue for $\zeta$ of Lemma
\ref{lem:residue}.
\begin{lemma} \label{lem:residuezeta} Suppose that $\alpha^*\in A$ and $\beta^*\in B$. Let $A'=A-\{\alpha^*\}$
and $B'=B-\{\beta^*\}$ and $\ell=\log \frac{t}{2\pi}$. Then
$J^*_{\zeta,t}(A;B;U)$ (defined in Theorem \ref{theo:Jzeta}) has a
simple pole at $\alpha^*=-\beta^*$ with
\begin{eqnarray}
&&\operatornamewithlimits{Res}_{\alpha^*=-\beta^*}
J^*_{\zeta,t}(A;B;U) =-\frac{\chi'}{\chi}(s-\beta^*)
J^*_{\zeta,t}(A';B';U)\nonumber
\\
&&\qquad\qquad\qquad\qquad\qquad\qquad+J^*_{\zeta,t}(A';B;U)+J^*_{\zeta,t}(A'+\{-\beta^*\};B';U).
\end{eqnarray}
\end{lemma}
\begin{proof} We use Lemma \ref{lem:residue}.
First we remember the convention that $A=S+\overline{S}$ and
$B=T+\overline{T}$ and we write Theorem \ref{theo:Jzeta} as
\begin{eqnarray}
J^*_{\zeta,t}(A;B;U)=\prod_{\mu\in U}\frac{\chi'}{\chi}(1/2+it+\mu)\sum_{{S\subset A\atop T\subset B}\atop
|S|=|T|} D_{\zeta; S,T}(\overline{S},\overline{T})
\end{eqnarray}
where, with the abbreviation $\mathcal A(S,T):=\mathcal
A_{\zeta}(T^-,S^-;S,T)$, we have
\begin{eqnarray}
D_{\zeta; S,T}(\overline{S},\overline{T})=Q_\zeta(S,T)\mathcal{A}(S,T)\sum_{\overline{S}+ \overline{T} =\sum W_r}\prod_{r=1}^R
\mathcal H_{S,T}(W_r)
\end{eqnarray}
and
\begin{eqnarray}
Q_\zeta(S,T):=X_t(S,T)
\frac{Z_\zeta(S,T)Z_\zeta(S^-,T^-)}{{Z_\zeta}^\dagger(S,S^-){Z_\zeta}^\dagger(T,T^-)}.
\end{eqnarray}
Now we let $Q_\zeta(S,T)\mathcal{A}(S,T)$ play the role of
$Q(S,T)$ in the proof of Lemma \ref{lem:residue} and $\mathcal
H_{S,T}(W)$ plays the role of $H_{S,T}(W)$. Thus we need to prove
the four conditions below, describing the behaviour of the various
components of the formula as $\alpha^*$ approaches $-\beta^*$, and
then the rest of the proof is identical to that of Lemma
\ref{lem:residue}.
\begin{description}
\item[Q1] If $\alpha^*\in \overline{S}$ and $\beta^*\in \overline{T}$, then
$Q_\zeta(S,T)\mathcal{A}(S,T)$ is independent of $\alpha^*$ and
$\beta^*$ and
\begin{eqnarray}
\mathcal H_{S,T}(W)=\left\{ \begin{array}{ll}
\frac{1}{(\alpha^*+\beta^*)^2}+O(1)
& \mbox{ if $W=\{\alpha^*,\beta^*\}$ }\\
O(1) & \mbox{ otherwise }
\end{array} \right.
\end{eqnarray}
\item[Q2] If $\alpha^*\in S$ and $\beta^*\in \overline{T}$, then
$Q_\zeta(S,T)\mathcal{A}(S,T)$ is regular when $\alpha^*=-\beta
^*$ and
\begin{eqnarray}
\mathcal H_{S,T}(W)=\left\{ \begin{array}{ll}
\frac{1}{\alpha^*+\beta^*}+O(1)
& \mbox{ if $W=\{\beta^*\}$ }\\
O(1) & \mbox{ otherwise }
\end{array} \right.
\end{eqnarray}
\item[Q3] If $\alpha^*\in \overline{S}$ and $\beta^*\in T$, then
$Q_\zeta(S,T)\mathcal{A}(S,T)$ is regular when $\alpha^*=-\beta^*$
and
\begin{eqnarray}
\mathcal H_{S,T}(W)=\left\{ \begin{array}{ll}
\frac{1}{\alpha^*+\beta^*}+O(1)
& \mbox{ if $W=\{\alpha^*\}$ }\\
O(1) & \mbox{ otherwise }
\end{array} \right.
\end{eqnarray}
\item[Q4] If $\alpha^*\in S$ and $\beta^*\in T$ and
$S'=S-\{\alpha^*\}$ and $T'=T-\{\beta^*\}$, then
$Q_\zeta(S,T)=\big(\frac{-1}{(\alpha^*+\beta^*)^2}+O(1)\big)Q_{\zeta;1}(S,T)$
and
\begin{eqnarray}
&&Q_{\zeta;1}(S,T)\mathcal A(S,T)=Q_\zeta(S',T')\mathcal
A(S',T')\bigg(1\nonumber\\
&& \qquad-(\alpha^*+\beta^*)\big(-\frac{\chi'}{\chi}(s-\beta^*)
+\mathcal H_{S',T'}(\{\alpha^*\})|_{\alpha^*=-\beta^*}+\mathcal
H_{S',T'}(\{\beta^*\})\big)\bigg) +O(|\alpha^*+\beta^*|^2)
\end{eqnarray}
and
\begin{eqnarray}&&\label{eq:Hexpa}
\mathcal H_{S,T}(W)= \mathcal H_{S',T'}(W)-
(\alpha^*+\beta^*)(\mathcal
H_{S',T'}(W+\{\alpha^*\})|_{\alpha^*=-\beta^*}+\mathcal
H_{S',T'}(W+\{\beta^*\}))\\
&&\qquad \qquad \qquad \qquad +O(|\alpha^*+\beta^*|^2) .\nonumber
\end{eqnarray}
\end{description}
It remains only to prove these four conditions to complete the proof.
We start with the first case where $\alpha^*\notin S$,
$\beta^*\notin T$. Then $\alpha^*\in \overline{S}$ and $\beta^*\in
\overline{T}$. The terms $H_{p,1}$ and $H_{p,2}$ have no poles
because of the conditions on the real parts of $\alpha$ and
$\beta$. The only pole at $\alpha^*=-\beta^*$ arises when
one of the partition parts is
$W_r=\{\alpha^*,\beta^*\}$ and there is a pole from
$H_{\zeta;S,T}(W_r)=\left(\frac{\zeta'}{\zeta}\right)'(1+\alpha^*+\beta^*)$.
Since $\left(\frac{\zeta'}{\zeta}\right)'(1+x)=1/x^2+O(1)$ and
$Q_\zeta(S,T)\mathcal{A}(S,T)$ is clearly independent of
$\alpha^*$ and $\beta^*$, the first condition is satisfied.
Next, suppose that $\alpha^*\in S$ and $\beta^*\notin T$. The only
pole in $D_{\zeta;S,T}(\overline{S},\overline{T})$ occurs in the
product of the $H$ for $H_{\zeta;S,T}(W_r)$ when $W_r=\{\beta^*\}$.
We have
\begin{eqnarray}\label{eqn:innotinz}
H_{\zeta;S,T}(\{\beta^*\})=\sum_{\hat\beta \in
T}\frac{\zeta'}{\zeta}(1+\hat \beta-\beta^*)- \sum_{\hat \alpha\in
S}\frac{\zeta'}{\zeta}(1+\beta^*+\hat \alpha)
\end{eqnarray}
for which, when $\hat \alpha=\alpha^*$, the term
$-\frac{\zeta'}{\zeta}(1+\beta^*+\alpha^*)$ has a simple pole at $
\alpha^*=-\beta^*$ with residue 1. $Q_\zeta(S,T)\mathcal{A}(S,T)$
depends on $\alpha^*$ and not $\beta^*$, so it is regular when
$\alpha^*=- \beta^*$.
Similarly, when $\alpha^*\notin S$ and $\beta^*\in T$, the only
pole in the product of the $H$ occurs for
$H_{\zeta;S,T}(\{\alpha^*\})$. We have
\begin{eqnarray}\label{eqn:innotinz2}
H_{\zeta;S,T}(\{\alpha^*\})=\sum_{\hat\alpha \in
S}\frac{\zeta'}{\zeta}(1+\hat \alpha-\alpha^*)- \sum_{\hat
\beta\in T}\frac{\zeta'}{\zeta}(1+\alpha^*+\hat \beta)
\end{eqnarray}
for which, when $\hat \beta =\beta^*$, the term
$-\frac{\zeta'}{\zeta}(1+\alpha^*+\beta^*)$ has a simple pole at
$\alpha^*=-\beta^*$ with residue 1.
Finally, we consider the case
$\alpha^*\in S$ and $\beta^*\in T$. Let $S'=S-\{\alpha^*\}$ and
$T'=T-\{\beta^*\}$. We have
\begin{eqnarray}
Q_\zeta(S,T)=\zeta(1+\alpha^*+\beta^*)\zeta(1-\alpha^*-\beta^*)
Q_{\zeta;1}(S,T)
\end{eqnarray}
where
\begin{eqnarray}
Q_{\zeta;1}(S,T)&=&Q_\zeta(S',T')X_t(\{\alpha^*\},\{\beta^*\})\nonumber
\\
&&\qquad \times \frac{\prod_{\hat \beta\in
T'}\zeta(1+\alpha^*+\hat \beta) \zeta(1-\alpha^*-\hat
\beta)\prod_{\hat \alpha\in S'} \zeta(1+\hat
\alpha+\beta^*)\zeta(1-\hat \alpha-\beta^*)}{\prod_{\hat \alpha\in
S'} \zeta(1+\alpha^*-\hat \alpha)\zeta(1+\hat
\alpha-\alpha^*)\prod_{\hat \beta\in T'} \zeta(1+\beta^*-\hat
\beta)\zeta(1+\hat \beta-\beta^*)}.
\end{eqnarray}
Note that $\zeta(1+\alpha^*+\beta^*)\zeta(1-\alpha^*-\beta^*)
=\frac{-1}{(\alpha^*+\beta^*)^2}+O(1)$. Also, remembering that
$\chi(s-\beta^*)\chi(1-s+\beta^*)=1$,
\begin{eqnarray}
Q_{\zeta;1}(S,T)\big|_{\alpha^*=-\beta^*}=Q_\zeta(S',T'),
\end{eqnarray}
which gives us an expansion for $Q_{\zeta;1}(S,T)$ in the
neighborhood of $\alpha^*=-\beta^*$:
\begin{eqnarray} \label{eqn:Qz} \nonumber &&
Q_{\zeta;1}(S,T)=
Q_\zeta(S',T')\big(1+\frac{\chi'}{\chi}(s-\beta^*)(\alpha^*+\beta^*)+O(|\alpha^*+\beta^*|^2)\big)\\
&&\qquad \times \bigg(1+(\alpha^*+\beta^*)\bigg( \sum_{\hat\alpha
\in S'}\Big( \frac{\zeta'}{\zeta}(1+\hat \alpha+\beta^*)-
\frac{\zeta'}{\zeta}(1-\beta^*-\hat\alpha)\Big)\nonumber\\
&&\qquad \qquad +\sum_{\hat \beta\in T'}
\Big(\frac{\zeta'}{\zeta}(1-\beta^*+\hat
\beta)-\frac{\zeta'}{\zeta}(1+\beta^*-\hat \beta)\Big)\bigg)
+O(|\alpha^*+\beta^*|^2)\bigg)\nonumber\\
&&\qquad =
Q_\zeta(S',T')\Big(1-(\alpha^*+\beta^*)\big(-\frac{\chi'}{\chi}(s-\beta^*)+H_{\zeta;S',T'}(\{\alpha^*\})|_{\alpha^*=-\beta^*}\nonumber
\\
&&\qquad\qquad\qquad+
H_{\zeta;S',T'}(\{\beta^*\})\big)+O(|\alpha^*+\beta^*|^2)\Big).
\end{eqnarray}
Since $\left. \mathcal A(S,T)\right|_{\alpha^*=-\beta^*}=\mathcal
A (S',T')$, we have the expansion around $\alpha^*=-\beta^*$:
\begin{eqnarray} \label{eqn:Az} \nonumber &&
\mathcal A(S,T)= \mathcal A(S',T')
\bigg(1+(\alpha^*+\beta^*)\sum_p\bigg( \sum_{\hat\alpha \in S'}\big(- \frac{z_p'}{z_p}(1+\hat
\alpha+\beta^*)+
\frac{z_p'}{z_p}(1-\beta^*-\hat\alpha)\big)\\
&&\qquad \qquad +\sum_{\hat \beta\in T'}
\big(-\frac{z_p'}{z_p}(1-\beta^*+\hat
\beta)+\frac{z_p'}{z_p}(1+\beta^*-\hat \beta)\big)\nonumber\\
&&\qquad -\frac{\int_0^1\mathcal A_{p,\theta}(S',T')\big(\frac
{z_{p,-\theta}'}{z_{p,-\theta}}(\frac 12
-\beta^*)+\frac{z_{p,\theta}'}{z_{p,\theta}}(\frac 12
+\beta^*)\big)~d\theta}{\int_0^1\mathcal
A_{p,\theta}(S',T')~d\theta}\bigg)
+O(|\alpha^*+\beta^*|^2)\bigg)\nonumber\\
&&=\mathcal A(S',T')
\bigg(1+(\alpha^*+\beta^*)\sum_p\big(H_{p,1;S',T'}(\{\alpha^*\})|_{\alpha^*=-\beta^*}+
H_{p,1;S',T'}(\{\beta^*\})\nonumber\\
&&\qquad \qquad \qquad \quad
-H_{p,2;S',T'}(\{\alpha^*\})|_{\alpha^*=-\beta^*}-
H_{p,2;S',T'}(\{\beta^*\})\big)\bigg) +O(|\alpha^*+\beta^*|^2),
\end{eqnarray}
where the first line is a result of differentiating
(\ref{eq:Azeta}), and the second line from the definitions of
$H_{p,1;S,T}(W)$ (in Theorem \ref{theo:Jzeta}) and
$H_{p,2;S,T}(W)$ (in (\ref{eq:hp2})).
Thus we have
\begin{eqnarray}
&&Q_{\zeta;1}(S,T)\mathcal A(S,T)=Q_\zeta(S',T')\mathcal
A(S',T')\bigg(1\nonumber\\
&&\qquad -(\alpha^*+\beta^*)\big(-\frac{\chi'}{\chi}(s-\beta^*) +\mathcal
H_{S',T'}(\{\alpha^*\})|_{\alpha^*=-\beta^*}+\mathcal
H_{S',T'}(\{\beta^*\})\big)\bigg) +O(|\alpha^*+\beta^*|^2)
\end{eqnarray}
as $\alpha^*\to -\beta^*$.
Now we obtain an expansion for the product of $H$ term. By the
definition of $H_\zeta$ in Theorem \ref{theo:Jzeta} we have that
\begin{eqnarray}
H_{\zeta;S,T}(\{\alpha\})&=&\sum_{\hat{\alpha}\in
S}\frac{\zeta'}{\zeta}(1+\alpha-\hat \alpha)-\sum_{\hat \beta\in
T}
\frac{\zeta'}{\zeta}(1+\alpha+\hat \beta)\nonumber\\
&=& H_{\zeta;S',T'}(\{\alpha\})+
\frac{\zeta'}{\zeta}(1+\alpha-\alpha^*)-
\frac{\zeta'}{\zeta}(1+\alpha+ \beta^*);
\end{eqnarray}
\begin{eqnarray}
H_{\zeta;S,T}(\{\beta\})&=&\sum_{\hat \beta \in
T}\frac{\zeta'}{\zeta}(1+\beta-\hat \beta )-
\sum_{\hat \alpha \in S} \frac{\zeta'}{\zeta}(1+\beta+\hat \alpha) \nonumber\\
&=& H_{\zeta;S',T'}(\{\beta\})+
\frac{\zeta'}{\zeta}(1+\beta-\beta^*)-
\frac{\zeta'}{\zeta}(1+\beta+ \alpha^*);
\end{eqnarray}
and
\begin{eqnarray}
H_{\zeta;S,T}(\{\alpha,\beta\})=\left(\frac{\zeta'}{\zeta}\right)'(1+\alpha
+\beta)=H_{\zeta;S',T'}(\{\alpha,\beta\}).
\end{eqnarray}
Thus,
\begin{eqnarray} \label{eqn:HzST}
H_{\zeta;S,T}(W)\bigg|_{\alpha^*=-\beta^*}= H_{\zeta;S',T'}(W)
\end{eqnarray}
and
\begin{eqnarray} \label{eqn:dHzeta}
\frac{d}{d\alpha^*}H_{\zeta;S,T}(W)\big|_{\alpha^*=-\beta^*}&=&
\left\{\begin{array}{ll}
-\left(\frac{\zeta'}{\zeta}\right)'(1+\alpha+\beta^*) &\mbox{ if $W=\{\alpha\}\subset \overline{S}$}\\
-\left(\frac{\zeta'}{\zeta}\right)'(1+\beta-\beta^*)&\mbox{ if $W=\{\beta\}\subset \overline{T}$}\\
0 &\mbox{ otherwise}
\end{array}\right.\\
&=& -H_{\zeta,S',T'}(W+
\{\alpha^*\})|_{\alpha^*=-\beta^*}-H_{\zeta,S',T'}(W+
\{\beta^*\}).\nonumber
\end{eqnarray}
In exactly the same way,
\begin{eqnarray} \label{eqn:Hp1ST}
H_{p,1;S,T}(W)\bigg|_{\alpha^*=-\beta^*}= H_{p,1;S',T'}(W)
\end{eqnarray}
and
\begin{eqnarray} \label{eqn:dHp1}
&&\frac{d}{d\alpha^*}H_{p,1;S,T}(W)\big|_{\alpha^*=-\beta^*}\nonumber
\\
&&\qquad\qquad= -H_{p,1,S',T'}(W+
\{\alpha^*\})|_{\alpha^*=-\beta^*}-H_{p,1,S',T'}(W+ \{\beta^*\}).
\end{eqnarray}
Also, by Lemma \ref{lem:lemma3} we have
\begin{eqnarray} \label{eqn:Hp2ST}
H_{p,2;S,T}(W)\bigg|_{\alpha^*=-\beta^*}= H_{p,2;S',T'}(W)
\end{eqnarray}
and
\begin{eqnarray}\label{eqn:dHp2a}
&&
\frac{d}{d\alpha^*}H_{p,2;S,T}(W)\bigg|_{\alpha^*=-\beta^*}\nonumber
\\
&&\qquad\qquad= -H_{p,2,S',T'}(W+
\{\alpha^*\})|_{\alpha^*=-\beta^*}-H_{p,2,S',T'}(W+ \{\beta^*\}).
\end{eqnarray}
Combining these results we have exactly equation (\ref{eq:Hexpa}).
The rest of the proof proceeds exactly as before.
\end{proof}
\subsection{$n$-correlation via the ratios conjecture}
Now we proceed to $n$-correlation. Let $f$ satisfy the conditions
\begin{eqnarray} \label{eq:fconditions}
&&f(x_1,\ldots,x_n) \text{ is holomorphic for } |\Im x_j| <2, \text{ with } j=1,\ldots,n,\\
&& \text{is translation invariant, i.e. } f(x_1+t,\ldots,x_n+t)=f(x_1,\ldots,x_n) \nonumber\\
&& \text{and satisfies } f(0,x_2,\ldots,x_n)\ll
1/(1+|x_2|^2+\cdots+|x_n|^2) \text{ as } |x_j| \to
\infty,\nonumber \\
&&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \text{ with }
j=2,\ldots,n.\nonumber
\end{eqnarray}
\begin{theorem} \label{theo:zetaofftheline}
Let $\mathcal C_-$ denote the path from $-\delta+ iT$ down to
$-\delta-iT$ and let $\mathcal C_+$ denote the path from
$\delta-iT$ up to $\delta+iT$ and let $f$ be as in
(\ref{eq:fconditions}). Using the notation $J_{\zeta,t}(A;B;C)$
from Theorem~\ref{theo:Jzeta},
\begin{eqnarray}\label{eq:zetaofftheline}
&&\sum_{0<\gamma_{j_1},\dots , \gamma_{j_n}\le T}
f(\gamma_{j_1},\dots,\gamma_{j_n})\nonumber
\\
&&\qquad =\frac{1}{(2\pi i)^n} \sum_{K+L+M=
\{1,\dots,n\}}(-1)^{|L|+|M|} \\
&&\qquad\qquad\qquad \times\int_{\mathcal {C_+}^K} \int_{\mathcal
{C_-}^{L+ M}}\frac{1}{T}\int_{I^*}J_{\zeta,t}(z_K;-z_L;-z_M) ~dt
~f(iz_1,\dots,iz_n)~dz_1\dots ~dz_n\nonumber
\end{eqnarray}
where
$z_K=\{z_k:k\in K\}$, $-z_L=\{-z_\ell:\ell\in L\}$ and
$\int_{\mathcal {C_+}^K} \int_{\mathcal {C_-}^{L+ M}}$
means that we are integrating all of the variables in $z_K$ along the $\mathcal C_+$
path and all of the variables in $z_{L}$ or $z_{M}$ along the $\mathcal C_-$
path; and $I^*$ is the interval which has lower endpoint $\max\{0,-\Im z_1,\ldots,-\Im z_n\}$
and upper endpoint
$\min\{T,T-\Im z_1,\ldots,T-\Im z_n\}$.
\end{theorem}
\begin{proof}
By Cauchy's theorem we can express the sum over zeros as
\begin{eqnarray}\label{eq:nfold}
&&\sum_{0< \gamma_1,\dots ,\gamma_n\leq T}
f(\gamma_1,\dots,\gamma_n)\nonumber
\\
&&\qquad\qquad=\frac{1}{(2\pi i)^n} \int_{\mathcal C}\dots
\int_{\mathcal C} f(-iz_1,\dots,-iz_n)\prod_{j=1}^n
\frac{\zeta'}{\zeta}(1/2+z_j)~dz_1\dots dz_n,
\end{eqnarray}
where $\mathcal C$ is a positively oriented contour which encloses
a subinterval of the imaginary axis from zero to $T$. We choose a
specific path $\mathcal C$ to be the positively oriented rectangle
that has vertices $\delta,\delta+iT, -\delta+iT, -\delta$ where
$\delta$ is a small positive number.
Due to the translation invariance of $f$, (\ref{eq:nfold}) equals
\begin{eqnarray}
&&\frac{1}{T}\int_0^T\frac{1}{(2\pi i)^n} \int_{\mathcal C}\dots
\int_{\mathcal C}
f(-iz_1-t,\dots,-iz_n-t)\nonumber \\
&&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad
\times\prod_{j=1}^n
\frac{\zeta'}{\zeta}(1/2+z_j)~dz_1\dots dz_n~dt\nonumber \\
&&=\frac{1}{T}\int_0^T\frac{1}{(2\pi i)^n} \int_{\mathcal
C_{-it}}\dots \int_{\mathcal C_{-it}}
f(-iz_1,\dots,-iz_n)\nonumber
\\
&& \qquad\qquad\qquad\qquad\qquad\qquad\qquad\times\prod_{j=1}^n
\frac{\zeta'}{\zeta}(1/2+it+z_j)~dz_1\dots
dz_n~dt\nonumber \\
&&=\frac{1}{(2\pi i)^n}
\sum_{\epsilon_j\in\{-1,+1\}}\int_{\mathcal C_{\epsilon_n}}\dots
\int_{\mathcal C_{\epsilon_1}}\frac{1}{T}\int_{I^*}
f(-iz_1,\dots,-iz_n)\nonumber
\\
&& \qquad\qquad\qquad\qquad\qquad\qquad\qquad\times\prod_{j=1}^n
\frac{\zeta'}{\zeta}(1/2+it+z_j)~dt~dz_1\dots
dz_n+O(T^{\epsilon})\label{eq:qxvz}
\end{eqnarray}
where the range of the innermost
integral is the interval $I^*$ which has lower endpoint $\max\{0,-\Im z_1,\ldots,-\Im z_n\}$
and upper endpoint
$\min\{T,T-\Im z_1,\ldots,T-\Im z_n\}$.
In the second line we made a change of variables $z_j\rightarrow
z_j+it$. The contour $\mathcal C_{-it}$ is $\mathcal C$ translated
by $-it$; that is, it is the rectangle with vertices $\delta-it$, $\delta+i(T-t)$,
$-\delta+i(T-t)$, $-\delta-it$. In progressing to the third line, we
note that the horizontal portions of the contour of integration
can be chosen so that the integral along them is $O(T^{\epsilon})$
(following the identical argument to Davenport \cite{kn:dav80},
page 108), so we concentrate on the vertical sides of the
contours. When we now exchange the order of integration to move
the $t$ integral to the inside, the integration over
$z_1,\ldots,z_n$ becomes the sum of $2^n$ integrals, each on one
of the contours $\mathcal C_+$ or $\mathcal C_-$ defined in
Theorem \ref{theo:zetaofftheline}.
\begin{remark}The main integral is of size $\approx T \log ^n T$. The $T$
is a result of $\int_{[-T,T]^n} f \approx T$; the power of the log
comes from the moment of the logarithmic derivative and will
become clear from the examples at the end of the paper.
\end{remark}
For each variable $z_j$ in (\ref{eq:qxvz}) which is on $\mathcal
C_-$ we use the functional equation
\begin{equation}
\label{eq:logderivfe}\frac{\zeta'}{\zeta}(s)=\frac{\chi'}{\chi}(s)-\frac{\zeta'}{\zeta}(1-s)
\end{equation}
to replace $\frac{\zeta'}{\zeta}(s+z_j)$, where $s=1/2+it$.
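Recall that (\ref{eq:logderivfe}) is nothing more than the
logarithmic derivative of the asymmetric form
$\zeta(s)=\chi(s)\zeta(1-s)$ of the functional equation:
\begin{eqnarray}
\frac{\zeta'}{\zeta}(s)=\frac{d}{ds}\log\big(\chi(s)\zeta(1-s)\big)
=\frac{\chi'}{\chi}(s)-\frac{\zeta'}{\zeta}(1-s).\nonumber
\end{eqnarray}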
In this
way we find that (\ref{eq:qxvz}) equals
\begin{eqnarray}
&& \frac{1}{(2\pi i)^n}\sum_{\epsilon_j\in\{-1,+1\}}\int_{\mathcal
C_{\epsilon_n}}\dots \int_{\mathcal
C_{\epsilon_1}}\frac{1}{T}\int_{I^*} \; \prod_{j=1}^n
\left(\frac{1-\epsilon_j}{2}\frac{\chi'}{\chi}(s+z_j)+\epsilon_j
\frac{\zeta'} {\zeta}
(1/2+\epsilon_j(it+z_j))\right)\\
&&\qquad \qquad \times f(iz_1,\dots,iz_n)~dt~dz_1\dots
dz_n.\nonumber
\end{eqnarray}
Another way to write this equation is
\begin{eqnarray}&& \frac{1}{(2\pi i)^n}
\sum_{K\subset\{1,\dots,n\}} \prod_{j\in K} \int_{\mathcal
C_+} \prod_{j\notin K}\int_{\mathcal
C_-}\frac{1}{T}\int_{I^*}\prod_{j\in K}\frac{\zeta'} {\zeta} (s+z_j)
\prod_{j\notin K}\left(\frac{\chi'}{\chi}(s+z_j)-\frac{\zeta'}
{\zeta}
(1-s-z_j)\right)\\
&& \qquad \qquad \times f(iz_1,\dots,iz_n)~dt~dz_1\dots
dz_n.\nonumber
\end{eqnarray}
(Note that the $dz$'s are no longer in order
but this
should not cause confusion.) The expansion of the product over
$j\notin K$ can be easily expressed as a sum over subsets of $K$.
This yields
\begin{eqnarray}
&& \frac{1}{(2\pi i)^n} \sum_{K+L+M=\{1,\dots,n\}}(-1)^{|L|+|M|}
\prod_{k\in K} \int_{\mathcal C_+}\prod_{\ell\in L}\int_{\mathcal
C_-}\frac{1}{T}\int_{I^*}\frac{\zeta'} {\zeta} (s+z_k)
\frac{\zeta'}
{\zeta} (1-s-z_{\ell})
\\
&& \qquad \qquad \times \prod_{m\in M}\int_{\mathcal
C_-}\Big(-\frac{\chi'}{\chi}(s+z_m)\Big)\;\;
f(iz_1,\dots,iz_n)~dt~dz_1\dots ~dz_n.\nonumber
\end{eqnarray}
\begin{remark}
We note the asymptotic for $\frac{\chi'}{\chi}$:
\begin{eqnarray}
\label{eq:chichi}
\frac{\chi'}{\chi}(1/2+it)=-\log\frac{|t|}{2\pi}\left(1+O\left(\frac
1 {|t|}\right)\right).
\end{eqnarray}
In some applications $|z_m|$ is small relative to $t$ and it
simplifies the formulae to replace $\frac{\chi'}{\chi}(s+z_m)$
with $-\log \tfrac{|t|}{2\pi}$. However, here, where $z_k$ can be
of the same size as $t$, we will not use this approximation.
\end{remark}
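The asymptotic (\ref{eq:chichi}) is easy to check numerically from the explicit formula $\chi(s)=\pi^{s-1/2}\Gamma(\tfrac{1-s}{2})/\Gamma(\tfrac s2)$. The following Python sketch (the function names are ours; the digamma function is evaluated by its standard asymptotic series, adequate for large $|t|$) compares $\frac{\chi'}{\chi}(1/2+it)$ with $-\log\frac{t}{2\pi}$:

```python
import cmath
import math

def digamma(z):
    # Standard asymptotic series for psi(z); accurate for large |z|.
    return cmath.log(z) - 1/(2*z) - 1/(12*z**2) + 1/(120*z**4)

def chi_log_deriv(t):
    # chi(s) = pi^{s-1/2} Gamma((1-s)/2) / Gamma(s/2), so
    # chi'/chi(s) = log(pi) - (1/2) psi((1-s)/2) - (1/2) psi(s/2).
    s = 0.5 + 1j*t
    return math.log(math.pi) - 0.5*digamma((1 - s)/2) - 0.5*digamma(s/2)

t = 1000.0
lhs = chi_log_deriv(t).real
rhs = -math.log(t/(2*math.pi))
# both are approximately -5.07, agreeing to O(1/t)
```

On the critical line the two digamma arguments are complex conjugates, so the value is real up to rounding.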
This is precisely the statement of Theorem
\ref{theo:zetaofftheline}, completing the proof.
\end{proof}
\subsection{$n$-correlation for the Riemann zeros}
We will now state our main theorem.
\begin{theorem} Assume the Ratios Conjecture
\ref{conj:ratiozeta}. Let $J_{\zeta,t}^*$ be as defined in Theorem
\ref{theo:Jzeta}. Then \label{theo:mainzeta}
\begin{eqnarray}&&\sum_{0<\gamma_1\neq\cdots\neq\gamma_n\leq T} f(\gamma_1,\dots,\gamma_n)
\nonumber
\\
&&\qquad =\frac{1}{(2\pi )^n}\int_{[-T,T]^n} \frac{1}{T}\int_{I^*}
\sum_{K+L+M=
\{1,\dots,n\}}J_{\zeta,t}^*(-iz_K;iz_L;iz_M)~dt~\\
&&\qquad\qquad\qquad\qquad\qquad\times f(z_1,\dots,z_n)~dz_1\dots
~dz_n+ O(T^{1/2+\epsilon})\nonumber
\end{eqnarray}
where $-iz_K=\{-iz_k:k\in K\}$, $iz_L=\{iz_\ell:\ell\in L\}$, and
$iz_M=\{iz_m:m\in M\}$. Moreover, the integrand has no poles
on the path of integration.
\end{theorem}
The proof is nearly identical to that of Theorem \ref{theo:main}.
The only difference is that some care is needed with regard to
endpoints of intervals when we move each new path of integration
onto the imaginary axis. The (slight) difficulty is with poles
that may lie at the very endpoints; this point did not arise in
the random matrix theory context because of the periodicity of the
integrand. However by extending the paths slightly we can
circumvent this difficulty; an argument like that used to handle
the horizontal segments in the proof of Theorem
\ref{theo:zetaofftheline} will work in this case, too, and
introduces an error term of size only $O(T^{\epsilon})$.
It remains to verify that the integrand in Theorem
\ref{theo:mainzeta} has no poles on the path of integration. We
have already confirmed in Lemma \ref{lem:residue} that each
$J^*(-i\theta_K;i\theta_L)$ has only a simple pole at
$\theta_k=-\theta_\ell$ for $\theta_k\in \theta_K$ and
$\theta_\ell\in \theta_L$.
We check that
\begin{eqnarray}\label{eq:singsetzeta}
\sum_{K+L+M=\{1,2,\ldots,n\}}J_{\zeta,t}^*(-i\theta_K;i\theta_L;i\theta_M)
\end{eqnarray}
has no pole at $\theta_1=\theta_2$ when the values of the remaining $\theta_j$ are unequal to $\theta_1$ or $\theta_2$.
A given $J_{\zeta,t}^*(-i\theta_K;i\theta_L;i\theta_M)$ only has
a pole when $\theta_1\in \theta_L$ and $\theta_2 \in \theta_K$, or
vice versa, so
\begin{eqnarray}
&&\operatornamewithlimits{Res}_{\theta_1=\theta_2}
\sum_{K+L+M=\{1,2,\ldots,n\}}
J_{\zeta,t}^*(-i\theta_K;i\theta_L;i\theta_M) \nonumber \\
&&\qquad= \sum_{K+L+M=\{3,\ldots,n\}}
\operatornamewithlimits{Res}_{\theta_1=\theta_2}\Big(J_{\zeta,t}^*(-i\theta_K+\{-i\theta_1\}
;\{i\theta_2\}+i\theta_L;i\theta_M) \nonumber \\
&&\qquad\qquad\qquad\qquad+ J_{\zeta,t}^*(-i\theta_K+\{-i\theta_2\}
;\{i\theta_1\}+i\theta_L;i\theta_M)\Big)=0;
\end{eqnarray}
this is zero because $\operatornamewithlimits{Res}_{s=x}f(s,x)=-
\operatornamewithlimits{Res}_{s=x}f(x,s)$.
Thus if (\ref{eq:singsetzeta}) had a singular set it would be of
complex dimension less than $n-1$ and by standard results in the
theory of several complex variables, this implies that there is no
singular set (see for example \cite{kn:krantz}, Corollary 7.3.2).
Our proof of $n$-correlation in the case of $\zeta$-zeros is now complete.
\begin{corollary}
By rearranging the integrals, now that we know the integrand has
no singularities, and using the fact that $f$ is translation
invariant we have that the ratios conjecture implies that
\begin{eqnarray}&&\sum_{0<\gamma_1\neq\cdots\neq\gamma_n\leq T} f(\gamma_1,\dots,\gamma_n)
\nonumber
\\
&&\qquad =\frac{1}{T}\int_0^T \frac{1}{(2\pi )^n}\int_{[-T,T]^n} \sum_{K+L+M=
\{1,\dots,n\}}
J_{\zeta,t}^*(-iz_K+it;iz_L-it;iz_M-it) \\
&&\qquad\qquad\qquad\qquad\qquad\times f(z_1,\dots,z_n)~dz_1\dots
~dz_n~dt+ O(T^{1/2+\epsilon})\nonumber
\end{eqnarray}
\end{corollary}
\section{Examples} In this section we explicitly write out
all of the terms in our expressions for $n$-correlations for RMT
eigenvalues and for $\zeta$-zeros for $2\le n\le 4$. In the case
of the $\zeta$ correlations, we simplify the terms $X_t(S,T)$ and
$\frac{\chi'}{\chi}(s+\alpha)$ using the approximations involving
$\ell=\log \tfrac{t}{2\pi}$ mentioned at (\ref{eq:chiapprox}) and
(\ref{eq:chichi}).
To proceed, we calculate $J^*(A;B)$ and
$J^*_{\zeta,t}(A;B):=J^*_{\zeta,t}(A;B;U)/\prod_{\mu \in U}
\frac{\chi'}{\chi}(\tfrac{1}{2}+it+\mu)$ for sets $A$ and $B$ with
4 or fewer elements. $D_{S,T}$ and $D_{\zeta,S,T}$ are defined in
the proofs of Lemma \ref{lem:residue} and Lemma
\ref{lem:residuezeta}, respectively. In the following sections we
first compile $D_{S,T}$, $D_{\zeta,S,T}$, then evaluate $J^*$ and
$J_{\zeta,t}^*$ and then assemble these into the correlation
formulas, $R_{N,n}(x_1,\ldots,x_n)$ and
$R_{\zeta,t,n}(x_1,\ldots,x_n)$. Here we define $R$ via
\begin{eqnarray}
&&\int_{U(N)} \sideset{}{^*}\sum_{1\leq j_1,\ldots,j_n\leq N}
f(\theta_{j_1},\ldots,\theta_{j_n}) dX_N\nonumber \\
&&\qquad\qquad = \frac{1}{(2\pi)^n} \int_{[0,2\pi]^n} R_{N,n}
(x_1,\ldots,x_n)f(x_1,\ldots,x_n) dx_1\cdots dx_n
\end{eqnarray}
and
\begin{eqnarray}
&&\sum_{0<\gamma_1\neq \cdots \neq \gamma_n\leq T}
f(\gamma_1,\ldots,\gamma_n) = \frac{1}{(2\pi)^n} \int_{[-T,T]^n}
\frac{1}{T}\int_{I^*} R_{\zeta,t,n}(x_1,\ldots,x_n) dt \nonumber
\\
&&\qquad\qquad\qquad\qquad \times f(x_1,\ldots,x_n)dx_1\cdots
dx_n+O(T^{1/2+\epsilon}),
\end{eqnarray}
(compare these with Theorems \ref{theo:main} and
\ref{theo:mainzeta}).
Recall that
\begin{eqnarray}
z(x)=\frac{1}{(1-e^{-x})},
\end{eqnarray}
\begin{eqnarray}
S(x)=S_N(x)=\frac{\sin \frac{Nx}{2}}{\sin \frac x 2},
\end{eqnarray}
\begin{eqnarray}z_p(x):=(1-p^{-x})^{-1},
\end{eqnarray}
and
\begin{eqnarray}Z_p(A,B)=\prod_{\alpha\in A\atop\beta\in B}
z_p(1+\alpha+\beta)^{-1}.\end{eqnarray}
We also will introduce, as needed below, a number of other
expressions $A(x)$, $B(x)$, etc.; for convenience, these are
listed in Section \ref{sect:auxfunc}.
\subsection{Pair correlation, RMT}
Suppose that the sets $A$ and $B$ have just one element:
$A=\{a\},B=\{b\}$. We have
\begin{eqnarray}
&&J^*(a;b)= D_{\phi,\phi}+D_{a,b},
\end{eqnarray}
where
\begin{eqnarray}
D_{\phi,\phi}=\left(\frac{z'}{z}\right)'(a+b),
\end{eqnarray}
and
\begin{eqnarray}
D_{a,b}=e^{-N(a+b)}z(a+b)z(-a-b).
\end{eqnarray}
Thus,
\begin{eqnarray} \label{eqn:Jab}
&&J^*(a;b)=
\left(\frac{z'}{z}\right)'(a+b)
+e^{-N(a+b)}z(a+b)z(-a-b).
\end{eqnarray}
Then the 2-point correlation function is
\begin{eqnarray}
R_{N,2}(u,v) = N^2+J^*(iu;-iv)+J^*(-iu;iv) =\det
\left(\begin{array}{cc}N&S(u-v)\\S(v-u)& N\end{array}\right).
\end{eqnarray}
\subsection{Pair correlation, $\zeta$}
We have
\begin{eqnarray}
\label{eq:I1ab} J^*_{\zeta,t}(a;b)&=& \Big(\frac{\zeta'}{\zeta}\Big)'(1+a+b)- B(a+b) \nonumber\\
&&\qquad+e^{-\ell(a+b)} \zeta(1+a+b)\zeta(1-a-b) A(a+b)
\end{eqnarray}
where $\ell=\log \frac{t}{2\pi}$ and
\begin{eqnarray}
\label{eq:Aprime} A(x)&=&\prod_p
\frac{(1-\tfrac{1}{p^{1+x}})(1-\tfrac{2}{p}+\tfrac{1}{p^{1+x}})}
{(1-\tfrac{1}{p})^2},
\end{eqnarray}
\begin{eqnarray}
\label{eq:Bprime} B(x)&=&\sum_p\left(\frac{\log
p}{p^{1+x}-1}\right)^2,
\end{eqnarray}
and we conjecture that
\begin{eqnarray} R_{\zeta,t,2}(u,v)=\ell^2+J^*_{\zeta,t}(iu;-iv)+J^*_{\zeta,t}(-iu;iv).
\end{eqnarray}
Further, letting
\begin{eqnarray}
P_1(x)=e^{-\ell x}A(x)\zeta(1+x)\zeta(1-x)
\end{eqnarray}
and
\begin{eqnarray}
P_2(x)=\Big(\frac{\zeta'}{\zeta}\Big)'(1+x) - B(x),
\end{eqnarray}
we have
\begin{eqnarray}
J^*_{\zeta,t}(a;b)=P_1(a+b)+P_2(a+b).
\end{eqnarray}
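The arithmetic factors $A$ and $B$ converge quickly and are easy to evaluate numerically by truncating the Euler product and the prime sum. In particular, each Euler factor of $A$ equals $1$ at $x=0$, so $A(0)=1$. A Python sketch (the helper names are ours):

```python
import math

def primes_up_to(limit):
    # simple sieve of Eratosthenes
    sieve = [True]*(limit + 1)
    sieve[0] = sieve[1] = False
    for n in range(2, int(limit**0.5) + 1):
        if sieve[n]:
            sieve[n*n::n] = [False]*len(sieve[n*n::n])
    return [p for p, is_p in enumerate(sieve) if is_p]

PRIMES = primes_up_to(10**5)  # truncation point

def A(x):
    # truncated Euler product for A(x), eq. (\ref{eq:Aprime})
    prod = 1.0
    for p in PRIMES:
        prod *= (1 - p**(-1 - x))*(1 - 2/p + p**(-1 - x))/(1 - 1/p)**2
    return prod

def B(x):
    # truncated prime sum for B(x), eq. (\ref{eq:Bprime})
    return sum((math.log(p)/(p**(1 + x) - 1))**2 for p in PRIMES)

# each Euler factor of A is exactly 1 at x = 0, so A(0) = 1;
# B is positive and decreasing for real x >= 0
```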
\subsection{Triple correlation, RMT}
In this case we have
\begin{eqnarray}
&&J^*(a;b_1,b_2)= D_{\phi,\phi}+ D_{a,b_1}+D_{a,b_2},
\end{eqnarray}
\begin{eqnarray}
D_{\phi,\phi}=0,
\end{eqnarray}
\begin{eqnarray}
D_{a,b_1}=e^{-N(a+b_1)}z(a+b_1)z(-a-b_1)
\left(\frac{z'}{z}(b_2-b_1)-\frac{z'}{z}(b_2+a) \right),
\end{eqnarray}
and
\begin{eqnarray}
D_{a,b_2}=e^{-N(a+b_2)}z(a+b_2)z(-a-b_2)
\left(\frac{z'}{z}(b_1-b_2)-\frac{z'}{z}(b_1+a) \right).
\end{eqnarray}
Thus,
\begin{eqnarray} \label{eqn:Jabb}
J^*(a;b_1,b_2)
&=&
e^{-N(a+b_1)}z(a+b_1)z(-a-b_1)
\left(\frac{z'}{z}(b_2-b_1)-\frac{z'}{z}(b_2+a) \right)\\
&& + \nonumber e^{-N(a+b_2)}z(a+b_2)z(-a-b_2)
\left(\frac{z'}{z}(b_1-b_2)-\frac{z'}{z}(b_1+a) \right) .
\end{eqnarray}
Then
\begin{eqnarray}R_{N,3}(u,v,w)&=&
N^3 + N\big(J^*(i u; -i v) + J^*(i v; -i u) + J^*(i u; -i
w)\nonumber\\&&\qquad + J^*(i w; -i u) +
J^*(i w; -i v) + J^*(i v; -i w)\big)\nonumber\\
&& \qquad + \big(J^*(-i w; i u, i v) +
J^*(-i v; i w, i u) + J^*(-i u; i w, i v)\nonumber\\
&& \qquad \qquad + J^*(i u; -i w, -i v) +
J^*(i v; -i w, -i u) + J^*(i w; -i u, -i v)\big)\\
&=& \nonumber
\det\left(\begin{array}{ccc}
N& S(u - v)& S(u - w)\\ S(v - u)& N& S(v - w)\\ S(w - u)& S(w -
v)& N\end{array}\right).
\end{eqnarray}
\subsection{Triple correlation for $\zeta$}
The analogue for $\zeta$ is
\begin{eqnarray}
\label{eqn:Jzabb} &&J^*_{\zeta,t}(a;b_1,b_2)= e^{-\ell(a+b_1)}
A(a+b_1)\zeta(1+a+b_1)
\zeta(1-a-b_1)\nonumber \\
&&\qquad\times\bigg(\frac{\zeta'}{\zeta}(1+b_2-b_1)-\frac
{\zeta'}{\zeta}(1+a+b_2)-B_1(a+b_1,a+b_2)\bigg)\nonumber
\\
&& +e^{-\ell(a+b_2)}A(a+b_2) \zeta(1+a+b_2)
\zeta(1-a-b_2)\nonumber \\
&&\qquad\times\bigg(\frac{\zeta'}{\zeta}(1+b_1-b_2)-\frac
{\zeta'}{\zeta}(1+a+b_1)-B_1(a+b_2,a+b_1)\bigg)\\
&&\qquad\qquad\qquad\qquad+Q(a+b_1,a+b_2),\nonumber
\end{eqnarray}
where
\begin{eqnarray}
\label{eq:Qprime} Q(x,y)&=&-\sum_p \frac{\log^3 p}
{p^{2+x+y}(1-\frac{1}{p^{1+x}})( 1-\frac{1}{p^{1+y}})}
\end{eqnarray}
and
\begin{eqnarray}
\label{eq:Pprime} B_1(x,y)= \sum_p\frac{\Big(1-\frac{1}{p^x}\Big)
\Big(1-\frac{1}{p^x}-\frac{1}{p^y} +\frac{1}{p^{1+y}}\Big) \log p}
{\Big(1-\frac{1}{p^{1-x+y}}\Big) \Big(1-\frac{1}{p^{1+y}}\Big)
\Big( 1-\frac{2}{p}+\frac{1}{p^{1+x}}\Big)p^{2-x+y}}.
\end{eqnarray}
Then we conjecture that
\begin{eqnarray} \nonumber R_{\zeta,t,3}(u,v,w)&=& \ell^3 + \ell\big(J^*_{\zeta,t}(i u;
-i v) + J^*_{\zeta,t}(i v; -i u)+ J^*_{\zeta,t}(i u; -i w)
\\ \nonumber
&&\qquad + J^*_{\zeta,t}(i w; -i u) +
J^*_{\zeta,t}(i w; -i v) + J^*_{\zeta,t}(i v; -i w)\big)\\
&& \qquad + \big(J^*_{\zeta,t}(-i w; i u, i v) +
J^*_{\zeta,t}(-i v; i w, i u) + J^*_{\zeta,t}(-i u; i w, i v)\\ \nonumber
&& \qquad \qquad + J^*_{\zeta,t}(i u; -i w, -i v) +
J^*_{\zeta,t}(i v; -i w, -i u) + J^*_{\zeta,t}(i w; -i u, -i v)\big).
\end{eqnarray}
It is convenient to introduce the function
\begin{eqnarray}
P_3(a,b,c)=B_1(a+b,a+c)+\frac{\zeta'}{\zeta}(1+a+c)-\frac{\zeta'}{\zeta}(1+c-b).
\end{eqnarray}
In terms of this, we have
\begin{eqnarray}\nonumber
J^*_{\zeta,t}(\{a\};\{b_1,b_2\})=Q(a+b_1,a+b_2)-P_1(a+b_1)P_3(a,b_1,b_2)-P_1(a+b_2)P_3(a,b_2,b_1).
\end{eqnarray}
\subsection{Quadruple correlation, RMT}
With $A=\{a\},B=\{b_1,b_2,b_3\}$, we have that
\begin{eqnarray} \label{eqn:Jabbb}
&& J^*(a;b_1,b_2,b_3)=D_{\phi,\phi}+
D_{a,b_1}+D_{a,b_2}+D_{a,b_3}\nonumber
\\
&& \qquad = e^{-N(a+b_1)}z(a+b_1)z(-a-b_1)
\left(\frac{z'}{z}(b_2-b_1)-\frac{z'}{z}(b_2+a) \right)
\left(\frac{z'}{z}(b_3-b_1)-\frac{z'}{z}(b_3+a) \right) \nonumber\\
&& \qquad \quad + e^{-N(a+b_2)}z(a+b_2)z(-a-b_2)
\left(\frac{z'}{z}(b_1-b_2)-\frac{z'}{z}(b_1+a)
\right) \left(\frac{z'}{z}(b_3-b_2)-\frac{z'}{z}(b_3+a) \right)
\\
&& \qquad \quad + \nonumber e^{-N(a+b_3)}z(a+b_3)z(-a-b_3)
\left(\frac{z'}{z}(b_1-b_3)-\frac{z'}{z}(b_1+a)
\right) \left(\frac{z'}{z}(b_2-b_3)-\frac{z'}{z}(b_2+a) \right).
\end{eqnarray}
With $A=\{a_1, a_2\},B=\{b_1,b_2\}$, we have
\begin{eqnarray}
&&J^*(a_1,a_2;b_1,b_2)= D_{\phi,\phi}+ D_{a_1,b_1}+D_{a_1,b_2}+
D_{a_2,b_1}+D_{a_2,b_2}+D_{a_1,a_2,b_1,b_2},
\end{eqnarray}
where
\begin{eqnarray}
D_{\phi,\phi}=\left(\frac{z'}{z}\right)'(a_1+b_1)\left(\frac{z'}{z}\right)'(a_2+b_2)
+\left(\frac{z'}{z}\right)'(a_1+b_2)\left(\frac{z'}{z}\right)'(a_2+b_1)
\end{eqnarray}
and
\begin{eqnarray}
D_{a_1,b_1}&=&e^{-N(a_1+b_1)}z(a_1+b_1)z(-a_1-b_1)\\
&&\qquad\nonumber\times\big(H_{\{a_1\},\{b_1\}}(\{a_2\},\{b_2\})
+H_{\{a_1\},\{b_1\}}(\{a_2\})
H_{\{a_1\},\{b_1\}}(\{b_2\})\big)\\
&=& \nonumber e^{-N(a_1+b_1)}z(a_1+b_1)z(-a_1-b_1)
\left(\left(\frac{z'}{z}\right)'(a_2+b_2)\right.
\\
&&\qquad \nonumber\left.+\left(\frac{z'}{z}(a_2-a_1)-\frac{z'}{z}(a_2+b_1)
\right) \left(\frac{z'}{z}(b_2-b_1)-\frac{z'}{z}(b_2+a_1) \right)
\right);
\end{eqnarray}
the other $D_{a_i,b_j}$ are similar. Also,
\begin{eqnarray} &&
D_{a_1,a_2,b_1,b_2}=e^{-N(a_1+a_2+b_1+b_2)}\times \nonumber\\
\nonumber &&\quad \frac{z(a_1+b_1)z(-a_1-b_1)z(a_1+b_2)z(-a_1-b_2)
z(a_2+b_1)z(-a_2-b_1)z(a_2+b_2)z(-a_2-b_2)}
{z(a_1-a_2)z(a_2-a_1)z(b_1-b_2)z(b_2-b_1)}.
\end{eqnarray}
Thus,
\begin{eqnarray}&& \nonumber
J^*(a_1, a_2; b_1, b_2) =
\left(\frac{z'}{z}\right)'(a_1 + b_1) \left(\frac{z'}{z}\right)'(a_2 + b_2)
+ \left(\frac{z'}{z}\right)'(a_1 + b_2) \left(\frac{z'}{z}\right)'(a_2 + b_1)\\
&& \qquad + \nonumber
e^{-N (a_1 + b_1)} z(a_1 + b_1)
z(-a_1 - b_1) \\
&&\qquad \qquad \times \bigg(\left(\frac{z'}{z}\right)'(
a_2 + b_2) + \big(\frac{z'}{z}(a_2 - a_1) - \frac{z'}{z}(a_2 + b_1)\big) \big(\frac{z'}{z}(b_2 - b_1) -
\frac{z'}{z}(b_2 + a_1)\big)\bigg) \\
&&\qquad \nonumber +
e^{-N (a_1 + b_2)} z(a_1 + b_2)
z(-a_1 - b_2) \\ \nonumber
&&\qquad \qquad \times\bigg(\left(\frac{z'}{z}\right)'(
a_2 + b_1)+ \big(\frac{z'}{z}(a_2 - a_1) - \frac{z'}{z}(a_2 + b_2)\big) \big(\frac{z'}{z}(b_1 - b_2) -
\frac{z'}{z}(b_1 + a_1)\big)\bigg)\\
&&\qquad + \nonumber
e^{-N (a_2 + b_1)} z(a_2 + b_1)
z(-a_2 - b_1) \\ \nonumber
&& \qquad \qquad \times \bigg(\left(\frac{z'}{z}\right)'(
a_1 + b_2) + \big(\frac{z'}{z}(a_1 - a_2) - \frac{z'}{z}(a_1 + b_1)\big) \big(\frac{z'}{z}(b_2 - b_1) -
\frac{z'}{z}(b_2 + a_2)\big)\bigg)\\
&&\qquad + \nonumber
e^{-N (a_2 + b_2)} z(a_2 + b_2)
z(-a_2 - b_2) \\ \nonumber
&&\qquad \qquad \times \bigg(\left(\frac{z'}{z}\right)'(
a_1 + b_1) + \big(\frac{z'}{z}(a_1 - a_2) - \frac{z'}{z}(a_1 + b_2)\big) \big(\frac{z'}{z}(b_1 - b_2) -
\frac{z'}{z}(b_1 + a_2)\big)\bigg)\\
&&\qquad + \nonumber
e^{-N (a_1 + a_2 + b_1 + b_2)}z(a_1 + b_1) z(a_1 + b_2) z(a_2 + b_1) z(a_2 +
b_2) \\ \nonumber
&&\qquad \qquad \times \frac{
z(-a_1 - b_1) z(-a_1 - b_2) z(-a_2 - b_1)
z(-a_2 - b_2)}{z(a_1 - a_2)z(a_2 - a_1)z(b_1 - b_2)z(b_2 -
b_1)}.
\end{eqnarray}
Then
\begin{eqnarray}R_{N,4}(u,v,w,y)&=& N^4 + N^2 \big(J^*(i u; -i v) + J^*(i v; -i u) + J^*(i u;
-i w) \nonumber \\
&& \qquad \qquad + J^*(i w; -i u)+
J^*(i w; -i v) + J^*(i v; -i w) \nonumber \\
&&\qquad \qquad + J^*(i y; -i u) + J^*(i u; -i y) +
J^*(i y; -i v) \\
&&\qquad \qquad \nonumber + J^*(i v; -i y) + J^*(i y; -i w) + J^*(i w; -i y)\big) \\
&&\qquad + \nonumber
N \big(J^*(-i w; i u, i v) + J^*(-i v; i w, i u) + J^*(-i u; i w, i v) \\
&&\qquad \qquad + \nonumber
J^*(i u; -i w, -i v) + J^*(i v; -i w, -i u) + J^*(i w; -i u, -i v)\\
&&\qquad \qquad + \nonumber
J^*(-i w; i y, i v) + J^*(-i v; i w, i y) + J^*(-i y; i w, i v) \\
&&\qquad \qquad + \nonumber
J^*(i y; -i w, -i v) + J^*(i v; -i w, -i y) + J^*(i w; -i y, -i v) \\
&&\qquad \qquad + \nonumber
J^*(-i w; i u, i y) + J^*(-i y; i w, i u) + J^*(-i u; i w, i y) \\
&&\qquad \qquad + \nonumber
J^*(i u; -i w, -i y) + J^*(i y; -i w, -i u) + J^*(i w; -i u, -i y) \\
&&\qquad \qquad + \nonumber
J^*(-i y; i u, i v) + J^*(-i v; i y, i u) + J^*(-i u; i y, i v)\\
&&\qquad \qquad + \nonumber
J^*(i u; -i y, -i v) + J^*(i v; -i y, -i u) + J^*(i y; -i u, -i v)\big) \\
&&\qquad + \nonumber
J^*(-i y;i u, i v, i w) + J^*(-i w; i y, i u, i v) +
J^*(-i v; i y, i u, i w) \\
&&\qquad + \nonumber J^*(-i u; i y, i v, i w) + J^*(i y; -i u, -i v, -i w) + J^*(i u; -i y, -i v, -i w) \\
&&\qquad + \nonumber
J^*(i v; -i y, -i u, -i w) + J^*(i w; -i y, -i u, -i v) \\
&&\qquad + \nonumber
J^*(i y, i u; -i v, -i w) + J^*(i y, i v; -i u, -i w) +
J^*(i y, i w; -i u, -i v) \\
&&\qquad + \nonumber J^*(i u, i v; -i y, -i w) +
J^*(i u, i w; -i y, -i v) + J^*(i v, i w; -i y, -i u)
\\&=& \nonumber
\det\left(\begin{array}{cccc}N& S(u - v)& S(u - w)& S(u - y)\\S(v - u)& N& S(v -
w)&
S(v - y)\\ S(w - u)&S(w - v)& N& S(w - y)\\ S(y - u)& S(y -
v)&
S(y - w)& N\end{array}\right)
\end{eqnarray}
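As a quick numerical sanity check of the determinantal form above, one can evaluate the $4\times 4$ determinant directly. The sketch below assumes the standard CUE sine kernel $S(\theta)=\sin(N\theta/2)/\sin(\theta/2)$, with $S(0)=N$ matching the diagonal entries; this form of $S$ is an assumption, since $S$ is not restated in this section. The determinant is invariant under simultaneous permutations of the four arguments and vanishes whenever two arguments coincide, as a correlation function must.

```python
import math

import numpy as np

def S(theta, N):
    """Sine kernel S_N(theta) = sin(N theta/2) / sin(theta/2); S_N(0) = N.
    (Assumed form of the kernel S in the determinant above.)"""
    s = math.sin(theta / 2.0)
    if abs(s) < 1e-12:  # remove the 0/0 singularity at theta = 0
        return N * math.cos(N * theta / 2.0) / math.cos(theta / 2.0)
    return math.sin(N * theta / 2.0) / s

def R4(points, N):
    """Four-level correlation as det[S(x_i - x_j)], with S(0) = N on the diagonal."""
    M = np.array([[S(x - y, N) for y in points] for x in points])
    return float(np.linalg.det(M))

# The determinant is permutation-symmetric and vanishes for coinciding arguments
print(R4([0.1, 0.7, 1.3, 2.0], 5), R4([0.1, 0.1, 1.3, 2.0], 5))
```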
\subsection{Quadruple correlation, $\zeta$}
We conjecture that
\begin{eqnarray} \nonumber R_{\zeta,t,4}(u,v,w,y)&=& \ell^4 +
\ell^2 \big(J^*_{\zeta,t}(i u; -i v) + J^*_{\zeta,t}(i v; -i u) + J^*_{\zeta,t}(i u;
-i w)\\ \nonumber
&& \qquad \qquad + J^*_{\zeta,t}(i w; -i u)+
J^*_{\zeta,t}(i w; -i v) + J^*_{\zeta,t}(i v; -i w) \\
&&\qquad \qquad + J^*_{\zeta,t}(i y; -i u) + J^*_{\zeta,t}(i u; -i y) +
J^*_{\zeta,t}(i y; -i v)\\
&&\qquad \qquad \nonumber + J^*_{\zeta,t}(i v; -i y) + J^*_{\zeta,t}(i y; -i w) + J^*_{\zeta,t}(i w; -i y)\big) \\
&&\qquad + \nonumber
\ell \big(J^*_{\zeta,t}(-i w; i u, i v) + J^*_{\zeta,t}(-i v; i w, i u) + J^*_{\zeta,t}(-i u; i w, i v) \\
&&\qquad \qquad + \nonumber
J^*_{\zeta,t}(i u; -i w, -i v) + J^*_{\zeta,t}(i v; -i w, -i u) + J^*_{\zeta,t}(i w; -i u, -i v)\\
&&\qquad \qquad + \nonumber
J^*_{\zeta,t}(-i w; i y, i v) + J^*_{\zeta,t}(-i v; i w, i y) + J^*_{\zeta,t}(-i y; i w, i v) \\
&&\qquad \qquad + \nonumber
J^*_{\zeta,t}(i y; -i w, -i v) + J^*_{\zeta,t}(i v; -i w, -i y) + J^*_{\zeta,t}(i w; -i y, -i v) \\
&&\qquad \qquad + \nonumber
J^*_{\zeta,t}(-i w; i u, i y) + J^*_{\zeta,t}(-i y; i w, i u) + J^*_{\zeta,t}(-i u; i w, i y) \\
&&\qquad \qquad + \nonumber
J^*_{\zeta,t}(i u; -i w, -i y) + J^*_{\zeta,t}(i y; -i w, -i u) + J^*_{\zeta,t}(i w; -i u, -i y) \\
&&\qquad \qquad + \nonumber
J^*_{\zeta,t}(-i y; i u, i v) + J^*_{\zeta,t}(-i v; i y, i u) + J^*_{\zeta,t}(-i u; i y, i v)\\
&&\qquad \qquad + \nonumber
J^*_{\zeta,t}(i u; -i y, -i v) + J^*_{\zeta,t}(i v; -i y, -i u) + J^*_{\zeta,t}(i y; -i u, -i v)\big) \\
&&\qquad + \nonumber
J^*_{\zeta,t}(-i y;i u, i v, i w) + J^*_{\zeta,t}(-i w; i y, i u, i v) +
J^*_{\zeta,t}(-i v; i y, i u, i w) \\
&&\qquad + \nonumber J^*_{\zeta,t}(-i u; i y, i v, i w) + J^*_{\zeta,t}(i y; -i u, -i v, -i w) + J^*_{\zeta,t}(i u; -i y, -i v, -i w) \\
&&\qquad + \nonumber
J^*_{\zeta,t}(i v; -i y, -i u, -i w) + J^*_{\zeta,t}(i w; -i y, -i u, -i v) \\
&&\qquad + \nonumber
J^*_{\zeta,t}(i y, i u; -i v, -i w) + J^*_{\zeta,t}(i y, i v; -i u, -i w) +
J^*_{\zeta,t}(i y, i w; -i u, -i v) \\
&&\qquad + \nonumber J^*_{\zeta,t}(i u, i v; -i y, -i w) +
J^*_{\zeta,t}(i u, i w; -i y, -i v) + J^*_{\zeta,t}(i v, i w; -i y, -i u)
\end{eqnarray}
where the relevant $J^*_{\zeta,t}$ are now described.
We have
\begin{eqnarray}&&
J^*_{\zeta,t}(\{a\};\{b_1,b_2,b_3\})=-\sum_p\frac{z_p(1+a+b_1)z_p(1+a+b_2)z_p(1+a+b_3)\log^4 p}{p^{3+3a+b_1+b_2+b_3}}\\
&&\qquad \qquad \qquad + \nonumber
W_1(a,b_1;b_2,b_3)+W_1(a,b_2;b_1,b_3)+W_1(a,b_3;b_1,b_2)
\end{eqnarray}
where
\begin{eqnarray}
W_1(a,b_1;b_2,b_3)=P_1(a+b_1)(P_3(a,b_1,b_2)P_3(a,b_1,b_3)-B_2(a,b_1;b_2,b_3)),
\end{eqnarray}
with
\begin{eqnarray}
B_2(a,b_1;b_2,b_3)=\sum_p \frac{(p-1) p^{2 b_1} \left(p^{a+b_1}-1\right) \left(p^{a+b_1}-p\right) \log ^2 p}{\left(-2 p^{a+b_1}+p^{a+b_1+1}+1\right)^2
\left(p^{b_1}-p^{b_2+1}\right) \left(p^{b_1}-p^{b_3+1}\right)}.
\end{eqnarray}
We also have
\begin{eqnarray} \nonumber &&J^*_{\zeta,t}(\{a_1,a_2\};\{b_1,b_2\})
=P_2(a_1+b_1)P_2(a_2+b_2)+P_2(a_1+b_2)P_2(a_2+b_1)\\
&&\qquad -B_4(a_1,a_2;b_1,b_2) \\&&+ \nonumber
e^{-\ell(a_1+a_2+b_1+b_2)}A^*(a_1,a_2,b_1,b_2)
\frac{Z_{\zeta}(\{a_1,a_2\},\{b_1,b_2\})Z_{\zeta}(\{-a_1,-a_2\},\{-b_1,-b_2\})}
{Z_{\zeta}^\dagger(\{a_1,a_2\},\{-a_1,-a_2\})Z_{\zeta}^\dagger
(\{b_1,b_2\},\{-b_1,-b_2\})}
\\ \nonumber
&&\qquad +W(a_1,b_1;a_2,b_2)+W(a_1,b_2;a_2,b_1)+W(a_2,b_1;a_1,b_2)+W(a_2,b_2;a_1,b_1)
\end{eqnarray}
where
\begin{eqnarray}&&
A^*(a_1,a_2,b_1,b_2)=\prod_p\frac{Z_p(\{a_1,a_2\},\{b_1,b_2\})Z_p(\{-a_1,-a_2\},\{-b_1,-b_2\})}
{Z_p(\{a_1,a_2\},\{-a_1,-a_2\})Z_p(\{b_1,b_2\},\{-b_1,-b_2\})}\\
&& \nonumber \qquad \times p^{-a_1-a_2-b_1-b_2} \bigg( 1+\frac
{z_p(1-a_1-b_1)z_p(1-a_2-b_1)z_p(b_2-b_1)}{z_p(1)z_p(-a_1-b_1)z_p(-a_2-b_1)z_p(1+b_2-b_1)} \\
&& \qquad \qquad \nonumber +\frac {z_p(1-a_1-b_2)z_p(1-a_2-b_2)z_p(b_1-b_2)}
{ z_p(1)z_p(-a_1-b_2)z_p(-a_2-b_2)z_p(1+b_1-b_2)} \bigg),
\end{eqnarray}
and
\begin{eqnarray} &&
W(a_1,b_1;a_2,b_2)=P_1(a_1+b_1)
\bigg\{P_2(a_2+b_2)-B_3(a_1,a_2;b_1,b_2)\\
&& \qquad \qquad \nonumber + P_3(a_1,b_1,b_2)P_3(b_1,a_1,a_2)\bigg\}
\end{eqnarray}
with
\begin{eqnarray}&&
B_3(a_1,a_2;b_1,b_2)=\sum_p\log^2p \bigg(\frac{(p-1)^2
\left(p^{a_1+b_1}-1\right)^2
p^{a_1+b_1}}{\left(p^{a_1}-p^{a_2+1}\right) \left(-2
p^{a_1+b_1}+p^{a_1+b_1+1}+1\right)^2 \left(p^{b_1}-p^{b_2+1}\right)}\\
&& \qquad \nonumber +\frac{C(a_1,a_2;b_1,b_2)}{\left(p^{a_1}-p^{a_2+1}\right) \left(-2
p^{a_1+b_1}+p^{a_1+b_1+1}+1\right) \left(p^{b_2+1}-p^{b_1}\right)
\left(p^{a_2+b_2+1}-1\right)}\\
&& \qquad \qquad \nonumber +\frac{1}{p^{a_2+b_2+1}-1} \bigg),
\end{eqnarray}
\begin{eqnarray} &&
C(a_1,a_2;b_1,b_2)=-p^{a_1+b_1}+2
p^{a_1+b_1+1}-p^{a_2+b_1+2}-p^{2 a_1+2 b_1+1}+p^{a_1+a_2+2
b_1+1}-p^{a_1+b_2+2}\nonumber \\
&&\qquad +p^{a_2+b_2+2}+p^{2 a_1+b_1+b_2+1}-2
p^{a_1+a_2+b_1+b_2+2}+p^{a_1+a_2+b_1+b_2+3};
\end{eqnarray}
and
\begin{eqnarray} \nonumber
B_4(a_1,a_2;b_1,b_2)=\sum_p \frac{
(3 - p^{1 + a_1 + b_1} - p^{1 + a_2 + b_1} -
p^{1 + a_1 + b_2} - p^{1 + a_2 + b_2} +
p^{2 + a_1 + a_2 + b_1 + b_2})\log^4 p}{(
p^{1 + a_1 + b_1}-1)(
p^{1 + a_2 + b_1}-1)(
p^{1 + a_1 + b_2}-1)(
p^{1 + a_2 + b_2}-1)}.
\end{eqnarray}
\subsection{Auxiliary functions}\label{sect:auxfunc}
For ease of reference, we list here the various auxiliary functions introduced in this section.
\begin{eqnarray}
\label{eq:AprimeA} A(x)&=&\prod_p
\frac{(1-\tfrac{1}{p^{1+x}})(1-\tfrac{2}{p}+\tfrac{1}{p^{1+x}})}
{(1-\tfrac{1}{p})^2},
\end{eqnarray}
\begin{eqnarray}&&
A^*(a_1,a_2,b_1,b_2)=\prod_p\frac{Z_p(\{a_1,a_2\},\{b_1,b_2\})Z_p(\{-a_1,-a_2\},\{-b_1,-b_2\})}
{Z_p(\{a_1,a_2\},\{-a_1,-a_2\})Z_p(\{b_1,b_2\},\{-b_1,-b_2\})}\\
&&\qquad \nonumber \times p^{-a_1-a_2-b_1-b_2} \bigg( 1+\frac
{z_p(1-a_1-b_1)z_p(1-a_2-b_1)z_p(b_2-b_1)}{z_p(1)z_p(-a_1-b_1)z_p(-a_2-b_1)z_p(1+b_2-b_1)} \\
&& \qquad \qquad \nonumber +\frac {z_p(1-a_1-b_2)z_p(1-a_2-b_2)z_p(b_1-b_2)}
{ z_p(1)z_p(-a_1-b_2)z_p(-a_2-b_2)z_p(1+b_1-b_2)} \bigg).
\end{eqnarray}
\begin{eqnarray}
\label{eq:BprimeA} B(x)&=&\sum_p\left(\frac{\log
p}{p^{1+x}-1}\right)^2,
\end{eqnarray}
\begin{eqnarray}
\label{eq:PprimeA} B_1(x,y)= \sum_p\frac{\Big(1-\frac{1}{p^x}\Big)
\Big(1-\frac{1}{p^x}-\frac{1}{p^y} +\frac{1}{p^{1+y}}\Big) \log p}
{\Big(1-\frac{1}{p^{1-x+y}}\Big) \Big(1-\frac{1}{p^{1+y}}\Big)
\Big( 1-\frac{2}{p}+\frac{1}{p^{1+x}}\Big)p^{2-x+y}}.
\end{eqnarray}
\begin{eqnarray}
B_2(a,b_1;b_2,b_3)=\sum_p \frac{(p-1) p^{2 b_1} \left(p^{a+b_1}-1\right) \left(p^{a+b_1}-p\right) \log ^2 p}{\left(-2 p^{a+b_1}+p^{a+b_1+1}+1\right)^2
\left(p^{b_1}-p^{b_2+1}\right) \left(p^{b_1}-p^{b_3+1}\right)}.
\end{eqnarray}
\begin{eqnarray}&&
B_3(a_1,a_2;b_1,b_2)=\sum_p\log^2p \bigg(\frac{(p-1)^2
\left(p^{a_1+b_1}-1\right)^2
p^{a_1+b_1}}{\left(p^{a_1}-p^{a_2+1}\right) \left(-2
p^{a_1+b_1}+p^{a_1+b_1+1}+1\right)^2 \left(p^{b_1}-p^{b_2+1}\right)}\\
&& \qquad \nonumber +\frac{C(a_1,a_2;b_1,b_2)}{\left(p^{a_1}-p^{a_2+1}\right) \left(-2
p^{a_1+b_1}+p^{a_1+b_1+1}+1\right) \left(p^{b_2+1}-p^{b_1}\right)
\left(p^{a_2+b_2+1}-1\right)}\\
&& \qquad \qquad \nonumber +\frac{1}{p^{a_2+b_2+1}-1} \bigg)
\end{eqnarray}
\begin{eqnarray} \nonumber
B_4(a_1,a_2;b_1,b_2)=\sum_p \frac{
(3 - p^{1 + a_1 + b_1} - p^{1 + a_2 + b_1} -
p^{1 + a_1 + b_2} - p^{1 + a_2 + b_2} +
p^{2 + a_1 + a_2 + b_1 + b_2})\log^4 p}{(
p^{1 + a_1 + b_1}-1)(
p^{1 + a_2 + b_1}-1)(
p^{1 + a_1 + b_2}-1)(
p^{1 + a_2 + b_2}-1)}.
\end{eqnarray}
\begin{eqnarray} && \nonumber
C(a_1,a_2;b_1,b_2)=-p^{a_1+b_1}+2
p^{a_1+b_1+1}-p^{a_2+b_1+2}-p^{2 a_1+2 b_1+1}+p^{a_1+a_2+2
b_1+1}-p^{a_1+b_2+2}\\
&&\qquad +p^{a_2+b_2+2}+p^{2 a_1+b_1+b_2+1}-2
p^{a_1+a_2+b_1+b_2+2}+p^{a_1+a_2+b_1+b_2+3};
\end{eqnarray}
\begin{eqnarray}
P_1(x)=e^{-\ell x}A(x){\zeta}(1+x){\zeta}(1-x)
\end{eqnarray}
\begin{eqnarray}
P_2(x)=\Big(\frac{{\zeta}'}{{\zeta}}\Big)'(1+x) - B(x),
\end{eqnarray}
\begin{eqnarray}
P_3(a,b,c)=B_1(a+b,a+c)+\frac{{\zeta}'}{{\zeta}}(1+a+c)-\frac{{\zeta}'}{{\zeta}}(1+c-b).
\end{eqnarray}
\begin{eqnarray}
\label{eq:QprimeA} Q(x,y)&=&-\sum_p \frac{\log^3 p}
{p^{2+x+y}(1-\frac{1}{p^{1+x}})( 1-\frac{1}{p^{1+y}})}
\end{eqnarray}
\begin{eqnarray} &&
W(a_1,b_1;a_2,b_2)=P_1(a_1+b_1)
\bigg\{P_2(a_2+b_2)-B_3(a_1,a_2;b_1,b_2)\\
&& \qquad \qquad + P_3(a_1,b_1,b_2)P_3(b_1,a_1,a_2)\bigg\}\nonumber
\end{eqnarray}
\begin{eqnarray}
W_1(a,b_1;b_2,b_3)=P_1(a+b_1)(P_3(a,b_1,b_2)P_3(a,b_1,b_3)-B_2(a,b_1;b_2,b_3)).
\end{eqnarray}
\section{Introduction}
The design and synthesis of materials with a strong magnetic exchange bias (EB) has been an intense research activity for the past several decades~\cite{Meiklejohn1957}, continuing to the present day~\cite{Wang2011, Nayak2015, Saha2019, Tian2021}, due to the potential applications of such materials in spintronic devices~\cite{Parkin2004, Hirohata2014}. There exist several studies on designing multilayered and core-shell structures to generate an effectively large exchange bias at the interface between a ferromagnetic (FM) and an antiferromagnetic (AFM) layer~\cite{Inderhees2008, Lage2012, Lavorato2017, Perzanowski2017, Song2017}. Several bulk materials showing a large exchange bias have also been synthesised~\cite{Wisniewski2017, Belik2013, Fertman2020}. However, most of these bulk materials are either nanocomposites or doped ternary compounds with complicated crystal structures. In this paper, we discuss the exchange bias in a transition-metal monochalcogenide having one of the simplest crystal structures.
\begin{figure}[ht]
\centering
\includegraphics[width=0.45\textwidth]{Fig1}
\caption{Powder X-ray diffraction pattern and Rietveld refinement of Cr$_{0.79}$Se, confirming the NiAs-type crystal structure with space group P6$_3$/mmc (194).}
\label{1}
\end{figure}
Transition-metal monochalcogenides with the chemical formula MX (M= Fe, Cr; X=S, Se, Te) are very versatile materials due to their diverse structural, electronic, and magnetic properties. For instance, Fe$_x$Se is a non-magnetic high-temperature superconductor with a T$_c$ of 8 K, having a tetragonal crystal structure for x$\geq$1~\cite{Hsu2008}, while it is an antiferromagnetic metal having a hexagonal crystal structure for x$<$1~\cite{Li2016}. FeTe, on the other hand, is always a tetragonal antiferromagnetic system with stripe order~\cite{Maheshwari2015, Haenke2017}. Further, FeS is found to be a non-magnetic superconductor with a tetragonal crystal structure~\cite{Zhang2017}. Similarly to Fe$_x$X, the Cr$_x$X (X=S, Se, Te) systems are also very diverse in their structural, electronic, and magnetic properties~\cite{Chen2019, Yang2019, Li2019, Sun2020, Coughlin2020, Huang2021}. For instance, Cr$_x$Te is a ferromagnetic half-metal and can exist in any of the zinc-blende (ZB)~\cite{Sanyal2003}, rock-salt (RS)~\cite{Liu2010}, or NiAs-type~\cite{Lotgering1957} crystal structures. Cr$_{x}$S~\cite{Kamigaichi1960} and Cr$_{x}$Se~\cite{Corliss1961}, in contrast, are mostly known for their antiferromagnetic nature in the NiAs-type crystal structure. Some reports have even suggested Cr$_{x}$Se to be a spin-glass-type magnetic system~\cite{Li2006} and Cr$_{x}$S to be a ferrimagnetic metal~\cite{Konno1988}.
\begin{figure*}[ht]
\centering
\includegraphics[width=0.9\textwidth]{Fig2}
\caption{(a) Temperature-dependent powder X-ray diffraction patterns of Cr$_{0.79}$Se overlapped with the Rietveld refinements. (b) Enlarged XRD patterns for the (1$\bar{1}$1) and (01$\bar{3}$) reflections. (c) Plot of the lattice constants $a$, $b$, and $c$ as a function of temperature. (d) Plot of the cell volume ($V$) as a function of temperature.}
\label{2}
\end{figure*}
In this paper, we report a comprehensive study of the structural, electrical transport, and magnetic properties of Cr$_{0.79}$Se in polycrystalline form. To date, not many experimental studies are available on this system, despite it being a non-collinear AFM metal~\cite{Corliss1961}. Recently, it was suggested that antiferromagnetic metals with a non-collinear spin texture are promising candidates for the anomalous Hall effect induced by the Berry curvature~\cite{Gan2016, Yan2017, Li2020}. With this motivation, we reinvestigated the structural, electrical, and magnetic properties of this system. Our X-ray diffraction (XRD) studies demonstrate that Cr$_{0.79}$Se has the NiAs-type structure. At higher temperatures, we noticed a shift in certain XRD peak positions, reflecting a change in the lattice parameters with temperature. In addition, from the XRD measurements we observe that the NiAs-type structure is stable up to a sample temperature as high as 600$^o$C. Electrical resistivity studies show Fermi-liquid-like metallic behaviour at low temperatures ($<41$ K), while at intermediate temperatures (41-200 K) the resistivity changes sublinearly with temperature. Further, at elevated temperatures ($>200$ K) the rate of change of resistivity rapidly decreases with temperature. Magnetic property studies suggest a transition from a paramagnetic to an antiferromagnetic phase at a N\'{e}el temperature of 225 K. Further, below 100 K, weak ferromagnetism is found to coexist with the antiferromagnetism.
\begin{figure}[ht]
\centering
\includegraphics[width=0.45\textwidth]{Fig3}
\caption{Temperature-dependent electrical resistivity of Cr$_{0.79}$Se. The black solid curve represents a T$^2$-law fit up to 41 K and the blue solid curve a sublinear fit between 41 and 200 K. The inset shows the plot of $\rho$ $vs.$ $T^2$; the green line in the inset is a linear fit to the data. The bottom panel shows d$\rho$/dT $vs.$ T.}
\label{3}
\end{figure}
\section{Results}
Figure~\ref{1} shows the Rietveld refinement of the XRD data of Cr$_{0.79}$Se measured at room temperature (RT). It is evident from the XRD data that Cr$_{0.79}$Se crystallizes into the NiAs-type crystal structure with the hexagonal space group P6$_3$/mmc (194). The lattice parameters estimated from the Rietveld refinement are $a$=$b$=3.6811(3) Å and $c$=6.0198(6) Å. No additional impurity peaks are noticed in the XRD data, demonstrating the high phase purity of the sample. Further, we performed XRD measurements as a function of temperature from RT up to 600$^o$C, as shown in Figure~\ref{2}(a). From the temperature-dependent XRD data we notice that the peak positions shift with temperature. To demonstrate the peak shift, in Figure~\ref{2}(b) we fixed the peak position of the (1$\bar{1}$1) reflection to reveal a significant shift in the peak position of the (01$\bar{3}$) reflection. In order to elucidate the structural changes with temperature, we performed Rietveld refinements of the XRD data at every measured temperature. The obtained lattice parameters are plotted as a function of temperature in Figure~\ref{2}(c). We find that the lattice parameter $a(b)$ is almost constant, changing only from 3.681 Å to 3.712 Å, while the lattice parameter $c$ increases substantially from 6.024 Å to 6.113 Å in going from RT to 600$^o$C. Consequently, the unit cell volume also increases with temperature, as shown in Figure~\ref{2}(d).
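The refined parameters can be turned into the cell volume directly: for a hexagonal cell, $V=(\sqrt{3}/2)\,a^2c$. A minimal sketch using the lattice constants quoted above (in Å) reproduces a volume expansion of roughly 3$\%$ between RT and 600$^o$C:

```python
import math

def hex_volume(a, c):
    """Unit-cell volume of a hexagonal lattice: V = (sqrt(3)/2) * a^2 * c."""
    return math.sqrt(3.0) / 2.0 * a * a * c

v_rt = hex_volume(3.681, 6.024)    # lattice constants at RT quoted above (angstrom)
v_hot = hex_volume(3.712, 6.113)   # lattice constants at 600 C
expansion = 100.0 * (v_hot / v_rt - 1.0)
print(round(v_rt, 1), round(v_hot, 1), round(expansion, 1))  # prints: 70.7 72.9 3.2
```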
\begin{figure}[hb]
\centering
\includegraphics[width=0.45\textwidth]{Fig4}
\caption{Magnetization as a function of temperature plotted for the zero-field-cooled (ZFC) and field-cooled (FC) modes. The inverse susceptibility as a function of temperature is plotted for the FC mode. The green curve is a fit of the susceptibility with the Curie-Weiss law.}
\label{4}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.45\textwidth]{Fig5}
\caption{Magnetization (M) as a function of applied magnetic field (H) plotted for the ZFC mode at sample temperatures of 10, 100, and 150 K. The top-left inset shows a fit of the M-H curve using Eq.~\ref{eq2}. The bottom-right inset shows enlarged data around zero magnetic field to show the hysteresis of the M-H curve at 10 K, which disappears above 100 K.}
\label{5}
\end{figure}
Figure~\ref{3} shows the temperature-dependent electrical resistivity of Cr$_{0.79}$Se measured within the temperature range of 3.1 to 310 K. We observe that at low temperatures (T$<$41 K) the data fit nicely to the Fermi-liquid form of the resistivity ($\rho \propto T^2$). Beyond 41 K, the data follow a sublinear behaviour ($\propto T^{0.62}$) with temperature up to 200 K. The inset in Fig.~\ref{3} confirms the Fermi-liquid nature of the resistivity, as one can notice a perfectly linear relation between $\rho$ and $T^2$ (for T up to 41 K). The bottom panel of Fig.~\ref{3} presents the plot of d$\rho$/dT $vs$ T. We notice that d$\rho$/dT increases with T up to 41 K, decreases with T between 41 K and 302 K, and becomes negative beyond 302 K, hinting at an electronic phase transition at this temperature. Next, in Figure~\ref{4}, we show the magnetization (M) as a function of temperature measured under the zero-field-cooled (ZFC) and field-cooled (FC) modes at an applied external magnetic field of 500 Oe. In Fig.~\ref{4} we further show the inverse magnetic susceptibility (1/$\chi$) as a function of temperature measured in the FC mode. As can be seen from Fig.~\ref{4}, at higher temperatures (T$>$225 K) the susceptibility follows the Curie-Weiss law,
\begin{equation}\label{eq1}
\chi (T)=\frac{C}{T-\Theta}
\end{equation}
Here, $C$ is the Curie constant and $\Theta$ is the Curie-Weiss temperature. From the fitting, we found a Curie-Weiss temperature of $\Theta$=-300$\pm$2 K and a Curie constant of $C$= 24.5$\pm$0.5 Oe.gm.emu$^{-1}$.K$^{-1}$. The negative Curie-Weiss temperature suggests dominant antiferromagnetic interactions in the system. We have further calculated the effective magnetic moment of the Cr ion in the paramagnetic regime using the formula $\mu_{eff}=2.84 \sqrt{MC}$ $\mu_B=5.08\mu_B$/Cr~\cite{Makovetskii1978}. In addition, we observe a deviation from the linear dependence of 1/$\chi$ on T below 225 K, suggesting a magnetic transition from a paramagnetic to an antiferromagnetic phase.
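The Curie-Weiss parameters follow from a straight-line fit to $1/\chi$ versus $T$ in the paramagnetic range, since Eq.~(\ref{eq1}) gives $1/\chi=(T-\Theta)/C$. The sketch below illustrates the procedure on synthetic, noiseless data; the parameter values used are hypothetical placeholders, not the measured ones.

```python
import numpy as np

# Hypothetical Curie-Weiss parameters for the demonstration (not the measured ones)
C_true, theta_true = 0.027, -300.0          # emu K/(g Oe), K
T = np.linspace(230.0, 330.0, 50)           # paramagnetic range, K
chi = C_true / (T - theta_true)             # Eq. (1)

# 1/chi = T/C - theta/C is linear in T, so a degree-1 fit recovers both parameters
slope, intercept = np.polyfit(T, 1.0 / chi, 1)
C_fit = 1.0 / slope
theta_fit = -intercept * C_fit
print(C_fit, theta_fit)
```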
\begin{figure*}
\centering
\includegraphics[width=0.8\textwidth]{Fig6}
\caption{(a) M-H loops plotted for the FC mode at various sample temperatures. The inset in (a) shows enlarged data around zero magnetic field to show the hysteresis of the M-H curves. (b) M-H curves plotted for the FC mode at various applied magnetic fields. The bottom-right inset in (b) shows enlarged data around zero magnetic field at various applied fields. The bottom-left inset in (b) shows enlarged data for the applied field of 9 T. The top-left inset in (b) shows the coercive field as a function of applied field. (c) Exchange bias (H$_{EB}$) and coercive field (H$_C$) plotted as a function of temperature, measured with a 3 T magnetic field in the FC mode. (d) Plot of H$_{EB}$ $vs.$ the number of M-H loop cycles.}
\label{6}
\end{figure*}
Figure~\ref{5} depicts $M-H$ curves taken in the ZFC mode with an applied magnetic field of 1.5 T at temperatures of 10, 100, and 150 K. From the inset shown at the bottom of Fig.~\ref{5}, we observe hysteresis in the $M-H$ loop with a coercive field of $H_C$=410 Oe when measured at 10 K; the hysteresis disappears at 100 K. The presence of a hysteresis loop suggests ferromagnetic order at low temperature. Also, the magnetization does not saturate up to the applied field of 1.5 T, suggesting a strong AFM order as well in this system. In order to quantify the strength of the ferromagnetism, as shown in the top-left inset of Fig.~\ref{5}, we fitted the M-H curve with Eq.~\ref{eq2}~\cite{Patel2018} to estimate the saturation magnetization M$_s$=1.3$\pm$0.1 emu/g and the remanent magnetization M$_r$=0.09$\pm$0.01 emu/g, while holding the experimental coercive field H$_c$=410 Oe and susceptibility $\chi$=9.1 $\times$ 10$^{-5}$ emu/(Oe-g) fixed. These values indicate weak ferromagnetism in Cr$_{0.79}$Se.
\begin{equation}\label{eq2}
M(H)= \frac{2M_s}{\pi} tan^{-1} [(\frac{H}{H_c}\pm1)~tan(\frac{\pi M_r}{2M_s})]+\chi H
\end{equation}
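Two limits of Eq.~(\ref{eq2}) provide a quick consistency check: with $\chi=0$, the branch with the $+$ sign crosses $M=0$ exactly at $H=-H_c$, and both branches give $M(0)=\pm M_r$, since $\arctan(\tan x)=x$ for $\pi M_r/2M_s<\pi/2$. The sketch below implements one branch; which sign corresponds to the ascending or descending field sweep is an assumption here.

```python
import math

def M_branch(H, Ms, Mr, Hc, chi, sign):
    """One branch of Eq. (2); sign = +1 or -1 selects the sign inside the arctangent.
    Which sign belongs to the ascending/descending sweep is an assumption here."""
    return (2.0 * Ms / math.pi) * math.atan(
        (H / Hc + sign) * math.tan(math.pi * Mr / (2.0 * Ms))
    ) + chi * H

Ms, Mr, Hc = 1.3, 0.09, 410.0  # values quoted in the text (emu/g, emu/g, Oe)
print(M_branch(0.0, Ms, Mr, Hc, 0.0, +1))   # remanence, equals +Mr
print(M_branch(-Hc, Ms, Mr, Hc, 0.0, +1))   # coercive point, equals 0
```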
Figure~\ref{6}(a) depicts $M-H$ curves measured in the FC mode at various sample temperatures using an applied magnetic field of 3 T. The inset at the bottom right of Fig.~\ref{6}(a) demonstrates a significant shift of the M-H hysteresis loop, hinting at the presence of exchange bias in this sample. Figure~\ref{6}(b) depicts M-H curves in the FC mode at a fixed sample temperature of 10 K for applied magnetic fields of 3 T, 6 T, and 9 T. The bottom-right inset of Fig.~\ref{6}(b) is a zoomed-in image of the $M-H$ loops, in which one can notice hysteresis at all applied magnetic fields. Further, the bottom-left inset of Fig.~\ref{6}(b) is a zoomed-in image of the $M-H$ loop measured with a magnetic field of 9 T, demonstrating the presence of hysteresis between 6 and 9 T. Figure~\ref{6}(c) depicts the exchange bias ($H_{EB}$) and coercivity ($H_C$) plotted as a function of temperature. From Fig.~\ref{6}(c) we observe that both $H_{EB}$ and $H_{C}$ decrease with increasing temperature and become negligible above 100 K within the instrumental error of $\pm$20 Oe. Figure~\ref{6}(d) depicts the training effect on the exchange bias. We observe a significant decrease (15$\%$) in the exchange bias after repeating four cycles of $M-H$ loops. The observation of exchange bias under the FC mode but not under the ZFC mode is in line with the phenomenon of exchange bias, as explained for various magnetic alloys and compounds~\cite{Giri2011}.
\section{Discussions}
The studied sample, Cr$_{0.79}$Se, is found to adopt the NiAs-type structure with hexagonal crystal symmetry, as demonstrated in Figs.~\ref{1} and ~\ref{2}, which is consistent with the crystal structure of stoichiometric CrSe~\cite{Tsubokawa1956}. On the other hand, the crystal symmetries of the off-stoichiometric Cr$_x$Se systems known so far deviate from that of the stoichiometric system. For instance, Cr$_{0.67}$Se possesses trigonal crystal symmetry with the space group R$\bar{3}$H (148)~\cite{Adachi1994}, while Cr$_{0.875}$Se, Cr$_{0.75}$Se, and Cr$_{0.625}$Se possess monoclinic crystal symmetry with the space group C1$_2$/m1 (12)~\cite{Chevreton1961, Blachnik1987, Sleight1969}. Most importantly, we identify that Cr$_{0.79}$Se is the first known off-stoichiometric composition crystallizing in the P6$_3$/mmc space group. Further, from the temperature-dependent XRD data we notice peak shifting with temperature. Such a peak shift with temperature generally leads to a change in the lattice parameters while still preserving the crystal symmetry. This is supported by the Rietveld refinement (see Fig.~\ref{2}). Thus, our studies confirm that the NiAs-type crystal structure of Cr$_{0.79}$Se is stable from room temperature up to 600$^o$C. Though a Jahn-Teller distortion has not been observed so far in any of the off-stoichiometric Cr$_x$Se compositions, in the case of stoichiometric CrSe a Jahn-Teller distortion has been suggested for temperatures below 305$^o$C~\cite{Masumoto1962}. However, we do not observe any signature of a Jahn-Teller distortion in our temperature-dependent XRD data for the studied compound.
Next, from the electrical resistivity data shown in Fig.~\ref{3}(a) it is clear that Cr$_{0.79}$Se is a Fermi-liquid-type metal below 41 K. Above 41 K, the system deviates to a non-Fermi-liquid-type metal, showing a sublinear dependence of the resistivity on temperature up to 200 K. The same is confirmed from the $d\rho/dT$ $vs$ T curve [see the bottom of Fig.~\ref{3}(a)]. From this curve we observe that $d\rho/dT$ increases with temperature up to 41 K and then decreases with $T$, reaching zero at 302 K. Above 302 K, $d\rho/dT$ becomes negative and remains so up to the highest measured temperature of 310 K. This observation hints at a metal-insulator (MI) transition above 302 K in Cr$_{0.79}$Se. MI transitions have been noticed in some other transition-metal monochalcogenides as well. For instance, in Fe$_{0.875}$Se the MI transition is observed at 100 K due to a proximity effect of magnetic moment reorientation~\cite{Li2016}. Note here that similar resistivity data have been reported earlier for Cr$_{0.67}$Se single crystals~\cite{Wu2020}, with a sublinear behaviour of the resistivity up to 175 K and a change of the resistivity slope above this temperature. However, the antiferromagnetic ordering in Cr$_{0.67}$Se is found only below 60 K, while in our sample the antiferromagnetic ordering is found at 225 K. Another report, on Cr$_{0.68}$Se single crystals, also suggested an antiferromagnetic ordering below 42 K; but unlike Cr$_{0.67}$Se, which is a metal up to 300 K, Cr$_{0.68}$Se is found to be a small-gap semiconductor ($E_g$=3.9 meV)~\cite{Yan2017} down to the lowest measured temperature.
Thus, the electrical properties of the Cr$_x$Se systems seem to be highly sensitive to the Cr concentration, which may directly affect the charge carrier density near the Fermi level and the disorder of the system, rather than to the magnetic interactions, as we do not find any one-to-one correlation between the magnetic transition temperatures (see Figs.~\ref{4}-\ref{6}) and the temperature-dependent resistivity (see Fig.~\ref{3}).
Finally, coming to the important observations of this study, we found weak ferromagnetism in Cr$_{0.79}$Se below T$_C$=100 K that coexists with the AFM phase. As a result, an exchange bias is observed in this system below T$_C$. Usually, CrSe is known for its non-collinear AFM phase. Recently, however, one report showed weak ferromagnetism in Cr$_{0.67}$Se along with the AFM phase below 50 K~\cite{Wu2020}, although the presence of exchange bias in Cr$_{0.67}$Se was not discussed in detail there. Thus, we report the exchange bias for the first time in these systems. Another report, on neutron diffraction studies of Cr$_{0.67}$Se, suggests two magnetic phases, a non-collinear AFM phase at low temperature ($<$ 38 K) and a collinear AFM phase at higher temperature (38 K$<$T$<$45 K), but did not find ferromagnetism down to temperatures as low as 6 K~\cite{Adachi1994}. On the other hand, the estimated effective paramagnetic moment of 5.08$\mu_B$/Cr in our system is slightly higher than the effective paramagnetic moment of 4.5$\mu_B$/Cr in stoichiometric CrSe~\cite{Lotgering1957}. This is most likely because of the mixed Cr valence states in the off-stoichiometric compositions~\cite{Andresen1970, Liu2017}. Although a spin-glass-like magnetic phase was earlier observed due to frustrated magnetic moments between the AFM and FM phases~\cite{Li2006}, in our system the $M-T$ data (see Fig.~\ref{4}) show no signature of spin-glass-like behaviour below T$_C$.
\section{Conclusions}
In summary, we have systematically studied the structural, electrical transport, and magnetic properties of the antiferromagnetic transition-metal monochalcogenide Cr$_{0.79}$Se. We identify that Cr$_{0.79}$Se crystallizes in the same NiAs-type hexagonal crystal structure as stoichiometric CrSe, unlike the other off-stoichiometric compositions, which form with differing crystal symmetries. The resistivity data suggest Cr$_{0.79}$Se to be a Fermi-liquid-type metal at low temperatures, while at higher temperatures the resistivity depends sublinearly on temperature. Above room temperature, the resistivity data hint at an MI transition, but more studies are needed to confirm this. Magnetic measurements suggest a transition from a paramagnetic to an antiferromagnetic phase at a N\'{e}el temperature of 225 K. Importantly, weak ferromagnetism is noticed below 100 K along with the antiferromagnetism. As a result, we notice a significant exchange bias below 100 K due to the interaction between the ferro- and antiferromagnetic phases.
\section{Methods}
Samples of Cr$_{0.79}$Se were prepared by the standard solid-state reaction method~\cite{Tsubokawa1956} from high-purity powders of chromium (4N, Alfa Aesar) and selenium (5N, Alfa Aesar) mixed in the appropriate ratio. The well-mixed powders were then heated in a muffle furnace at 1000$^o$C for 48 hours. The final sample was pressed into pellet form and heated again at 1000$^o$C for another 48 hours. The as-prepared polycrystalline sample was structurally characterized by powder X-ray diffraction (XRD) using the Cu K$\alpha$ radiation of a Rigaku SmartLab (9 kW) diffractometer at various sample temperatures (30$^o$C to 600$^o$C). Rietveld refinement of the XRD data was done using the FULLPROF software package~\cite{RodriguezCarvajal1993}. Energy-dispersive X-ray (EDX) analysis indicates that the chemical composition of the as-prepared sample is Cr$_{0.79}$Se. Electrical resistivity measurements were carried out using the standard four-probe technique in a closed-cycle refrigerator (CCR) based cryostat, within a temperature range of 3.1 K to 310 K. Conducting silver epoxy and Cu wires were used to make the electrical contacts. Magnetic property measurements were carried out using a vibrating sample magnetometer (VSM) (DynaCool, Quantum Design) up to a magnetic field of 9 tesla.
\section{Acknowledgements}
S.T. acknowledges the financial support given by SNBNCBS through the Faculty Seed Grants program. The authors thank the Science and Engineering Research Board (SERB), Department of Science and Technology (DST), India for financial support through the start-up research grant (SRG/2020/000393).
\section{Conflicts of Interest}
The authors declare no conflicts of interest.
\bibliographystyle{achemso}
\section{Introduction}
The use of Fourier methods in astronomical imaging is mainly related to radio interferometry \citep{richard2017interferometry}. However, in the last three decades, this approach has also been utilized in the case of solar hard X-ray telescopes, which have been conceived in order to provide spatial Fourier components of the photon flux emitted via either bremsstrahlung or thermal processes during solar flares \citep{enlighten1658,krucker2020spectrometer}. These Fourier components, named {\em{visibilities}}, are sampled by the hard X-ray instrument in the two-dimensional Fourier space, named the $(u,{\mbox{v}})$-plane, in a sparse way, according to a geometry depending on the instrument design. For instance, the {\em{Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI)}} relies on the use of a set of nine rotating modulation collimators (RMCs) whose Full Width at Half Maximum (FWHM) values are logarithmically spaced between $2.3$ and $183$ arcsec \citep{2002SoPh}. Each RMC measures visibilities on a circle of points in the $(u,{\mbox{v}})$-space with a spatial frequency that corresponds to its angular resolution and a position angle that varies according to the spacecraft rotation (see Figure \ref{figure:fig-1}, left panel). On the other hand, the {\em{Spectrometer/Telescope for Imaging X-rays (STIX)}} on-board {\em{Solar Orbiter}} is based on the Moir\'e pattern technology \citep{STIX1,2019A&A...624A.130M} and its $30$ collimators sample the $(u,{\mbox{v}})$-plane over a set of six spirals, with a FWHM resolution coarser than $7$ arcsec (see Figure \ref{figure:fig-1}, right panel).
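To fix ideas, the sampling geometry just described can be mimicked with a few lines of code. The following sketch is purely illustrative and not part of any instrument software; in particular, the relation between FWHM and sampled spatial frequency is taken here as $1/(2\,\mathrm{FWHM})$ and the number of position angles per circle is an arbitrary choice. It generates a RHESSI-like coverage made of nine concentric circles in the $(u,{\mbox{v}})$-plane:

```python
import numpy as np

# Nine RMCs with FWHMs logarithmically spaced between 2.3 and 183 arcsec;
# each RMC samples one circle whose radius (spatial frequency) we take,
# for illustration only, as 1/(2*FWHM).
fwhm = np.geomspace(2.3, 183.0, 9)          # arcsec
radii = 1.0 / (2.0 * fwhm)                  # arcsec^-1, one circle per RMC

# Position angles swept by the spacecraft rotation (arbitrary sampling).
angles = np.linspace(0.0, 2.0 * np.pi, 32, endpoint=False)

# (u, v) coordinates: shape (9, 32), i.e. 9 circles of 32 points each.
u = radii[:, None] * np.cos(angles)[None, :]
v = radii[:, None] * np.sin(angles)[None, :]
```

Plotting the resulting $(u,v)$ points reproduces the qualitative pattern of Figure \ref{figure:fig-1}, left panel.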
Image reconstruction methods in solar hard X-ray astronomy rely on procedures that allow some sort of interpolation/extrapolation in the $(u,{\mbox{v}})$-space in order to recover information in between the sampled frequencies, for reducing the imaging artifacts and, outside the sampling domain, for obtaining super-resolution effects. Most methods accomplish these objectives by imposing constraints in the image domain, either by optimizing parameters associated to predefined image shapes via comparison with observations \citep{Aschwanden,sciacchitano2018identification}, or by minimizing regularization functionals that combine a fitting term with a stability term \citep{felix2017compressed,duval2018solar,massa2020mem_ge}.
However, the most straightforward approach to interpolation/extrapolation in visibility-based imaging is probably the one implemented in the uv$\_$smooth method \citep{009HAMassoneRDXI}, which is inspired by standard gridding approaches utilized in radio astronomy. In particular, uv$\_$smooth starts from the observation that the coverage of the $(u,{\mbox{v}})$-plane offered by hard X-ray instruments is much sparser than that typical of radio astronomy; it therefore utilizes spline interpolation at spatial frequencies smaller than the largest sampled frequencies, and soft-thresholding on the image to reduce the ringing effects due to a naive and unconstrained Fourier transform inversion procedure \citep{daubechies2004iterative,Massone1}. This approach can exploit the Fast Fourier Transform (FFT) in the inversion process and is satisfactorily reliable when reconstructing extended sources \citep{guo2013specific,guo2012determination,guo2012properties,caspi2015hard}; however, several applications \citep{dennis2019remarkably,bonettini2014accelerated} showed that uv$\_$smooth does not work properly when it is applied to visibility sets characterized by significant oscillations in the $(u,{\mbox{v}})$-plane. This misbehavior is essentially due to the fact that the interpolation algorithm utilized in uv$\_$smooth is not optimal and often misses the oscillating frequency information related to very narrow or well-separated sources (or, in the case of {\em{RHESSI}}, associated with the use of detectors with fine grids in the observation process).
The present paper proposes an enhanced release of uv$\_$smooth, based on the use of an advanced approach to interpolation in the frequency domain. Specifically, this approach relies on the use of Variably Scaled Kernels (VSKs), which are able to include {\em{a priori}} information in the interpolation process \citep{Bozzini1,vskmpi}. This additional knowledge is implicitly put into the kernel via a {\em{scaling function}} that determines the accuracy of the approximation process and that is linked to a first coarse reconstruction of the sought image. As far as the practical implementation of the VSK setting is concerned, in this study we considered the Mat\'ern $C^0$ kernel, which benefits from a low degree of regularity and a better numerical stability \citep{Matern}.
The plan of the paper is as follows. Section 2 illustrates the interpolation process based on VSKs. Section 3 describes the overall image reconstruction approach relying on the use of interpolation in the $(u,{\mbox{v}})$-plane and of the soft-thresholding technique applied for image reconstruction. Section 4 contains some validation tests performed against both synthetic {\em{STIX}} visibilities and experimental {\em{RHESSI}} observations. Our conclusions are offered in Section 5.
\begin{figure}
\centering
\includegraphics[scale=0.2]{./FigPaper/visrhessi} \hskip 1.2cm
\includegraphics[scale=0.2]{./FigPaper/visstix}
\caption{The sampling of the $(u,v)$ plane provided by {\em{RHESSI}} (left panel) and {\em{STIX}} (right panel).}
\label{figure:fig-1}
\end{figure}
\section{Interpolation in the Fourier domain}
Visibility-based hard X-ray telescopes provide experimental measurements of the Fourier transform of the incoming photon flux at specific points of the spatial frequency plane. We denote with ${\bf{f}}$ the vector whose components are the discretized values of the incoming flux, with ${\bf{F}}$ the discretized Fourier transform, with $ \{ {\bf u}_i=(u_i,v_i) \}_{i=1}^{n}$ the set of sampled points in the $(u,{\mbox{v}})$-plane, with ${\bf{V}}$ the vector whose $n$ components are the observed visibilities and with $\chi$ the binary mask returning $1$ at frequencies $\{{\bf{u}}_i\}_{i=1}^{n}$ and zero elsewhere. Then, the image formation model in this framework can be approximated by
\begin{equation}\label{b1}
{\bf{V}} = \chi \cdot {\bf{F}} {\bf{f}}~,
\end{equation}
where the symbol $\cdot$ denotes the entry-wise product. The uv$\_$smooth code incorporated in the SSW tree and validated in the case of {\em{RHESSI}} visibilities addresses equation (\ref{b1}) by means of an interpolation/extrapolation procedure in which the interpolation step is carried out via an algorithm based on spline functions and the extrapolation step is realized by means of a soft-thresholding scheme \citep{daubechies2004iterative,Massone1}. In the present paper we want to generalize the interpolation step of uv$\_$smooth by means of a more sophisticated numerical technique, in order to improve uv$\_$smooth performances, particularly in the case when visibility oscillations are significant.
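As a toy numerical illustration of this image formation model (with assumed sizes: a $32\times 32$ grid and $30$ randomly chosen sample frequencies, loosely mimicking the STIX-like sparsity; none of these numbers come from the actual instruments), the entry-wise masking of the discrete Fourier transform can be sketched as:

```python
import numpy as np

N = 32
rng = np.random.default_rng(0)

# A compact nonnegative "source" f on an N x N grid.
f = np.zeros((N, N))
f[12:16, 12:16] = 1.0

# Full discrete Fourier transform F f.
F_f = np.fft.fft2(f)

# Binary mask chi: one at the n = 30 sampled frequencies, zero elsewhere.
chi = np.zeros((N, N), dtype=bool)
sampled = rng.choice(N * N, size=30, replace=False)
chi.flat[sampled] = True

# Observed visibilities: V = chi . (F f), as in Eq. (1).
V = np.where(chi, F_f, 0.0)
```

The reconstruction problem is then to recover `f` from the 30 nonzero entries of `V`, which is the role of the interpolation/extrapolation pipeline described below.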
In general, any interpolation approach seeks for a function, namely $P$, that matches the given measurements at their corresponding locations. Thus an interpolant of the visibilities is constructed in such a way that
\begin{equation}\label{b1-1}
P(\boldsymbol{u}_i)=\boldsymbol{V}_i, \quad i=1,\ldots,n.
\end{equation}
Typically, any interpolating function is of the form
\begin{equation}\label{b2}
P({\bf{u}}) = \sum_{k=1}^{n} a_k b_k({\bf{u}})~,
\end{equation}
where $\{b_1({\bf{u}}),\ldots,b_n({\bf{u}})\}$ is a set of appropriate basis functions and ${\bf{u}}$ is a vector in the interpolation domain. A possible choice for these basis functions is represented by the so-called Radial Basis Functions (RBFs), see, e.g., \citep{Fasshauer}, which have the property that
\begin{equation}\label{b3}
b_k({\bf{u}}) = \phi(\|{\bf{u}}-{\bf{u}}_k\|),~~~~k=1,\ldots,n~,
\end{equation}
where $\phi$ is a specific RBF.
In order to incorporate possible prior information in the interpolation process, the Variably Scaled Kernels (VSKs) represent a specific implementation of RBFs in which
\begin{equation}\label{b4}
b_k({\bf{u}}) = \phi(\|({\bf{u}},\psi({\bf{u}}))- ({\bf{u}}_k,\psi({\bf{u}}_k))\|),
~~~k=1,\ldots,n~,
\end{equation}
and where $\psi$ is the so-called scaling function encoding such prior information on the emitting source ${\bf{f}}$. Therefore, once the functions $\phi$ and $\psi$ are chosen, by imposing the interpolation conditions (\ref{b1-1}) the interpolation problem is reduced to the solution of the linear system
\begin{equation}\label{b5}
K{\bf{a}} = {\bf{V}}~,
\end{equation}
where ${\bf{a}}=(a_1,\ldots,a_n)^T$, ${\bf{V}}=({\bf{V}}_1,\ldots,{\bf{V}}_n)^T$ and $K_{ij}=\phi(\|({\bf{u}}_i,\psi({\bf{u}}_i))- ({\bf{u}}_j,\psi({\bf{u}}_j))\|)$, $i,j=1,\ldots,n$. Once system (\ref{b5}) is solved, the computed vector ${\bf{a}}$ is used to evaluate the interpolating function $P({\bf{u}})$ on the $N$ points $\{\bar{{\bf{u}}}_1,\ldots,\bar{{\bf{u}}}_N\}$ of a regular mesh of the $(u,{\mbox{v}})$-plane, with $N \gg n$. This provides the visibility surface ${\bf{\overline{V}}}$ such that
\begin{equation}\label{bb5}
{\bf{\overline{V}}}_k = P(\bar{{\bf u}}_k) = \sum_{i=1}^n a_i \phi(\|({\bf{{\overline{u}}}}_k,\psi({\bf{{\overline{u}}}}_k))- ({\bf{u}}_i,\psi({\bf{u}}_i))\|),
~~~k=1,\ldots,N~.
\end{equation}
Equation (\ref{bb5}) implies that, after interpolation, the reconstruction problem for visibility-based interpolation has become
\begin{equation}\label{bbb5}
{\bf{\overline{V}}} = {\bf{\overline{F}}} {\bf{\overline{f}}}~,
\end{equation}
where ${\bf{\overline{F}}}$ is the $N \times N$ discretized Fourier transform and ${\bf{\overline{f}}}$ is the $N \times 1$ vector to reconstruct.
Two comments are probably relevant in conclusion of this section. First, from a technical viewpoint, the choice of $\phi$ and $\psi$ should guarantee the numerical stability of system (\ref{b5}). Moreover, at a more general level, VSK approaches map the original measured data into a higher-dimensional space and therefore can be considered as a feature augmentation strategy. It follows that the definition of the scaling function plays a crucial role in the final outcome of this approach, and the idea is to select it so that it mimics the samples, as shown in \citep{vskmpi,vskjump,romani}.
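A minimal numerical sketch of the VSK interpolation steps above may help fix the notation. All ingredients below are toy assumptions, not the paper's actual data: synthetic complex visibilities, a hypothetical scaling function $\psi$, and the Mat\'ern $C^0$ kernel $\phi(r)={\rm e}^{-r}$.

```python
import numpy as np

def vsk_matrix(nodes, centers, psi):
    """Kernel matrix after the VSK lift u -> (u, psi(u)), Eqs. (4)-(6)."""
    a = np.hstack([nodes, psi(nodes)[:, None]])      # (m, 3)
    b = np.hstack([centers, psi(centers)[:, None]])  # (n, 3)
    r = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return np.exp(-r)                                # Matérn C^0 kernel

rng = np.random.default_rng(1)
u = rng.uniform(-1, 1, size=(20, 2))                 # 20 sampled (u,v) points
V = np.cos(3 * u[:, 0]) + 1j * np.sin(3 * u[:, 1])   # synthetic visibilities

# Toy scaling function encoding "prior" information (purely illustrative).
psi = lambda x: np.abs(x[:, 0] * x[:, 1])

K = vsk_matrix(u, u, psi)
a = np.linalg.solve(K, V)                            # Eq. (5): K a = V

# Evaluate the interpolant on a 16 x 16 regular mesh (Eq. (6)).
g = np.stack(np.meshgrid(np.linspace(-1, 1, 16),
                         np.linspace(-1, 1, 16)), axis=-1).reshape(-1, 2)
V_bar = vsk_matrix(g, u, psi) @ a                    # visibility surface
```

By construction the interpolant reproduces the data at the nodes; in the real pipeline, a regularized solve replaces `np.linalg.solve` when the data statistics are low.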
\section{Image reconstruction}
The implementation of an image reconstruction process relying on the interpolation procedure described in the previous section needs the definition of a pipeline made of the following steps:
\begin{enumerate}
\item Construction of the matrix $K$. This step needs the choice of the function $\phi$ generating the RBFs and of the scaling function $\psi$, which requires accounting for some prior information on the source image. As far as $\phi$ is concerned, we have chosen the Mat\'ern $C^0$ function
\begin{equation}\label{b3-1}
\phi(\|{\bf{u}}-{\bf{u}}_k\|) = {\rm e}^{-\|{\bf{u}}-{\bf{u}}_k\|}~.
\end{equation}
As for $\psi$, in this study we have implemented two possible choices, based on coarse estimates of the X-ray source to reconstruct:
\begin{itemize}
\item We have applied the inverse Discrete Fourier Transform to the visibility set and used the Fourier projection of the corresponding back-projected map as the scaling function.
\item We have applied CLEAN to the visibility set and used the Fourier projection of the map of the CLEAN components as the scaling function.
\end{itemize}
\item Solution of equation (\ref{b5}). This is a square and rather well-conditioned linear system and therefore standard numerics for computing $K^{-1}$ works properly in the case of input data characterized by large signal-to-noise ratios. When the data statistics is low the system is solved by means of the equally standard Tikhonov method \citep{2003A&A...405..325M}.
\item Reconstruction of the image ${\bf{f}}$. To this aim we have implemented a soft-thresholding approach based on the projected Landweber iterative scheme \citep{1996JOSAA..13.1516P,piana1997projected}
\begin{equation}\label{c1}
{\bf{{\overline{f}}}}^{(k+1)} = {\cal{P}}_+[{\bf{\overline{f}}}^{(k)} + {\bf{\overline{F}}}^T({\bf{\overline{V}}} -
{\bf{\overline{F}}}{\bf{\overline{f}}}^{(k)})]~,
\end{equation}
where ${\cal{P}}_+$ pixel-wise imposes a positivity constraint.
In the present implementation we have assumed the initialization ${\bf{\overline{f}}}^{(0)}=0$ and a stopping rule that relies on a check on the $\chi^2$ values \citep{Massone1}.
\end{enumerate}
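The projected Landweber iteration (\ref{c1}) can be sketched as follows. This is a toy setup with an assumed normalized FFT as forward operator; since here the full visibility surface is available and the operator is unitary, the iteration recovers the nonnegative source essentially immediately, whereas in practice only the interpolated surface inside the sampled support is available and the positivity projection is what extrapolates the missing frequencies.

```python
import numpy as np

N = 16
truth = np.zeros((N, N))
truth[5:8, 5:8] = 2.0                       # nonnegative ground truth

# Forward operator F and adjoint F^T, normalized so that ||F|| = 1.
fwd = lambda f: np.fft.fft2(f) / N
adj = lambda v: np.real(np.fft.ifft2(v)) * N

V_bar = fwd(truth)                          # "interpolated" visibility surface

f = np.zeros((N, N))                        # initialization f^(0) = 0
for _ in range(50):
    # f <- P_+[ f + F^T (V_bar - F f) ], with P_+ the pixel-wise
    # positivity projection, as in Eq. (8).
    f = np.maximum(f + adj(V_bar - fwd(f)), 0.0)
```

In the actual code the loop is stopped by a $\chi^2$ check rather than a fixed iteration count.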
This scheme has two main advantages. First, the positivity constraint induces super-resolution effects, since it allows extrapolating the frequency information outside the support of the interpolated visibility surface \citep{1996JOSAA..13.1516P}. Second, the implementation of the iterative scheme is made computationally effective by the use of an FFT routine performing the required forward and backward Fourier transformations. We also point out that a well-established weakness of CLEAN is the fact that the determination of the reconstructed CLEAN map from the map of the CLEAN components is typically realized by means of convolution with an idealized point spread function (PSF) whose FWHM is chosen by means of totally heuristic considerations. Choosing $\psi$ as the map of the CLEAN components is a way to exploit it in a completely objective way, within the framework of an automatic image reconstruction method.
\section{Applications to the reconstruction of flaring sources}
In this section we discuss the effectiveness of this enhanced release of uv$\_$smooth for visibility-based image reconstruction by considering tests on both synthetic simulations obtained by means of the {\em{STIX}} simulation software and experimental {\em{RHESSI}} observations.
\subsection{STIX simulated visibilities}
We simulated four {\em{STIX}} configurations with an overall incident flux of $10^4$ photons cm$^{-2}$ s$^{-1}$ (see Figure \ref{figmap}, first column). The first two configurations (Configuration 1 and Configuration 2) consisted of two foot-points with centers located at two different positions along the main diagonal. The third and fourth configurations (Configuration 3 and Configuration 4) mimic two flaring loops, one at the center of the field-of-view and the other one off-center (refer to Tables \ref{tab1}--\ref{tab4} for details on the parameters of the four considered configurations).
Using the {\em{STIX}} simulation software we generated 25 realizations of synthetic {\em{STIX}} visibilities for each configuration. Then, Figure \ref{figmap} shows the results provided by the original version of uv$\_$smooth and by the two enhanced versions of the algorithm when the scaling functions are based upon the back-projected map (uv$\_$smooth$\_$BP) and the map of the CLEAN components (uv$\_$smooth$\_$CC). In Tables \ref{tab1}--\ref{tab4} the corresponding values of the reconstructed parameters are compared with the ones of the ground-truths, where for each parameter we have given the average value with respect to the 25 realizations and the corresponding standard deviation.
The CPU times employed to obtain the reconstructions are shown in Table \ref{cpu}. Tests have been carried out on an Intel(R) Core(TM) i7-4712MQ 2.13 GHz processor.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.11]{./FigPaper/2pointL_1_realmap}
\includegraphics[scale=0.11]{./FigPaper/2pointL_1_uv_map}
\includegraphics[scale=0.11]{./FigPaper/2pointL_1_uv_vsk_map}
\includegraphics[scale=0.11]{./FigPaper/2pointL_1_uv_vsk_mapC}\vskip 0.1cm
\includegraphics[scale=0.11]{./FigPaper/2pointH_1_realmap}
\includegraphics[scale=0.11]{./FigPaper/2pointH_1_uv_map}
\includegraphics[scale=0.11]{./FigPaper/2pointH_1_uv_vsk_map}
\includegraphics[scale=0.11]{./FigPaper/2pointH_1_uv_vsk_mapC} \vskip 0.1cm
\includegraphics[scale=0.11]{./FigPaper/loop_1_realmap}
\includegraphics[scale=0.11]{./FigPaper/loop_1_uv_map}
\includegraphics[scale=0.11]{./FigPaper/loop_1_uv_vsk_map}
\includegraphics[scale=0.11]{./FigPaper/loop_1_uv_vsk_mapC} \vskip 0.1cm
\includegraphics[scale=0.11]{./FigPaper/loop1_1_realmap}
\includegraphics[scale=0.11]{./FigPaper/loop1_1_uv_map}
\includegraphics[scale=0.11]{./FigPaper/loop1_1_uv_vsk_map}
\includegraphics[scale=0.11]{./FigPaper/loop1_1_uv_vsk_mapC}
\caption{Reconstruction of four synthetic flaring configurations using simulated {\em{STIX}} visibilities. First column: ground-truth configurations. Second column: reconstructions provided by uv$\_$smooth. Third column: reconstructions obtained by using VSK-based interpolation when $\psi$ is the back projection (uv$\_$smooth$\_$BP). Fourth column: reconstructions obtained by using VSK-based interpolation when $\psi$ is the map of the CLEAN components (uv$\_$smooth$\_$CC). The ground-truth and reconstruction parameter values are in Tables \ref{tab1}--\ref{tab4}.}
\label{figmap}
\end{figure}
\begin{table}[ht]
\caption{Results for the reconstruction of Configuration 1. The foot-point centers are denoted as $(x_p,y_p)$, while the flux is measured in photon cm$^{-2}$ s$^{-1}$.}
\begin{tabular}{lcccc}
\hline
\hline
& \multicolumn{4}{c}{First Peak} \\
\hline
&\hskip 0.1cm $x_p$ & \hskip 0.1cm $y_p$& FWHM & FLUX ($\times 10^3$) \\
Simulated & -8.0 & -8.0 & 11.0 & 6.58 \\
uv$\_$smooth & -6.0 $\pm$ 0.6 & -5.0 $\pm$ 0.4 & 11.2 $\pm$ 0.3 & 5.08 $\pm$ 0.13\\
uv$\_$smooth$\_$BP & -6.3 $\pm$ 0.4 & -6.2 $\pm$ 0.4 & 11.5 $\pm$ 0.4 & 5.53 $\pm$ 0.13 \\
\smallskip
uv$\_$smooth$\_$CC & -6.4 $\pm$ 0.5 & -6.0 $\pm$ 0.5 & 11.6 $\pm$ 0.4 & 5.52 $\pm$ 0.18 \\
\hline
& \multicolumn{4}{c}{Second Peak}\\
\hline
&\hskip 0.1cm $x_p$ & \hskip 0.1cm $y_p$& FWHM & FLUX ($\times 10^3$) \\
Simulated & 8.0 & 8.0 & 11.0 & 3.21 \\
uv$\_$smooth & 8.0 $\pm$ 0.5 & 6.4 $\pm$ 0.5 & 10.7 $\pm$ 0.5 & 2.42 $\pm$ 0.12\\
uv$\_$smooth$\_$BP & 8.1 $\pm$ 0.4 & 8.3 $\pm$ 0.6 & 11.5 $\pm$ 0.5 & 2.70 $\pm$ 0.13 \\
\smallskip
uv$\_$smooth$\_$CC & 7.9 $\pm$ 0.6 & 6.9 $\pm$ 0.3 & 12.3 $\pm$ 0.8 & 2.77 $\pm$ 0.14 \\
\end{tabular}
\begin{tabular}{lc}
\hline
& Total Flux ($\times 10^3$) \\
\hline
Simulated & 10.00 \\
uv$\_$smooth & 9.27 $\pm$ 0.18\\
uv$\_$smooth$\_$BP & 9.86 $\pm$ 0.23\\
\smallskip
uv$\_$smooth$\_$CC & 10.10 $\pm$ 0.19\\
\hline
\hline
\end{tabular}
\label{tab1}
\end{table}
\begin{table}[ht]
\caption{Results for the reconstruction of Configuration 2. The foot-point centers are denoted as $(x_p,y_p)$, while the flux is measured in photon cm$^{-2}$ s$^{-1}$.}
\begin{tabular}{lcccc}
\hline
\hline
& \multicolumn{4}{c}{First Peak} \\
\hline
&\hskip 0.2cm $x_p$ & \hskip 0.2cm $y_p$& FWHM & FLUX ($\times 10^3$) \\
Simulated & -24.0 & -24.0 & 11.0 & 6.51 \\
uv$\_$smooth & -7.5 $\pm$ 0.5 & -10.6 $\pm$ 0.4 & 12.3 $\pm$ 0.1 & 8.23 $\pm$ 0.19 \\
uv$\_$smooth$\_$BP & -21.9 $\pm$ 0.2 & -21.7 $\pm$ 0.4 & 10.8 $\pm$ 0.2 & 5.26 $\pm$ 0.11 \\
\smallskip
uv$\_$smooth$\_$CC & -21.8 $\pm$ 0.3 & -21.7 $\pm$ 0.4 & 11.4 $\pm$ 0.4 & 5.27 $\pm$ 0.12 \\
\hline
& \multicolumn{4}{c}{Second Peak}\\
\hline
&\hskip 0.2cm $x_p$ & \hskip 0.2cm $y_p$& FWHM & FLUX ($\times 10^3$) \\
Simulated & 24.0 & 24.0 & 11.0 & 3.25 \\
uv$\_$smooth & 8.5 $\pm$ 0.5 & -9.4 $\pm$ 0.8 & 12.9 $\pm$ 0.1 & 8.94 $\pm$ 0.26 \\
uv$\_$smooth$\_$BP& 24.0 $\pm$ 0.2 & 24.4 $\pm$ 0.5 & 10.9 $\pm$ 0.2 & 2.50 $\pm$ 0.12 \\
\smallskip
uv$\_$smooth$\_$CC & 23.3 $\pm$ 0.3 & 24.6 $\pm$ 0.8 & 10.9 $\pm$ 0.7 & 2.35 $\pm$ 0.14 \\
\end{tabular}
\begin{tabular}{lc}
\hline
& Total Flux ($\times 10^3$) \\
\hline
Simulated & 10.00 \\
uv$\_$smooth & 9.88 $\pm$ 0.34 \\
uv$\_$smooth$\_$BP & 10.55 $\pm$ 0.30 \\
\smallskip
uv$\_$smooth$\_$CC & 12.37 $\pm$ 0.28 \\
\hline
\hline
\end{tabular}
\label{tab2}
\end{table}
\begin{table}[ht]
\caption{Results for the reconstruction of Configuration 3. The position of the pixel with maximum intensity is denoted as $(x_p,y_p)$. The flux units are photon cm$^{-2}$ s$^{-1}$.}
\begin{tabular}{lccc}
\hline
\hline
&$x_p$ & $y_p$ & Total Flux ($\times 10^3$) \\
\hline
Simulated & 0.0 & 0.0 & 10.00 \\
uv$\_$smooth & 0.7 $\pm$ 0.6 & 1.3 $\pm$ 0.5 & 9.71 $\pm$ 0.27 \\
uv$\_$smooth$\_$BP & -0.5 $\pm$ 0.6 & 1.4 $\pm$ 0.6 & 10.55 $\pm$ 0.02\\
\smallskip
uv$\_$smooth$\_$CC & -0.6 $\pm$ 0.5 & 0.6 $\pm$ 0.4 & 10.55 $\pm$ 0.02\\
\hline
\hline
\end{tabular}
\label{tab3}
\end{table}
\begin{table}[ht]
\caption{Results for the reconstruction of Configuration 4. The position of the pixel with maximum intensity is denoted as $(x_p,y_p)$. The flux units are photon cm$^{-2}$ s$^{-1}$.}
\begin{tabular}{lccc}
\hline
\hline
&$x_p$ & $y_p$& Total Flux ($\times 10^3$) \\
\hline
Simulated & 18.0 & 18.0 & 10.00 \\
uv$\_$smooth & 15.4 $\pm$ 0.8 & 11.4 $\pm$ 0.7 & 10.59 $\pm$ 0.01 \\
uv$\_$smooth$\_$BP & 16.6 $\pm$ 0.7 & 15.8 $\pm$ 0.8 & 10.59 $\pm$ 0.02\\
\smallskip
uv$\_$smooth$\_$CC & 17.0 $\pm$ 0.7 & 16.8 $\pm$ 0.8 & 10.59 $\pm$ 0.01\\
\hline
\hline
\end{tabular}
\label{tab4}
\end{table}
\begin{table}[ht]
\caption{CPU time (in seconds) employed by the three reconstruction algorithms, averaged over the data corresponding to the four configurations.}
\begin{tabular}{lc}
\hline
\hline
& CPU times \\
\hline
uv$\_$smooth & 0.18 \\
uv$\_$smooth$\_$BP & 4.24 \\
\smallskip
uv$\_$smooth$\_$CC & 6.78 \\
\hline
\hline
\end{tabular}
\label{cpu}
\end{table}
\subsection{RHESSI observations}
On Saturday, May 3 2014 the GOES $1-8$ \AA\ passband instrument recorded nine C-class flares originating from three different active regions. In particular, in the time interval between 15:54:00 UT and 16:13:40 UT {\em{RHESSI}} observed a C$1.7$ event whose flaring shape in the $3-6$ keV energy channel evolved from a double foot-point to a narrow ribbon-like configuration. We have tested the effectiveness of this enhanced approach to interpolation in the $(u,{\mbox{v}})$-plane by considering five time intervals in that range, each one of $1$ minute duration. First, we focused on the visibility bags recorded at 16:07:04 UT by the combinations of {\em{RHESSI}} detectors $3$ through $9$, $2$ through $9$ and $1$ through $9$, respectively. Figures \ref{fig_rhessi_maydet} and \ref{fig_rhessi_vis_dets} respectively compare the reconstructions and the corresponding visibility fits provided by uv$\_$smooth, uv$\_$smooth$\_$BP and uv$\_$smooth$\_$CC with those given by CLEAN, when the map of the CLEAN components is convolved {\em{a posteriori}} with an idealized PSF with a CLEAN beamwidth factor equal to $2$ (as done for the generation of the {\em{RHESSI}} image archive) and a pixel dimension equal to $1$ arcsec for the 3 through 9 detector configuration and to $0.5$ arcsec for the other two combinations of detectors. The $\chi^2$ values of the four reconstruction methods are reported in Table \ref{tab:my_label_chi}. Then, in Figure \ref{fig_rhessi_may} and Figure \ref{fig_rhessi_may_vis} we fixed the configuration based on detectors $3$ through $9$ and compared the reconstructions provided by the same four imaging methods as in Figure \ref{fig_rhessi_maydet}, and the corresponding fits of the experimental measurements, in the case of five time intervals between 16:08:04 and 16:12:04 UT. The $\chi^2$ values predicted by the four reconstruction methods with respect to the observations are contained in Table \ref{tab:my_label}.
\section{Comments and conclusions}
Enhancing visibility interpolation is particularly crucial in the case of the {\em{STIX}} image reconstruction problem, where observations are linked to a set of $30$ visibilities and, correspondingly, the sparsity of the sampling in the $(u,{\mbox{v}})$-plane is pronounced. As a confirmation of this, comparison with the four ground-truth configurations considered in the simulations of Figure \ref{figmap} shows that the use of VSKs provides more accurate estimates of the imaging parameters; this is particularly true in the case of Configurations 2 and 4, which produce wilder oscillations in the visibility domain and where the need for powerful interpolation is more urgent. The computational times reported in Table \ref{cpu} show that VSK interpolation increases the burden but keeps the reconstruction times competitive with those of most hard X-ray imaging methods.
In the case of {\em{RHESSI}} observations, the use of finer grids increases the spatial resolution but, at the same time, introduces high-resolution artifacts. However, also in this case we can notice an improvement brought by the use of VSKs with respect to standard uv$\_$smooth, i.e., the progressive fragmentation of the reconstructed sources is less significant, particularly when detectors from $2$ through $9$ are used. For most cases, we can notice that uv$\_$smooth$\_$BP and uv$\_$smooth$\_$CC guarantee a good trade-off between reconstruction accuracy and fitting: imaging artifacts are less numerous and pronounced if compared to standard uv$\_$smooth, while $\chi^2$ values are either comparable to or smaller than the ones corresponding to CLEAN reconstructions. Further, comparison between uv$\_$smooth$\_$CC and CLEAN shows that the former method can be interpreted as a user-independent way to exploit the CLEAN component map. Therefore, uv$\_$smooth$\_$CC concludes the overall CLEAN process, keeping the highly reliable step providing the CLEAN components and replacing the more heuristic one, represented by the convolution with an idealized PSF, with a totally automatic process based on feature augmentation.
\begin{acknowledgements}
The authors acknowledge the financial contribution from the agreement ASI-INAF n.2018-16-HH.0. This research has been accomplished within Rete ITaliana di Approssimazione (RITA). This is the first paper that AMM and MP submit after Richard Schwartz passed away, on Saturday December 12 2020. In these difficult times for the whole humanity, Richard's death has represented a further reason of sadness and grief for the {\em{RHESSI}} and {\em{STIX}} communities. AMM and MP acknowledge that Richard's intellectual guide is and will always remain an unforgettable milestone for their current and future scientific activity.
\end{acknowledgements}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.11]{./FigPaper/3Mayrhessi_det3_9}
\includegraphics[scale=0.11]{./FigPaper/3Mayrhessi_det3_9vskb}
\includegraphics[scale=0.11]{./FigPaper/3Mayrhessi_det3_9vskC}
\includegraphics[scale=0.11]{./FigPaper/figd1} \vskip 0.1cm
\includegraphics[scale=0.11]{./FigPaper/3Mayrhessi_det2_9_new}
\includegraphics[scale=0.11]{./FigPaper/3Mayrhessi_det2_9vskb_new}
\includegraphics[scale=0.11]{./FigPaper/3Mayrhessi_det2_9vskC_new}
\includegraphics[scale=0.11]{./FigPaper/figd2} \vskip 0.1cm
\includegraphics[scale=0.11]{./FigPaper/3Mayrhessi_det1_9_new}
\includegraphics[scale=0.11]{./FigPaper/3Mayrhessi_det1_9vskb_new}
\includegraphics[scale=0.11]{./FigPaper/3Mayrhessi_det1_9vskC_new}
\includegraphics[scale=0.11]{./FigPaper/figd3}
\caption{Reconstruction of the flare observed by RHESSI on May 3 in 2014 at 16:07:04 UT. From left to right, the columns contain the reconstructions via uv$\_$smooth, uv$\_$smooth$\_$BP, uv$\_$smooth$\_$CC and CLEAN. From top to bottom the three rows indicate the reconstructions obtained using {\em{RHESSI}} detectors 3 through 9, 2 through 9 and 1 through 9, respectively.
}
\label{fig_rhessi_maydet}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.128]{./FigPaper/3May_rhessi_VIS001_c}
\includegraphics[scale=0.128]{./FigPaper/3May_rhessi_VIS001_vskB_c}
\includegraphics[scale=0.128]{./FigPaper/3May_rhessi_VIS001_vskC_c}
\includegraphics[scale=0.128]{./FigPaper/vfigd1} \vskip 0.1cm
\includegraphics[scale=0.128]{./FigPaper/3May_rhessi_VIS011_c}
\includegraphics[scale=0.128]{./FigPaper/3May_rhessi_VIS011_vskB_c}
\includegraphics[scale=0.128]{./FigPaper/3May_rhessi_VIS011_vskC_c}
\includegraphics[scale=0.128]{./FigPaper/vfigd2} \vskip 0.1cm
\includegraphics[scale=0.128]{./FigPaper/3May_rhessi_VIS111_c}
\includegraphics[scale=0.128]{./FigPaper/3May_rhessi_VIS111_vskB_c}
\includegraphics[scale=0.128]{./FigPaper/3May_rhessi_VIS111_vskC_c}
\includegraphics[scale=0.128]{./FigPaper/vfigd3}
\caption{Comparison between predicted and measured visibilities for the flare observed by RHESSI on May 3 2014 at 16:07:04 UT. From left to right, the columns contain the fits corresponding to uv$\_$smooth, uv$\_$smooth$\_$BP, uv$\_$smooth$\_$CC and CLEAN. From top to bottom, the rows correspond to using detector configurations from 3 through 9, from 2 through 9, and from 1 through 9, respectively.
}
\label{fig_rhessi_vis_dets}
\end{figure}
\begin{table}[ht]
\begin{tabular}{ccccc}
\hline
\hline
detectors & uv$\_$smooth & uv$\_$smooth$\_$BP & uv$\_$smooth$\_$CC & CLEAN \\
\hline
3--9 & 1.05 & 1.02 & 0.98 & 7.17 \\
2--9 & 1.20 & 0.96 & 0.93 & 4.57 \\
1--9 & 1.07 & 1.08 & 1.19 & 3.95 \\
\hline
\hline
\end{tabular}
\caption{$\chi^2$ values predicted by the four reconstruction methods applied to the {\em{RHESSI}} visibilities observed on May 3 2014 at 16:07:04 UT. The values are computed with respect to the visibilities measured by detectors 3 through 9, 2 through 9, and 1 through 9, respectively.}
\label{tab:my_label_chi}
\end{table}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.11]{./FigPaper/3Mayrhessi_001_1}
\includegraphics[scale=0.11]{./FigPaper/3Mayrhessi_001_1vskB}
\includegraphics[scale=0.11]{./FigPaper/3Mayrhessi_001_1vskC}
\includegraphics[scale=0.11]{./FigPaper/fig1} \vskip 0.1cm
\includegraphics[scale=0.11]{./FigPaper/3Mayrhessi_001_2}
\includegraphics[scale=0.11]{./FigPaper/3Mayrhessi_001_2vskB}
\includegraphics[scale=0.11]{./FigPaper/3Mayrhessi_001_2vskC}
\includegraphics[scale=0.11]{./FigPaper/fig2} \vskip 0.1cm
\includegraphics[scale=0.11]{./FigPaper/3Mayrhessi_001_3}
\includegraphics[scale=0.11]{./FigPaper/3Mayrhessi_001_3vskB}
\includegraphics[scale=0.11]{./FigPaper/3Mayrhessi_001_3vskC}
\includegraphics[scale=0.11]{./FigPaper/fig3} \vskip 0.1cm
\includegraphics[scale=0.11]{./FigPaper/3Mayrhessi_001_4}
\includegraphics[scale=0.11]{./FigPaper/3Mayrhessi_001_4vskB}
\includegraphics[scale=0.11]{./FigPaper/3Mayrhessi_001_4vskC}
\includegraphics[scale=0.11]{./FigPaper/fig4}
\vskip 0.1cm
\includegraphics[scale=0.11]{./FigPaper/3Mayrhessi_001_5}
\includegraphics[scale=0.11]{./FigPaper/3Mayrhessi_001_5vskB}
\includegraphics[scale=0.11]{./FigPaper/3Mayrhessi_001_5vskC}
\includegraphics[scale=0.11]{./FigPaper/fig5} \caption{Reconstruction of the flare observed by RHESSI on May 3 2014. From left to right, the columns contain the reconstructions obtained by uv$\_$smooth, uv$\_$smooth$\_$BP, uv$\_$smooth$\_$CC and CLEAN. From top to bottom, the rows denote the evolution of the flare shape in five time intervals from 16:08:04 through 16:12:04 UT (integration time: $1$ min).
}
\label{fig_rhessi_may}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.128]{./FigPaper/3Mayrhessi_VIS001_1}
\includegraphics[scale=0.128]{./FigPaper/3Mayrhessi_VIS001_1vskB}
\includegraphics[scale=0.128]{./FigPaper/3Mayrhessi_VIS001_1vskC}
\includegraphics[scale=0.128]{./FigPaper/fig1a}
\includegraphics[scale=0.128]{./FigPaper/3Mayrhessi_VIS001_2}
\includegraphics[scale=0.128]{./FigPaper/3Mayrhessi_VIS001_2vskB}
\includegraphics[scale=0.128]{./FigPaper/3Mayrhessi_VIS001_2vskC}
\includegraphics[scale=0.128]{./FigPaper/fig2a} \vskip 0.1cm
\includegraphics[scale=0.128]{./FigPaper/3Mayrhessi_VIS001_3}
\includegraphics[scale=0.128]{./FigPaper/3Mayrhessi_VIS001_3vskB}
\includegraphics[scale=0.128]{./FigPaper/3Mayrhessi_VIS001_3vskC}
\includegraphics[scale=0.128]{./FigPaper/fig3a} \vskip 0.1cm
\includegraphics[scale=0.128]{./FigPaper/3Mayrhessi_VIS001_4}
\includegraphics[scale=0.128]{./FigPaper/3Mayrhessi_VIS001_4vskB}
\includegraphics[scale=0.128]{./FigPaper/3Mayrhessi_VIS001_4vskC}
\includegraphics[scale=0.128]{./FigPaper/fig4a}
\vskip 0.1cm
\includegraphics[scale=0.128]{./FigPaper/3Mayrhessi_VIS001_5}
\includegraphics[scale=0.128]{./FigPaper/3Mayrhessi_VIS001_5vskB}
\includegraphics[scale=0.128]{./FigPaper/3Mayrhessi_VIS001_5vskC}
\includegraphics[scale=0.128]{./FigPaper/fig5a}
\caption{Comparison between predicted and measured visibilities for the flare observed by RHESSI on May 3 2014. From left to right, the columns contain the fits corresponding to uv$\_$smooth, uv$\_$smooth$\_$BP, uv$\_$smooth$\_$CC and CLEAN. From top to bottom, the rows correspond to the evolution of the flare shape in five time intervals from 16:08:04 through 16:12:04 UT (integration time: $1$ min).
}
\label{fig_rhessi_may_vis}
\end{figure}
\begin{table}[ht]
\begin{tabular}{ccccc}
\hline
\hline
& uv$\_$smooth & uv$\_$smooth$\_$BP & uv$\_$smooth$\_$CC & CLEAN \\
\hline
$t_1$ & 1.13 & 1.10 & 1.07 & 3.70 \\
$t_2$ & 1.80 & 1.73 & 1.75 & 1.71 \\
$t_3$ & 2.25 & 2.18 & 1.96 & 1.42 \\
$t_4$ & 2.75 & 2.14 & 2.06 & 1.90 \\
$t_5$ & 3.01 & 2.97 & 2.70 & 1.80\\
\hline
\hline
\end{tabular}
\caption{$\chi^2$ values predicted by the four reconstruction methods applied to the {\em{RHESSI}} visibilities observed on May 3 2014 in the 5 time intervals from 16:08:04 through 16:12:04 UT (integration time: $1$ min). The values are computed with respect to the visibilities measured by detectors 3 through 9.}
\label{tab:my_label}
\end{table}
\bibliographystyle{aa}
\section{Introduction}
\label{introduction}
The measurement of the stellar mass function (SMF) is a powerful statistical tool for tracing the stellar mass assembly, or galaxy growth, over cosmic time. Galaxy formation models rely on the well-established $\Lambda$CDM cosmological framework that governs the growth of the dark matter structures, and on the less well-understood baryonic physics at play inside the dark matter haloes (gas accretion, minor or major merging, star formation activity, feedback mechanisms, etc.).
The shape of the galaxy SMF compared to the expected halo mass function provides valuable information about the physical processes acting at the low- and high-mass ends of the mass function \citep[][]{Silk2012}.
A decade ago, early deep extragalactic surveys revealed that the average stellar mass density (the integrated form of the SMF) grew gradually from z$\sim$3 to z$\sim$0 \citep[e.g.][]{Dickinson2003, Fontana2003}. This trend is now confirmed up to redshift $z\sim 8$ \citep[]{Song2015} and is consistent with a hierarchical build-up of the cosmic structures. Later on, larger surveys measured the evolution at high redshift of the galaxy bimodality, the well-known separation between star-forming and quiescent galaxies observed in the local Universe \citep[][]{Baldry2004, Moustakas2013}. They found that the bimodality was already in place at $z\sim 1$, with the quiescent galaxies dominating the massive end of the SMF and the star-forming galaxies dominating its low-mass end \citep[][]{Bundy2006,Borch2006}. This quiescent population had its main build-up epoch between $z=2$ and $z=1$, where the stellar mass density increased by a factor of 10 \citep[][]{Cirasuolo2007,Arnouts2007}, while only a factor of 2 increase is observed from $z=1$ to $z=0$ \citep[][]{Bell2004,Faber2007}.
According to the hierarchical scenario, such an early formation epoch of the quiescent population is not a problem as long as the stars formed earlier in smaller units and galaxies continued to assemble their mass at later times \citep[through dry merging phases, e.g.][]{DeLucia2006}. This naturally supports the \textit{star formation downsizing} picture proposed by \citet[][]{Cowie1996}, in which the onset of star formation occurs earlier in the most massive galaxies than in lower mass galaxies \citep[see also][]{Gavazzi1996a}. However, the models predict a continuous increase of the stellar mass of these massive galaxies with cosmic time \citep[e.g.][]{DeLucia2007}, which is challenged by the latest measurements of the SMF: the massive end shows no significant evolution from $z = 0$ up to redshift $z\sim 1$ \citep[e.g.][]{Marchesini2009,Ilbert2013,Muzzin2013,Moustakas2013,Mortlock2015}, suggesting a mass assembly downsizing.
The predominance of quiescent galaxies at the massive end \citep[e.g.][]{Baldry2012,Moustakas2013,Ilbert2013} supports the idea that the star formation activity is preferentially impeded in galaxies above a given stellar mass or a given dark matter halo mass, if we assume a stellar-to-halo mass relationship \citep[e.g.][]{Coupon2015}. A wide variety of quenching mechanisms have been proposed to explain the star formation quenching in massive galaxies, such as major mergers \citep[][]{Barnes1992}, virial shock heating \citep[][]{Keres2005}, or radio-AGN feedback \citep{Croton2006,Cattaneo2006} in massive haloes.
Several studies have emphasised the role played by the environment for the colour-bimodality and star-formation quenching in the local Universe
\citep{Hogg2003,Kauffmann2004,Baldry2006,Haines2007}. Mechanisms such as \textit{ram-pressure stripping}, in which the gas is expelled from the galaxy \citep[][]{Gunn1972}, or \textit{strangulation}, in which the cold gas supply is heated and then halted \citep[][]{Larson1980}, can be invoked as environmental quenching mechanisms. We emphasise that strangulation processes can either be linked to environment (e.g. when a galaxy enters the hot gas of a cluster) or to peculiar evolution (e.g. when radio-AGN feedback stops the cold gas infall).
The latest measurements of the quiescent SMF reveal an upturn at the low-mass end in the local Universe \citep{Baldry2012,Moustakas2013}, whose build-up is observed at higher redshift \citep{Drory2009, Tomczak2014}. This low-mass upturn of the quiescent population could be associated with environmental quenching according to \citet{Peng2010, Peng2012}, while \citet{Schawinski2014} suggested a fast process consistent with major merging. Constraining the quenching timescale at different masses might therefore help to identify the quenching mechanisms.
Until recently, the above conclusions were mostly based on deep galaxy surveys such as GOODS \citep[][]{Giavalisco2004}, VVDS \citep[][]{LeFevre2005}, COSMOS \citep[][]{Scoville2007}, and DEEP2 \citep[][]{Newman2013}, which are perfectly suited to providing the global picture of the galaxy stellar mass assembly over a wide range of redshifts. However, given their small angular coverage (they explore a rather small volume at $z < 1$), they can be particularly sensitive to statistical variance (i.e. \textit{cosmic variance}) at low redshift.
This is particularly crucial for the very rare galaxies at the high-mass end of the exponentially declining SMF, and it has been claimed that its apparent lack of evolution may be dominated by observational uncertainties \citep{Fontanot2009,Marchesini2009}.
A first attempt to constrain the density evolution of the high-mass galaxies at $z < 1$ was performed by \citet{Matsuoka2010}. They combined the SDSS southern strip \citep[][]{York2000} and the UKIDSS/LAS survey \citep[][]{Lawrence2007} over a total area of $\sim$ 55 deg$^2$. They observed a mild-to-strong increase of the number density of massive galaxies (in the ranges $10^{11}$--$10^{11.5}M_{\odot}$ and $10^{11.5}$--$10^{12}M_{\odot}$, respectively), with a corresponding drop of the fraction of star-forming galaxies in this stellar mass range from $z\sim $1 to $z\sim$0. While subject to large uncertainties in the photometric redshifts, stellar mass estimates, and reliability of the separation into quiescent and star-forming galaxies, this first result suggested that massive galaxies ($M_* > 10^{11} M_{\odot}$) have evolved since $z \sim 1$.
\citet{Moustakas2013} estimated the SMF between $0 < z < 1$ over an area of $\sim$5.5 deg$^{2}$ using PRIMUS, a low-resolution prism survey \citep[for galaxies with $i_{AB}\le23$;][]{Coil2011}.
The wealth of multi-wavelength information, from deep ultraviolet (GALEX satellite) to mid-infrared (Spitzer/IRAC) photometry, allowed them to derive accurate stellar masses and a reliable separation between active and quiescent populations. Their SMF measurements confirmed the modest change in the number density of massive star-forming galaxies ($M_*\ge 10^{11}M_{\odot}$), leaving little room for mergers, but revealed a significant drop (50\%) of the fraction of active star-forming galaxies since $z\sim 1$, in contrast with the classical picture in which the star-forming population remains constant across cosmic time.
Another major spectroscopic sample is provided by the VIMOS Public Extragalactic Redshift Survey \citep[VIPERS; ][]{Guzzo2014}, whose first $\sim 50,000$ galaxies down to $i_{AB}=22.5$ over an area of 10.3 deg$^2$ have recently been released \citep[PDR1,][]{Garilli2014}. Using the PDR1 combined with CFHTLS photometry and the same ultraviolet (UV) and near-infrared (NIR) data as used here, \citet{Davidzon2013} produced the most reliable measurement to date of the high-mass end of the SMF at $0.5 < z < 1.3$.
The VIPERS SMF shows to high precision that the most massive galaxies had already assembled most of their stellar mass at $z\sim1$, but that a residual evolution is still present.
However, as discussed in \citet{Davidzon2013}, although these two studies use spectroscopic redshifts, multi-wavelength information, and a large area, they disagree slightly on the overall amplitude of the SMF. These discrepancies might be due to differences in the stellar mass estimates, for example, or to selection effects that are not fully accounted for. This highlights how subtle effects become crucial and can introduce significant systematic errors once statistical uncertainties are reduced so drastically.
In this paper we exploit the broad photometric coverage assembled over the footprint of VIPERS to build a unique multi-wavelength photometric sample covering more than 22 deg$^2$ down to $K_s < 22$, as part of the VIPERS-Multi Lambda Survey \citep[VIPERS-MLS; see][]{Moutard2016a}. We benefit from the synergy with the VIPERS spectroscopic survey by using the PDR-1 data to compute reliable photometric redshifts, and we derive stellar masses for 760,000 galaxies out to $z=1.5$. This allows us to obtain a new estimate of the SMF that
(a) has greater control over the low-mass slope because of the $i < 23.7~/~K_s < 22$ depth of our sample for extended sources (more than 1 mag deeper in the $i$-band than VIPERS),
(b) extends over a wider redshift range than VIPERS, from $z = 0.2$ out to $z=1.5$,
(c) is less affected by the cosmic variance because the effective area is doubled with respect to the VIPERS PDR-1 used in \citet{Davidzon2013} (we cover nearly the entire footprint of the final VIPERS survey and avoid the 30\% area loss that is due to the detector gaps in VIPERS),
(d) suffers a reduced Poisson error because our sample is ten times larger in the common redshift range, and
(e) can be studied separately for star-forming and quiescent objects, which means that the quenching channels that characterise the massive galaxies up to $z = 1.5 $ can be explored, as well as the low-mass galaxies at low redshift.
The paper is organised as follows. In Sect. \ref{data} we describe our photometric and spectroscopic dataset. The photometric redshifts and galaxy classification are presented in Sect. \ref{photoz}, the stellar mass estimates in Sect. \ref{mass}. We detail the measurements of the galaxy SMFs and the associated uncertainties in Sect. \ref{SMF}, where we also point out the effect of the photometric absolute calibration in the new generation of large surveys. We present the evolution of the stellar mass function and stellar mass density in Sect. \ref{evol}. Finally, we discuss our results and their effects on the quenching channels in Sect. \ref{discut}.
Throughout this paper, we use the standard cosmology ($\Omega_m~=~0.3$, $\Omega_\Lambda~=~0.7$ with $H_{\rm0}~=~70$~km~s$^{-1}$~Mpc$^{-1}$). Magnitudes are given in the $AB$ system \citep{Oke1974}. The galaxy stellar masses are given in units of solar masses ($M_{\sun}$) for a \citet{Chabrier2003} initial mass function (Chabrier IMF).
\begin{figure}[!t]
\center
\includegraphics[width=\hsize, trim = 0.2cm 0cm 0.1cm 0cm, clip]{figures/W1W4_layout.png}
\caption{Footprints of the WIRCam $K_s$-band (red layout and background) and GALEX $NUV$/$FUV$ (blue circles) observations in the CFHTLS W1 (top) and W4 (bottom) fields. The regions covered by VIPERS (pink), PRIMUS (green), VVDS (yellow) and UDSz (magenta) are over-plotted. The SDSS-BOSS redshifts are distributed over the entire survey.
\label{footprint}}
\end{figure}
\section{Data description}
\label{data}
The observations and the data reduction are described in detail in the companion paper \citep{Moutard2016a} and are briefly summarised below.
\subsection{Optical CFHTLS photometry}
\label{optical}
The CFHTLS\footnote{http://www.cfht.hawaii.edu/Science/CFHTLS/} is an imaging survey performed with the MegaCam\footnote{http://www.cfht.hawaii.edu/Instruments/Imaging/Megacam/} camera in five optical bands, $u, g, r, i,$ and $z$.
It covers $\sim$ 155 deg$^{2}$ over four independent fields with sub-arcsecond seeing (median $\sim$ 0.8\arcsec) and reaches an 80\% completeness limit of $u \sim 24.4$, $g \sim 24.7$, $r \sim 24.0$, $i / y \sim 23.7,$ and $z \sim 22.9$ for extended sources in the AB system. We emphasise that the $y$-band refers to the new $i$-band filter, in accordance with the CFHTLS notation. We have used the $y$-band response curve in our analysis when appropriate, but we refer to the ``$i$'' filter regardless of whether the observation was taken with the $i$ or $y$ filter.
In this work we use the W1 ($+02^h18^m00^s$ $-07^{\circ}00^m00^s$) and W4 ($+22^h13^m18^s$ $+01^{\circ}19^m00^s$) fields. Two independent photometric catalogues have been released to the community: the seventh and final release (noted T0007\footnote{http://terapix.iap.fr/cplt/T0007/doc/T0007-doc.html}) of the CFHTLS produced by Terapix\footnote{http://terapix.iap.fr/}, and the release from the CFHT Lensing Survey team (CFHTLenS\footnote{http://www.cfhtlens.org/}).
Both catalogues are based on the same raw images. The AstrOmatic software suite\footnote{http://www.astromatic.net/} has been used to generate the mosaic images \citep[SWARP,][]{Bertin2002} and to extract the photometric catalogues \citep[SExtractor,][]{Bertin1996}. The two releases differ in several points, however.
\begin{itemize}
\item In T0007, detection is based on $gri-\chi^2$ images, while the galaxies in CFHTLenS are $i$-detected.
\item A point spread function (PSF) homogenisation is implemented in CFHTLenS \citep[see][]{Hildebrandt2012} to improve the colour estimates. In practice, the PSF is homogenised across the field of view for each filter and degraded in all filters to match the filter with the largest PSF.
\item A new photometric calibration has been applied to the T0007 release. While the previous releases and the CFHTLenS release rely on Landolt standard stars \citep[see][]{Erben2013}, T0007 is based on the spectrophotometric tertiary standards from the Super Novae Legacy Survey \citep[SNLS; see the procedure described in][]{Regnault2009}. In brief, each tile of the CFHTLS-Wide is re-observed (with short exposures) during stable photometric conditions and bracketed by two observations of the CFHTLS-Deep field containing the SNLS tertiary standards.
\end{itemize}
The difference in the calibration scheme of the two releases affects the final photometry. A comparison of the magnitudes for point-like sources between the T0007 and CFHTLenS releases reveals systematic offsets that are significantly larger than the expected uncertainties. These offsets are reported in Table \ref{tab_zero_pt} (column $\Delta mag$).
We emphasise that the differences listed in this table are entirely due to the new calibration scheme established for T0007. The procedure used by the T0007 release allows transferring the percent level accuracy of the SNLS photometric calibration to the entire CFHTLS-Wide survey. For this reason, we use the T0007 catalogue as reference in this paper. However, we also perform the complete analysis with the CFHTLenS catalogue to discuss the effect of such differences in the photometric absolute calibration.
\subsection{WIRCam $K_s$ photometry}
\label{wircam_K}
We conducted a NIR $K_s$-band follow-up of the VIPERS fields with the WIRCam instrument at CFHT \citep{Puget2004}. The layout of the observations is shown in Fig.~\ref{footprint} (red background). We covered a total area of $\sim$27 deg$^2$ with an integration time per pixel of 1050 seconds. The image quality is very homogeneous, with an average seeing over all the individual exposures of $\langle IQ \rangle=0.6\arcsec \pm0.09\arcsec$. The data have been reduced by the Terapix team\footnote{http://terapix.iap.fr/} and the individual images were stacked and resampled on the pixel grid of the CFHTLS-T0007 release \citep{Hudelot2012}. The photometry was performed with \texttt{SExtractor} in dual-image mode with a $gri-\chi^2$ image as the detection image and the same settings as those adopted for the T0007 release. The images reach a depth of $K_s= 22$ at $\sim 3\sigma$.
The completeness reaches 80\% at $K_s=22$; this was determined from a comparison with the deeper UKIDSS Ultra-Deep Survey \citep[UDS,][]{Lawrence2007} and VIDEO \citep{Jarvis2013} surveys in overlapping regions. Because the primary optical detection is based on the $gri-\chi^2$ image, we may miss the reddest high-redshift galaxies. To account for this possible bias, we measured our source incompleteness as a function of magnitude, $K$, and colour, $(z-K)$, using all the sources detected in the deep VIDEO survey. We derived a colour--magnitude weight map, shown in Fig.~\ref{weight_map}, which we use in the remainder of the paper as a multiplicative weight in our statistical analyses.
We refer to the companion paper, \citet{Moutard2016a}, for a complete description of the method that was used to build this weight map.
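In practice, applying such a colour--magnitude weight map amounts to a two-dimensional bin lookup. The following sketch illustrates the multiplicative weighting scheme with purely hypothetical bin edges and weight values (the real map is the VIDEO-based measurement described above):

```python
from bisect import bisect_right

# Hypothetical bin edges for the K magnitude and (z - K) colour axes,
# and a toy weight map WEIGHTS[i][j] >= 1.  Real values come from the
# VIDEO-based incompleteness measurement described in the text.
K_EDGES = [18.0, 19.0, 20.0, 21.0, 22.0]
ZK_EDGES = [-1.0, 0.0, 1.0, 2.0, 3.0]
WEIGHTS = [[1.0, 1.0, 1.0, 1.1],
           [1.0, 1.0, 1.1, 1.2],
           [1.0, 1.1, 1.2, 1.4],
           [1.1, 1.2, 1.4, 1.8]]

def completeness_weight(k_mag, zk_colour):
    """Multiplicative weight for a galaxy of magnitude K and colour z-K.

    Galaxies falling outside the map get weight 1 (no correction).
    """
    i = bisect_right(K_EDGES, k_mag) - 1
    j = bisect_right(ZK_EDGES, zk_colour) - 1
    if 0 <= i < len(WEIGHTS) and 0 <= j < len(WEIGHTS[0]):
        return WEIGHTS[i][j]
    return 1.0
```

A weighted galaxy count is then simply the sum of `completeness_weight(K, zK)` over the sample instead of the raw number of objects.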
\begin{figure}[!h]
\includegraphics[width=0.95\hsize]{figures/weight_color_map_0215.pdf}
\caption{Colour$-$magnitude weight map used for our statistical analysis. It takes the missing objects in the $K_s < 22$-limited sample into account. These objects are missed because the $gri$-detection was used to extract the $K_s$ fluxes.
Weights are multiplicative. This map is restricted to galaxies with redshift $z\le 1.5$ (cf. Sect.~\ref{photoz}) and the contours outline the galaxy density distribution.
\label{weight_map}}
\end{figure}
\subsection{GALEX photometry}
\label{galex}
When available, we made use of the UV deep-imaging photometry from the
GALEX satellite \citep{Martin2005}. We only considered the observations from the Deep Imaging Survey (DIS), which are shown in Fig.~\ref{footprint} as blue circles ($\varnothing \sim1.1^{\circ}$). All the GALEX pointings were observed with the near-ultraviolet (NUV) channel with exposure times of $T_{\rm exp} \ge 30$~ksec. Far-ultraviolet (FUV) observations are available for ten pointings in the central part of W1.
The large PSF of GALEX (FWHM$\sim$5\arcsec) means that source confusion becomes a serious problem in the deep survey. To extract the UV photometry, we used a dedicated photometric code, \texttt{EMphot} \citep{Conseil2011}, which will be described in a separate paper (Vibert et al., in prep.). In brief, \texttt{EMphot} uses the stamps in the $u$-band (here the T0007 release) as priors, which are then convolved with the GALEX PSF to produce a simulated image. The scaling factor to be applied to each $u$-band prior is obtained by simultaneously maximising the likelihood between the observed and the predicted fluxes for all the sources in tiles of a few square arcminutes. The uncertainties on the UV fluxes account for the residuals between the simulated and observed images. A typical depth of $NUV\sim 24.5$ at $\sim 5\sigma$ is reached over the entire survey. The NUV observations cover part of the WIRCam area, with $\sim$10.8 and 1.9~deg$^2$ in the W1 and W4 fields, respectively.
\subsection{Final photometric catalogue}
\label{final_cat}
The catalogue of sources comes from the T0007 release and is based on detection in the $gri-\chi^2$ image. As mentioned above, the same procedure was applied to the $K_s$ images. Following \citet{Erben2013} and \citet{Hildebrandt2012}, we used the T0007 isophotal apertures for the photometry to estimate the colours. The apertures are smaller than the Kron-like apertures \citep{Kron1980},
which provides less noisy colours and leads to an improved photometric redshift accuracy \citep{Hildebrandt2012}. We also confirmed this with our large spectroscopic dataset (see below), which is especially relevant for faint sources ($i'>23.5$).
To derive galaxy physical properties, we need the total flux at all wavelengths. We therefore rescaled the isophotal flux to the Kron-like flux, $m^f_{total} = m^f_{ISO}+ \delta_m$, adopting a single factor, $\delta_m$, for each source in order to preserve the colours. $\delta_m$ is the weighted mean of the individual scaling factors, $\delta_m^f$, defined as $\delta_m= \sum_f \delta^f_m w^{f} / \sum_f w^f $, with $f=g,r,i,K_s$ and $w^f$ the weight associated with the photometric error in band $f$\footnote{For the CFHTLenS catalogue the scaling factor is computed in the $i$ band only, since only the final magnitude is available in the public CFHTLenS catalogue of \citet{Erben2013}.}.
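The rescaling above can be sketched in a few lines. This is a minimal illustration, not the production pipeline; in particular, the inverse-variance form of the weights $w^f$ is an assumption of the sketch:

```python
def rescale_to_total(m_iso, m_kron, sigma):
    """Rescale isophotal magnitudes to total magnitudes with a single
    offset per source, so that all colours are preserved.

    m_iso, m_kron, sigma: dicts keyed by band (g, r, i, Ks) giving the
    isophotal magnitude, the Kron-like magnitude, and the magnitude
    error.  The offset delta_m is the weighted mean of the per-band
    offsets delta_m^f = m_kron^f - m_iso^f; the inverse-variance
    weights used here are an assumption of this sketch.
    """
    delta = {f: m_kron[f] - m_iso[f] for f in m_iso}
    w = {f: 1.0 / sigma[f] ** 2 for f in m_iso}
    delta_m = sum(delta[f] * w[f] for f in m_iso) / sum(w.values())
    # The same offset is added in every band, so colours are unchanged.
    return {f: m_iso[f] + delta_m for f in m_iso}
```

Because a single $\delta_m$ is applied to every band, any colour $m^f - m^{f'}$ of the rescaled photometry equals the isophotal colour by construction.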
Finally, the GALEX photometry, which corresponds to a total flux measurement (i.e. PSF-model photometry), was added alongside the optical and NIR magnitudes.
We limit the catalogue to galaxies brighter than $K_s<22$. The catalogue includes a total of $\sim$1.3 million sources over an area of $\sim$27.1 deg$^2$, which drops to one million sources over $\sim$22.4 deg$^2$ after applying the masks provided by the CFHTLenS team.
\subsection{Spectroscopic sample}
\label{zs_sample}
Our WIRCam survey has been designed to cover the VIMOS Public Extragalactic Survey \citep[VIPERS;][]{Guzzo2014} that is carried out with the VIMOS spectrograph and therefore provides many high-quality spectroscopic redshifts. We also added a compilation of the best-quality spectra from the VVDS survey \citep[$I_{AB}\le24$,][]{LeFevre2013}, the $K < 23$ limited UKIDSS spectroscopic Ultra Deep Survey \citep[UDSz,][]{Bradshaw2013,McLure2013}, the low-resolution spectra ($\lambda / \Delta \lambda \sim$40) from the PRIsm MUlti-object Survey \citep[PRIMUS, $i\sim 23$,][]{Coil2011}, and the bright-limited ($i < 19.9$) spectroscopic survey BOSS from the SDSS \citep[Baryon Oscillation Spectroscopic Survey,][]{Dawson2013}. The $K_s < 22$ spectroscopic sample we used is presented in detail in the companion paper \citep{Moutard2016a}.
We selected only the most secure spectroscopic redshifts, which means confidence levels above 95\% for the high-resolution surveys and $\sigma < 0.005$ (8\% of outliers with $\delta z/(1+z) > 5 \sigma$) for the PRIMUS best redshifts. When available, the redshift measurements from VIPERS were used. Otherwise, the measurements from the deepest high-resolution spectra were favoured. In total, we assembled a $K_s < 22$-limited sample of $45951$ high-quality spectroscopic redshifts to calibrate and measure the accuracy of our photometric redshifts over the unmasked area of the survey (we refer to the companion paper for more details).
\section{Photometric redshifts}
\label{photoz}
\subsection{Photometric redshift measurement}
\label{zp_method}
The photometric redshifts were computed with the spectral energy
distribution (SED) fitting code \lephare \citep{Arnouts2002, Ilbert2006}, using the templates of \citet{Coupon2015}. The new templates are based on the \citet{Ilbert2009} library of 31 empirical templates from \citet{Polletta2007}, complemented by 12 star-forming templates from the Bruzual and Charlot stellar population synthesis models of 2003 \citep[][hereafter BC03]{BC2003}. These templates were optimised to be more representative of the VIPERS spectroscopic sample \citep[for more details we refer to][]{Coupon2015}.
The extinction was added as a free parameter with a reddening excess $E(B-V) < 0.3$, following different laws: \citet{Prevot1984}, \citet{Calzetti2000}, and a dusty Calzetti curve including a bump at 2175\AA. No extinction was allowed for SEDs redder than Sc. The extinction law of \citet{Prevot1984} was used for templates redder than SB3, and the law of \citet{Calzetti2000} for bluer templates.
Finally, any possible difference between the photometry and the template library was corrected for by \lephare according to the method described in \citet{Ilbert2006}. In brief, in each band the code tracks a systematic shift between the predicted magnitudes at known redshift and the observed magnitudes. Since our observation area is divided into 47 tiles of $\lesssim 1~\mathrm{deg}^2$ with the relative calibration varying from tile to tile, we performed a tile-by-tile colour optimisation. When a tile did not contain enough galaxies with spectroscopic redshifts ($N_{gal}^{spec} \leq 100$), which was the case for 12 tiles, we used the median offset over all the tiles.
We stress that the corrections were computed to better fit the colours and are therefore relative. We normalised the median offsets to the $K_s$-band because the NIR fluxes are the same in both catalogues (see Sect. \ref{data}). The median relative offsets thus calculated for each photometric band of the T0007 and CFHTLenS catalogues can be found in Table \ref{tab_zero_pt}, with the associated tile-to-tile deviation estimates (namely, the normalised median absolute deviation, NMAD). The difference between the T0007 and CFHTLenS relative offsets is consistent with the magnitude difference $\Delta mag$. In other words, we retrieve the shift between the two absolute photometric calibrations through the relative offsets computed with \lephare. This safety check confirms that the colour optimisation achieved with \lephare absorbs the uncertainties linked to the photometric calibration.
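At its core, the tile-by-tile colour optimisation reduces to computing a per-band median offset between predicted and observed magnitudes and re-expressing it relative to the $K_s$-band. A schematic version (function name and data layout are illustrative, not the \lephare implementation):

```python
import statistics

def relative_offsets(m_pred, m_obs, ref="Ks"):
    """Systematic per-band magnitude corrections for one tile.

    m_pred[f], m_obs[f]: lists of template-predicted magnitudes (at the
    known spectroscopic redshift) and observed magnitudes in band f for
    the spectroscopic galaxies of the tile.  The median offsets are
    normalised to the reference band so that only relative (colour)
    corrections are applied.
    """
    off = {f: statistics.median([p - o for p, o in zip(m_pred[f], m_obs[f])])
           for f in m_pred}
    return {f: off[f] - off[ref] for f in off}
```

Tiles with too few spectroscopic galaxies would instead receive the median of these offsets over all well-populated tiles, as described above.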
\begin{table}[h!]
\caption{T0007 - CFHTLenS photometric offsets ($\Delta mag$) obtained by comparing point-like sources in the two catalogues and relative corrections obtained with \lephare to optimise the photometric redshifts. Relative corrections are given using the $K_s$-band as reference (NIR data are identical). \label{tab_zero_pt}}
\vspace{0.3cm}
\centering
\begin{tabular}{l*{3}{r}}
\hline
\noalign{\smallskip}
& & \multicolumn{2}{c}{\lephare corrections } \\
Filter & $\Delta mag^{*}$~~~~~~~ & \begin{tiny}T0007\end{tiny}~~~~~~~ & \begin{tiny}CFHTLenS\end{tiny}~~~~~\\
\hline
\hline
\noalign{\medskip}
$FUV$ & --- ~~~~~~~~ & 0.102 $\pm$ \textit{0.070} & 0.084 $\pm$ \textit{0.079} \\
$NUV$ & --- ~~~~~~~~ & 0.054 $\pm$ \textit{0.055} & 0.022 $\pm$ \textit{0.065} \\
$u$ & -0.013 $\pm$ \textit{0.052} & 0.075 $\pm$ \textit{0.031} & 0.087 $\pm$ \textit{0.042} \\
$g$ & 0.071 $\pm$ \textit{0.053} & 0.028 $\pm$ \textit{0.019} & -0.053 $\pm$ \textit{0.016} \\
$r$ & 0.038 $\pm$ \textit{0.052} & 0.022 $\pm$ \textit{0.019} & -0.024 $\pm$ \textit{0.005} \\
$i$ & 0.066 $\pm$ \textit{0.045}& 0.013 $\pm$ \textit{0.015} & -0.055 $\pm$ \textit{0.009} \\
$y$ & 0.048 $\pm$ \textit{0.051} & 0.008 $\pm$ \textit{0.009} & -0.042 $\pm$ \textit{0.013} \\
$z$ & 0.148 $\pm$ \textit{0.054} & 0.087 $\pm$ \textit{0.027} & -0.063 $\pm$ \textit{0.015} \\
$K_s$ & --- ~~~~~~~~ & 0.0 $\pm$ \textit{0.016} & 0.0 $\pm$ \textit{0.019} \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\multicolumn{4}{l}{\begin{footnotesize} $^{*}$ $m_{_{T07}} - m_{_{LenS}}$ \end{footnotesize}} \\
\end{tabular}
\end{table}
\subsection{Accuracy and precision of photometric redshifts}
\label{zp_accu}
The comparison between our photometric redshifts and the corresponding spectroscopic redshifts for our $K_s < 22$-limited sample is shown in Fig. \ref{zp_zs}. Using the NMAD to define the scatter\footnote{$\sigma_z = 1.48~median(~|z_{spec}-z_{phot}| / (1+z_{spec})~)$}, we find $\sigma_{\Delta z/(1+z)} \sim 0.05$ for faint ($i > 22.5$) galaxies, while the scatter reaches $\sigma_{\Delta z/(1+z)} \sim 0.03$ for bright ($i < 22.5$) galaxies. Our photo-z outlier rate\footnote{$\eta$ is the percentage of galaxies with $\Delta z/(1+z)>0.15$} is $\eta = 1.2\%$ and $\eta = 9\%$ for the bright and faint samples, respectively (see Fig. \ref{zp_zs}, top panels, lower right corners).
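The two footnote estimators translate directly into code; a minimal sketch:

```python
import statistics

def photoz_quality(z_spec, z_phot, outlier_cut=0.15):
    """NMAD scatter and catastrophic-outlier rate of photometric redshifts.

    sigma = 1.48 * median(|z_phot - z_spec| / (1 + z_spec))
    eta   = percentage of objects with |z_phot - z_spec|/(1+z_spec)
            above outlier_cut (0.15 in the text).
    """
    dz = [abs(zp - zs) / (1.0 + zs) for zs, zp in zip(z_spec, z_phot)]
    sigma = 1.48 * statistics.median(dz)
    eta = 100.0 * sum(d > outlier_cut for d in dz) / len(dz)
    return sigma, eta
```

The NMAD is preferred over a plain standard deviation because the median makes it insensitive to the catastrophic outliers counted separately by $\eta$.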
\begin{figure*}[!ht]
\centering
\includegraphics[width=0.99\hsize]{figures/zp_zs_K22.pdf}\\
\includegraphics[width=0.49\hsize, trim = 0cm 0cm 0cm 1.5cm, clip]{figures/zp_esti_vs_zp_3.pdf}
\includegraphics[width=0.49\hsize, trim = 0cm 0cm 0cm 1.5cm, clip]{figures/zp_esti_vs_i_3.pdf}
\caption{Photometric redshift accuracy of our $K_s < 22$-limited sample. \textit{\textbf{Top}} : T0007 photometric redshift as a function of spectroscopic redshift for bright ($i < 22.5 ~\cap~ K_s < 22$) and faint ($i > 22.5 ~\cap~ K_s < 22$) galaxies. The dashed lines delimit the $\sigma_{\Delta z/(1+z)} \leq 0.15$ area, outside which photo-z measurements are considered as outliers. The accuracy estimators written in the upper left corners are weighted with respect to the $i$-band distribution of our photometric sample (see Sect. \ref{zp_accu}). \textit{\textbf{Bottom}}: Dispersion, outlier rate, bias, and spectroscopic redshift number ($N^{spec}_{gal}$) as a function of photometric redshift (left) and $i$-magnitude (right), using the T0007 (blue) and CFHTLenS (red) optical photometry. \label{zp_zs}}
\end{figure*}
Although the spectroscopic sample has been assembled to be as representative as possible, it is not as deep as the photometric sample. To correct for this effect, we computed estimators that are weighted with respect to the $i$-band distribution of the photometric sample. Using these weighted estimators (marked with an orange $w$ in Fig. \ref{zp_zs}, top panels), we obtain an accuracy of $\sigma_{\Delta z/(1+z)}^{w} \sim 0.03$ for bright ($i < 22.5$) galaxies with an outlier rate of $\eta^w = 1.4\%$, and $\sigma_{\Delta z/(1+z)}^{w} \sim 0.07$ with $\eta^w = 16.4\%$ for faint ($i > 22.5$) galaxies in our $K_s < 22$-limited sample.
Even though the T0007 and CFHTLenS calibrations differ, the photometric redshifts obtained in both cases agree well\footnote{$z_{phot}^{ _{ ~T07}}-z_{phot}^{ _{ ~LenS}} = -0.008 \pm 0.048 ~$ for $ ~0.2 < z_{phot}^{ _{ ~T07}} \leq 1.5$} and their accuracies are similar. This is expected from the colour corrections described in Sect. \ref{zp_method}, which absorb the differences between the two calibrations.
Finally, based on Fig. \ref{zp_zs}, we can define a range of reliable redshifts up to $z = 1.5$, with $\sigma_{\Delta z/(1+z)}(z) < 0.1$. The highest redshift bin that we consider, namely between $z = 1.1$ and $z = 1.5$, is characterised by the weighted accuracy $\sigma_{\Delta z/(1+z)}^w \sim 0.08$ and weighted outlier rate $\eta^w \sim 20\%$.
\subsection{Star and galaxy classification }
\label{star_gal}
Being able to separate galaxies and stars is crucial in our sample, especially for the W4 field, which is close to the Galactic plane and therefore highly populated by stars. \citet{Garilli2008} found that more than 32\% of the objects in the VVDS-Wide survey, a purely $i < 22.5$-selected spectroscopic survey lying in the CFHTLS W4 field, are stars. To better control the type of the objects that we select as galaxies without compromising the completeness of our sample, we performed a classification based on three different diagnostics. Our classification is presented in detail in the companion paper \citep{Moutard2016a} and is summarised below.
\begin{itemize}
\item[•] First, we used the maximum surface brightness versus magnitude (hereafter $\mu_{max}-m_{obs}$) plane where bright point-like sources are well separated from galaxies \citep[see][]{Bardeau2005, Leauthaud2007}.
\item[•] Secondly, we compared the reduced $\chi^2$ obtained with galaxy templates described in Sect. \ref{zp_method} and a representative stellar library \citep[based on][]{Pickles1998}. An object can be defined as a star when its photometry is better fitted by a stellar spectrum.
\item[•] Finally, we used the $g-z/z-K_s$ plane \citep[equivalent to the $BzK$ plane of][]{Daddi2004} to isolate the stellar sequence and imposed that a star belong to this colour region.
This sine qua non condition enabled us to catch faint stars while preventing us from losing faint compact galaxies.
\end{itemize}
We also identified a sample of QSOs (Type-1 AGNs) as point-like sources lying on the galaxy side of the BzK diagram. Because the emission of these objects is dominated by their nucleus, it is currently poorly linked to their stellar mass. However, they represent less than 0.5\% of the objects, and we removed them from our sample without compromising its completeness.
All the objects that were not defined as stars or QSOs were considered to be galaxies. We verified on a sample of 1241 spectroscopically confirmed stars that we caught 97\% of them in this way, while keeping more than 99\% of our spectroscopic galaxy sample. With this selection, we finally identified and removed $\sim$8\% and $\sim$19\% of the objects at $K_s < 22$ in W1 and W4, respectively, outside the masked areas.
\section{Stellar mass estimation}
\label{mass}
\subsection{Method}
\label{mass_method}
Stellar mass, $M_*$, and the other physical parameters were computed with \lephare, using the stellar population synthesis models of \citet{BC2003}. As in \citet{Ilbert2013}, the stellar mass corresponds to the median of the stellar mass probability distribution ($PDF_{M_*}$) marginalised over all other fitted parameters. Two metallicities were considered ($Z=0.008$ and $Z=0.02$, i.e. $Z_{\sun}$), and the star formation history was assumed to decline exponentially as $\tau^{-1} e^{-t/\tau}$, with nine possible values of $\tau$ between 0.1 Gyr and 30 Gyr, as in \citet{Ilbert2013}.
The importance of the assumed extinction law for the estimation of physical parameters has been stressed in several recent studies, for example by \citet{Ilbert2010} and \citet{Mitchell2013} for stellar masses, or by \citet{Arnouts2013} for the star formation rate (SFR). We considered three laws with a maximum dust reddening of E(B-V) $\leq$ 0.5: the \citet{Prevot1984} law, the \citet{Calzetti2000} law,
and an intermediate-extinction curve \citep[see][for more details]{Arnouts2013}. As in \citet{Fontana2006}, we imposed a low extinction for low-SFR galaxies (E(B-V) $\leq$ 0.15 if age/$\tau$ > 4). The emission-line contribution was taken into account following an empirical relation between UV and line fluxes \citep{Ilbert2009}.
Using a method similar to \citet{Pozzetti2010}, we based our estimate of the stellar mass completeness limit, $M_{lim}$, on the distribution of the lowest stellar mass, $M_{min}$, at which a galaxy could have been detected given its redshift. For our sample, which is limited at $K_s < 22$, $M_{min}$ is given by
\begin{equation}
\log(M_{min}) = \log(M_*) + 0.4 \ (K_s - 22)
\label{mass_lim_eq}
.\end{equation}
We then considered the upper envelope of the $M_{min}$ distribution: in each redshift bin, $M_{lim}$ is defined as the stellar mass at which 90\% of the population have $M_* > M_{min}$. We show the resulting stellar mass completeness limits (open circles) as a function of redshift in Fig. \ref{mass_comp}, over the $M_*$ distribution of our $K_s < 22$-limited sample.
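As a sketch of this procedure, assuming per-galaxy arrays of stellar mass, $K_s$ magnitude, and redshift, the limit of Eq. \ref{mass_lim_eq} and its envelope could be computed as follows; the function name, the binning, and the use of a plain 90th percentile of the $M_{min}$ distribution are illustrative assumptions.

```python
import numpy as np

def mass_completeness_limit(log_mass, ks_mag, z, z_edges, ks_lim=22.0, pct=90.0):
    """Pozzetti-style completeness limit for a Ks-limited sample (sketch).

    For each galaxy, log(M_min) = log(M*) + 0.4*(Ks - ks_lim) is the lowest
    mass at which it would still be detected at its redshift (Eq. 1).
    In each redshift bin we take the pct-th percentile of the M_min
    distribution as a proxy for the upper-envelope limit M_lim.
    """
    log_mmin = log_mass + 0.4 * (ks_mag - ks_lim)
    limits = []
    for zlo, zhi in zip(z_edges[:-1], z_edges[1:]):
        sel = (z >= zlo) & (z < zhi)
        limits.append(np.percentile(log_mmin[sel], pct) if sel.any() else np.nan)
    return np.array(limits)
```

A galaxy sitting exactly at the $K_s = 22$ limit has $M_{min} = M_*$, so a bin populated only by such galaxies returns its own mass as the limit.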
\begin{figure}[!]
\centering
\includegraphics[width=\hsize]{figures/mass_complet3.pdf}
\caption{Stellar mass versus redshift for our $K_s < 22$-limited sample of galaxies. The black open circles represent the stellar mass completeness limit computed with the $K_s$ completeness limit $M_{lim}$ according to Eq. \ref{mass_lim_eq}. The black dots represent the mass at which the $V_{max}$ and the SWML SMF estimators diverge (see below Sect. \ref{smf_measur}). \label{mass_comp}}
\end{figure}
\subsection{Stellar mass error budget}
\label{mass_errors}
In this section, we quantify the uncertainties associated with the stellar masses, which will be propagated into the error budget of the stellar mass functions. The first contribution to be considered is the uncertainty in the flux measurements. The photon noise is taken into account by \lephare during the $\chi^2$ SED-fitting procedure (rescaling and model selection), which returns the 68\% confidence interval of the probability distribution function marginalised over the stellar mass ($PDF_{M_*}$).
The second source of error is introduced by the photometric redshift uncertainty, which is not included in $PDF_{M_*}$. One way to measure its effect is to compare the stellar masses derived with the photometric and spectroscopic redshifts. We emphasise that this analysis is probably limited by the completeness of our spectroscopic sample; on the other hand, it has the advantage of reflecting all the photo-z error contributions (quality of the photometry and representativeness of the templates).
The difference between the two mass estimates is shown in Fig. \ref{Mzs_Mzp} as a function of the stellar mass $M_*^{z_{phot}}$. In the top panel, we show the difference in four redshift bins between $z = 0.2$ and $z = 1.5$. No dependence on redshift is observed. The linear regression over the whole sample, plotted as a black dashed line, also suggests that the difference is not mass dependent. The bottom panel shows the dispersion of $M_*^{z_{phot}} / M_*^{z_{spec}}$ in five stellar mass bins and reveals a median dispersion of 0.06 dex, with a maximum of 0.19 dex at low mass.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.99\hsize]{figures/dMass_vs_mass_z2.pdf}
\includegraphics[width=0.99\hsize]{figures/dMass_vs_mass2.pdf}
\caption{Redshift contribution to the stellar mass uncertainty as a function of the stellar mass. The uncertainty is computed from the 1 $\sigma$ dispersion of the ratio $M_*(z_{spec})/M_*(z_{phot})$ in the spectroscopic sample after removing photo-z outliers. In the \textit{\textbf{top panel}}, the distribution is shown in four redshift bins, while in the \textit{\textbf{bottom panel}} it is shown in five stellar mass bins. The error bars correspond to the dispersion reported in each bin of $M_*$, while the dashed line is the linear regression associated with the whole sample. \label{Mzs_Mzp}}
\end{figure}
We then defined the resulting mass uncertainty as the sum in quadrature of all contributions:
\begin{equation}
\sigma_{M} = \sqrt{ ~\sigma_{fit}^2 + \sigma_{z}^2~ }
\label{eq_err_M}
.\end{equation}
However, we have to keep in mind that the stellar mass estimation relies on the numerous assumptions made when generating our SED templates. For example, \citet{Maraston2005} showed that a different treatment of the thermally pulsing asymptotic giant branch (TP-AGB) phase in the SSPs can lead to a global shift in the stellar mass estimation\footnote{\citet{Pozzetti2007} and \citet{Ilbert2010} estimated an offset of $\sim 0.14$ dex between the stellar masses of BC03 and \citet{Maraston2005}.}. \citet{Ilbert2010} showed that the use of the \citet{Salpeter1955} IMF instead of the Chabrier IMF decreases the stellar masses by $0.24$ dex. Such global systematic shifts are not expected to affect the conclusions of our study. \citet{Mitchell2013} also pointed out the potential effect of the assumed dust attenuation on the stellar mass estimation\footnote{\citet{Mitchell2013} estimated that the stellar mass can be underestimated by up to 0.6 dex for massive galaxies when assuming the \citet{Calzetti2000} law.}. As presented in the previous section, we considered three different extinction laws. This allows a greater diversity of possible dust attenuation values, which is expected to limit the bias that may affect our stellar mass estimation.
\subsection{Effect of the CFHTLS absolute calibration}
\label{systematics}
\begin{figure}[!]
\includegraphics[width=0.499\hsize, trim = 0cm 2.1cm 0cm 0cm, clip]{figures/dM_LenS_t07_0205.pdf}
\hspace{-0.4cm}\includegraphics[width=0.499\columnwidth, trim = 0cm 2.1cm 0cm 0cm, clip]{figures/dM_LenS_t07_0508.pdf}
\hspace{0.01cm}\includegraphics[width=0.497\hsize]{figures/n_M_0205.pdf}
\hspace{-0.38cm}\includegraphics[width=0.497\columnwidth]{figures/n_M_0508.pdf}
\includegraphics[width=0.499\hsize, trim = 0cm 2.1cm 0cm 0cm, clip]{figures/dM_LenS_t07_0811.pdf}
\hspace{-0.4cm}\includegraphics[width=0.499\columnwidth, trim = 0cm 2.1cm 0cm 0cm, clip]{figures/dM_LenS_t07_1115.pdf}
\hspace{0.01cm}\includegraphics[width=0.497\hsize]{figures/n_M_0811.pdf}
\hspace{-0.38cm}\includegraphics[width=0.497\columnwidth]{figures/n_M_1115.pdf}
\caption{Differences between the stellar mass obtained with T0007 and CFHTLenS, $M_*^{ T07}$ and $M_*^{ LenS}$ at different redshifts: the object-by-object $M_*^{ LenS} / M_*^{ T07}$ ratio versus $M_*^{ T07}$ in the upper panel, where the red dashed line is the linear regression, and the $M_*^{ T07}$ (blue) and $M_*^{ LenS}$ (red) normalised number counts in the lower panel, where the vertical black dashed line represents the mass completeness limit.\label{T07_LENS_dMASS}}
\end{figure}
As shown in Sect. \ref{optical}, the absolute photometric calibrations of the T0007 and CFHTLenS magnitudes differ by more than 0.05 mag on average. Even though the T0007 calibration is significantly improved, we blindly compared the photometric redshifts and the stellar masses computed with both catalogues to quantify the effect of these offsets.
As seen in Sect. \ref{zp_accu}, the colour corrections applied during the photometric redshift computation allow us to obtain very similar photo-z despite the offset between the two calibrations. However, these corrections are tied to the combination of the photometry and the SEDs used to compute the photo-z. As described in Sect. \ref{zp_method}, the templates used for the photo-z are different from those used for the masses. Consequently, we did not apply the photo-z colour corrections with the BC03 templates, and the differences between the T0007 and CFHTLenS photometries thus directly affect the stellar mass estimation.
Figure \ref{T07_LENS_dMASS} presents these differences in the redshift bins $0.2 < z < 0.5$, $0.5 < z < 0.8$, $0.8 < z < 1.1$, and $1.1 < z < 1.5$. The difference between the stellar masses obtained with the T0007 ($M_*^{T07}$) and those obtained with the CFHTLenS ($M_*^{LenS}$) is stellar mass dependent. On average, this systematic difference can reach $\pm 0.1$ dex at low redshift ($z < 0.8$). At higher redshift, we do not observe a systematic difference between the two stellar mass catalogues, since we used the same $K_s$-band calibration. Even if the object-by-object $M_*^{ LenS}$ to $M_*^{ T07}$ ratio is characterised by a mean offset that never exceeds 0.2 dex, the comparison of the T0007 and CFHTLenS number counts above the mass completeness limit reveals different shapes around $M_* \sim 10^{11} M_{\odot}$, notably at $z < 0.8$. This suggests a significant effect on the SMF massive end at low redshift, as discussed below.
\section{Measuring the stellar mass functions}
\label{SMF}
To compute the galaxy stellar mass function, we selected a sample of $\sim 760,000$ galaxies at $K_s \le 22$ over an effective area of 22.38 deg$^2$. As discussed in Sect. \ref{zp_accu}, we restricted our analysis to the range $0.2 \le z \le 1.5$, which combines reliable redshifts and large volumes.
The galaxy stellar mass function was derived with the tool ALF \citep{Ilbert2005}, which provides three non-parametric estimators: $V_{max}$ \citep{Schmidt1968}, SWML \citep[the step-wise maximum likelihood;][]{Efstathiou1988}, and $C^+$ \citep{Zucca1997}. The $V_{max}$ estimator is the most widely used because of its simplicity: each galaxy contributes the inverse of the volume in which it could have been observed, and $V_{max}$ is the only estimator that is directly normalised. The SWML determines the SMF by maximising the likelihood of observing a given stellar mass--redshift sample. The $C^+$ method overcomes the assumption of a uniform galaxy distribution that is made when using the $V_{max}$\footnote{For more details about these estimators, we refer to \citet{Ilbert2005} and \citet{Johnston2011}.}. As described in \citet{Ilbert2015}, these estimators diverge below a stellar mass limit that should correspond to the limit calculated in Sect. \ref{mass}. In Fig. \ref{mass_comp} we verify that the $V_{max}$ and SWML estimators (black dots) are consistent with our $K_s$-based stellar mass completeness limit (black open circles). We used the colour-magnitude weight map shown in Fig. \ref{weight_map} to correct the SMF for the potential incompleteness described in Sect. \ref{wircam_K}. In the remainder of this study, we work with stellar masses $M_* > M_{min}$, where all the non-parametric estimators agree.
\subsection{Measurements by type and field}
\label{smf_measur}
To separate quiescent and star-forming galaxies, we used the rest frame $(NUV-r)^\textsc{o}$ versus $(r-K)^\textsc{o}$ diagram (hereafter NUVrK) presented by \citet{Arnouts2013}, which is based on the method introduced by \citet{Williams2009}.
Figure \ref{NUVrK_z} presents the galaxy distribution in the NUVrK diagram for several redshift bins.
This optical-NIR diagram allows us to properly separate red dusty star-forming galaxies from red quiescent ones. Edge-on spirals are clearly identifiable, as is illustrated by the morphological study of the NUVrK diagram at low redshift presented in the companion paper \citep{Moutard2016a}.
When computing the rest-frame colours, we adopted the procedure described in Appendix A.1 of \citet{Ilbert2005} to minimise the dependency of the absolute magnitudes on the template library. The absolute magnitude at $\lambda^0$ was derived from the apparent magnitude in the filter passband closest to $\lambda^0 (1+z)$, which minimises the k-correction term, except when that apparent magnitude had an error above 0.3 mag, to avoid too noisy colours.
The small break\footnote{It is important to keep in mind that the NUVrK diagram is particularly stretched along the $( r-K_s )^\textsc{o}$ axis.} in the red clump is artificial: it is an effect of the template discretisation, occurring when the procedure used to limit the template dependency fails because of low signal-to-noise measurements (here due to the intrinsically low rest-frame NUV emission of quiescent galaxies)\footnote{We note that this discretisation effect is smoothed if we use a large number of templates, such as with the BC03 library. However, this smoothing is somewhat artificial, since the NUV part is not better constrained in practice. We verified with BC03 that the SMF of quiescent galaxies is not significantly affected by the set of templates used to compute absolute magnitudes.}.
As shown in Fig. \ref{NUVrK_z}, by following the low-density valley of the NUVrK diagram (the so-called \textit{green valley}), the selection of quiescent galaxies can be defined with the general form
\begin{equation}
[~ ( NUV-r )^\textsc{o} > B_2 ~]~\cap~[~ ( NUV-r )^\textsc{o} > A ~ ( r-K_s )^\textsc{o} + B_1 ~] ~.
\label{eq_NrK}
\end{equation}
$A$, $B_1$, and $B_2$ are three parameters to be adjusted in each redshift bin, as suggested by \citet{Ilbert2015} and \citet{Mortlock2015}, because of the global ageing of the galaxy population.
In the four redshift bins, the slope $A$ of Eq. \ref{eq_NrK} appears to be constant, with a typical value of $A = 2.25$.
By projecting the galaxy distribution in a plane perpendicular to the axis of slope $A$\footnote{We selected the galaxies in the range $0.4 < (r - K_s)^\textsc{o} < 0.9$ to avoid the objects in transition.}, we clearly distinguish the red and blue clouds as two normal distributions that we fitted by two Gaussians. We define $B_1$ as the position where the two Gaussians intersect.
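The determination of $B_1$ as the crossing point of the two fitted Gaussians can be sketched as follows; the dense grid search stands in for a proper root-finder, and the parameter tuples (amplitude, mean, width) are assumed to come from the double-Gaussian fit of the projected colour distribution described above.

```python
import numpy as np

def gaussian(x, amp, mu, sigma):
    """One Gaussian component of the fitted colour distribution."""
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def intersection_between(g_blue, g_red, lo, hi, n=10001):
    """Locate where two fitted Gaussians cross in the interval [lo, hi].

    g_blue, g_red are (amp, mu, sigma) tuples for the blue and red clouds
    projected perpendicular to the slope-A axis; the crossing point plays
    the role of the B1 normalisation.
    """
    x = np.linspace(lo, hi, n)
    diff = gaussian(x, *g_blue) - gaussian(x, *g_red)
    # first sign change of the difference between the two components
    idx = np.where(np.diff(np.sign(diff)) != 0)[0]
    return x[idx[0]] if idx.size else np.nan
```

For two components of equal amplitude and width, the crossing point falls midway between the two means, as expected by symmetry.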
\begin{figure}[t]
\includegraphics[width=\hsize]{figures/B_lbt2.pdf}
\caption{Cosmic evolution of the $(NUV-r)^\textsc{o}$ normalisation. The dots represent the position of the minimum density along the $(NUV-r)^\textsc{o}$ axis across cosmic time, while the bars are defined by the extreme values that delimit the NUVrK green valley. The solid line is the linear fit and the dashed lines represent their mean upper and lower envelopes.\label{B_evol}}
\end{figure}
In Fig.~\ref{B_evol} we show the evolution of $B_1$ as a function of the look-back time ($t_\textsc{l}$). By assuming a linear relation between $B_1$ and cosmic time, we derive $B_1(t_\textsc{l}) = -0.029 \ t_\textsc{l} + 2.368$ in our highest precision redshift range ($0.2 < z < 1.0$). Assuming that $B_2$ evolves as $B_1$, we empirically set $B_2(t_\textsc{l} = 2.5~\mathrm{Gyr}) = 3.3$, which gives $B_2(t_\textsc{l}) = B_1(t_\textsc{l}) + 1.004$. We can then write our selection of quiescent galaxies as
\begin{eqnarray}
\left[ ~( NUV-r )^{\textsc{o}} > 3.372 - 0.029 \ t_{\textsc{l}} ~\right] ~ \cap \nonumber\\
\left[ ~( NUV-r )^{\textsc{o}} > 2.25 \ ( r-K_s )^{\textsc{o}} + 2.368 - 0.029 \ t_\textsc{l} ~\right] ~.
\label{eq_sel}
\end{eqnarray}
All the galaxies that are not selected as quiescent are considered to be star forming. In Fig. \ref{NUVrK_z} the separations between quiescent and star-forming galaxies are shown as white solid lines. We also define the green valley as the region around the minimum $B_1$, out to where the density reaches 10\% of the peak of the red Gaussian, as shown by the white dotted lines. We consider these limits as possible systematic uncertainties when discussing the evolution of the quiescent and star-forming SMFs.
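Assuming look-back times in Gyr and rest-frame colours as inputs, the selection of Eq. \ref{eq_sel} translates directly into a boolean cut; the function and variable names below are illustrative, the coefficients are those of the equation.

```python
import numpy as np

def is_quiescent(nuv_r, r_ks, t_lookback):
    """NUVrK quiescent selection of Eq. (3) (illustrative transcription).

    nuv_r, r_ks : rest-frame (NUV-r) and (r-Ks) colours (scalars or arrays)
    t_lookback  : look-back time in Gyr
    """
    b1 = 2.368 - 0.029 * t_lookback   # B1(t_L), linear fit of the green valley
    b2 = b1 + 1.004                   # B2(t_L), anchored at B2(2.5 Gyr) = 3.3
    return (nuv_r > b2) & (nuv_r > 2.25 * r_ks + b1)
```

Galaxies failing either condition fall on the star-forming side of the diagram.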
\begin{figure}[!]
\includegraphics[width=0.532\columnwidth, trim = 0.5cm 1.25cm 1.1cm 0cm, clip]{figures/NUVrKs_Evol3_0205_errbin0.pdf}
\hspace{-0.15cm}\includegraphics[width=0.474\columnwidth, trim = 1.5cm 1.25cm 1.6cm 0cm, clip]{figures/NUVrKs_Evol3_0508_errbin0.pdf}
\includegraphics[width=0.532\columnwidth, trim = 0.5cm 0cm 1.1cm 0cm, clip]{figures/NUVrKs_Evol3_0811_errbin0.pdf}
\hspace{-0.15cm}\includegraphics[width=0.474\columnwidth, trim = 1.5cm 0cm 1.6cm 0cm, clip]{figures/NUVrKs_Evol3_1115_errbin0.pdf}
\caption{Star-forming and quiescent galaxy selection in the NUVrK diagram. The colour code shows the galaxy density. The averaged colour uncertainties (based on the observed photometric errors) are shown in the upper left corner of each panel. The binning used for the density map is tuned to match the typical uncertainties at $0.2 < z < 0.5$.
The solid line represents the mean selection of quiescent galaxies in a given redshift bin. The dotted lines represent the two extreme selections delimiting the \textit{green valley}. \label{NUVrK_z}}
\end{figure}
Figure \ref{SMFs_fig} presents the global ($black$), star-forming ($blue$), and quiescent ($red$) galaxy SMFs for the two fields separately (W1: dot and W4: cross) in four redshift bins. The sample consists of 481,518 galaxies over 14.43 deg$^2$ in W1 and 268,010 galaxies over 7.96 deg$^2$ in W4.
The error bars shown in the upper sub-panels reflect only the Poissonian contributions.
The SMFs measured in the two fields agree within the errors. In the lower sub-panels, we plot the stellar mass uncertainty by type, $\sigma_M$, defined in Sect. \ref{mass_errors}, as a function of the stellar mass. First, $\sigma_M$ decreases exponentially with stellar mass, as already noted in previous studies \citep[e.g.][]{Grazian2015}; we fitted the $\sigma_M(M_*)$ relation with a power law (Fig. \ref{SMFs_fig} sub-panels, dashed lines). Secondly, the size of our galaxy sample allows for very small relative Poissonian errors down to densities of $\sim 10^{-5}$--$10^{-6}$ Mpc$^{-3}$, even when the sample is split by type and field.
The cosmic variance contribution in the budget of the errors that affects our SMF measurement is therefore expected to be small, as discussed in the next section.
\begin{figure}[!]
\includegraphics[width=0.495\columnwidth, trim = 0cm 1.9cm 2cm 0cm, clip]{figures/SMFs_0205_b.pdf}
\includegraphics[width=0.495\columnwidth, trim = 0cm 1.9cm 2cm 0cm, clip]{figures/SMFs_0508_b.pdf}
\includegraphics[width=0.495\columnwidth, trim = 0cm 0cm 2cm 0cm, clip]{figures/ErrM_02_05.pdf}
\includegraphics[width=0.495\columnwidth, trim = 0cm 0cm 2cm 0cm, clip]{figures/ErrM_05_08.pdf}
\includegraphics[width=0.495\columnwidth, trim = 0cm 1.9cm 2cm 0cm, clip]{figures/SMFs_0811_b.pdf}
\includegraphics[width=0.495\columnwidth, trim = 0cm 1.9cm 2cm 0cm, clip]{figures/SMFs_1115_b.pdf}
\includegraphics[width=0.495\columnwidth, trim = 0cm 0cm 2cm 0cm, clip]{figures/ErrM_08_11.pdf}
\includegraphics[width=0.495\columnwidth, trim = 0cm 0cm 2cm 0cm, clip]{figures/ErrM_11_15.pdf}
\caption{Galaxy SMF in the fields W1 (dots) and W4 (crosses) for the global (black), star-forming (blue), and quiescent (red) populations in four redshift bins (\textit{\textbf{upper}} sub-panels). The error bars reflect only the Poissonian contribution, while the corresponding mass uncertainties are shown in the \textit{\textbf{lower}} sub-panels. Only SMF points above the stellar mass completeness are plotted. \label{SMFs_fig}}
\end{figure}
\subsection{SMF uncertainties}
\label{uncertainties}
In this section we describe the error budget associated with our SMF measurements. All the contributions to the SMF uncertainties are expressed as a function of stellar mass and redshift. In addition to the stellar mass and Poissonian errors already mentioned, the large-scale density inhomogeneities represent a source of uncertainty. This cosmic variance is known to represent a fractional error of 15--25\% at the high-mass end ($M \ge 10^{11} M_{\odot}$) in the COSMOS survey and of around 20--50\% in narrower pencil-beam surveys, where it generally dominates the error budget.
Following the procedure discussed by \citet{Coupon2015}, we investigated the contribution of the cosmic variance in our sample by dividing our survey into N patches of equal area. Since the effective area can change from one patch to another, each patch was weighted according to its unmasked area. For a given observed area, we computed the number density dispersion N times over (N-1) patches by discarding a different patch each time. We then took the mean number density dispersion over the N measurements as our internal estimate of the cosmic variance for a given effective area, and the dispersion around this mean as an error estimate on the cosmic variance.
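A minimal sketch of this leave-one-out estimate, assuming mass- and redshift-selected galaxy counts per patch together with the unmasked patch areas used as weights; note that the Poissonian contribution is not separated out in this simplified version.

```python
import numpy as np

def cosmic_variance_estimate(counts, areas):
    """Internal cosmic-variance estimate from N patches (leave-one-out sketch).

    counts : galaxy counts per patch (already mass/redshift selected)
    areas  : unmasked area of each patch, used as weights
    Returns the mean and scatter of the relative number-density dispersion
    over the N subsamples obtained by discarding one patch at a time.
    """
    counts = np.asarray(counts, float)
    areas = np.asarray(areas, float)
    n = len(counts)
    disps = []
    for k in range(n):                        # discard a different patch each time
        keep = np.arange(n) != k
        dens = counts[keep] / areas[keep]     # densities of the remaining patches
        mean = np.average(dens, weights=areas[keep])
        var = np.average((dens - mean) ** 2, weights=areas[keep])
        disps.append(np.sqrt(var) / mean)     # relative dispersion
    disps = np.array(disps)
    return disps.mean(), disps.std()
```

Identical patches yield zero dispersion, and increasingly inhomogeneous counts raise the estimate.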
\begin{figure}[!]
\includegraphics[width=\hsize, trim = 0cm 1.2cm 0cm 0cm, clip]{figures/cv_a_0205_mos.pdf}
\includegraphics[width=\hsize]{figures/cv_a_1115_mos.pdf}
\caption{Cosmic variance as a function of the effective observed area for three stellar mass bins. The dashed lines correspond to the linear fit of the empirical cosmic variance estimates plotted with pentagons. The squares locate the extrapolated cosmic variance estimate for our entire survey. The solid lines show the corresponding theoretical estimates computed according to \citet{Moster2011}. \label{cv_a}}
\end{figure}
In Fig. \ref{cv_a} we plot our cosmic variance estimate $\sigma_{cv}$ in the redshift bins [0.2, 0.5] and [1.1, 1.5], considering three stellar mass bins from $M_* = 10^{10} M_{\odot}$ up to $M_* = 10^{11.5} M_{\odot}$ (with blue, purple and red dots, respectively) and for mean effective areas ranging from $a \simeq 0.1$ to $a \simeq 2.8$ deg$^2$.
The cosmic variance--area relation is well fitted by a power law, $\sigma_{cv}(a) = 10^{\beta}\, a^{\alpha}$ (shown as dashed lines). To estimate the cosmic variance that affects our entire survey, we extrapolated these relations up to $a = 22$ deg$^2$ (shown as squares).
For comparison, we also show the cosmic variance predicted for the same redshift and stellar mass bins (triangles) by using the code \texttt{getcv} \citep{Moster2011}. Our internal cosmic variance estimate ($\sigma_{cv}$) and the predicted one agree
remarkably well up to our observed areas of $a = 2$ deg$^2$. For larger areas, the two estimates diverge slightly for high-mass ($M_* > 10^{11} M_{\odot}$) galaxies at $z < 0.5$, where we slightly underestimate $\sigma_{cv}$ with respect to the theoretical prediction. We have to stress that the \citet{Moster2011} procedure is optimised for pencil-beam surveys of areas $a < 1$ deg$^2$. At $z > 0.5$, the theoretical estimators always predict a cosmic variance lower than our own extrapolation. By using our internal estimate, we therefore adopt a conservative approach.
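The power-law fit and its extrapolation to the full survey area can be sketched as follows, with placeholder inputs; the fitted $\alpha$ and $\beta$ values of our survey are not reproduced here.

```python
import numpy as np

def extrapolate_sigma_cv(areas, sigma_cv, target_area=22.0):
    """Fit sigma_cv(a) = 10**beta * a**alpha in log-log space and extrapolate.

    areas, sigma_cv : empirical cosmic-variance estimates per effective area
    target_area     : full survey area in deg^2 (22 deg^2 in the text)
    """
    # a straight-line fit in log10-log10 space gives (alpha, beta)
    alpha, beta = np.polyfit(np.log10(areas), np.log10(sigma_cv), 1)
    return 10.0 ** (alpha * np.log10(target_area) + beta)
```

On data that follow an exact power law, the extrapolation recovers the underlying relation to numerical precision.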
Finally, the last source of error that we need to consider is that of the stellar mass uncertainty defined in Sect. \ref{mass_errors}.
To do so, we generated 200 mock catalogues with perturbed stellar masses according to the expected $\sigma_{M}$ (which includes the photometric redshift uncertainties and the photon noise, Eq. \ref{eq_err_M}) and measured the
1$\sigma$ dispersion in the density $\Phi$ of the reconstructed SMFs that we refer to as $\sigma_{\Phi, M}$.
Finally, the total error on the stellar mass function is defined as the quadratic sum of all the contributions discussed above:
\begin{equation}
\sigma_{tot} = \sqrt{ ~\sigma_{cv}^2 + \sigma_{poi}^2+ \sigma_{\Phi, M}^2~ } ~.
\label{eq_err_tot}
\end{equation}
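The Monte-Carlo estimate of $\sigma_{\Phi, M}$ described above can be sketched as follows, assuming Gaussian perturbations of the log stellar masses by $\sigma_M$ and a simple volume-normalised histogram in place of the full $V_{max}$ machinery.

```python
import numpy as np

def smf_mass_error(log_mass, sigma_m, volume, bins, n_mock=200, seed=0):
    """Monte-Carlo estimate of sigma_{Phi,M} (sketch).

    Perturb each log stellar mass by its Gaussian uncertainty sigma_m,
    rebuild the SMF (number density per mass bin per comoving volume)
    n_mock times, and return the 1-sigma dispersion of Phi in each bin.
    """
    rng = np.random.default_rng(seed)
    dlogm = np.diff(bins)
    phis = np.empty((n_mock, len(bins) - 1))
    for i in range(n_mock):
        perturbed = log_mass + rng.normal(0.0, sigma_m)
        counts, _ = np.histogram(perturbed, bins=bins)
        phis[i] = counts / (volume * dlogm)   # Phi in dex^-1 Mpc^-3
    return phis.std(axis=0)
```

With vanishing mass uncertainties all mocks are identical and the dispersion is zero, as expected.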
\subsection{Importance of photometric calibration in large surveys}
As mentioned in Sect. \ref{systematics}, a mean offset of $\sim 0.06$ mag in the optical absolute photometric calibration
(cf. $\Delta mag$ in Table \ref{tab_zero_pt}) can change the stellar mass estimate by 0.1 dex.
In the top panel of Fig. \ref{sys_t07_lens} we show the difference between the two SMFs measured with the T07 and CFHTLenS photometries, $\Delta \Phi_{\textbf{calib}} = \Phi_{LenS} - \Phi_{T07}$. This difference is normalised by the total statistical error discussed in the previous section, $\sigma_{tot}$. In general, $\Delta \Phi_{\textbf{calib}} > \sigma_{tot}$ in our survey (solid lines), which means that the SMF variation induced by the calibration offsets is several times larger than the uncertainty of our SMF. It even reaches 5 $\sigma_{tot}$ at low redshift ($0.2 < z < 0.5$; blue solid line), where the stellar mass is essentially driven by the optical photometry.
In contrast, by considering a subsample of 2 deg$^2$ (dashed lines), we find that $| \Delta \Phi_{\textbf{calib}} | \lesssim \sigma_{tot}^{2deg}$ (green shaded area).
This means that the SMF variation driven by the calibration offsets is smaller than the other uncertainties affecting the SMF in a 2 deg$^2$ survey (i.e. the variation is contained within the error bars). In other words, we cannot see the variation that
is due to the calibration because it is hidden by other sources of uncertainties (Poissonian and cosmic variance).
The systematic differences that are due to the T0007-CFHTLenS photometric offsets can therefore be neglected in a 2 deg$^2$ survey, while in surveys of 20 deg$^2$ and more we reach a regime where the systematic uncertainty due to the photometric calibration dominates the error budget.
\begin{figure}[!t]
\includegraphics[width=\hsize, trim = 0cm 1.9cm 0cm 0cm, clip]{figures/dPhi_LenS-T07_sig_tot.pdf}
\includegraphics[width=\hsize]{figures/dPhi_zbias_sig_tot.pdf}
\caption{Ratio between the systematic stellar mass function difference and the total statistical error, $\Delta \Phi / \sigma_{tot}$, as a function of the stellar mass and in four redshift bins. We consider $\Delta \Phi_{\textbf{calib}}$ (\textit{top} panel) and $\Delta \Phi_{\textbf{zbias}}$ (\textit{bottom} panel), the systematics coming from the absolute photometric calibration and from the photometric redshift bias (cf. Sect. \ref{zp_method}), respectively. The green shaded areas show the region where $| \Delta \Phi | \leq \sigma_{tot}$. \label{sys_t07_lens}}
\end{figure}
For comparison, we also investigated another source of systematic uncertainty: the photometric redshift bias. Using the photo-z bias ($z_{bias} = z_{phot}-z_{spec}$) presented in Sect. \ref{zp_method} (see Fig. \ref{zp_zs}, lower panels), we corrected our photometric redshifts. Instead of using a global correction, we applied a photo-z bias correction for different galaxy types\footnote{We
also checked this by estimating the correction with half of the spectroscopic sample and improving the photo-zs of the other half.}.
Similarly to $\Delta \Phi_{\textbf{calib}}$, the difference between the stellar mass functions computed with the \textit{corrected} photometric redshifts and with the original ones ($\Delta \Phi_{\textbf{zbias}}$) is shown in the lower panel of Fig. \ref{sys_t07_lens}. The effect of the photo-z bias on the SMF measurement is much weaker than the effect of the photometry. The SMF differences induced by the photo-z bias as measured in our sample are largely dominated by the statistical uncertainties in a 2 deg$^2$ subsample. Moreover, in the entire 22 deg$^2$ survey, the difference can only be detected at $z \sim 0.65$, while $|\Delta \Phi_{\textbf{zbias}}| < 2 \ \sigma_{tot}$.
Given the limited amplitude of its effect on our sample, the photo-z bias can be neglected in our study. By contrast, the SMF variations that are due to the difference in photometry stress the need to carefully control the absolute photometric calibration in large surveys. In the present study, the choice of the CFHTLS-T0007 photometry is supported by (1) the SNLS photometric calibration, which is based on a new spectrophotometric standard for high-precision cosmology, and (2) the careful treatment by the Terapix team, which allows the SNLS photometric calibration to be propagated homogeneously over the entire survey.
\section{Evolution of the galaxy stellar mass function and density}
\label{evol}
As shown in Sect. \ref{uncertainties}, the large volume probed by our survey allows us to reduce both the cosmic variance and the Poisson uncertainties. We exploit this large volume to quantify the evolution of the galaxy SMF, especially at the high-mass end, where it is most relevant.
\subsection{Evolution of the SMF}
\label{SMF_evol}
\subsubsection{Comparison of the global SMF with the literature}
\label{SMF_comparison}
\begin{figure*}[!]
\vspace{-0.05cm}\includegraphics[width=0.499\hsize, trim = 0cm 1.5cm 0cm 0cm, clip]{figures/Litt_02_05_b.pdf}
\vspace{-0.05cm}\includegraphics[width=0.499\hsize, trim = 0cm 1.5cm 0cm 0cm, clip]{figures/Litt_05_08_b.pdf}
\includegraphics[width=0.499\hsize, trim = 0cm 0cm 0cm 0.5cm, clip]{figures/ErrSMF_02_05_t.pdf}
\includegraphics[width=0.499\hsize, trim = 0cm 0cm 0cm 0.5cm, clip]{figures/ErrSMF_05_08_t.pdf}
\vspace{-0.05cm}\includegraphics[width=0.499\hsize, trim = 0cm 1.5cm 0cm 0cm, clip]{figures/Litt_08_11_b.pdf}
\vspace{-0.05cm}\includegraphics[width=0.499\hsize, trim = 0cm 1.5cm 0cm 0cm, clip]{figures/Litt_11_15_b.pdf}
\includegraphics[width=0.499\hsize, trim = 0cm 0cm 0cm 0.5cm, clip]{figures/ErrSMF_08_11_t.pdf}
\includegraphics[width=0.499\hsize, trim = 0cm 0cm 0cm 0.5cm, clip]{figures/ErrSMF_11_15_t.pdf}
\caption{Galaxy stellar mass functions (SMF) in four redshift bins. \textit{\textbf{Top sub-panels}}: The SMF measured in the present study (black stars) is compared to previous measurements: \citet{Tomczak2014}, pink squares; \citet{Davidzon2013}, red up triangles; \citet{Moustakas2013}, cyan down triangles; \citet{Ilbert2013}, yellow circles;
and \citet{Santini2012}, green up triangles. The error bars plotted on the measurements reflect different contributions to the SMF uncertainty, depending on the considered study: only Poissonian for Ilbert et al., Moustakas et al., and Davidzon et al.; Poissonian and stellar mass for Santini et al.; and Poissonian, stellar mass, and cosmic variance for Tomczak et al. and the present study. The dashed line shows the SDSS-GALEX local measurement of \citet{Moustakas2013}. \textbf{\textit{Lower sub-panels}}: The corresponding SMF error contributions normalised by the total SMF uncertainty (see Eq. \ref{eq_err_tot}). The blue dash-dotted line represents the Poissonian contribution. The red dashed line and the cyan solid line represent the cosmic variance and the mass uncertainty contributions, respectively. \label{SMF_litt}}
\end{figure*}
In the upper sub-panels of Fig. \ref{SMF_litt}, we compare our global SMF measurements with the literature. Our results
agree well overall with many previous studies, although some differences exist; we discuss these below. The error bars corresponding to our measurement (black) reflect the total error $\sigma_{tot}$ defined in Eq. \ref{eq_err_tot}. In the lower sub-panels, we show the contribution of each error term to the error budget, normalised by the total error $\sigma_{tot}$, as a function of stellar mass. First, we note that the Poissonian error (blue dash-dotted line) represents a minor contribution to the total SMF uncertainty except at the very high-mass end (i.e. above $10^{11.5} M_{\odot}$). Secondly, the contribution of the cosmic variance ($\sigma_{cv}$; red dashed line) is dominant up to stellar masses around the SMF knee ($M_* < 10^{11} M_{\odot}$). Finally, the contribution of the stellar mass uncertainty ($\sigma_{\Phi, M}$; cyan solid line) drives the total uncertainty at the SMF high-mass end. We recall that while the Poissonian uncertainty is always taken into account in the literature, the error bars may reflect different contributions to the SMF uncertainty depending on the study considered in Fig. \ref{SMF_litt} (as specified in the caption).
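The construction of the lower sub-panels can be sketched as follows. This is a minimal illustration, assuming (as is standard) that Eq. \ref{eq_err_tot} combines the three error terms in quadrature; the numerical values below are hypothetical, not taken from our measurements:

```python
import numpy as np

def total_smf_error(sig_poisson, sig_cv, sig_mass):
    """Total SMF uncertainty, assuming a quadrature sum of the three terms."""
    return np.sqrt(sig_poisson**2 + sig_cv**2 + sig_mass**2)

# Hypothetical per-bin uncertainties (in dex) for three stellar mass bins:
# low mass, around the knee, and very high mass.
sig_p  = np.array([0.005, 0.01, 0.08])   # Poissonian
sig_cv = np.array([0.05,  0.04, 0.03])   # cosmic variance
sig_m  = np.array([0.01,  0.02, 0.12])   # stellar mass uncertainty

sig_tot = total_smf_error(sig_p, sig_cv, sig_m)
# Fractional contributions, as plotted in the lower sub-panels
frac = {name: s / sig_tot for name, s in
        [("poisson", sig_p), ("cv", sig_cv), ("mass", sig_m)]}
```

With these illustrative numbers, cosmic variance dominates the first two bins while the stellar mass uncertainty dominates the last one, mirroring the behaviour described above.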
The comparison with \citet{Davidzon2013} is straightforward since their observations were taken in the same two fields of the CFHTLS survey, covering an effective area of 5.34 and 4.97 deg$^2$ in W1 and W4, respectively.
The authors derived the SMF between $z = 0.5$ and $z = 1.3$ using the VIPERS-PDR1 dataset ($\sim$ 50,000 galaxies), that is,~the main spectroscopic sample used to calibrate our redshifts (Sect.~\ref{zp_accu}). The work of \citet{Davidzon2013} clearly illustrates the advantages of using spectroscopic redshifts (e.g.~the easier removal of stellar interlopers and QSOs). However, estimating the SMF from spectroscopic data raises some difficulties that need to be addressed, such
as the statistical weighting that accounts for the spectroscopic sampling rate \citep[see][for more details about how these weights are computed in VIPERS]{Garilli2014}. We observe good agreement between the two SMF estimates, especially at $M_* > 10^{11} M_{\odot}$.
The statistical uncertainties are very low in both VIPERS and our analysis and, in addition, the two surveys cover almost the same area. Any difference is likely due to some combination of the photometric redshift uncertainty of our sample, the spectroscopic incompleteness affecting VIPERS, the adopted SED fitting method, or the photometric calibration used in VIPERS (T0005) and in our survey (T0007). However, the only significant discrepancy is observed close to the stellar mass completeness limit of VIPERS, where the measurements of \citet{Davidzon2013} are $\lesssim 0.2$ dex lower. This difference at low masses could be due to residual incompleteness related to their $i$-band selection, while our sample is $K_s$-band selected.
\citet{Moustakas2013} also measured the SMF by relying on the spectroscopic redshift sample of PRIMUS ($\sim$ 40,000 galaxies between $z=0.2$ and 1, covering $\sim5.5$ deg$^2$ over five fields). In general, we observe that the SMF measurements from PRIMUS form the upper envelope of the literature. Their SMF estimate is significantly above the others at $0.5<z<0.8$.
In the range $10^{10.5}<M_* < 10^{11}\,M_{\odot}$, the difference reaches 0.2 dex, while the authors predict that the cosmic variance should not affect the measurement by more than 10\%; a larger offset is observed at $M_*>10^{11.5} M_{\odot}$, which could be mainly explained by the cosmic variance affecting their measurement, estimated to be very strong at high mass\footnote{\citet{Moustakas2013} estimated $\sigma_\mathrm{cv} = 0.1-1.4$ for $\log(M_*/M_{\odot}) > 11.5$ at $0.5<z<0.8$.}. In the next redshift bin ($0.8<z<1.1$), the SMF of \citet{Moustakas2013} is also significantly higher than ours. The reason for the discrepancy may be linked to the different recipes (dust models, template libraries, etc.) adopted by \citet{Moustakas2013} in their SED fitting procedure \citep[see also][for a discussion of the effect of different SED fitting methods on the SMF]{Davidzon2013}. We compared their stellar masses in the XMM-LSS field, which overlaps the W1 field. We found that the PRIMUS masses are higher than ours by
$0.17 \pm 0.09$ dex at $0.2 < z < 0.5$, $0.15 \pm 0.08$ dex at $0.5 < z < 0.8$ and $0.12 \pm 0.1$ dex at $0.8 < z < 1$\footnote{Even by using the same photometry as \citet{Moustakas2013}, i.e.~including GALEX, CFHTLS, and SWIRE (3.6 and 4.5$\mu$m), similar differences in the stellar masses are observed.}. This could explain part of the observed shift in the SMFs. It is worth noting that the two largest spectroscopic surveys so far, VIPERS and PRIMUS, lead to the largest difference in the SMF measurements. This highlights the great effect of systematic uncertainties in the latest large surveys \citep[see also][]{Coupon2015}.
The comparison of our measurements with deep photometric surveys
shows that our results agree well with those of \citet{Tomczak2014} and \citet{Ilbert2013}, down to the lowest stellar masses we can explore. Their analyses were based on much deeper data, which confirms the estimate of our lower mass limits. Only in the redshift bin $0.8 < z < 1.1$ do we note a significant difference with \citet{Ilbert2013} at the high-mass end of the SMF. This can be explained by the well-known over-density in the COSMOS field \citep{Kovac2010a, Bielby2012}\footnote{The SMFs at $0.8 < z < 1.1$ are consistent with each other if $\sigma_\mathrm{cv}$ computed by \citet{Ilbert2013} is included in the error budget of their SMF ($\sigma_\mathrm{cv} = 0.1-0.25$ for $\log M_*/M_{\odot} = 11-12$).}.
Finally, in all panels of Fig.~\ref{SMF_litt} we show the local ($z\sim 0.1$) GALEX-SDSS SMF from \citet{Moustakas2013} as a dashed line. A small but clear and progressive deviation of the SMF from this local measurement with increasing redshift is apparent. Far from evident in previous studies, the trend observed at high mass is confirmed and quantified in Sects. \ref{quant_SMF_evol} and \ref{dens_evol}, and is discussed in Sect. \ref{high_mass_evol}.
\subsubsection{Fitting the global, star-forming, and quiescent SMF}
\label{fitting}
\begin{figure*}[!]
\vspace{-0.05cm}\includegraphics[width=0.499\hsize, trim = 0.5cm 1.5cm 1.1cm 0cm, clip]{figures/SMF_fit_0205_ref.pdf}
\vspace{-0.05cm}\includegraphics[width=0.499\hsize, trim = 0.5cm 1.5cm 1.1cm 0cm, clip]{figures/SMF_fit_0508_ref.pdf}
\vspace{-0.05cm}\includegraphics[width=0.499\hsize, trim = 0.5cm 0cm 1.1cm 0cm, clip]{figures/SMF_fit_0811_ref.pdf}
\vspace{-0.05cm}\includegraphics[width=0.499\hsize, trim = 0.5cm 0cm 1.1cm 0cm, clip]{figures/SMF_fit_1115_ref.pdf}
\caption{Stellar mass function for all (black), star-forming (blue), and quiescent (red) galaxies in four redshift bins. The solid lines show the best parametric form of our SMF measurements (stars), while the shaded areas represent the systematic uncertainty due to the SF/Q separation (cf. Sect. \ref{smf_measur}). The dashed lines show the parametric forms obtained if a single-Schechter function is assumed to fit the SF population. The measurements of \citet[][squares]{Tomczak2014} and \citet[][circles]{Ilbert2013} are plotted for comparison. \label{SMF_fitt}}
\end{figure*}
To quantify the evolution of the SMF, we adopted the parametrisation proposed by \citet{Schechter1976}. As already noted, the total stellar mass function is better fitted with a double Schechter function \citep{Baldry2008,Pozzetti2010,Ilbert2013,Tomczak2014}, defined as
\begin{equation}
\Phi(M_*) \ dM_* = e^{-\frac{M_*}{\mathcal{M}^\star}} \ \left[ \Phi^\star_1 \left(\dfrac{M_*}{\mathcal{M}^\star} \right) ^{\alpha_1} + \Phi^\star_2 \left(\dfrac{M_*}{\mathcal{M}^\star} \right) ^{\alpha_2} \right] \ \dfrac{dM_*}{\mathcal{M}^\star} ~,
\label{eq_double_sch}
\end{equation}
where $\mathcal{M}^\star$ is the characteristic stellar mass, $\Phi^\star_1$ and $\Phi^\star_2$ are the normalisation factors, and $\alpha_1$ and $\alpha_2$ are the power-law slopes satisfying $\alpha_2 < \alpha_1$.
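For reference, Eq. \ref{eq_double_sch} expressed per dex of stellar mass (i.e. after multiplying by $\ln 10 \, M_*$) can be evaluated with the following short sketch; the parameter values are the total-population best-fit values at $0.2 < z < 0.5$ from Table \ref{table_bestfit}, used here purely for illustration:

```python
import numpy as np

def double_schechter(logM, logMstar, logPhi1, alpha1, logPhi2, alpha2):
    """Double-Schechter SMF, per dex of stellar mass [Mpc^-3 dex^-1]."""
    x = 10.0**(logM - logMstar)          # M_* / Mstar
    return np.log(10.0) * np.exp(-x) * x * (
        10.0**logPhi1 * x**alpha1 + 10.0**logPhi2 * x**alpha2)

# Total-population best-fit values at 0.2 < z < 0.5 (illustration only)
params = dict(logMstar=10.83, logPhi1=-2.63, alpha1=-0.95,
              logPhi2=-4.01, alpha2=-1.82)
phi = double_schechter(np.linspace(9.0, 12.0, 7), **params)
```

The steeper $\alpha_2$ term only takes over well below the knee, producing the low-mass upturn, while the exponential factor drives the cutoff above $\mathcal{M}^\star$.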
It has been shown that the massive end of the stellar mass function can be significantly affected by the stellar mass uncertainty \citep{Caputi2011} through the so-called Eddington bias \citep{Eddington1913}. We corrected the SMF for the Eddington bias during the fitting process by convolving the SMF parametric form with the stellar mass uncertainty $\sigma_M$\footnote{Only statistical uncertainties (Poisson and cosmic variance) are considered during the fitting process, while the mass uncertainty is already taken into account in the convolution of the SMF.} following the procedure described in \citet{Ilbert2013}. These authors estimated $\sigma_M$ for each redshift bin, but \citet{Grazian2015} have pointed out the importance of using an estimate of $\sigma_M$ that depends on the stellar mass in addition to the redshift\footnote{By considering the mass dependency of $\sigma_M$, we find that the deconvolution has a weaker effect than if we use the mean estimate of $\sigma_M$ at a given redshift.}. We used the $\sigma_M(M_*, z)$ estimate described in Sect. \ref{smf_measur} (cf. Fig. \ref{SMFs_fig}, sub-panels).
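The principle of this correction can be illustrated with a minimal sketch: the model SMF is convolved with Gaussian stellar-mass errors whose width $\sigma_M$ may vary with mass, and it is this convolved form that is compared to the observed counts. This is only an illustration of the principle (assuming Gaussian scatter in $\log M_*$, with a hypothetical $\sigma_M(M_*)$ law), not the exact implementation of \citet{Ilbert2013}:

```python
import numpy as np

def convolve_with_mass_errors(logM, phi_model, sigma_M):
    """Redistribute each bin of a model SMF (per dex) with a Gaussian
    kernel of width sigma_M(logM); total counts are conserved."""
    phi_conv = np.zeros_like(phi_model)
    for i, mu in enumerate(logM):
        kernel = np.exp(-0.5 * ((logM - mu) / sigma_M[i])**2)
        kernel /= kernel.sum()               # normalise each kernel
        phi_conv += phi_model[i] * kernel
    return phi_conv

logM = np.linspace(8.0, 12.5, 181)
x = 10.0**(logM - 10.8)                      # single-Schechter toy model
phi = 10.0**(-2.6) * np.log(10.0) * x**0.05 * np.exp(-x)
sigma = 0.04 * (1.0 + (logM - 8.0) / 4.5)    # hypothetical sigma_M(M*)
phi_obs = convolve_with_mass_errors(logM, phi, sigma)
```

The scatter inflates the exponential tail (`phi_obs` exceeds `phi` at the high-mass end), which is precisely why the fitted parameters must be deconvolved before interpretation.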
Figure \ref{SMF_fitt} shows the SMF of the global (black stars), star-forming (blue stars), and quiescent (red stars) populations. We included the SMFs measured by \citet{Tomczak2014}
and \citet{Ilbert2013}, who probed the very low-mass populations for SF and Q galaxies. A simple Schechter function (i.e. $\Phi^\star_2 = 0$ in Eq. \ref{eq_double_sch}) seems to be sufficient to fit the star-forming contribution above the stellar mass completeness limit (blue dashed lines). However, as already shown by several studies based on deeper surveys \citep[see e.g.][]{Drory2009,Ilbert2013,Tomczak2014}, the SMF of star-forming galaxies reveals an upturn at low mass and is better fitted with a double-Schechter function \citep[][Sect. 3.2]{Tomczak2014}. Given our stellar mass completeness limit, we can only constrain the low-mass end of the star-forming SMF at $z < 0.5$. Therefore we set the low-mass components of the double-Schechter function to the values found at $0.2 < z < 0.5$, $\alpha_{2\ \textsc{sf}} = -1.49$ and $\log(\Phi^\star_{2\ \textsc{sf}}) = -3.24$. Our choice is supported by the lack of evolution that is observed for $\alpha_2$ and $\Phi^\star_2$ by \citet{Ilbert2013} and \citet{Tomczak2014}. In addition, our values agree quite well with \citet{Tomczak2014}, who probed the SMF at lower stellar mass. The resulting double-Schechter function is plotted in Fig. \ref{SMF_fitt} (blue solid line).
For the quiescent galaxies, we clearly need a double-Schechter function to fit the SMF at low redshift (red stars in the upper left panel of Fig. \ref{SMF_fitt}). The upturn at low mass is slightly more pronounced in our measurement than in the literature, regardless of the position of the quiescent galaxy selection in the NUVrK green valley (cf. Sect. \ref{smf_measur}). In other words, the low-mass slope that we measure does not depend significantly on our selection of quiescent galaxies. We also verified that we find the same shape when we select the quiescent galaxies based on their specific star formation rate (sSFR), using $\mathrm{sSFR} < 10^{-11}\,\mathrm{yr}^{-1}$ \citep[see][for more details on this threshold]{Ilbert2010}. We find the upturn position around $M_* \simeq 10^{9.5} M_{\odot}$, in good agreement with previous measurements, that is, $M_* \simeq 10^{9.2} M_{\odot}$, $M_* \simeq 10^{9.4} M_{\odot}$, and $M_* \simeq 10^{9.6} M_{\odot}$ for \citet{Tomczak2014}, \citet{Ilbert2013}, and \citet{Drory2009}, respectively.
Even though several deep surveys show that the low-mass upturn of the quiescent SMF is still present at $z > 0.5$, using a single-Schechter
function is sufficient given our survey stellar mass limit. The discrepancies between our star-forming and quiescent SMF and the literature are mainly explained by the different criteria used to separate quiescent and star-forming galaxies. If we include the galaxies lying in the green valley in the quiescent sample (i.e. by considering the upper or lower envelopes of the quiescent
or star-forming SMF), our measurements of the SMF agree with those of \citet{Tomczak2014} and \citet{Ilbert2013} at $z < 1.1$. At higher redshift, including the green valley in the quiescent \textit{locus} of the NUVrK diagram is not enough to reconcile the estimates.
We cannot exclude the possibility that we may have missed some fainter red galaxies as a result of the $gri$-detection described in Sect. \ref{final_cat}. However, this effect should be limited since we corrected for this incompleteness according to the weight colour map shown in Fig. \ref{weight_map}, as previously explained.
To add an independent validation of our procedure of correcting for this incompleteness, we used the CFHTLS-Deep/WIRcam Deep Survey \citep[WIRDS;][]{Bielby2012}, which overlaps our CFHTLS-Wide/$K_s<22$ survey. We estimated the completeness of the $gri$ selection as a function of redshift and stellar masses separately for quiescent and star-forming galaxies. Below $z<1.1$, we did not find any completeness problems, regardless of galaxy type or stellar mass range. At $1.1 < z < 1.5$, the quiescent sample is $>85\%$ complete after applying our weighting scheme, and applying a correction based on the WIRDS sample would shift the density by less than 0.1 dex, which is well inside our uncertainties. It appears that star-forming galaxies can also be affected by incompleteness around $M_* \sim 10^{10.9} M_{\odot}$ (probably because of dust extinction in massive galaxies at high redshift)\footnote{A similar trend for extremely dusty star-forming galaxies is visible in Fig. 8 of \citet{Ilbert2010}.}. However, comparison with the literature suggests that our SF sample does not significantly suffer from this incompleteness (Fig. \ref{SMF_fitt}).
Moreover, we highlight that our total SMF agrees with \citet{Tomczak2014} at $M_* > M_{lim}$, while our SMF estimate for SF galaxies is systematically higher (by 0.02 dex at $M_* = 10^{10.5} M_{\odot}$ and 0.08 dex at $M_* = 10^{10.75} M_{\odot}$). This SMF difference for SF galaxies would allow a transfer (between the SF and Q populations) that is sufficient to reconcile our SMF estimate for quiescent galaxies with the estimate of these authors. This stresses the sensitivity of the SMF to the Q/SF selection\footnote{We recall that \citet{Ilbert2013} and \citet{Tomczak2014} used a constant selection of quiescent galaxies at $z < 1.5$, while we used a time-dependent selection (cf. Sect. \ref{smf_measur}, Eq. \ref{eq_sel}).}.
Since the low-mass end of the global SMF is strongly dominated by the star-forming population at $z > 0.5$, we assumed the same parametrisation of $\alpha_2$ and $\Phi^\star_2$ (i.e. $\alpha_2 = \alpha_{2\ \textsc{sf}}$ and $\Phi^\star_2 = \Phi^\star_{2\ \textsc{sf}}$). We derived two parametric forms of the global SMF, depending on whether the double or the simple Schechter form of the star-forming SMF is considered, as shown in Fig. \ref{SMF_fitt} (solid and dashed black lines, respectively).
The corresponding best-fit parameters are reported in Tables \ref{table_bestfit} and \ref{table_single_schech_SF}, respectively\footnote{All the parameters are given after correction for the Eddington bias (cf. Sect. \ref{fitting}).}.
\subsubsection{Quantifying the SMF evolution}
\label{quant_SMF_evol}
In Fig. \ref{evolMF} we plot the evolution of the SMF for all (left panel), star-forming (middle panel), and quiescent galaxies (right panel). Each redshift bin is coded with a different colour. As in Fig. \ref{SMF_fitt}, the shaded areas show the systematic uncertainty induced by the star-forming or quiescent classification in the NUVrK diagram, while the solid lines represent the parametric form of reference. The arrows show the position of the corresponding characteristic mass $\mathcal{M}^\star$.
\begin{figure*}[!]
\includegraphics[width=0.3525\hsize, trim = 0.3cm 0cm 0.8cm 0cm, clip]{figures/T07_SMF_Evol_Fix3_Mlim.pdf}
\includegraphics[width=0.32\hsize, trim = 1.6cm 0cm 0.8cm 0cm, clip]{figures/T07_SMF_SF_Evol_Fix3_Mlim.pdf}
\includegraphics[width=0.32\hsize, trim = 1.6cm 0cm 0.8cm 0cm, clip]{figures/T07_SMF_Q_Evol_Fix3_Mlim.pdf}
\caption{Evolution of the SMF for the global (left), star-forming (middle), and quiescent (right) populations. The solid lines represent the best SMF parametric form at each redshift, while the arrows show the corresponding $\mathcal{M}^\star$ parameter positions. \textbf{\textit{Left}} and \textbf{\textit{middle}} panels: The dashed lines show the best fit with a single-Schechter function. \textbf{\textit{Middle}} and \textbf{\textit{right}} panels: The shaded areas represent the systematic uncertainties that are due to the separation into star-forming or quiescent galaxies, depending on whether we include the galaxies in transition (cf. Sect. \ref{smf_measur}). \label{evolMF}}
\end{figure*}
\begin{table*}[!]
\begin{center}
\caption{Best-fit parameters of the SMF parametric form for the total, quiescent, and star-forming populations. \label{table_bestfit}}
\begin{tabular}{l*{9}{c}}
\hline \\
\multicolumn{9}{c}{Quiescent} \\
\noalign{\smallskip}
\hline \\[-2mm]
\hline \\
Redshift & $N_{gal}$ & $\log(M_{lim})$ $^{(a)}$ & $\log(\mathcal{M}^\star)$ $^{(a)}$ & $\log(\Phi^\star_1)$ $^{(b)}$ & $\alpha_1$ & $\log(\Phi^\star_2)$ $^{(b)}$ & $\alpha_2$ & $\log(\rho_*)$ $^{(c)}$\\[1mm]
\hline \\[-2mm]
$0.2 < z < 0.5$ & 29078 & 8.75 & 10.78$^{{\rm + 0.02}}_{{\rm -0.02}}$ & -2.86$^{{\rm + 0.02}}_{{\rm -0.02}}$ & -0.44$^{{\rm + 0.05}}_{{\rm -0.04}}$ & -5.88$^{{\rm + 0.21}}_{{\rm -0.42}}$ & -2.43$^{{\rm + 0.20}}_{{\rm -0.21}}$ & 7.88$^{{\rm + 0.03}}_{{\rm -0.03}}$ \\[1mm]
$0.5 < z < 0.8$ & 38708 & 9.50 & 10.79$^{{\rm + 0.01}}_{{\rm -0.01}}$ & -2.97$^{{\rm + 0.01}}_{{\rm -0.01}}$ & -0.38$^{{\rm + 0.03}}_{{\rm -0.03}}$ & ~~~~~~~~ & ~~~~~~~~ & 7.76$^{{\rm + 0.02}}_{{\rm -0.02}}$ \\[1mm]
$0.8 < z < 1.1$ & 43421 & 9.97 & 10.68$^{{\rm + 0.02}}_{{\rm -0.02}}$ & -2.94$^{{\rm + 0.02}}_{{\rm -0.02}}$ & -0.03$^{{\rm + 0.10}}_{{\rm -0.10}}$ & ~~~~~~~~ & ~~~~~~~~ & 7.73$^{{\rm + 0.03}}_{{\rm -0.03}}$ \\[1mm]
$1.1 < z < 1.5$ & 15567 & 10.28 & 10.61$^{{\rm + 0.02}}_{{\rm -0.02}}$ & -3.60$^{{\rm + 0.03}}_{{\rm -0.04}}$ & 1.04$^{{\rm + 0.15}}_{{\rm -0.14}}$ & ~~~~~~~~ & ~~~~~~~~ & 7.31$^{{\rm + 0.03}}_{{\rm -0.03}}$ \\[1mm]
\hline \\
\end{tabular}
\begin{tabular}{l*{9}{c}}
\multicolumn{9}{c}{Star-forming} \\
\noalign{\smallskip}
\hline \\[-2mm]
\hline \\
Redshift & $N_{gal}$ & $\log(M_{lim})$ $^{(a)}$ & $\log(\mathcal{M}^\star)$ $^{(a)}$ & $\log(\Phi^\star_1)$ $^{(b)}$ & $\alpha_1$ & $\log(\Phi^\star_2)$ $^{(b)}$ & $\alpha_2$ & $\log(\rho_*)$ $^{(c)}$\\[1mm]
\hline \\[-2mm]
$0.2 < z < 0.5$ & 143500 & 8.75 & 10.68$^{{\rm + 0.04}}_{{\rm -0.04}}$ & -2.89$^{{\rm + 0.09}}_{{\rm -0.11}}$ & -0.82$^{{\rm + 0.30}}_{{\rm -0.23}}$ & -3.24$^{{\rm + 0.22}}_{{\rm -0.48}}$ & -1.49$^{{\rm + 0.09}}_{{\rm -0.18}}$ & 7.98$^{{\rm + 0.03}}_{{\rm -0.03}}$ \\[1mm]
$0.5 < z < 0.8$ & 155173 & 9.50 & 10.67$^{{\rm + 0.01}}_{{\rm -0.01}}$ & -2.85$^{{\rm + 0.02}}_{{\rm -0.03}}$ & -0.64$^{{\rm + 0.03}}_{{\rm -0.03}}$ & -3.24~~~~~~ & -1.49~~~~~~ & 8.00$^{{\rm + 0.01}}_{{\rm -0.01}}$ \\[1mm]
$0.8 < z < 1.1$ & 114331 & 9.97 & 10.64$^{{\rm + 0.01}}_{{\rm -0.01}}$ & -2.78$^{{\rm + 0.02}}_{{\rm -0.02}}$ & -0.36$^{{\rm + 0.05}}_{{\rm -0.05}}$ & -3.24~~~~~~ & -1.49~~~~~~ & 8.01$^{{\rm + 0.01}}_{{\rm -0.01}}$ \\[1mm]
$1.1 < z < 1.5$ & 73600 & 10.28 & 10.63$^{{\rm + 0.02}}_{{\rm -0.02}}$ & -2.97$^{{\rm + 0.02}}_{{\rm -0.02}}$ & 0.02$^{{\rm + 0.06}}_{{\rm -0.06}}$ & -3.24~~~~~~ & -1.49~~~~~~ & 7.92$^{{\rm + 0.01}}_{{\rm -0.01}}$ \\[1mm]
\hline \\
\end{tabular}
\begin{tabular}{l*{9}{c}}
\multicolumn{9}{c}{Total} \\
\noalign{\smallskip}
\hline \\[-2mm]
\hline \\
Redshift & $N_{gal}$ & $\log(M_{lim})$ $^{(a)}$ & $\log(\mathcal{M}^\star)$ $^{(a)}$ & $\log(\Phi^\star_1)$ $^{(b)}$ & $\alpha_1$ & $\log(\Phi^\star_2)$ $^{(b)}$ & $\alpha_2$ & $\log(\rho_*)$ $^{(c)}$\\[1mm]
\hline \\[-2mm]
$0.2 < z < 0.5$ & 166658 & 8.75 & 10.83$^{{\rm + 0.02}}_{{\rm -0.03}}$ & -2.63$^{{\rm + 0.03}}_{{\rm -0.03}}$ & -0.95$^{{\rm + 0.10}}_{{\rm -0.08}}$ & -4.01$^{{\rm + 0.28}}_{{\rm -1.14}}$ & -1.82$^{{\rm + 0.18}}_{{\rm -0.22}}$ & 8.23$^{{\rm + 0.02}}_{{\rm -0.02}}$ \\[1mm]
$0.5 < z < 0.8$ & 185245 & 9.50 & 10.76$^{{\rm + 0.01}}_{{\rm -0.01}}$ & -2.66$^{{\rm + 0.02}}_{{\rm -0.02}}$ & -0.57$^{{\rm + 0.03}}_{{\rm -0.03}}$ & -3.24~~~~~~ & -1.49~~~~~~ & 8.20$^{{\rm + 0.01}}_{{\rm -0.01}}$ \\[1mm]
$0.8 < z < 1.1$ & 153881 & 9.97 & 10.68$^{{\rm + 0.02}}_{{\rm -0.02}}$ & -2.57$^{{\rm + 0.03}}_{{\rm -0.03}}$ & -0.33$^{{\rm + 0.08}}_{{\rm -0.08}}$ & -3.24~~~~~~ & -1.49~~~~~~ & 8.19$^{{\rm + 0.02}}_{{\rm -0.03}}$ \\[1mm]
$1.1 < z < 1.5$ & 85722 & 10.28 & 10.66$^{{\rm + 0.02}}_{{\rm -0.02}}$ & -2.88$^{{\rm + 0.01}}_{{\rm -0.01}}$ & 0.19$^{{\rm + 0.07}}_{{\rm -0.07}}$ & -3.24~~~~~~ & -1.49~~~~~~ & 8.01$^{{\rm + 0.02}}_{{\rm -0.02}}$ \\[1mm]
\hline \\[-2mm]
\multicolumn{9}{l}{\begin{footnotesize} $^{(a)}$ $M_{\odot}$ \end{footnotesize}} \\
\multicolumn{9}{l}{\begin{footnotesize} $^{(b)}$ $dM_*^{-1}$ $Mpc^{-3}$ \end{footnotesize}} \\
\multicolumn{9}{l}{\begin{footnotesize} $^{(c)}$ $M_{\odot}$ $Mpc^{-3}$ \end{footnotesize}} \\
\end{tabular}
\end{center}
\end{table*}
\begin{table*}[!]
\begin{center}
\caption{Best-fit parameters of the SMF parametric form for the total and star-forming populations if a single-Schechter function
is assumed to fit the SMF of star-forming galaxies. \label{table_single_schech_SF}}
\begin{tabular}{l*{9}{c}}
\hline \\
\multicolumn{9}{c}{Star-forming} \\
\noalign{\smallskip}
\hline \\[-2mm]
\hline \\
Redshift & $N_{gal}$ & $\log(M_{lim})$ $^{(a)}$ & $\log(\mathcal{M}^\star)$ $^{(a)}$ & $\log(\Phi^\star_1)$ $^{(b)}$ & $\alpha_1$ & $\log(\Phi^\star_2)$ $^{(b)}$ & $\alpha_2$ & $\log(\rho_*)$ $^{(c)}$\\[1mm]
\hline \\[-2mm]
$0.2 < z < 0.5$ & 143500 & 8.75 & 10.79$^{{\rm + 0.01}}_{{\rm -0.01}}$ & -2.89$^{{\rm + 0.02}}_{{\rm -0.02}}$ & -1.29$^{{\rm + 0.01}}_{{\rm -0.01}}$ & ~~~~~~~~~~~~~~~ & ~~~~~~~~~~~~~~~ & 7.98$^{{\rm + 0.02}}_{{\rm -0.02}}$ \\[1mm]
$0.5 < z < 0.8$ & 155173 & 9.50 & 10.78$^{{\rm + 0.01}}_{{\rm -0.01}}$ & -2.83$^{{\rm + 0.02}}_{{\rm -0.02}}$ & -1.18$^{{\rm + 0.02}}_{{\rm -0.02}}$ & ~~~~~~~~~~~~~~~ & ~~~~~~~~~~~~~~~ & 7.99$^{{\rm + 0.01}}_{{\rm -0.01}}$ \\[1mm]
$0.8 < z < 1.1$ & 114331 & 9.97 & 10.72$^{{\rm + 0.01}}_{{\rm -0.01}}$ & -2.70$^{{\rm + 0.02}}_{{\rm -0.02}}$ & -0.88$^{{\rm + 0.04}}_{{\rm -0.04}}$ & ~~~~~~~~~~~~~~~ & ~~~~~~~~~~~~~~~ & 7.99$^{{\rm + 0.02}}_{{\rm -0.02}}$ \\[1mm]
$1.1 < z < 1.5$ & 73600 & 10.28 & 10.73$^{{\rm + 0.02}}_{{\rm -0.02}}$ & -2.83$^{{\rm + 0.02}}_{{\rm -0.02}}$ & -0.71$^{{\rm + 0.07}}_{{\rm -0.04}}$ & ~~~~~~~~~~~~~~~ & ~~~~~~~~~~~~~~~ & 7.85$^{{\rm + 0.02}}_{{\rm -0.02}}$ \\[1mm]
\hline \\
\end{tabular}
\begin{tabular}{l*{9}{c}}
\multicolumn{9}{c}{Total} \\
\noalign{\smallskip}
\hline \\[-2mm]
\hline \\
Redshift & $N_{gal}$ & $\log(M_{lim})$ $^{(a)}$ & $\log(\mathcal{M}^\star)$ $^{(a)}$ & $\log(\Phi^\star_1)$ $^{(b)}$ & $\alpha_1$ & $\log(\Phi^\star_2)$ $^{(b)}$ & $\alpha_2$ & $\log(\rho_*)$ $^{(c)}$\\[1mm]
\hline \\[-2mm]
$0.2 < z < 0.5$ & 166658 & 8.75 & 10.83$^{{\rm + 0.02}}_{{\rm -0.03}}$ & -2.63$^{{\rm + 0.03}}_{{\rm -0.03}}$ & -0.95$^{{\rm + 0.10}}_{{\rm -0.08}}$ & -4.01$^{{\rm + 0.28}}_{{\rm -1.14}}$ & -1.82$^{{\rm + 0.18}}_{{\rm -0.22}}$ & 8.23$^{{\rm + 0.02}}_{{\rm -0.02}}$ \\[1mm]
$0.5 < z < 0.8$ & 185245 & 9.50 & 10.79$^{{\rm + 0.02}}_{{\rm -0.02}}$ & -2.99$^{{\rm + 0.05}}_{{\rm -0.06}}$ & -0.40$^{{\rm + 0.07}}_{{\rm -0.07}}$ & -2.83~~~~~~ & -1.18~~~~~~ & 8.19$^{{\rm + 0.01}}_{{\rm -0.01}}$ \\[1mm]
$0.8 < z < 1.1$ & 153881 & 9.97 & 10.73$^{{\rm + 0.03}}_{{\rm -0.04}}$ & -2.99$^{{\rm + 0.09}}_{{\rm -0.11}}$ & -0.33$^{{\rm + 0.08}}_{{\rm -0.08}}$ & -2.70~~~~~~ & -0.88~~~~~~ & 8.17$^{{\rm + 0.02}}_{{\rm -0.02}}$ \\[1mm]
$1.1 < z < 1.5$ & 85722 & 10.28 & 10.68$^{{\rm + 0.10}}_{{\rm -0.05}}$ & -3.40$^{{\rm + 0.08}}_{{\rm -0.32}}$ & 0.64$^{{\rm + 0.27}}_{{\rm -0.73}}$ & -2.83~~~~~~ & -0.71~~~~~~ & 7.96$^{{\rm + 0.02}}_{{\rm -0.02}}$ \\[1mm]
\hline \\[-2mm]
\multicolumn{9}{l}{\begin{footnotesize} $^{(a)}$ $M_{\odot}$ \end{footnotesize}} \\
\multicolumn{9}{l}{\begin{footnotesize} $^{(b)}$ $dM_*^{-1}$ $Mpc^{-3}$ \end{footnotesize}} \\
\multicolumn{9}{l}{\begin{footnotesize} $^{(c)}$ $M_{\odot}$ $Mpc^{-3}$ \end{footnotesize}} \\
\end{tabular}
\end{center}
\end{table*}
As mentioned above, the galaxy population at low masses is strongly dominated by its star-forming component, and the global SMF evolution is then mainly driven by the star-forming population. We note
that the evolution of the global SMF is characterised by a $\sim 0.2$ dex increase of $\mathcal{M}^\star$ (see the arrows in the left panel of Fig. \ref{evolMF}). However, there is almost no evolution of the star-forming population (middle panel of Fig. \ref{evolMF}): the characteristic mass is nearly constant, with $\log( \mathcal{M}^\star _{\textsc{sf}} / M_{\odot}) = 10.66^{+ 0.02}_{- 0.03}$ in the redshift range $0.2 < z < 1.5$, while the low-mass slope remains very stable, as discussed previously. This confirms that the probability of finding a star-forming galaxy declines exponentially above a stellar mass $\mathcal{M}^\star _{\textsc{sf}}$ that is constant with time, and it stresses that star formation seems to be impeded beyond this stellar mass, independent of redshift, up to $z = 1.5$. This is one of the cornerstones of the empirical description proposed by \citet{Peng2010}, in which the evolution of high-mass galaxies is dominated by internal quenching mechanisms (named \textit{mass quenching} by the authors). \citet{Peng2010} suggested that the efficiency of mass quenching is proportional to SFR/$\mathcal{M}^\star$ to keep the SMF of star-forming galaxies constant with redshift.
The right panel of Fig. \ref{evolMF} shows that the main contribution to the evolution of the total SMF comes from the build-up of the quiescent population. In addition to galaxies that are quenched around $\mathcal{M}^\star_{\textsc{sf}}$ by mass quenching, the SMF evolution of quiescent galaxies reveals an increase of low-mass galaxies with time, as shown in \citet{Ilbert2010}. In particular, the SMF upturn that builds up at $z < 0.5$ suggests that the star formation of $M_* < 10^{9 - 9.5} M_{\odot}$ galaxies is efficiently quenched, at least at low redshift. Ascribed by \citet{Peng2010} to \textit{environmental quenching}, the build-up of the low-mass quiescent population is discussed in Sect. \ref{qu_channels}. The increase of the very high-mass population that we observe in the quiescent sample (and consequently also in the total SMF) is discussed in Sect. \ref{high_mass_evol}.
\subsection{Evolution of the number densities and stellar mass densities}
\label{dens_evol}
We derived the galaxy number and stellar mass densities, $n_*$ and $\rho_*$, respectively, by integrating the stellar mass function:
\begin{equation}
n_* = \int_{M_1}^{M_2} \Phi(M_*) \ dM_*
\end{equation}
and
\begin{equation}
\rho_* = \int_{M_1}^{M_2} \Phi(M_*) \ M_* \ dM_*
.\end{equation}
We adopted the parametric form of the SMF corrected for the Eddington bias. We derived the number densities above the stellar mass completeness limit. The stellar mass density was calculated by integrating the SMF over the stellar mass range $9 < \log( M_* / M_{\odot}) < 13$, as in \citet{Tomczak2014}. We recall that at $z > 0.5$, the stellar mass density relies partially on the extrapolation of the SMF below the stellar mass completeness limit, down to the lower integration bound.
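These two integrals can be sketched numerically over the same mass range. For brevity, the sketch below keeps only the first (single-Schechter) term of Eq. \ref{eq_double_sch}, with the total-population values at $0.2 < z < 0.5$ from Table \ref{table_bestfit}, so the result is only indicative:

```python
import numpy as np

def schechter_per_dex(logM, logMstar=10.83, logPhi=-2.63, alpha=-0.95):
    """Single-Schechter SMF per dex (first term of the double form)."""
    x = 10.0**(logM - logMstar)
    return np.log(10.0) * 10.0**logPhi * x**(1.0 + alpha) * np.exp(-x)

# Integration range 9 < log(M*/Msun) < 13, as adopted for rho_*
logM = np.linspace(9.0, 13.0, 4001)
dlogM = logM[1] - logM[0]
phi = schechter_per_dex(logM)

n_star   = phi.sum() * dlogM                   # number density [Mpc^-3]
rho_star = (phi * 10.0**logM).sum() * dlogM    # mass density [Msun Mpc^-3]
```

With these parameters, $\log_{10}(\rho_*)$ comes out close to the $\sim 8.2$ value listed in Table \ref{table_bestfit} for this redshift bin; dropping the low-mass term changes $\rho_*$ only marginally since the integrand is dominated by masses around $\mathcal{M}^\star$.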
\begin{figure*}[t!]
\includegraphics[width=0.3525\hsize, trim = 0.3cm 0cm 0.8cm 0cm, clip]{figures/numdens_10_75_vs_time.pdf}
\includegraphics[width=0.32\hsize, trim = 1.6cm 0cm 0.8cm 0cm, clip]{figures/numdens_11_25_vs_time.pdf}
\includegraphics[width=0.32\hsize, trim = 1.6cm 0cm 0.8cm 0cm, clip]{figures/numdens_11_75_vs_time.pdf}
\caption{Evolution of the number densities in three bins of $M_*$, for the global (black), SF (blue), and Q (red) populations. The corresponding shaded area shows the systematic uncertainty that is due to the SF/Q selection around our reference measurement (stars). The measurements of \citet[][triangles]{Moustakas2013} and \citet[][pentagons]{Matsuoka2010} are plotted for comparison. \label{densities}}
\end{figure*}
In Fig. \ref{densities} we plot the cosmic evolution of the number densities, $n_*$, in the stellar mass bins $10.5 < \log( M_* / M_{\odot}) < 11$ (left), $11 < \log( M_* / M_{\odot}) < 11.5$ (middle), and $11.5 < \log( M_* / M_{\odot}) < 12$ (right), between redshifts $z = 0.2$ and $z = 1.5$. For every mass bin, we show the densities for the global, star-forming, and quiescent galaxy populations that we compare with the measurements from \citet[][triangles]{Moustakas2013} and \citet[][pentagons]{Matsuoka2010} when available.
For the global population in our sample, we distinguish two types of evolution. In the two lowest stellar mass bins ($10^{10.5} < M_* / M_{\odot} < 10^{11.5}$), we observe a two-phase evolution, with an increase of $\sim25 - 50 \%$ from $z \sim 1.3$ down to $z \sim 1$, followed by a plateau down to $z\sim 0.2$. For the most massive population ($M_* > 10^{11.5} M_{\odot}$), we observe a continuous increase by slightly less than a factor of two from $z\sim 1.5$ to $z\sim 0.2$.
A similar but weaker trend is seen in VIPERS, owing to its narrower redshift range. Our results are directly comparable with \citet{Matsuoka2010} for $M_*>10^{11}M_{\odot}$. These authors also took the Eddington bias into account in their density estimates (based on simulations). They also emphasised that their measurements at $z < 0.5$ are strongly biased because of their less reliable photo-zs. Within these limits, our $n_*$ evolution measurements for the entire population agree well with their results.
The trend observed with PRIMUS is also similar for the lowest mass bins, $M_* < 10^{11.5} M_{\odot}$, although they have systematically higher densities ($\sim 40\%$ and $\sim 25\%$ for $M_* \sim 10^{10.75} M_{\odot}$ and $M_* \sim 10^{11.25} M_{\odot}$, respectively), as expected from the higher normalisation of their SMFs (cf. Fig. \ref{SMF_litt}). In addition, it is important to recall that they did not take the Eddington bias into account, which can enhance the differences, especially at $M_*> 10^{11} M_{\odot}$.
For the evolution by galaxy type, we observe a two-phase evolution for $M_* < 10^{11.5} M_{\odot}$ quiescent galaxies, while the number density of star-forming galaxies stays constant, if not decreasing. At low mass, $M_* < 10^{11} M_{\odot}$ (left panel), the density of quiescent galaxies increases with cosmic time and equals the star-forming density in the lowest redshift bin, at $z \sim 0.3$.
For the intermediate masses, $10^{11}< M_*/M_{\odot} < 10^{11.5} $ (middle panel), the quiescent population becomes dominant at higher redshift, $z\sim 0.9$.
In the highest stellar mass bin ($M_* > 10^{11.5} M_{\odot}$, right panel), the quiescent population always outnumbers the star-forming one, already representing $50-60\%$ of the global population at $z \sim 1.3$ and more than $80\%$ at $z \sim 0.3$ (i.e. $n_*$ multiplied by 2.5). From $z\sim 1$ to $z\sim0.2$, the number density of the massive star-forming galaxies has diminished by a factor of 1.5 and 2 in the two highest mass bins, respectively.
The number densities computed in VIPERS are not plotted since the stellar mass bins used by \citet{Davidzon2013} are different from ours. However, the authors observed the same general trends, though their uncertainties prevent them from distinguishing the two-phase evolutions observed in our survey \citep[][Fig. 6]{Davidzon2013}.
We also generally agree with \citet{Moustakas2013} for star-forming galaxies, as both studies observe a decreasing $n_*$ between $z= 1$ and $z=0.3$ for $M_* > 10^{10.5} M_{\odot}$\footnote{Our highest stellar mass bin is not explored in \citet{Moustakas2013}, who limited their analysis to $M_*<10^{11.5}\,M_\odot$.}. The continuous increase of the corresponding quiescent population is also detected by \citet{Moustakas2013} between $z=1$ and $z=0.1$ when they measured the weighted linear fits of $n_*(z)$.
\begin{figure}[!]
\includegraphics[width=\hsize, trim = 0.5cm 0cm 1cm 0cm, clip]{figures/massdensity3_9_13.pdf}
\caption{Evolution of the cosmic stellar mass density for all (black), star-forming (blue), and quiescent (red) galaxies. The shaded areas show the corresponding systematic uncertainties that are due to the SF/Q selection. The open stars represent the measurement that we obtain by assuming a single-Schechter function to fit the star-forming galaxies. Measurements of \citet[][squares]{Tomczak2014}, \citet[][circles]{Ilbert2013}, and \citet[][quiescent only, red crosses]{Arnouts2007} are shown for comparison. The filled and open red circles represent the quiescent measurements of \citet{Ilbert2013}, using a selection of quiescent galaxies based on the NUV-r/r-J plane and the sSFR, respectively. The quiescent measurement of \citet{Arnouts2007} is based on the $K$-band luminosity density, and the selection uses the SED-fitting. For the sake of clarity, the star-forming
or quiescent measurements are plotted with a shift of $+0.03$
or $-0.03$ Gyr.\label{sm_density}}
\end{figure}
Figure \ref{sm_density} presents the cosmic evolution of the stellar mass density $\rho_*$ for all (black), star-forming (blue), and quiescent (red) galaxies. We compare our results (filled stars) with previous studies. We also plot the stellar mass density obtained by assuming a different slope of the star-forming SMF low-mass end (open stars; cf. Sect. \ref{SMF_evol}), but it does not change the results significantly. In good agreement with \citet[][circles]{Ilbert2013}\footnote{With respect to our results, the slightly higher values measured in COSMOS are expected, given the $8 < \log( M_* / M_{\odot}) < 13$ integration range adopted by \citet{Ilbert2013}.} and \citet[][squares]{Tomczak2014}, our measurement of the global evolution of $\rho_*$ reveals two phases: a $>50 \%$ increase from $z \sim 1.3$ down to $z \sim 1$, and a continuous 12--20\% increase from $z \sim 1$ down to $z \sim 0.3$.
As mentioned in Sect. \ref{SMF_evol}, our selection of quiescent galaxies is more compatible with the selections of \citet{Ilbert2013} and \citet{Tomczak2014} when we consider that galaxies lying in the green valley are classified as quiescent. This corresponds to the upper red and lower blue envelopes of $\rho_*$ in Fig. \ref{sm_density}. Still, our measurement for quiescent galaxies is smaller than previous measurements by up to 25\%. We do not find this difference when we consider the global stellar mass density. The importance of the Q/SF selection is reinforced by the fact that the agreement is better with \citet{Ilbert2013}, when they use the log $sSFR= -11$ selection\footnote{See \citet{Ilbert2010} concerning this threshold.} (Fig. \ref{sm_density}, open red circles). Our measurement is also consistent with the $\rho_*$ measured by \citet[][ red crosses]{Arnouts2007} for quiescent galaxies, which are selected thanks to SED-fitting (we do not plot their star-forming $\rho_*$\footnote{The $\rho_*$ measurement of \citet{Arnouts2007} is based on the $K$-band luminosity. \citet[][Appendix D]{Ilbert2010} showed that the mass-to-light ratio derived by \citet{Arnouts2007} for star-forming galaxies is not appropriate at low and intermediate masses.}).
As previously suggested, the evolution of the stellar mass density of star-forming galaxies seems to be quite stable at $z < 1.5$. At the same time, a rapid increase of the stellar mass contained in quiescent galaxies is observed, increased by a factor $>2.5$ from $z \sim 1.3$ down to $z \sim 1$. At lower redshift, we detect a small and continuous $\gtrsim30\%$ increase of $\rho_*$ from $z \sim 1$ down to $z \sim 0.3$, which reflects the progressive quenching of less massive galaxies.
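For reference, the stellar mass densities discussed here follow from integrating the Schechter fits of the SMF over the adopted mass range (a convention-dependent choice, e.g. $8 < \log( M_* / M_{\odot}) < 13$ in \citealt{Ilbert2013}):
\begin{equation}
\rho_* = \int_{M_{\rm min}}^{M_{\rm max}} M \, \Phi(M) \, {\rm d}M ,
\qquad
\Phi(M)\,{\rm d}M = \Phi^\star \, {\rm e}^{-M/\mathcal{M}^\star} \left( \frac{M}{\mathcal{M}^\star} \right)^{\alpha} \frac{{\rm d}M}{\mathcal{M}^\star} .
\end{equation}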
\section{Discussion}
\label{discut}
\subsection{High-mass end evolution}
\label{high_mass_evol}
As highlighted above, our sample can be used to investigate the evolution of massive ($M_*> 10^{10.5} M_{\odot}$) and rare ($M_*> 10^{11.5} M_{\odot}$) galaxies, thanks to the large volume of our survey. Most importantly, we are interested in the evolution of these objects across cosmic time, in particular to understand which mechanisms determine their evolution from star-forming to quiescent galaxies. Several studies \citep[e.g.][]{Kauffmann2003,Bundy2006,Davidzon2013} have characterised galaxy quenching with the so-called \textit{transition mass}, which is~the stellar mass at which the quiescent and star-forming populations are equal in a given redshift bin.
In the same spirit, we define the transition redshift, $z_{tr}$, at which the quiescent population becomes dominant. As shown in Fig. \ref{densities}, the transition redshift is found to be $z_{tr} \gtrsim 1.4$, $z_{tr} \sim 0.9$ and $z_{tr} \sim 0.2$, for $M_* \sim10^{11.75} M_{\odot}$, $M_* \sim10^{11.25} M_{\odot}$ , and $M_* \sim10^{10.75} M_{\odot}$ galaxies, respectively: globally, the more massive a galaxy, the earlier its star formation is stopped. This is qualitatively consistent with the redshift evolution of the transition mass \citep[e.g. see][]{Davidzon2013}.
As already mentioned, several physical mechanisms could explain this trend within a hierarchical context \citep[e.g.][]{DeLucia2007,Neistein2008,Weinmann2012}.
For instance, based on the stellar-to-halo mass relation from \citet[][]{Coupon2015}, star-forming galaxies with stellar masses of $M^*_{SF}$ ($\sim 10^{10.64} M_{\odot}$) should reside in dark matter halos with masses of around $M_h \sim 10^{12.4} M_{\odot}$. This value agrees well with the halo mass threshold invoked by \citet{Cattaneo2006}, corresponding to halo quenching, but we cannot exclude that some radio-AGN quenching could also explain why massive galaxies cease forming stars and/or are not fuelled anymore by fresh infalling gas \citep{Croton2006}.
We find that the number density of the most massive ($M_* > 10^{11.5} M_{\odot}$) galaxies almost doubled from $z \sim 1$ to $z \sim 0.3$ (Fig. \ref{densities}). This corresponds to the $< 0.25$ dex increase of the SMF high-mass end that is seen between $z \sim 1$ and $z \sim 0.3$ (Fig. \ref{evolMF}). Because the high-mass end is dominated by quiescent galaxies at $z < 1$, the increase of the $M_* > 10^{11.5} M_{\odot}$ population cannot be explained by incidental star formation \citep{Arnouts2007}. If we assume that, in general, these very high-mass galaxies do not experience significant star formation, they can still assemble stellar mass through mergers at $z<1$, in particular through dry merging.
\subsection{Taming of galaxies}
\label{qu_channels}
In Sect. \ref{SMF_evol} we have shown that the characteristic stellar mass of the star-forming SMF does not vary significantly between redshifts $z = 0.2$ and $z = 1.5$. As described in Sect. \ref{smf_measur}, we performed three selections of the SF galaxies, and the values of $\mathcal{M}^\star _{\textsc{sf}}$ differed slightly from one selection to another. In Fig. \ref{Mstar_z} we plot $\mathcal{M}^\star _{\textsc{sf}}$ as a function of the redshift and the SF galaxy selection in the NUVrK diagram. First, we find that $\mathcal{M}^\star _{\textsc{sf}} $ is between $10^{10.6}$ and $10^{10.8} M_{\odot}$ at $0.2 < z < 1.5$, regardless of the SF selection in the NUVrK diagram. More precisely, we find
\begin{itemize}
\item log $\mathcal{M}^\star _{\textsc{sf}} / M_{\odot} = 10.69^{+ 0.04}_{- 0.05}$ if the galaxies in transition are included in the selection of SF galaxies (upper dotted lines in Fig. \ref{NUVrK_z}),
\item log $\mathcal{M}^\star _{\textsc{sf}} / M_{\odot} = 10.66^{+ 0.02}_{- 0.03}$ for our intermediate selection, and
\item log $\mathcal{M}^\star _{\textsc{sf}} / M_{\odot} = 10.64^{+ 0.01}_{- 0.01}$ for the most conservative selection.
\end{itemize}
Therefore, the evolution of $\mathcal{M}^\star _{\textsc{sf}}$ is consistent with being constant if the galaxies transitioning in the green valley are excluded from the selection of SF galaxies. The invariance with respect to redshift of $\mathcal{M}^\star _{\textsc{sf}}$ for the most conservative selection strongly supports a mass-quenching process occurring around a constant stellar mass, which makes this selection suitable for investigating the galaxies that are about to quench.
\subsubsection{Tracking galaxies in the green valley}
\label{gal_tracking}
\begin{figure}
\includegraphics[width=\hsize, trim = -0.5cm 0cm -1cm 0cm, clip]{figures/Mstar_Sel_z_2.pdf}
\caption{Redshift evolution of $\mathcal{M}^\star _{\textsc{sf}}$, corresponding to the three selections of SF galaxies in the NUVrK diagram defined in Sect. \ref{smf_measur}: the reference selection (for which the limit lies in the middle of the green valley; cyan circles), its \textit{lower} limit (when galaxies in transition are excluded; blue triangles), and the \textit{upper} limit (if the green valley is included in the SF \textit{locus}; green squares).\label{Mstar_z}}
\end{figure}
\begin{figure}[!]
\includegraphics[width=0.5\columnwidth, trim = -0.15cm 1.5cm 0cm 0cm, clip]{figures/NrK_Mass_rb2_0205.pdf}
\hspace{-0.15cm}\includegraphics[width=0.5\columnwidth, trim = -0.15cm 1.5cm 0cm 0cm, clip]{figures/NrK_Mass_rb2_0508.pdf}
\includegraphics[width=0.5\columnwidth, trim = -0.05cm 0.9cm 0cm 0cm, clip]{figures/n_rK_0205.pdf}
\includegraphics[width=0.484\columnwidth, trim = 0.5cm 0.9cm 0cm 0cm, clip]{figures/n_rK_0508.pdf}
\includegraphics[width=0.5\columnwidth, trim = -0.15cm 1.5cm 0cm 0cm, clip]{figures/NrK_Mass_rb2_0811.pdf}
\hspace{-0.15cm}\includegraphics[width=0.5\columnwidth, trim = -0.15cm 1.5cm 0cm 0cm, clip]{figures/NrK_Mass_rb2_1115.pdf}
\includegraphics[width=0.5\columnwidth, trim = -0.05cm 0cm 0cm 0cm, clip]{figures/n_rK_0811.pdf}
\includegraphics[width=0.484\columnwidth, trim = 0.5cm 0cm 0cm 0cm, clip]{figures/n_rK_1115.pdf}
\caption{NUVrK galaxy distribution outside and inside the green valley, shown in four redshift bins. \textbf{\textit{Top sub-panels:}} NUVrK diagram as a function of the galaxy stellar mass. The red and blue contours show the equal density of the quiescent and star-forming populations, respectively, after excluding the transitioning galaxies (i.e. the galaxies lying in the green valley defined in Fig. \ref{NUVrK_z}). \textbf{\textit{Bottom sub-panels:}} Normalised number counts along the $(r-K_s)^\textsc{o}$ colour in the green valley (black solid line). The distribution at $0.2 < z < 0.5$ is repeated in each panel for comparison (blue shaded area). The vertical green dashed line shows the limit of the $\mathcal{M}^\star _{\textsc{sf}}$-quenching channel, as discussed in Sect. \ref{gal_tracking}. \label{NrK_Mass}}
\end{figure}
To identify a potential quenching channel for $\mathcal{M}^\star _{\textsc{sf}}$ galaxies, we isolate and characterise the green valley galaxies in Fig. \ref{NrK_Mass}, where each panel shows a different redshift bin. The contours represent the density of quiescent and star-forming galaxies, when the galaxies in transition are excluded (i.e. using the strictest selection of Q/SF galaxies). The colour code expresses the stellar mass. In the lower panels, we show the rest-frame $(r-K_s)^\textsc{o}$ distribution of the transitioning galaxies (i.e. the galaxies lying in the NUVrK green valley).
As explained in Sect. \ref{smf_measur}, the NUVrK diagram is very efficient in separating dusty star-forming galaxies from quiescent ones (see Fig.16 of the companion paper), which allows us to properly define transitioning galaxies in the green valley.
We observe that
\begin{itemize}
\item[1)] the $(r-K_s)^\textsc{o}$ distribution of galaxies in transition is narrow and does not evolve with redshift ($>80\%$ of these galaxies have $0.76 < (r-K_s)^\textsc{o} < 1.23$), and
\item[2)] the typical stellar mass of galaxies in transition is around $\mathcal{M}^\star _{\textsc{sf}}$ ($>60\%$ of these galaxies have $10^{10.5} < M_* / M_{\odot} < 10^{11}$).
\end{itemize}
Therefore, we isolated the quenching channel of the $\mathcal{M}^\star _{\textsc{sf}}$-galaxies with the colour criterion $(r-K_s)^\textsc{o} > 0.76$ in the NUVrK green valley (green dashed lines in Fig. \ref{NrK_Mass}, sub-panels).
\begin{figure}[!h]
\includegraphics[width=\hsize]{figures/SMF_Qyo.pdf}
\caption{Deconstruction of the quiescent SMF at $0.2 < z < 0.5$. The red squares represent the measurement for the whole quiescent population, while the magenta triangles and the dark red circles show the SMF for the \textit{young} ($Q_{yng}$) [$(r-K_s)^\textsc{o} < 0.76$] and \textit{old} ($Q_{old}$) [$(r-K_s)^\textsc{o} > 0.76$] quiescent populations, respectively.\label{SMF_Q}}
\end{figure}
We also detect a clear plume of \textit{young} quiescent galaxies in Fig. \ref{NrK_Mass}, with $(r-K_s)^\textsc{o} < 0.76$ (i.e. bluer than observed for galaxies following the $\mathcal{M}^\star _{\textsc{sf}}$ channel) at $z < 0.5$. It is well established that rest-frame optical-NIR colours are sensitive to both dust attenuation and age of the stellar populations \citep[see e.g.][]{Whitaker2012}. Under the assumption that, on average, the $(r-K_s)^\textsc{o}$ colour of quiescent galaxies cannot become bluer with time, the \textit{young} part of the quiescent population should have used another quenching channel. According to the limit that we defined to isolate the $\mathcal{M}^\star _{\textsc{sf}}$ quenching channel (green dashed line in Fig. \ref{NrK_Mass}), we separated the \textit{young} quiescent ($Q_{yng}$) and \textit{old} quiescent ($Q_{old}$) galaxies with $(r-K_s)^\textsc{o}=0.76$. Figure \ref{NrK_Mass} also reveals that $Q_{yng}$ galaxies are characterised by relatively low masses ($M_* \lesssim 10^{9.5} M_{\odot}$), which seems to match the low-mass upturn of the quiescent SMF (see Fig. \ref{SMF_fitt}) at $z < 0.5$. In Fig. \ref{SMF_Q} we compute the SMF for $Q_{yng}$ (magenta triangles) and $Q_{old}$ (dark red circles) galaxies at $0.2 < z < 0.5$. The $Q_{yng}$ galaxies dominate at low mass, and they are responsible for the low-mass upturn in the quiescent SMF.
At the same time, the SMF of $Q_{old}$ galaxies peaks at $\mathcal{M}^\star_{\textsc{sf}}$, which clearly supports the idea that the quiescent SMF is built through two quenching channels that can be distinguished with a cut in the NUVrK diagram at $(r-K_s)^\textsc{o} = 0.76$.
The timescale might then be a key element for characterising the mechanisms that are involved in each channel.
\subsubsection{Quenching timescales}
\label{qu_timesc}
\begin{figure*}[!]
\begin{small}
\hspace*{2.3cm}$t_Q$ = 1 Gyr \hspace*{5.54cm}$\tau$ = 0.1 Gyr \hspace*{3.51cm}$\tau$ = 1 Gyr
\end{small}
\includegraphics[width=0.35\hsize, trim = 0cm 0cm 0cm 0cm, clip]{figures/NrK_Qrb_EB-V.pdf}
\includegraphics[width=0.35\hsize, trim = -1.35cm 0cm 1.35cm 0cm, clip]{figures/NrK_Qplum_EB-V.pdf}
\hspace*{-0.5cm}\includegraphics[width=0.35\hsize, trim = 1.35cm 0cm -1.35cm 0cm, clip]{figures/NrK_QMstar_EB-V_b.pdf}
\caption{Predicted BC03 tracks in the NUVrK diagram at $0.2 < z < 0.5$ for $Z=0.008$ \citep{Calzetti2000} and E(B-V) = 0.2. The arrow shows the shift expected for E(B-V) + 0.1. Analogously to Fig. \ref{NUVrK_z}, the black solid and dashed lines correspond to the limits of the green valley and its middle, respectively, while we report the $(r-K_s)^{o}$-limit of the $\mathcal{M}^\star$-quenching channel with a vertical magenta solid line. The grey contours outline the galaxy density distribution. Each marker is coloured with respect to the corresponding stellar age (in Gyr). \textit{\textbf{Left panel}}: Only one quenching time is considered: $t_Q$ = 1 Gyr, with $\tau$ = 0.1 Gyr (triangles), $\tau$ = 0.25 Gyr (squares), $\tau$ = 1 Gyr (inverted triangles), $\tau$ = 1.5 Gyr (circles), and $\tau$ = 2.5 Gyr (diamonds). \textbf{\textit{Right panels}}: Two quenching timescales are considered: $\tau$ = 0.1 Gyr (middle panel) and $\tau$ = 1 Gyr (right panel), for $t_Q$ = 1 Gyr (triangles), 2 Gyr (diamonds), 5 Gyr (squares), and 9 Gyr (inverted triangles). The filled circles show the track for a continuous star formation without quenching. The red solid line linking the black edge triangles shows the track for $t_Q$ = 9 Gyr and $\tau$ = 0.5 Gyr.\label{NrK_qu}}
\end{figure*}
In Sect. \ref{gal_tracking} we have identified two possible channels in which galaxies are transitioning to build the quiescent population. We now investigate the nature of these channels through their characteristic timescales.
The rest-frame UV is sensitive to star formation on timescales of $10^{-2}$--$10^{-1}$ Gyr, and the scarcity of \textit{young}/low-mass galaxies in the green valley leads us to expect that some quenching processes occur on timescales of the same order or shorter.
To better constrain the timescale of the quenching that affects the star formation of low-mass and $\mathcal{M}^\star_{\textsc{sf}}$ galaxies, we explored the behaviour of simple star formation histories (SFHs) within the NUVrK diagram, similarly to the approach adopted by \cite{Schawinski2014}. We performed this analysis at $0.2 < z < 0.5$, where both \textit{old} and \textit{young} quiescent galaxies are well identified.
The use of simple e-folding SFHs implies that we assumed that galaxies can only become redder with time. This is motivated by the fact that the fraction of quiescent galaxies has continuously increased between $z \sim 3$ and $z \sim 0.2$ \citep[e.g.][]{Ilbert2010,Muzzin2013,Mortlock2015} and by assuming that most green valley galaxies are transitioning for the first time \citep{Martin2007}.
Doing so, we neglect the green valley galaxies produced by \textit{rejuvenation} processes, as observed in the local Universe \citep[e.g.][]{Salim2010,Thomas2010} and recently predicted at higher redshift in the \textsc{eagle} simulations \citep{Trayford2016}. However, in these simulations, rejuvenation is responsible for only a small fraction of the green valley galaxies.
Figure \ref{NrK_qu} presents the resulting tracks in the NUVrK diagram for SFHs constructed in the same way: a continuous star formation up to the quenching time at $t_{Q}$, followed by an exponentially declining star formation characterised by $\tau$. To mimic the average properties of our $K_s < 22$ sample at $0.2 < z < 0.5$, the example is plotted for one metallicity ($Z=0.008$), one extinction law \citep{Calzetti2000}, one value of the dust attenuation (E(B-V) = 0.2), and with a stellar age of at least
1 Gyr. The stellar age is colour coded, and only the ages allowed by the given redshift bin are plotted. In the left panel of Fig. \ref{NrK_qu} the SFHs are characterised by $t_{Q}$ = 1 Gyr, with $\tau$ = 0.1, 0.25, 1, 2, and 2.5 Gyr. The tracks are constructed in a very simple way, and the evolution assumes a constant dust attenuation based on its average value for the bluest SF galaxies. The arrows show the shift that is due to a 0.1 increase of E(B-V). It is expected that quiescent galaxies are less affected by dust, which would tend to make the tracks steeper in the NUVrK green valley. Keeping this effect in mind, we see as a first result that the presence of $Q_{yng}$ galaxies is expected if any quenching process occurs early ($t_Q \sim 1$ Gyr) with a typical timescale
of $\tau \lesssim 0.25$ Gyr (triangles and squares in the left panel of Fig. \ref{NrK_qu}). As a second result, $\tau$ = 1 Gyr (inverted triangles in the left panel of Fig. \ref{NrK_qu}) seems to be a lower limit for the quenching timescale that is compatible with the channel drawn by $\mathcal{M}^\star_{\textsc{sf}}$ galaxies. Galaxies with a quenching $\tau >$ 2 Gyr do not reach the quiescent cloud.
In the middle and right panels of Fig. \ref{NrK_qu}, we also investigate the effect of the quenching epoch. We fixed $\tau$ = 1 Gyr and $\tau$ = 0.1 Gyr for several values of $t_Q$ between 1 and 9 Gyr. Any $t_Q>9$ Gyr will produce the same result as $t_Q=9$ Gyr since the NUVrK colours of SF galaxies saturate at ages $> 9$ Gyr, as shown by the predicted track with a continuous star formation (circles). All the models with $\tau$ = 1 Gyr are able to explain the galaxy presence within the $\mathcal{M}^\star$ channel. We could also imagine that a shorter timescale combined with a late quenching time can reproduce the observed $\mathcal{M}^\star$ channel. However, if we consider an SFH with $\tau$ = 0.1 Gyr after 9 Gyr on the SF main sequence (middle panel, inverted triangles), the track seems to move away from the channel that is drawn by $\mathcal{M}^\star$-galaxies in the NUVrK diagram. To produce a track that is compatible with this channel, we need to consider a quenching timescale $\tau \gtrsim$ 0.5 Gyr (red solid line), regardless of the considered SFH. We recall that we have considered the shortest timescales compatible with the $\mathcal{M}^\star_{\textsc{sf}}$-\textit{quenching channel}, and we could pick out SFHs that agree better. Namely, SFHs characterised by $t_Q$ = 1 Gyr and $\tau$ = 1.5 Gyr, $t_Q$ = 5 Gyr and $\tau$ = 1 Gyr, or $t_Q$ = 9 Gyr and $0.5 < \tau < 1$ Gyr could also explain the presence of this channel. This suggests a quenching timescale range of $0.5 < \tau < 2$ Gyr for $\mathcal{M}^\star$-galaxies, which corresponds to a quenching duration of between $\sim 1$ and 3.5 Gyr\footnote{These values agree with the estimate of \citet{Fritz2014} in VIPERS, who found that massive ($\log(M_*/M_{\odot})>11$) galaxies are expected to turn quiescent in $\sim$1.5 Gyr at $0.7 < z < 1.3$, and more slowly at $z < 0.7$ (i.e. with longer quenching durations).}.
Therefore, the physical mechanism explaining the building of the quiescent SMF around $\mathcal{M}^\star_{\textsc{sf}}$ at $z < 1$ seems to be a slow process.
Such a \textit{mass dependent} mechanism is compatible with a \textit{strangulation} picture where the star formation quenching occurs on several Gyr, moving slowly away from the SF main sequence in the NUVrK diagram, while the gas supply is progressively halted \citep{Schawinski2014, Peng2015}.
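The e-folding SFHs underlying the tracks of Fig. \ref{NrK_qu} can be summarised in a few lines. The sketch below is purely illustrative (it does not reproduce the BC03 colour tracks, and the normalisation \texttt{sfr0} is an arbitrary choice): a constant star formation rate up to the quenching time $t_Q$, followed by an exponential decline with e-folding time $\tau$.

```python
import math

def sfr(t, t_q, tau, sfr0=1.0):
    """Star formation rate: constant sfr0 up to t_q, then exponential
    decline with e-folding time tau (all times in Gyr)."""
    if t <= t_q:
        return sfr0
    return sfr0 * math.exp(-(t - t_q) / tau)

def formed_mass(t, t_q, tau, sfr0=1.0):
    """Stellar mass formed by time t (ignoring the return fraction),
    in units of sfr0 * Gyr."""
    if t <= t_q:
        return sfr0 * t
    return sfr0 * (t_q + tau * (1.0 - math.exp(-(t - t_q) / tau)))

# A rapid quenching (tau = 0.1 Gyr) shuts star formation down almost
# immediately after t_q, while tau = 1 Gyr leaves residual activity:
print(round(sfr(1.5, 1.0, 0.1), 4))  # 0.0067
print(round(sfr(1.5, 1.0, 1.0), 4))  # 0.6065
```

With such a parametrisation, the short-$\tau$ histories stop building mass almost instantly after $t_Q$, which is why they cross the green valley quickly in the colour tracks.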
Figure \ref{NrK_qu} shows that the plume formed by $Q_{yng}$ galaxies in the NUVrK plane is explained by a $\sim 0.1$ Gyr-quenching process occurring during the first $\sim 5$ Gyr of the galaxy life (squares, diamonds, and triangles in the middle panel of Fig. \ref{NrK_qu}). The absence of these low-mass galaxies lying in the green valley can be first explained by the rapidity of their quenching. Indeed, a galaxy quenching with $\tau$ = 0.1 Gyr (triangles in the left and middle panels of Fig. \ref{NrK_qu}) is expected to cross the green valley (delimited by the black solid lines) in $\sim 0.4$ Gyr, while a galaxy with $\tau \sim 0.5-2$ Gyr spends $\sim 1-3.5$ Gyr there, on average. Nevertheless, the potential reservoir of SF $M_* < 10^{9.5} M_{\odot}$ galaxies that can quench is about ten times larger than for galaxies around $\mathcal{M}^\star _{\textsc{sf}}$ (cf. Fig. \ref{SMF_fitt}). We could then expect to see more low-mass galaxies in transition. By adopting a conservative approach, we can assume that the ratio between the two quenching timescales is $\sim 10$ (0.1 Gyr for $M_* < 10^{9.5} M_{\odot}$ galaxies, 1 Gyr around $\mathcal{M}^\star _{\textsc{sf}}$). The corresponding quenching rate should consequently be about ten times lower for the low-mass galaxies that are the progenitors of the $Q_{yng}$ galaxies than for the $\mathcal{M}^\star_{\textsc{sf}}$ galaxies. The resulting flux of quenching galaxies (i.e. quenching rate $\times$ SF reservoir) that cross the green valley is then expected to be of the same order of magnitude at both low and high mass, unless only a fraction of the low-mass galaxies is likely to be affected by the quenching. The SF satellite galaxies, which are estimated to be $\gtrsim 3$ times less abundant than field galaxies \citep{Yang2009,Peng2012}, are therefore good candidates for this low-mass quenching mechanism. Moreover, its typical timescale is compatible with the scenario suggested by \citet{Schawinski2014} for the rapid formation of young early-type galaxies. In this picture, the quiescent low-mass galaxies are formed through dramatic events such as major mergers and not through \textit{ram-pressure stripping} or \textit{strangulation}, explaining both the almost instantaneous star formation shutdown and the morphological transformation.
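The order-of-magnitude bookkeeping above can be made explicit. In the following sketch every number is an illustrative ratio quoted in the text (reservoir $\sim 10$ times larger, quenching rate $\sim 10$ times lower, crossing times of $\sim 0.4$ versus $\sim 2$ Gyr), not a measurement.

```python
# Relative SF reservoirs, quenching rates, and green-valley crossing times
# for the low-mass and Mstar_SF populations (illustrative values only).
reservoir = {"low_mass": 10.0, "Mstar_SF": 1.0}
rate      = {"low_mass": 0.1,  "Mstar_SF": 1.0}   # quenching rate per unit reservoir
t_cross   = {"low_mass": 0.4,  "Mstar_SF": 2.0}   # Gyr spent in the green valley

for pop in ("low_mass", "Mstar_SF"):
    flux = reservoir[pop] * rate[pop]     # galaxies entering the green valley per Gyr
    in_transit = flux * t_cross[pop]      # number caught in transit at any moment
    print(pop, flux, round(in_transit, 1))
# Both fluxes are comparable, but roughly five times fewer low-mass galaxies
# are visible in the green valley at any time because they cross it faster.
```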
\section{Summary}
We analysed the evolution of the stellar mass function in an unprecedentedly large ($>22$ deg$^2$) NIR-selected ($K_s < 22$) survey. This allowed us to provide reliable constraints on the evolution of massive galaxies and to investigate quenching processes below redshift $z\sim 1.2$. Covering the VIPERS spectroscopic survey, we computed highly reliable photometric redshifts, with typical precisions of $\sigma_{\Delta z/(1+z)}< 0.03$ and $\sigma_{\Delta z/(1+z)}< 0.05$ for bright ($i < 22.5$) and faint ($i > 22.5$) galaxies, respectively.
Paying particular attention to several sources of uncertainties (photometry, star-galaxy separation, photometric redshift, dust extinction treatment, and classification into quiescent and star-forming galaxies), we computed the SMF between redshifts $z = 0.2$ and $z = 1.5$. The unique size of our sample enabled us
to drastically reduce the statistical uncertainties affecting the SMFs and stellar mass densities with respect to other current surveys over the stellar mass range we consider: the Poissonian error and cosmic variance are reduced by factors of $\sim 3.3$ and $\sim 2$, respectively, compared to a 2 deg$^2$ survey.
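The quoted factor of $\sim 3.3$ on the Poissonian error is a direct consequence of counting statistics: at fixed depth the number of galaxies scales with the survey area, so the relative Poisson error scales as $1/\sqrt{\mathrm{area}}$ (the cosmic-variance factor of $\sim 2$ does not follow this scaling and comes from dedicated estimates). A minimal check:

```python
import math

# Relative Poisson error scales as 1/sqrt(N), i.e. 1/sqrt(area) at fixed depth.
area_this, area_ref = 22.0, 2.0         # deg^2, values quoted in the text
gain = math.sqrt(area_this / area_ref)  # reduction factor of the Poisson error
print(round(gain, 1))  # 3.3
```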
Combined with a careful treatment of the Eddington bias that
is due to the stellar mass uncertainty, we produced an unprecedentedly precise measurement of the massive end of the SMF at $z<1.5$. In particular, we stress the importance of constraining all sources of systematic uncertainties, which quickly become the dominant sources of error in large-scale surveys such as those planned with Euclid or LSST.
Using the $(NUV-r)$ versus $(r-K)$ rest-frame colour diagram to classify star-forming and quiescent galaxies in our sample, we measured the evolution of the SMFs of the two populations and investigated the possible quenching processes that could explain the build-up of the quiescent population. Our main conclusions are summarised below.
\begin{itemize}
\item[1)] We provided clear evidence that the number density of the most massive ($M_* > 10^{11.5} M_{\odot}$) galaxies increases by a factor $\sim 2$ from $z \sim 1$ to $z \sim 0.3$, which was first highlighted by \citet[][]{Matsuoka2010}. This population has been largely dominated by quiescent galaxies since $z\sim 1$, allowing for the possibility of mass assembly through dry mergers in very massive galaxies.
\item[2)] The characteristic mass of the SF population was found to be very stable in the redshift range $0.2 < z < 1.5$, with $\log(\mathcal{M}^\star_{\textsc{sf}} / M_{\odot}) = 10.64 \pm 0.01$.
This confirms that the star formation is impeded above a certain stellar mass \citep[][]{Ilbert2010,Peng2010}.
\item[3)] Using the NUVrK diagram as a tracer of the galaxy evolution,
we identified one main \textit{quenching channel} between the star-forming and quiescent sequences at $0.2 < z < 1.5$, which is followed by galaxies with stellar masses around $\mathcal{M}^\star_{\textsc{sf}}$. This channel is characterised by a colour $(r-K_s)^\textsc{o}>0.76$, typical of evolved massive star-forming galaxies, which should feed the majority of the quiescent population.
We also identified a \textit{young} quiescent population with $(r-K_s)^\textsc{o}<0.76$, whose galaxies likely followed another path to reach the quiescent sequence. We showed that this \textit{blue} quiescent population, dominated by low-mass galaxies, is responsible for the upturn of the quiescent SMF at low redshift.
\item[4)] Assuming simple e-folding SFHs (galaxies can only become redder with time), we found that the $\mathcal{M}^\star_{\textsc{sf}}$ channel is explained by long quenching timescales, with 0.5 Gyr $< \tau \lesssim$ 2 Gyr.
Galaxies in this channel are expected to turn quiescent after $\sim 1-3.5$ Gyr on average.
This is compatible with \textit{strangulation} processes occurring when the gas cooling or the cold gas inflows are impeded, allowing the galaxy to progressively consume its remaining gas reservoir \citep{Peng2015}. Conversely, the quenching of low-mass galaxies that is visible at low redshift is characterised by short timescales with $\tau \sim$ 0.1 Gyr. This quenching that halts star formation in $\sim$ 0.4 Gyr can be consistent with major merging \citep[][]{Schawinski2014} and may preferentially affect satellite galaxies.
\end{itemize}
\begin{acknowledgements}
We gratefully acknowledge the anonymous referee, whose advice helped much in improving the clarity of the paper.
We wish to thank J.-C. Cuillandre, H. Aussel, S. de la Torre and B. C. Lemaux for helpful discussions. We would also like to thank J. Moustakas for providing the PRIMUS stellar mass estimates.
This research is in part supported by the Centre National d'Etudes Spatiales (CNES) and the Centre National de la Recherche Scientifique (CNRS) of France, and the ANR Spin(e) project (ANR-13-BS05-0005, http://cosmicorigin.org).
L.G. acknowledges support of the European Research Council through the Darklight ERC Advanced Research Grant (\# 291521).
This paper is based on observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA/DAPNIA, and with WIRCam, a joint project of CFHT, Taiwan, Korea, Canada and France, at the Canada-France-Hawaii Telescope (CFHT).The CFHT is operated by the National Research Council (NRC) of Canada, the Institut National des Science de l'Univers of the Centre National de la Recherche Scientifique (CNRS) of France, and the University of Hawaii.
This work is based in part on data products produced at Terapix available at the Canadian Astronomy Data Centre as part of the Canada-France-Hawaii Telescope Legacy Survey, a collaborative project of NRC and CNRS.
We thank the Terapix team for the reduction of all the WIRCAM images and the preparation of the catalogues matching with the T0007 CFHTLS data release.
This paper is based on observations made with the Galaxy Evolution Explorer (GALEX). GALEX is a NASA Small Explorer, whose mission was developed in cooperation with the Centre National d'Etudes Spatiales (CNES) of France and the Korean Ministry of Science and Technology. GALEX is operated for NASA by the California Institute of Technology under NASA contract NAS5-98034.
This paper uses data from the VIMOS Public Extragalactic Redshift Survey (VIPERS). VIPERS has been performed using the ESO Very Large Telescope, under the "Large Programme" 182.A-0886. The participating institutions and funding agencies are listed at http://vipers.inaf.it.
This paper uses data from the VIMOS VLT Deep Survey (VVDS) obtained at the ESO Very Large Telescope under programs 070.A-9007 and 177.A-0837, and made available at the CESAM data center, Laboratoire d'Astrophysique de Marseille, France.
Funding for PRIMUS is provided by NSF (AST-0607701, AST-0908246, AST-0908442, AST-0908354) and NASA (Spitzer-1356708, 08-ADP08-0019, NNX09AC95G).
Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The Participating Institutions of the SDSS-III Collaboration are listed at http://www.sdss3.org/.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
Quantum entanglement provides a valuable resource for many important applications in quantum communication, quantum computation and quantum
metrology. For example, quantum teleportation \cite{teleportation}, quantum key distribution \cite{qkd}, and other quantum communication protocols all require
entanglement to set up the quantum channel. In one-way quantum computation, a cluster state must be created to perform the task \cite{one-way}.
However, entanglement is generally fragile: in a practical noisy environment, it decoheres and loses its quantum features.
In current quantum information processing, one of the main goals is to protect entanglement against the influence of the uncontrollable environment. In long-distance quantum communication, the main approach is to protect the
entanglement encoded in the physical qubits directly. For example, quantum repeaters \cite{repeater} extend the distance over which entanglement can be distributed, and photon noiseless linear amplification \cite{amplification} protects photons from loss. Entanglement purification and concentration are used to extract maximally entangled states from degraded entangled states \cite{purification,concentration}. In quantum computation, the main approach is the quantum error-correction code, which encodes a single physical quantum state into a logic qubit containing many physical qubits \cite{errorcorrection1,errorcorrection2}. By using redundant encoding together with suitable manipulations and measurements, the quantum features can be protected.
Interestingly, the approach of encoding many physical qubits into one logic qubit has also been discussed for logic-qubit entanglement \cite{cghz,yan,pan}. Recently,
Fr\"{o}wis and D\"{u}r studied a class of entangled states which show features similar to the Greenberger-Horne-Zeilinger (GHZ) state
but are more robust than the normal GHZ state in a noisy environment \cite{cghz}.
This state, called the concatenated GHZ (C-GHZ) state, has the form
\begin{eqnarray}
|\Phi_{1}^{\pm}\rangle_{N,m}=\frac{1}{\sqrt{2}}(|GHZ^{+}_{m}\rangle^{\otimes N} \pm |GHZ^{-}_{m}\rangle^{\otimes N}),\label{logic}
\end{eqnarray}
with $|GHZ^{\pm}_{m}\rangle=\frac{1}{\sqrt{2}}(|0\rangle^{\otimes m}\pm|1\rangle^{\otimes m})$. The entanglement of such logic qubits degrades only polynomially with the particle numbers $N$ and $m$. Ding \emph{et al.} described a way of creating the C-GHZ state with cross-Kerr nonlinearity \cite{yan}. Lu \emph{et al.}
reported the experimental realization of a C-GHZ state in an optical system with $m=2$ and $N=3$ \cite{pan}, and also demonstrated that the C-GHZ state is more robust than the conventional GHZ state.
On one hand, though several groups have discussed the C-GHZ state in both theory and experiment, such logic entangled states have not yet been used in entanglement-based quantum communication.
On the other hand, as the C-GHZ state is more robust, setting up the entanglement channel with logic-qubit entanglement rather than physical-qubit entanglement may be an alternative way to resist the noisy environment.
In this paper, we discuss one of the most important two-qubit measurements, namely the Bell-state measurement \cite{bellanalysis}. The Bell-state measurement enables many important applications in quantum information processing, such as teleportation \cite{teleportation}, quantum key distribution \cite{qkd}, and so on \cite{repeater}. Different from previous Bell-state analyses, we describe the logic Bell-state analysis (LBSA). Here, the logic Bell states are the states in Eq. (\ref{logic}) with $N=m=2$. We show that such states can be distinguished deterministically with the help of controlled-NOT (CNOT) gates. Moreover, the approach for the LBSA can be extended to distinguish the C-GHZ states with arbitrary $N$ and $m$.
This paper is organized as follows: In Sec. II, we describe the approach of the LBSA. In Sec. III, we extend this approach to
distinguish arbitrary C-GHZ states. Our protocol reveals that the logic Bell-state and C-GHZ state analyses can be reduced to the conventional
Bell-state and GHZ-state analyses, respectively. In Sec. IV, we present a discussion, showing that with the help of the LBSA we can teleport an unknown logic qubit and
perform complete logic entanglement swapping. In Sec. V, we present our conclusions.
\section{Logic Bell-state analysis}
The four logic Bell states can be described as
\begin{eqnarray}
|\Phi^{\pm}\rangle_{AB}=\frac{1}{\sqrt{2}}(|\phi^{+}\rangle_{A}|\phi^{+}\rangle_{B}\pm|\phi^{-}\rangle_{A}|\phi^{-}\rangle_{B}),\nonumber\\
|\Psi^{\pm}\rangle_{AB}=\frac{1}{\sqrt{2}}(|\phi^{+}\rangle_{A}|\phi^{-}\rangle_{B}\pm|\phi^{-}\rangle_{A}|\phi^{+}\rangle_{B}).
\end{eqnarray}
Here
\begin{eqnarray}
|\phi^{\pm}\rangle=\frac{1}{\sqrt{2}}(|0\rangle|0\rangle\pm|1\rangle|1\rangle),\nonumber\\
|\psi^{\pm}\rangle=\frac{1}{\sqrt{2}}(|0\rangle|1\rangle\pm|1\rangle|0\rangle),
\end{eqnarray}
where $|0\rangle$ and $|1\rangle$ denote the two states of a physical qubit.
$|\Phi^{+}\rangle_{AB}$ is essentially the state with $m=N=2$ in Eq. (\ref{logic}). In Fig. 1, four physical qubits comprise the two logic qubits A and B, occupying the spatial modes a$_{1}$, a$_{2}$, b$_{1}$ and b$_{2}$, respectively.\\
\begin{figure}[!h
\begin{center}
\includegraphics[width=6cm,angle=0]{bellcolor.eps}
\caption{Schematic diagram of the logic Bell-state analysis. $H$ represents the Hadamard operation and $M$ represents the measurement in the basis $\{|0\rangle, |1\rangle\}$.}
\end{center}
\end{figure}
As shown in Fig. 1, we first perform a Hadamard operation (H) on each qubit. The Hadamard operation
maps $|0\rangle\rightarrow|+\rangle=\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle)$ and $|1\rangle\rightarrow|-\rangle=\frac{1}{\sqrt{2}}(|0\rangle-|1\rangle)$. Acting on both qubits of a pair, it leaves
$|\phi^{+}\rangle$ unchanged but transforms $|\phi^{-}\rangle$ into $|\psi^{+}\rangle$.
After the four Hadamard operations, the four logic Bell states can be rewritten as
\begin{eqnarray}
|\Phi^{\pm}\rangle_{AB}=\frac{1}{\sqrt{2}}(|\phi^{+}\rangle_{A}|\phi^{+}\rangle_{B}\pm|\psi^{+}\rangle_{A}|\psi^{+}\rangle_{B}),\nonumber\\
|\Psi^{\pm}\rangle_{AB}=\frac{1}{\sqrt{2}}(|\phi^{+}\rangle_{A}|\psi^{+}\rangle_{B}\pm|\psi^{+}\rangle_{A}|\phi^{+}\rangle_{B}).\label{bell}
\end{eqnarray}
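As a quick sanity check of this bipartite Hadamard action, the following standalone numpy sketch (written here for illustration; not part of the original derivation) verifies that $H\otimes H$ preserves $|\phi^{+}\rangle$ and maps $|\phi^{-}\rangle$ to $|\psi^{+}\rangle$:

```python
import numpy as np

# Single-qubit Hadamard and two-qubit Bell states (|00>, |01>, |10>, |11> ordering).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H2 = np.kron(H, H)

phi_p = np.array([1, 0, 0, 1]) / np.sqrt(2)   # |phi+>
phi_m = np.array([1, 0, 0, -1]) / np.sqrt(2)  # |phi->
psi_p = np.array([0, 1, 1, 0]) / np.sqrt(2)   # |psi+>

# H x H leaves |phi+> unchanged and maps |phi-> to |psi+>.
assert np.allclose(H2 @ phi_p, phi_p)
assert np.allclose(H2 @ phi_m, psi_p)
```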
After the CNOT operations within each logic qubit (with a$_{1}$ and b$_{1}$ as control qubits and a$_{2}$ and b$_{2}$ as targets), the states $|\Phi^{\pm}\rangle_{AB}$ evolve as
\begin{eqnarray}
&&|\Phi^{\pm}\rangle_{AB}=\frac{1}{\sqrt{2}}(|\phi^{+}\rangle_{A}|\phi^{+}\rangle_{B}\pm|\psi^{+}\rangle_{A}|\psi^{+}\rangle_{B}),\nonumber\\
&=&\frac{1}{\sqrt{2}}[\frac{1}{\sqrt{2}}(|0\rangle_{a_{1}}|0\rangle_{a_{2}}+|1\rangle_{a_{1}}|1\rangle_{a_{2}})
\otimes\frac{1}{\sqrt{2}}(|0\rangle_{b_{1}}|0\rangle_{b_{2}}\nonumber\\
&+&|1\rangle_{b_{1}}|1\rangle_{b_{2}})
\pm\frac{1}{\sqrt{2}}(|0\rangle_{a_{1}}|1\rangle_{a_{2}}+|1\rangle_{a_{1}}|0\rangle_{a_{2}})\nonumber\\
&\otimes&\frac{1}{\sqrt{2}}(|0\rangle_{b_{1}}|1\rangle_{b_{2}}
+|1\rangle_{b_{1}}|0\rangle_{b_{2}})]\nonumber\\
&\rightarrow&\frac{1}{\sqrt{2}}[\frac{1}{\sqrt{2}}(|0\rangle_{a_{1}}|0\rangle_{a_{2}}+|1\rangle_{a_{1}}|0\rangle_{a_{2}})
\otimes\frac{1}{\sqrt{2}}(|0\rangle_{b_{1}}|0\rangle_{b_{2}}\nonumber\\
&+&|1\rangle_{b_{1}}|0\rangle_{b_{2}})
\pm\frac{1}{\sqrt{2}}(|0\rangle_{a_{1}}|1\rangle_{a_{2}}+|1\rangle_{a_{1}}|1\rangle_{a_{2}})\nonumber\\
&\otimes&\frac{1}{\sqrt{2}}(|0\rangle_{b_{1}}|1\rangle_{b_{2}}+|1\rangle_{b_{1}}|1\rangle_{b_{2}})]\nonumber\\
&=&|+\rangle_{a_{1}}|+\rangle_{b_{1}}\otimes\frac{1}{\sqrt{2}}(|0\rangle_{a_{2}}|0\rangle_{b_{2}}\pm|1\rangle_{a_{2}}|1\rangle_{b_{2}}).\label{evolve1}
\end{eqnarray}
Similarly, the states $|\Psi^{\pm}\rangle_{AB}$ will evolve as
\begin{eqnarray}
&&|\Psi^{\pm}\rangle_{AB}=\frac{1}{\sqrt{2}}(|\phi^{+}\rangle_{A}|\psi^{+}\rangle_{B}\pm|\psi^{+}\rangle_{A}|\phi^{+}\rangle_{B})\nonumber\\
&\rightarrow&|+\rangle_{a_{1}}|+\rangle_{b_{1}}\otimes\frac{1}{\sqrt{2}}(|0\rangle_{a_{2}}|1\rangle_{b_{2}}\pm|1\rangle_{a_{2}}|0\rangle_{b_{2}}).\label{evolve2}
\end{eqnarray}
From Eqs. (\ref{evolve1}) and (\ref{evolve2}), one can see that the qubits a$_{1}$ and b$_{1}$ are disentangled from the qubits a$_{2}$ and b$_{2}$.
Interestingly, after discarding the qubits a$_{1}$ and b$_{1}$, the remaining qubits a$_{2}$ and b$_{2}$ are left in one of the standard Bell states:
$|\Phi^{\pm}\rangle_{AB}$ becomes $|\phi^{\pm}\rangle$ and $|\Psi^{\pm}\rangle_{AB}$ becomes $|\psi^{\pm}\rangle$, respectively.
The standard Bell states can be distinguished with a CNOT gate and a Hadamard gate.
As shown in Fig. 1, the qubits a$_{2}$ and b$_{2}$ then pass through a second CNOT gate (with a$_{2}$ as control and b$_{2}$ as target),
and $|\phi^{\pm}\rangle$ and $|\psi^{\pm}\rangle$ become
\begin{eqnarray}
|\phi^{\pm}\rangle&=&\frac{1}{\sqrt{2}}(|0\rangle_{a_{2}}|0\rangle_{b_{2}}\pm|1\rangle_{a_{2}}|1\rangle_{b_{2}})\nonumber\\
&\rightarrow&\frac{1}{\sqrt{2}}(|0\rangle_{a_{2}}\pm|1\rangle_{a_{2}})|0\rangle_{b_{2}},\nonumber\\
|\psi^{\pm}\rangle&=&\frac{1}{\sqrt{2}}(|0\rangle_{a_{2}}|1\rangle_{b_{2}}\pm|1\rangle_{a_{2}}|0\rangle_{b_{2}})\nonumber\\
&\rightarrow&\frac{1}{\sqrt{2}}(|0\rangle_{a_{2}}\pm|1\rangle_{a_{2}})|1\rangle_{b_{2}}.
\end{eqnarray}
Finally, after qubit a$_{2}$ passes through the Hadamard gate, both qubits are measured in the basis $\{|0\rangle,|1\rangle\}$.
If the measurement result is $|0\rangle|0\rangle$, the original state is $|\Phi^{+}\rangle_{AB}$. If the measurement result is
$|1\rangle|0\rangle$, the original state is $|\Phi^{-}\rangle_{AB}$. On the other hand, if the measurement result is $|0\rangle|1\rangle$
or $|1\rangle|1\rangle$, the original state is $|\Psi^{+}\rangle_{AB}$ or $|\Psi^{-}\rangle_{AB}$, respectively. In this way, the LBSA is
completed.
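The full LBSA circuit of Fig. 1 can be simulated numerically. The sketch below is an illustrative reconstruction (the qubit ordering a$_{1}$, a$_{2}$, b$_{1}$, b$_{2}$ and the control/target assignments are our assumptions): it applies the Hadamard layer, the two CNOTs inside the logic qubits, the second CNOT between a$_{2}$ and b$_{2}$, and the final Hadamard, then checks that the four logic Bell states give four distinct, deterministic outcomes on (a$_{2}$, b$_{2}$):

```python
import numpy as np
from itertools import product

H1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)

def op_on(gate_1q, qubit, n):
    """Embed a single-qubit gate on `qubit` (0-indexed) of an n-qubit register."""
    ops = [gate_1q if k == qubit else I2 for k in range(n)]
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

def cnot(control, target, n):
    """CNOT matrix on an n-qubit register (qubit 0 = most significant bit)."""
    dim = 2 ** n
    M = np.zeros((dim, dim))
    for i in range(dim):
        bits = [(i >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[control]:
            bits[target] ^= 1
        M[sum(b << (n - 1 - k) for k, b in enumerate(bits)), i] = 1.0
    return M

def ket(bits):
    v = np.zeros(2 ** len(bits))
    v[int("".join(map(str, bits)), 2)] = 1.0
    return v

# Physical Bell states |phi+-> on two qubits.
phi = {+1: (ket([0, 0]) + ket([1, 1])) / np.sqrt(2),
       -1: (ket([0, 0]) - ket([1, 1])) / np.sqrt(2)}

def logic_state(kind, sign):
    """Logic Bell states on the four qubits (a1, a2, b1, b2)."""
    if kind == "Phi":
        return (np.kron(phi[+1], phi[+1]) + sign * np.kron(phi[-1], phi[-1])) / np.sqrt(2)
    return (np.kron(phi[+1], phi[-1]) + sign * np.kron(phi[-1], phi[+1])) / np.sqrt(2)

n = 4  # qubits a1, a2, b1, b2
circuit = (op_on(H1, 1, n)                    # final Hadamard on a2
           @ cnot(1, 3, n)                    # second CNOT: a2 -> b2
           @ cnot(0, 1, n) @ cnot(2, 3, n)    # CNOTs inside each logic qubit
           @ op_on(H1, 0, n) @ op_on(H1, 1, n)
           @ op_on(H1, 2, n) @ op_on(H1, 3, n))  # Hadamards on all qubits

def outcome(state):
    """Deterministic (a2, b2) measurement result, marginalizing over a1, b1."""
    amp = circuit @ state
    probs = {}
    for i, a in enumerate(amp):
        bits = [(i >> (n - 1 - k)) & 1 for k in range(n)]
        probs[(bits[1], bits[3])] = probs.get((bits[1], bits[3]), 0.0) + abs(a) ** 2
    return max(probs, key=probs.get)

results = {(k, s): outcome(logic_state(k, s))
           for k, s in product(["Phi", "Psi"], [+1, -1])}
assert sorted(results.values()) == [(0, 0), (0, 1), (1, 0), (1, 1)]
```

The four outcomes are distinct, matching the measurement table in the text: $(0,0)\rightarrow|\Phi^{+}\rangle$, $(1,0)\rightarrow|\Phi^{-}\rangle$, $(0,1)\rightarrow|\Psi^{+}\rangle$, $(1,1)\rightarrow|\Psi^{-}\rangle$.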
\section{Entanglement analysis for arbitrary concatenated entangled state}
In this section, we show that the approach used for the LBSA can also be used to
analyze arbitrary C-GHZ states of the form
\begin{eqnarray}
|\Phi^{\pm}_{1}\rangle_{N,m}&=&\frac{1}{\sqrt{2}}(|GHZ^{+}_{m}\rangle^{\otimes N} \pm |GHZ^{-}_{m}\rangle^{\otimes N}),\nonumber\\
|\Phi^{\pm}_{2}\rangle_{N,m}&=&\frac{1}{\sqrt{2}}(|GHZ^{-}_{m}\rangle|GHZ^{+}_{m}\rangle^{\otimes N-1}\nonumber\\
&\pm& |GHZ^{+}_{m}\rangle|GHZ^{-}_{m}\rangle^{\otimes N-1}),\nonumber\\
|\Phi^{\pm}_{3}\rangle_{N,m}&=&\frac{1}{\sqrt{2}}(|GHZ^{+}_{m}\rangle|GHZ^{-}_{m}\rangle|GHZ^{+}_{m}\rangle^{\otimes N-2}\nonumber\\
&\pm& |GHZ^{-}_{m}\rangle|GHZ^{+}_{m}\rangle|GHZ^{-}_{m}\rangle^{\otimes N-2}),\nonumber\\
&\cdots&\nonumber\\
|\Phi^{\pm}_{2^{N-1}}\rangle_{N,m}&=&\frac{1}{\sqrt{2}}(|GHZ^{+}_{m}\rangle^{\otimes N-1}|GHZ^{-}_{m}\rangle\nonumber\\
&\pm&|GHZ^{-}_{m}\rangle^{\otimes N-1}|GHZ^{+}_{m}\rangle).\label{multi1}
\end{eqnarray}
In Eq. (\ref{multi1}), the logic qubits are denoted by $|GHZ^{\pm}_{m}\rangle$.
The C-GHZ state analysis for the states in Eq. (\ref{multi1}) can be reduced to distinguishing the states
\begin{eqnarray}
&&|\Phi^{\pm}_{1}\rangle_{N,2}=\frac{1}{\sqrt{2}}(|\phi^{+}\rangle^{\otimes N}\pm|\phi^{-}\rangle^{\otimes N}),\nonumber\\
&&|\Phi^{\pm}_{2}\rangle_{N,2}=\frac{1}{\sqrt{2}}(|\phi^{-}\rangle|\phi^{+}\rangle^{\otimes N-1}\pm|\phi^{+}\rangle|\phi^{-}\rangle^{\otimes N-1}),\nonumber\\
&&\cdots\nonumber\\
&&|\Phi^{\pm}_{2^{N-1}}\rangle_{N,2}=\frac{1}{\sqrt{2}}(|\phi^{+}\rangle^{\otimes N-1}|\phi^{-}\rangle\pm|\phi^{-}\rangle^{\otimes N-1}|\phi^{+}\rangle).\nonumber\\\label{multi2}
\end{eqnarray}
In the C-GHZ state analysis, we only need to distinguish the states at the logic-qubit level, without caring about
the exact information within each logic qubit.
\begin{figure}[!h
\begin{center}
\includegraphics[width=6cm,angle=0]{ghzcolor.eps}
\caption{Schematic diagram of the arbitrary C-GHZ state analysis.}
\end{center}
\end{figure}
Before we start this protocol, we first transform the states $|\Phi^{\pm}_{1}\rangle_{N,m}$, $|\Phi^{\pm}_{2}\rangle_{N,m}$, $\cdots$, $|\Phi^{\pm}_{2^{N-1}}\rangle_{N,m}$ to $|\Phi^{\pm}_{1}\rangle_{N,2}$, $|\Phi^{\pm}_{2}\rangle_{N,2}$, $\cdots$, $|\Phi^{\pm}_{2^{N-1}}\rangle_{N,2}$, respectively. This transformation can be completed by performing a Hadamard operation on each of $m-2$ physical qubits in every logic qubit and measuring these qubits in the $\{|0\rangle, |1\rangle\}$ basis. If the number of $|1\rangle$ outcomes within a given logic qubit is even, that logic qubit is faithfully reduced from $|GHZ^{\pm}_{m}\rangle$ to $|\phi^{\pm}\rangle$. If it is odd,
the reduction comes with a sign flip ($|GHZ^{\pm}_{m}\rangle\rightarrow|\phi^{\mp}\rangle$), which can be undone by a phase-flip operation on one of the two remaining physical qubits of that logic qubit. After these corrections, the states $|\Phi^{\pm}_{1}\rangle_{N,m}$, $|\Phi^{\pm}_{2}\rangle_{N,m}$, $\cdots$, $|\Phi^{\pm}_{2^{N-1}}\rangle_{N,m}$ are fully transformed to $|\Phi^{\pm}_{1}\rangle_{N,2}$, $|\Phi^{\pm}_{2}\rangle_{N,2}$, $\cdots$, $|\Phi^{\pm}_{2^{N-1}}\rangle_{N,2}$, respectively.
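This parity rule can be checked directly. The following numpy sketch (illustrative; $m=4$ is our choice) reduces $|GHZ^{\pm}_{m}\rangle$ to a two-qubit state by projecting $m-2$ qubits onto Hadamard-rotated basis states, and confirms that an even number of $|1\rangle$ outcomes preserves the sign while an odd number flips it:

```python
import numpy as np

m = 4  # physical qubits per logic qubit (illustrative choice)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def ghz(sign, m):
    """|GHZ_m^{+-}> = (|0...0> + sign |1...1>)/sqrt(2)."""
    v = np.zeros(2 ** m)
    v[0], v[-1] = 1.0, sign
    return v / np.sqrt(2)

def reduce_ghz(sign, outcomes):
    """Project the last m-2 qubits onto H|outcome> and return the
    renormalized state of the two remaining qubits."""
    state = ghz(sign, m).reshape([2] * m)
    for out in reversed(outcomes):
        basis = H @ np.eye(2)[out]   # <out| H on the measured qubit
        state = np.tensordot(state, basis, axes=([state.ndim - 1], [0]))
    state = state.reshape(4)
    return state / np.linalg.norm(state)

phi_p = np.array([1, 0, 0, 1]) / np.sqrt(2)
phi_m = np.array([1, 0, 0, -1]) / np.sqrt(2)

# Even number of |1> outcomes keeps the sign; odd flips it.
assert np.allclose(abs(reduce_ghz(+1, [0, 0]) @ phi_p), 1.0)
assert np.allclose(abs(reduce_ghz(+1, [0, 1]) @ phi_m), 1.0)
assert np.allclose(abs(reduce_ghz(-1, [1, 1]) @ phi_m), 1.0)
```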
In Fig. 2, we denote the logic qubits $|\phi^{\pm}\rangle$ as $A$, $B$, $\cdots$, etc. We first perform a Hadamard operation on each physical qubit, which transforms the states in Eq. (\ref{multi2}) into
\begin{eqnarray}
&&|\Phi^{\pm}_{1}\rangle_{N,2}=\frac{1}{\sqrt{2}}(|\phi^{+}\rangle^{\otimes N}\pm|\psi^{+}\rangle^{\otimes N}),\nonumber\\
&&|\Phi^{\pm}_{2}\rangle_{N,2}=\frac{1}{\sqrt{2}}(|\psi^{+}\rangle|\phi^{+}\rangle^{\otimes N-1}\pm|\phi^{+}\rangle|\psi^{+}\rangle^{\otimes N-1}),\nonumber\\
&&\cdots\nonumber\\
&&|\Phi^{\pm}_{2^{N-1}}\rangle_{N,2}=\frac{1}{\sqrt{2}}(|\phi^{+}\rangle^{\otimes N-1}|\psi^{+}\rangle\pm|\psi^{+}\rangle^{\otimes N-1}|\phi^{+}\rangle).\nonumber\\\label{multi3}
\end{eqnarray}
After passing through the CNOT gates, the states in Eq. (\ref{multi3}) can be written as
\begin{eqnarray}
|\Phi^{\pm}_{1}\rangle_{N,2}&\rightarrow&\frac{1}{\sqrt{2}}(|+\rangle^{\otimes N}|0\rangle^{\otimes N}\pm|+\rangle^{\otimes N}|1\rangle^{\otimes N}),\nonumber\\
|\Phi^{\pm}_{2}\rangle_{N,2}&\rightarrow&\frac{1}{\sqrt{2}}(|+\rangle^{\otimes N}|1\rangle|0\rangle^{\otimes N-1}\nonumber\\
&\pm&|+\rangle^{\otimes N}|0\rangle|1\rangle^{\otimes N-1})\nonumber\\
&\cdots&\nonumber\\
|\Phi^{\pm}_{2^{N-1}}\rangle_{N,2}&\rightarrow&\frac{1}{\sqrt{2}}(|+\rangle^{\otimes N}|0\rangle^{\otimes N-1}|1\rangle\nonumber\\
&\pm&|+\rangle^{\otimes N}|1\rangle^{\otimes N-1}|0\rangle).\label{multi4}
\end{eqnarray}
From Eq. (\ref{multi4}), all the first physical qubits a$_{1}$, b$_{1}$, $\cdots$, n$_{1}$ of the logic qubits are disentangled from the second ones. By discarding
these qubits, the states in Eq. (\ref{multi4}) reduce to the standard $N$-particle GHZ states of the form
\begin{eqnarray}
&&|\Phi^{\pm}_{1}\rangle_{N,2}\rightarrow|\Phi^{\pm}_{1}\rangle_{N}=\frac{1}{\sqrt{2}}(|0\rangle^{\otimes N}\pm|1\rangle^{\otimes N}),\nonumber\\
&&|\Phi^{\pm}_{2}\rangle_{N,2}\rightarrow|\Phi^{\pm}_{2}\rangle_{N}=\frac{1}{\sqrt{2}}(|1\rangle|0\rangle^{\otimes N-1}\pm|0\rangle|1\rangle^{\otimes N-1}),\nonumber\\
&&\cdots\nonumber\\
&&|\Phi^{\pm}_{2^{N-1}}\rangle_{N,2}\rightarrow|\Phi^{\pm}_{2^{N-1}}\rangle_{N}=\frac{1}{\sqrt{2}}(|0\rangle^{\otimes N-1}|1\rangle\pm|1\rangle^{\otimes N-1}|0\rangle).\nonumber\\\label{multi5}
\end{eqnarray}
The $N$ remaining particles are denoted a$_{2}$, b$_{2}$, $\cdots$, n$_{2}$, as shown in Fig. 2. Therefore, the discrimination of the C-GHZ states is equivalent to the discrimination of the standard $N$-particle GHZ states. The next step can be described as follows.
We first perform a CNOT gate between the neighboring qubits (n-1)$_{2}$ and n$_{2}$, with (n-1)$_{2}$ as the control (source) qubit and n$_{2}$ as the target. In Fig. 2, we denote $k\equiv n-1$. We then take (n-2)$_{2}$ as the control qubit and k$_{2}$ as the target and perform the CNOT operation again. In general, in each round the l$_{2}$ qubit ($l=N-1, N-2, \cdots, 1$) acts as the control and the (l+1)$_{2}$ qubit as the target. After $N-1$ CNOT operations, the states in Eq. (\ref{multi5}) become
\begin{eqnarray}
|\Phi^{\pm}_{1}\rangle_{N}&\rightarrow&|\pm\rangle|0\rangle^{\otimes N-1},\nonumber\\
|\Phi^{\pm}_{2}\rangle_{N}&\rightarrow&|\pm\rangle|1\rangle|0\rangle^{\otimes N-2},\nonumber\\
&&\cdots\nonumber\\
|\Phi^{\pm}_{2^{N-1}}\rangle_{N}&\rightarrow&|\pm\rangle|0\rangle^{\otimes N-2}|1\rangle,
\end{eqnarray}
with $|\pm\rangle=\frac{1}{\sqrt{2}}(|0\rangle\pm|1\rangle)$.
Finally, after performing a Hadamard operation on the first qubit a$_{2}$ to transform $|+\rangle$ to $|0\rangle$ and $|-\rangle$ to $|1\rangle$, all the
states in Eq. (\ref{multi1}) can be distinguished deterministically by measuring all the qubits in the basis $\{|0\rangle,|1\rangle\}$.
For example, if the measurement results are $|0\rangle|0\rangle\cdots|0\rangle$, the original state is $|\Phi^{+}_{1}\rangle_{N,m}$.
If the measurement results are $|1\rangle|0\rangle\cdots|0\rangle$, the original state is $|\Phi^{-}_{1}\rangle_{N,m}$. In this way, we can distinguish
arbitrary C-GHZ state.
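The CNOT-chain discrimination of the reduced GHZ states can likewise be verified numerically. This sketch (illustrative; $N=3$ is our choice) applies the chain of CNOTs from $l=N-1$ down to $l=1$ followed by a Hadamard on the first qubit, and checks that all $2^{N}$ GHZ-class states map to distinct computational-basis strings:

```python
import numpy as np

def cnot(control, target, n):
    """CNOT on an n-qubit register, qubit 0 = most significant bit."""
    dim = 2 ** n
    M = np.zeros((dim, dim))
    for i in range(dim):
        bits = [(i >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[control]:
            bits[target] ^= 1
        M[sum(b << (n - 1 - k) for k, b in enumerate(bits)), i] = 1.0
    return M

N = 3
dim = 2 ** N
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H_first = np.kron(H, np.eye(dim // 2))  # Hadamard on the first qubit only

def ghz_like(mask, sign):
    """(|mask> + sign |mask-complement>)/sqrt(2), mask given as a bit tuple."""
    v = np.zeros(dim)
    i = sum(b << (N - 1 - k) for k, b in enumerate(mask))
    v[i] += 1.0
    v[dim - 1 - i] += sign
    return v / np.sqrt(2)

# CNOT chain: control l, target l+1, for l = N-1 down to 1 (1-indexed).
chain = np.eye(dim)
for l in range(N - 1, 0, -1):
    chain = cnot(l - 1, l, N) @ chain  # 0-indexed control/target

outcomes = set()
for mask in [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]:
    for sign in (+1, -1):
        amp = H_first @ chain @ ghz_like(mask, sign)
        idx = int(np.argmax(np.abs(amp)))
        assert np.isclose(abs(amp[idx]), 1.0)  # outcome is deterministic
        outcomes.add(idx)
assert len(outcomes) == 8  # all 2^N GHZ-class states give distinct bit strings
```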
\section{Discussion}
So far, we have fully described the LBSA and the C-GHZ state analysis. It is interesting to discuss some applications of these analyses. Quantum teleportation \cite{teleportation} and entanglement swapping \cite{repeater} are two unique techniques in quantum communication. The former transmits the unknown state encoded in a particle to a remote location without sending the particle itself; the latter can be used to extend the distance of quantum communication. We now show that an unknown logic qubit can be teleported and that entanglement swapping based on logic entanglement can also be performed in principle.
\subsection{Logic qubit teleportation}
Suppose that an arbitrary logic qubit in Alice's laboratory can be defined as
\begin{eqnarray}
|\varphi\rangle_{A}=\alpha|GHZ^{+}_{m}\rangle_{A}+\beta|GHZ^{-}_{m}\rangle_{A},
\end{eqnarray}
with $|\alpha|^{2}+|\beta|^{2}=1$. Alice and Bob share the logic qubits entanglement in the channel BC of the form
\begin{eqnarray}
|\Psi\rangle_{BC}&=&\frac{1}{\sqrt{2}}(|GHZ^{+}_{m}\rangle_{B}|GHZ^{+}_{m}\rangle_{C}\nonumber\\
&+&|GHZ^{-}_{m}\rangle_{B}|GHZ^{-}_{m}\rangle_{C}).
\end{eqnarray}
Alice performs the LBSA on her two logic qubits A and B. The whole state can be described as
\begin{eqnarray}
&&|\varphi\rangle_{A}\otimes|\Psi\rangle_{BC}=(\alpha|GHZ^{+}_{m}\rangle_{A}+\beta|GHZ^{-}_{m}\rangle_{A})\nonumber\\
&&\otimes[\frac{1}{\sqrt{2}}(|GHZ^{+}_{m}\rangle_{B}|GHZ^{+}_{m}\rangle_{C}
+|GHZ^{-}_{m}\rangle_{B}|GHZ^{-}_{m}\rangle_{C})]\nonumber\\
&=&\frac{1}{2}[\frac{1}{\sqrt{2}}(|GHZ^{+}_{m}\rangle_{A}|GHZ^{+}_{m}\rangle_{B}
+|GHZ^{-}_{m}\rangle_{A}|GHZ^{-}_{m}\rangle_{B})]\nonumber\\
&\otimes&(\alpha|GHZ^{+}_{m}\rangle_{C}+\beta|GHZ^{-}_{m}\rangle_{C})\nonumber\\
&+&\frac{1}{2}[\frac{1}{\sqrt{2}}(|GHZ^{+}_{m}\rangle_{A}|GHZ^{+}_{m}\rangle_{B}
-|GHZ^{-}_{m}\rangle_{A}|GHZ^{-}_{m}\rangle_{B})]\nonumber\\
&\otimes&(\alpha|GHZ^{+}_{m}\rangle_{C}-\beta|GHZ^{-}_{m}\rangle_{C})\nonumber\\
&+&\frac{1}{2}[\frac{1}{\sqrt{2}}(|GHZ^{+}_{m}\rangle_{A}|GHZ^{-}_{m}\rangle_{B}
+|GHZ^{-}_{m}\rangle_{A}|GHZ^{+}_{m}\rangle_{B})]\nonumber\\
&\otimes&(\alpha|GHZ^{-}_{m}\rangle_{C}+\beta|GHZ^{+}_{m}\rangle_{C})\nonumber\\
&+&\frac{1}{2}[\frac{1}{\sqrt{2}}(|GHZ^{+}_{m}\rangle_{A}|GHZ^{-}_{m}\rangle_{B}
-|GHZ^{-}_{m}\rangle_{A}|GHZ^{+}_{m}\rangle_{B})]\nonumber\\
&\otimes&(\alpha|GHZ^{-}_{m}\rangle_{C}-\beta|GHZ^{+}_{m}\rangle_{C}).\label{teleportation}
\end{eqnarray}
Obviously, from Eq. (\ref{teleportation}), according to the result of the LBSA, Alice can teleport the arbitrary
logic qubit $|\varphi\rangle_{A}$ to Bob.
\subsection{Logic qubit entanglement swapping}
Let pairs AB and CD be in the following logic entangled
states
\begin{eqnarray}
|\Psi\rangle_{AB}&=&\frac{1}{\sqrt{2}}(|GHZ^{+}_{m}\rangle_{A}|GHZ^{+}_{m}\rangle_{B}\nonumber\\
&+&|GHZ^{-}_{m}\rangle_{A}|GHZ^{-}_{m}\rangle_{B}),
\end{eqnarray}
and
\begin{eqnarray}
|\Psi\rangle_{CD}&=&\frac{1}{\sqrt{2}}(|GHZ^{+}_{m}\rangle_{C}|GHZ^{+}_{m}\rangle_{D}\nonumber\\
&+&|GHZ^{-}_{m}\rangle_{C}|GHZ^{-}_{m}\rangle_{D}).
\end{eqnarray}
If we perform a logic Bell-state measurement on the logic qubits B and C, the whole system can be described as
\begin{eqnarray}
&&|\Psi\rangle_{AB}\otimes|\Psi\rangle_{CD}=[\frac{1}{\sqrt{2}}(|GHZ^{+}_{m}\rangle_{A}|GHZ^{+}_{m}\rangle_{B}\nonumber\\
&+&|GHZ^{-}_{m}\rangle_{A}|GHZ^{-}_{m}\rangle_{B})]
\otimes[\frac{1}{\sqrt{2}}(|GHZ^{+}_{m}\rangle_{C}|GHZ^{+}_{m}\rangle_{D}\nonumber\\
&+&|GHZ^{-}_{m}\rangle_{C}|GHZ^{-}_{m}\rangle_{D})]\nonumber\\
&&=\frac{1}{2}[\frac{1}{\sqrt{2}}(|GHZ^{+}_{m}\rangle_{A}|GHZ^{+}_{m}\rangle_{D}\nonumber\\
&+&|GHZ^{-}_{m}\rangle_{A}|GHZ^{-}_{m}\rangle_{D})]
\otimes[\frac{1}{\sqrt{2}}(|GHZ^{+}_{m}\rangle_{B}|GHZ^{+}_{m}\rangle_{C}\nonumber\\
&+&|GHZ^{-}_{m}\rangle_{B}|GHZ^{-}_{m}\rangle_{C})]\nonumber\\
&+&\frac{1}{2}[\frac{1}{\sqrt{2}}(|GHZ^{+}_{m}\rangle_{A}|GHZ^{+}_{m}\rangle_{D}\nonumber\\
&-&|GHZ^{-}_{m}\rangle_{A}|GHZ^{-}_{m}\rangle_{D})]
\otimes[\frac{1}{\sqrt{2}}(|GHZ^{+}_{m}\rangle_{B}|GHZ^{+}_{m}\rangle_{C}\nonumber\\
&-&|GHZ^{-}_{m}\rangle_{B}|GHZ^{-}_{m}\rangle_{C})]\nonumber\\
&+&\frac{1}{2}[\frac{1}{\sqrt{2}}(|GHZ^{+}_{m}\rangle_{A}|GHZ^{-}_{m}\rangle_{D}\nonumber\\
&+&|GHZ^{-}_{m}\rangle_{A}|GHZ^{+}_{m}\rangle_{D})]
\otimes[\frac{1}{\sqrt{2}}(|GHZ^{+}_{m}\rangle_{B}|GHZ^{-}_{m}\rangle_{C}\nonumber\\
&+&|GHZ^{-}_{m}\rangle_{B}|GHZ^{+}_{m}\rangle_{C})]\nonumber\\
&+&\frac{1}{2}[\frac{1}{\sqrt{2}}(|GHZ^{+}_{m}\rangle_{A}|GHZ^{-}_{m}\rangle_{D}\nonumber\\
&-&|GHZ^{-}_{m}\rangle_{A}|GHZ^{+}_{m}\rangle_{D})]
\otimes[\frac{1}{\sqrt{2}}(|GHZ^{+}_{m}\rangle_{B}|GHZ^{-}_{m}\rangle_{C}\nonumber\\
&-&|GHZ^{-}_{m}\rangle_{B}|GHZ^{+}_{m}\rangle_{C})].\label{swapping}
\end{eqnarray}
From Eq. (\ref{swapping}), if the LBSA projects B and C onto $\frac{1}{\sqrt{2}}(|GHZ^{+}_{m}\rangle_{B}|GHZ^{+}_{m}\rangle_{C}
+|GHZ^{-}_{m}\rangle_{B}|GHZ^{-}_{m}\rangle_{C})$, the logic qubits A and D collapse to the state
$\frac{1}{\sqrt{2}}(|GHZ^{+}_{m}\rangle_{A}|GHZ^{+}_{m}\rangle_{D}+|GHZ^{-}_{m}\rangle_{A}|GHZ^{-}_{m}\rangle_{D})$, with probability $\frac{1}{4}$. On the other hand, if the
measurement of B and C yields one of the other logic Bell states, we obtain the corresponding entangled state of A and D, which can be transformed into
$|\Psi\rangle_{AD}$ deterministically.
\section{Conclusion}
In conclusion, we have presented a protocol for the LBSA, exploiting CNOT gates, Hadamard gates and single-qubit measurements
to complete the task. We have also shown that arbitrary C-GHZ states can be distinguished in the same way.
The number of physical qubits in each logic qubit does not affect the analysis of the C-GHZ state; therefore, the analysis of a
C-GHZ state with arbitrary $N$ and $m$ is equivalent to that with $m=2$. Both the LBSA and the C-GHZ state analysis can be reduced to the
standard Bell-state and GHZ-state analyses, respectively, which can be performed with the help of CNOT gates.
As the C-GHZ state is more robust than the normal GHZ state in a noisy environment \cite{cghz,pan}, our LBSA suggests that it is possible to perform
long-distance quantum communication based on logic qubits rather than directly on physical qubits.
\section*{ACKNOWLEDGEMENTS}
This work is supported by the National Natural Science Foundation of
China under Grant No. 11104159 and 11347110, and a Project Funded by the Priority
Academic Program Development of Jiangsu Higher Education
Institutions.
\section{Introduction}
When two nuclei collide, the density and pressure increase
in the interaction region, and at finite colliding geometries
there is an inherent asymmetry in the pressure, which results in a
collective transverse flow of matter toward the direction of
lowest pressure. This collective transverse flow is one of the
most extensively used observables to study the equation of state
(EoS) as well as the in-medium nucleon-nucleon (nn) cross section of
nuclear matter. At the particular energy where the
attractive scattering (dominant at energies around 10 MeV/nucleon)
balances the repulsive interactions (dominant at energies around
400 MeV/nucleon), the collective transverse flow in the reaction plane
disappears. This energy is termed the balance energy ($E_{bal}$)
\cite{1,2,3,4,5,6,7,8,9,10}.
\par
Both collective flow and $E_{bal}$ are found to be highly
sensitive to the nn cross section \cite{2,3,4,5,6,7,8,9,10},
size of the system ($A_{TOT} = A_{T}+A_{P}$; where $A_{T}$ and
$A_{P}$ are the masses of the target and projectile, respectively)
\cite{4}, mass asymmetry of the reaction ($\eta$ =
$\frac{A_{T}-A_{P}}{A_{T}+A_{P}}$) \cite{11}, nuclear matter
equation of state \cite{2,3,4,5,6,7,8,9,10}, incident energy (E
MeV/nucleon) \cite{12} as well as the colliding geometry
($\hat{b}$ = $\frac{b}{b_{max}}$; where $b_{max}$ = $R_{1} +
R_{2}$; $R_{i}$ is the radius of projectile or target)
\cite{2,5,7,9,10}. Several experimental and theoretical attempts
have been made to explain and understand these
observations \cite{1,2,3,4,5,6,7,8,9,10,11,12}. Because the
compression reached in heavy-ion collisions decreases with
increasing impact parameter, $E_{bal}$ is found to increase
approximately linearly with the impact parameter
\cite{13,14}. However, most studies in the literature mainly
focus on symmetric and nearly symmetric reactions. Our present
aim, therefore, is at least twofold: (1) to study the variation of
collective transverse flow with impact parameter for different
$\eta$ at fixed incident energy, and (2) to study the
sensitivity of collective transverse flow and its disappearance to
the mass asymmetry of the reaction at different impact parameters.
The Quantum Molecular Dynamics (QMD) model \cite{8,15,16} used for
the present analysis is explained in section 2. Results and
discussion are presented in section 3, followed by a summary in
section 4.
\section{\label{model}Description of the model}
In the QMD model, each nucleon propagates under the influence of
mutual two- and three-body interactions. The propagation is
governed by the classical equations of motion:
\begin{equation}
\frac {d\vec{r}_{i}}{dt} = \frac {dH}{d\vec{p}_{i}},
\end{equation}
\begin{equation}
\frac {d\vec{p}_{i}}{dt} = -\frac {dH}{d\vec{r}_{i}},
\end{equation}
where the Hamiltonian is given by
\begin{equation}
H=\sum_{i} \frac {\vec{p}_{i}^{2}}{2m_{i}} + V ^{tot}.
\end{equation}
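Equations (1) and (2) are integrated numerically in QMD. As a minimal illustration of such an integration (a toy one-particle quadratic potential $V(r)=kr^{2}/2$, standing in for the actual mutual interactions), a leapfrog scheme for Hamilton's equations conserves the energy well:

```python
import numpy as np

# Leapfrog integration of Hamilton's equations for a single particle in a
# toy quadratic potential V(r) = k r^2 / 2 (illustrative, not the QMD force).
m, k, dt = 1.0, 1.0, 0.01
r, p = np.array([1.0, 0.0]), np.array([0.0, 0.0])

def energy(r, p):
    return p @ p / (2 * m) + k * (r @ r) / 2

E0 = energy(r, p)
for _ in range(1000):
    p = p - 0.5 * dt * k * r          # half kick:  dp/dt = -dH/dr
    r = r + dt * p / m                # drift:      dr/dt =  dH/dp
    p = p - 0.5 * dt * k * r          # half kick
assert abs(energy(r, p) - E0) < 1e-4  # leapfrog conserves energy well
```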
Our total interaction potential $V^{tot}$ reads as \cite{8,15,16}
\begin{equation}
V^{tot} = V^{Loc} + V^{Yuk} + V^{Coul},
\end{equation}
with
\begin{equation}
V^{Loc} = t_{1}\delta(\vec{r}_{i}-\vec{r}_{j})+
t_{2}\delta(\vec{r}_{i}-\vec{r}_{j})
\delta(\vec{r}_{i}-\vec{r}_{k}),
\end{equation}
\begin{equation}
V^{Yuk}=t_{3}e^{-|\vec{r}_{i}-\vec{r}_{j}|/m}/\left(|\vec{r}_{i}-\vec{r}_{j}|/m\right),
\end{equation}
with ${\it m}$ = 1.5 fm and $\it{t_{3}}$ = -6.66 MeV.
\par
The static (local) Skyrme interaction can further be parametrized
as:
\begin{equation}
U^{Loc}=\alpha\left(\frac{\rho}{\rho_{o}}\right)+
\beta\left(\frac{\rho}{\rho_{o}}\right)^{\gamma}.
\end{equation}
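As a numerical illustration of Eq. (7), the snippet below evaluates $U^{Loc}$ as a function of $\rho/\rho_{o}$. The parameter values are placeholders in the range commonly quoted for a soft EoS; the values actually used in this work are those of Ref. \cite{16}:

```python
import numpy as np

def u_local(rho_ratio, alpha, beta, gamma):
    """Static Skyrme potential U^Loc = alpha*(rho/rho0) + beta*(rho/rho0)^gamma, Eq. (7)."""
    return alpha * rho_ratio + beta * rho_ratio ** gamma

# Illustrative soft-EoS parameters (MeV); placeholders, see Ref. [16]
# for the parameter sets actually employed.
alpha, beta, gamma = -356.0, 303.0, 7.0 / 6.0

# At normal nuclear density (rho = rho0) the potential is attractive ...
assert u_local(1.0, alpha, beta, gamma) == alpha + beta  # = -53 MeV
# ... while at high compression the repulsive term dominates.
assert u_local(3.0, alpha, beta, gamma) > 0
```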
Here $\alpha$, $\beta$ and $\gamma$ are the parameters that define the
equation of state; they must be adjusted so as to reproduce the
ground-state properties of nuclear matter. The sets of parameters
corresponding to different equations of state can be found in Ref.
\cite{16}. It is worth mentioning that, as shown by Puri and
low energy phenomena such as fusion, fission and
cluster-radioactivity, where nuclear potential plays an important
role \cite{sky}.
\begin{figure}[!t]
\centering \vskip - 1.0 cm
\includegraphics* [scale=0.7] {1.1.eps
\vskip -5.0 cm \caption {(Color online) The $<P^{dir}_{x}>$
(MeV/c) as a function of reduced impact parameter ($b/b_{max}$)
for different system masses. The results for different mass
asymmetries $\eta$ = 0, 0.1, 0.3, 0.5, and 0.7 are represented,
respectively, by the solid squares, circles, triangles, inverted
triangles, and diamonds. Results are at an incident energy of 200
MeV/nucleon.}
\end{figure}
\begin{figure}
\centering \vskip -7.5 cm
\includegraphics* [scale=0.7] {1.2.eps
\vskip -2.0 cm \caption {(Color online) The geometry of vanishing
flow (GVF) as a function of ${\eta}$ for different system masses.
The results for different system masses ($A_{TOT}$) = 40, 80, 160,
and 240 are represented, respectively, by the half filled squares,
circles, triangles, and inverted triangles.}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics* [scale=0.7] {1.3.eps
\vskip -1.0 cm \caption {(Color online) The $<P^{dir}_{x}>$
(MeV/c) as a function of incident energy for system mass $A_{TOT}$
= 240. The results are shown for different mass asymmetries
($\eta$ = 0-0.7) at reduced impact parameters $b/b_{max}$ = 0.25
(upper panel), 0.5 (middle panel), and 0.75 (bottom panel). The
lines are only to guide the eye. Symbols have the same meaning as
in Fig. 1.}\label{fig3}
\end{figure}
\section{\label{results} Results and Discussion}
For the present study, we simulated the reactions of
$^{20}_{10}Ne+^{20}_{10}Ne$ ($\eta = 0$),
$^{17}_{8}O+^{23}_{11}Na$ ($\eta = 0.1$),
$^{14}_{7}N+^{26}_{12}Mg$ ($\eta = 0.3$),
$^{10}_{5}B+^{30}_{14}Si$ ($\eta = 0.5$), and
$^{6}_{3}Li+^{34}_{16}S$ ($\eta = 0.7$) for total mass ($A_{TOT}$)
= 40, $^{40}_{20}Ca+^{40}_{20}Ca$ ($\eta = 0$),
$^{36}_{18}Ar+^{44}_{20}Ca$ ($\eta = 0.1$),
$^{28}_{14}Si+^{52}_{24}Cr$ ($\eta = 0.3$),
$^{20}_{10}Ne+^{60}_{28}Ni$ ($\eta = 0.5$), and
$^{10}_{5}B+^{70}_{32}Ge$ ($\eta = 0.7$) for total mass
($A_{TOT}$) = 80, $^{80}_{36}Kr+^{80}_{36}Kr$ ($\eta = 0$),
$^{70}_{32}Ge+^{90}_{40}Zr$ ($\eta = 0.1$),
$^{54}_{26}Fe+^{106}_{48}Cd$ ($\eta = 0.3$),
$^{40}_{20}Ca+^{120}_{52}Te$ ($\eta = 0.5$), and
$^{24}_{12}Mg+^{136}_{58}Ce$ ($\eta = 0.7$) for total mass
($A_{TOT}$) = 160, and $^{120}_{52}Te+^{120}_{52}Te$ ($\eta = 0$),
$^{108}_{48}Cd+^{132}_{56}Ba$ ($\eta = 0.1$),
$^{84}_{38}Sr+^{156}_{66}Dy$ ($\eta = 0.3$),
$^{60}_{28}Ni+^{180}_{74}W$ ($\eta = 0.5$), and
$^{36}_{18}Ar+^{204}_{82}Pb$ ($\eta = 0.7$) for total mass
($A_{TOT}$) = 240. The impact parameter is varied from b/b$_{max}$
= 0 to 1 in small steps of 0.25. The charges are chosen such
that the colliding nuclei are stable nuclides. A soft equation of
state with the isotropic energy-dependent Cugnon cross section
(labeled Soft$^{iso}$) is used for the present calculations.
\par
The balance energy ($E_{bal}$) is calculated using the {\it
directed transverse momentum} $<P^{dir}_{x}>$, which is defined
as:
\begin{equation}
\langle P_{x}^{dir}\rangle=\frac{1}{A}\sum_i {\rm
sign}\{Y(i)\}\, p_{x}(i),
\end{equation}
where $Y(i)$ and $p_{x}(i)$ are the rapidity and the transverse
momentum of the $i$th particle, respectively.
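As a numerical illustration, the estimator above reduces to a one-line sum over the particles of an event. The sketch below uses invented toy values for $Y(i)$ and $p_x(i)$ (not taken from the simulations) purely to show the arithmetic:

```python
import numpy as np

def directed_transverse_momentum(rapidity, p_x):
    """<P_x^dir> = (1/A) * sum_i sign(Y(i)) * p_x(i), A = number of particles."""
    rapidity = np.asarray(rapidity, dtype=float)
    p_x = np.asarray(p_x, dtype=float)
    return np.sum(np.sign(rapidity) * p_x) / p_x.size

# toy event: forward-going particles pushed to +x, backward-going to -x
Y   = np.array([0.5, 0.3, -0.4, -0.2])    # rapidities
p_x = np.array([10.0, 5.0, -8.0, -3.0])   # MeV/c
flow = directed_transverse_momentum(Y, p_x)
```

With these toy numbers the sum gives $(10+5+8+3)/4 = 6.5$ MeV/c; the balance energy is the incident energy at which this quantity crosses zero.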
\par
In Fig. 1, we display, at a fixed incident energy, $<P^{dir}_{x}>$ as a
function of the reduced impact parameter ($b/b_{max}$) for ${\eta}$ =
0-0.7, keeping the total system mass fixed at 40, 80, 160, and 240.
All reactions are followed until 200 fm/c, where $<P^{dir}_{x}>$
saturates. In all cases $<P^{dir}_{x}>$ first increases with the
impact parameter, reaches a maximal value, and, after passing through
zero at some intermediate impact parameter, attains negative values.
The trend is uniform throughout the mass-asymmetry range. The value of
the impact parameter at which $<P^{dir}_{x}>$ vanishes (termed the
Geometry of Vanishing Flow (GVF) \cite{17}) varies with ${\eta}$ and
$A_{TOT}$: for lighter systems and larger ${\eta}$, the GVF is smaller
than for heavier systems and smaller ${\eta}$.
\begin{figure}[!t]
\centering
\includegraphics*[scale=0.7]{1.4.eps}
\vskip -0.5 cm \caption {(Color online) The decomposition of
$<P^{dir}_{x}>$ displayed in Fig. 3 into mean field (half filled
symbols) and collision part (open symbols) as a function of
incident energy.}\label{fig4}
\end{figure}
\begin{figure}
\centering \vskip -7.5 cm
\includegraphics*[scale=0.7]{1.5.eps}
\vskip -2.0 cm \caption {(Color online) The $E_{bal}$ as a
function of $\eta$ for system mass $A_{TOT}$ = 240. The results
for different impact parameters $b/b_{max}$ = 0.25, 0.5, and 0.75
are represented, respectively, by the crossed squares, circles,
and triangles. Lines are to guide the eye.}\label{fig5}
\end{figure}
\begin{figure}
\centering \vskip -1.0 cm
\includegraphics*[scale=0.7]{1.6.eps}
\vskip -5.0 cm \caption {(Color online) The percentage difference
$\Delta E_{bal}^{b/b_{max}}$(\%) as a function of ${\eta}$ for
different system masses. The results of the percentage difference
for different colliding geometries $b/b_{max}$ = 0.5 and 0.75 are
represented, respectively, by the crossed circles and triangles.
Horizontal lines represent the mean value of $\Delta
E_{bal}^{b/b_{max}}$(\%) for each $b/b_{max}$.}\label{fig6}
\end{figure}
\begin{figure}
\centering \vskip - 1.0 cm
\includegraphics*[scale=0.7]{1.7.eps}
\vskip -5.0 cm \caption {(Color online) The percentage difference
$\Delta E_{bal}^{\eta}$(\%) as a function of $b/b_{max}$ for
different system masses. The results of the percentage difference
for different asymmetries $\eta$ = 0.1, 0.3, 0.5, and 0.7 are
represented, respectively, by the solid circles, triangles,
inverted triangles, and diamonds. Lines are the linear fits
($\propto m\frac {b}{b_{max}}$); {\it m} values without errors are
displayed.}\label{fig7}
\end{figure}
\par
In Fig. 2, the variation of the GVF as a function of ${\eta}$ is
displayed for different system masses. The percentage variation in the
GVF in going from ${\eta}$ = 0 to 0.7 is -40.9\%, -24.59\%,
-16.44\%, and -20.93\% for $A_{TOT}$ = 40, 80, 160, and 240,
respectively. It is clear from the figure that the effect of the mass
asymmetry of the reaction on the GVF decreases with increasing system
mass, similar to the trend predicted for $E_{bal}$ \cite{11}. Because
the number of nn collisions decreases with increasing ${\eta}$ and
impact parameter, while the Coulomb repulsion grows with $A_{TOT}$,
the $E_{bal}$ increases with ${\eta}$ and impact parameter and
decreases with $A_{TOT}$. Since the present study is at a fixed
incident energy, the impact parameter at which the flow vanishes
therefore decreases as ${\eta}$ increases.
\par
In Fig. 3, we display $<P^{dir}_{x}>$ as a function of incident
energy ranging between 40 MeV/nucleon and 800 MeV/nucleon, for
different mass asymmetric reactions keeping the total mass of the
system fixed as 240. The results are shown for different impact
parameters. In all cases, the transverse momentum is negative at lower
incident energies and turns positive at relatively higher incident
energies. The value of the abscissa at which $<P^{dir}_{x}>$ vanishes
corresponds to the balance energy. The figure indicates that (i) for
all values of $\eta$ and impact parameter, the transverse momentum
increases monotonically with the incident energy; the increase is
sharp at small incident energies, whereas it starts to saturate at
higher energies. (ii) Due to the decrease in overlap volume, and hence
in the number of collisions, with increasing $\eta$, the transverse
momentum is suppressed as $\eta$ increases, and the balance energy
therefore increases with $\eta$. (iii) The variation of
$<P^{dir}_{x}>$ with $\eta$ decreases with increasing impact
parameter, yet the balance energy increases with impact parameter for
all values of $\eta$.
\par
The above findings can be understood by decomposing the total
transverse momentum into contributions from the mean field and from
two-body nn collisions, as shown in Fig. 4. The symbols are explained
in the figure caption. One notices that the mean-field flow increases
up to a couple of hundred MeV/nucleon and then saturates. In the lower
incident-energy region, the flow due to the mean field is smaller for
larger asymmetries than for smaller ones, while at higher incident
energies it is nearly independent of $\eta$. Similar behavior is seen
for the other impact parameters. The flow due to binary nn collisions,
however, increases with incident energy for all $\eta$, but as the
impact parameter increases, the nn collision flow starts to saturate.
As explained earlier, the flow due to binary nn collisions decreases
with increasing $\eta$.
\par
In Fig. 5, we display $E_{bal}$ as a function of $\eta$ for
$b/b_{max}$ = 0.25, 0.5, and 0.75, keeping $A_{TOT}$ fixed as 240.
Various symbols are explained in the caption of the figure. From
the figure, we see that $E_{bal}$ increases with $\eta$ for all values
of $b/b_{max}$. Moreover, the $\eta$ dependence of $E_{bal}$ becomes
stronger with increasing impact parameter.
\par
In Fig. 6, we display the percentage change in balance energy,
$\Delta E_{bal}^{b/b_{max}}$(\%) =
(($E_{bal}^{b/b_{max}\neq0.25}$-$E_{bal}^{b/b_{max}=0.25}$)/$E_{bal}^{b/b_{max}=0.25}$)$\times$100,
as a function of $\eta$. Horizontal lines represent the mean value of
the variation. It is clear from the figure that the effect of the
impact-parameter variation is almost uniform throughout the asymmetry
range for every fixed system mass, i.e., it is independent of
${\eta}$. We also find that the mean variation decreases with
increasing $A_{TOT}$.
\par
In Fig. 7, we display the percentage difference $\Delta
E_{bal}^{\eta}$(\%) = (($E_{bal}^{\eta\neq0}$ -
$E_{bal}^{\eta=0}$)/$E_{bal}^{\eta=0}$)$\times$100 as a function of
the reduced impact parameter ($b/b_{max}$). Lines are linear fits
($\propto m\frac {b}{b_{max}}$). It is clear from the figure that the
effect of the asymmetry variation increases with the impact parameter
for each mass range. This is because the number of binary nn
collisions decreases with increasing impact parameter, and an increase
in mass asymmetry further enhances this reduction.
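The two percentage measures above, and the origin-constrained linear fit of Fig. 7, are elementary to reproduce. The sketch below uses invented balance energies (not the simulation values) purely to illustrate the arithmetic:

```python
import numpy as np

def pct_diff(e, e_ref):
    """Percentage difference 100 * (e - e_ref) / e_ref."""
    return 100.0 * (np.asarray(e) - np.asarray(e_ref)) / np.asarray(e_ref)

# hypothetical E_bal (MeV/nucleon) for eta = 0.3 versus eta = 0
b_red  = np.array([0.25, 0.50, 0.75])     # b / b_max
e_eta  = np.array([110.0, 130.0, 160.0])  # invented values
e_eta0 = np.array([100.0, 112.0, 128.0])  # invented values
delta  = pct_diff(e_eta, e_eta0)          # Delta E_bal^eta (%)

# slope m of the origin-constrained least-squares fit delta ~ m * (b/b_max)
m = np.sum(b_red * delta) / np.sum(b_red ** 2)
```

A positive slope $m$ corresponds to the trend reported above: the asymmetry effect grows with the impact parameter.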
\section{\label{summary} Summary}
We presented a detailed study of the role of the impact parameter in
the collective flow and its disappearance for different
mass-asymmetric reactions using the quantum molecular dynamics model.
For the present study, the mass asymmetry of the reaction is varied
from 0 to 0.7 while keeping the total mass of the system fixed. A
significant role of the impact parameter in the collective flow and
its disappearance is seen for the mass-asymmetric reactions. The
impact-parameter dependence is also found to vary with the mass
asymmetry of the reaction.
\section{Acknowledgments}
The author is thankful to the Council of Scientific and Industrial
Research (CSIR) for providing a Junior Research Fellowship.
\section{References}
\medskip
\section{Introduction}
Compressed Sensing \cite{Candes06,Donoho06} is a theoretical framework which gives guarantees for recovering sparse signals (signals represented by few non-zero coefficients in a given basis) from a limited number of linear projections.
In some applications, the measurement basis is fixed and the projections should be selected amongst a fixed set.
For instance, in MRI, the signal is sparse in the wavelet basis, and the sampling is performed in the spatial~(2D or 3D) Fourier basis (called $k$-space). Possible measurements are then projections on the lines of matrix $A=F^* \Psi$, where $F^*$ and $\Psi$ denote the Fourier and inverse wavelet transform, respectively.
Recent results~\cite{Rauhut10,Candes11} give bounds on the number of measurements $m$ needed to exactly recover $s$-sparse signals in $\C^n$ or $\R^n$ in the framework of bounded orthogonal systems. The authors have shown that for a given $s$-sparse signal, the number of measurements needed to ensure its perfect recovery is $O(s \log(n))$. This methodology, called \textit{variable density sampling}, involves an independent and identically distributed~(iid) random drawing and has already given promising results in reconstruction simulations~\cite{Lustig07,Puy11}. Nevertheless, in real MRI, such sampling patterns cannot be implemented because of the limited speed of magnetic-field-gradient commutation. Hardware constraints require at least continuity of the sampling trajectory, which is not satisfied by two-dimensional iid sampling. In this paper, we introduce a new Markovian sampling scheme to enforce continuity. Our approach relies on the following reconstruction condition introduced by Juditsky, Karzan and Nemirovski~\cite{Juditsky11}:
\begin{thm}[\cite{Juditsky11}]
If $A \in \R^{m \times n}$ satisfies
\begin{equation*}
\gamma(A)=\min_{Y\in \R^{m\times n}} \|I_n-Y^T A\|_\infty < \frac{1}{2 s},
\end{equation*}
then all $s$-sparse signals $x \in \R^n$ are recovered exactly by solving
\begin{equation}
\label{eq:minL1}
\underset{A w=A x}{\operatorname{argmin}} \ \|w\|_1 .
\end{equation}
\end{thm}
\noindent The quantity $\gamma(A)$ can be seen as an alternative to the \textit{mutual coherence}~\cite{Donoho06}.
We will show that this criterion makes it possible to obtain theoretical guarantees on the number of measurements necessary to reconstruct $s$-sparse signals, using variable density sampling or Markovian sampling. Unfortunately, the bounds we obtain are in $O(s^2)$. This phenomenon is due to the \textit{quadratic bottleneck} described in \cite{Rauhut10}. We are currently trying to obtain $O(s)$ results using different proof strategies.
\subsection*{Notation}
A signal $x \in \R^n$ is said to be $s$-sparse if it has at most $s$ non-zero coefficients. The signal $x$ is measured through the acquisition system represented by a full matrix $A$. Downsampling the measurements consists of deriving a matrix $A_m$ composed of $m$ rows of $A$ and observing $y=A_m x \in \R^m$.
\section{\label{part:2}Theoretical result}
\subsection{Independent Sampling}
We aim at finding $A_m \in \R^{m \times n}$ composed of $m$ rows of $A$, and $Y_m \in \R^{m\times n}$ such that $\|I_n-Y_m^T A_m \|_\infty < \frac{1}{2 s}$, for a given positive integer $s$.
Following \cite{Juditsky11b}, we set $\Theta_i=\frac{a_i a_i^T}{\pi_i}$ and use the decomposition $I_n = A^TA = \sum_{i=1}^{n} \pi_i \Theta_i$.
We consider a sequence of $m$ random i.i.d. matrices $Z_1 , \dots, Z_m$, taking value $\Theta_i$ with probability $\pi_i$.
We set $\pi_i = \|a_i\|_\infty^2/L$, where $L = \sum_{i=1}^n \|a_i\|_\infty^2$, so that $\|Z_l\|_\infty$ is equal to $L$.
Let us denote $W_m = \frac{1}{m} \sum_{l=1}^m Z_l$. Then $W_m$ may be written as $Y_m^T A_m$.
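The construction above is easy to check numerically. The sketch below (a toy illustration with $A$ taken as the identity, an assumption made only to keep the example self-contained) draws $m$ rows i.i.d. from $\pi$ and forms $W_m$; by construction $\Exp(W_m)=I_n$, and $\|I_n-W_m\|_\infty$ shrinks as $m$ grows:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 2000
A = np.eye(n)                        # toy orthogonal measurement matrix

# pi_i = ||a_i||_inf^2 / L,  with  L = sum_i ||a_i||_inf^2
w = np.max(np.abs(A), axis=1) ** 2
L = w.sum()
pi = w / L

# draw m indices i.i.d. from pi and average Theta_i = a_i a_i^T / pi_i
idx = rng.choice(n, size=m, p=pi)
W_m = sum(np.outer(A[i], A[i]) / pi[i] for i in idx) / m

err = np.max(np.abs(np.eye(n) - W_m))  # ||I_n - W_m||_inf
```

For this toy case the off-diagonal entries of $W_m$ vanish exactly and the deviation on the diagonal is governed by the Bernstein-type concentration of the next lemma.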
\begin{lemma}
\label{prop:indep}
$\forall t >0$ \\
\begin{equation}
\label{eq:conc_indep}
\mathbb{P}(\|I_n - W_m\|_\infty >t) \leq n (n+1) \exp \Bigl(- \frac{m t^2}{2 L^2 + 2 Lt/3} \Bigr).
\end{equation}
\end{lemma}
\begin{proof}
Bernstein's concentration inequality \cite{ledoux01} states that if $X_1, \dots, X_m$ are independent zero-mean random variables such that for all $i$, $|X_i| \leq \alpha$ and $\displaystyle \sigma^2= \sum_i \Exp \left(X_i^2\right) < \infty$, then $\forall t>0$
\begin{equation*}
\mathbb{P} \left( |\sum_{i=1}^{m} X_i| >t \right) \leq 2 \exp \left(- \frac{t^2}{2(\sigma^2+\alpha t/3)} \right).
\end{equation*}
For $1\leq a, b \leq n$, let $M^{(a,b)}$ denote the $(a,b)$th entry of a matrix $M \in \R^{n \times n}$. The random variable $(I_n-Z_l)^{(a,b)}$ is centered since $\sum_{i=1}^n \pi_i \Theta_i = I_n$.
Moreover, $|(I_n-Z_l)^{(a,b)}| \leq L$. Applying Bernstein's inequality to the sequence $\frac{1}{m}\left((I_n-Z_l)^{(a,b)}\right)_{1\leq l\leq m}$ gives
\begin{equation*}
\mathbb{P} \left( |(I_n-W_m)^{(a,b)} | >t\right) \leq 2 \exp \left(- \frac{m t^2}{2 L^2 + 2Lt/3 } \right).
\end{equation*}
Finally, using a union bound and the symmetry property of matrix $(I_n-W_m)$, we get:
\begin{equation}
\label{eq:ineg_inf}
\mathbb{P} \left(\|I_n-W_m\|_\infty > t\right) \leq \sum_{1\leq a \leq b \leq n} \mathbb{P} \left(|(I_n-W_m)^{(a,b)}| > t\right).
\end{equation}
Since the bound on $\mathbb{P} \left(|(I_n-W_m)^{(a,b)}| > t\right)$ does not depend on $(a,b)$, we obtain Eq.~\eqref{eq:conc_indep}.
\end{proof}
\begin{remark}
Setting $t=4 L \sqrt{\frac{2\ln (2 n^2)}{m}}$ in Lemma~\ref{prop:indep}, the bound given by Juditsky et al. in \cite{Juditsky11b} is $\mathbb{P} \left( \|I_n-W_m\|_\infty \geq t \right) \leq \frac{1}{2}$. This bound is obtained by upper-bounding the mean of $\|I_n-W_m\|_\infty$ and using the Markov inequality.
Setting the same value of $t$ in Eq.~\eqref{eq:conc_indep}, and assuming $t\leq L$, we obtain $\mathbb{P} \left( \|I_n-W_m\|_\infty \geq t \right) \leq \frac{1}{2 n^{4}}$.
This huge difference comes from the inability of the Markov inequality to capture large-deviation behavior.
\end{remark}
From lemma~\ref{prop:indep}, we can derive the immediate following result by setting $t=1/2s$:
\begin{prop}
Let $A_m$ be a measurement matrix designed by drawing $m$ lines of $A$ under the distribution $\pi$. Then, with probability $1-\eta$, if
\begin{equation}
m \geqslant 5 L^2 s^2 \log(n^2/\eta),
\end{equation}
every $s$-sparse signal $x$ is the unique solution of the $\ell_1$ problem:
\begin{equation*}
\underset{A_m w=A_m x}{\operatorname{argmin}} \ \|w\|_1
\end{equation*}
\end{prop}
\subsection{Markovian sampling}
Sampling patterns obtained using the strategy presented in Section \ref{part:2} are not usable on many practical devices.
A common constraint met on many hardware platforms (e.g., MRI scanners) is the proximity of successive measurements.
A simple way to model the dependence between successive samples consists of introducing a Markov chain $X_1 \dots X_m$ on the set $\{1, \dots, n\}$ that represents the locations of possible measurements. The transition probability to go from location $i$ to location $j$ is positive if and only if sampling $i$ and $j$ successively is possible. We denote $W_m=\frac{1}{m}\sum_{l=1}^{m} \Theta_{X_l}$.\\
In order to use a concentration inequality, $W_m$ should satisfy $\Exp \left(W_m\right)=I_n$. We thus need (i) to set the stationary distribution of the Markov chain to $\pi$ and (ii) to initialize the chain with its stationary distribution $\pi$. These two conditions ensure that the marginal distribution of the chain is $\pi$ at any time. The issue of designing such a chain is widely studied in the framework of Markov chain Monte Carlo (MCMC) algorithms.
A simple way to build up the transition matrix $P= (P_{ij})_{1\leq i,j \leq n}$ is the Metropolis
algorithm~\cite{hastings1970montecarlo}. Let us now recall a concentration inequality for finite-state Markov chains~\cite{Lezaud98}.
\begin{thm}
\label{thm:Lezaud}
Let $(P,\pi)$ be an irreducible and reversible Markov chain on a finite set $G$ of size $n$. Let $f:G \rightarrow \mathbb{R}$ be such that $\sum_{i=1}^n\pi_i f_i = 0 , \, \|f\|_\infty \leq 1$ and $0< \sum_{i=1}^n f_i^2 \pi_i \leq b^2$. Then, for any initial distribution $q$, any positive integer $m$ and all $0< t\leq 1$,
\begin{equation*}
\mathbb{P} \Bigl(\frac{1}{m} \sum_{i=1}^m f(X_i) \geq t \Bigr) \leq e^{\frac{\epsilon(P)}{5}} N_q \exp \Bigl(- \frac{m t^2 \epsilon(P)}{4 b^2(1+h(5 t/b^2))} \Bigr)
\end{equation*}
where $N_q=(\sum_{i=1}^n (\frac{q_i}{\pi_i})^2 \pi_i)^{1/2}$, $\beta_1(P)$ is the second largest eigenvalue of $P$, and $\epsilon(P)=1-\beta_1(P)$ is the spectral gap of the chain. Finally $h$ is given by $h(x)=\frac{1}{2}(\sqrt{1+x} - (1-x/2))$.
\end{thm}
Using this theorem, we can guarantee the following control of the term $\|I_n - W_m\|_\infty$:\\
\begin{lemma}
$ \forall\ 0<t\leq1$,
\begin{equation}
\label{eq:conc_markov}
\mathbb{P} \left(\|I_n - W_m\|_\infty \! \geq t \right) \! \leq \! n (n+1) e^{\frac{\epsilon(P)}{5}} \! \exp \Bigl(\!- \frac{mt^2 \epsilon(P)}{12L^2}\Bigr).
\end{equation}
\end{lemma}
\begin{proof}
By applying Theorem~\ref{thm:Lezaud} to a function $f$ and then to its opposite $-f$, we get:
\begin{multline*}
\mathbb{P} \Bigl(\Bigl|\frac{1}{m} \sum_{i=1}^m f(X_i)\Bigr| \geq t \Bigr) \leq 2 e^{\frac{\epsilon(P)}{5}} N_q \\ \exp \Bigl(- \frac{m t^2 \epsilon(P)}{4 b^2(1+h(5 t/b^2))} \Bigr).
\end{multline*}
Then we set $f(X_i)=(I_n-\Theta_{X_i})^{(a,b)}/(1+L)$.
The Markov chain is constructed such that $\sum_{i=1}^n\pi_i f(i)=0$.
Since $\|f\|_\infty \leq 1$, we may take $b=1$, and since $t\leqslant 1$, $1+h(5t)<3$.
Moreover, since the initial distribution is $\pi$, we have $q_i=\pi_i$ for all $i$ and thus $N_q=1$. Again, resorting to the union bound (\ref{eq:ineg_inf}) enables us to extend the result for the $(a,b)$th entry to the infinity norm of the $n \times n$ matrix $I_n-W_m$, yielding Eq.~\eqref{eq:conc_markov}.\\ \end{proof}
Then we can quantify the number of measurements needed to ensure exact recovery:
\begin{prop}
\label{prop:measurements_needed}
Let $A_m$ be a measurement matrix designed by drawing $m$ lines of $A$ under the Markovian process described above. Then, with probability $1-\eta$, if
\begin{equation}
\label{eq:measurements}
m \geqslant \frac{12 L^2}{\epsilon(P)} s^2 \log(2n^2/\eta),
\end{equation}
every $s$-sparse signal $x$ is the unique solution of the $\ell_1$ problem:
\begin{equation*}
\underset{A_m w=A_m x}{\operatorname{argmin}} \ \|w\|_1
\end{equation*}
\end{prop}
\begin{remark}
The spectral gap $\epsilon(P)$ takes its values between 0 and 1 and describes the mixing properties of the Markov chain.
The closer the spectral gap is to 1, the faster the convergence to the mean. \\
\end{remark}
\begin{remark}
All the results above can be extended to the complex case using a slightly different proof.
\end{remark}
\section{Results and discussion}
In order to cover a larger domain of $k$-space, we consider the following chain: $P^{(\alpha)}=(1-\alpha)P+ \alpha\tilde{P}$, where $\tilde{P}$ corresponds to an independent drawing, $\tilde{P}_{ij}=\pi_j,\forall i,j$. This chain has $\pi$ as its invariant distribution and fulfills the continuity property while enabling a jump with probability $\alpha$.
Weyl's theorem~\cite{Horn91} ensures that $\epsilon(P^{(\alpha)}) > \alpha$. This bound is useful because of the dependence of $\epsilon(P)$ on the problem dimension, which would otherwise have weakened condition~\eqref{eq:measurements}.
Sampling schemes obtained by this method are composed of random walks on the $k$-space with average length $1/\alpha$. All our experiments consist of reconstructing a two-dimensional image from a sampled $k$-space by solving an $\ell_1$ minimization problem. Constrained $\ell_1$ minimization (Eq.~\eqref{eq:minL1}) is performed using the Douglas-Rachford algorithm~\cite{Combettes11b}. In each case, only twenty percent of the Fourier coefficients are kept, which corresponds to an acceleration factor of $r=5$. Since the schemes are obtained by a random process, we ran each experiment 10 times independently and compared the mean reconstruction quality in terms of the \textit{peak signal-to-noise ratio} (PSNR).\\
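A minimal simulation of such a chain is sketched below, in one dimension for readability. The local part $P$ is realized by a symmetric nearest-neighbour Metropolis kernel (an illustrative choice; any $\pi$-reversible local kernel would do), mixed with an independent redraw from $\pi$ with probability $\alpha$; the target $\pi$ below is an invented variable-density profile, not the one used in the experiments:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
# hypothetical variable-density target: more mass near the k-space centre
pi = 1.0 / (1.0 + np.abs(np.arange(n) - n // 2))
pi /= pi.sum()

def step(i, alpha):
    """One move of P_alpha = (1 - alpha) P + alpha * pi_tilde."""
    if rng.random() < alpha:                # global jump drawn from pi
        return int(rng.choice(n, p=pi))
    j = (i + rng.choice([-1, 1])) % n       # symmetric local proposal
    # Metropolis acceptance keeps pi invariant for the local kernel
    return int(j) if rng.random() < min(1.0, pi[j] / pi[i]) else i

traj = [n // 2]
for _ in range(5000):
    traj.append(step(traj[-1], alpha=0.01))
```

The trajectory consists of continuous segments of average length $1/\alpha = 100$, interrupted by occasional jumps; the 2D schemes of Fig.~\ref{fig:1} follow the same principle on the grid of $k$-space positions.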
In Fig.~\ref{fig:1}, it is shown that the image reconstruction quality degrades when $\alpha$ decreases. Except for large values of $\alpha$, these results can be explained by the spatial confinement of the continuous parts of a given Markov chain. There seems to be a compromise between the number of discontinuities of the chain (linked to the hardware constraints in MRI) and the $k$-space coverage. Nevertheless, accurate reconstruction results can be observed for a reasonable average length of the connected subparts ($\alpha=0.01$ or $0.001$).
The mixing properties of the chain (through its spectral gap) seem to have a strong impact on the quality of the scheme, as shown in Proposition~\ref{prop:measurements_needed}. Unfortunately, the spectral gap is strongly related to the problem dimension $n$ and can tend to zero as $n$ goes to infinity. This proves to be a theoretical limitation of the method. Nevertheless, we obtained reliable reconstruction results which cannot be explained by the proposed theory. Since the design process is based on randomness, we can even exhibit a specific scheme which provides accurate reconstruction results instead of considering the mean behavior (Fig.~\ref{fig:2}). We currently aim at deriving a stronger result on the number of measurements needed, involving an $O(s)$ bound. Meanwhile, we are developing second-order chains which ensure more regularity of the trajectories and for which we have already observed good reconstruction results (Fig.~\ref{fig:3}).
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\yaxis{$k_y$} \includegraphics[width=.2\textwidth]{ref.png} & \includegraphics[width=.2\textwidth]{pi_cropped2.png} \includegraphics[height=.2\textwidth]{piS_colorbar.png}\\ [-4.2cm]
{\small (a)} & {\small (b)}\\[4.3cm]
\yaxis{$k_y$} \includegraphics[width=.2\textwidth]{Tirage_PI_R5.png} & \includegraphics[width=.2\textwidth]{Recons_Pi_col.png}\\ [-4.2cm]
{\small (c) $\alpha=1$} & {\small (d) mean-PSNR=33.4dB}\\[4.3cm]
\yaxis{$k_y$} \includegraphics[width=.2\textwidth]{scheme_alpha_0_1.png} & \includegraphics[width=.2\textwidth]{Recons_0_1_col.png}\\ [-4.2cm]
{\small (e) $\alpha=0.1$} & {\small (f) mean-PSNR=32.4dB}\\[4.3cm]
\yaxis{$k_y$} \includegraphics[width=.2\textwidth]{scheme_alpha_0_001.png} & \includegraphics[width=.2\textwidth]{Recons_0_001_col.png}\\
$k_x$ & \\[-4.6cm]
{\small (g) $\alpha=0.001$} & {\small (h) mean-PSNR=30.3dB}\\[4.3cm]
\end{tabular}\vspace*{-.5cm}
\end{center}
\caption{{\bf First line:} reference image used in our experiments (a) and $\pi$ distribution (b). {\bf Lines 2 to 4, left:} different sampling patterns (with an acceleration factor $r=5$). {\bf right:} reconstruction results. From line 2 to bottom: independent drawing from distribution $\pi$~(c), corresponding to $\alpha=1$. (e) (resp (g)) represents a sampling scheme designed with the presented markovian process with transition matrix $P^{(\alpha)}$ for $\alpha=0.1$) (resp. $\alpha=0.001$).\label{fig:1}}
\end{figure}
\begin{figure}[!h]
\begin{center}
\begin{tabular}{cc}
\yaxis{$k_y$} \includegraphics[width=.2\textwidth]{scheme_alpha_0_01.png} & \includegraphics[width=.2\textwidth]{Recons_0_01_col.png}\\
$k_x$ & \\[-4.6cm]
{\small (a) $\alpha=0.01$} & {\small (b) PSNR=34.2dB}\\[4.3cm]
\end{tabular}\vspace*{-.5cm}
\end{center}
\caption{Sampling scheme obtained setting $\alpha=0.01$ and $r=5$ (a) and its corresponding reconstructed image (b). \label{fig:2}}
\end{figure}
\begin{figure}[!h]
\vspace*{.3cm}
\begin{center}
\begin{tabular}{cc}
\yaxis{$k_y$} \includegraphics[width=.2\textwidth]{scheme_2_0_01.png} & \includegraphics[width=.2\textwidth]{Recons_2_0_01_col.png}\\
$k_x$ & \\[-4.6cm]
{\small (a) $\alpha=0.01$} & {\small (b) PSNR=33.4dB}\\[4.3cm]
\end{tabular}\vspace*{-.5cm}
\end{center}
\caption{Preliminary results for second order Markov chain: sampling scheme obtained setting $\alpha=0.01$ and $r=5$ (a) and its corresponding reconstructed image (b). \label{fig:3}}
\end{figure}
\section{Conclusion}
We proposed a novel approach combining compressed sensing and Markov chains to design continuous sampling trajectories, required for MRI applications. Our work may easily be extended to a 3D framework by considering a different neighbourhood of each $k$-space location. Existing continuous trajectories in CS-MRI only exploit 1D or 2D randomness for 2D or 3D $k$-space sampling, respectively. In the latter case, the points are randomly drawn in the plane defined by the partition and phase encoding directions so as to maintain continuous sampling in the orthogonal readout direction (frequency encoding). Here, the novelty lies both \newpage \noindent in the use of randomness in all $k$-space dimensions and in the establishment of compressed sensing results for continuous trajectories, based on a concentration result for Markov chains.
\section*{Acknowledgements}
We thank J\'er\'emie Bigot for the time he dedicated to our questions and his helpful remarks. The authors would like to thank the CIMI Excellence Laboratory for inviting Philippe Ciuciu on an excellence researcher position during winter 2013.
{\footnotesize
\bibliographystyle{IEEEbib}
\section{Introduction}
The interaction of light with mechanical objects~\cite{KV08,MG09} enjoys continued interest due to the successful construction and manipulation of optomechanical devices
over a wide range of system sizes and parameter combinations
(see the recent reviews~\cite{M13,AKM13} and references cited therein).
With these devices both classical non-linear dynamics such as
self-sustained oscillations~\cite{MHG06,LKM08,KRCSV05,RKCV05}
and chaos~\cite{CRYKV05,CCV07,BAF14_PRL}
as well as quantum mechanical mechanisms such as
cooling into the ground state~\cite{C_etal11,T_etal11} and quantum non-demolition measurements~\cite{HRNSCS10,VHCA13,SWLWSMCS14} can be studied in a unified experimental setup.
This raises the question whether it might be possible to detect
the crossover from classical to quantum mechanics directly in the dynamical behaviour of optomechanical systems.
In a previous paper~\cite{BAF14_PRL} we observed that the classical dynamical patterns,
which are characterized by the multistability of self-sustained oscillations, change in a characteristic way if one moves into the quantum regime. Previously stable orbits become unstable, the system oscillates at a new amplitude,
and especially the classical chaotic dynamics is almost immediately replaced by simple periodic oscillations.
In this paper we explain this behaviour from the point of view of classical and quantum phase space dynamics.
Most importantly, we will show that the dynamical patterns do not change at random but that clearly identifiable and new signatures can be observed.
The prototypical optomechanical system is a vibrating cantilever subject to the radiation pressure of a cavity photon field, for which the Hamilton operator reads~\cite{Law95,M13,AKM13}
\begin{equation}
\begin{split}
\tfrac1\hbar H = \left[ \Omega_\text{cav} - \Omega_\text{las} + g_\text{rad}( b^\dagger + b) \right] a^\dagger a \\ + \Omega b^\dagger b + \alpha_\text{las} (a^\dagger + a) \;,
\end{split}
\label{eq:hamiltonian}
\end{equation}
where $b^{(\dagger)}$ and $a^{(\dagger)}$ are bosonic operators for the vibrational mode of the cantilever (frequency $\Omega$) and for the cavity photon field ($\Omega_\text{cav}$), respectively.
This Hamilton operator applies to any generic optomechanical system, but we adopt the cavity-cantilever terminology throughout this paper.
For our theoretical analysis we use the quantum optical master equation~\cite{Carm99}
\begin{equation}
\partial_t \rho = - \frac\mathrm{i}\hbar [H,\rho] + \Gamma \mathcal D [b,\rho]
+ \kappa \mathcal D [a,\rho]
\label{eq:master}
\end{equation}
for the cantilever-cavity density matrix $\rho(t)$,
with the dissipative terms
\begin{equation}
\mathcal D [L,\rho] = L \rho L^\dagger - \dfrac{1}{2} (L^\dagger L \rho + \rho L^\dagger L)
\label{eq:dissipator}
\end{equation}
that account for cantilever damping ($\propto \Gamma$) and radiative losses ($\propto \kappa$).
Note that the above Hamilton operator is given in a frame that rotates with the frequency $\Omega_\text{las}$ of the external pump laser such that only the cavity-laser detuning $\Omega_\text{cav} - \Omega_\text{las}$ appears, and that we assume zero temperature in the master equation.
\begin{figure}
\includegraphics[width=0.47\linewidth]{1a}
\hfill
\includegraphics[width=0.47\linewidth]{1b}
\caption{(color online) Left panel: Chart of self-sustained oscillations in the classical limit for $P=1.5$.
Self-sustained oscillations occur for amplitudes $A$ where the power balance
between gains from the radiation pressure
($P_\text{rad} =P \langle |\alpha|^2 \mathrm{Im} \beta \rangle_\mathrm{avg}$)
and losses due to friction
($P_\text{fric} =\bar\Gamma \langle |\beta|^2 \rangle_\mathrm{avg} $)
changes from positive to negative values with increasing $A$~\cite{MHG06,LKM08}.
Right panels: Classical orbits in the $(x,p)$ cantilever phase space,
for (a) $\Delta = -0.4$, (b) $\Delta = - 1.1$, (c) $\Delta = - 0.85$, and (d) $\Delta =-0.7$, as marked by vertical lines in the left panel. In case (a), the two innermost orbits have amplitudes $A_1 \approx 1.2$ and $A_2 \approx 2.7$.
In cases (b), (c) the innermost orbit shows a few period doubling bifurcations that occur on the route to chaos~\cite{BAF14_PRL}, in case (d) it is chaotic.
}
\label{fig:chart}
\end{figure}
Now introduce the five dimensionless parameters~\cite{MHG06, LKM08}
\begin{equation}\label{eq:NewParams}
\Delta = \dfrac{\Omega_\text{las} - \Omega_\text{cav}}{\Omega} \;, \quad
P = \dfrac{8 \alpha_\text{las}^2 g_\text{rad}^2}{\Omega^4} \;,
\quad
\sigma = \dfrac{g_\text{rad}}{\kappa} \;,
\end{equation}
and $\bar\kappa = \kappa / (2 \Omega)$, $\bar\Gamma = \Gamma / (2 \Omega)$,
and measure time as $\tau = \Omega t$.
The parameter $\Delta$ gives the detuning of the pump laser and cavity, while $P$ gives the strength of the laser pumping.
For later numerical results we set the damping parameters $\bar\kappa = 0.5$, $\bar \Gamma = 5 \times 10^{-4}$
to typical experimental values~\cite{AKM13}.
The quantum-classical scaling parameter $\sigma$ is the ratio of the quantum mechanical quantity $g_\text{rad}$, which is of order $\hbar^{1/2}$ because the quantum mechanical position operator $\hat x \propto \hbar^{1/2} (b^\dagger + b)$ of the cantilever enters the expression for the radiation pressure, to the classical quantity $\kappa$ that measures the cavity quality.
The parameter $\sigma$ thus controls the crossover from classical ($\sigma = 0$) to quantum ($\sigma > 0$) mechanics~\cite{LKM08}.
In the following we will increase $\sigma$ to move into the quantum regime,
but keep $\sigma \ll 1$ in order to remain in the vicinity of the classical limit $\sigma = 0$.
\section{Classical multistability}
Our analysis begins in the limit $\sigma = 0$,
where the optomechanical system is described by the classical equations of motion~\cite{LKM08}
\begin{subequations}
\begin{align}
\partial_\tau \alpha & = (\mathrm{i} \Delta - \bar\kappa) \alpha - \mathrm{i} (\beta + \beta^*) \alpha - \tfrac12 \mathrm{i} \;, \\[0.25ex]
\partial_\tau \beta & = (-\mathrm{i} -\bar{\Gamma} ) \beta - \tfrac12 \mathrm{i} P |\alpha|^2
\end{align}
\label{eq:SC}%
\end{subequations}
for the cavity and cantilever phase space variables
$\alpha = (\Omega/(2 \alpha_\text{las})) \langle a \rangle$, $\beta = (g_\text{rad}/\Omega) \langle b \rangle$.
We also use the
cantilever position and momentum operators
$\hat x =(1/\sqrt2) (g_\text{rad}/\Omega) (b^\dagger + b)$,
$\hat p = (\mathrm{i}/\sqrt2) (g_\text{rad}/\Omega) (b^\dagger - b)$,
with corresponding phase space variables $x =\langle \hat x \rangle = 1/\sqrt{2}\, (\beta + \beta^*)$ and $p =\langle \hat p \rangle = (\mathrm{i}/\sqrt{2}) (\beta^* - \beta)$.
The classical equations of motion predict the onset of self-sustained cantilever oscillations
$x(\tau) = x_0 + A \cos \tau$
as the pump power $P$ is increased. Figure~\ref{fig:chart} shows the possible amplitudes $A$ of these oscillations, which are obtained with the ansatz from Ref.~\cite{MHG06}, for the value $P=1.5$.
We keep this value fixed throughout the paper, as the behaviour discussed here does not depend on it.
Note in Fig.~\ref{fig:chart} that several stable oscillatory solutions at different amplitudes $A$ can coexist for one parameter choice.
This classical multistability of self-sustained oscillations is the origin of the quantum multistability analyzed next.
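As a cross-check of this picture, Eqs.~\eqref{eq:SC} can be integrated numerically. The following sketch (the discretization and tolerance settings are our own choices; the parameter values are the ones quoted above for case (a)) splits $\alpha$ and $\beta$ into real and imaginary parts:

```python
import numpy as np
from scipy.integrate import solve_ivp

KBAR, GBAR = 0.5, 5e-4        # damping parameters from the text
P, DELTA = 1.5, -0.4          # pump strength and detuning, case (a)

def rhs(tau, y):
    """Classical equations of motion, Eq. (eq:SC), split into real components."""
    alpha = y[0] + 1j * y[1]
    beta = y[2] + 1j * y[3]
    dalpha = (1j * DELTA - KBAR) * alpha - 1j * (beta + beta.conjugate()) * alpha - 0.5j
    dbeta = (-1j - GBAR) * beta - 0.5j * P * abs(alpha) ** 2
    return [dalpha.real, dalpha.imag, dbeta.real, dbeta.imag]

sol = solve_ivp(rhs, (0.0, 300.0), [0.0, 0.0, 0.0, 0.0],
                max_step=0.1, rtol=1e-8, atol=1e-10)
x = np.sqrt(2.0) * sol.y[2]   # cantilever position x = (beta + beta*)/sqrt(2)
```

In the regime of self-sustained oscillations the late-time signal approaches $x(\tau) = x_0 + A\cos\tau$, with $A$ one of the stable amplitudes in Fig.~\ref{fig:chart}.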
\section{Quantum multistability}
\begin{figure}
\centering
\includegraphics[width=0.55\linewidth]{2a}
\hfill
\includegraphics[width=0.43\linewidth]{2b}
\caption{(color online) Left panel: Cantilever position $x(\tau)$ from the classical equations of motion~\eqref{eq:SC} and from the quantum mechanical master equation~\eqref{eq:master} at $\sigma = 0.1$,
for $P=1.5$, $\Delta = -0.4$ (case (a) in Fig.~\ref{fig:chart}).
Right panel: Cantilever position-momentum uncertainty product $\sigma_x \sigma_p$ for the same parameters.
}
\label{fig:boring}
\end{figure}
We now move into the quantum regime by letting $\sigma$ become finite.
In all our examples the quantum system is initially prepared in the pure product state
of a coherent cantilever and cavity state at $\alpha = \beta =0$, i.e., in the state that is closest to a classical state at these coordinates.
The cantilever-cavity density matrix is then evolved according to Eq.~\eqref{eq:master}.
Figure~\ref{fig:boring} shows the cantilever position $x$ and the position-momentum uncertainty product $\sigma_x \sigma_p$, with the uncertainty
$\sigma_O = (\langle {\hat O}^2 \rangle - \langle \hat O \rangle^2)^{1/2}$
of an observable $O$.
The quantum dynamics at finite $\sigma$ closely follows the classical oscillations for an initial period of time, before it deviates significantly at later times.
Deviations occur because the quantum state spreads out in phase space,
as witnessed by the growth of the uncertainty product, whereby the cantilever position is smeared out.
\begin{figure}
\centering
\begin{minipage}{0.48\linewidth}
\centering
\includegraphics[width=0.98\linewidth]{3a} \\[1ex]
\includegraphics[width=0.98\linewidth]{3b} \\[1ex]
\includegraphics[width=0.98\linewidth]{3c}
\end{minipage}
\begin{minipage}{0.48\linewidth}
\centering
\includegraphics[width=0.98\linewidth]{3d} \\[1ex]
\includegraphics[width=0.98\linewidth]{3e} \\[1ex]
\includegraphics[width=0.98\linewidth]{3f}
\end{minipage}
\caption{(Color online)
Wigner function $W(x,p)$ in cantilever phase space (left panels)
and cantilever position autocorrelation function $R_\tau(\delta\tau)$ (right panels)
for case (a) from Fig.~\ref{fig:chart},
at $\sigma=0.1$ slightly away from the classical limit.
The autocorrelation functions for the two inner classical orbits at amplitudes $A_{1/2}$ are included as dashed curves.}
\label{fig:Wigner}
\end{figure}
The full phase space dynamics in Fig.~\ref{fig:Wigner},
which we display with the Wigner function $W(x,p)$ of the cantilever mode (see, e.g., Ref.~\cite{Schl01} for the definitions), reveals a more definite dynamical pattern.
For early times ($\tau \simeq 16$) the Wigner function retraces the classical orbit with amplitude $A_1 \approx 1.2$ from case (a) in Fig.~\ref{fig:chart}.
At later times ($\tau \simeq 64$) the Wigner function shows a contribution from a second circular orbit with larger amplitude,
before almost all weight is concentrated on the new orbit ($\tau \simeq 270$).
In comparison to case (a) in Fig.~\ref{fig:chart} this orbit is identified as the second classical orbit with amplitude $A_2 \approx 2.7$.
During time evolution the quantum state spreads out along, but not perpendicular to, these two classical orbits.
The classical multistability of the optomechanical system thus has a direct counterpart in the quantum dynamics at small $\sigma$,
where the system moves between the different classical orbits.
This kind of quantum multistability leads to distinct dynamical features because the oscillatory nature of the different orbits is preserved.
The quantum multistability is clearly detected with the cantilever position autocorrelation function
\begin{equation}\label{eq:auto}
R_\tau(\delta \tau) = \int\limits_{\tau-\pi}^{\tau+\pi} \langle \hat x(\tau') \hat x(\tau'+\delta \tau) \rangle \, d\tau' \;,
\end{equation}
instead of the position expectation value that averages over the phase space distribution.
We choose this function because the dynamics is best described in cantilever phase space. Autocorrelation functions for the cavity mode could be used as well
and should be more accessible to experimental measurements,
but their interpretation is less straightforward because of the additional sidebands at multiples of the fundamental oscillation frequency.
The autocorrelation function in Fig.~\ref{fig:Wigner} is the weighted sum of the oscillatory motion on the two orbits seen in the Wigner function.
The frequency of the two orbits is identical (essentially, the cantilever frequency $\Omega$), such that only one oscillation is visible in $R_\tau(\delta \tau)$.
The amplitude of $R_\tau(\delta \tau)$ increases as weight is transferred from the inner to the outer orbit.
Notably, the oscillations persist at all times.
In this way, the multistability of the quantum dynamics
is not only observable during a short initial time period
but during extended periods of time.
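For a position signal sampled on a uniform time grid, the windowed autocorrelation function of Eq.~\eqref{eq:auto} reduces to a discrete sum. A minimal sketch (the rectangle-rule discretization is our own choice), together with a sanity check on a pure oscillation $x(t) = A\cos t$, for which $R_\tau(\delta\tau) = \pi A^2 \cos\delta\tau$:

```python
import numpy as np

def autocorr_window(x, dt, tau, dtau):
    """Discrete estimate of R_tau(dtau) = int_{tau-pi}^{tau+pi} x(t') x(t'+dtau) dt'
    for a signal x sampled on a uniform grid with spacing dt."""
    i0 = int(round((tau - np.pi) / dt))
    i1 = int(round((tau + np.pi) / dt))
    j = int(round(dtau / dt))
    return float(np.sum(x[i0:i1] * x[i0 + j:i1 + j]) * dt)

dt = 1e-3
t = np.arange(0.0, 40.0, dt)
x = 2.0 * np.cos(t)                                   # amplitude A = 2
R0 = autocorr_window(x, dt, tau=10.0, dtau=0.0)       # expect  pi * A**2
Rpi = autocorr_window(x, dt, tau=10.0, dtau=np.pi)    # expect -pi * A**2
```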
\section{Multistability of quantum trajectories}
The mechanism behind the quantum multistability can be understood through the phase space dynamics of individual quantum trajectories, as they arise in the quantum state diffusion (QSD) approach~\cite{GP92} to the solution of Lindblad master equations such as Eq.~\eqref{eq:master}.
In QSD the density matrix is represented by an ensemble of quantum trajectories $|\psi_k(\tau) \rangle$,
from which it is obtained as an average
\begin{equation}\label{eq:mean}
\rho(\tau) = \textsf{mean}_k \Big\{ |\psi_k(\tau) \rangle \langle \psi_k(\tau) | \Big\} \;.
\end{equation}
Accordingly, expectation values are computed as ensemble averages
$O(\tau) = \tr [ \rho(\tau) \hat O ] = \textsf{mean}_k \Big\{ \langle \psi_k(\tau) | \hat O | \psi_k(\tau) \rangle \Big\} $.
Each quantum trajectory $|\psi_k(\tau) \rangle$ follows a stochastic equation of motion that combines the Hamiltonian and dissipative dynamics with a noise term~\cite{GP92}.
Numerically, the density matrix is obtained through Monte Carlo sampling of the trajectories for different noise realizations. We use the QSD implementation from Ref.~\cite{SB97},
and typically average over $\simeq 3000$ trajectories to obtain the results in
Figs.~\ref{fig:boring}--\ref{fig:uncertain},~\ref{fig:WignerOthers},~\ref{fig:AutoOthers}.
Although a single quantum trajectory is not observable by itself,
the phase space dynamics of individual trajectories as shown in Figs.~\ref{fig:Trajectory},~\ref{fig:TrajectoriesOther} allows us to deduce the properties of the entire density matrix.
\begin{figure}
\centering
\includegraphics[width=0.48\linewidth]{4a}
\includegraphics[width=0.48\linewidth]{4b}
\caption{(color online)
Left panels: Wigner function $W(x,p)$ for a single quantum trajectory starting from a ``Schr\"odinger cat'' state at (i) $\tau = 0$,
and at later times (ii) $\tau =0.001$, (iii) $\tau = 0.008$, and (iv) $\tau = 0.4$.
Right panels:
Cantilever position $x_k$ and uncertainty product $\sigma_x \sigma_p$ (see Eq.~\eqref{eq:Heisenberg}) for a single quantum trajectory at later times, in the situation of Fig.~\ref{fig:Trajectory}.
All results are for case (a) from Fig.~\ref{fig:chart} and $\sigma = 0.1$.
}
\label{fig:uncertain}
\end{figure}
Close to the classical limit quantum trajectories evolve rapidly into localized phase space states as a consequence of dissipation~\cite{Per94,SBP95,RG96}.
This is illustrated in Fig.~\ref{fig:uncertain} for a single trajectory that starts from a ``Schr\"odinger cat'' state, given as the superposition of two coherent states, with the characteristic interference pattern in the Wigner function.
In less than one oscillation period ($\tau =0.4 < 1$) the trajectory evolves into a nearly coherent state with a positive Wigner function, which shows the rapid decoherence.
The quantum trajectory remains in such a state during the subsequent time evolution,
and the uncertainty product stays close to its minimal value
\begin{equation}\label{eq:Heisenberg}
\sigma_x \sigma_p \ge \tfrac12 (g_\text{rad}/\Omega)^2 = \tfrac12 (\sigma \bar\kappa)^2
\end{equation}
given by the Heisenberg uncertainty relation for the $\hat x$, $\hat p$ operators (here, the quantum-classical scaling parameter $\sigma$ comes into play).
Notice that phase space localization occurs only in the vicinity of the classical limit, for $\sigma \ll 1$.
It also explains the transition into the classical limit:
For $\sigma \to 0$ the quantum trajectories evolve infinitely fast into minimal uncertainty states,
and at the same time the lower bound in Eq.~\eqref{eq:Heisenberg} goes to zero.
Then, every trajectory occupies one point in phase space, i.e., it has become classical.
Under this condition the classical equations of motion~\eqref{eq:SC}
can be derived directly from the master equation~\eqref{eq:master}.
Because a quantum trajectory is very localized in phase space it is well represented by a single phase space point, similar to a classical trajectory.
In Fig.~\ref{fig:Trajectory} this representation is used for a stroboscopic phase space plot of a single quantum trajectory that contributes to the Wigner functions in Fig.~\ref{fig:Wigner}.
This plot clearly shows the multistability of the quantum trajectory,
which initially follows the inner orbit before it moves towards the outer orbit.
During the time evolution the quantum trajectory follows the oscillatory motion of the two orbits at the cantilever frequency, and because the trajectory state is well localized in phase space these oscillations are not averaged out but appear directly in the position expectation value $x_k(\tau) = \langle \psi_k(\tau) | \hat x | \psi_k(\tau) \rangle$ that is depicted in Fig.~\ref{fig:uncertain}.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{5a}
\caption{(color online) Stroboscopic $(x,p)$-phase space plot of a single quantum trajectory (red dots), for case (a) from Fig.~\ref{fig:chart} and $\sigma=0.1$, at early (left panel), intermediate (central panel), and later (right panel) times $\tau$ as indicated.
The initial conditions are $x(0)=p(0)=0$, the quantum system is prepared in a coherent state at these coordinates.
The two classical orbits at amplitudes $A_{1/2}$ are depicted with dashed curves.
}
\label{fig:Trajectory}
\end{figure}
Since every individual trajectory shows this type of quantum multistability it is also seen in the entire density matrix,
given as the ensemble average of all trajectories.
Because of the noise term in the stochastic QSD equation of motion the quantum trajectories are not exactly at the same phase space point but at different points on the respective orbits. This results in the broad distribution of the relative angle in phase space seen in the Wigner functions in Fig.~\ref{fig:Wigner}, especially at later times,
when the quantum trajectories are spread out fully along the second orbit.
Consequently, all oscillations are averaged out in expectation values such as the cantilever position $x(\tau)$ in Fig.~\ref{fig:boring}. Such values are, therefore, not the right quantities to detect the quantum multistability.
Instead, successful detection requires autocorrelation functions such as $R_\tau(\delta\tau)$ from Eq.~\eqref{eq:auto}.
In analogy with the density matrix, the function $R_\tau(\delta\tau)$ can be expressed (dropping the $\tau'$-integration here) as an ensemble average
\begin{multline}\label{eq:autocorr}
R_\tau(\delta\tau) = \sum_k x_k(\tau) \, x_k(\tau+\delta\tau) \\
+ \sum_k \big\langle \big(\hat x(\tau) - x_k(\tau) \big) \big( \hat x(\tau+\delta\tau) - x_k(\tau+\delta\tau) \big) \big\rangle_k \;,
\end{multline}
where the expectation value $\langle \cdot \rangle_k = \langle \psi_k | \cdot | \psi_k \rangle$ is computed for each individual quantum trajectory.
The correlation function in the second line is bounded by
\begin{multline}
\Big| \big\langle \big(\hat x(\tau) - x_k(\tau) \big) \big( \hat x(\tau+\delta\tau) - x_k(\tau+\delta\tau) \big) \big\rangle_k \Big|^2 \\
\le \big\langle \big(\hat x(\tau) - x_k(\tau) \big)^2 \big\rangle_k \,
\big\langle \big(\hat x(\tau+\delta\tau) - x_k(\tau+\delta\tau) \big)^2 \big\rangle_k \;.
\end{multline}
Whenever the position uncertainty $\langle (\hat x - x_k)^2 \rangle_k$ of each trajectory becomes small,
as is the case for $\sigma \ll 1$, the autocorrelation function $R_\tau(\delta\tau)$ is thus given by the ensemble average of the autocorrelation functions of the individual trajectories, i.e., by the first line in Eq.~\eqref{eq:autocorr}.
Accordingly, the oscillations seen in $x_k(\tau)$ for each individual trajectory (cf. Fig.~\ref{fig:uncertain}) are preserved in the autocorrelation function in spite of the ensemble average.
Furthermore, $R_\tau(\delta\tau)$ is the weighted sum of the autocorrelation functions for the different classical orbits, which are directly related to the orbit amplitudes $A_{1/2}$ as seen in Fig.~\ref{fig:Wigner}.
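The contrast between the position expectation value and the autocorrelation function can be mimicked with a toy ensemble of trajectories spread uniformly along a single orbit, $x_k(\tau) = A\cos(\tau + \phi_k)$ with random phases $\phi_k$. This is a deliberately simplified stand-in for the QSD ensemble, not the actual trajectory dynamics:

```python
import numpy as np

rng = np.random.default_rng(1)
A, n_traj = 1.0, 3000                        # ensemble size comparable to the text
phi = rng.uniform(0.0, 2.0 * np.pi, n_traj)  # phases spread along the orbit
tau = 50.0
dtau = np.linspace(0.0, 4.0 * np.pi, 401)    # grid chosen so that dtau[100] = pi

x_tau = A * np.cos(tau + phi)                      # x_k(tau) for each trajectory
x_lag = A * np.cos(tau + phi[:, None] + dtau)      # x_k(tau + dtau)

mean_x = x_tau.mean()                        # ensemble average ~ 0: oscillation washed out
R = (x_tau[:, None] * x_lag).mean(axis=0)    # ~ (A**2/2) * cos(dtau): oscillation survives
```

The ensemble average of $x_k(\tau)$ vanishes, while the trajectory-wise product retains the oscillation, in line with the decomposition above.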
\begin{figure}
\centering
\includegraphics[width=0.92\linewidth]{6ab} \\
\includegraphics[width=0.92\linewidth]{6c}
\caption{(color online) Phase space plot of many quantum trajectories in the QSD ensemble (red points) for
cases (a)--(d) from Fig.~\ref{fig:chart}, different times $\tau$, and values of $\sigma$ as indicated.
In all cases, the two innermost classical orbits from Fig.~\ref{fig:chart} are included as solid curves.
In case (b) the second orbit is missing.
}
\label{fig:TrajectoriesOther}
\end{figure}
Notice that the behaviour described here---the motion of quantum trajectories between different classical orbits---emerges only because the trajectory states $|\psi_k\rangle$ deviate from coherent states.
The noise terms in the QSD equations have the form $\bar\Gamma (b - \langle b \rangle_k) |\psi_k\rangle \mathrm d \xi$, here for the mechanical damping, with a random variable $\mathrm d\xi \propto \mathrm d\tau^{1/2}$ from the underlying Wiener process~\cite{GP92}.
If $|\psi_k\rangle$ is exactly a coherent state, such that $(b - \langle b \rangle_k) |\psi_k\rangle = 0$,
the noise term will vanish identically.
This observation explains why the ``quantum noise'' disappears in the classical limit $\sigma=0$,
and the quantum trajectories follow the deterministic classical equations of motion~\eqref{eq:SC}.
At finite but small $\sigma \ll 1$ trajectories are almost but not exactly in coherent states.
The noise terms become effective but remain small,
such that the quantum trajectories still follow the classical dynamics but are subject to a small stochastic correction.
This small correction can change the long-time stability of classical orbits and their basin of attraction but does not destroy the classical dynamical patterns.
Consequently, the quantum trajectories do not move arbitrarily in phase space but follow a classical orbit for some time before they leave the orbit with a finite probability.
Afterwards, the trajectories can settle on a different attractive orbit if such an orbit exists at larger amplitudes.
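The suppression of the noise term for (near-)coherent states can be verified in a truncated Fock space; the truncation dimension and the value of the coherent amplitude below are our own choices:

```python
import numpy as np

N = 60                                        # Fock-space truncation (assumption)
b = np.diag(np.sqrt(np.arange(1.0, N)), k=1)  # annihilation operator, b|n> = sqrt(n)|n-1>

def coherent(alpha, dim):
    """Fock-space amplitudes c_n = exp(-|alpha|^2/2) alpha^n / sqrt(n!)."""
    c = np.zeros(dim, dtype=complex)
    c[0] = np.exp(-abs(alpha) ** 2 / 2.0)
    for n in range(1, dim):
        c[n] = c[n - 1] * alpha / np.sqrt(n)
    return c

psi = coherent(0.8, N)
mean_b = np.vdot(psi, b @ psi)                # <b>_k
noise_vec = (b - mean_b * np.eye(N)) @ psi    # (b - <b>_k)|psi_k>
residual = np.linalg.norm(noise_vec)          # ~ 0 for a coherent state
```

Only the deviation of $|\psi_k\rangle$ from a coherent state feeds the noise term, which is why the stochastic correction remains small for $\sigma \ll 1$.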
\section{Quantum multistability and classical orbits}
The quantum multistability observed for case (a) from Fig.~\ref{fig:chart} depends on the presence of at least two classical orbits between which the quantum trajectories can move.
The remaining cases (b)--(d) are variations of this situation, where either the second orbit is missing (case (b)) or the nature of the first orbit has changed (cases (c), (d)).
The four cases are compared in Fig.~\ref{fig:TrajectoriesOther} with phase space plots of many quantum trajectories that represent the QSD ensemble for the density matrix.
In all cases the time scale relevant for quantum multistability shortens with increasing $\sigma$ because the quantum trajectories leave the first classical orbit more rapidly when the noise terms become larger.
For too large $\sigma$ (e.g., $\sigma = 0.3$ in case (a)) the clear dynamical pattern of quantum multistability---the movement between different classical orbits---disappears altogether.
In case (b) the quantum trajectories cannot settle on a nearby classical orbit once they have left the first orbit. Quantum multistability, which is characterized by the prevalence of oscillatory motion over random diffusion, cannot be observed in such a situation.
\begin{figure}
\centering
\includegraphics[width=0.47\linewidth]{7a}
\includegraphics[width=0.47\linewidth]{7b}
\caption{(color online) Wigner function $W\left(x,p\right)$ in cantilever phase space for case (d) from Fig.~\ref{fig:chart},
for $\tau$ and $\sigma$ as in Fig.~\ref{fig:TrajectoriesOther}.
}
\label{fig:WignerOthers}
\end{figure}
In cases (c), (d) the inner orbit is no longer simple periodic but a period-two orbit after the first period doubling bifurcation on the route to chaos (case (c)) or a chaotic orbit (case (d)).
Quantum multistability is not affected by the different nature of the inner classical orbit,
because a second simple periodic orbit at larger amplitude still exists,
such that oscillations can be observed after the quantum trajectories have left the inner orbit.
This is illustrated for case (d) in Figs.~\ref{fig:WignerOthers},~\ref{fig:AutoOthers}.
First, we observe again that the relevant time scale changes significantly with $\sigma$.
If $\sigma$ is increased from $0.05$ to $0.1$ in Fig.~\ref{fig:WignerOthers} almost all weight of the Wigner function is transferred from the inner to the outer orbit.
Second, the Wigner functions themselves look quite similar to those for case (a) in Fig.~\ref{fig:Wigner}.
In agreement with this, well-defined oscillations are observed in the cantilever position and autocorrelation function in Fig.~\ref{fig:AutoOthers}, and the respective amplitudes can be related to those of the classical orbits in Fig.~\ref{fig:chart}.
The present data might suggest a more ambitious interpretation.
Apparently, all curves at finite $\sigma$ in Fig.~\ref{fig:AutoOthers} show simple periodic oscillations even if (at $\sigma =0.05$) most weight in the Wigner function is still on the inner---classically chaotic---orbit.
To a certain extent, quantum mechanics protects the optomechanical system against classical chaotic dynamics.
Initially, the quantum state cannot follow the intricate chaotic orbit curve because it occupies a finite part of phase space. Because of phase space averaging the chaotic motion is replaced by simple oscillations at the fundamental system (i.e., cantilever) frequency.
Later, the quantum trajectories move to the second---classically simple periodic---orbit.
At all times, the chaotic classical dynamics is replaced by clearly defined simple oscillations in the quantum regime. Notice that we here discuss possible signatures of classical chaos in the associated dissipative quantum dynamics and not in quantities such as the level statistics that are defined for conservative Hamiltonian systems only~\cite{Gutz90,Haake10}.
\begin{figure}
\centering
\includegraphics[width=0.92\linewidth]{8}
\caption{(color online) Cantilever position $x(\tau)$ (left panels) and position autocorrelation function $R_\tau(\delta \tau)$ (right panels) for case (d) at finite $\sigma$, in comparison to the results in the classical limit $\sigma = 0$ (top panels, and dashed curves in the lower panels). These curves correspond to the Wigner functions in Fig.~\ref{fig:WignerOthers}.}
\label{fig:AutoOthers}
\end{figure}
\section{Conclusions}
In this paper we establish the quantum mechanical counterpart of the classical multistability of optomechanical systems.
While classical multistability corresponds to the coexistence of self-sustained oscillations at multiple amplitudes,
quantum multistability is a dynamical effect in which the amplitude of oscillations changes over time.
The change can be detected with phase space techniques such as the Wigner function, and analyzed quantitatively with autocorrelation functions.
Quantum multistability is observed close to the classical limit.
There, the quantum trajectories in the QSD picture of dissipative dynamics are well localized in phase space.
Quantum multistability results from corrections to the classical dynamics given by the noise terms in the stochastic QSD equations of motion.
The picture of quantum trajectories also provides the link between the oscillatory quantum dynamics and the classical orbits such that, e.g., the oscillations in the autocorrelation functions can be traced back to the classical self-sustained oscillations.
The time scale relevant for quantum multistability is set by the quantum-classical scaling parameter $\sigma$.
An interesting goal is to obtain the time scale from the QSD equations by quantifying the size of the noise term.
This is not an entirely trivial task, though, because the noise term depends not directly on $\sigma$ but on the deviation of the quantum trajectory state from a coherent state.
An important aspect for experimental investigations of quantum multistability is the robustness of the feature. Quantum multistability manifests itself over an extended period of time, is observable in autocorrelation functions after the initial dynamics has evolved into a stable dynamical pattern,
and does not require specific system preparations. The experimental feasibility depends mainly on the ability to tune the quantum-classical scaling parameter $\sigma$.
For the prototypical cantilever-cavity system $\sigma$ is changed, e.g., by simultaneous adjustment of the cantilever mass and pump laser power (thus preserving the self-sustained oscillations).
The central experimental challenge is to distinguish ``quantum'' multistability from the effects of ``classical'' thermal noise, which requires that the temperature be sufficiently low.
The relevant dynamical energies are larger than the energy separation of low-lying quantum states,
which allows for comparatively high temperatures.
Furthermore, variation of $\sigma$ changes the quantum mechanical time scale while the thermal noise is not affected.
This might open up the possibility of observing the crossover from classical to quantum mechanics directly in the dynamical behaviour of an optomechanical system.
\acknowledgments
This work was supported by Deutsche Forschungsgemeinschaft via Sonderforschungsbereich 652 (project B5).
\section{Introduction}
The main challenge in probing the weak interaction using hadrons is
the presence of non-perturbative QCD dynamics. $U$-spin symmetry can be utilized to probe short-distance physics when the effects of the strong interaction cannot be calculated directly.
$U$-spin is an approximate $SU(2)$ symmetry of the QCD Lagrangian under the unitary rotation of down and strange quarks.
Using this approximate symmetry between down and strange quarks, which form doublets under $U$-spin,
\begin{equation} \label{eq:sd-uspin}
\begin{bmatrix}
\,d\,\, \\ s
\end{bmatrix} =
\begin{bmatrix}
\ket{\frac{1}{2}, +\frac{1}{2}}\\
\ket{\frac{1}{2}, -\frac{1}{2}}
\end{bmatrix}, \hspace{25pt}
\begin{bmatrix}
\,\bar{s}\,\, \\
-\bar{d}
\end{bmatrix} =
\begin{bmatrix}
\ket{\frac{1}{2}, +\frac{1}{2}}\\
\ket{\frac{1}{2}, -\frac{1}{2}}
\end{bmatrix}\,,
\end{equation}
we are able to derive relations between amplitudes of various processes that involve $d$ and $s$ quarks. Such relations are called
\emph{$U$-spin amplitude sum rules}.
Using amplitude sum rules we can reduce the number of unknown hadronic parameters. In some cases this reduction is exactly what is needed to
transform a system of measurements that cannot be solved into one from which fundamental parameters can be extracted.
$U$-spin symmetry is broken by a small parameter of order $(m_s-m_d)/\Lambda_{QCD} \sim 0.3$.
We can systematically
expand in this small parameter and obtain $U$-spin sum rules that
hold beyond the symmetry limit.
Approximate flavor symmetries for non-leptonic decays have been extensively discussed in the literature~\cite{Kingsley:1975fe, Einhorn:1975fw, Altarelli:1974sc, Abbott:1979fw, Golden:1989qx, Quigg:1979ic, Voloshin:1975yx, Savage:1991wu, Chau:1991gx, Falk:2001hx, Pirtskhalava:2011va, Grossman:2006jg, Hiller:2012xm, Grossman:2012ry, Grossman:2013lya, Muller:2015rna, Adolph:2020ema, deBoer:2018zhz, Brod:2012ud, Grinstein:2014aza, Bhattacharya:2012ah, Franco:2012ck, Grossman:2018ptn, Buccella:1994nf, Cheng:2012xb, Feldmann:2012js, Atwood:2012ac, Buccella:2019kpn, Muller:2015lua, Chau:1993ec, Zeppenfeld:1980ex, Jung:2014jfa, Buras:2004ub, Gronau:2000zy, Fleischer:1999pa, Gronau:2000md, Jung:2009pb, Grossman:2003qp, Ligeti:2015yma}.
They are also especially important in the context of the theoretical interpretation of the recent first observation of charm CP
violation \cite{LHCb:2019hro, Grossman:2019xcj, Khodjamirian:2017zdu, Chala:2019fdb, Li:2019hho, Soni:2019xko, Dery:2021mll, Schacht:2021jaz}.
In particular, sum rules that are valid up to second order have been pointed out in the past, for example
in Refs.~\cite{Kingsley:1975fe, Voloshin:1975yx, Barger:1979fu, Brod:2012ud, Grossman:2012ry}.
Some results on general sum rules were also given in Ref.~\cite{Hassan:2022ucn}.
Note that we discuss linear sum rules, i.e., sum rules linear in the decay amplitudes. The expansion parameter is relevant only when we talk about such relations. Clearly, one can achieve arbitrary precision by using non-linear relations. Some examples of non-linear relations can be found in Refs.~\cite{Gronau:2013xba, Gronau:2015rda}.
In this work we focus on exploring the mathematical structure of $U$-spin amplitude sum
rules.
The relations between amplitudes then need to be translated into relations between
physical observables such as decay rates and CP
asymmetries. This step is simple in a few cases, but in general it is
not.
Our primary objective in this paper is to analyze the underlying mathematical structure of higher order amplitude sum rules, and not the practicality of these results for phenomenological analyses.
The latter is left for future work, which also has to consider phase space effects and the possible effects from the differing resonance structure of different decay channels.
Our analysis reveals a rich mathematical structure underlying amplitude
sum rules. This structure enables us to derive all the
sum rules to any order of $U$-spin breaking without performing any calculation.
In particular, we develop an algorithm to derive the complete set of sum rules for an arbitrary order of
$U$-spin breaking, by mapping the amplitudes onto a
multi-dimensional lattice, from which the sum rules can be directly read.
The standard method of deriving sum rules obscures the underlying structure. It requires one to compute a table of Clebsch-Gordan coefficients and then read the sum rules from it. As a result one could obtain sum rules of many different forms depending on the basis choice and the specific method used to read off the sum rules. The novel method that we present below, on the contrary, is very transparent. It utilizes the symmetry of the problem and allows for a systematic derivation of the sum rules. An additional advantage of the algorithm that we propose is that it is straightforward and simple to execute.
Even though in this paper we focus on $U$-spin, all the results are also applicable to any $SU(2)$ flavor symmetry. While the result is also valid for isospin, as we explain, its applicability to observables may be limited.
The rest of this paper is organized as follows. In Sec.~\ref{sec:definitions} we present our definitions, assumptions and notations and introduce basic concepts that are used throughout the paper.
In Sec.~\ref{sec:n_doublet_system} we discuss the systematics of amplitude sum rules at arbitrary order in the $U$-spin breaking for systems of $U$-spin doublets. Furthermore, we present a method for deriving the sum rules in a purely geometric way.
We generalize our results in Sec.~\ref{sec:gen-arbitrary-irreps} to the case of arbitrary irreducible representations (irreps) and provide several examples of the application of our algorithm in Sec.~\ref{sec:gen_algo}.
We conclude in Sec.~\ref{sec:conclusions}. All formal derivations and technical details are provided in appendices.
\section{Definitions, assumptions, and $U$-spin sum rules \label{sec:definitions}}
There are two main ideas that allow one to
write sum rules for a physical system. First, the basis rotation between
the physical basis and the $U$-spin basis, that is, the basis of
definite values of $U$-spin.
Second, the application of the
Wigner-Eckart theorem that is used to reduce the number of basis
elements that are used to describe the amplitudes. Then, we can have
a situation where the number of different basis elements in the $U$-spin basis
becomes less than the number of amplitudes in the physical system thus
yielding linear relations between amplitudes. These relations are
called sum rules.
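Schematically, after applying the Wigner-Eckart theorem the amplitudes become linear combinations of reduced matrix elements, $\mathcal{A}_j = \sum_r C_{jr}\langle r\rangle$, and the sum rules span the left null space of the coefficient matrix $C$. A generic numerical extraction could look as follows; the two-amplitude matrix below is a hypothetical illustration, not one of the physical systems discussed in this paper:

```python
import numpy as np

def sum_rules(C, tol=1e-10):
    """Rows v with v @ C = 0, i.e. linear relations sum_j v_j A_j = 0."""
    _, s, vt = np.linalg.svd(C.T)      # null space of C^T = left null space of C
    rank = int(np.sum(s > tol))
    return vt[rank:]

# hypothetical example: two amplitudes sharing a single reduced matrix element
C = np.array([[1.0],
              [1.0]])
rules = sum_rules(C)                   # one rule, proportional to A_1 - A_2 = 0
```

The number of independent sum rules is the number of amplitudes minus the rank of $C$, which makes the counting arguments used below explicit.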
In this section we start by describing the system under consideration in subsections~\ref{sec:Uspin} and~\ref{sec:comments}. Then we review the standard approach to deriving amplitude sum rules in subsections~\ref{sec:expansion-in-b}--\ref{sec:sum-rules}. Finally, in subsection~\ref{sec:universality} we discuss the universality of sum rules and motivate our novel approach to amplitude sum rules.
\subsection{$U$-spin systems}\label{sec:Uspin}
We consider a general set of processes where all the initial and final state particles have definite properties under $U$-spin. We denote a set of amplitudes that correspond to physical decay processes that include
all the processes that are related by $U$-spin as {\em a $U$-spin set of
processes} or simply as {\em a $U$-spin set} or {\em a $U$-spin system}. To describe a $U$-spin set it is necessary to describe the $U$-spin properties of the initial state, final state and the Hamiltonian. We use $n_A$ to denote the
number of amplitudes in the $U$-spin set, and $\mathcal{A}_j$ to denote these amplitudes.
The dynamics of the processes is encoded in the effective Hamiltonian
$\mathcal{H}_{\mathrm{eff}}$. In the $U$-spin limit the most general Hamiltonian is as follows
\begin{equation}\label{eq:H_LO}
\mathcal{H}_{\text{eff}}^{(0)} = \sum_{u,m,\Gamma} f_{u,m} H^{u}_m(\Gamma)\,,
\end{equation}
where the superscript $(0)$ indicates the $U$-spin limit,
$H_m^u(\Gamma)$ are
different $U$-spin operators
with total $U$-spin $u$, third component of $U$-spin $m$, and Dirac structure $\Gamma$. The
factors $f_{u,m}$ encode the weak interaction factors like CKM matrix
elements, the Fermi constant $G_F$ and loop factors. Note that $f_{u,m}$ depends on $\Gamma$ but we keep it implicit.
We define $H^u$ without subscript index and without the $\Gamma$ dependence to refer to a set of
Hamiltonian operators with a common~$u$ and a common Dirac structure. In this work we only consider the cases where the effective
Hamiltonian in Eq.~\eqref{eq:H_LO} is dominated by one specific
$H^u$. In
this limit, there is no sum over $u$ and $\Gamma$ in Eq.~(\ref{eq:H_LO}), and the effective Hamiltonian takes the following form
\begin{equation}\label{eq:H_LO_u}
\mathcal{H}_{\text{eff}}^{(0)} = \sum_{m} f_{u,m} H^{u}_m\,.
\end{equation}
Note that in what follows, unless explicitly mentioned otherwise, when we say ``Hamiltonian'' we refer to the zeroth order expression given in Eq.~\eqref{eq:H_LO_u}.
\subsection{Comments about the assumptions}\label{sec:comments}
In the above section we make two working assumptions about the $U$-spin properties of the states and the Hamiltonian:
\begin{itemize}
\item[${(i)}$]
All initial and final state particles are arranged into pure $U$-spin multiplets.
\item[${(ii)}$]
The Hamiltonian contains only operators with one fixed value of $U$-spin and one type of Dirac structure.
\end{itemize}
These two assumptions are similar in nature. As we discuss in detail in Section~\ref{sec:universality}, from the $U$-spin point of view it does not matter if a multiplet belongs to a state or the Hamiltonian. Thus, these two assumptions are simply stating that the description of the $U$-spin set is given in terms of pure multiplets.
To put things into context, consider, for example, the $U$-spin limit Hamiltonian for charm decays
\begin{equation}\label{eq:Heff-charm}
\mathcal{H}^{(0)}_{\text{eff, charm}} = f_{0,0}H^0_0 + \sum_{m = -1}^{m = 1} f_{1,m} H^1_m.
\end{equation}
In Eq.~\eqref{eq:Heff-charm}, $H^{0}_0$ is an operator for singly Cabibbo-suppressed (SCS) decays
\begin{equation} \label{H0-charm}
H^0_0 = \frac{(\bar{u} s) (\bar{s} c)+(\bar{u} d) (\bar d c)}{\sqrt{2}}.
\end{equation}
Here, and in what follows, the Dirac structure is implicit. In terms of $U$-spin, $H^0_0$ is given by an antisymmetrized combination of two doublets:
\begin{equation}\label{eq:0-u-spin-form}
H_0^0 = \frac{\ket{+-} - \ket{-+}}{\sqrt{2}}.
\end{equation}
Similarly, the three operators that form the triplet in charm decay, $H^1$,
are given by
\begin{equation} \label{H1-charm}
H^1_{1} = (\bar{u} s) (\bar d c),\qquad
H^1_{-1}= -(\bar{u} d) (\bar s c), \qquad
H^1_0 = {(\bar{u} s) (\bar s c)-(\bar{u} d) ( \bar d c)\over \sqrt{2}},
\end{equation}
which can be written in terms of $U$-spin doublets as
\begin{equation}\label{eq:1-u-spin-form}
H^1_{1} = \ket{++},\qquad
H^1_{-1}= \ket{--}, \qquad
H^1_0 = \frac{\ket{+-} + \ket{-+}}{\sqrt{2}}.
\end{equation}
Here, $H^1_{1}$ is the Hamiltonian for doubly Cabibbo-suppressed (DCS) charm decays, and $H^1_{-1}$ is the one for Cabibbo-favored (CF) charm decays. $H_0^1$ is the CKM-leading part of the Hamiltonian for SCS charm decays, and $H^0_0$ is the corresponding CKM-suppressed part.
The CKM-factors in Eq.~\eqref{eq:Heff-charm} are given by
\begin{equation}\label{eq:charmCKM-f00}
f_{0,0} = \frac{V_{cs}^* V_{us} + V_{cd}^* V_{ud}}{\sqrt{2}}\approx 0,
\end{equation}
\begin{equation}\label{eq:charmCKM-f1m}
f_{1,1} = V_{cd}^* V_{us}, \qquad
f_{1,-1} = -V_{cs}^* V_{ud}, \qquad
f_{1,0} = \frac{V_{cs}^* V_{us} - V_{cd}^* V_{ud}}{\sqrt{2}}\approx \sqrt{2}\,\left(V_{cs}^* V_{us}\right).
\end{equation}
The approximations used for $f_{0,0}$ and $f_{1,0}$ hold up to $O(\lambda^4)$, where $\lambda \approx 0.22$ is the Wolfenstein parameter.
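These approximations are easy to check numerically. The following sketch evaluates the relevant CKM combinations in the Wolfenstein parametrization truncated at $O(\lambda^3)$; the numerical value of $\lambda$ is illustrative, and at this order the imaginary parts of the matrix elements do not enter.

```python
lam = 0.225  # Wolfenstein parameter lambda (illustrative value)

# Wolfenstein parametrization truncated at O(lambda^3); at this order the
# relevant CKM elements are real, so complex conjugation can be dropped.
V_ud = 1 - lam**2 / 2
V_us = lam
V_cd = -lam
V_cs = 1 - lam**2 / 2

singlet = V_cs * V_us + V_cd * V_ud   # combination entering f_{0,0}
triplet = V_cs * V_us - V_cd * V_ud   # combination entering f_{1,0}

print(singlet)                        # vanishes at this order in lambda
print(triplet / (2 * V_cs * V_us))    # -> 1, i.e. f_{1,0} ~ sqrt(2) V_cs* V_us
```

The singlet combination only becomes non-zero once $O(\lambda^5)$ terms of the parametrization are kept, consistent with the quoted accuracy of the approximations.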
In this example, due to the large ratio of CKM factors, we can neglect the singlet operator $H^0$ compared to the triplet $H^1$, resulting in the Hamiltonian of the form as in Eq.~\eqref{eq:H_LO_u}, that is
\begin{equation}
\mathcal{H}^{(0)}_{\text{eff, charm}} \simeq \sum_{m = -1}^{m = 1} f_{1,m} H^1_m.
\end{equation}
Thus, once the singlet operator is neglected, the Hamiltonian for charm decays satisfies our assumptions. This example also explicitly demonstrates how arbitrary $U$-spin representations can be built from doublets, which is important for the discussion that follows.
The two assumptions (\emph{i}) and (\emph{ii}) could have different degrees of validity in different systems. In principle, when one aims at writing higher order sum rules for a $U$-spin system, the degree to which these assumptions are satisfied needs also to be compared with the breaking parameter.
In this work we focus on the group-theoretical properties of $U$-spin systems and leave the questions related to the applicability of the results to physical systems for future study, giving only several amplitude-level examples for illustration in Sec.~\ref{sec:gen_algo}. Thus, in what follows, we do not dwell on the validity of our assumptions.
\subsection{Expansion in $U$-spin breaking}\label{sec:expansion-in-b}
Our main focus in this work is the systematic expansion in $U$-spin breaking. We define $\varepsilon$ to be the breaking parameter. The concrete numerical value of $\varepsilon$ can vary depending on the process.
On the fundamental level, the breaking arises from the mass difference between the $s$ and $d$ quarks. The relevant terms in the Lagrangian are $m_i \bar q_i q_i$ for $q_i=d,s$. Since the quarks form a doublet under $U$-spin, $\bar q_i q_i$ transforms as a sum of a singlet, $u=0,m=0$, and a triplet, $u=1,m=0$. The singlet respects the symmetry. It is the triplet that corresponds to $U$-spin breaking. Consequently, the breaking can be described by a spurion that transforms under
$U$-spin as an operator with $u=1$ and $m=0$. We denote this operator as
$H_{\varepsilon}$. We stress that, being a spurion, $H_{\varepsilon}$
has a definite $m=0$, and while we treat it as a triplet, only
its $m=0$ component is present.
Terms of order $b$ in $U$-spin breaking are expected to be suppressed by $\varepsilon^b$. These terms are obtained from the leading order Hamiltonian, Eq.~\eqref{eq:H_LO_u}, by taking a tensor product with $b$ spurions, leading to the full Hamiltonian at all orders of $U$-spin breaking
\begin{equation}\label{eq:Heff}
\mathcal{H}_{\text{eff}} = \sum_{m,b} f_{u,m} \left(H^u_m \otimes H_\varepsilon^{\otimes b}\right)\,,
\end{equation}
where
\begin{align}
H_{\varepsilon}^{\otimes b}&\equiv \underbrace{H_{\varepsilon} \otimes \dots \otimes H_{\varepsilon}}_{\text{$b$}}
\end{align}
is a tensor product of $b$ copies of $H_{\varepsilon}$.
The resulting direct sum decomposition of the tensor product above has the following property
\begin{equation} \label{eq:b-parity}
\left(1, 0\right)^{\otimes b} = \mathop{\oplus}_{j = 0}^{[b/2]} (b - 2j, 0)\,.
\end{equation}
That is, only the terms with total values of $U$-spin that have the same parity as the
parity of $b$ appear in the decomposition.
For example, for $b=3$ the
decomposition of the tensor product into direct sum contains only
$u=1$ and $u=3$ terms, but not $u=0$ nor $u=2$.
Moreover, the $u=1$ term from the $b=3$ case can be absorbed into the $u=1$
term from the $b=1$ case. That is, from the group theory point of view we
cannot tell if a breaking term with $u=1$ comes from $b=1$ or
$b=3$. This implies that the relevant new Hamiltonian structure at each order $b$ is given by the highest representation only. Therefore, when performing the sum rule analysis it is enough to consider only the $u=b$, $m=0$ term of $H_{\varepsilon}^{\otimes b}$. This property is discussed formally and in more detail in
Appendix~\ref{app:SR_counting_doublets}.
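The parity property of Eq.~\eqref{eq:b-parity} can be verified explicitly with a few lines of computer algebra. The sketch below (our own illustration, using SymPy's Clebsch-Gordan coefficients) couples $b$ copies of the $\ket{1,0}$ spurion one at a time and records which total $u$ survive with a non-zero amplitude; the $u=m'=0$ Clebsch-Gordan coefficients $\braket{u'\,0;1\,0|u'\,0}$ vanish, which is what removes the wrong-parity terms.

```python
from sympy import S
from sympy.physics.quantum.cg import CG

def couple_m0(state):
    """Couple one more |1,0> spurion; `state` maps total u -> amplitude (all m = 0)."""
    out = {}
    for u1, amp in state.items():
        for u in range(abs(u1 - 1), u1 + 2):   # triangle rule for u1 x 1
            c = CG(u1, 0, 1, 0, u, 0).doit()   # <u1 0; 1 0 | u 0>
            if c != 0:
                out[u] = out.get(u, S(0)) + amp * c
    return out

state = {0: S(1)}                    # b = 0: the trivial representation
for b in range(1, 4):
    state = couple_m0(state)
    # only u with the same parity as b survive:
    print(b, sorted(u for u, amp in state.items() if amp != 0))
```

For $b=3$ this reproduces the statement in the text: only $u=1$ and $u=3$ appear.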
We close this subsection with two remarks.
\begin{enumerate}
\item
The fact that the breaking comes only from a $u=1$, $m=0$ spurion is important to the results that follow.
\item
Note that while we concentrate on $U$-spin, the symmetry-breaking operator for isospin is also a $(1,0)$ spurion.
\end{enumerate}
\subsection{Decomposition in terms of reduced matrix elements} \label{sec:basis-transformation}
We define the \emph{physical basis of amplitudes} as a basis in which each particle in the initial and final state is represented by a component of a multiplet with definite value of $U$-spin, and the operators in the effective Hamiltonian are written as tensor products of operators from the $U$-spin limit Hamiltonian and possibly several insertions of the $U$-spin breaking spurion, see Eq.~\eqref{eq:Heff}.
We use $\mathcal{A}_j$ to denote amplitudes in the physical basis.
The physical basis of amplitudes $\mathcal{A}_j$ is to be contrasted with the \textit{$U$-spin basis of amplitudes}. The $U$-spin basis is defined as the basis in which the initial state, final state and all the terms in the Hamiltonian have definite values of total $U$-spin. For both bases, it is also natural to talk about the physical and $U$-spin basis for states and operators.
For formal details and definitions of physical and $U$-spin bases see Appendix~\ref{app:physical_vs_Uspin_basis}. Here we continue with a more schematic discussion.
The $U$-spin set is defined by listing the $U$-spin representations of the particles in the initial state, final state, and the Hamiltonian. Each amplitude from the $U$-spin set is defined by its specific set of $m$ QNs, each corresponding to a component of a representation in the initial or final state. We use the index $j$ to enumerate the amplitudes of the $U$-spin set and, since all the amplitudes are defined by the sets of $m$ QNs, the index $j$ implicitly contains information about $m$. We assume that $\mathcal{H}_\text{eff}$ is known and can be written up to an arbitrary order of $U$-spin breaking using Eq.~\eqref{eq:Heff}. Each decay amplitude is then given by
\begin{equation} \label{eq:Aj-phys}
\mathcal{A}_j = {\mel{\text{out}}{\mathcal{H}_{\text{eff}}}{\text{in}}}_j.
\end{equation}
The amplitude in Eq.~\eqref{eq:Aj-phys} is an amplitude written in the physical basis. The rotation from the physical basis to the $U$-spin basis is
performed by the decomposition of the tensor products in $\ket{\text{in}}$,
$\ket{\text{out}}$, and $\mathcal{H}_\text{eff}$ into direct sums of irreducible representations such that each term in the resulting decomposition of each $\mathcal{A}_j$ has a definite value of $U$-spin in its initial state, final state, and the Hamiltonian.
The application of the Wigner-Eckart theorem to this decomposition allows us to rewrite the amplitudes $\mathcal{A}_j$ in terms of so-called \emph{reduced matrix elements} (RMEs) $X_{\alpha}$:
\begin{equation}\label{eq:A_decomp}
\mathcal{A}_j = \sum_{\alpha} c_{j \alpha} X_\alpha\,,
\end{equation}
where $\alpha$ is a multi-index defined in Eq.~\eqref{eq:def-alpha}. For formal derivation of Eq.~\eqref{eq:A_decomp} see Appendix~\ref{app:RMEdecomposition}. We emphasize that the RMEs do not depend on the $m$-QNs associated with the states nor on the
$m$-QNs of the operators in the Hamiltonian.
The information on $m$ is contained in the coefficients $c_{j\alpha}$ only.
In our convention, the $c_{j\alpha}$ are products of Clebsch-Gordan (CG) coefficients $C_{j\alpha}$ and weak interaction parameters,
\begin{equation}
c_{j\alpha} = C_{j\alpha} \, f_{u,m}\,,
\end{equation}
where $m$ is fully defined by the initial and final states of the amplitude $\mathcal{A}_j$. The value of $m$ in the Hamiltonian is given as the difference between the $m$ QNs of the final and the initial states. Only operators with this specific QN $m$ contribute to a given amplitude.
We stress that in our notation the RMEs $X_\alpha$ are complex. Yet,
they contain
strong phases only: they do not include CKM matrix elements and thus
do not carry a weak phase.
The factorization formula that follows from the Wigner-Eckart theorem is then given as
\begin{equation}
\mathcal{A}_j = \sum_\alpha C_{j\alpha} f_{u,m} X_\alpha\,. \label{eq:factorization-gen}
\end{equation}
Note that, since $m$ is fixed by the initial and final state, it is not contained in the multi-index $\alpha$. Moreover, since in this work we consider only cases where only one $u$ is present in the
leading order Hamiltonian, we can factor out the CKM
factor $f_{u,m}$ and thus Eq.~\eqref{eq:factorization-gen} can be rewritten as
\begin{equation}
\mathcal{A}_j = f_{u,m} \sum_\alpha C_{j\alpha} X_\alpha\,. \label{eq:factorization}
\end{equation}
We learn that in the case under consideration the CKM dependence can be factored out.
That is, we can factorize the weak and strong physics. We emphasize that this factorization takes place due to our assumption that there is only one $U$-spin and one Dirac structure in the Hamiltonian. The fact that in this case the CKM dependence factors out is important for the results that follow.
\subsection{Amplitude sum rules}\label{sec:sum-rules}
We introduce the notion of \emph{CKM-free amplitudes} which we denote
by $A_j$ and define as
\begin{equation} \label{eq:def-ckm-free-amp}
A_j = \frac{\mathcal{A}_j}{f_{u,m}}\,.
\end{equation}
Note the following regarding Eq.~\eqref{eq:def-ckm-free-amp}:
\begin{enumerate}
\item
In general, we can define CKM-free amplitudes only for
cases where in the $U$-spin limit the Hamiltonian has only one specific
$u$ and one Dirac structure.
\item
We can define CKM-free amplitudes only for
cases where the CKM factor, $f_{u,m}$, is not zero.
\end{enumerate}
It is point number two above that makes our result of limited use for isospin. In many cases charge conjugation
implies $f_{u,m}=0$ for the case of isospin.
In terms of the CKM-free amplitudes the decomposition
in Eq.~\eqref{eq:factorization} takes the form
\begin{equation}\label{eq:CKMfree_decomposition}
A_j = \sum_\alpha C_{j \alpha} X_\alpha\,.
\end{equation}
From this point on we are working with CKM-free amplitudes.
It is possible that some of the RMEs, $X_\alpha$, enter the
decompositions of the amplitudes from the $U$-spin set as fixed linear
combinations. Thus, we define $n_X^{(b)}$ to be the number of linearly
independent combinations of RMEs in the decompositions of the
amplitudes to an order of breaking $b$.
Note the following:
\begin{itemize}
\item
The number of such linearly independent combinations
is equal to the rank of the matrix of Clebsch-Gordan coefficients,
that is, $n_X^{(b)} = \text{rank } [ C_{j\alpha} ]$.
(Recall that the multi-index $\alpha$ includes $b$.) Here, the notation \lq\lq{}$[\dots]$\rq\rq{} is used to represent a matrix that corresponds to an object with given indices.
\item
The number of linearly independent combinations of RMEs can only increase when we go to higher
order in the breaking, that is, $n_X^{(b)} \le n_X^{(b+1)}$.
\item
The maximum value of $n_X^{(b)}$ is the same as the number of amplitudes in the $U$-spin system, that is, $n_X^{(b)} \le n_A$.
\end{itemize}
In particular, we are interested in the case when
\begin{equation}
n_X^{(b)}< n_A\,.
\end{equation}
In cases like this, there are sum rules between the amplitudes. That
is, there are algebraic relations between amplitudes of the form
\begin{equation}
\sum_j w_j A_j = 0, \label{eq:sum-rule-form}
\end{equation}
where $w_j$ are numerical constants.
We define $n_{\text{SR}}^{(b)}$ to be
the number of sum rules that are valid up to order $b$.
It is given by
\begin{equation}
n_{\text{SR}}^{(b)} = n_A - n_X^{(b)} = n_A- \text{rank } [C_{j \alpha}]\,.
\end{equation}
The sum rules can be found as the null space of the matrix
$[C_{j\alpha}]^{T}$~\cite{Grossman:2013lya}. This treatment thus suggests that in order to find
amplitude sum rules for a given system one needs first to explicitly
find the matrix $[C_{j \alpha}]$ and then, if its rank is less than the
number of amplitudes, $n_A$, the null space of this matrix gives the desired sum rules.
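The procedure just described can be sketched in a few lines of linear algebra. The matrix below is a toy stand-in for $[C_{j\alpha}]$ (its entries are hypothetical, chosen only to illustrate the rank and null-space computation, not taken from any physical system):

```python
import numpy as np

# Toy stand-in for the matrix of CG coefficients [C_{j alpha}]:
# rows = amplitudes A_j, columns = RME combinations X_alpha (hypothetical).
C = np.array([
    [1.0,  1.0],
    [1.0, -1.0],
    [1.0,  1.0],
])

n_A = C.shape[0]
n_X = np.linalg.matrix_rank(C)       # number of independent RME combinations
n_SR = n_A - n_X                     # number of sum rules

# Sum-rule vectors w_j span the null space of C^T:
_, _, vt = np.linalg.svd(C.T)
null_space = vt[n_X:]                # here a single row, proportional to (1, 0, -1)
print(n_SR, null_space @ C)          # residual ~ 0: sum_j w_j C_{j alpha} = 0
```

Here rows $1$ and $3$ of the toy matrix coincide, so the rank is $2 < n_A = 3$ and there is a single sum rule, $A_1 - A_3 = 0$.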
\subsection{Universality of sum rules}\label{sec:universality}
Consider as an example three systems of amplitudes whose $U$-spin structure can be described as follows:
\begin{itemize}
\item System I: A $U$-spin singlet in the initial state, two $U$-spin doublets in the final state, and the
Hamiltonian is a triplet, i.e.
$\ket{i}\sim \ket{0}$,
$\mathcal{H}\sim\ket{1}$,
$\ket{f}\sim\ket{1/2}\otimes \ket{1/2}$.
\item System II: A $U$-spin doublet in the initial state, a
$U$-spin triplet in the final state, the Hamiltonian is a doublet, i.e.
$\ket{i}\sim\ket{1/2}$,
$\mathcal{H}\sim \ket{1/2}$,
$\ket{f}\sim\ket{1}$.
\item System III: $U$-spin singlets in the initial state and the Hamiltonian, four $U$-spin doublets in the final state, i.e. $\ket{i}\sim \ket{0}$,
$\mathcal{H}\sim\ket{0}$\,,
$\ket{f}\sim \ket{1/2}\otimes \ket{1/2}\otimes \ket{1/2}\otimes \ket{1/2}$\,.
\end{itemize}
Let us first discuss systems I and II. In the standard approach systems I and II look different
and require one to find and study the matrix $C_{j \alpha}$ for
each of them separately.
However, from the point of view of $U$-spin symmetry, these systems are
identical: the sum rules for the two systems are the same. In Appendix~\ref{app:signs} we provide a formal proof of this statement and explain what it means that sum rules for two such different systems are identical. Basically, by identical we mean that
there is a one-to-one mapping between the amplitudes of the two systems and that the sum rules for the two are the same up to relative signs between amplitudes.
We define a \emph{universality class} as a collection of all the $U$-spin sets that are described by the same set of representations, independently of whether these representations belong to the initial state, the final state, or the Hamiltonian. According to this definition, systems I and II belong to the same universality class. As we show in Appendix~\ref{app:signs}, it is always enough to study only one $U$-spin set from any universality class; the results for the rest of the systems in the class then follow trivially. Every universality class contains a $U$-spin set in which the $U$-spin limit Hamiltonian and the initial state are singlets and all the representations belong to the final state. This is among the most straightforward cases to study, and in our work we focus on it. That is, we first obtain the sum rules for this case and then generalize our results to any other system in the same universality class.
To get an intuitive understanding of the universality of sum rules
we recall the concept of crossing symmetry. Let us focus on the $U$-spin structure of amplitudes. Crossing symmetry suggests that $U$-spin multiplets can be moved freely between the initial state, the final state, and the Hamiltonian, while the relations between amplitudes, that is, the sum rules, are preserved up to possible relative signs between amplitudes. Here we put the Hamiltonian on the same footing as the initial and final states since from the $U$-spin point of view the products $H^{u}_m \ket{u^\prime, m^\prime}$ and $ \ket{u,m} \otimes \ket{u^\prime, m^\prime}$ are identical. In more technical terms, when
we move multiplets between the initial state, final state, and the Hamiltonian, the only change that takes place is the flipping of a sign of the $m$-QNs of some of the multiplets. As a result, the two matrices $C_{j\alpha}$ for systems I and II might look different, but their
structure, that is the number of sum rules and their form (up to relative signs between amplitudes) is preserved.
In addition to the relations between $U$-spin sets belonging to the same universality class, one can also establish relations between $U$-spin sets from different universality classes. For that, recall that all higher $U$-spin representations can be obtained from tensor products of $U$-spin doublets with proper symmetrization taken into account.
This fact implies that all the sum rules for any system with arbitrary $U$-spin representations can be obtained from the sum rules of a system that is made only from doublets. For example, systems~I and~II can be obtained from system~III via symmetrization of the tensor product of two out of the four doublets. As a result the sum rules for systems~I and~II can be derived from the sum rules for system~III.
The above two observations allow us to view any $U$-spin system as a set of $U$-spin doublets where for some of them a symmetrization rule is specified.
Thus, we can define the number $n$ of ``would-be'' doublets as the minimal number of doublets needed to describe a $U$-spin system. For systems~I,~II, and III considered above, $n = 4$.
In our work we show that for an arbitrary $U$-spin set with the number of would-be doublets given by $n$, all the sum rules can be derived from the sum rules for a $U$-spin set described by $n$ doublets. Moreover, one never needs to explicitly find the matrix $C_{j \alpha}$ to write the sum rules and the $U$-spin symmetry ensures that the sum rules take a very simple form.
\section{Systems of $n$ doublets}\label{sec:n_doublet_system}
In the previous section we motivated the consideration of the $U$-spin set of processes with the following $U$-spin structure:
\begin{equation}\label{eq:only_doublets_process}
0 \xrightarrow{\hspace{3pt} u = 0 \hspace{3pt}} \left(\frac{1}{2}\right)^{\otimes n},
\end{equation}
where the initial state is a $U$-spin singlet, the final state has $n$
doublets and the process is realized via a singlet operator in the
$U$-spin limit. The breaking of order $b$
is realized via the insertion of a tensor product of $b$ spurion
operators $H_\varepsilon^{\otimes b}$, as discussed in Section~\ref{sec:expansion-in-b}.
Note that in order to have non-zero amplitudes $n$ must be even. At this point we focus on the $U$-spin structure of
processes, that is, we consider an abstract system of $U$-spin
doublets. We further assume that all the $U$-spin doublets are distinguishable. This assumption is motivated by the fact that the physical multiplets generally are distinguishable due to the additional momentum variables assigned to them, unless a specific kinematic region is studied.
\subsection{Amplitude $n$-tuples}\label{sec:An-tuples-doublets}
We consider a set of CKM-free amplitudes, $A_j$, that form a $U$-spin
set of the processes in Eq.~\eqref{eq:only_doublets_process}. We map any set of amplitudes $A_j$ onto a set of \emph{amplitude $n$-tuples} which we define as follows. We order
the $U$-spin doublets in an arbitrary (but defined) order and for each amplitude we represent the up components of the doublets as \lq\lq{}$+$\rq\rq{} and the down components as \lq\lq{}$-$\rq\rq{}.
The $n$-tuple representation of each process is then defined as a string of pluses and minuses for all the doublets according to the set order.
Note the following:
\begin{enumerate}
\item
The length of the amplitude $n$-tuple is $n$, which is even.
\item
The numbers of pluses \lq\lq{}$+$\rq\rq{} and minuses
\lq\lq{}$-$\rq\rq{} in the $n$-tuple are equal, that is, there are $n/2$
of each.
\item
While we could write $n$-tuples where the numbers of \lq\lq{}$+$\rq\rq{} and \lq\lq{}$-$\rq\rq{} are not equal, the corresponding amplitudes vanish.
\item
In what follows we are using the terms $n$-tuple and amplitude interchangeably.
\end{enumerate}
As a bookkeeping device we assign numbers to amplitudes according to the binary code given by the $n$-tuple and use the assignment of
\begin{itemize}
\item \lq\lq{}$+$\rq\rq{} to \lq\lq{}one\rq\rq{}.
\item \lq\lq{}$-$\rq\rq{} to \lq\lq{}zero\rq\rq{}.
\end{itemize}
Everywhere in this paper, if not stated otherwise explicitly, we use the following index notation:
\begin{itemize}
\item $i$ takes values in the range $0, \dots, 2^{n-1} - 1$.
\item $\ell$ takes values in the range $2^{n-1}, \dots, 2^n - 1$.
\item $j$ and $k$ are used as generic indices.
\end{itemize}
Note that due to the constraint that the numbers of pluses and minuses in any $n$-tuple are equal, not all the values in the above ranges for $i$ and $\ell$ are used.
To demonstrate our notation we use the $n = 4$ case as an example.
The non-vanishing $n$-tuples with corresponding indices $i$ and $\ell$
are given by
\begin{equation}\label{eq:n4_example_notation}
\begin{gathered}
A_3 = (-, -, +, +) \qquad A_{12} = (+, +, -, -)\\
A_{5} = (-, +, -, +) \qquad A_{10} = (+, -, +, -)\\
A_{6}=(-, +, +, -) \qquad A_{9} = (+, -, -, +)
\end{gathered}
\end{equation}
where the first column contains the $A_i$ amplitudes with $i = 3, 5, 6$ and
the second column contains the corresponding $A_\ell$ amplitudes with $\ell = 12, 10, 9$. The $U$-spin structure of any process is fully described by its
corresponding $n$-tuple.
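The bookkeeping above is straightforward to automate. The sketch below enumerates the non-vanishing $n$-tuples for $n=4$ and reproduces the indices of Eq.~\eqref{eq:n4_example_notation}; the helper name \texttt{n\_tuple} is ours.

```python
n = 4

def n_tuple(index):
    """Binary code of an amplitude: bit 1 -> '+', bit 0 -> '-', leading bit first."""
    return tuple('+' if (index >> (n - 1 - k)) & 1 else '-' for k in range(n))

# Non-vanishing amplitudes carry equal numbers of '+' and '-',
# i.e. exactly n/2 set bits in the binary code:
indices = [j for j in range(2**n) if bin(j).count('1') == n // 2]
print(indices)               # [3, 5, 6, 9, 10, 12]
print(n_tuple(3))            # ('-', '-', '+', '+')
print(n_tuple(12))           # ('+', '+', '-', '-')
```

The same enumeration works for any even $n$; the number of non-vanishing amplitudes is $\binom{n}{n/2}$.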
\subsection{$U$-spin amplitude pairs}\label{sec:U-spin-amp-pairs}
We next define the notion of \emph{$U$-spin conjugation}. In physical
terms in our phase convention this operation is realized by simultaneously performing the following exchanges
\begin{equation}
s \leftrightarrow d, \qquad \bar s \leftrightarrow -\bar d.
\end{equation}
In the notation of $n$-tuples, $U$-spin conjugation
corresponds to a complete exchange between \lq\lq{}$+$\rq\rq{} and \lq\lq{}$-$\rq\rq{} for all the entries of the $n$-tuples.
That is, the $U$-spin conjugation operator
interchanges all the up and down components of all the $U$-spin
doublets.
We call a pair of amplitudes that are $U$-spin
conjugate to each other a \emph{$U$-spin pair of amplitudes} or simply
\emph{$U$-spin pair}. In the example of
Eq.~\eqref{eq:n4_example_notation} the $U$-spin pairs are
\begin{equation}
A_3 \text{ and } A_{12}, \qquad A_5 \text{ and } A_{10}, \qquad A_6 \text{ and } A_{9}.
\end{equation}
The relation between their indices is
\begin{equation} \label{eq:cong-ind}
\ell = 2^n-1-i\,.
\end{equation}
Similarly to the case of amplitudes, we can use $n$-tuples to refer to
$U$-spin pairs. Since the number of $U$-spin pairs is half the number
of amplitudes,
we adopt a convention of using the $n$-tuples that start with a minus sign to describe the $U$-spin pairs of amplitudes. In the example above we thus have the following correspondence between $U$-spin pairs and $n$-tuples:
\begin{equation}
\begin{gathered}
A_3 \text{ and } A_{12}: \qquad \left(-, -, +, +\right),\\
A_5 \text{ and } A_{10}: \qquad \left(-,+,-,+\right),\\
A_6 \text{ and } A_{9}:\, \, \qquad \left(-,+,+,-\right).
\end{gathered} \label{eq:notation-for-pairs}
\end{equation}
Our starting point in the analysis of this section is Eq.~\eqref{eq:CKMfree_decomposition} which we rewrite here for convenience for an amplitude $A_i$
\begin{equation}\label{eq:factorization-again}
A_i = \sum_\alpha C_{i \alpha} X_\alpha \,.
\end{equation}
Recall that $C_{i \alpha}$ are products of CG coefficients. According to our result from Appendix~\ref{app:Upair_relation}, Eq.~\eqref{eq:Upair_decomp_theorem}, the decomposition of $A_\ell$
is given by
\begin{equation}\label{eq:u-pair}
A_\ell= (-1)^{p}
\sum_\alpha (-1)^b C_{i \alpha} X_\alpha\,,
\end{equation}
where $C_{i\alpha}$ and $X_\alpha$ are the same sets that appear in
Eq.~\eqref{eq:factorization-again}, the relation between $\ell$ and $i$ is given in Eq.~\eqref{eq:cong-ind},
and
$p$ is an integer defined in Eq.~(\ref{eq:p_def}).
Note that since we care only about the parity of $p$, adding multiples of two as well as multiplying $p$ by an overall sign does not matter: these operations leave $(-1)^p$ invariant.
For the
case of $n$ doublets in the final state, and only singlets in the initial state and the Hamiltonian, we have
\begin{align}
p &= \frac{n}{2}\,.
\end{align}
The order of the breaking of the RME $X_\alpha$ is denoted by
$b$. We emphasize that $b$ is included in the multi-index $\alpha$.
We also recall that
$p$ is the same for all amplitudes from the $U$-spin set.
We would also like to emphasize the following point regarding Eqs.~\eqref{eq:factorization-again}
and~\eqref{eq:u-pair}. The relative sign between the terms in the decomposition of the $U$-spin pair amplitudes alternates with the order of the breaking $b$. For
example, for even $p$, there is no relative sign between terms at
$b=0$, there is a relative sign between the terms at $b=1$, no relative sign at $b=2$, and so on.
Before we go on and discuss the implications of these results, we provide a simple intuition for where the alternating minus signs come from. The reason is related to the fact that the $U$-spin breaking is realized by a spurion that transforms as $u=1$ and $m=0$. Writing the spurion in matrix form using $a_{jk} = a_\alpha (\sigma_\alpha)_{jk}$, we find that it is proportional to $\sigma_3$, that is
\begin{equation}
\begin{pmatrix} +1 & 0 \\ 0 & -1\end{pmatrix}\,. \label{eq:pauli-matrix}
\end{equation}
We see that under $U$-spin conjugation, which in Eq.~(\ref{eq:pauli-matrix}) corresponds to the exchange $j \leftrightarrow k$, we gain a minus sign. Applying the spurion $b$ times, the resulting expression
picks up a $(-1)^b$ under $U$-spin conjugation.
It is this property that makes the $U$-spin amplitude pairs simpler to work with.
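The sign flip underlying this argument is the one-line matrix identity $\sigma_1 \sigma_3 \sigma_1 = -\sigma_3$, which can be checked directly:

```python
import numpy as np

sigma1 = np.array([[0, 1], [1, 0]])    # swaps the two U-spin components
sigma3 = np.array([[1, 0], [0, -1]])   # direction of the u=1, m=0 spurion

# U-spin conjugation acts on the matrix indices as j <-> k:
conjugated = sigma1 @ sigma3 @ sigma1
print(conjugated)                      # equals -sigma3
```

Applying the conjugation to a product of $b$ spurion insertions flips the sign $b$ times, giving the overall $(-1)^b$.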
\subsection{The $a$-type and $s$-type amplitudes}
We next define
\begin{equation}\label{eq:as-comb-def}
a_i \equiv A_i-
(-1)^p
A_\ell , \qquad
s_i \equiv A_i +
(-1)^p
A_\ell,
\end{equation}
where $a_i$ are the \emph{a-type amplitudes} and are defined as the
anti-symmetric combinations of CKM-free amplitudes from a $U$-spin
pair,
and $s_i$ are the \emph{s-type amplitudes} and are defined as the symmetric
combinations of CKM-free amplitudes from a $U$-spin pair.
Note the following:
\begin{enumerate}
\item
The $a$- and $s$-type amplitudes are eigenstates of $U$-spin conjugation. All $a$-type amplitudes of a given system have eigenvalue $(-1)^{p+1}$, while all $s$-type amplitudes have eigenvalue $(-1)^p$.
\item
The factor $(-1)^p$ in Eq.~(\ref{eq:as-comb-def}) cancels the same factor in Eq.~(\ref{eq:u-pair}). Thus, when the $a$- and $s$-type amplitudes are written in terms of the RMEs, they do not depend on $p$
\begin{equation} \label{eq:a-s-rme}
a_i = \sum_\alpha \Big(1-(-1)^b\Big)C_{i \alpha} X_\alpha \,, \qquad
s_i = \sum_\alpha \Big(1+(-1)^b\Big)C_{i \alpha} X_\alpha \,. \end{equation}
\end{enumerate}
The definitions of the $a$-type and $s$-type amplitudes turn out to be
particularly helpful when constructing sum rules. First, we note that
working with $a$- and $s$-type amplitudes instead of using $A_i$ and $A_\ell$ is merely a change of
basis. Thus,
all the sum rules can be expressed in terms of $a_i$ and
$s_i$. In the following we present most of the results in the basis of the
$a$- and $s$-type amplitudes.
Using Eq.~\eqref{eq:a-s-rme}
we find the following properties:
\begin{itemize}
\item
$a_i$ only contains terms that are odd in the breaking $b$,
\item
$s_i$ only contains terms that are even in the breaking $b$.
\end{itemize}
One important direct result of the above is that the $a$- and $s$-type
amplitudes are fully decoupled from each other. Each of them contains
different sets of $X_\alpha$: the $a$-type amplitudes can be written as
a sum of
$X_\alpha$ with $b$ odd, while the $s$-type amplitudes can be written as
a sum of
$X_\alpha$ with $b$ even. Thus, we can write sum rules for any system such that each sum rule involves only $a$-type or $s$-type amplitudes. We call these sum rules \emph{$a$-type sum rules} and \emph{$s$-type sum rules}, respectively.
Based on the above properties we also note the following:
\begin{enumerate}
\item
To leading order, that is, for $b=0$,
\begin{equation}\label{eq:a-SR-LO}
a_i = 0.
\end{equation}
This result provides us with $n_A/2$ sum rules and exhausts the set of
linearly independent $a$-type sum rules that are valid at the leading
order. We also call this type of sum rule \lq\lq{}trivial\rq\rq{}.
These leading order $a$-type sum rules were pointed out before,
see Refs.~\cite{Gronau:2000zy, Fleischer:1999pa, Gronau:2000md,Jung:2009pb}.
\item
For even $b$: Any $s$-type sum rule that is valid to order $b$
is also valid to order $b+1$.
\item
For odd $b$: Any $a$-type sum rule that is valid to order $b$ is also valid to order $b+1$.
\end{enumerate}
In particular, we learn that $s$-type sum rules that are valid at
leading order also hold at the first order of breaking.
\subsection{Amplitude sum rules}\label{sec:amp-sum-rules}
In this subsection we show how to obtain all the sum rules for a given
system of $n$ $U$-spin doublets.
The subsection is based on the results derived in Appendices~\ref{app:signs}
and~\ref{app:ThII_doublets}.
Employing the notation for $U$-spin pairs introduced in Eq.~(\ref{eq:notation-for-pairs}), we define $S_j^{(k)}$ to be a subset of all $U$-spin pairs that share $k$ minuses at the same positions of their $n$-tuple representations.
Given that there could be more than
one such subset, we also attach an index $j$ to them.
In the example of $n = 4$ the $U$-spin pairs are given explicitly in Eq.~(\ref{eq:notation-for-pairs}) and there are consequently four such subsets: three with $k=2$ and one with $k = 1$:
\begin{equation} \label{eq:S-defs-ex}
\begin{gathered}
S_1^{(2)} = \{\left(-,-,+,+\right)\}, \hspace{25pt} S_2^{(2)} = \{\left(-,+,-,+\right)\}, \hspace{25pt} S_3^{(2)} = \{\left(-,+,+,-\right)\},\\
S_1^{(1)} = \{\left(-,-,+,+\right), \left(-,+,-,+\right), \left(-,+,+,-\right)\}.
\end{gathered}
\end{equation}
The numbering scheme is arbitrary. Note that the subsets can overlap.
We further define $S^{(k)} \equiv \{S_j^{(k)}\}$ to be the set of all $S_j^{(k)}$ that
share the same $k$.
In the example of $n = 4$ there are two such sets of subsets
\begin{equation}
\label{eq:neq4sets}
S^{(2)} = \{S_1^{(2)}, S_2^{(2)}, S_3^{(2)}\}, \qquad
S^{(1)} = \{S_1^{(1)}\},
\end{equation}
where we used the definitions of Eq.~\eqref{eq:S-defs-ex}.
We define $n_S^{(k)}$ to be the number of subsets $S_j^{(k)}$ in the set $S^{(k)}$. It is given by
\begin{equation}\label{eq:n_S_k}
n_S^{(k)} = \binom{n-1}{k-1}.
\end{equation}
This result is a consequence of the first entry being fixed to be a minus sign.
Therefore only $k-1$ minus signs are left to be picked from the $n-1$ entries of the $n$-tuples. In the $n=4$ example we have
\begin{equation}
n_S^{(2)}=\binom{3}{1}=3,
\qquad
n_S^{(1)}=\binom{3}{0}=1,
\end{equation}
in agreement with the explicit counting in Eq.~(\ref{eq:neq4sets}).
All subsets $S^{(k)}_j$ from a set $S^{(k)}$ consist of the same number of amplitude pairs. This number, $n_A^{(k)}$, is given by
\begin{equation}\label{eq:nA-k}
n_A^{(k)} = \binom{n-k}{{n}/{2}-k}\,.
\end{equation}
In the $n=4$ example we have
\begin{equation}
n_A^{(2)}=\binom{3}{0}=1, \qquad
n_A^{(1)}=\binom{3}{1}=3,
\end{equation}
in agreement with the explicit counting in Eq.~(\ref{eq:S-defs-ex}).
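The counting formulas in Eqs.~\eqref{eq:n_S_k} and~\eqref{eq:nA-k} can be cross-checked by brute-force enumeration. The following Python sketch (illustrative only; the helper names are ours, not part of the derivation) enumerates the pair representatives, builds the subsets $S_j^{(k)}$, and verifies both formulas for several values of $n$:

```python
from itertools import combinations
from math import comb

def pair_tuples(n):
    """One representative per U-spin pair: the n-tuples with equally
    many pluses and minuses whose first entry is a minus, stored as
    the set of minus positions."""
    return [frozenset(pos) for pos in combinations(range(n), n // 2)
            if 0 in pos]

def subsets_S(n, k):
    """The subsets S_j^{(k)}: for each choice of k minus positions
    (always including position 0), the pairs sharing those minuses."""
    reps = pair_tuples(n)
    subsets = []
    for extra in combinations(range(1, n), k - 1):
        shared = frozenset((0,) + extra)
        subsets.append([t for t in reps if shared <= t])
    return subsets

for n in (4, 6, 8):
    for k in range(1, n // 2 + 1):
        S = subsets_S(n, k)
        assert len(S) == comb(n - 1, k - 1)                       # n_S^(k)
        assert all(len(s) == comb(n - k, n // 2 - k) for s in S)  # n_A^(k)
```

For $n=4$ this reproduces the explicit counting of Eqs.~(\ref{eq:S-defs-ex}) and~(\ref{eq:neq4sets}).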
We are now ready to write all the sum rules. As we show in Appendix~\ref{app:ThII_doublets}, each $S^{(k)}_j$ corresponds to a different sum rule. The correspondence is as follows
\begin{itemize}
\item
For $S^{(k)}_j$ with
$b=n/2-k$ even, the sum rule is given by
\begin{equation}\label{eq:sym-form-of-SR-a}
\sum_{a_i \in S^{(k)}_j} a_i = 0.
\end{equation}
\item
For $S^{(k)}_j$ with
$b=n/2-k$ odd, the sum rule is given by
\begin{equation}\label{eq:sym-form-of-SR-s}
\sum_{s_i \in S^{(k)}_j} s_i = 0.
\end{equation}
\end{itemize}
Note the following regarding Eqs.~\eqref{eq:sym-form-of-SR-a} and \eqref{eq:sym-form-of-SR-s}:
\begin{enumerate}
\item
Each of the sum rules in Eqs.~(\ref{eq:sym-form-of-SR-a}) and~(\ref{eq:sym-form-of-SR-s})
is valid up to order $b=n/2-k$.
\item
Each of the sum rules in Eqs.~(\ref{eq:sym-form-of-SR-a}) and~(\ref{eq:sym-form-of-SR-s}) is broken at order $b+1$.
\item
There could be, however, linear combinations of these sum rules that are not broken at order $b+1$. For example, the sum rules that correspond to subsets $S_j^{(k - 2)}$ are linear combinations of the sum rules that correspond to subsets $S_j^{(k)}$, which are valid up to order $b$.
\item As we show at the end of Appendix~\ref{app:SR_counting_doublets}, the maximum order of breaking for which there is still a sum rule in the system is $b_\text{max} = n/2-1$.
\item
At $b_\text{max}$ there is only one sum rule. The type of this sum rule alternates with the value of $b_\text{max}$. It is an $a$-type sum rule for odd
${n}/{2}$, that is, even $b_\text{max}$, and an $s$-type sum rule
for even ${n}/{2}$, that is, odd $b_\text{max}$. This sum rule is given simply by the sum of all $a$-type or $s$-type amplitudes in the system.
\end{enumerate}
Given the correspondence between $S^{(k)}_j$ and the sum rules, we can find an algorithm to generate all the sum rules that are valid to order $b$ (with $b \le n/2 - 1$).
The algorithm is as follows. \begin{itemize}
\item For $b$ even: use all $S^{(k)}_j$ with
$k=n/2-b$ to generate the relevant $a$-type sum rules. Then use $k=n/2-b-1$ to generate the relevant $s$-type sum rules.
\item For $b$ odd: use all $S^{(k)}_j$ with
$k=n/2-b$ to generate the relevant $s$-type sum rules. Then use $k=n/2-b-1$ to generate the relevant $a$-type sum rules.
\end{itemize}
Table~\ref{tab:sum-rules-rule} in Appendix~\ref{app:ThII_doublets}
summarizes the counting and the form of sum
rules for $a$- and $s$-type amplitudes at even and odd
orders of breaking. One can readily use this table to write the sum
rules for a system of $n$ doublets in the final state at any order of
breaking.
In the $n=4$ example, the sum rules that are valid to $b=0$ come from $S_j^{(2)}$ and they are
\begin{equation} \label{eq:n-4-a}
a_3=0, \qquad a_5=0, \qquad a_6=0.
\end{equation}
The sum rule for
$b=1$ corresponds to $S_1^{(1)}$ and it is given by
\begin{equation} \label{eq:n-4-s}
s_3+s_5+s_6=0.
\end{equation}
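The correspondence between subsets and sum rules can be implemented mechanically. The sketch below (illustrative Python, with our own helper names) assumes the binary labeling of Section~\ref{sec:An-tuples-doublets}, in which the signs of an $n$-tuple are read left to right as a binary number with \lq\lq{}$-$\rq\rq{}~$\to 0$ and \lq\lq{}$+$\rq\rq{}~$\to 1$; this convention is consistent with the indices appearing in Eqs.~\eqref{eq:n-4-a} and~\eqref{eq:n-4-s}:

```python
from itertools import combinations

def label(minus_positions, n):
    """Index of an n-tuple: read the signs left to right as a binary
    number with '-' -> 0 and '+' -> 1, so (-,-,+,+) -> 3."""
    return sum(1 << (n - 1 - p) for p in range(n)
               if p not in minus_positions)

def sum_rules(n, b):
    """Sum rules first broken at order b+1 for n final-state doublets,
    returned as ('a' or 's', sorted labels of the pairs summed over)."""
    k = n // 2 - b                       # number of shared minus signs
    kind = 'a' if b % 2 == 0 else 's'    # even b -> a-type, odd b -> s-type
    pairs = [set(pos) for pos in combinations(range(n), n // 2) if 0 in pos]
    rules = []
    for extra in combinations(range(1, n), k - 1):
        shared = {0} | set(extra)        # the k shared minus positions
        rules.append((kind, sorted(label(p, n) for p in pairs
                                   if shared <= p)))
    return rules

# n = 4: the trivial a-type sum rules a_3 = a_5 = a_6 = 0 at b = 0 ...
assert sum_rules(4, 0) == [('a', [3]), ('a', [5]), ('a', [6])]
# ... and the single b = 1 sum rule s_3 + s_5 + s_6 = 0:
assert sum_rules(4, 1) == [('s', [3, 5, 6])]
```

The same function reproduces the $n=6$ sum rules discussed in the next subsection.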
\subsection{An example, $n=6$}
To demonstrate how the described algorithm works, we consider a $U$-spin set of processes such that the initial state and the Hamiltonian are $U$-spin singlets and the final state contains $n = 6$ doublets. Our goal is to use the algorithm to obtain all the sum rules for this system at any order of breaking.
The amplitudes for the system under consideration and the corresponding $n$-tuples are listed in Table~\ref{tab:n6-n-tuples}. The resulting sum rules at different orders of breaking and the counting of sum rules are summarized in Table~\ref{tab:sum-rules-n6-summary}. For the counting of sum rules we use the results derived in Appendix~\ref{app:SR_counting_doublets}, see Eq.~\eqref{eq:nSR_doublets}.
\begin{table}[t]
\centering
$\begin{array}{cc}
& \\
a_7, s_7 & (-,-,-,+,+,+)\\
a_{11}, s_{11} & (-,-,+,-,+,+)\\
a_{13}, s_{13} & (-,-,+,+,-,+)\\
a_{14}, s_{14} & (-,-,+,+,+,-)\\
a_{19}, s_{19} & (-,+,-,-,+,+)\\
\end{array}
\hspace{25pt}
\begin{array}{cc}
& \\
a_{21}, s_{21} & (-,+,-,+,-,+)\\
a_{22}, s_{22} & (-,+,-,+,+,-)\\
a_{25}, s_{25} & (-,+,+,-,-,+)\\
a_{26}, s_{26} & (-,+,+,-,+,-)\\
a_{28}, s_{28} & (-,+,+,+,-,-)\\
\end{array}$
\caption{$a$- and $s$-type amplitudes and their corresponding $n$-tuples for the case $n = 6$. The numbering scheme is described in Section~\ref{sec:An-tuples-doublets}.}\label{tab:n6-n-tuples}
\end{table}
\begin{table}[h!]
\centering
\begin{tabular}{|c|c|}
\hline
\multicolumn{2}{|c|}{6 doublets}\\
\hline
$a$-type & $s$-type \\
\hline
\multicolumn{2}{|c|}{$b = 0$, $n_{SR}^{(b = 0)} = 15$}\\
\hline
$n_{SR-a}^{(b = 0)} = 10$ & $n_{SR-s}^{(b = 0)} = 5$\\
\(\displaystyle ~~a_7 = a_{11} = a_{13} = a_{14}= a_{19} = a_{21} =\) ~~& \(\displaystyle s_7 + s_{11} + s_{13} + s_{14} = 0 \)\\
\(\displaystyle = a_{22} = a_{25} = a_{26} = a_{28} = 0\) & \(\displaystyle s_7 + s_{19} + s_{21} + s_{22} = 0 \) \\
& \(\displaystyle s_{11} + s_{19} + s_{25} + s_{26} = 0 \)\\
& \(\displaystyle s_{13} + s_{21} + s_{25} + s_{28} = 0 \)\\
& ~~\(\displaystyle s_{14} + s_{22} + s_{26} + s_{28} = 0 ~~\)\\
\hline
\multicolumn{2}{|c|}{$b = 1$, $n_{SR}^{(b = 1)} = 6$}\\
\hline
$n_{SR-a}^{(b = 1)} = 1$ & $n_{SR-s}^{(b = 1)} = 5$\\
\(\displaystyle a_7 + a_{11} + a_{13} + a_{14} + a_{19} + a_{21} +\) &
same as for $b = 0$\\
\(\displaystyle + a_{22} + a_{25} + a_{26} + a_{28} = 0\) &
\\
\hline
\multicolumn{2}{|c|}{$b = 2$, $n_{SR}^{(b = 2)} = 1$}\\
\hline
$n_{SR-a}^{(b = 2)} = 1$ & $n_{SR-s}^{(b = 2)} = 0$\\
same as for $b = 1$ & no sum rules\\
\hline
\multicolumn{2}{|c|}{$b \ge 3$, $n_{SR}^{(b\ge3)} = 0$, no sum rules}\\
\hline
\end{tabular}
\caption{Sum rules for a system of $n=6$ $U$-spin
doublets in the final state. $n_{SR}^{(b)}$, $n_{SR-a}^{(b)}$ and $n_{SR-s}^{(b)}$ denote the total number of sum rules, the number of $a$-type sum rules and the number of $s$-type sum rules that are valid at the order of breaking $b$ respectively.\label{tab:sum-rules-n6-summary}}
\end{table}
In the symmetry limit, $b = 0$, there are $n_{SR}^{(b=0)} = 15$
sum rules between amplitudes of the system, see Eq.~\eqref{eq:nSR_doublets}. Using $k=n/2-b$ we see that $k=3$ for $b=0$.
Out of the $15$ sum rules $n_S^{(3)} = 10$ are $a$-type sum rules, where each sum rule is a sum over $a$-type amplitudes for $U$-spin pairs in subsets $S^{(3)}_j$, with index $j = 1, 2, ..., 10$. The $S^{(3)}_j$ are defined via $n$-tuples that share at least $3$ minus signs at the same positions. According to Eq.~\eqref{eq:nA-k}, each such subset contains exactly one amplitude pair, that is $n^{(3)}_A = 1$. Thus the sum rules are trivial $a$-type sum rules that we already mentioned above, see Eq.~\eqref{eq:a-SR-LO}. The remaining 5 sum rules that hold at the leading order are $s$-type sum rules. They are given by the sums of $s$-type amplitudes over the subsets $S_j^{(2)}$, with $j = 1, 2, ..., 5$, which are defined via $n$-tuples that share at least $2$ minus signs at the same positions. There are $n_A^{(2)} = 4$ amplitude pairs in each such subset.
At the first order of breaking $b = 1$, there are $n_{SR}^{(b = 1)} = 6$ sum
rules. We see that 9 out of 15 leading order sum rules are broken by the order
$b=1$ corrections.
We know that moving from $b=0$ to $b=1$ can only break $a$-type sum rules.
Thus we learn that there is one $a$-type sum rule that is valid to
$b=1$, and thus must also be valid to $b=2$. This sum rule corresponds to $S_1^{(1)}$ and it is given by the
sum of all 10~$a$-type amplitudes of the system.
Note that this sum rule is also valid at the leading order, where it can be expressed as a linear combination of the 10 sum rules that correspond to $S^{(3)}$.
The 5 $s$-type sum rules that were valid at $b=0$ are also valid at $b=1$, since corrections of odd orders $b$ do not contribute to $s$-type amplitudes.
Finally, we discuss the highest order of breaking at which there are
still sum rules in the system, $b = 2$. At this order there is only
one sum rule which is the $a$-type sum rule given by the sum of all $a$-type amplitudes of the $U$-spin set.
For $b=2$ all $s$-type sum rules are broken.
\subsection{Geometrical picture}\label{sec:halves-lattice}
In this section we provide the geometrical interpretation of Eqs.~\eqref{eq:sym-form-of-SR-a} and \eqref{eq:sym-form-of-SR-s}.
To proceed with the geometrical picture we first introduce a new
notation for the $U$-spin pairs. We call this notation a {\it
coordinate notation} (the reason for that name becomes clear below).
All the $U$-spin pairs can be mapped one-to-one
onto $n$-tuples starting with a minus sign.
We then enumerate the elements of such $n$-tuples starting from $0$. That is, the first element of the $n$-tuple is assigned the index $0$, the second element is assigned the index $1$ and so on, up until the last element of an $n$-tuple is assigned the index $n-1$. In the coordinate notation
any $n$-tuple is labeled by a string of ${n}/{2}-1$ numbers that
indicate the positions of the minuses in the $n$-tuple excluding the
first minus.
For example, in the coordinate notation
\begin{equation}
(\underset{0}{-},\underset{1}{-},\underset{2}{-},\underset{3}{+},\underset{4}{-},\underset{5}{+},\underset{6}{+},\underset{7}{+}) = \left(1,2,4\right).
\end{equation}
Note that all permutations of the numerical labels denote the same $n$-tuple and thus the same $U$-spin pair, that is
\begin{equation}\label{eq:example-permutations-lattice-points}
\left(1,2,4\right) = \left(1,4,2\right) = \left(2,1,4\right) = \left(2,4,1\right) = \left(4,1,2\right) = \left(4,2,1\right).
\end{equation}
With this notation in mind, the $U$-spin pairs can be represented by
nodes in a lattice with dimension
\begin{align}
d = \frac{n}{2}-1\,.
\end{align}
The coordinates of the nodes are given by the numbers in the coordinate notation. Hence the name \lq\lq{}coordinate notation\rq\rq{}.
Note the following:
\begin{enumerate}
\item
Due to the permutation
symmetry, each $U$-spin pair appears in the lattice $d!$ times.
\item
Not all the lattice points represent valid $U$-spin pairs.
Nodes with any number of repeating
coordinates, for example, $\left(1,1,2\right)$ or $\left(1,1,1\right)$
do not represent any $U$-spin pair from the $U$-spin set under
consideration. Thus we exclude them from the lattice.
\item
The length of each dimension in the lattice is $n-1$.
\end{enumerate}
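The coordinate notation and the resulting lattice are straightforward to generate. The following Python sketch (illustrative only; helper names are ours) maps an $n$-tuple to its coordinates and enumerates the valid and forbidden lattice nodes:

```python
from itertools import product

def coordinates(minus_positions):
    """Coordinate notation: the minus positions of the n-tuple,
    excluding the fixed leading minus at position 0."""
    assert 0 in minus_positions
    return tuple(sorted(set(minus_positions) - {0}))

# n = 8: (-,-,-,+,-,+,+,+) has minuses at positions 0,1,2,4 -> (1,2,4)
assert coordinates({0, 1, 2, 4}) == (1, 2, 4)

def lattice_nodes(n):
    """All points of the d = n/2 - 1 dimensional lattice; only points
    with pairwise distinct coordinates represent U-spin pairs."""
    d = n // 2 - 1
    valid, forbidden = [], []
    for point in product(range(1, n), repeat=d):
        (valid if len(set(point)) == d else forbidden).append(point)
    return valid, forbidden

valid, forbidden = lattice_nodes(6)
# each of the 10 U-spin pairs appears d! = 2 times in the lattice
assert len(valid) == 20 and len(forbidden) == 5
```

The length of each lattice dimension is $n-1$, as stated above, since the coordinates run over the positions $1,\dots,n-1$.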
According to the algorithm described in the previous section, sum rules
are given by the sums of $a$-type or $s$-type amplitudes whose $n$-tuples
share a certain number of minuses at the same positions. Below we
consider the case in which, for a chosen order of breaking and type of sum
rule, the number of shared minuses is given by $k$.
Eqs.~\eqref{eq:sym-form-of-SR-a} and~\eqref{eq:sym-form-of-SR-s} can then be represented as follows. For a
given $k$, the sum rules correspond to the sums of all the nodes
that share $k-1$ coordinates. That is, a $d - (k - 1) = b$ dimensional
subspace of the lattice with dimension $d$.
For example, for $k=n/2$, $b = 0$, all the points give
the $a$-type sum rules that are valid to $b=0$. For $k=n/2-1$, $b = 1$, all the lines
give us the $s$-type sum rules that are valid up to $b=1$.
Below we summarize the steps that are needed in order to write all the
sum rules for a $U$-spin system described by $n$ doublets in the geometrical
picture. We then work out explicitly the $n = 6$ case that is discussed
in the previous section.
For any $U$-spin system described by $n$ doublets we build a lattice according to the following rules:
\begin{enumerate}
\item
The dimension of the lattice is $d = {n}/{2} - 1$.
\item
Each node of the lattice is described by $d$ numbers (coordinates). If
all the numbers are different, that node represents a $U$-spin pair. Otherwise, the node is removed from the lattice; in practice we just replace such nodes by zeros for bookkeeping.
\item
Once the lattice is built we are ready to ``harvest'' the sum
rules. For this we consider all the $b$-dimensional subspaces of the lattice defined as follows. All the nodes that share $d-b = n/2-1-b$ coordinates form a $b$-dimensional subspace of interest. For $b=0,1,2$ the subspaces are given by nodes, lines, and planes, respectively. Sums of the lattice nodes lying in such $b$-dimensional subspaces
correspond to sum rules that are valid to order $b$ and are broken by corrections of order $b+1$. For even (odd) $b$, the $b$-dimensional subspaces of the lattice correspond to $a$-type ($s$-type) sum rules.
\item
In order to get all the sum rules that are valid to order $b$, we need to combine those that come from the $b$ and $b+1$ dimensional subspaces.
\end{enumerate}
As an example consider the $n=6$ case. In that case the lattice is two-dimensional and thus it can be easily visualized, see Fig.~\ref{fig:n6-lattice}.
The black nodes describe valid nodes of the lattice, while the white
nodes on the diagonal are forbidden nodes and do not correspond to any
$U$-spin pair.
\begin{figure}[t]
\centering
\includegraphics[width=0.3\textwidth]{lattice-6d-no-circles.pdf}
\caption{The lattice for a system of $n = 6$ doublets. The black nodes
correspond to valid $U$-spin pairs, while the white nodes on the
diagonal are forbidden and do not describe any $U$-spin pair.
Harvesting of the sum rules is performed as follows. The black nodes
give the leading order, that is, $b = 0$, $a$-type sum rules. Sums of
nodes grouped by solid lines give the $b = 1$, $s$-type sum rules. The
sum of nodes within the dashed line gives the $a$-type sum rule that
is valid up to $b = 2$. Note that we do not explicitly show the
identical $b = 1$ sum rules that are obtained by vertical instead of
horizontal solid lines.}
\label{fig:n6-lattice}
\end{figure}
The leading order $a$-type sum rules correspond to $b=0$ and thus are given by the nodes of the lattice (i.e.~the black nodes in the diagram in Fig.~\ref{fig:n6-lattice}). There are $10$ such sum rules:
\begin{equation}\label{eq:example-lattice-n6-LO-a-SR}
a_{(1,2)} = a_{(1,3)} = a_{(1,4)} = a_{(1,5)} = a_{(2,3)} = a_{(2,4)} = a_{(2,5)} = a_{(3,4)} = a_{(3,5)} = a_{(4,5)} = 0,
\end{equation}
where the notation $a_{(x_1, x_2)}$ is used to represent an
$a$-type combination of $U$-spin pair amplitudes with the coordinate notation
$\left(x_1, x_2\right)$.
In Eq.~\eqref{eq:example-lattice-n6-LO-a-SR} we also used the fact that nodes of the lattice related by permutations of coordinates represent the same $U$-spin pairs. We have chosen to use orderings of coordinates such that $x_1 < x_2$.
The $s$-type sum rules that are valid up to order $b = 1$ are given by lines. That is, they are given by the sums of lattice nodes in rows or columns of the lattice. These sum rules are represented by solid lines in Fig.~\ref{fig:n6-lattice}. We have
\begin{equation}\label{eq:example-lattice-SR}
\begin{gathered}
s_{\left(1,2\right)} + s_{\left(1,3\right)} + s_{\left(1,4\right)} + s_{\left(1,5\right)} = 0,\\
s_{\left(1,2\right)} + s_{\left(2,3\right)} + s_{\left(2,4\right)} + s_{\left(2,5\right)} = 0,\\
s_{\left(1,3\right)} + s_{\left(2,3\right)} + s_{\left(3,4\right)} + s_{\left(3,5\right)} = 0,\\
s_{\left(1,4\right)} + s_{\left(2,4\right)} + s_{\left(3,4\right)} + s_{\left(4,5\right)} = 0,\\
s_{\left(1,5\right)} + s_{\left(2,5\right)} + s_{\left(3,5\right)} + s_{\left(4,5\right)} = 0,
\end{gathered}
\end{equation}
where $s_{\left(x_1, x_2\right)}$ is used to represent an $s$-type combination of $U$-spin pair amplitudes with the coordinate notation $\left(x_1, x_2\right)$ and we use again the convention $x_1 < x_2$.
Finally, the $a$-type sum rules that are valid up to order
$b=2$ are represented by planes. In the case that we consider here there is just one
plane that corresponds to the sum of all the nodes of the lattice. This sum rule is represented by the dashed line in Fig.~\ref{fig:n6-lattice}:
\begin{equation}\label{eq:example-n6-NLO-a-SR}
a_{(1,2)} + a_{(1,3)} + a_{(1,4)} + a_{(1,5)} + a_{(2,3)} + a_{(2,4)} + a_{(2,5)} + a_{(3,4)} + a_{(3,5)} + a_{(4,5)} = 0.
\end{equation}
To be explicit, because of $a_{(i,j)} = a_{(j,i)}$ we have also, completely equivalent to Eq.~(\ref{eq:example-n6-NLO-a-SR}),
\begin{align}
a_{(2,1)} + a_{(3,1)} + a_{(4,1)} + a_{(5,1)} + a_{(3,2)} + a_{(4,2)} + a_{(5,2)} + a_{(4,3)} + a_{(5,3)} + a_{(5,4)} = 0\,.
\end{align}
Note, therefore, that Eq.~(\ref{eq:example-n6-NLO-a-SR}) really corresponds to a sum over the complete plane as shown in Fig.~\ref{fig:n6-lattice}, only that the corresponding factors of two resulting from $a_{(i,j)} = a_{(j,i)}$ have already been cancelled when writing Eq.~(\ref{eq:example-n6-NLO-a-SR}).
As we see the lattice representation reproduces the sum rules
listed in Table~\ref{tab:sum-rules-n6-summary}.
We have also determined the above sum rules using the traditional method based on Clebsch-Gordan coefficient tables and indeed, both methods give the same results, as they should.
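The harvesting for the $n=6$ lattice can also be reproduced mechanically. In the following Python sketch (illustrative only), the nodes are labeled by ordered coordinates $x_1 < x_2$, the rows regenerate the sums of Eq.~\eqref{eq:example-lattice-SR}, and the full plane gives the single $b=2$ sum rule of Eq.~\eqref{eq:example-n6-NLO-a-SR}:

```python
from itertools import combinations

# valid nodes of the n = 6 lattice, using ordered coordinates x1 < x2
nodes = list(combinations(range(1, 6), 2))   # the 10 U-spin pairs

# b = 1: each row x = c (equivalently each column) gives an s-type sum rule
rows = {c: [p for p in nodes if c in p] for c in range(1, 6)}
assert rows[1] == [(1, 2), (1, 3), (1, 4), (1, 5)]
assert rows[3] == [(1, 3), (2, 3), (3, 4), (3, 5)]

# b = 2: the full plane gives the single a-type sum rule over all 10 nodes
assert len(nodes) == 10
```

The two asserted rows correspond to the first and third lines of Eq.~\eqref{eq:example-lattice-SR}.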
\subsection{Generalization: doublets not only in the final state}
The proof of Eqs.~\eqref{eq:sym-form-of-SR-a} and~\eqref{eq:sym-form-of-SR-s} is given in
Appendix~\ref{app:ThII_doublets} for the case where all of the $n$
doublets are in the
final state. Here we generalize the result to the rest of the systems in the universality class, that is, to the cases where some of
the doublets are in the initial state and/or in the
Hamiltonian.
The generalization of the result can be done in two steps. First, we need to introduce a modified convention for constructing $n$-tuples. Second, we need to slightly modify the definitions of the $a$- and $s$-type amplitudes given in Eq.~(\ref{eq:as-comb-def}).
For $n$-tuples the convention becomes as follows. We order the doublets in an arbitrary but defined order. For the doublets that belong to the final state, we represent the upper component of a doublet as \lq\lq{}$+$\rq\rq{} and the lower component as \lq\lq{}$-$\rq\rq{}. The convention is, however, different for the doublets belonging to the initial state and the Hamiltonian. For them we represent the upper component of a doublet by \lq\lq{}$-$\rq\rq{} and the lower component by \lq\lq{}$+$\rq\rq{}. Note that with this convention all the $U$-spin sets in the same universality class, that is, with the same number of doublets, are described by the same sets of $n$-tuples.
To modify the definitions of the $a$- and $s$-type sum rules we use the results derived in Appendix~\ref{app:signs} where it is shown that the form of sum rules given in Eqs.~\eqref{eq:sym-form-of-SR-a} and \eqref{eq:sym-form-of-SR-s} is preserved if we define the $a$- and
$s$-type amplitudes with an overall factor of $(-1)^{q_i}$ as follows
\begin{equation}\label{eq:as-comb-def-app}
a_i \equiv (-1)^{q_i} \left( A_i- (-1)^p A_\ell \right), \qquad
s_i \equiv (-1)^{q_i} \left( A_i + (-1)^p A_\ell \right).
\end{equation}
The factor $(-1)^{q_i}$ is equal to $+1$ if the final state has an even number of minus signs and it is equal to $-1$ if the final state has an odd number of minus signs. More formally, the factor can be written as
\begin{equation}\label{eq:q-def}
(-1)^{q_i} = \prod_{j=1}^{n_f} \left(-1\right)^{{1/2} - m_j^{(i)}},
\end{equation}
where the $m_j^{(i)}$ are the $m$ QNs of the $n_f$ final
state doublets of the $A_i$ amplitude.
The expression in Eq.~\eqref{eq:q-def} is identical to the product of the elements of the $n$-tuple that corresponds to the final state of the process with label $i$. Note that for the case where all the doublets are in the final state all the amplitudes gain the same factor $(-1)^{n/2}$ and thus it is cancelled in the sum rules.
We see that the factor $(-1)^{q_i}$ assigned to a specific $n$-tuple depends on the ordering chosen when defining the $n$-tuples. Particularly, it depends on the assignment of certain positions in the $n$-tuples to the initial state, final state, and the Hamiltonian. For example, in the $n=4$ system that has only one doublet in the final state, the $n$-tuple $(-,-,+,+)$ is assigned $(-1)^{q_i} = -1(+1)$ if the convention is such that the first (third) position of the $n$-tuple corresponds to the final state. This is a basis choice.
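The sign convention of Eq.~\eqref{eq:q-def} can be checked directly. In the sketch below (illustrative Python; the $n$-tuple is stored as a tuple of sign characters and the assignment of positions to the final state is an input), the $n=4$ example just discussed is reproduced:

```python
def q_factor(n_tuple, final_positions):
    """(-1)^{q_i}: product of the final-state entries of the n-tuple,
    counting '-' as -1 and '+' as +1, cf. Eq. (q-def)."""
    sign = 1
    for j in final_positions:
        sign *= -1 if n_tuple[j] == '-' else 1
    return sign

tup = ('-', '-', '+', '+')
# one final-state doublet: the basis choice of which position is
# "final" flips the sign, as discussed in the text
assert q_factor(tup, [0]) == -1
assert q_factor(tup, [2]) == +1
# all doublets in the final state: every amplitude gets (-1)^{n/2}
assert q_factor(tup, [0, 1, 2, 3]) == (-1) ** 2
```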
The fact that the $(-1)^{q_i}$ factors depend on the basis might appear to be in contradiction with the general idea that sum rules are basis independent, but it is not. The resolution of the seeming contradiction lies in the fact that our definitions of the $a$- and $s$-type amplitudes given in Eq.~\eqref{eq:as-comb-def-app} are in fact basis dependent. The two CKM-free amplitudes $A_i$ and $A_\ell$ form a $U$-spin pair no matter what the basis is. The assignment of the index $i$ to one of them and the index $\ell$ to the other, however, is basis dependent. Thus, the sign with which the $a$- and $s$-type amplitudes enter the sum rules depends on the choice of the ordering of doublets that was used when defining the $n$-tuples. When we write the sum rules in terms of amplitudes, however, they do not depend on the ordering, as it should be.
\section{Systems with arbitrary representations}\label{sec:gen-arbitrary-irreps}
In this section we generalize the results obtained for processes described exclusively by $U$-spin doublets to the case when a system has arbitrary representations in its description. As in the previous section, we assume that all the irreps are distinguishable. Note that we use the terms \lq\lq{}representation\rq\rq{} and \lq\lq{}irrep\rq\rq{} interchangeably.
First we focus on the systems of arbitrary representations in the final state. We generalize the definition of the $n$-tuple, and then we show how the sum rules for an arbitrary $U$-spin system can be obtained from the sum rules for a system of doublets.
Next, we generalize our geometric method in order to account for symmetrizations. This works in cases when there is at least one doublet in the system. When no doublets are present, we can use the lattice method for an auxiliary system with a doublet and then perform one additional step of symmetrization afterwards.
We close by generalizing the results of this section to the cases when $U$-spin representations also appear in the initial state and the Hamiltonian.
\subsection{Generalized $n$-tuples}\label{sec:n-tuples-generalized}
To generalize the definition of $n$-tuples introduced in
Section~\ref{sec:An-tuples-doublets} to the case of arbitrary irreps,
we consider a system with $r$ irreps $u_j$, where
$u_j$ is a positive integer or half-integer and $j = 0,1,...,r-1$. Note that only non-trivial irreps are included in this list: $U$-spin singlets are not relevant, since adding singlets to a system does not change the sum rules. As in Section~\ref{sec:n_doublet_system}, we first focus on systems that contain $U$-spin representations only in the final state.
Since each of the irreps $u_j$ can be built from $2 u_j$ doublets, we conclude that the complete system
can be constructed from combining $n$ doublets, where $n$ is given by
\begin{align}\label{eq:would-be-n-def}
n = 2 \sum_{j=0}^{r-1} u_j\,.
\end{align}
Note that the number of would-be doublets does not change when an arbitrary number of $U$-spin singlets is added to the system. Furthermore, the order of the irreps is arbitrary; in practice, we usually sort them and assign
$j=0$ to the lowest irrep.
Consider an arbitrary multiplet $u_j$. The minimum number of doublets needed to construct the representation $u_j$ is given by $n_j = 2u_j$. A component of the multiplet $u_j$ with the $m$-type QN $m_j$ is denoted as $\ket{u_j; m_j}$ and can be represented as a string of $n_j$ pluses and minuses such that the corresponding total $m$-QN is equal to $m_j$. Thus for the component $\ket{u_j; m_j}$ we have $u_j -m_j$ minus signs and $u_j +m_j$ plus signs. The ordering of the signs is, in principle, arbitrary.
We adopt a convention in which we order the signs starting with all minuses. Thus for the component $\ket{u_j; m_j}$ of an arbitrary multiplet $u_j$ in the final state we write
\begin{equation}\label{eq:uj_m-comvention}
\ket{u_j; m_j} \, :\,\, ( \underbrace{-\,...\,-}_{u_j -m_j}\,\underbrace{+\,...\,+}_{u_j + m_j})\,.
\end{equation}
For example, in the case of $u_j=3/2$ in the final state we have the following correspondence:
\begin{align}
\ket{\frac{3}{2},\frac{3}{2}}\hspace{9pt}: &\hspace{10pt} (+++)\,, &
\ket{\frac{3}{2},\frac{1}{2}}\hspace{9pt}: &\hspace{10pt} (-++)\,,\nonumber \\
\ket{\frac{3}{2},-\frac{1}{2}}: &\hspace{10pt} (--+)\,, &
\ket{\frac{3}{2},-\frac{3}{2}}: &\hspace{10pt} (---)\,.
\end{align}
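The convention of Eq.~\eqref{eq:uj_m-comvention} is easy to automate. The following Python sketch (illustrative only) builds the sign string for a component $\ket{u_j; m_j}$ and reproduces the $u_j=3/2$ table above:

```python
from fractions import Fraction

def component_signs(u, m):
    """Sign string for |u; m> in the final state: u - m minuses
    followed by u + m pluses, cf. Eq. (uj_m-comvention)."""
    u, m = Fraction(u), Fraction(m)
    n_minus, n_plus = u - m, u + m
    # u - m and u + m must be non-negative integers for a valid component
    assert n_minus.denominator == 1 and n_plus.denominator == 1
    return '-' * int(n_minus) + '+' * int(n_plus)

assert component_signs('3/2', '3/2') == '+++'
assert component_signs('3/2', '1/2') == '-++'
assert component_signs('3/2', '-1/2') == '--+'
assert component_signs('3/2', '-3/2') == '---'
```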
We can represent any amplitude of an arbitrary $U$-spin system of $n$ would-be doublets via $n$ signs in the $n$-tuple. We do this by
letting the first $n_0$ signs of the $n$-tuple represent a component of the irrep $u_0$, the following $n_1$ signs represent a component of the irrep $u_1$, and so on. For the components of different irreps we use the convention in Eq.~\eqref{eq:uj_m-comvention}. We separate the positions of the $n$-tuple describing different irreps by a comma, as we did in the case of only doublets in the system. We call such $n$-tuples \emph{generalized $n$-tuples} or, when there is no ambiguity, we refer to them simply as $n$-tuples.
For example, the following generalized $n$-tuple
\begin{equation}\label{eq:gen-n-tuple-example}
A_{11} =\left(-,-+,-++\right),
\end{equation}
represents an amplitude from the $U$-spin system described by representations $u_0=1/2$, $u_1=1$ and $u_2=3/2$. If all three representations belong to the final state, this specific amplitude has $m_0 = -\frac{1}{2}$, $m_1 = 0$, and $m_2 = \frac{1}{2}$. Note that generalized $n$-tuples that correspond to valid amplitudes must have the same number of pluses and minuses, as it is in the case for doublets-only systems.
Similarly to the case of systems described exclusively by doublets, we define $U$-spin conjugation for generic systems described by arbitrary representations. The $U$-spin conjugation in the generic case is defined as flipping the signs of all the $m$-QNs of multiplet components describing an amplitude. In terms of generalized $n$-tuples the operation corresponds to
\begin{enumerate}
\item
Exchange of all plus and minus signs.
\item
Reordering each set of signs corresponding to one representation such that it starts with minuses.
\end{enumerate}
For example, the $U$-spin conjugate of the amplitude $A_{11}$ given in Eq.~\eqref{eq:gen-n-tuple-example} is
\begin{equation}
A_{41}=\left(+,-+,--+\right).
\end{equation}
Note
that Eq.~\eqref{eq:cong-ind} still holds, that is, for $n=6$, $\ell = 2^6-i-1$, which for $i=11$ gives $\ell=52$ and implies that $A_{52}$ is the $U$-spin conjugate of $A_{11}$. This is not in contradiction to the above since $A_{41}\equiv A_{52}$.
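The two-step conjugation just described can be sketched in a few lines of Python (illustrative only; a generalized $n$-tuple is stored as a list of sign strings, one per irrep):

```python
def conjugate(groups):
    """U-spin conjugate of a generalized n-tuple: flip all signs,
    then reorder each irrep's group so its minuses come first."""
    flipped = [g.translate(str.maketrans('+-', '-+')) for g in groups]
    # '-' sorts before '+' with this key, restoring the convention
    return [''.join(sorted(g, key=lambda s: s != '-')) for g in flipped]

# A_11 = (-, -+, -++)  ->  A_41 = (+, -+, --+)
assert conjugate(['-', '-+', '-++']) == ['+', '-+', '--+']
# applying the conjugation twice returns the original n-tuple
assert conjugate(conjugate(['-', '-+', '-++'])) == ['-', '-+', '-++']
```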
As in the case of doublets-only systems, an amplitude and its $U$-spin conjugate amplitude form a $U$-spin pair. To represent the pair we use the $n$-tuple for which the first non-zero $m$-QN is negative. For the $U$-spin pairs we can also define the $a$- and $s$-types amplitudes the same way as in Eq.~\eqref{eq:as-comb-def}, with $p$ given in Eq.~\eqref{eq:p_def}.
There is, however, one subtlety that needs to be discussed when the system is described exclusively by integer representations. In this case there is one amplitude in each system which is $U$-spin self-conjugate, that is, it is its own $U$-spin conjugate. This amplitude is the one where the $m$-QNs of all the irreps are zero.
Consider, for example, the system of two triplets. This system has $n=4$ would-be doublets. The amplitude with the following $n$-tuple is present in the $U$-spin set
\begin{equation}\label{eq:000}
A_{5}=\left( -+, -+\right)\,,
\end{equation}
and it is its own $U$-spin conjugate since $m = 0$ for both multiplets. Another way to see it is to note that the would-be conjugate amplitude is $A_{10}$ and in this case $A_5 \equiv A_{10}$.
For amplitudes that are $U$-spin self-conjugate, which is possible only when all the irreps are integers, one of the $a$-type and $s$-type amplitudes identically vanishes.
Which of the two vanishes depends on the parity of $p$, where for the case of integer-only irreps $p=n/2$.
For even $p$ we have
\begin{equation}\label{eq:self-conj-n/2-even}
s_{j} = 2 A_{j}, \qquad a_{j}\equiv 0,
\end{equation}
while for odd $p$
we have
\begin{equation}\label{eq:self-conj-n/2-odd}
a_{j} = 2 A_{j}, \qquad s_{j}\equiv 0,
\end{equation}
where $j$ represents the index of the amplitude that is self-conjugate. We emphasize that in this case $a_{j} \equiv 0$ (for even $p$) and $s_j \equiv 0$ (for odd $p$) are identities and not sum rules.
\subsection{Symmetrization}
\label{sec:sym}
Given the fact that all higher representations can be constructed from doublets, we now show how to derive the sum rules for any generic system based on the underlying system of $n$ doublets.
The key idea is to perform a change of basis, similar to what we did when discussing the rotation between the physical and the $U$-spin basis.
\subsubsection{An example}
To begin, let us consider an example of a system of $n$ would-be doublets in the final state. We denote the amplitudes of this system as $A^{(d)}(m_1, m_2, \dots, m_n)$, where the label $(d)$ indicates that the amplitude belongs to a system of doublets and $m_j$, $j=1,\dots,n$ are the specific $m$-type QNs of these doublets that describe the amplitude. We assume we have used the algorithm described in Section~\ref{sec:n_doublet_system} and found all the sum rules for this system. Then we can use this result to write the sum rules for a system of $n-2$ doublets and a triplet. We recall that
\begin{equation}\label{eq:1/2times1/2}
\frac{1}{2}\otimes \frac{1}{2} = 0 \oplus 1
\end{equation}
and perform the basis rotation for the last two doublets of the system of doublets according to Eq.~\eqref{eq:1/2times1/2}. The result is as follows
\begin{align}\label{eq:doubleds_and_1_decomp}
A^{(d)}(m_1, m_2, \dots, m_n) = \mathop{C_{1/2, m_{n-1}}}_{\hspace{0pt} 1/2, m_n}^{\hspace{-14pt}1, M} A^{(1)}(m_1, m_2, \dots,m_{n-2},M) \nonumber \\
+ \mathop{C_{1/2, m_{n-1}}}_{\hspace{0pt} 1/2, m_n}^{\hspace{-14pt}0, M} A^{(0)}(m_1, m_2, \dots,m_{n-2},M).
\end{align}
In Eq.~\eqref{eq:doubleds_and_1_decomp} we introduced the notation $A^{(1)}(m_1, m_2, \dots,m_{n-2},M)$ for amplitudes that pick up the triplet component from the tensor product of two doublets.
Our notation is such that the label $(1)$ stands for triplet, $m_j$, $j = 1, \dots, n-2$ are the $m$-QNs of the $n-2$ doublets and $M = m_{n-1}+m_n$ is the $m$-QN of the triplet. Similarly, the amplitudes $A^{(0)}(m_1,m_2,\dots,m_{n-2},M)$ denote the amplitudes that pick up the singlet component from the tensor product, hence the label $(0)$. Note, that the second term in Eq.~\eqref{eq:doubleds_and_1_decomp} is present only if $M=0$.
Once we have performed the basis rotation, the next step in writing the sum rules for the $U$-spin set of interest is to take the sum rules for the system of doublets and plug in the expressions in Eq.~\eqref{eq:doubleds_and_1_decomp}. This allows us to rewrite the sum rules of the system of doublets in terms of the $A^{(1)}$ and $A^{(0)}$ amplitudes. After we do this, we rearrange the amplitudes such that we obtain sum rules that involve only the amplitudes $A^{(1)}$.
As we show in Appendix~\ref{app:mu-factor}, it is guaranteed that the sum rules for $A^{(1)}$ and $A^{(0)}$ decouple. The decoupling also means that instead of using the full expression on the RHS of Eq.~\eqref{eq:doubleds_and_1_decomp} we can simply perform the substitution
\begin{equation}\label{eq:substitution-example}
A^{(d)}(m_1, m_2, \dots, m_n) \, \rightarrow \, \mathop{C_{1/2, m_{n-1}}}_{\hspace{0pt} 1/2, m_n}^{\hspace{-14pt}1, M} A^{(1)}(m_1, m_2, \dots,m_{n-2},M).
\end{equation}
The sum rules that we obtain after this substitution give the full set of sum rules for the system of $n-2$ doublets and a triplet.
\subsubsection{Generalization}
Above we have considered a simple example of a system of many doublets and a triplet. The result in Eq.~\eqref{eq:substitution-example} can be generalized to the case of a system of arbitrary representations.
Consider a system of $r$ irreps $u_0, u_1, \dots, u_{r-1}$.
Each representation $u_j$ that is obtained from a symmetrization is the highest representation in the tensor product of $2 u_j$ would-be doublets. In this case, as shown in Appendix~\ref{app:mu-factor}, the substitution analogous to the one in Eq.~\eqref{eq:substitution-example} takes the following form
\begin{equation}\label{eq:symmetrization_gen}
A^{(d)}(m_1, m_2, \dots, m_n) \, \rightarrow \, \left(\prod_{j=0}^{r-1} \frac{1}{\sqrt{C_\text{sym}(u_j,M_j)}} \right) A(M_0, M_1, \dots, M_{r-1}),
\end{equation}
where we use $A(M_0, M_1, \dots, M_{r-1})$ to represent the amplitudes of the system described by representations $u_0, u_1, \dots, u_{r-1}$, and $M_j$, $j=0,\dots,r-1$ are the $m$-type QNs of the representations describing the amplitude.
Note that the symmetry factors $C_\text{sym}(u_j, M_j)$ do not depend on $m_j$ but only on $u_j$ and $M_j$.
Furthermore, $C_\text{sym}(u_j, M_j)$ can be written in terms of binomial coefficients as follows
\begin{equation}\label{eq:C_sym}
C_\text{sym}(u_j,M_j) = C(2u_j, u_j-M_j) \equiv \binom{2u_j}{u_j-M_j}\,,
\end{equation}
see Appendix~\ref{app:mu-factor} for details.
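As a quick numerical cross-check, the symmetry factor in Eq.~\eqref{eq:C_sym} is straightforward to evaluate. The following short Python sketch (the helper name is ours; it is an illustration, not part of the derivation) accepts half-integer arguments:

```python
from fractions import Fraction
from math import comb

def c_sym(u, M):
    """Symmetry factor C_sym(u, M) = C(2u, u - M).

    u is the irrep built from 2u would-be doublets and M is its m-type QN;
    both may be half-integers, but u - M must be an integer.
    """
    u, M = Fraction(u), Fraction(M)
    n, k = 2 * u, u - M
    assert n.denominator == 1 and k.denominator == 1, "u - M must be an integer"
    return comb(int(n), int(k))
```

For instance, `c_sym(1, 0)` returns $C(2,1)=2$, i.e.\ the $M=0$ component of the triplet comes with the symmetry factor $1/\sqrt{2}$ of Eq.~\eqref{eq:substitution-example}.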
\subsubsection{Iterative approach}
On the fundamental level the symmetry factors in Eq.~\eqref{eq:C_sym} come from products of the relevant Clebsch-Gordan coefficients. In some cases it becomes important to understand how the symmetry factors are built iteratively.
Assume we know the sum rules for a system of representations $u_0, u_1, \dots, u_{r-1}$, where $u_0 = 1/2$, and the rest of the irreps are arbitrary. We denote the amplitudes of this system as $A^{(1/2, u_1)}(m_0,m_1,\dots,m_{r-1})$. We want, however, to obtain the sum rules for a system of representations $u_+, u_2, \dots, u_{r-1}$, where $u_+ = u_1 + 1/2$. We denote the amplitudes of the latter system as $A^{(+)}(m, m_2, \dots, m_{r-1})$.
As above, we are building the higher representation as a component of the tensor product of doublets. When the construction is done iteratively we can focus on taking a tensor product of the arbitrary representation $u_1$ with the representation $u_0=1/2$. Using
\begin{equation}
u_1 \otimes 1/2 = u_+ \oplus u_-, \qquad u_\pm = u_1 \pm 1/2,
\end{equation}
we write for the amplitudes
\begin{align}\label{eq:iterative_decompose}
A^{(1/2, u_1)}(m_0,m_1,\dots,m_{r-1}) = C_+ A^{(+)}(m, m_2, \dots, m_{r-1})\nonumber \\ + C_- A^{(-)}(m, m_2, \dots, m_{r-1})\,,
\end{align}
where $m = m_0 + m_1$ and $C_+$ and $C_-$ are the appropriate CG coefficients
\begin{equation}
C_+ = \mathop{C_{u_1, m_1}}_{\hspace{10pt} 1/2, m_0}^{\hspace{6pt}u_+, m}\, ,
\qquad
C_- = \mathop{C_{u_1, m_1}}_{\hspace{10pt} 1/2, m_0}^{\hspace{6pt}u_-, m}\,.
\end{equation}
Note that the coefficients $C_+$ and $C_-$ depend on the $m$-QNs and thus are different for different amplitudes.
Once we have the decomposition in Eq.~\eqref{eq:iterative_decompose}, we can plug it into the sum rules for the system with representations $u_0 = 1/2$ and $u_1$ and obtain the sum rules for the system of interest that is described by $u_+$.
Using the fact that the sum rules for the two types of amplitudes $A^{(+)}$ and $A^{(-)}$ decouple, see Appendix~\ref{app:mu-factor}, we can just perform the substitution
\begin{equation}\label{eq:symmetrization-iterative}
A^{(1/2, u_1)}(m_0,m_1,\dots,m_{r-1}) \, \rightarrow \, C_+ A^{(+)}(m, m_2, \dots, m_{r-1}).
\end{equation}
If instead of combining the representations $1/2$ and $u_1$, we consider two arbitrary representations $u_0$ and $u_1$ from which we want to obtain the representation $u_+=u_0 + u_1$, we need to perform the following substitution
\begin{equation}\label{eq:substitution-iterative-gen}
A^{(u_0, u_1)}(m_0,m_1,\dots,m_{r-1}) \, \rightarrow \, C_+ A^{(+)}(m, m_2, \dots, m_{r-1}),
\end{equation}
where the coefficient $C_+$ needs to be modified as follows
\begin{equation}
C_+ = \mathop{C_{u_1, m_1}}_{\hspace{10pt} u_0, m_0}^{\hspace{6pt}u_+, m}.
\end{equation}
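For the coupling of an arbitrary irrep with a doublet, $C_+$ has the standard closed form $C_+ = \sqrt{\left(u_1 + 2 m_0 m + 1/2\right)/\left(2u_1+1\right)}$ with $m = m_0 + m_1$, which is easy to check numerically. A minimal Python sketch (our helper, for illustration only):

```python
from math import isclose, sqrt

def c_plus(u1, m1, m0):
    """CG coefficient <u1, m1; 1/2, m0 | u1 + 1/2, m1 + m0> for m0 = +-1/2,
    using the standard closed form for coupling j (x) 1/2."""
    m = m1 + m0
    return sqrt((u1 + 2 * m0 * m + 0.5) / (2 * u1 + 1))
```

For example, `c_plus(0.5, -0.5, 0.5)` gives $1/\sqrt{2}$, the factor that appears in the substitution of Eq.~\eqref{eq:substitution-example} for the $M=0$ triplet component.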
\subsubsection{Summary of the symmetrization process}
We refer to the substitutions described in Eqs.~\eqref{eq:substitution-example},~\eqref{eq:symmetrization_gen},~\eqref{eq:symmetrization-iterative}, and \eqref{eq:substitution-iterative-gen} as \emph{symmetrization}. When building higher representations we pick the highest components of the tensor products, which are totally symmetric, hence the name of the procedure. Note that when the symmetrization is performed the number $n$ of would-be doublets of the system stays the same.
In practice the task that we encounter is as follows. We are given the sum rules for a system of representations $u_0, u_1,\dots, u_{r-1}$. We refer to this system as \lq\lq{}original system\rq\rq{}. We want to obtain the sum rules for a system where some of the representations are symmetrized. We refer to the latter system as \lq\lq{}new system\rq\rq{}.
We assume that $u_0, u_1,\dots, u_{r-1}$ are ordered such that the representations that we symmetrize are grouped together. To solve the problem at hand we proceed with the following two-step algorithm:
\begin{enumerate}
\item
Replace the $n$-tuples of the original system with $n$-tuples for the new system. For this, take the $n$-tuples of the original system and remove all the commas between the components that correspond to the irreps that we symmetrize. Then rearrange the signs such that within each entry corresponding to an individual irrep the minuses precede the pluses.
\item Take the sum rules written in terms of the new $n$-tuples and multiply each of the $n$-tuples with an appropriate symmetry factor.
\end{enumerate}
Note that we can perform the above for any $n$-tuple, that is, for individual amplitudes $A_j$ as well as for the $a$- and $s$-type amplitudes. For the $a$- and $s$-type amplitudes we can do it as long as $u_0$ is a doublet and is not part of the symmetrization process.
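Step 1 of the algorithm acts on the $n$-tuples in a purely mechanical way, which can be sketched in a few lines of Python (a hypothetical helper of ours; an $n$-tuple is a tuple of sign strings, and `groups` lists how many consecutive entries merge into each irrep of the new system):

```python
def symmetrize(ntuple, groups):
    """Step 1 of the symmetrization: merge consecutive n-tuple entries
    according to `groups` and write minuses before pluses in each entry."""
    out, i = [], 0
    for size in groups:
        merged = "".join(ntuple[i:i + size])
        # within each irrep entry the minuses precede the pluses
        out.append("-" * merged.count("-") + "+" * merged.count("+"))
        i += size
    return tuple(out)
```

For example, `symmetrize(("-", "+", "+", "-"), [1, 1, 2])` yields `("-", "+", "-+")`, reproducing the mapping of $A_6^{(\text{I})}$ to $A_5^{(\text{II})}$ discussed in the next subsection. Step 2, the multiplication by the symmetry factor, still has to be applied separately.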
\subsubsection{Example: symmetrization of systems with $n=4$ would-be doublets} \label{sec:exam-n-4}
To demonstrate the idea of symmetrization we consider three different systems with $n=4$ would-be doublets. For simplicity we consider the case when all representations are in the final state.
\begin{itemize}
\item System I: 4 doublets, that is $u_0=u_1=u_2=u_3=1/2$. The six amplitudes of this system
are listed in Eq.~\eqref{eq:n4_example_notation}, and we rewrite them below in a slightly different notation where we add a superscript to indicate that an amplitude belongs to system~I
\begin{equation}\label{eq:n4_example_notation-again}
\begin{gathered}
A_3^{(\text{I})} = (-, -, +, +)\,, \qquad A_{12}^{(\text{I})} = (+, +, -, -)\,,\\
A_{5}^{(\text{I})} = (-, +, -, +)\,, \qquad A_{10}^{(\text{I})} = (+, -, +, -)\,, \\
A_{6}^{(\text{I})}=(-, +, +, -)\,, \qquad A_{9}^{(\text{I})} = (+, -, -, +)\,.
\end{gathered}
\end{equation}
The sum rules for this system in terms of $a$- and $s$-type amplitudes are given in Eqs.~\eqref{eq:n-4-a} and \eqref{eq:n-4-s}.
\item System II: 2 doublets and a triplet, which we order such that $u_0=u_1=1/2$, $u_2 = 1$. The amplitudes are given by
\begin{equation}\label{eq:n4_example_notation-again-2}
\begin{gathered}
A_3^{(\text{II})} = (-, -, + +)\,, \qquad A_{12}^{(\text{II})} = (+, +, - -)\,,\\
A_{5}^{(\text{II})} = (-, +, - +)\,, \qquad A_{10}^{(\text{II})} = (+, -, -+)\,.
\end{gathered}
\end{equation}
\item
System III: 2 triplets, that is $u_0=u_1=1$. In this case the amplitudes are
\begin{equation}
\begin{gathered}
A_3^{(\text{III})} = (--,++)\,, \qquad A_{12}^{(\text{III})}= (++,--)\,, \\
A_5^{(\text{III})}= (-+,-+)\,. \qquad
\phantom{A_9^{(\text{III})}= (-+,-+)} \end{gathered}
\end{equation}
Note that since $A_5^{(\text{III})}$ is a $U$-spin self-conjugate amplitude, we have $s_5^{(\text{III})} \equiv 2 A_5^{(\text{III})}$ and $a_5^{(\text{III})}\equiv 0$.
\end{itemize}
We start by obtaining the sum rules of system II given the sum rules for system I. We do this by performing the steps outlined in the previous subsection. We only show it for one amplitude out of each $U$-spin pair. We have the following replacements
\begin{align}
A_3^{(\text{I})} &= (-, -, +, +) \to (-,-,++) = A_3^{(\text{II})}\,, \nonumber\\
A_{5}^{(\text{I})} &= (-, +, -, +) \to {1 \over \sqrt{2}}(-, +, -+) = {1 \over \sqrt{2}} A_{5}^{(\text{II})}\,, \label{eq:sub-rule} \\
A_{6}^{(\text{I})} &= (-, +, +,-) \to {1 \over \sqrt{2}}(-, +, -+) = {1 \over \sqrt{2}} A_{5}^{(\text{II})}\,. \nonumber
\end{align}
Note the following regarding Eq.~\eqref{eq:sub-rule}
\begin{enumerate}
\item The symmetry factor for amplitude $A_3^{(\text{II})}$ is 1 and we do not write it explicitly.
\item
The symmetry factor in the substitutions for $A_5^{(\text{I})}$ and $A_6^{(\text{I})}$ is $1/\sqrt{2}$.
\item
In the case of the $n$-tuple for amplitude $A_6^{(\text{I})}$ we had to rearrange the signs after dropping the comma, and thus it corresponds to $A_5^{(\text{II})}$.
\end{enumerate}
Similar substitutions work for the $a$ and
$s$-type amplitudes, that is
\begin{equation} \label{eq:sub-rule-a-s}
\begin{gathered}
a_3^{(\text{I})} \to a_3^{(\text{II})}, \qquad
a_5^{(\text{I})} \to \frac{a_5^{(\text{II})}}{\sqrt{2}}, \qquad
a_{6}^{(\text{I})} \to \frac{a_{5}^{(\text{II})}}{\sqrt{2}}, \\
s_3^{(\text{I})} \to s_3^{(\text{II})}, \qquad
s_5^{(\text{I})} \to \frac{s_5^{(\text{II})}}{\sqrt{2}}, \qquad
s_{6}^{(\text{I})} \to \frac{s_{5}^{(\text{II})}}{\sqrt{2}}.
\end{gathered}
\end{equation}
Substituting Eq.~\eqref{eq:sub-rule-a-s} into the $a$-type sum rules of Eq.~\eqref{eq:n-4-a},
we obtain
\begin{equation} \label{eq:n-4-t1-a}
a_3^{(\text{II})}= 0 ,\qquad
a_5^{(\text{II})} = 0,
\end{equation}
where the last equation appears twice.
Substituting Eq.~\eqref{eq:sub-rule-a-s} into the $s$-type sum rules of Eq.~\eqref{eq:n-4-s},
we obtain
\begin{equation} \label{eq:n-4-t1-s}
s_3^{(\text{II})} + {1 \over \sqrt{2}} s_5^{(\text{II})} + {1 \over \sqrt{2}} s_5^{(\text{II})}
= s_3^{(\text{II})} + \sqrt{2} s_5^{(\text{II})} = 0.
\end{equation}
Next, we derive the sum rules for system III from the sum rules for system II. For that we symmetrize the first two doublets and perform the following substitutions for the amplitudes:
\begin{equation}\label{eq:sub-rule-2}
\begin{aligned}
&A_3^{(\text{II})} = (-, -, ++) \to (--,++) = A_3^{(\text{III})}\,, \\
&A_{5}^{(\text{II})} = (-, +, -+) \to {1 \over \sqrt{2}}(-+, -+) = {1 \over \sqrt{2}} A_{5}^{(\text{III})} .
\end{aligned}
\end{equation}
In terms of the $a$- and $s$-type amplitudes we have
\begin{equation}
\begin{aligned}
a_3^{(\text{II})} &\to a_3^{(\text{III})}\,, &\qquad s_{3}^{(\text{II})} &\to s_{3}^{(\text{III})}\,, \\
a_5^{(\text{II})} &\to a_5^{(\text{III})} \equiv 0\,, &\qquad
s_{5}^{(\text{II})} &\to \frac{s_{5}^{(\text{III})}}{\sqrt{2}}\,.
\end{aligned}
\end{equation}
Recall that $a_5^{(\text{III})}$ vanishes identically.
From the sum rules in Eqs.~\eqref{eq:n-4-t1-a} and
\eqref{eq:n-4-t1-s} we obtain
\begin{equation}
a_3^{(\text{III})} = 0, \qquad
s_3^{(\text{III})} + s_5^{(\text{III})} = 0,
\end{equation}
where the first one is valid to zeroth order in the breaking and the second one to first order.
Writing the above in terms of amplitudes we have
\begin{equation}
A_3^{(\text{III})}=
A_{12}^{(\text{III})}, \qquad
A_3^{(\text{III})}+
A_{12}^{(\text{III})}+ 2 A_5^{(\text{III})}=0.
\end{equation}
We have demonstrated how to obtain the sum rules for all the different $n=4$ systems from the system of 4 doublets.
\subsection{Generalization of the geometrical picture}
\label{sec:gen_1d}
We move to the discussion of the case of systems with at least one doublet. In this case, we can define a lattice in a way similar to the doublets-only case. Then, we can harvest the sum rules directly from the lattice without the need to perform the symmetrization explicitly.
\subsubsection{Generalized coordinate notation}
We start by generalizing the coordinate notation introduced in
Section~\ref{sec:halves-lattice} to the case that we discuss here. We order the multiplets such that the first one is a doublet, that is $u_0 = {1}/{2}$.
We then label every $n$-tuple by a string of ${n}/{2} - 1$
numbers as follows.
Out of the $n/2$ minus signs in the $n$-tuple we ignore the first one and for each of the rest we write the index $j$ of the irrep $u_j$ that it belongs to. For example,
\begin{equation}\label{eq:arbitrary-rep-notation}
\begin{gathered}
(\underset{0}{-},\underset{1}{--},\underset{2}{-+},\underset{3}{-++++}) = \left(1,1,2,3\right),\\
(\underset{0}{-},\underset{1}{-+},\underset{2}{-+},\underset{3}{--+++}) = \left(1,2,3,3\right),\\
(\underset{0}{-},\underset{1}{++},\underset{2}{-+},\underset{3}{---++} ) = \left(2,3,3,3\right).
\end{gathered}
\end{equation}
As in Section~\ref{sec:halves-lattice} the order of indices is
unimportant, that is,
all permutations describe the same amplitude.
With this generalized notation we see that, similarly to the doublets-only
case, for arbitrary irreps the $U$-spin pairs can
be represented as nodes of a $d = n/2 - 1$ dimensional lattice.
The length of each dimension of the lattice is $r-1$.
Recall that $r$ denotes the number of irreps in the system. (Note that in the case of all doublets we have $r = n$ and the length of each dimension is $r-1 = n-1$.) The lattice is built by assigning each $U$-spin pair to a node of the lattice based on its coordinate notation.
We finish the construction of the lattice by assigning a multiplication factor to each node of the lattice. These factors account for the symmetrization process, which we then do not need to perform explicitly, and are denoted as $\mu$-factors. We show in the next subsection how the $\mu$-factors are calculated.
Once the lattice is built and the proper $\mu$-factors are assigned, the sum rules can be harvested from the lattice in a similar way to the way they are harvested in the case of doublets-only systems.
\subsubsection{The $\mu$-factor}
To write the explicit expressions for the $\mu$-factors we introduce yet another auxiliary notation for $U$-spin pairs. We call this notation the $y_j$ \emph{notation}. In this notation each amplitude is described by $r-1$ numbers $\left[y_1, y_2, ..., y_{r-1}\right]$, where $y_j$ is the number of times that representation $u_j$ enters the coordinate description of the amplitude pair. Square brackets are used to distinguish the $y_j$ notation from the coordinate notation. Equivalently, $y_j$ is the number of minuses in the $n$-tuple at positions that correspond to the representation $u_j$. For example, the amplitudes in Eq.~\eqref{eq:arbitrary-rep-notation} are denoted by
\begin{equation}
\begin{gathered}
(\underset{0}{-},\underset{1}{--},\underset{2}{-+},\underset{3}{-++++}) = (1,1,2,3) = [2,1,1] ,\\
(\underset{0}{-},\underset{1}{-+},\underset{2}{-+},\underset{3}{--+++}) = (1,2,3,3) = [1,1,2] ,\\
(\underset{0}{-},\underset{1}{++},\underset{2}{-+},\underset{3}{---++} ) = (2,3,3,3) = [0,1,3] .
\end{gathered}
\end{equation}
Note that also here we have omitted the very first minus sign.
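The translation between the $n$-tuple, coordinate, and $y_j$ notations is mechanical; the following short Python sketch (helper names are ours) reproduces the examples above:

```python
def to_coordinates(ntuple):
    """Coordinate notation: for every minus sign except the very first,
    record the index j of the irrep entry that it belongs to."""
    coords = []
    for j, entry in enumerate(ntuple):
        coords.extend([j] * entry.count("-"))
    return tuple(coords[1:])  # the very first minus is omitted

def to_y_notation(coords, r):
    """y_j notation: y_j counts how many coordinates are equal to j."""
    return [sum(1 for c in coords if c == j) for j in range(1, r)]
```

For the first amplitude above, `to_coordinates(("-", "--", "-+", "-++++"))` gives `(1, 1, 2, 3)`, and `to_y_notation((1, 1, 2, 3), 4)` gives `[2, 1, 1]`.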
We denote the $\mu$-factor that corresponds to a certain node as $\mu[y_1,y_2,\dots,y_{r-1}]$. There are two sources that contribute to the $\mu$-factors. The first is the symmetry factors introduced in Section~\ref{sec:sym}. The second comes from the fact that several amplitude pairs of the underlying doublets-only system may correspond to only one pair of the system under consideration.
We write the $\mu$-factor for a node as a product of factors corresponding to the representations $u_1, \dots, u_{r-1}$ (the doublet $u_0$ contributes trivially)
\begin{equation}\label{eq:mu_product}
\mu[y_1,y_2,...,y_{r-1}]=\prod_{j=1}^{r-1} \mu_j,
\end{equation}
where $\mu_j$ depends only on $u_j$ and $y_j$. In Appendix~\ref{app:mu-factor} we show that
\begin{equation}\label{eq:mu_j}
\mu_j = \sqrt{C(2u_j,y_j)}\times y_j!\,, \qquad C(2u_j,y_j) = \binom{2u_j}{y_j},
\end{equation}
where $C(2u_j, y_j)$ is a binomial coefficient.
Note that $C(2u_j,2u_j) = 1$ and that $C(2u_j,y_j)=0$ for $y_j > 2u_j$. This means that for doublets-only systems the $\mu$-factors are equal to $1$ for the allowed nodes and to $0$ for the nodes that do not correspond to a valid amplitude. This is a generalization of the empty and filled nodes in the lattices of Section~\ref{sec:n_doublet_system}.
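The $\mu$-factors of Eqs.~\eqref{eq:mu_product} and~\eqref{eq:mu_j} can be evaluated with a few lines of Python (our sketch, not part of the derivation); as a check it reproduces the factors of the $n=6$ example of Section~\ref{sec:gen-examples}:

```python
from math import comb, factorial, isclose, sqrt

def mu_factor(us, ys):
    """mu[y_1, ..., y_{r-1}] = prod_j sqrt(C(2 u_j, y_j)) * y_j!,
    where `us` holds the irreps u_1, ..., u_{r-1} (u_0 = 1/2 is trivial)."""
    out = 1.0
    for u, y in zip(us, ys):
        out *= sqrt(comb(int(2 * u), y)) * factorial(y)
    return out
```

For $u_1 = 1$, $u_2 = 3/2$ one finds $\mu[2,0]=2$, $\mu[1,1]=\sqrt{6}$, and $\mu[0,2]=2\sqrt{3}$.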
\subsubsection{Harvesting the sum rules and the multiplication factor}
Once we have constructed the lattice with the associated $\mu$-factors we are ready to harvest the sum rules. For order $b$, the sum rules correspond to the different sums over all the $b$-dimensional lattice subspaces. For even (odd) $b$ these sums correspond to $a$-type ($s$-type) sum rules.
There is one subtle point that arises for $b \ge 2$. In that case some of the off-diagonal nodes are redundant. For example, for the two-dimensional case in coordinate notation we have $(x_{1}, x_{2}) \equiv (x_2, x_1)$. Thus, when we sum all the nodes in the lattice, the amplitudes that correspond to the off-diagonal nodes enter the sum rules more than once.
While one can manually collect all the identical nodes, we can also do it in the following, more straightforward way:
\begin{enumerate}
\item
Sum over subspaces without duplicating nodes. That is, consider only nodes with $x_{1} \le x_{2} \le \dots \le x_{d}$.
\item
When harvesting a sum rule each node is multiplied by a corresponding $\mu$-factor and a multiplication factor $M_b$ that accounts for the redundancy of the lattice. Note that unlike the $\mu$-factor, $M_b$ depends on $b$.
\end{enumerate}
To calculate $M_b$, we account for the symmetry of the lattice by counting the number of nodes that correspond to the given amplitude in the $b$-dimensional subspace. To write an expression for this number, recall that the $b$-dimensional subspaces over which we sum the amplitudes are defined via fixing certain coordinates in the coordinate notation. We define
\begin{equation}\label{eq:yjprime-def}
y_j = y_j^{\text{fix}} + y_j^{\prime},
\end{equation}
where $y_j^{\text{fix}}$ is the number of fixed coordinates that are equal to $j$ and thus correspond to representation $u_j$, and $y_j^\prime$ is the number of coordinates that can take value $j$ among the $b$ ``free'' coordinates that define the subspace. Recall that $y_j$ is the number of coordinates of the node that are equal to $j$. Using this notation, the number of the lattice nodes in the $b$-dimensional subspace that describe the same amplitude is given by the following multinomial coefficient
\begin{equation}
M_b[y_1^\prime, y_2^\prime, \dots, y_{r-1}^\prime] = \binom{b}{y_{1}^\prime, y_{2}^\prime, \dots, y_{r-1}^\prime} = \frac{b!}{y_{1}^\prime! y_{2}^\prime! \dots y_{r-1}^\prime!}.
\end{equation}
Note that the sum of all $y_j^\prime$ is equal to $b$
\begin{equation}
\sum_{j=1}^{r-1} {y_j^\prime}=b.
\end{equation}
We see that any $a$-type ($s$-type) sum rule that corresponds to a $b$-dimensional subspace of the lattice can be found as a weighted sum over all the amplitudes that correspond to the nodes in the subspace (each amplitude accounted for only once). The weights are denoted by $W_b$ and they are the product of the $\mu$- and $M_b$-factors. They are given by
\begin{equation}\label{eq:Wb-def}
W_b[y_1, y_2, \dots, y_{r-1}] = \mu[y_1, y_2, \dots, y_{r-1}] \times M_b[y_1^\prime, y_2^\prime, \dots, y_{r-1}^\prime] = b! \prod_{j=1}^{r-1} \frac{\mu_j}{y_j^\prime!}.
\end{equation}
Note that the $M_b[y_1^\prime, y_2^\prime, \dots, y_{r-1}^\prime]$ depend on the amplitude and the subspace,~i.e.,~for the same $b$ they may be different for different sum rules.
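The weights $W_b$ are again easy to evaluate numerically. The Python sketch below (ours, for illustration) reproduces, for the $n=6$ system of the next subsection with $u_1 = 1$ and $u_2 = 3/2$, the coefficients $2$, $2\sqrt{6}$, $2\sqrt{3}$ of the $b=2$ sum rule; note that for the full plane no coordinate is fixed, so $y_j^\prime = y_j$:

```python
from math import comb, factorial, isclose, sqrt

def weight(us, ys, ys_prime):
    """W_b[y] = mu[y] * M_b[y'], with b equal to the sum of the y'_j.
    `us` holds u_1, ..., u_{r-1}; `ys` the y_j; `ys_prime` the y'_j."""
    b = sum(ys_prime)
    mu = 1.0
    for u, y in zip(us, ys):
        mu *= sqrt(comb(int(2 * u), y)) * factorial(y)
    m_b = factorial(b)
    for y in ys_prime:
        m_b //= factorial(y)  # multinomial coefficient M_b
    return mu * m_b
```

For a $b=1$ line the fixed coordinate is removed from $y_j^\prime$; e.g.\ the node $(1,2)$ on the line with $x_1=1$ fixed has $y=[1,1]$, $y^\prime=[0,1]$ and weight $\sqrt{6}$.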
\subsubsection{An example}\label{sec:gen-examples}
As an example we consider the case of $n=6$, $r=3$ with $u_0=1/2$, $u_1=1$, and $u_2=3/2$. For this system $d = n/2-1= 2$ and thus the lattice is two-dimensional. Each of the dimensions has a length of $r-1=2$. There are three $U$-spin pairs that we write below in the $n$-tuple, index, coordinate, and $y_j$ notations
\begin{align}
&(-,--,+++) = A_7\,\, = (1,1) = [2,0], \\
&(-,-+,-++) = A_{11} = (1,2) = [1,1], \\
&(-,++,--+) = A_{25} = (2,2) = [0,2].
\end{align}
Using Eqs.~\eqref{eq:mu_product} and~\eqref{eq:mu_j} we calculate the $\mu$-factors for all the amplitudes in the $U$-spin system
\begin{align}
&\mu[2,0]= \left(\sqrt{C(2,2)} \times 2!\right) \times
\left(\sqrt{C(3,0)} \times 0!\right) = 2, \\
&\mu[1,1]= \left(\sqrt{C(2,1)} \times 1!\right) \times
\left(\sqrt{C(3,1)} \times 1!\right) = \sqrt{6} ,\\
&\mu[0,2]= \left(\sqrt{C(2,0)} \times 0!\right) \times
\left(\sqrt{C(3,2)} \times 2!\right) = 2\sqrt{3}.
\end{align}
The resulting lattice is shown in Fig.~\ref{fig:n6-1d-1t-32}.
\begin{figure}[t]
\centering
\includegraphics[width=0.2\textwidth]{lattice_1d_1t_32.pdf}
\caption{Lattice for a system with $n=6$, $u_0=1/2$, $u_1 = 1$, $u_2 = 3/2$.}
\label{fig:n6-1d-1t-32}
\end{figure}
We are now ready to harvest the sum rules. The $a$-type sum rules that are valid to zeroth order are trivial
\begin{equation}
a_{(1,1)}= a_{(1,2)}= a_{(2,2)}=0.
\end{equation}
The $s$-type sum rules that correspond to the lines of the lattice are
\begin{equation}
2\, s_{(1,1)}+ \sqrt{6}\, s_{(1,2)} = 0, \qquad
2 \sqrt{3}\, s_{(2,2)}+ \sqrt{6} \,s_{(1,2)} = 0. \end{equation}
For the $a$-type sum rule that is obtained from the plane we need to calculate the $M_b$ factors
\begin{align}
(1,1): \quad M_b[2,0] &= \frac{2!}{2!\times 0!} = 1, \\
(2,2): \quad M_b[0,2] &= \frac{2!}{0!\times 2!} = 1, \\
(1,2): \quad M_b[1,1] &= \frac{2!}{1!\times 1!} = 2.
\end{align}
We then find the sum rule that is valid up to $b=2$ to be
\begin{equation}
2\,a_{(1,1)}+ 2\sqrt{6}\,a_{(1,2)} +
2\sqrt{3} \,a_{(2,2)} = 0. \label{eq:a-type-sum-rule-plane-Mb}
\end{equation}
For completeness, in Figs.~\ref{fig:n6-4d-1t},~\ref{fig:n6-2d-2t}, and~\ref{fig:n6-3d-32} we show the lattices for other non-trivial $n=6$ systems with at least one doublet.
\begin{figure}[t]
\centering
\includegraphics[width=0.4\textwidth]{lattice_4d_1t.pdf}
\caption{Lattice for a system with $n=6$, $u_0=u_1=u_2=u_3=1/2$ and $u_4 = 1$.}
\label{fig:n6-4d-1t}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.3\textwidth]{lattice_2d_2t_v2.pdf}
\caption{Lattice for a system with $n=6$, $u_0=u_1=1/2$ and $u_2=u_3 = 1$.}
\label{fig:n6-2d-2t}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.3\textwidth]{lattice_3d_32.pdf}
\caption{Lattice for a system with $n=6$, $u_0=u_1=u_2=1/2$ and $u_3=3/2$.}
\label{fig:n6-3d-32}
\end{figure}
\subsection{Generalization for irreps also in the initial state and the Hamiltonian}
\label{sec:gen-gen}
Finally, we summarize how the results of this section are generalized to the case when $U$-spin representations are present not only in the final state, but also in the initial state and the Hamiltonian.
First, we discuss the convention for building $n$-tuples. As in the case of doublets, the $m$-QNs of the components of the multiplets belonging to the initial state and the Hamiltonian are inverted in the $n$-tuple. For example, the amplitude in Eq.~\eqref{eq:gen-n-tuple-example}, which is described by representations $u_0 = 1/2$, $u_1 = 1$ and $u_2=3/2$, would have $m_0 = 1/2$, $m_1 = 0$ and $m_2=-1/2$, if all the representations belong to the initial state or the Hamiltonian.
Second, we need to use the modified definitions of $a$- and $s$-type amplitudes given in Eq.~\eqref{eq:as-comb-def-app}. Note that both factors $(-1)^p$ and $(-1)^{q_i}$ are the same for a system and its underlying system of doublets. The general expression for $p$ is given in Eq.~\eqref{eq:p_def}. The $(-1)^{q_i}$ factor for each amplitude can be read off the corresponding $n$-tuple as a product of all the signs in the final state. The generalized definition of $(-1)^{q_i}$ can be written as follows
\begin{equation}\label{eq:qi-gen}
(-1)^{q_i} = \prod_{j=1}^{r_f} (-1)^{u_j - m_j^{(i)}},
\end{equation}
where the product goes over the $r_f$ representations in the final state, and $m_j^{(i)}$ is the $m$-QN of the representation $u_j$ in $A_i$.
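Since each minus sign in an entry of the $n$-tuple stands for one would-be doublet with $m=-1/2$, and $u_j - m_j^{(i)}$ equals the number of minuses in entry $j$, Eq.~\eqref{eq:qi-gen} reduces to a product of the final-state signs. A one-line Python sketch (ours, for illustration):

```python
def q_sign(final_entries):
    """(-1)^{q_i}: product of all the signs in the final-state part of the
    n-tuple, since u_j - m_j equals the number of minuses in entry j."""
    sign = 1
    for entry in final_entries:
        sign *= (-1) ** entry.count("-")
    return sign
```

For example, for a final state described by the $n$-tuple entries $(-,--,+++)$ one gets $(-1)^{q_i} = -1$.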
\section{Physical systems}\label{sec:gen_algo}
We are now ready to show how the mathematical results that we discuss above can be applied to physical systems. For that we first discuss how to map the physical systems into the mathematical ones, and then summarize the algorithm for writing the sum rules for physical systems. We then provide several examples of sum rules for physical systems. For some of the examples we also compare our results to the results obtained in the standard approach using Clebsch-Gordan coefficient tables. While the results are, of course, the same, these examples demonstrate that our novel approach provides a significant reduction in the complexity of the calculation.
\subsection{Understanding the group-theoretical structure of physical $U$-spin sets}\label{sec:physical-to-Uspin-mapping}
The first step in obtaining amplitude sum rules for physical systems is to understand the group-theoretical structure of the system of interest. For that, one needs to list the $U$-spin representations that appear in the initial and final states as well as the group-theoretical properties of the operators in the Hamiltonian. Once the $U$-spin structure of the system is understood and the amplitudes of the physical system are mapped into the abstract mathematical amplitudes, the sum rules are derived by following the procedures discussed in Sections~\ref{sec:n_doublet_system} and~\ref{sec:gen-arbitrary-irreps}.
To understand the group-theoretical structure of the $U$-spin set we assign the physical particles in the initial and final state of the system, as well as the Hamiltonian that generates the processes, to $U$-spin multiplets. This is done based on the fundamental $U$-spin doublets defined in Eq.~\eqref{eq:sd-uspin}. We rewrite these definitions here for convenience
\begin{equation} \label{eq:sd-uspin-again}
\begin{bmatrix}
\,d\,\, \\
s
\end{bmatrix} =
\begin{bmatrix}
\ket{\frac{1}{2}, +\frac{1}{2}}\\
\ket{\frac{1}{2}, -\frac{1}{2}}
\end{bmatrix}, \hspace{25pt}
\begin{bmatrix}
\,\bar{s}\,\, \\
-\bar{d}
\end{bmatrix} =
\begin{bmatrix}
\ket{\frac{1}{2}, +\frac{1}{2}}\\
\ket{\frac{1}{2}, -\frac{1}{2}}
\end{bmatrix}\,.
\end{equation}
Note that the lower component of the anti-doublet is defined as $-\bar d$.
The particles in the initial and final state are assigned to $U$-spin multiplets based on their quark content. This is a straightforward procedure, except for one subtle point. When arranging hadrons into $U$-spin multiplets, there is a freedom in the overall phase of the hadron. We adopt the convention that all hadrons enter the multiplets with a plus sign. For example, we define $\pi^+ =-\ket{u \bar d}$
and then we write $P^+$ and $P^-$, which are pseudoscalar doublets, as
\begin{equation} \label{eq:Pp-Pm-def}
P^+ = \begin{bmatrix}
K^+\\
\pi^+
\end{bmatrix}=
\begin{bmatrix}
\ket{u\bar s}\\
-\ket{u \bar d} \end{bmatrix}=
\begin{bmatrix}
\ket{\frac{1}{2}, +\frac{1}{2}}\\
\ket{\frac{1}{2}, -\frac{1}{2}}
\end{bmatrix}, \hspace{25pt}
P^- = \begin{bmatrix}
\pi^-\\
K^-
\end{bmatrix} = \begin{bmatrix}
\ket{d \bar u}\\
\ket{s \bar u} \end{bmatrix}=
\begin{bmatrix}
\ket{\frac{1}{2}, +\frac{1}{2}}\\
\ket{\frac{1}{2}, -\frac{1}{2}}
\end{bmatrix}\,.
\end{equation}
Note that this convention is different from some other sign conventions present in the literature. For example, comparing to Ref.~\cite{Soni:2006vi}, we have
\begin{align}\label{eq:sign-convention-relation}
\ket{\pi^+}_{\text{this work}} = -\ket{\pi^+}_{\text{Ref.~\cite{Soni:2006vi} }}.
\end{align}
Since amplitudes of physical processes are defined by the particles in the initial and final state, with this convention, the mapping of the amplitudes of the physical processes into abstract amplitudes does not introduce any relative phases. Thus, when writing the sum rules for the physical system, it is enough to derive the sum rules for the corresponding abstract system of $U$-spin representations and then just replace all the amplitudes with the corresponding CKM-free amplitudes of the physical system.
\subsection{The algorithm}\label{sec:algorithm}
In this subsection we summarize the step-by-step algorithm for writing all the amplitude sum rules for any $U$-spin set that is described by $r$ non-singlet representations and satisfies the assumptions introduced in Sections~\ref{sec:Uspin} and~\ref{sec:comments}. As we discuss in Section~\ref{sec:n-tuples-generalized}, the addition of $U$-spin singlets to the system does not affect the sum rules, and thus all the singlet states and operators can be ignored when we describe the group-theoretical structure of the system.
The algorithm is organized into three main steps. First, we describe the group-theoretical structure of the physical system of interest and thus set up the mathematical problem. Then we find the sum rules for the abstract system of $U$-spin representations. Finally, we map the abstract amplitudes into the amplitudes of the physical system of interest.
The algorithm goes as follows (we repeat key formulas from the text for convenience):
\renewcommand{\labelenumii}{\arabic{enumi}.\arabic{enumii}}
\renewcommand{\labelenumiii}{\arabic{enumi}.\arabic{enumii}.\arabic{enumiii}}
\renewcommand{\labelenumiv}{\arabic{enumi}.\arabic{enumii}.\arabic{enumiii}.\arabic{enumiv}}
\begin{enumerate}
\item {\bf Set up the mathematical problem. }
\begin{enumerate}
\item \textit{Describe the group-theoretical structure of the system.} \\
Arrange the physical states and the operators in the Hamiltonian into $U$-spin multiplets according to the conventions in Section~\ref{sec:physical-to-Uspin-mapping}. List all the non-singlet $U$-spin multiplets $u_0, \dots, u_{r-1}$ that describe the system of interest. Order the representations such that $u_0$ is the lowest (or one of the lowest) representations.
\item \textit{List all the $n$-tuples and calculate the $(-1)^{q_i}$ factors for each $n$-tuple.} \\
The procedure of writing the generalized $n$-tuples is described in detail in Sections~\ref{sec:n-tuples-generalized} and~\ref{sec:gen-gen}. The length of the $n$-tuple is given by the number of would-be doublets in the system
\begin{equation}
n = \sum_{j=0}^{r-1} 2u_j.
\end{equation}
The $m$-QNs of the representations in the initial state and the Hamiltonian are inverted when generating the $n$-tuples. The calculation of the $(-1)^{q_i}$ factors for the generic systems is described in Section~\ref{sec:gen-gen}. They are given as the products over the representations in the final state
\begin{align}
(-1)^{q_i} = \prod_{j=1}^{r_f} (-1)^{u_j - m_j^{(i)}}\,.
\end{align}
Note that since for sum rules only relative minus signs are important, one could equivalently define the factor $(-1)^{q_i}$ as a product over the initial state and the Hamiltonian. This corresponds to multiplying all sum rules of a system by a factor $(-1)^{n/2}$.
\item \textit{Find $p$ and list all the $a$- and $s$-type amplitudes.}\\
The $p$ factor for the system can be found using Eq.~\eqref{eq:p_def},
\begin{align}
p = 2\sum_{j = 1}^{g_F} u^{F}_j - {n \over 2}\,,
\end{align}
where the sum goes over the representations in the final state. $p$ does not depend on the specific $n$-tuple,~\emph{i.e.}~it is system-universal, and only its parity is relevant. Furthermore, for integer-only systems and for doublet-only systems with all the doublets in the final state, one can choose $p=n/2$.
The $a$- and $s$-type amplitudes are defined in Eq.~\eqref{eq:as-comb-def-app},
\begin{align}
a_i \equiv (-1)^{q_i} \left( A_i- (-1)^p A_\ell \right), \qquad
s_i \equiv (-1)^{q_i} \left( A_i + (-1)^p A_\ell \right)\,.
\end{align}
Note that if the system contains only integer irreps, for the self-conjugated amplitude these definitions take the form given in Eqs.~\eqref{eq:self-conj-n/2-even} and \eqref{eq:self-conj-n/2-odd}.
\end{enumerate}
\item
{\bf Harvest the sum rules.}
\begin{enumerate}
\item \textit{System with at least one doublet.}\\
Build the lattice according to Section~\ref{sec:gen_1d} and calculate the relevant $\mu$-factors using Eqs.~\eqref{eq:mu_product} and~\eqref{eq:mu_j},
\begin{align}
\mu[y_1,y_2,...,y_r]=\prod_{j=0}^{r-1} \mu_j, \qquad
\mu_j = \sqrt{C(2u_j,y_j)}\times y_j!\,, \qquad C(2u_j,y_j) = \binom{2u_j}{y_j}\,.
\end{align}
\item
{\it Harvest the $a$- and $s$-type sum rules.} \\
The way to do this is described in
Section~\ref{sec:halves-lattice}.
For all even dimensional subspaces the sum rules are
\begin{equation}
\sum_{\text{$b$-dim subspace}} W_{b}[y_1, \dots, y_{r-1}] a_i = 0.
\end{equation}
For all odd dimensional subspaces the sum rules are
\begin{equation}
\sum_{\text{$b$-dim subspace}} W_{b}[y_1, \dots, y_{r-1}] s_i = 0.
\end{equation}
The weight factors are given in Eq.~\eqref{eq:Wb-def},
\begin{equation}
W_b[y_1, y_2, \dots, y_{r-1}] = \mu[y_1, y_2, \dots, y_{r-1}] \times M_b[y_1^\prime, y_2^\prime, \dots, y_{r-1}^\prime] = b! \prod_{j=1}^{r-1} \frac{\mu_j}{y_j^\prime!}.
\end{equation}
Note that the weights $W_b$ for $b=0$ and $b=1$ are simply given by the corresponding $\mu$-factors.
\item \textit{System without doublets.}\\
First, construct an auxiliary $U$-spin system such that
\begin{equation}
u_0^{\text{aux}} = \frac{1}{2}, \qquad u_1^{\text{aux}} = u_0 - \frac{1}{2}, \qquad u_j^{\text{aux}} = u_{j-1}, \qquad \text{for } j = 2, \dots, r,
\end{equation}
and then write the sum rules for this system following the steps 1.2 to 2.2 of this algorithm. Then perform the last symmetrization
between $u_0^{\text{aux}}$ and $u_1^{\text{aux}}$ to get the sum rules for the system of interest as explained in Section~\ref{sec:sym}.
\end{enumerate}
\item {\bf Write the sum rules for the physical system.}
\begin{enumerate}
\item \textit{Obtain the sum rules for the CKM-free amplitudes of the physical system.}\\
Replace all the amplitudes of the abstract system of $U$-spin representations with the corresponding CKM-free amplitudes of the physical system. The mapping is performed according to Section~\ref{sec:physical-to-Uspin-mapping}. In our sign convention the CKM-free amplitudes and the amplitudes of the abstract system map identically.
\item \textit{Restore the CKM dependence.} \\
Write the sum rules in terms of the amplitudes with the CKM factors included.
\end{enumerate}
\end{enumerate}
Note that, due to the alternating nature of the sum rules, obtaining the complete set of linearly independent sum rules to a given order $b$ requires harvesting all the sum rules that are valid up to orders $b$ and $b+1$.
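The combinatorial quantities entering steps 1.2 to 2.2, namely $n$, $p$, $\mu_j$, and $W_b$, are simple to compute. The following Python sketch implements them; the function names and the flat-list interface are ours and not part of the formalism:

```python
from math import comb, factorial, sqrt

def n_length(us):
    """n = sum_j 2*u_j: the number of would-be doublets in the system."""
    return int(sum(2 * u for u in us))

def p_factor(us_final, n):
    """p = 2*(sum of final-state U-spins) - n/2; only its parity matters."""
    return int(2 * sum(us_final) - n / 2)

def mu_factor(u, y):
    """mu_j = sqrt(C(2*u_j, y_j)) * y_j!"""
    return sqrt(comb(int(2 * u), y)) * factorial(y)

def W_b(us, ys, yps):
    """W_b = b! * prod_j mu_j / y'_j!  with  b = sum_j y'_j."""
    b = sum(yps)
    w = float(factorial(b))
    for u, y, yp in zip(us, ys, yps):
        w *= mu_factor(u, y) / factorial(yp)
    return w
```

For instance, the $D^0\to P^+P^-$ system below has $n=4$ and $p=0$, while the three-triplet system has $n=6$ and $p=1$; note also that for $b=0$ (all $y_j^\prime=0$) the weight $W_b$ reduces to the $\mu$-factor, as stated in the text.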
\subsection{Examples}
\label{sec:examples}
\subsubsection{$D^0\to P^+ P^-$ decays} \label{sec:D-to-PP}
As our first example we consider the $U$-spin set of the $D^0\to P^+ P^-$ decay processes, where $D^0$ denotes the neutral $D$-meson, which is a $U$-spin singlet, and $P^\pm$ are the $U$-spin doublets of pseudoscalar mesons defined in Eq.~\eqref{eq:Pp-Pm-def}. This system has been studied before; see, for example,
Refs.~\cite{Brod:2012ud, Grossman:2019xcj, Muller:2015lua, Grossman:2006jg, Hiller:2012xm, Grossman:2013lya, Grossman:2012ry}.
We already discussed this system in Section~\ref{sec:Uspin} using the traditional method.
Here we repeat the analysis using our novel approach.
The Hamiltonian that realizes the process in this $U$-spin set is a sum of a $U$-spin singlet and a triplet:
\begin{equation} \label{eq:D-Ham}
\mathcal{H}_\text{eff}^{(0)} = f_{0,0} H^0_0 + \sum_{m = -1}^{1} f_{1,m} H^1_m,
\end{equation}
where $H_0^0$ and $H^1_m$ are given in Eqs.~\eqref{H0-charm}
and \eqref{H1-charm} that we rewrite below
\begin{align}
H^0_0 &= {(\bar{u} s) (\bar{s} c)+(\bar{u} d) (\bar d c)\over
\sqrt{2}}, \label{H0-charm-copy} \\
H^1_{1} = (\bar{u} s) (\bar d c),\qquad
H^1_{-1} &= -(\bar{u} d) (\bar s c), \qquad
H^1_0 = {(\bar{u} s) (\bar s c)-(\bar{u} d) ( \bar d c)\over \sqrt{2}}. \label{H1-charm-copy}
\end{align}
The corresponding CKM factors are given in Eqs.~\eqref{eq:charmCKM-f00} and \eqref{eq:charmCKM-f1m}; we repeat them here:
\begin{align}
f_{0,0} &= \frac{V_{cs}^* V_{us} + V_{cd}^* V_{ud}}{2}\approx 0, \label{eq:charmCKM-f00-again} \\
f_{1,1} = V_{cd}^* V_{us}, \qquad
f_{1,-1} &= -V_{cs}^* V_{ud}, \qquad
f_{1,0} = \frac{V_{cs}^* V_{us} - V_{cd}^* V_{ud}}{\sqrt{2}}\approx \sqrt{2}\,\left(V_{cs}^* V_{us}\right)\,,\label{eq:charmCKM-f1m-again}
\end{align}
where the approximation used for $f_{0,0}$ and $f_{1,0}$ holds up to $O(\lambda^4)$.
In the following we use this approximation and, as a result, we only keep the triplet part of the Hamiltonian, $H^1$, and not the singlet part, $H^0$. Adopting this approximation, we can define the CKM-free amplitudes using Eq.~\eqref{eq:def-ckm-free-amp}.
Note that the standard convention in the literature is
\begin{equation}
H^1_0 = (\bar{u} s) (\bar s c)-(\bar{u} d) ( \bar d c), \qquad
f_{1,0} = \frac{V_{cs}^* V_{us} - V_{cd}^* V_{ud}}{2}\approx \left(V_{cs}^* V_{us}\right)
\,,
\end{equation}
and similarly for $H^0_0$ and $f_{0,0}$. We prefer the definitions in Eqs.~\eqref{H0-charm-copy}--\eqref{eq:charmCKM-f1m-again} because they keep $H^1_0$ normalized as part of a triplet, with the factor of $\sqrt{2}$. The final result, of course, does not depend on this normalization choice.
Now we are ready to describe the $U$-spin structure of the system of interest. The number of non-trivial representations in the system is $r=3$. We order the representations as follows:
\begin{equation}
u_0 = u_1 = \frac{1}{2},\qquad u_2 = 1,
\end{equation}
where $u_0$ corresponds to $P^+$, $u_1$ to $P^-$ and $u_2$ to the Hamiltonian. The mapping of the four CKM-free amplitudes of the physical system into generalized $n$-tuples, as well as the $U$-spin pairing, is summarized in
Table~\ref{tab:map-c-mesons}. Note that we choose to show the results for $D^0$ decays, which contain a $c$ quark. Additionally, Table~\ref{tab:map-c-mesons} lists the coordinate notation for the amplitude pairs (``Nodes''), the values of the $\mu$-factors, and $(-1)^{q_i}$. The last column ``Indices'' lists the indices of the amplitudes from the $U$-spin pair using the notation introduced in Section~\ref{sec:An-tuples-doublets}.
Recall that the smaller index corresponds to the amplitude of the pair where the corresponding $n$-tuple starts with a minus sign.
\begin{table}[t]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Decay & $U$-spin conjugate & $n$-tuple & Node & ~~$\mu$-factor~~ &~~$(-1)^{q_i}$~~ & ~~Indices~~ \\
\hline
~~~$D^0 \rightarrow \pi^+ K^-$~~~ &
~~~$D^0 \rightarrow K^+\pi^-$~~~& ~~$(-,-,++)$~~ &~~$(1)$~~ & 1 & $+1$ & 3, 12 \\
~~~$D^0 \rightarrow \pi^+ \pi^- $~~~ &
~~~$D^0 \rightarrow K^+ K^-$~~~& ~~$(-,+,-+)$~~ &~~$(2)$~~ & $\sqrt{2}$ & $-1$ & 5, 10\\
\hline
\end{tabular}
\caption{The mapping of the $U$-spin amplitude pairs of $D^0\to P^+ P^-$ decays into generalized $n$-tuples.\label{tab:map-c-mesons}}
\end{table}
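The last three columns of Table~\ref{tab:map-c-mesons} can be reproduced mechanically from the $n$-tuples. The snippet below assumes that within each representation's segment the number of ``$-$'' entries equals $u_j - m_j$ (with $2u_j$ the segment length); under this identification, which is ours, both the $\mu$-factor and $(-1)^{q_i}$ follow directly. The final state consists of the first two segments, $P^+$ and $P^-$:

```python
from math import comb, factorial, sqrt

def mu_from_ntuple(segments):
    """mu = prod_j sqrt(C(2u_j, y_j)) * y_j!, reading 2u_j = len(segment)
    and y_j = u_j - m_j = number of '-' entries in the segment."""
    val = 1.0
    for seg in segments:
        y = seg.count("-")
        val *= sqrt(comb(len(seg), y)) * factorial(y)
    return val

def q_sign(final_segments):
    """(-1)^{q_i} = product over final-state representations of (-1)^{u_j - m_j}."""
    sign = 1
    for seg in final_segments:
        sign *= (-1) ** seg.count("-")
    return sign

# Rows of the table; ordering (P+, P-, H), final state = first two segments
row1 = ("-", "-", "++")   # node (1): expect mu = 1,       (-1)^q = +1
row2 = ("-", "+", "-+")   # node (2): expect mu = sqrt(2), (-1)^q = -1
```

Running these two rows through the functions reproduces the table entries.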
Next, we calculate $p$ and find that $p = 0$. Using this and the $(-1)^{q_i}$ factors listed in Table~\ref{tab:map-c-mesons}, we define the $a$- and $s$-type amplitudes using Eq.~\eqref{eq:as-comb-def-app}. In the index notation we have
\begin{align}
a_3 &= A_3 - A_{12},& s_3 &= A_3 + A_{12}\nonumber\\
a_5 &= -(A_5 - A_{10}),& s_5 &= -(A_5 + A_{10}).
\end{align}
The resulting sum rules in terms of $a$- and $s$-type amplitudes take the following form
\begin{equation}
a_3=0,\qquad
a_5=0,\qquad
s_3+\sqrt{2} s_5=0,
\end{equation}
where the first two $a$-type sum rules hold only at zeroth order, while the last $s$-type sum rule holds up to order $b=1$ and is broken at order $b=2$.
In terms of the CKM-free amplitudes of the physical system, we have the following sum rules. The $a$-type sum rules that hold up to $b=0$ are
\begin{equation}
A(D^0 \rightarrow \pi^+ K^-) =
A(D^0 \rightarrow K^+\pi^-), \qquad
A(D^0 \rightarrow \pi^+\pi^-) =
A(D^0 \rightarrow K^+ K^-).
\end{equation}
The $s$-type sum rule that holds up to $b=1$ is given by
\begin{equation} \label{eq:DPP-s}
A(D^0 \rightarrow \pi^+ K^-) +
A(D^0 \rightarrow K^+\pi^-)-
\sqrt{2}A(D^0 \rightarrow \pi^+\pi^-) -
\sqrt{2}A(D^0 \rightarrow K^+ K^-)= 0.
\end{equation}
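As a quick numerical sanity check of the sign bookkeeping (the variable names mirror the index notation above, and are otherwise ours): amplitudes chosen to satisfy Eq.~\eqref{eq:DPP-s} automatically satisfy the abstract relation $s_3 + \sqrt{2}\, s_5 = 0$, even when the $b=0$ relations are broken:

```python
from math import sqrt
import random

random.seed(0)
# A3 <-> A(D0 -> pi+ K-),  A12 <-> A(D0 -> K+ pi-),
# A5 <-> A(D0 -> pi+ pi-), A10 <-> A(D0 -> K+ K-).
# With first-order breaking the b=0 relations A3 = A12, A5 = A10 need not
# hold; we only impose the s-type sum rule on otherwise random amplitudes.
A3, A12, A5 = (random.uniform(-1, 1) for _ in range(3))
A10 = (A3 + A12) / sqrt(2) - A5   # enforce the s-type relation

s3 = A3 + A12        # p = 0, (-1)^q = +1
s5 = -(A5 + A10)     # p = 0, (-1)^q = -1
residual = s3 + sqrt(2) * s5
```

The residual vanishes identically, while the $a$-type combinations do not, illustrating that the $s$-type rule survives to first order in the breaking.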
Our last step is to restore the CKM dependence. We find
\begin{align}
\frac{\mathcal{A}(D^0 \rightarrow \pi^+ K^-)}{ -V_{cs}^* V_{ud} } =
\frac{\mathcal{A}(D^0 \rightarrow K^+\pi^-)}{ V_{cd}^* V_{us} }, \qquad
\frac{\mathcal{A}(D^0 \rightarrow \pi^+\pi^-)}{\sqrt{2} V_{cs}^* V_{us}} =
\frac{\mathcal{A}(D^0 \rightarrow K^+ K^-)}{\sqrt{2} V_{cs}^* V_{us} },
\end{align}
and
\begin{align}
\frac{\mathcal{A}(D^0 \rightarrow \pi^+ K^-)}{ -V_{cs}^* V_{ud} } +
\frac{\mathcal{A}(D^0 \rightarrow K^+\pi^-)}{ V_{cd}^* V_{us} }-
\sqrt{2}\frac{\mathcal{A}(D^0 \rightarrow \pi^+\pi^-)}{\sqrt{2} V_{cs}^* V_{us} } -
\sqrt{2}\frac{\mathcal{A}(D^0 \rightarrow K^+ K^-)}{\sqrt{2} V_{cs}^* V_{us} }= 0.
\end{align}
A few remarks are in order:
\begin{enumerate}
\item
The results agree with the known results.
\item
The conventional factors of $\sqrt{2}$ consistently cancel out in the final result.
\item
Recall that we work in the approximation $V_{cd}^*V_{ud}\approx - V_{cs}^* V_{us}$.
\item
Any sign difference between the literature and our result is due to the sign convention for the operator $H^1_{-1}$ and the corresponding coefficient $f_{1,-1}$.
\item
The difference between the literature and our result regarding the minus sign in $-V_{cs}^* V_{ud}$ is due to the conventional minus sign in the Hamiltonian Eq.~(\ref{H1-charm-copy}), and that the above CKM matrix elements are the ones for $D^0$ decays.
\end{enumerate}
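The quality of the approximations $f_{0,0}\approx 0$ and $f_{1,0}\approx\sqrt{2}\,V_{cs}^* V_{us}$ can be illustrated numerically. Below we use the leading-order Wolfenstein parametrization, in which the CKM entries involved are real, with an illustrative value of $\lambda$; at this order the cancellation is exact, and the neglected terms enter only at $O(\lambda^4)$ and beyond:

```python
# Leading-order Wolfenstein parametrization (real entries at this order);
# lambda = 0.225 is an illustrative value, not a fit result.
lam = 0.225

V_ud = 1 - lam**2 / 2
V_us = lam
V_cd = -lam
V_cs = 1 - lam**2 / 2

f00 = (V_cs * V_us + V_cd * V_ud) / 2
f10 = (V_cs * V_us - V_cd * V_ud) / 2**0.5

# The approximation for f_{1,0} used in the text
f10_approx = 2**0.5 * V_cs * V_us
```

Both $|f_{0,0}|$ and the error of the $f_{1,0}$ approximation stay below $\lambda^4$ in this truncation, consistent with the stated power counting.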
\subsubsection{Semileptonic $K \to \pi$ decays}
All of the results presented in this work are also valid for $SU(2)$ isospin. The fundamental doublets of isospin are
\begin{equation} \label{eq:ud-isospin}
\begin{bmatrix}
\,u\,\, \\
d
\end{bmatrix} =
\begin{bmatrix}
\ket{\frac{1}{2}, +\frac{1}{2}}\\
\ket{\frac{1}{2}, -\frac{1}{2}}
\end{bmatrix}, \hspace{25pt}
\begin{bmatrix}
\,\bar{d}\,\, \\
-\bar{u}
\end{bmatrix} =
\begin{bmatrix}
\ket{\frac{1}{2}, +\frac{1}{2}}\\
\ket{\frac{1}{2}, -\frac{1}{2}}
\end{bmatrix}\,.
\end{equation}
In this example we consider the isospin system of $K \to \pi$ decays, where $K$ and $\pi$ stand for the following isospin doublet and triplet, respectively
\begin{equation}
K = \begin{bmatrix}
K^+\\
K^0
\end{bmatrix} =
\begin{bmatrix}
\ket{\frac{1}{2}, +\frac{1}{2}}\\
\ket{\frac{1}{2}, -\frac{1}{2}}
\end{bmatrix}, \hspace{25pt}
\pi = \begin{bmatrix}
\pi^+\\
\pi^0\\
\pi^-
\end{bmatrix} =
\begin{bmatrix}
\ket{1, +1}\\
\ket{1, 0}\\
\ket{1, -1}
\end{bmatrix}.
\end{equation}
The processes in the $K \to \pi$ isospin-set are realized via an isospin doublet Hamiltonian. We write the Hamiltonian as
\begin{equation}
\mathcal{H}_\text{eff}^{(0)} = \sum_{m=-1/2}^{1/2} f_m^{1/2} H_m^{1/2},
\end{equation}
where
\begin{equation} \label{eq:H-k-sl}
H_{1/2}^{1/2} =
(\bar{u} s) (\bar e \nu),\qquad
H_{-1/2}^{1/2} =
(\bar{d} s) ( \bar \nu\nu),
\end{equation}
Here, $\nu$ stands for $\nu_e$.
The CKM factors are given by
\begin{align}
f_{1/2}^{1/2} = V_{us}, \qquad
f_{-1/2}^{1/2} = \sum_{i=u,c,t} V_{id}^* V_{is} f(m_i^2),
\end{align}
where $f(m^2)$ is a known loop factor that can be found, for example, in Ref.~\cite{Buchalla:1998ba} and we do not write it explicitly here.
From the group-theoretical point of view, this system is described by $r=3$ representations: one doublet in the initial state, one triplet in the final state, and a Hamiltonian that transforms as a doublet. We order the representations as follows:
\begin{equation}
u_0 = u_1 = \frac{1}{2},\qquad u_2 = 1,
\end{equation}
where $u_0$ corresponds to $K$, $u_1$ to $H$, and $u_2$ to $\pi$. We see that this system is in the same universality class as the $D^0\to P^+P^-$ system considered in Section~\ref{sec:D-to-PP}.
The mapping of the four CKM-free amplitudes of the set into the generalized $n$-tuples is presented in Table~\ref{tab:map-K-sl}. The value of $p$ for this system is the same as for $D^0\to P^+P^-$, that is, $p = 0$.
\begin{table}[t]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Decay & $U$-spin conjugate & $n$-tuple & Node & ~~$\mu$-factor~~ &~~$(-1)^{q_i}$~~ & ~~Indices~~ \\
\hline
~~~$K^+ \to \pi^+ \nu \bar \nu $~~~ &
~~~$K^0 \to \pi^- \nu e^+$~~~& ~~$(-,-,++)$~~ &~~$(1)$~~ & 1 & $+1$ & 3, 12 \\
~~~$K^+ \to \pi^0 \nu e^+$~~~ &
~~~$K^0 \to \pi^0 \nu \bar \nu$~~~& ~~$(-,+,-+)$~~ &~~$(2)$~~ & $\sqrt{2}$ & $-1$ & 5, 10\\
\hline
\end{tabular}
\caption{The mapping of the $U$-spin pairs of semileptonic $K\to \pi$ decays into generalized $n$-tuples.\label{tab:map-K-sl}}
\end{table}
Comparing Tables~\ref{tab:map-c-mesons} and~\ref{tab:map-K-sl} we write the sum rules for the $K \to \pi$ system. The $a$-type sum rules that hold in the isospin limit are
\begin{equation} \label{eq:Kpi-sum-rules-1}
A(K^+ \rightarrow \pi^+ \nu \bar \nu) =
A(K^0 \rightarrow \pi^- \nu e^+), \qquad
A(K^+ \rightarrow \pi^0 \nu e^+) =
A(K^0 \rightarrow \pi^0 \nu \bar \nu).
\end{equation}
The $s$-type sum rule that holds up to $b=1$ order of breaking is given by
\begin{equation} \label{eq:Kpi-sum-rules-2}
A(K^+ \to \pi^+ \nu \bar \nu) +
A(K^0 \to \pi^- \nu e^+)-
\sqrt{2}A(K^+ \to \pi^0 \nu e^+) -
\sqrt{2}A(K^0 \to \pi^0 \nu \bar \nu) = 0.
\end{equation}
Note the following:
\begin{enumerate}
\item
From the group-theoretical point of view, the $D^0 \to P^+P^-$ and semileptonic $K \to \pi$ decays are identical.
\item
Often, the isospin relations above are written in terms of the hadronic part of the decay, which is parametrized by a form factor function of $q^2$. For example, a common notation is $f^{K^i\pi^j}$.
\item
The relation in Eq.~\eqref{eq:Kpi-sum-rules-2} may be of limited use, as there are also electromagnetic corrections that are not captured by the group theoretical treatment. These corrections are expected to be of the order of the first order isospin breaking.
\end{enumerate}
Our result is in agreement with the explicit calculation given in Eq.~(17) of Ref.~\cite{Mescia:2007kn}.
To see this, one has to neglect electromagnetic corrections, account for the conventional factors of $\sqrt{2}$, and include the isospin-breaking meson mass splitting in the power counting. The latter is needed in order to see that the loop functions consistently cancel each other in the combinations appearing in the sum rules.
\subsubsection{Baryonic charm decays}\label{sec:CbtoLbPP}
Consider the following system
\begin{equation}
{C_b} \to {L_b} P^- P^+,
\end{equation}
where $P^+$ and $P^-$ are defined in Eq.~\eqref{eq:Pp-Pm-def}, $C_b$ stands for a doublet of charmed baryons and $L_b$ for a doublet of light baryons defined as follows:
\begin{equation}
C_b = \begin{bmatrix}
\Lambda_c^+ \\ \Xi_c^+
\end{bmatrix} =
\begin{bmatrix}
\ket{cud}\\
\ket{cus}
\end{bmatrix} =
\begin{bmatrix}
\ket{\frac{1}{2}, +\frac{1}{2}}\\
\ket{\frac{1}{2}, -\frac{1}{2}}
\end{bmatrix}\,,
\qquad
L_b = \begin{bmatrix}
p \\
\Sigma^+
\end{bmatrix} =
\begin{bmatrix}
\ket{uud}\\
\ket{uus}
\end{bmatrix} =
\begin{bmatrix}
\ket{\frac{1}{2}, +\frac{1}{2}}\\
\ket{\frac{1}{2}, -\frac{1}{2}}
\end{bmatrix}\,.
\end{equation}
The Hamiltonian and CKM factors for this system are the same as for $D \to P^+ P^-$ and are given in Eqs.~\eqref{eq:D-Ham}--\eqref{eq:charmCKM-f1m-again}. As in Section~\ref{sec:D-to-PP} we only consider the $H^1$ operators in the Hamiltonian.
The $C_b \to L_b P^- P^+$ system is described by $r = 5$ irreps, among which four are doublets and one is a triplet. We order the representations as follows
\begin{equation}
u_0 = u_1 = u_2 = u_3 = \frac{1}{2},\qquad u_4 = 1,
\end{equation}
where we choose the order to be $(C_b,L_b,P^-,P^+,H^1)$.
The mapping of the decay processes to $n$-tuples is given in Table~\ref{tab:map-c-baryons}. For this system $p=0$.
\begin{table}[t]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
Decay & $U$-spin conjugate & $n$-tuple & Node & ~~$(-1)^{q_i}$~~ \\
\hline
~~~$\Lambda^+_c \to \Sigma^+ K^- K^+$~~~ & ~~~$\Xi_c^+ \to p\pi^- \pi^+$~~~ & ~~$(-, -, -, +, + +)$~~ & ~~$(1,2)$~~ & $+1$ \\
$\Lambda_c^+ \to \Sigma^+ \pi^- \pi^+$ & $\Xi_c^+ \to p K^- K^+$ & $(-,-,+,-,++)$ & $(1,3)$ & $+1$ \\
$\Lambda^+_c \to \Sigma^+ \pi^- K^+$ & $\Xi_c^+ \to p K^- \pi^+$ & $(-,-,+,+,-+)$ & $(1,4)$ & $-1$ \\
$\Lambda_c^+ \to p K^- \pi^+$ & $\Xi_c^+ \to \Sigma^+ \pi^- K^+$ & $(-,+,-,-,++)$ & $(2,3)$ & $+1$ \\
$\Lambda_c^+ \to p K^- K^+$ & $\Xi_c^+ \to \Sigma^+ \pi^- \pi^+$ & $(-,+,-,+,-+)$ & $(2,4)$ & $-1$ \\
$\Lambda_c^+ \to p \pi^- \pi^+$ & $\Xi_c^+ \to \Sigma^+ K^- K^+$ & $(-,+,+,-,-+)$ & $(3,4)$ & $-1$ \\
$\Lambda_c^+ \to p \pi^- K^+$ & $\Xi_c^+ \to \Sigma^+ K^- \pi^+$ & $(-,+,+,+,--)$ & $(4,4)$ & $+1$ \\
\hline
\end{tabular}
\caption{The mapping of the $U$-spin pairs of the baryonic charm decays $C_b \to L_b P^- P^+$ into generalized $n$-tuples. \label{tab:map-c-baryons} }
\end{table}
We consider the system of four doublets and one triplet in Section~\ref{sec:gen-examples}. The lattice is shown in Fig.~\ref{fig:n6-4d-1t}. Harvesting the sum rules from the lattice we obtain seven trivial $a$-type sum rules that are valid to order $b = 0$. They are
\begin{equation}
a_{(1,2)} = a_{(1,3)} = a_{(1,4)} = a_{(2,3)} = a_{(2,4)} = a_{(3,4)} = a_{(4,4)} = 0.
\end{equation}
In terms of CKM-free amplitudes these $b=0$ sum rules take the following form
\begin{align}
A\left(\Lambda^+_c \to \Sigma^+ K^- K^+\right) &= A\left(\Xi_c^+ \to p\pi^- \pi^+\right),\\
A\left(\Lambda_c^+ \to \Sigma^+ \pi^- \pi^+\right) &= A\left(\Xi_c^+ \to p K^- K^+\right),\\
A\left(\Lambda^+_c \to \Sigma^+ \pi^- K^+\right) &= A\left(\Xi_c^+ \to p K^- \pi^+\right),\\
A\left(\Lambda_c^+ \to p K^- \pi^+\right) &= A\left(\Xi_c^+ \to \Sigma^+ \pi^- K^+\right),\\
A\left(\Lambda_c^+ \to p K^- K^+\right) &= A\left(\Xi_c^+ \to \Sigma^+ \pi^- \pi^+\right),\\
A\left(\Lambda_c^+ \to p \pi^- \pi^+\right)& = A\left(\Xi_c^+ \to \Sigma^+ K^- K^+\right),\\
A\left(\Lambda_c^+ \to p \pi^- K^+\right)& = A\left(\Xi_c^+ \to \Sigma^+ K^- \pi^+\right).
\end{align}
The $s$-type sum rules that are valid up to order $b = 1$ of $U$-spin breaking are read off the lines of the lattice in Fig.~\ref{fig:n6-4d-1t} and are given by
\begin{align}
s_{(1,2)} + s_{(1,3)} + \sqrt{2} s_{(1,4)} &= 0, \\
s_{(1,2)} + s_{(2,3)} + \sqrt{2} s_{(2,4)} &= 0, \\
s_{(1,3)} + s_{(2,3)} + \sqrt{2} s_{(3,4)} &= 0, \\
s_{(1,4)} + s_{(2,4)} + s_{(3,4)} + \sqrt{2} s_{(4,4)} &= 0.
\end{align}
The above four $s$-type sum rules take the following form in terms of CKM-free amplitudes of the physical system
\begin{align}
+A\left(\Lambda^+_c \to \Sigma^+ K^- K^+\right)+A\left(\Xi_c^+ \to p\pi^- \pi^+\right) + A\left(\Lambda_c^+ \to \Sigma^+ \pi^- \pi^+\right) & \nonumber \\
+A\left(\Xi_c^+ \to p K^- K^+\right) - \sqrt{2}A\left(\Lambda^+_c \to \Sigma^+ \pi^- K^+\right) - \sqrt{2} A\left(\Xi_c^+ \to p K^- \pi^+\right) & = 0,\\
A\left(\Lambda^+_c \to \Sigma^+ K^- K^+\right)+A\left(\Xi_c^+ \to p\pi^- \pi^+\right) + A\left(\Lambda_c^+ \to p K^- \pi^+\right) & \nonumber \\
+ A\left(\Xi_c^+ \to \Sigma^+ \pi^- K^+\right) - \sqrt{2} A\left(\Lambda_c^+ \to p K^- K^+\right) - \sqrt{2} A\left(\Xi_c^+ \to \Sigma^+ \pi^- \pi^+\right) & = 0,
\\
+ A\left(\Lambda_c^+ \to \Sigma^+ \pi^- \pi^+\right) + A\left(\Xi_c^+ \to p K^- K^+\right) + A\left(\Lambda_c^+ \to p K^- \pi^+\right) & \nonumber \\
+ A\left(\Xi_c^+ \to \Sigma^+ \pi^- K^+\right) - \sqrt{2}A\left(\Lambda_c^+ \to p \pi^- \pi^+\right) - \sqrt{2} A\left(\Xi_c^+ \to \Sigma^+ K^- K^+\right) & = 0,
\\
- A\left(\Lambda^+_c \to \Sigma^+ \pi^- K^+\right) - A\left(\Xi_c^+ \to p K^- \pi^+\right) - A\left(\Lambda_c^+ \to p K^- K^+\right) \nonumber \\
- A\left(\Xi_c^+ \to \Sigma^+ \pi^- \pi^+\right) - A\left(\Lambda_c^+ \to p \pi^- \pi^+\right) - A\left(\Xi_c^+ \to \Sigma^+ K^- K^+\right) \nonumber \\
+\sqrt{2} A\left(\Lambda_c^+ \to p \pi^- K^+\right) + \sqrt{2} A\left(\Xi_c^+ \to \Sigma^+ K^- \pi^+\right) & = 0.
\end{align}
The $a$-type sum rule that holds at $b=2$ is
\begin{equation}
a_{(1,2)} + a_{(1,3)} + a_{(2,3)} + a_{(4,4)} + \sqrt{2} a_{(1,4)} +
\sqrt{2} a_{(2,4)} + \sqrt{2} a_{(3,4)} =0.
\end{equation}
In terms of CKM-free amplitudes of the physical system this sum rule becomes
\begin{align}
&+\left[A\left(\Lambda^+_c \to \Sigma^+ K^- K^+\right) - A\left(\Xi_c^+ \to p\pi^- \pi^+\right)\right] \nonumber\\&
+ \left[A\left(\Lambda_c^+ \to \Sigma^+ \pi^- \pi^+\right) - A\left(\Xi_c^+ \to p K^- K^+\right)\right]\nonumber\\&
+\left[A\left(\Lambda_c^+ \to p K^- \pi^+\right) - A\left(\Xi_c^+ \to \Sigma^+ \pi^- K^+\right)\right]
\nonumber\\&
+\left[A\left(\Lambda_c^+ \to p \pi^- K^+\right) - A\left(\Xi_c^+ \to \Sigma^+ K^- \pi^+\right)\right]
\nonumber\\&
-\sqrt{2}\left[A\left(\Lambda^+_c \to \Sigma^+ \pi^- K^+\right) -A\left(\Xi_c^+ \to p K^- \pi^+\right)\right]
\nonumber\\ &-
\sqrt{2}\left[A\left(\Lambda_c^+ \to p K^- K^+\right) - A\left(\Xi_c^+ \to \Sigma^+ \pi^- \pi^+\right)\right]
\nonumber\\&
-\sqrt{2}\left[A\left(\Lambda_c^+ \to p \pi^- \pi^+\right)- A\left(\Xi_c^+ \to \Sigma^+ K^- K^+\right)\right] =0.
\end{align}
For comparison, in Appendix~\ref{app:CbtoLbPP} we show the decompositions of the CKM-free amplitudes of the $C_b \to L_b P^+ P^-$ system in terms of reduced matrix elements. One can check explicitly that the sum rules we obtain here using our novel method are indeed satisfied.
\subsubsection{$D^0 \to P^0 P^{*0}$ decays}
In this section we obtain the sum rules for a system that is not easy to probe experimentally, as the final states contain particles that are not mass eigenstates. Still, it is an instructive example with which to illustrate our formalism, because it contains only triplet representations and no doublets. Specifically, we obtain the sum rules for the $U$-spin set $D^0\to P^0 P^{*0}$, where $D^0$ is a $U$-spin singlet, $P^0$ is the neutral pseudoscalar triplet, and $P^{*0}$ is the neutral vector triplet. Explicitly the $P^0$ and $P^{*0}$ triplets are
\begin{equation}
P^0 = \begin{bmatrix}
K^0\\
\eta_u \\ \bar K^0
\end{bmatrix}=
\begin{bmatrix}
\ket{1, +1}\\
\ket{1,0} \\ \ket{1,-1}
\end{bmatrix},
\qquad
P^{*0} = \begin{bmatrix}
K^{*0}\\
\phi_u \\ \bar K^{*0}
\end{bmatrix}=
\begin{bmatrix}
\ket{1, +1}\\
\ket{1,0} \\ \ket{1,-1}
\end{bmatrix},
\end{equation}
where we use the notation
\begin{equation}
\eta_u, \phi_u \equiv \frac{s \bar s - d \bar d}{\sqrt{2}}.
\end{equation}
While the $\eta_u$ and $\phi_u$ states are not mass eigenstates, they do have definite transformation behavior under $U$-spin and are therefore well-suited for an illustration of our methodology.
In terms of physical states, $\eta_u$ is a mixture of $\pi^0$, $\eta$, and $\eta'$, while $\phi_u$ is a mixture of $\rho^0$, $\omega$, and $\phi$. The Hamiltonian and CKM factors for $D^0\to P^0 P^{*0}$ are the same as for $D^0 \to P^+ P^-$ and $C_b \to L_b P^- P^+$ decays and are summarized in Eqs.~\eqref{eq:D-Ham}--\eqref{eq:charmCKM-f1m-again}.
The $U$-spin system under consideration is thus described by $r = 3$ representations all of which are triplets
\begin{equation}
u_0 = u_1 =u_2 = 1.
\end{equation}
We choose to order the irreps as $(H,P^0,P^{*0})$. The mapping of the CKM-free amplitudes of the set into generalized $n$-tuples is presented in Table~\ref{tab:map-3-triplets}. Note that the last amplitude $A\left(D^0 \to \eta_u \phi_u\right)$ in the table is $U$-spin self-conjugate. For this system $p=1$.
\begin{table}[t]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
Decay & $U$-spin conjugate & $n$-tuple & $(-1)^{q_i}$ & Indices\\
\hline
$D^0 \to \eta_u K^{*0} $ & $D^0 \to \eta_u \bar K^{*0} $ & $(--,-+,++)$ & $-1$ &7, 52 \\
$D^0 \to K^0 \phi_u $& $D^0 \to \bar K^0 \phi_u $& $(--,++,-+)$ & $-1$ & 13, 49 \\
~~~$D^0 \to \bar K^0 K^{*0} $~~~ & $D^0 \to K^0 \bar K^{*0} $ & ~~$(-+,--,++)$~~ & $+1$ & 19, 28 \\
~~~$D^0 \to \eta_u \phi_u $~~~ & $D^0 \to \eta_u \phi_u $ & ~~$(-+,-+,-+)$~~ & $+1$ & $21$ \\
\hline
\end{tabular}
\caption{The mapping of the $U$-spin pairs of $D^0\to P^0 P^{*0}$ decays into generalized $n$-tuples. \label{tab:map-3-triplets}}
\end{table}
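The $(-1)^{q_i}$ column of Table~\ref{tab:map-3-triplets} can be checked directly from the $n$-tuples, with the final state consisting of the last two segments ($P^0$ and $P^{*0}$). As before, we assume (our identification) that the number of ``$-$'' entries in a segment equals $u_j - m_j$:

```python
def q_sign(final_segments):
    """(-1)^{q_i} = product over final-state reps of (-1)^{u_j - m_j},
    reading u_j - m_j as the number of '-' entries in the segment."""
    sign = 1
    for seg in final_segments:
        sign *= (-1) ** seg.count("-")
    return sign

# (n-tuple, table value); ordering (H, P0, P*0), final = last two segments
rows = [
    (("--", "-+", "++"), -1),
    (("--", "++", "-+"), -1),
    (("-+", "--", "++"), +1),
    (("-+", "-+", "-+"), +1),
]
checks = [q_sign(segs[1:]) == expected for segs, expected in rows]
```

All four rows reproduce the signs listed in the table.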
The sum rules for a system of three triplets are derived in Appendix~\ref{app:3t}. Using Eqs.~\eqref{eq:3t-b0}--\eqref{eq:3t-b2} and taking into account the values of $(-1)^{q_i}$ listed in Table~\ref{tab:map-3-triplets}, we have the following $b=0$ sum rules
\begin{align}
A(D^0 \to \eta_u K^{*0}) &= -
A(D^0 \to \eta_u \bar K^{*0}),
\\
A(D^0 \to K^0 \phi_u) &= -
A(D^0 \to \bar K^0 \phi_u),
\\
A(D^0 \to \bar K^0 K^{*0}) &= -
A(D^0 \to K^0 \bar K^{*0}),
\\
A(D^0 \to \eta_u \phi_u) &=0.
\end{align}
For $b=1$ the sum rules are
\begin{align} \label{eq:3-tri-1}
& -A(D^0 \to \eta_u K^{*0}) +
A(D^0 \to \eta_u \bar K^{*0}) - A(D^0 \to K^0 \phi_u) +
A(D^0 \to \bar K^0 \phi_u) = 0, \\
& -A(D^0 \to \eta_u K^{*0}) +
A(D^0 \to \eta_u \bar K^{*0}) +
A(D^0 \to \bar K^0 K^{*0}) -
A(D^0 \to K^0 \bar K^{*0}) = 0,
\end{align}
and, finally, for $b=2$ we have
\begin{align}
-A(D^0 \to \eta_u K^{*0}) -
A(D^0 \to \eta_u \bar K^{*0}) - A(D^0 \to K^0 \phi_u) -
A(D^0 \to \bar K^0 \phi_u) &+ \nonumber \\
A(D^0 \to \bar K^0 K^{*0}) +
A(D^0 \to K^0 \bar K^{*0})
+2 A(D^0 \to \eta_u \phi_u) &=0.
\end{align}
\subsubsection{$\bar B^0 \to D^0 \Omega_B^- \bar\Omega_B^+$ decays}
We conclude this section with an example of a system with $n=8$ and $r = 4$. We consider $\bar B^0$ decays into a $D^0$ meson and two baryons that belong to the multiplets $\Omega_B^-$ and $\bar\Omega_B^+$, each with $U$-spin $3/2$. This is an example of a system that has not been studied theoretically and is yet to be measured experimentally. The baryon multiplets $\Omega_B^-$ and $\bar\Omega_B^+$ are defined as follows:
\begin{equation}
\Omega_B^- = \begin{bmatrix}
\Delta^- \\ \Sigma^{*-} \\\Xi^{*-} \\ \Omega^-
\end{bmatrix} =
\begin{bmatrix}
\ket{ddd}\\
\ket{dds} \\ \ket{dss} \\ \ket{sss}
\end{bmatrix} =
\begin{bmatrix}
\ket{\frac{3}{2}, +\frac{3}{2}}\\
\ket{\frac{3}{2}, +\frac{1}{2}} \\\ket{\frac{3}{2}, -\frac{1}{2}}\\
\ket{\frac{3}{2}, -\frac{3}{2}}
\end{bmatrix}\,,
\qquad
\bar\Omega_B^+ = \begin{bmatrix}
\bar\Omega^+ \\\bar\Xi^{*+} \\ \bar \Sigma^{*+} \\
\bar \Delta^+
\end{bmatrix} =
\begin{bmatrix}
\ket{\bar{s} \bar{s} \bar{s}}\\
-\ket{\bar d \bar s \bar s}\\
\ket{\bar d \bar d \bar s}\\
-\ket{\bar d \bar d \bar d}
\end{bmatrix} =
\begin{bmatrix}
\ket{\frac{3}{2}, +\frac{3}{2}}\\
\ket{\frac{3}{2}, +\frac{1}{2}} \\\ket{\frac{3}{2}, -\frac{1}{2}}\\
\ket{\frac{3}{2}, -\frac{3}{2}}
\end{bmatrix}\,.
\end{equation}
The $\bar B^0$ doublet is given by
\begin{equation}
\bar B^0 = \begin{bmatrix}
\bar B_s^0 \\
\bar B_d^0
\end{bmatrix} =
\begin{bmatrix}
\ket{b \bar s}\\ -\ket{b \bar d}
\end{bmatrix} =
\begin{bmatrix}
\ket{\frac{1}{2}, +\frac{1}{2}} \\ \ket{\frac{1}{2}, -\frac{1}{2}}
\end{bmatrix}\,.
\end{equation}
The $D^0$ meson is a $U$-spin singlet.
The leading order effective Hamiltonian for the $\bar B^0 \to D^0 \Omega_B^- \bar\Omega_B^+$ system is a doublet,
\begin{equation}
\mathcal{H}^{(0)}_\text{eff} = \sum_{m=-1/2}^{1/2} f_{1/2,m} H_m^{1/2},
\end{equation}
with
\begin{equation} \label{eq:H-b-to-c}
H_{1/2}^{1/2} =
(\bar c b) (\bar s u),\qquad
H_{-1/2}^{1/2} =
(\bar c b) (\bar d u),
\end{equation}
and the CKM factors
\begin{equation}
f_{1/2,1/2} = V_{cb}V_{us}^*, \qquad
f_{1/2, -1/2} =
V_{cb}V_{ud}^*.
\end{equation}
We order the $U$-spin representations that describe the system as follows
\begin{equation}
u_0=u_1={1 \over 2}, \qquad u_2=u_3={3 \over 2},
\end{equation}
where we choose the order to be $(\bar B^0,H,\Omega_B^-,\bar\Omega_B^+)$. The mapping of the amplitudes of the physical system into generalized $n$-tuples is then given in Table~\ref{tab:map-b-baryons}.
\begin{table}[t]
\centering
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Decay & $U$-spin conjugate & $n$-tuple & Node &~$\mu$-factor~& $(-1)^{q_i}$\\
\hline
~$\bar B^0_s\rightarrow D^0\, \Xi^{*-} \bar \Omega^+ $ ~&
~$\bar B^0_d\rightarrow D^0 \,\Sigma^{*-} \bar\Delta^+ $~&
~~$(-,-,- -+,+++)$~~ &
~~$(1,2,2)$~~ & $2\sqrt{3}$ & $+1$\\
~$\bar B^0_s\rightarrow D^0\, \Sigma^{*-} \bar\Xi^{*+} $ ~&~$\bar B^0_d\rightarrow D^0 \,\Xi^{*-} \bar\Sigma^{*+} $~&
~~$(-,-,- ++,-++)$~~ &
~~$(1,2,3)$~~ & 3 & $+1$\\
~$\bar B^0_s\rightarrow D^0\, \Delta^{-} \bar\Sigma^{*+} $~&~$\bar B^0_d\rightarrow D^0 \,\Omega^{-} \bar\Xi^{*+} $~&
~~$(-,-,+++,--+)$~~ &
~~$(1,3,3)$~~ & $2 \sqrt{3}$ & $+1$\\
~$\bar B^0_s\rightarrow D^0 \,\Delta^{-} \bar\Delta^+ $ ~&~$\bar B^0_d\rightarrow D^0\, \Omega^{-} \bar\Omega^+ $ ~&
~~$(-,+,+++,---)$~~ &
~~$(3,3,3)$~~ & 6 & $-1$\\
~$\bar B^0_s\rightarrow D^0\, \Omega^{-} \bar\Omega^+ $~&~$\bar B^0_d\rightarrow D^0\, \,\Delta^{-} \bar\Delta^+$ ~&
~~$(-,+,---,+++)$~~ &
~~$(2,2,2)$~~ & 6 & $-1$\\
~$\bar B^0_s\rightarrow D^0\, \Sigma^{*-} \bar\Sigma^{*+} $~&~$\bar B^0_d\rightarrow D^0\, \,\Xi^{*-} \bar\Xi^{*+} $~&
~~$(-,+,-++,--+)$~~ &
~~$(2,3,3)$~~ & 6 & $-1$\\
~$\bar B^0_s\rightarrow D^0\, \Xi^{*-} \bar\Xi^{*+} $~&~$\bar B^0_d\rightarrow D^0\, \,\Sigma^{*-} \bar\Sigma^{*+} $~&
~~$(-,+,--+,-++)$~&
~~$(2,2,3)$~~ & 6 & $-1$\\
\hline
\end{tabular}
\caption{The mapping of the $U$-spin pairs of $\bar B^0 \to D^0 \Omega_B^- \bar\Omega_B^+$ decays into generalized $n$-tuples. \label{tab:map-b-baryons}}
\end{table}
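The $\mu$-factor column of Table~\ref{tab:map-b-baryons} offers a non-trivial check of Eq.~\eqref{eq:mu_j} beyond doublets and triplets. The short script below reads $2u_j$ off the segment length and $y_j = u_j - m_j$ off the number of ``$-$'' entries (our identification, the same one used for the doublet tables):

```python
from math import comb, factorial, sqrt, isclose

def mu_from_ntuple(segments):
    """mu = prod_j sqrt(C(2u_j, y_j)) * y_j!, with 2u_j = len(segment)
    and y_j read off as the number of '-' entries in the segment."""
    val = 1.0
    for seg in segments:
        y = seg.count("-")
        val *= sqrt(comb(len(seg), y)) * factorial(y)
    return val

# Rows of the table; ordering (B0bar, H, Omega-, Omegabar+)
table = {
    ("-", "-", "--+", "+++"): 2 * sqrt(3),   # node (1,2,2)
    ("-", "-", "-++", "-++"): 3.0,           # node (1,2,3)
    ("-", "-", "+++", "--+"): 2 * sqrt(3),   # node (1,3,3)
    ("-", "+", "+++", "---"): 6.0,           # node (3,3,3)
    ("-", "+", "---", "+++"): 6.0,           # node (2,2,2)
    ("-", "+", "-++", "--+"): 6.0,           # node (2,3,3)
    ("-", "+", "--+", "-++"): 6.0,           # node (2,2,3)
}
checks = [isclose(mu_from_ntuple(segs), mu) for segs, mu in table.items()]
```

All seven rows reproduce the tabulated $\mu$-factors.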
The $a$-type sum rules that are valid up to $b=0$ are the trivial ones, and we do not write them explicitly. The $s$-type sum rules that are valid up to $b=1$ are given by
\begin{align}
2 s_{(1,2,2)}+ \sqrt{3} s_{(1,2,3)} &=0\,, \\
2 s_{(1,3,3)}+ \sqrt{3} s_{(1,2,3)} &=0\,, \\
s_{(1,2,2)}+\sqrt{3}s_{(2,2,2)}+ \sqrt{3}s_{(2,2,3)} &=0\,,\\
s_{(1,2,3)}+ 2s_{(2,2,3)}+ 2s_{(2,3,3)}&=0\,,\\
s_{(1,3,3)}+\sqrt{3}s_{(3,3,3)}+ \sqrt{3}s_{(2,3,3)} &=0\,.
\end{align}
For the $a$-type sum rules that are valid up to $b=2$, we need to calculate the $W_b=M_b\times \mu$ factors. We write the results with the explicit products of the $M_b$- and $\mu$-factors:
\begin{align}
1 \times 2 \sqrt{3} a_{(1,2,2)}+ 2 \times 3a_{(1,2,3)}+ 1 \times 2 \sqrt{3} a_{(1,3,3)}&=0, \\
2 \times 2\sqrt{3} a_{(1,2,2)}+ 2 \times 3 a_{(1,2,3)}+ 1\times 6 a_{(2,2,2)}+ 2 \times 6 a_{(2,2,3)} + 1 \times 6 a_{(2,3,3)} &=0, \\
2 \times 2\sqrt{3} a_{(1,3,3)}+ 2 \times 3 a_{(1,2,3)}+ 1\times 6 a_{(3,3,3)}+ 1 \times 6 a_{(2,2,3)} + 2 \times 6 a_{(2,3,3)} &=0.
\end{align}
Simplifying, we can write the sum rules as
\begin{align}
a_{(1,2,2)}+
\sqrt{3} a_{(1,2,3)}+ a_{(1,3,3)}&=0, \\
2\sqrt{3} a_{(1,2,2)}+ 3 a_{(1,2,3)}+ 3 a_{(2,2,2)}+ 6 a_{(2,2,3)} + 3 a_{(2,3,3)} &=0, \\
2\sqrt{3} a_{(1,3,3)}+ 3 a_{(1,2,3)}+ 3 a_{(3,3,3)}+ 3 a_{(2,2,3)} + 6 a_{(2,3,3)} &=0.
\end{align}
The $s$-type sum rule for $b=3$ is given by
\begin{align}
&
3 \times 2 \sqrt{3} s_{(1,2,2)}+ 3 \times 2 \sqrt{3} s_{(1,3,3)}+
6 \times 3 s_{(1,2,3)}+ \nonumber
\\ & ~~~~~~ 1 \times 6 s_{(2,2,2)}+ 1\times 6 s_{(3,3,3)}+ 3 \times 6 s_{(2,2,3)} + 3 \times 6 s_{(2,3,3)} =0.
\end{align}
Simplifying, we get
\begin{align}
\sqrt{3} s_{(1,2,2)}+ \sqrt{3} s_{(1,3,3)}+
3 s_{(1,2,3)}+ s_{(2,2,2)}+ s_{(3,3,3)}+ 3 s_{(2,2,3)} + 3 s_{(2,3,3)} &=0.
\end{align}
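The simplifications above amount to dividing each sum rule by a common overall factor, which can be verified directly; the coefficient lists below are transcribed from the displayed equations (raw $M_b\times\mu$ products versus simplified coefficients):

```python
from math import sqrt, isclose

def proportional(raw, simplified):
    """Check that two coefficient lists differ by one overall factor."""
    ratio = raw[0] / simplified[0]
    return all(isclose(r, ratio * s) for r, s in zip(raw, simplified))

s3 = sqrt(3)
# b = 2 a-type sum rules: first and second displayed equations
check_a1 = proportional([1 * 2 * s3, 2 * 3, 1 * 2 * s3], [1, s3, 1])
check_a2 = proportional([2 * 2 * s3, 2 * 3, 1 * 6, 2 * 6, 1 * 6],
                        [2 * s3, 3, 3, 6, 3])
# b = 3 s-type sum rule
check_s = proportional(
    [3 * 2 * s3, 3 * 2 * s3, 6 * 3, 1 * 6, 1 * 6, 3 * 6, 3 * 6],
    [s3, s3, 3, 1, 1, 3, 3])
```

The overall factors are $2\sqrt{3}$, $2$, and $6$, respectively.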
\section{Conclusion and discussion \label{sec:conclusions}}
We have studied the general group-theoretical properties of $SU(2)$ amplitude sum rules, with particular emphasis on $U$-spin amplitude sum rules, and have found that there is a rich mathematical structure underlying them.
We have found that the basis of $a$- and $s$-type amplitudes is particularly useful for writing $SU(2)$ flavour sum rules. All the sum rules at any order of breaking can be written in terms of $a$- and $s$-type amplitudes. The mathematical structure that we found allowed us to formulate a straightforward algorithm to derive all the sum rules to all orders in symmetry breaking without the need to explicitly express the amplitudes in terms of reduced matrix elements (see Section~\ref{sec:algorithm}). The formulated method is computationally much simpler than the standard method of deriving the sum rules.
The sum rules for most of the systems that have been studied experimentally have already been derived using the standard approach. Yet, for some of them the higher-order corrections were not studied before. We derived the sum rules for these systems, including all higher-order corrections; see Section~\ref{sec:examples} for examples. While the experimental precision for these systems does not yet require it, our hope is that the higher-order corrections will eventually become relevant as the experimental precision improves.
There are several future directions we plan to take. Below we list them.
\setcounter{myenum}{1}
(\roman{myenum})~\addtocounter{myenum}{1}
In this work we only discuss the amplitude sum rules. It is not trivial how to relate them to physical observables such as decay rates and CP asymmetries. This is in particular problematic when we go beyond the leading order. The reason is that the amplitudes are functions of kinematic variables, while the measurements are done at some specific kinematic points. This issue is most problematic in the presence of narrow resonances and on the boundary of phase space.
(\roman{myenum})~\addtocounter{myenum}{1}
One of the main simplifying assumptions in this work is that we work with processes where the Hamiltonian is given by only one $U$-spin representation. It is interesting, both theoretically and practically, to generalize our results to the case where the Hamiltonian is given by a sum of several different representations.
(\roman{myenum})~\addtocounter{myenum}{1}
One other assumption that we have made, and would like to relax in the future, is that the initial and final state particles have well defined properties under $U$-spin. Physical particles, however, at times are given by mixtures of several representations.
(\roman{myenum})~\addtocounter{myenum}{1}
One more point to study is the case of
processes with identical particles. In practice, this situation emerges only in two-body decays. For more particles in the final state the momentum is in general different and thus the particles are not truly identical. Yet, it is interesting to discuss this also from the theoretical point of view.
(\roman{myenum})~\addtocounter{myenum}{1}
In this paper we only consider the $SU(2)$ flavor group. It seems plausible that similar results can be obtained for $SU(3)$.
(\roman{myenum})~\addtocounter{myenum}{1}
One other point that we did not discuss in this work is the freedom to redefine amplitudes, in particular by a phase. In this work, we have chosen one convention and followed it everywhere. In future work, however, we would like to understand which phases are physical and which are not.
To conclude, our hope is that a deeper understanding of flavor symmetries and higher order amplitude sum rules would eventually lead to performing precise measurements of fundamental parameters that are related to flavor physics.
\begin{acknowledgements}
We thank Saquib Hassan, Zoltan Ligeti, Wee Hao Ng,
Dean Robinson, and Yotam Soreq for helpful discussions.
The work of YG is supported in part by the NSF grant PHY1316222.
S.S. is supported by a Stephen Hawking Fellowship from UKRI under reference EP/T01623X/1 and the Lancaster-Manchester-Sheffield Consortium for Fundamental Physics, under STFC research grant ST/T001038/1.
For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) licence to any Authors Accepted Manuscript version arising.
This work uses existing data which is available at locations cited in the bibliography.
\end{acknowledgements}
\begin{appendix}
\section{Rotation between Physical and $U$-spin bases}\label{app:physical_vs_Uspin_basis}
We start our analysis with \emph{a set of states} related by $U$-spin (not to be confused with a $U$-spin set of amplitudes). Such a $U$-spin set of states is fully defined by listing the representations forming the states. If we talk specifically about the initial or final state this would be a list of $U$-spin multiplets of particles in the initial/final state.
Consider a state that is described by $g\ge 2$ QNs. We distinguish between two types of QNs: $U$-type and $m$-type. The $U$-type QNs are the ones that describe the total $U$-spin of states, while the $m$-type QNs are the ones that describe the third components of the representations. Specific states can be expressed in different bases. We are in particular using the following two:
\begin{enumerate}
\item
{\it Physical basis:} This basis is what is commonly referred to as a ``product state basis.'' In this basis all the $g$ QNs are $m$-type.
\item
{\it $U$-spin basis:}
This basis is commonly referred to as a ``basis of definite value of total $U$-spin.'' In this basis the states are defined by one $m$-type QN and $g-1$ $U$-type QNs.
\end{enumerate}
The physical basis is unique up to reordering of multiplets and their corresponding $m$-QNs, which does not change the states, while, as we explain later, there might exist many different $U$-spin bases.
In what follows we elaborate on these definitions and formulate how to perform the basis rotation between the two bases. Let us consider a system of states described by $g$ irreps $u_1, u_2, \dots, u_g$. For shortness we denote the entire set of representations forming the system as $\bar{u}$. That is we define
\begin{equation}
\bar{u} = \{u_1, u_2, \dots, u_g\},
\end{equation}
where we use the bar to represent a set of similar objects. (While this is the same notation as for the anti-up quark, the context makes it clear what is meant.)
Now, once the system of states is defined via listing the representations $\bar{u}$ we move to describing the states themselves.
In the physical basis each state in the $U$-spin set of states that we consider can be described by $g$ $m$-QNs: $m_1, m_2, \dots, m_g$, which are the third-component projections of each of the representations in the set $\bar{u}$. We denote such set as $\bar{m}$ and we write
\begin{equation}
\bar{m} = \{m_1, m_2, \dots, m_g\}.
\end{equation}
We emphasize that the set $\bar u$ describes the system and thus is the same for all the states of the set of states, while different sets $\bar m$ describe different states within the set.
In the following we adopt the notation $\ket{*;*}$ for states where the semicolon divides the QNs that describe the system and the QNs that describe a specific state of the system. At times, however, for shortness, we omit the part that describes the system and simply use $\ket{*}$ to describe the states.
To represent a state that belongs to the system of states $\bar{u}$ and is described by the set $\bar m$ in the physical basis we use $\ket{\bar u;\bar m}$, which is given explicitly by the following tensor product
\begin{equation}\label{eq:um_state_in_physical_basis}
\ket{\bar{u};\bar{m}} \equiv \ket{u_1;m_1}\otimes\ket{u_2; m_2}\otimes \dots \otimes \ket{u_g;m_g},
\end{equation}
thus the alternative name ``product state basis.'' At times the $\bar u$ in the above is implicit and we write the state as $\ket{\bar m}$.
In the $U$-spin basis each state of the system under consideration is described by one $m$-type QN which we denote as $M$ and by a set of $g-1$ $U$-type QNs that we denote as
\begin{equation}
\bar U = \{U_1, U_2, \dots,U_{g-1}\}.
\end{equation}
Note that we use capital $U$ here to distinguish from the $u$ that describes the system. The $m$-type QN $M$ is given by
\begin{equation}
M = \sum_{j=1}^g m_j,
\end{equation}
and it is the total $m$-QN of the state. With this, each state in the $U$-spin basis can be denoted as $\ket{\bar{u};\bar U, M} \equiv \ket{\bar U, M}$, where we omit the $\bar u$ for brevity of notation.
The difference between the two bases $\ket{\bar{m}}$ and $\ket{\bar{U}, M}$ is that the former one is written as a product of states, while the latter one arises from explicitly taking the tensor products in Eq.~\eqref{eq:um_state_in_physical_basis}. There are $g-1$ tensor products in Eq.~\eqref{eq:um_state_in_physical_basis} and the $g-1$ elements of the set $\bar{U}$ are the values of the total $U$-spin taken one from each tensor product. The last element of the set $\bar{U}$, that is $U_{g-1}$, gives the total $U$-spin of the state in the $U$-spin basis.
In general, for $g\ge3$, there are many basis choices that result in different $U$-spin bases. In these different bases each state has the same $M$ and total $U$-spin, but the other $U$-spin QNs could be different depending on the order of the tensor product.
To better understand the definitions of the physical and $U$-spin bases, consider as an example the tensor product of three doublets:
\begin{equation}
\left(\frac{1}{2} \otimes \frac{1}{2}\right)\otimes \frac{1}{2} = \left(0 \oplus 1\right)\otimes \frac{1}{2} = \left(\frac{1}{2}\right)_0\oplus \left(\frac{1}{2}\right)_1\oplus\left(\frac{3}{2}\right)_1.
\end{equation}
The subscripts $0$ and $1$ on the RHS indicate the intermediate representation from which each of the representations $(1/2)$, $(1/2)$, and $(3/2)$ is obtained. Note that we distinguish between representations coming from different intermediate terms even if their total $U$-spin is the same. In this example of three doublets there are three possible sets $\bar U= \{U_1,U_2\}$:
\begin{equation}\label{eq:eq:ubar-options}
\left\{0, \frac{1}{2}\right\}, \qquad \left\{1, \frac{1}{2}\right\}, \qquad \left\{1,\frac{3}{2}\right\}.
\end{equation}
There are 8 states in this system. In the physical basis they are
\begin{equation}
\ket{m_1,m_2,m_3},
\end{equation}
where each $m_j$ can assume a value of $\pm 1/2$. In the $U$-spin basis the states are
\begin{equation}
\ket{0,1/2,M}, \qquad
\ket{1,1/2,M}, \qquad
\ket{1,3/2,M},
\end{equation}
with $-U_2 \le M \le U_2$, where $U_2$ is the total $U$-spin of the state. As we see, the number of states in the $U$-spin basis is also equal to 8, as it should be.
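The iterative construction of the $U$-spin bases can be sketched in a few lines of code. The snippet below (the helper name \texttt{ubar\_sets} is ours, and exact rational arithmetic is used for the half-integer QNs) enumerates the sets $\bar U$ for a given $\bar u$ and reproduces the state counting of the three-doublet example:

```python
# Enumerate all sets Ubar = {U_1, ..., U_{g-1}} obtained by coupling the
# irreps in ubar iteratively, left to right (triangle rule at each step).
from fractions import Fraction

def ubar_sets(us):
    couplings = [(us[0], ())]  # (running total U, accumulated Ubar tuple)
    for u in us[1:]:
        new = []
        for total, acc in couplings:
            U = abs(total - u)
            while U <= total + u:  # triangle rule |U - u| <= U' <= U + u
                new.append((U, acc + (U,)))
                U += 1
        couplings = new
    return [acc for _, acc in couplings]

half = Fraction(1, 2)
sets = ubar_sets([half, half, half])  # three doublets
assert sets == [(0, half), (1, half), (1, Fraction(3, 2))]

# State counting: the sum of (2 U_T + 1) over the Ubar sets reproduces the
# 2 x 2 x 2 = 8 states of the physical basis.
assert sum(2 * s[-1] + 1 for s in sets) == 8
```

The triangle rule applied at each coupling step is what generates the multiplicity of $U$-spin bases for $g\ge 3$.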
The rotation between the physical basis and the $U$-spin basis can be written as follows
\begin{equation}\label{eq:basis_rot_def}
\ket{\bar u; \bar m} = \sum_{\bar U} C^*(\bar u; \bar{m}, \bar U) \ket{\bar U, M},
\end{equation}
where the coefficients $C^*(\bar u; \bar{m}, \bar U)$ are given by products of Clebsch-Gordan coefficients.
Note that the sum in Eq.~(\ref{eq:basis_rot_def}) goes over the different $\bar{U}$ sets, not the elements of one particular $\bar{U}$. In the example above that would, for instance, be a sum over three sets listed in Eq.~\eqref{eq:eq:ubar-options}.
To write the coefficients $C^*(\bar u; \bar{m}, \bar U)$ explicitly one needs to specify a concrete $U$-spin basis, that is, to specify the order of the tensor product. When the tensor product is taken iteratively in the order in which the representations are listed in the set $\bar u$, then the coefficients $C^*(\bar u; \bar{m}, \bar U)$ are given by
\begin{align}\label{eq:C*_def}
C^*(\bar u; \bar m, \bar U) = \mathop{C_{u_1, m_1}}_{\hspace{8pt} u_2, m_2}^{\hspace{8pt} U_1, M_1} \times \mathop{C_{U_1, M_1}}_{\hspace{8pt} u_3, m_3}^{\hspace{8pt} U_2, M_2} \times
\dots\times \mathop{C_{U_{g-2}, M_{g-2}}}_{\hspace{-10pt} u_g, m_g}^{\hspace{-4pt} U_{g-1}, M},
\end{align}
where
\begin{align}
\mathop{C_{u_j, m_j}}_{\hspace{8pt} u_k, m_k}^{\hspace{8pt} U, M} = \braket{u_j\, m_j\, u_k\, m_k}{U\, M}
\end{align}
are the Clebsch-Gordan coefficients and
\begin{equation}
M_j = \sum_{k=1}^{j+1} m_k, \qquad M=M_{g-1}=\sum_{k=1}^{g} m_k.
\end{equation}
Recall that the parameters before the semicolon describe the system and can be omitted. Thus, sometimes we simply use $C^*(\bar u; \bar m, \bar U)\equiv C^*(\bar{m}, \bar{U})$.\\
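As an illustration, Eq.~\eqref{eq:C*_def} can be implemented directly with SymPy's exact Clebsch-Gordan coefficients. The sketch below (the helper name \texttt{cstar} is ours and is not from any library) computes the rotation coefficients for the three-doublet example of Eq.~\eqref{eq:eq:ubar-options} and checks the unitarity of the basis rotation:

```python
# Compute C*(ubar; mbar, Ubar) as the iterated product of Clebsch-Gordan
# coefficients, coupling the irreps in the order in which they are listed.
from sympy import S
from sympy.physics.quantum.cg import CG

def cstar(us, ms, Us):
    total_u, total_m = us[0], ms[0]
    coeff = S(1)
    for u, m, U in zip(us[1:], ms[1:], Us):
        M = total_m + m
        if abs(M) > U:  # the component M does not fit inside the multiplet U
            return S(0)
        coeff *= CG(total_u, total_m, u, m, U, M).doit()
        total_u, total_m = U, M
    return coeff

h = S(1) / 2
# Three doublets in the state |+1/2, +1/2, -1/2>, decomposed onto the
# three Ubar sets {0, 1/2}, {1, 1/2}, {1, 3/2}:
coeffs = [cstar([h, h, h], [h, h, -h], Us)
          for Us in ([0, h], [1, h], [1, S(3) / 2])]

# The first coefficient vanishes (M_1 = 1 cannot sit inside U_1 = 0), and
# the rotation is unitary: the squared coefficients sum to 1.
assert coeffs[0] == 0
assert sum(c**2 for c in coeffs) == 1
```

The same function applies to any number of irreps, since the coupling is performed iteratively in the listed order, matching the basis choice used in Eq.~\eqref{eq:C*_def}.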
Several remarks are in order. First, above we only discuss the states described by $g \ge 2$ QNs. If $g < 2$ there are two possibilities:
\begin{enumerate}
\item[(i)] $g = 1$: the state is described by one non-trivial irrep and thus a single $m$-type QN,
\item[(ii)] $g = 0$: the state is a $U$-spin singlet.
\end{enumerate}
For both of these cases, the physical basis and the $U$-spin basis are the same. Thus for both $g=1$ and $g=0$ we say that the set of $U$-type QNs for the state is an empty set $\bar U = \emptyset$ (as in this case no product can be formed). For the case of $g=1$ we have for the $C^*$ coefficients
\begin{equation}
C^*(u_1; m_1, \emptyset) \equiv 1, \qquad \forall u_1, -u_1 \le m_1 \le u_1.
\end{equation}
Note that $u_1$ and $m_1$ are the single elements of $\bar u$ and $\bar m$; for simplicity, in these cases we write the single elements directly instead of the corresponding one-element sets.
In the case of a singlet state $g=0$ we have $\bar u = \bar m = \emptyset$ and thus we define
\begin{equation}
C^*(\emptyset;\emptyset,\emptyset) \equiv 1.
\end{equation}
Second, some care has to be taken when there are identical irreps in the system. Consider, for example, two identical doublets. In the physical basis the fact that they are identical can be written as
\begin{equation}
\ket{+1/2,-1/2} = \ket {-1/2,+1/2}\,.
\end{equation}
In the $U$-spin basis the fact that they are identical implies that only the triplet combination is possible and the singlet combination is identically zero.
More generally, the presence of identical irreps in the system leads to some states being identical in the physical basis.
In the $U$-spin basis this implies that the states corresponding to some of the sets $\bar U$ that occur for distinguishable irreps are identically zero.
We conclude this section with a remark about physical and $U$-spin bases for amplitudes. We say that an amplitude is written in the physical ($U$-spin) basis when the initial state, final state and the Hamiltonian are in the physical ($U$-spin) basis.
\section{Decomposition of amplitudes in terms of RMEs \label{app:RMEdecomposition}}
In this appendix we show how basis rotations together with the Wigner-Eckart theorem allow us to express amplitudes of physical processes in terms of RMEs; that is, we derive Eq.~\eqref{eq:factorization}, which we rewrite below for convenience:
\begin{equation}\label{eq:decomposAi}
\mathcal{A}_j = f_{u,m} \sum_\alpha C_{j\alpha} X_\alpha\,.
\end{equation}
In what follows we give explicit definitions for the coefficients $C_{j\alpha}$ and the RMEs $X_\alpha$.
\subsection{Defining the $U$-spin set}\label{app:def-u-set}
To define a generic $U$-spin set it is necessary to describe the $U$-spin structure of the initial state, the final state, and the Hamiltonian. In this appendix we consider a $U$-spin set which is described by $g_I$ $U$-spin irreps in the initial state and $g_F$ irreps in the final state, where the irreps can be arbitrary. (Note that in practice we are usually interested in decays and thus $g_I=1$. Yet here we consider the general case.) For the Hamiltonian, the number of irreps in the $U$-spin limit is one, see Sec.~\ref{sec:definitions}. This irrep can also be the singlet operator.
To describe the $U$-spin structure of the initial and final states of the system we use
\begin{equation} \label{eq:irreps_in_out}
\bar u^I = \{u_1^I, u_2^I, ... u^I_{g_I}\}, \qquad
\bar u^F = \{u_1^F, u_2^F, ... u^F_{g_F}\},
\end{equation}
where $u^I_j$ ($u^F_j$) denote the elements of the set of $g_I$ ($g_F$) representations in the initial (final) state. The $U$-spin of the Hamiltonian is denoted as $u$. The $U$-spin system is fully described by $\bar u^I$, $\bar u^F$, and $u$.
Each amplitude from the $U$-spin set under consideration can be described by a set of $(g_I+g_F)$ QNs. In the physical basis these are the $m$-type QNs of the specific components of the multiplets in Eq.~\eqref{eq:irreps_in_out}. We denote the corresponding sets of $m$-QNs as $\bar{m}^I$ and $\bar{m}^F$, where
\begin{equation}
\bar{m}^I = \{m^I_1, m^I_2, \dots, m^I_{g_I}\}, \qquad \bar{m}^F = \{m^F_1, m^F_2, \dots, m^F_{g_F}\}.
\end{equation}
With this, according to the notation we introduced in Appendix~\ref{app:physical_vs_Uspin_basis} the initial and final states of the $U$-spin system in the physical basis are denoted as
\begin{equation}\label{eq:m_of_irreps_in_out}
\ket{\bar m^I} =\ket{ m^I_1, m^I_2,..., m^I_{g_I}}, \qquad
\ket{\bar m^F} = \ket{m^F_1, m^F_2,..., m^F_{g_F}}.
\end{equation}
Each amplitude in the physical basis is then described by $\bar m^I$ and $\bar m^F$. We use $m$ to denote the $m$-QN of the Hamiltonian operator that contributes to the amplitude. Note that for non-zero amplitudes $m$ is not an independent QN and is given by
\begin{equation}\label{eq:mH-def}
m = \sum_{j=1}^{g_I} m^I_j - \sum_{j=1}^{g_F} m^F_j.
\end{equation}
Thus, for a given $U$-spin set, each amplitude can be indexed by a pair $(\bar m^I, \bar m^F)$.
This allows us to denote the amplitudes from any $U$-spin set as $\mathcal{A}_{(\bar m^I, \bar m^F)}$.
\subsection{The Wigner-Eckart theorem }
The Wigner-Eckart theorem states that for any spherical operator $O(u,m)$ and two states of angular momentum $\ket{u_1;m_1}$ and $\ket{u_2;m_2}$, the matrix element of the operator between the two states can be written as a product of a factor that depends on $m$-QNs and a factor that only depends on the values of the total $U$-spin. Formally the statement of the theorem is given by the following relation
\begin{equation}\label{eq:WE}
\mel{u_2; m_2}{O(u,m)}{u_1; m_1} = \mathop{C_{u_1, m_1}}_{\hspace{2pt} u, m}^{\hspace{8pt} u_2, m_2} \mel{u_2}{O(u)}{u_1},
\end{equation}
where the $m$-dependent factor is given by the Clebsch-Gordan coefficient. The $m$-independent factor, $\mel{u_2}{O(u)}{u_1}$, is called Reduced Matrix Element (RME).
The situation becomes more complicated when the states that we work with are given by products of several different states. In terms of the amplitudes in the physical basis this corresponds to $g_I >1$ and/or $g_F>1$. To apply the Wigner-Eckart theorem to such states we need to work in the $U$-spin basis, where each state has a definite value of the total $U$-spin.
Consider two states in the $U$-spin basis
\begin{equation}\label{eq:U_basis_states_WE}
\ket{\bar U^I,M^I}, \qquad
\ket{\bar U^F,M^F}.
\end{equation}
The last element of the set $\bar U^{I}$ ($\bar U^F$) is $U_{g_I-1}^{I}$ ($U^F_{g_F-1}$), and it gives the total $U$-spin of the state. Introducing the notations
\begin{equation}\label{eq:UT_notation}
U^I_T = U^I_{g_I-1}, \qquad U^F_T = U^F_{g_F-1},
\end{equation}
where the label \lq\lq{}$T$\rq\rq{} stands for the ``total'' value of $U$-spin, the Wigner-Eckart theorem can be rewritten for the case of the states in Eq.~\eqref{eq:U_basis_states_WE} as follows
\begin{equation}\label{eq:WE-gen}
\mel{\bar U^F, M^F}{O(u,m)}{\bar U^I, M^I } = \mathop{C_{U^I_T, M^I}}_{\hspace{-4pt} u, m}^{\hspace{12pt} U_T^F, M^F} \times \mel{\bar U^F}{O(u)}{\bar U^I}.
\end{equation}
That is, the matrix element in the $U$-spin basis is proportional to a RME that does not depend on the $m$-type QNs, but depends on all the $U$-type QNs. The CG coefficient depends on the $m$-QNs as well as the total $U$-spin of the initial state, final state, and the operator. Note that there is only one CG coefficient and not a product of them.
We conclude that in order to write the decomposition of the amplitudes
$\mathcal{A}_{(\bar m^I, \bar m^F)}$ in terms of RMEs,
we need to, first, perform the basis rotation from the physical basis to the $U$-spin basis and then apply the Wigner-Eckart theorem according to Eq.~\eqref{eq:WE-gen}. In the following subsections we elaborate on these two steps.
\subsection{Basis rotation}
First, we perform the basis rotation for the states from the physical basis to the $U$-spin basis using Eq.~\eqref{eq:basis_rot_def}. For the initial state we have
\begin{equation}\label{eq:in_decomp}
\ket{\text{in}} = \ket{\bar m^I} = \sum_{\bar{U}^{I}} C^*(\bar{u}^{I}; \bar{m}^I, \bar U^I) \ket{\bar U^I, M^I}, \qquad M^I=\sum_{j=1}^{g_I} m^I_j,
\end{equation}
and similarly for the final state
\begin{equation}\label{eq:out_decomp}
\ket{\text{out}} = \ket{\bar m^F} = \sum_{\bar{U}^{F}} C^*(\bar{u}^{F}; \bar{m}^F, \bar U^F) \ket{\bar U^F, M^F}, \qquad M^F=\sum_{j=1}^{g_F} m^F_j.
\end{equation}
In the equations above, $\bar U^I$ and $\bar U^F$ are the sets of $U$-type QNs for the initial and final states, respectively.
Next, we consider the Hamiltonian. The Hamiltonian in the $U$-spin limit takes the following form
\begin{equation}
\mathcal{H}_{\text{eff}}^{(0)} \, = \, \sum_{m} f_{u, m} \, H(u,m)\,,
\end{equation}
where for clarity we use $H(u,m) \equiv H^u_m$. As everywhere in this paper, we focus on Hamiltonians with only one fixed value of $U$-spin, which here is denoted as $u$.
Taking into account $U$-spin breaking results in an effective Hamiltonian as given in Eq.~\eqref{eq:Heff}, which we rewrite here in a slightly modified notation as
\begin{equation}
\mathcal{H}_\text{eff} = \sum_{m,b} f_{u,m} \, H(u,m)\otimes H_\varepsilon(1,0)^{\otimes b}.
\end{equation}
We use $H(u,m,b)$ to denote an operator of order $b$ in the Hamiltonian above, that is we define
\begin{equation}
H(u,m,b) \equiv H(u,m)\otimes H_\varepsilon(1,0)^{\otimes b}.
\end{equation}
This term in the Hamiltonian is written in the physical basis. Now, we would like to perform a basis rotation for this operator. Using Eq.~\eqref{eq:basis_rot_def} we obtain
\begin{equation}
H(u,m,b) = \sum_{\bar U} C^*(\bar u^H;\bar m^H,\bar U) H(\bar U,m,b),
\end{equation}
where the sets $\bar u^H$ and $\bar m^H$ are both of length $b+1$ and their elements are given by
\begin{align}\label{eq:uH-mH-de}
u^H_1 = u, \qquad u^H_{j+1} = 1, \qquad 1 \le j \le b, \nonumber\\
m^H_1 = m, \qquad m^H_{j+1} = 0, \qquad 1 \le j \le b.
\end{align}
$\bar U$ is a set of $U$-type QNs for the Hamiltonian to order $b$.
Note that the $b$ in $H(\bar U,m,b)$ is redundant, as it is encoded in $\bar U$, which has length $b$. Yet, due to its importance, we keep it explicitly. Finally, we write the effective Hamiltonian in the $U$-spin basis as
\begin{equation} \label{eq:H_decomp}
\mathcal{H}_{\text{eff}}\, = \, \sum_{m,b} f_{u,m} \,
\left(\sum_{\bar U} C^*(\bar u^H;\bar m^H,\bar U) H(\bar U,m,b)\right)\,.
\end{equation}
\subsection{Applying the Wigner-Eckart theorem}
Equipped with the decompositions in Eqs.~\eqref{eq:in_decomp}, \eqref{eq:out_decomp}, and \eqref{eq:H_decomp} and the Wigner-Eckart theorem, Eq.~\eqref{eq:WE-gen}, we finally can write the amplitudes in terms of RMEs and recover the expression in Eq.~\eqref{eq:decomposAi} with amplitudes
\begin{equation}\label{eq:Aj=AmImF}
\mathcal{A}_j \equiv
\mathcal{A}_{(\bar m^{I},\bar m^{F})},
\end{equation}
the RMEs
\begin{equation}\label{eq:X-def}
X_\alpha \equiv \mel{\bar U^F}{H(\bar U,b)}{\bar U^I},
\end{equation}
and the multi-index $\alpha$ given by
\begin{equation} \label{eq:def-alpha}
\alpha \equiv \{\bar U^I,\bar U^F, \bar U, b\}.
\end{equation}
The coefficients $C_{j\alpha}$ are given by
\begin{equation}\label{eq:Cjalpha}
C_{j\alpha} \equiv C^*(\bar{u}^{I}; \bar{m}^I, \bar U^I) \times
C^*(\bar{u}^{F}; \bar{m}^F, \bar U^F) \times C^*(\bar u^H;\bar m^H,\bar U) \times
\mathop{C_{U_T^I, M^I}}_{\hspace{2pt} U_T, m}^{\hspace{8pt} U_T^F, M^F}\, ,
\end{equation}
where we used the notation for the total $U$-spin of states in the $U$-spin basis from Eq.~\eqref{eq:UT_notation} and introduced $U_T$ for the total $U$-spin of operators in the Hamiltonian, which is equal to the last element of $\bar{U}$, that is $U_T \equiv U_b$.
Using $\alpha$ from Eq.~\eqref{eq:def-alpha} and
$j= \{\bar m^I,\bar m^F\}$ at times we also write the $m$-QN dependence explicitly
\begin{equation}\label{eq:Cjalpha-notation}
C_{j\alpha} =
C(\bar m^I,\bar m^F, \alpha).
\end{equation}
\section{Relation between decomposition of amplitudes forming a $U$-spin pair}\label{app:Upair_relation}
In this appendix we prove the relation between the RME decompositions of the amplitudes in Eqs.~\eqref{eq:factorization-again} and \eqref{eq:u-pair}. To establish the relation between the two amplitudes that form a $U$-spin pair, we use the following symmetry property of the CG coefficients
\begin{equation}\label{eq:CG_sym}
\mathop{C_{u_1, m_1}}_{\hspace{8pt} u_2, m_2}^{\hspace{25pt} u_3, m_1 + m_2} = (-1)^{u_1 + u_2 - u_3} \times \!\!\!\!\!\!\!\mathop{C_{u_1, -m_1}}_{\hspace{8pt} u_2, -m_2}^{\hspace{25pt} u_3, -m_1 - m_2}.
\end{equation}
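The symmetry property in Eq.~\eqref{eq:CG_sym} is standard, and it can be verified mechanically with SymPy's exact Clebsch-Gordan coefficients; the following sketch (not part of the derivation) scans a small grid of quantum numbers:

```python
# Numerically verify the CG symmetry relation
#   <u1 m1; u2 m2 | u3, m1+m2> = (-1)^(u1+u2-u3) <u1 -m1; u2 -m2 | u3, -m1-m2>
# over a grid of (half-)integer quantum numbers, using exact arithmetic.
from sympy import S
from sympy.physics.quantum.cg import CG

def qn_range(lo, hi, step):
    """Exact (half-)integer values lo, lo+step, ..., hi."""
    x = S(lo)
    while x <= hi:
        yield x
        x += step

checked = 0
for u1 in qn_range(0, S(3)/2, S(1)/2):
    for u2 in qn_range(0, S(3)/2, S(1)/2):
        # Triangle rule: u3 runs from |u1-u2| to u1+u2 in integer steps.
        for u3 in qn_range(abs(u1 - u2), u1 + u2, 1):
            for m1 in qn_range(-u1, u1, 1):
                for m2 in qn_range(-u2, u2, 1):
                    if abs(m1 + m2) > u3:
                        continue
                    lhs = CG(u1, m1, u2, m2, u3, m1 + m2).doit()
                    rhs = (-1)**int(u1 + u2 - u3) * \
                        CG(u1, -m1, u2, -m2, u3, -m1 - m2).doit()
                    assert lhs == rhs
                    checked += 1
assert checked > 0
```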
For a given $U$-spin set we consider the CKM-free amplitude
\begin{equation}\label{eq:Aif}
A_i\equiv A_{(\bar m^{I}, \bar m^{F})} = \sum_\alpha C(\bar m^I,\bar m^F, \alpha) X_\alpha\,,
\end{equation}
where we use the notation introduced in Appendix~\ref{app:RMEdecomposition}. The $U$-spin pair amplitude is expressed through the same set of RMEs, since they do not depend on the $m$-QNs, but with different coefficients:
\begin{equation}\label{eq:Aifpair}
A_\ell\equiv A_{(-\bar m^{I}, -\bar m^{F})} = \sum_\alpha C(-\bar m^I,-\bar m^F, \alpha) X_\alpha\,.
\end{equation}
Using Eqs.~\eqref{eq:Cjalpha},~\eqref{eq:C*_def}, and the symmetry property in Eq.~\eqref{eq:CG_sym} we find that
\begin{equation}\label{eq:C_i_l_relation}
C(\bar m^{I},\bar m^{F},\alpha) = (-1)^{p + b} C(-\bar m^{I},-\bar m^{F},\alpha),
\end{equation}
where the parity $(-1)^p$ can be defined as follows
\begin{equation}\label{eq:p_1st_version}
(-1)^p = (-1)^{-\sum_{j = 1}^{g_I} u^{I}_j - \sum_{j = 1}^{g_F} u^{F}_j + 2U^{F}_{g_F-1} - u}.
\end{equation}
As we see the factor $(-1)^p$ is the same for all the amplitudes of a system and thus we find the following expression for the $U$-spin pair amplitude $A_\ell$ of the amplitude $A_i$:
\begin{align}
\label{eq:Upair_decomp_theorem}
A_\ell &=
(-1)^p \sum_\alpha (-1)^b C(\bar m^I,\bar m^F, \alpha) X_\alpha\,.
\end{align}
Next, several comments about Eq.~\eqref{eq:p_1st_version} are in order. First, even though in deriving the results in Eqs.~\eqref{eq:C_i_l_relation} and~\eqref{eq:p_1st_version} we referred to Eq.~\eqref{eq:C*_def}, which gives the expression of the $C^*$ coefficients for a specific basis choice, the results are in fact basis independent. The basis independence, i.e.\ the independence of the specific order of the tensor products, arises because all the intermediate representations are bound to cancel due to the minus sign in front of $u_3$ in Eq.~\eqref{eq:CG_sym}.
Second, note the following:
\begin{enumerate}
\item
The power in Eq.~\eqref{eq:p_1st_version} is always an integer and thus $(-1)^p = \pm 1$.
\item
The expressions $\sum_{j = 1}^{g_F} u^{F}_j$ and $U^{F}_{g_F-1}$ are both either half-integer or integer and therefore $2\sum_{j = 1}^{g_F} u^{F}_j$ and $2 U^{F}_{g_F-1}$ have the same parity. Thus we can write
\begin{align}
(-1)^p = (-1)^{-\sum_{j = 1}^{g_I} u^{I}_j + \sum_{j = 1}^{g_F} u^{F}_j-u} \,.
\end{align}
\item
Recalling that $n$ is the number of would-be doublets for the system we have
\begin{equation}
\sum_{j = 1}^{g_I} u^{I}_j + \sum_{j = 1}^{g_F} u^{F}_j + u = {n \over 2}.
\end{equation}
\end{enumerate}
These properties allow us to introduce the following definition for $p$
\begin{equation}\label{eq:p_def}
p = \sum_{j = 1}^{g_F} u^{F}_j-u -\sum_{j = 1}^{g_I} u^{I}_j =
2\sum_{j = 1}^{g_F} u^{F}_j - {n \over 2}.
\end{equation}
This, of course, is only one of many possible definitions. For consistency, everywhere in this work we use Eq.~\eqref{eq:p_def} as the definition for $p$.
Next, consider two special cases:
\begin{enumerate}
\item [$(i)$]
The process is entirely described by $n$ $U$-spin doublets in the final state. In this case Eq.~\eqref{eq:p_def} results in $p=n/2$.
\item [$(ii)$]
All the irreps of the system have integer $U$-spin. In this case the sum $2\sum_{j = 1}^{g_F} u^{F}_j$ is an even number, and thus the parity of $p$ is determined by $n/2$, so we can set $p = n/2$.
\end{enumerate}
We thus conclude that in both of the special cases above the $p$-factor can be chosen as
\begin{equation}\label{eq:p_n_doublets}
p = \frac{n}{2}.
\end{equation}
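The definition of $p$ in Eq.~\eqref{eq:p_def} and the two special cases above are easy to check mechanically; a minimal sketch (the function name \texttt{p\_factor} is ours):

```python
# Compute the p-factor of Eq. (p_def): p = sum_F u_j - u - sum_I u_j,
# using exact rational arithmetic for the (half-)integer U-spins.
from fractions import Fraction

def p_factor(u_initial, u_final, u_hamiltonian):
    return (sum(u_final, Fraction(0)) - u_hamiltonian
            - sum(u_initial, Fraction(0)))

half = Fraction(1, 2)

# Special case (i): n U-spin doublets, all in the final state -> p = n/2.
n = 6
assert p_factor([], [half] * n, Fraction(0)) == Fraction(n, 2)

# Special case (ii): all irreps integer -> (-1)^p = (-1)^(n/2), so p can
# be chosen as n/2. Example: u^I = {1}, u^F = {1, 1}, u = 1, i.e. n/2 = 4.
uI, uF, u = [Fraction(1)], [Fraction(1), Fraction(1)], Fraction(1)
n_over_2 = sum(uI, Fraction(0)) + sum(uF, Fraction(0)) + u
p = p_factor(uI, uF, u)
assert (-1) ** int(p) == (-1) ** int(n_over_2)
```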
\section{Universality of sum rules}\label{app:signs}
In this appendix we compare two $U$-spin sets that belong to the same universality class, see Section~\ref{sec:universality}. We first show that one can establish a one-to-one correspondence between the amplitudes for any two $U$-spin sets from the same universality class. We then show that there is also a one-to-one correspondence between the RMEs of the two systems. With these correspondences in mind, we derive the relation between the coefficients that enter the group theoretical decompositions of the amplitudes for the two systems. This explicit result shows that the sum rules for any two $U$-spin sets from the same universality class are the same up to relative signs between the amplitudes.
We consider the following two systems:
\begin{enumerate}
\item \textbf{System I} is very general and is described by a set of $k$ representations $\bar{u}^{(1)}$ in the initial state, a set of $l$ representations $\bar u^{(2)}$ in the final state and a Hamiltonian with $U$-spin $u^{(3)}$. Using the notation of Appendix~\ref{app:def-u-set}, we have for System~I
\begin{align}
g_I = k, \qquad u^I_j &= u^{(1)}_j, \qquad 1 \le j \le k, \nonumber \\
g_F = l, \qquad u^F_j &= u^{(2)}_j, \qquad 1 \le j \le l, \nonumber \\
u &= u^{(3)}.
\end{align}
For the sets of irreps in the initial and final states we write
\begin{equation}
\bar{u}^I = \bar u^{(1)}, \qquad \bar{u}^F = \bar u^{(2)}.
\end{equation}
The CKM-free amplitudes of the System I are denoted as $A^{(\text{I})}_j$.
\item \textbf{System II} is described by the same $k+l+1$ representations as System I, but all of them belong to the final state (the Hamiltonian and the initial state are given by singlet states). Using the notation of Appendix~\ref{app:def-u-set} we write for the initial state and the Hamiltonian of System II
\begin{equation}
g_I = 0, \qquad \bar u^I = \emptyset, \qquad u = 0,
\end{equation}
and for the final state $g_F = k+l+1$ and the elements of the set $\bar u^F$ are given by
\begin{align}
u^F_j = u^{(1)}_j, & \qquad 1 \le j \le k,\nonumber \\
u^F_{j+k} = u^{(2)}_{j}, & \qquad 1 \le j \le l, \nonumber\\
u^{F}_{k+l+1} = u^{(3)}.
\end{align}
The set of the final state irreps can be written as a union of sets as follows
\begin{equation}
\bar{u}^F = \bar{u}^{(1)}\cup\, \bar{u}^{(2)} \cup\, u^{(3)}.
\end{equation}
The CKM-free amplitudes of System~II are denoted as $A^{(\text{II})}_j$.
\end{enumerate}
Systems I and II are described by the same sets of irreps and the only difference is the assignment of the irreps to the initial/final state and the Hamiltonian. Establishing the one-to-one correspondence between the amplitudes of the two systems is trivial. This can be done using $n$-tuples. If we construct the $n$-tuples for the two systems using the same assignment of $n$-tuple positions to representations, the two systems are described by two sets of $n$-tuples that are exactly the same. Thus we can say that an amplitude $A_{j_1}^{(\text{I})}$ is mapped into an amplitude $A_{j_2}^{(\text{II})}$ if and only if the two are described by the same $n$-tuple. In the index notation that we introduce in Section~\ref{sec:An-tuples-doublets} and generalize in Section~\ref{sec:n-tuples-generalized} this also implies that $j_1 = j_2$.
Next we move to establishing the one-to-one correspondence between the RME of System~I and System II.
\subsection{System I}
First, we consider System I. To describe an amplitude from this system it is enough to list the $m$-QNs of the representations in the initial and final states. We consider an amplitude that is described by a set of $m$-QNs $\bar m^{(1)}$ in the initial state and a set $\bar m^{(2)}$ in the final state:
\begin{align}
m^I_{j} &= m^{(1)}_j, \qquad 1 \le j\le k, \nonumber \\
m^{F}_{j} &= m^{(2)}_j, \qquad 1 \le j \le l,
\end{align}
that is for the amplitude under consideration we have
\begin{equation}\label{eq:SystemI-QNs}
\bar m^I = \bar m^{(1)}, \qquad \bar m^{F} = \bar m^{(2)}.
\end{equation}
Introducing
\begin{equation}
M^{(1)} = \sum_{j = 1}^{k} m^{(1)}_j, \qquad M^{(2)} = \sum_{j = 1}^{l} m^{(2)}_j,
\end{equation}
we can write for the amplitude
\begin{equation}\label{eq:Mrelations}
M^I = M^{(1)}, \qquad M^F = M^{(2)}, \qquad m = m^{(3)} = M^{(2)} - M^{(1)},
\end{equation}
where $m$, $M^I$, and $M^F$ are defined in Eqs.~\eqref{eq:mH-def}, \eqref{eq:in_decomp}, and \eqref{eq:out_decomp} respectively.
The sets $\bar u^H$ and $\bar m^H$ for a given order of breaking are defined in Eq.~\eqref{eq:uH-mH-def} below. For the example we consider, the elements of these sets take the following values

\begin{align}\label{eq:uH-mH-def}
u^H_1 = u^{(3)}, \qquad u^H_{j+1} = 1, \qquad 1 \le j \le b, \nonumber\\
m^H_1 = m^{(3)}, \qquad m^H_{j+1} = 0, \qquad 1 \le j \le b.
\end{align}
Introducing two sets of $b$ elements $\bar{u}^{(\text{br})}$ and $\bar{m}^{(\text{br})}$ (where the label (br) stands for breaking) such that
\begin{align}
\bar{u}^{(\text{br})}: \qquad u^{(\text{br})}_{j} = u^H_{j+1}, \qquad 1 \le j \le b, \nonumber \\
\bar{m}^{(\text{br})}: \qquad m^{(\text{br})}_j = m^H_{j+1}, \qquad 1 \le j \le b
\end{align}
we rewrite the sets $\bar u^H$ and $\bar m^H$ as the following unions
\begin{equation}\label{eq:uH-mH-def-v2}
\bar{u}^H = u^{(3)} \cup \bar u^{(\text{br})}, \qquad \bar{m}^H = m^{(3)} \cup \bar m^{(\text{br})}.
\end{equation}
For the $U$-type QNs $\bar U^I$ and $\bar U^F$ of the initial and final state, respectively, we use the following notation
\begin{align}
U^I_{j} = U^{(1)}_j, \qquad 1 \le j \le k-1, \nonumber \\
U^F_j = U^{(2)}_j, \qquad 1 \le j \le l-1,
\end{align}
where the sets $\bar U^{(1)}$ and $\bar U^{(2)}$ are the sets of $U$-type QNs for the tensor products of representations in sets $\bar u^{(1)}$ and $\bar u^{(2)}$.
For the $U$-type QNs $\bar U$ of the operators in the $b$th order Hamiltonian we write
\begin{align}
U_j &= U^{(\text{br})}_j, \qquad 1 \le j \le b-1, \nonumber \\
U_{b} &= U^\prime,
\end{align}
where $U'$ is the result of the tensor product of the breaking operators with the $U$-spin limit Hamiltonian.
We define $\bar U^{(\text{br})}$ to be a set of $b-1$ elements
\begin{align}
\bar{U}^{\text{(br)}} = \{ U^{(\text{br})}_1, \dots, U^{(\text{br})}_{b-1} \}.
\end{align}
Thus we can rewrite $\bar U$ as a union
\begin{equation}
\bar U = \bar U^{(\text{br})} \cup U^\prime. \label{eq:basis-choice-hamiltonian}
\end{equation}
Note that Eq.~(\ref{eq:basis-choice-hamiltonian}) corresponds to a specific basis choice. The basis choice consists in choosing in which order the tensor products of the $b+1$ representations ($b$ insertions of the $U$-spin breaking spurion and a $U$-spin limit Hamiltonian) are performed. In Eq.~(\ref{eq:basis-choice-hamiltonian}) the set $\bar U^{(\text{br})}$ is a set of $U$-type QNs for the tensor product of $b$ spurions and $U^\prime$ is the total $U$-spin of a term in the $b$th order Hamiltonian.
Using Eqs.~\eqref{eq:Aj=AmImF}-\eqref{eq:def-alpha}, we write the amplitudes of System I as
\begin{equation}
A_j^{(\text{I})} \equiv A_{(\bar{m}^{(1)}, \bar{m}^{(2)})},
\end{equation}
the RMEs as
\begin{equation}
X_\alpha^{(\text{I})} \equiv \mel{\bar{U}^{(2)}}{H(\bar{U}^{(3)},b)}{\bar{U}^{(1)}},
\end{equation}
and the multi-index $\alpha$ as
\begin{equation}
\alpha \equiv \{\bar{U}^{(1)}, \bar{U}^{(2)}, \bar{U}^{(3)}, b\},
\end{equation}
where $\bar{U}^{(3)} \equiv \bar{U} = \bar{U}^{(\text{br})} \cup U^\prime$ is the set of $U$-type QNs of the Hamiltonian given in Eq.~\eqref{eq:basis-choice-hamiltonian}.
The group-theoretical decomposition of an amplitude described by the $m$-QNs in Eq.~\eqref{eq:SystemI-QNs} is given by
\begin{equation}
A_j^{\text{(I)}} = \sum_\alpha C^{(\text{I})}_{j \alpha} X_\alpha^{\text{(I)}},
\end{equation}
where we have introduced
\begin{equation}
C^{(\text{I})}_{j \alpha} \equiv C(\bar{m}^{(1)}, \bar{m}^{(2)}, \alpha).
\end{equation}
Using the definition for the coefficients $C^*$ given in Eq.~\eqref{eq:basis_rot_def}, we can write the following expression for the coefficients $C^{(\text{I})}_{j \alpha}$:
\begin{align}\label{eq:SystemI_C}
C^{(\text{I})}_{j \alpha} = C^*(\bar{u}^{(1)};\bar{m}^{(1)}, \bar{U}^{(1)})\times C^*(\bar{u}^{(2)};\bar{m}^{(2)}, \bar{U}^{(2)}) \times C^*(\bar{u}^{(\text{br})}; \bar m^{(\text{br})}, \bar{U}^{(\text{br})}) \nonumber \\
\times \hspace{0pt}\mathop{C_{u^{(3)}, m^{(3)}}}_{\hspace{-4pt} U^{(\text{br})}_{b-1}, 0}^{\hspace{2pt} U^\prime, m^{(3)}} \times \hspace{-2pt}\mathop{C_{U^{(1)}_{k-1}, M^{(1)}}}_{\hspace{-4pt} U^\prime, m^{(3)}}^{\hspace{4pt} U^{(2)}_{l-1}, M^{(2)}}.
\end{align}
Recall that the coefficients $C^{(\text{I})}_{j \alpha}$ and the RMEs $X_\alpha^{(\text{I})}$ are basis dependent, that is, they depend on the specific choice of the order of the tensor product. In Eq.~\eqref{eq:SystemI_C}, for the tensor product of the operators in the Hamiltonian we made a choice to first take a tensor product of all the $U$-spin breaking spurions and only then to multiply the result with the representation $u^{(3)}$. Nevertheless, the sum rules are the same for any choice of basis.
\subsection{System II}
Next, we focus on System~II. We consider an amplitude $A^{(\text{II})}_j$ that is described by the same $n$-tuple as the amplitude $A^{(\text{I})}_{j}$ that we consider above. The amplitude of interest is then described by the following sets of $m$-QNs:
\begin{equation}\label{eq:SystemII-QNs}
\bar{m}^I = \emptyset, \qquad \bar{m}^F = \left(-\bar{m}^{(1)}\right)\cup \, \bar{m}^{(2)}\cup\, \left(-m^{(3)}\right),
\end{equation}
where we use $-\bar m^{(1)}$ to denote a set made of all the elements of $\bar m^{(1)}$ multiplied by $(-1)$. The change of sign for the set $\bar m^{(1)}$ and $m^{(3)}$ is due to our convention for constructing generalized $n$-tuples as described in Section~\ref{sec:gen-gen}. In the convention that we use, the $m$-QNs of the representations in the initial state and the Hamiltonian are inverted in the $n$-tuple.
Note that the amplitudes $A_j^{(\text{II})}$ of System~II are described by the same number of independent $m$-QNs as the amplitudes $A_j^{(\text{I})}$ of System~I. This is the case since the total $m$-QN of the final state must be zero. That is, we have
\begin{equation}
-M^{(1)} + M^{(2)} - m^{(3)} = 0,
\end{equation}
which is the same as the last relation in Eq.~\eqref{eq:Mrelations}.
For the total $m$-QNs of the initial state, final state and the Hamiltonian of System~II we have
\begin{equation}
M^I = M^F = m = 0.
\end{equation}
The sets $\bar u^H$ and $\bar m^H$ are defined as in Eqs.~\eqref{eq:uH-mH-def}--\eqref{eq:uH-mH-def-v2}. The only difference is that for System~II we have $u = m = 0$.
In order to define the $U$-type QNs for System~II we choose to perform the tensor product in the final state of the system in the following order:
\begin{equation}
\left(\left(u_{1}^{(1)} \otimes ... \otimes u_{k}^{(1)}\right) \otimes \left(u_{1}^{(2)} \otimes ... \otimes u_{l}^{(2)}\right)\right)\otimes u^{(3)}.
\end{equation}
As a result we can introduce the following notation for the $U$-type QNs
\begin{align}
\bar{U}^{I} = \emptyset, \qquad \bar{U}^{F} = \bar{U}^{(1)}\cup\bar{U}^{(2)}\cup \{U^{\prime \prime}, U^F_{k+l}\}, \qquad
\bar U = \bar{U}^{(\text{br})}\cup U_{b},
\end{align}
where $U^F_{k+l}$ and $U_b$ are the last elements of the sets $\bar U^F$ and $\bar U$, respectively, and $U^{\prime \prime}$ denotes the total $U$-spin resulting from the tensor product of the irreps that correspond to the last elements of the sets $\bar U^{(1)}$ and $\bar U^{(2)}$. $U_b$ results from the multiplication of the last element of $\bar{U}^{(\text{br})}$, \emph{i.e.} $U^{(\text{br})}_{b-1}$, with the $U$-spin limit Hamiltonian. Since the $U$-spin limit Hamiltonian of System~II is a singlet,
\begin{equation}
U^{F}_{k+l} = U_b = U^{(\text{br})}_{b-1}.
\end{equation}
Using Eqs.~\eqref{eq:Aj=AmImF}-\eqref{eq:def-alpha}, we write for the amplitudes of System II
\begin{equation}
A_j^{(\text{II})} \equiv A_{(0, (-\bar{m}^{(1)})\cup\bar m^{(2)}\cup(-m^{(3)}))},
\end{equation}
for the RMEs we write
\begin{equation}
X_\beta^{(\text{II})} \equiv \mel{\bar{U}^{(1)}\cup\bar{U}^{(2)}\cup U^{\prime\prime}\cup U_{k+l}^{F}}{H(\bar{U}^{(\text{br})}\cup U_b,b)}{0},
\end{equation}
where the multi-index $\beta$ is given as
\begin{equation}
\beta \equiv \{\bar{U}^{(1)}, \bar{U}^{(2)}, \bar{U}^{(\text{br})}, U^{\prime\prime}, b\}.
\end{equation}
The group-theoretical decomposition of an amplitude described by the $m$-QNs in Eq.~\eqref{eq:SystemII-QNs} is given by
\begin{equation}\label{eq:SystemII-decomp}
A_j^{\text{(II)}} = \sum_{\beta} C^{(\text{II})}_{j\beta} X_\beta^{(\text{II})},
\end{equation}
where we have introduced
\begin{equation}
C^{(\text{II})}_{j\beta} \equiv C(0,(-\bar{m}^{(1)})\cup \bar{m}^{(2)} \cup (-m^{(3)}), \beta).
\end{equation}
Using the definition for the coefficients $C^*$ in Eq.~\eqref{eq:basis_rot_def} we can write the following expression for the coefficients in Eq.~\eqref{eq:SystemII-decomp}
\begin{align}\label{eq:SystemII-C}
C^{(\text{II})}_{j\beta} = C^*(\bar{u}^{(1)};-\bar{m}^{(1)}, \bar{U}^{(1)})\times C^*(\bar{u}^{(2)};\bar{m}^{(2)}, \bar{U}^{(2)}) \times C^*(\bar{u}^{(\text{br})}; \bar{m}^{(\text{br})}, \bar{U}^{(\text{br})}) \nonumber \\
\times \hspace{0pt}
\mathop{C_{U^{(2)}_{l-1}, M^{(2)}}}_{\hspace{16pt} U^{(1)}_{k-1}, -M^{(1)}}^{\hspace{0pt} U^{\prime \prime}, m^{(3)}}
\times \hspace{-10pt} \mathop{C_{U^{\prime\prime}, m^{(3)}}}_{\hspace{20pt} u^{(3)}, -m^{(3)}}^{\hspace{2pt} U^{(\text{br})}_{b-1}, 0} \times\hspace{-14pt} \mathop{C_{0, 0}}_{\hspace{22pt} U^{(\text{br})}_{b-1}, 0}^{\hspace{20pt} U^{(\text{br})}_{b-1}, 0},
\end{align}
where the last CG coefficient comes from the Wigner-Eckart theorem and is equal to one.
Comparing Eqs.~\eqref{eq:SystemI_C} and~\eqref{eq:SystemII-C} we see that for non-zero RMEs $U^\prime$ takes the same values as $U^{\prime \prime}$ and thus the two can be identified $U^\prime \equiv U^{\prime \prime}$. Most importantly, this implies that $\alpha \equiv \beta$ and thus we have shown that one can establish a one-to-one correspondence between the RMEs of the two systems $X^{(\text{I})}_\alpha$ and $X^{(\text{II})}_\alpha$.
Finally, Eq.~\eqref{eq:SystemII-C} can be rewritten as follows
\begin{align}\label{eq:SystemII-C-mod}
C^{(\text{II})}_{j\alpha} = C^*(\bar{u}^{(1)};-\bar{m}^{(1)}, \bar{U}^{(1)})\times C^*(\bar{u}^{(2)};\bar{m}^{(2)}, \bar{U}^{(2)}) \times C^*(\bar{u}^{(\text{br})}; \bar{m}^{(\text{br})}, \bar{U}^{(\text{br})}) \nonumber \\
\times \hspace{0pt}
\mathop{C_{U^{(2)}_{l-1}, M^{(2)}}}_{\hspace{16pt} U^{(1)}_{k-1}, -M^{(1)}}^{\hspace{0pt} U^{\prime}, m^{(3)}}
\times \hspace{-10pt} \mathop{C_{U^{\prime}, m^{(3)}}}_{\hspace{20pt} u^{(3)}, -m^{(3)}}^{\hspace{2pt} U^{(\text{br})}_{b-1}, 0}.
\end{align}
\subsection{Universality of sum rules}
As we show above, even though the two systems are indeed described by different RMEs, the matrix elements still carry the same indices and thus there is a one-to-one correspondence between the RMEs of the two systems.
Next, using the following symmetry properties of the CG coefficients
\begin{equation}
\mathop{C_{j_1, m_1}}_{\hspace{8pt} j_2, m_2}^{\hspace{8pt} j_3, m_3} = (-1)^{j_1 + j_2 - j_3} \mathop{C_{j_1, -m_1}}_{\hspace{8pt} j_2, -m_2}^{\hspace{8pt} j_3, -m_3},
\end{equation}
\begin{equation}
\mathop{C_{j_1, m_1}}_{\hspace{8pt} j_2, m_2}^{\hspace{8pt} j_3, m_3} = (-1)^{j_1-m_1} \sqrt{\frac{2j_3 + 1}{2j_2 + 1}} \mathop{C_{j_3, m_3}}_{\hspace{16pt} j_1, -m_1}^{\hspace{8pt} j_2, m_2},
\end{equation}
we find
\begin{align}
C^*(\bar{u}^{(1)}; \bar m^{(1)}, \bar U^{(1)}) = (-1)^{\left(\sum_{i=1}^k u^{(1)}_i\right) - U^{(1)}_{k-1}} C^*(\bar{u}^{(1)}; -\bar m^{(1)}, \bar U^{(1)} ), \label{eq:C*-CG-relations-1} \\
\mathop{C_{u^{(3)}, m^{(3)}}}_{\hspace{-4pt} U^{(\text{br})}_{b-1}, 0}^{\hspace{2pt} U^\prime, m^{(3)}} = (-1)^{u^{(3)} - m^{(3)}} \sqrt{\frac{2U^\prime+1}{2U^{(\text{br})}_{b-1} +1}} \mathop{C_{U^{\prime}, m^{(3)}}}_{\hspace{20pt} u^{(3)}, -m^{(3)}}^{\hspace{2pt} U^{(\text{br})}_{b-1}, 0}, \label{eq:C*-CG-relations-2}\\
\mathop{C_{U^{(1)}_{k-1}, M^{(1)}}}_{\hspace{-4pt} U^\prime, m^{(3)}}^{\hspace{4pt} U^{(2)}_{l-1}, M^{(2)}} = (-1)^{U^{(1)}_{k-1}-M^{(1)}} \sqrt{\frac{2U^{(2)}_{l-1}+1}{2U^\prime + 1}} \mathop{C_{U^{(2)}_{l-1}, M^{(2)}}}_{\hspace{16pt} U^{(1)}_{k-1}, -M^{(1)}}^{\hspace{0pt} U^{\prime}, m^{(3)}}.\label{eq:C*-CG-relations-3}
\end{align}
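These are standard symmetry relations of CG coefficients. As an optional numerical sanity check (not part of the derivation), both properties can be verified for sample spins with SymPy's \texttt{CG} class:

```python
from sympy import S, Rational, sqrt, simplify
from sympy.physics.quantum.cg import CG

def cg(j1, m1, j2, m2, j3, m3):
    # Clebsch-Gordan coefficient <j1 m1; j2 m2 | j3 m3>
    return CG(j1, m1, j2, m2, j3, m3).doit()

half = Rational(1, 2)
samples = [
    (half, half, half, -half, 1, 0),
    (1, 0, half, half, half, half),
    (1, 1, 1, -1, 1, 0),
]
for sample in samples:
    j1, m1, j2, m2, j3, m3 = [S(x) for x in sample]
    lhs = cg(j1, m1, j2, m2, j3, m3)
    # first relation: simultaneous sign flip of all magnetic QNs
    rhs1 = (-1) ** (j1 + j2 - j3) * cg(j1, -m1, j2, -m2, j3, -m3)
    # second relation: exchange of a coupled irrep with the resulting irrep
    rhs2 = ((-1) ** (j1 - m1) * sqrt((2 * j3 + 1) / (2 * j2 + 1))
            * cg(j3, m3, j1, -m1, j2, m2))
    assert simplify(lhs - rhs1) == 0
    assert simplify(lhs - rhs2) == 0
```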
Combining Eqs.~\eqref{eq:C*-CG-relations-1}-\eqref{eq:C*-CG-relations-3} we obtain the following relation between the coefficients in the group-theoretical decompositions for the two systems:
\begin{equation}\label{eq:CI-CII-relation}
C^{(\text{I})}_{j \alpha} = (-1)^{\tilde{q}_j} Q\times C^{(\text{II})}_{j \alpha},
\end{equation}
with
\begin{equation}
(-1)^{\tilde{q}_j} = (-1)^{\left(\sum_{i=1}^k u^{(1)}_i\right) - M^{(1)} + u^{(3)} - m^{(3)}}, \qquad Q = \sqrt{\frac{2U^{(2)}_{l-1}+1}{2U^{(\text{br})}_{b-1} +1}}.
\end{equation}
For a fixed $n$-tuple the factor $(-1)^{\tilde{q}_j}$ is the same for all RMEs.
The factor $Q$ does not depend on the $m$-QNs and thus for a fixed RME it is the same for all amplitudes. Note that $Q > 0$.
Let us assume that we have found a sum rule for System~II that holds up to a certain order of breaking. This means that there exists a linear combination of amplitudes such that the coefficients $C^{(\text{II})}_{j\alpha}$ in front of all RMEs cancel. The cancellation still holds for the coefficients $Q\times C^{(\text{II})}_{j\alpha}$ because this is just a multiplication with a global factor.
Thus knowing a sum rule for System~II one can write the corresponding sum rule for System~I by multiplying the amplitudes in the sum rule by factors $(-1)^{\tilde{q}_j}$. Equivalently, one can redefine the amplitudes of System~I by multiplying them by these factors. We choose the latter convention. In this case the amplitudes of System~I are redefined as follows
\begin{equation}\label{eq:AIj-redef-v1}
A^{(\text{I})}_j \rightarrow (-1)^{\tilde{q}_j} A^{(\text{I})}_j
\end{equation} and the sum rules for the two systems take exactly the same form. The factor $(-1)^{\tilde{q}_j}$ can be rewritten as follows
\begin{equation}\label{eq:q-factor-intro-1}
(-1)^{\tilde{q}_j}=(-1)^{\sum_{i=1}^k u^{(1)}_i - M^{(1)} + u^{(3)} - m^{(3)}} = (-1)^{u^{(3)}-m^{(3)}} \left(\prod_{i=1}^k (-1)^{u_i^{(1)}-m^{(1)}_i}\right),
\end{equation}
which in terms of $u$, $m$, $u^I_i$ and $m^I_i$ of System~I becomes
\begin{equation}\label{eq:q-factor-intro}
(-1)^{\tilde{q}_j}= (-1)^{u-m} \left(\prod_{i=1}^k (-1)^{u_i^I-m^{I}_i}\right).
\end{equation}
Note that even though $u$ and $\sum_i u^I_i$ are fixed for all the amplitudes of the system it is important to keep them to ensure that the powers in Eq.~\eqref{eq:q-factor-intro} are integer numbers.
The factors $u-m$ and $u^I_i - m^I_i$ in Eq.~\eqref{eq:q-factor-intro} give the numbers of plus signs in the $n$-tuple that correspond to the representation of the Hamiltonian and to the representation $u^I_i$ from the initial state, respectively, see the discussion at the end of Appendix~\ref{eq:deriving-the-symmetry-factor}. They count plus signs rather than minus signs because in our convention the $m$-QNs of the initial state and the Hamiltonian are inverted when we build the $n$-tuples. Thus, according to Eq.~\eqref{eq:q-factor-intro}, the factor $(-1)^{\tilde{q}_j}$ is given by the parity of the total number of plus signs in the $n$-tuple that correspond to the initial state and the Hamiltonian.
The parity of the number of the plus signs for the initial state and the Hamiltonian can also be found as the product of the parity of the number of all the pluses in the $n$-tuple and the parity of the number of pluses corresponding to the final state. That is
\begin{equation}\label{eq:q-factor-2}
(-1)^{\tilde{q}_j} = (-1)^{n/2} \prod_{i=1}^l (-1)^{u_i^F+m^{F}_i},
\end{equation}
where $n$ is the total number of elements in the $n$-tuple given by
\begin{equation}
n = 2\sum_{i=1}^{k} u^I_i + 2\sum_{i=1}^{l} u^F_i + 2u.
\end{equation}
Eq.~\eqref{eq:q-factor-2} can be further rewritten in terms of the parity of the number of the minus signs in the final state
\begin{equation}
(-1)^{\tilde{q}_j} = (-1)^{n/2 + 2\sum_{i=1}^{l} u_i^F} \prod_{i=1}^l (-1)^{u_i^F-m^{F}_i}.
\end{equation}
Note, however, that the factor $(-1)^{n/2 + 2\sum_{i=1}^{l} u_i^F}$ is the same for all the amplitudes. Thus we can introduce a complementary definition
\begin{equation}\label{eq:q-factor-def-final}
(-1)^{q_j} \equiv \prod_{i=1}^l (-1)^{u_i^F-m^{F}_i}\,,
\end{equation}
which is simply the parity of the number of minuses in the final state. We can use the $(-1)^{q_j}$ factors to introduce the following redefinition of amplitudes instead of Eq.~\eqref{eq:AIj-redef-v1}
\begin{equation}\label{eq:AIj-redef-v2}
A^{(\text{I})}_j \rightarrow (-1)^{q_j} A^{(\text{I})}_j.
\end{equation}
One can choose either of the two definitions $(-1)^{\tilde{q}_j}$ or $(-1)^{q_j}$. Even though they are not equivalent they preserve the relative signs between amplitudes and thus will result in exactly the same sum rules. Moreover, one can also use the parity of plus signs in the final state or parity of the minus signs in the initial state and the Hamiltonian. Any of these definitions will result in the same set of sum rules. For consistency, everywhere in this work we use the definitions in Eqs.~\eqref{eq:q-factor-def-final} and~\eqref{eq:AIj-redef-v2}. The resulting definitions of $a$- and $s$-type amplitudes are given in Eq.~\eqref{eq:as-comb-def-app}.
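The equivalence of the two sign conventions can be checked numerically. The following snippet (our own illustration, for a sample $U$-spin set with two initial-state doublets, two final-state doublets and a Hamiltonian triplet) verifies that $(-1)^{\tilde{q}_j}$ and $(-1)^{q_j}$ differ by the same overall sign for every amplitude, so they preserve the relative signs and yield identical sum rules:

```python
from fractions import Fraction
from itertools import product

half = Fraction(1, 2)

def parity(exponent):
    # the exponent must be an integer for the sign to be well defined
    assert exponent.denominator == 1
    return (-1) ** (exponent.numerator % 2)

# Sample U-spin set (our choice): two initial doublets, two final doublets,
# and a Hamiltonian triplet (u = 1).
u_I, u_F, u = [half, half], [half, half], Fraction(1)

ratios = set()
for m_I in product((half, -half), repeat=len(u_I)):
    for m_F in product((half, -half), repeat=len(u_F)):
        m = sum(m_F) - sum(m_I)
        if abs(m) > u:
            continue  # no such component of the Hamiltonian exists
        # (-1)^{q~_j}: parity of pluses in the initial state and Hamiltonian
        t_tilde = parity(u - m + sum(ui - mi for ui, mi in zip(u_I, m_I)))
        # (-1)^{q_j}: parity of minuses in the final state
        t = parity(sum(uf - mf for uf, mf in zip(u_F, m_F)))
        ratios.add(t_tilde * t)

# a single ratio for all amplitudes: the two definitions differ only by a
# global sign of the whole U-spin set
assert len(ratios) == 1
```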
Summing up, in this appendix we have shown that
\begin{enumerate}
\item there is a one-to-one correspondence between amplitudes and RMEs of System~I and System~II,
\item the coefficients of the group-theoretical decompositions for the two systems are related by Eq.~\eqref{eq:CI-CII-relation},
\item if one redefines the amplitudes of System~I by multiplying them by the factors $(-1)^{q_j}$ defined in Eq.~\eqref{eq:q-factor-def-final}, the sum rules for the two systems take exactly the same form.
\end{enumerate}
The last statement is what we call the ``universality of sum rules'', the fact that the sum rules for any two systems from the same universality class are the same.
\section{Counting the number of amplitude sum rules}\label{app:SR_counting_doublets}
In this appendix we present a formula for the number of sum rules for a given $U$-spin set described by $n$ doublets. As we argue in Section~\ref{sec:universality} and show explicitly in Appendix~\ref{app:signs} the counting is the same no matter if the doublets belong to the initial state, final state or the Hamiltonian. Without loss of generality we consider a $U$-spin set whose $U$-spin structure is described by $n$ doublets in the final state:
\begin{equation}\label{eq:system-0-to-nd}
0 \xrightarrow{u = 0} \left(\frac{1}{2}\right)^{\otimes n}.
\end{equation}
Using the notation of Appendix~\ref{app:RMEdecomposition} we describe the $U$-spin structure of the system via the sets $\bar u^{I,F}$ and $u$:
\begin{align}
\bar{u}^I &= \emptyset, \qquad u = 0,\\
u^F_j &= \frac{1}{2}, \qquad 1 \le j \le n.
\end{align}
The amplitudes of the system in the physical basis are described by the sets of $m$-QNs
\begin{equation}
\bar{m}^I = \emptyset, \qquad \bar{m}^F = \{m_1^F, m_2^F, \dots, m_n^F\}.
\end{equation}
The total $m$-QNs of the initial state, the final state, and the Hamiltonian are equal to zero
\begin{equation}
M^I = M^F = m = 0.
\end{equation}
Generally, to count the number of sum rules one needs to consider the decomposition of the CKM-free amplitudes
in terms of RMEs. This decomposition takes the form, see Eq.~\eqref{eq:CKMfree_decomposition},
\begin{equation}\label{eq:CKMfree_decomposition-copy}
A_j = \sum_\alpha C_{j \alpha} X_\alpha.
\end{equation}
Using Eqs.~\eqref{eq:Aj=AmImF}-\eqref{eq:def-alpha} we find that for the system under consideration the CKM-free amplitudes are
\begin{equation}
A_j \equiv A_{(\emptyset,\bar{m}^F)},
\end{equation}
the RMEs are
\begin{equation}
X_\alpha \equiv \mel{\bar{U}^F}{H(\bar{U}, b^\prime)}{0},
\end{equation}
and the multi-index $\alpha$ is given by
\begin{equation}
\alpha \equiv \{\bar{U}^I, \bar{U}^F,\bar{U},b^\prime\} \equiv \{ \bar{U}^F,\bar{U},b^\prime\},
\end{equation}
where we used $\bar{U}^I = \emptyset$. The order of breaking, $b^\prime$, takes values from $0$ to $b$, where $b$ is the chosen order of breaking that we consider. Note that in order to ensure non-zero RMEs, the last elements of the sets $\bar{U}^F$ and $\bar{U}$ must be the same, that is $U^F_{n-1} = U_{b^\prime -1}$. Also, note that everywhere in this appendix we use $b^\prime$ as a generic index and $b$ as the chosen order of breaking up to which we want to write the sum rules.
Once we have the decomposition in Eq.~\eqref{eq:CKMfree_decomposition-copy}, the number of sum rules that are valid up to the order $b$, $n_{SR}^{(b)}$, is given by
\begin{equation}\label{eq:nSR_gen}
n_{SR}^{(b)} = n_A - \text{rank} \left[C_{j \alpha}\right],
\end{equation}
where $n_A$ is the number of amplitudes in the $U$-spin set, and the multi-index $\alpha$ carries information about the order of breaking $b$.
In what follows we first discuss the details of the decomposition in Eq.~\eqref{eq:CKMfree_decomposition-copy} for the special case that we consider in this appendix, next we find $n_A$ and $\text{rank} \left[C_{j \alpha}\right]$. These allow us to find the number of sum rules in terms of $n$ and $b$.
\subsection{Decomposition in terms of RMEs}
In this subsection we discuss some properties of Eq.~\eqref{eq:CKMfree_decomposition-copy} that are specific for the processes with $U$-spin structure that is described by $n$ doublets in the final state. First, we note that applying the Wigner-Eckart theorem, Eq.~\eqref{eq:WE-gen}, to the matrix elements of the system under consideration we find
\begin{equation}\label{eq:X_doublets}
\mel{\bar{U}^F,0}{H\left(\bar{U},0,b^\prime\right)}{0,0} \overset{\text{WE}}{=} \mel{\bar{U}^F}{H(\bar{U}, b^\prime)}{0} \equiv X_{\alpha},
\end{equation}
where we used $u = m = 0$, $U^F_{n-1} = U_{b^\prime-1}$ and $M^I = M^F = 0$. Thus in order to find sum rules for a system of $n$ doublets in the final state one does not need to use the Wigner-Eckart theorem and invoke the concept of RMEs at all. This is because in this case states as well as operators at all orders of $U$-spin breaking have their total $m$-QN equal to zero. That is, in this case the application of the Wigner-Eckart theorem does not lead to additional complexity reduction.
Using Eq.~\eqref{eq:Cjalpha} we find
\begin{equation}\label{eq:Cjalpha-special-def}
C_{j\alpha} = C^*(\bar{u}^F; \bar{m}^F, \bar{U}^F)\times C^*(\bar{u}^H; \bar m^H, \bar{U}),
\end{equation}
where the sets $\bar u^H$ and $\bar m^H$ are such that
\begin{align}
u^H_1 = u = 0, \qquad u^H_j = 1, \qquad 2 \le j \le b^\prime+1, \nonumber \\
m^H_j = 0, \qquad 1 \le j \le b^\prime+1.
\end{align}
\subsection{The number of amplitudes in the $U$-spin set, $n_A$}
As we discuss in Section~\ref{sec:An-tuples-doublets}, the amplitudes $A_j$ of a $U$-spin set can be mapped one-to-one onto $n$-tuples made of ${n/2}$ pluses and ${n/2}$ minuses. The number of different $n$-tuples that satisfy this rule is given by the binomial coefficient
\begin{equation}\label{eq:nA}
n_A = \binom{n}{{n/2}} = \frac{n!}{(n/2)! (n/2)!}.
\end{equation}
This is the number of possible final states of the $U$-spin set in the physical basis.
Alternatively, one can perform the counting in the $U$-spin basis. The relation between states in the physical basis and in the $U$-spin basis for the system we consider is given by
\begin{equation}
\ket{\bar{u}^F; \bar{m}^F} = \sum_{\bar{U}^F} C^*(\bar{u}^F; \bar{m}^F, \bar{U}^F) \ket{\bar{U}^F, 0}. \label{eq:basis-rotation}
\end{equation}
Since all we do here is just a basis rotation, the number of states $\ket{\bar{u}^F; \bar{m}^F}$ in the physical basis is the same as the number of states $\ket{\bar{U}^F, 0}$ in the $U$-spin basis. The amplitudes are fully defined by the sets $\bar{m}^F$ and thus the number of amplitudes in the physical basis is the same as the number of amplitudes in the $U$-spin basis and is equal to $n_A$.
The multiplicity of irreps $U$ in the decomposition of the tensor product of $n$ doublets is given by \cite{Zachos:1992xp, Curtright:2016eni}
\begin{equation}\label{eq:Nnu}
N_{U}^{n} =
\frac{n! (2U + 1)}{\left({n/2} - U\right)! \left({n/2} + U + 1\right)!},
\end{equation}
where we assume that $U$ takes values that are consistent with $n$, that is, $(n + 2U)$ is even.
The number of different sets $\bar{U}^F$ such that $U^F_{n-1} = U$ is then equal to $N^n_{U}$. Note that $N_U^n$ can be written in terms of the entries $c(n,k)$ of Catalan's triangle~\cite{weissteinCatalansTriangle}
\begin{align}
c(n,k) \equiv \frac{(n+k)! (n-k+1)}{k! (n+1)!}
\end{align}
as
\begin{align}
N_U^n &= c\left(\frac{n}{2}+U,\frac{n}{2}-U\right)\,.
\end{align}
By summing $N_U^n$ over all possible $U$ we find the number of basis elements in the $U$-spin basis and thus the number of amplitudes in the $U$-spin set
\begin{equation}\label{eq:nA_basis}
n_A = \sum_{U=0}^{{n}/{2}} N^{n}_{U}.
\end{equation}
One can check that Eq.~\eqref{eq:nA_basis} is equal to Eq.~\eqref{eq:nA} for all $n$ as it should be.
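This check is easy to perform numerically. The snippet below (an illustration; the function names are ours) verifies that $N_U^n$ coincides with the corresponding Catalan-triangle entry and that summing the multiplicities reproduces the binomial coefficient $n_A$:

```python
from math import comb, factorial

def N_U(n, U):
    """Multiplicity of the irrep U in the tensor product of n doublets;
    valid when n + 2U is even, here n is even and U is an integer."""
    return factorial(n) * (2 * U + 1) // (
        factorial(n // 2 - U) * factorial(n // 2 + U + 1))

def catalan(n, k):
    """Entry c(n, k) of Catalan's triangle."""
    return factorial(n + k) * (n - k + 1) // (factorial(k) * factorial(n + 1))

for n in range(2, 21, 2):
    # N_U^n written via Catalan's triangle
    assert all(N_U(n, U) == catalan(n // 2 + U, n // 2 - U)
               for U in range(n // 2 + 1))
    # summing the multiplicities reproduces the number of amplitudes n_A
    assert sum(N_U(n, U) for U in range(n // 2 + 1)) == comb(n, n // 2)
```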
\subsection{Finding the rank of the matrix $\left[C_{j\alpha}\right]$}
The elements of the matrix $C_{j\alpha}$ are defined in Eq.~\eqref{eq:Cjalpha-special-def}. First, we note that the number of sets $\bar{U}^F$ present in the decomposition in Eq.~\eqref{eq:CKMfree_decomposition-copy} is determined by $b$, since $U^F_{n-1}$ must satisfy $U^F_{n-1} \le b$. Second, studying Eq.~\eqref{eq:Cjalpha-special-def}, we note that the factors $C^*(\bar{u}^H; \bar{m}^H,\bar{U})$ are independent of $\bar{m}^F$.
Note that $C^*(\bar{u}^H; \bar{m}^H,\bar{U})$ depends on $b^\prime$ and the specific set $\bar U$. Thus, for a fixed matrix element this factor can be absorbed by a redefinition of the matrix elements in the $U$-spin basis.
This is equivalent to multiplying columns of the corresponding Clebsch-Gordan coefficient matrix by arbitrary non-zero numbers, an operation that does not change the rank.
As a result, the rank of the matrix $\left[C_{j\alpha}\right]$ is fully determined by the rank of the matrix that we denote as $\left[C_{\bar{m}^F\bar{U}^F}\right]$, with elements given by the coefficients $C^*(\bar{u}^F;\bar{m}^F,\bar{U}^F)$.
The matrix $C^*(\bar{u}^F;\bar{m}^F,\bar{U}^F)$ is the rotation matrix between the physical and the $U$-spin basis of states, see Eq.~(\ref{eq:basis-rotation}). This means that the number of states on the left is equal to the number of states on the right. All states $\ket{\bar U^F,0}$ are linearly independent. Therefore, the rank of the matrix $\left[C_{\bar{m}^F\bar{U}^F}\right]$ is found by counting the number of different sets $\bar{U}^F$ allowed at the chosen order of breaking $b$.
Consequently, we can use Eq.~\eqref{eq:Nnu} to find
\begin{equation}\label{eq:rank}
\text{rank } \left[C_{j\alpha}\right] = \sum_{U=0}^{b} N_U^n\,.
\end{equation}
Consider, for example, $b = 0$. In this case the only non-vanishing matrix elements have $U^F_{n-1} = 0$ and there are $N^n_0$ of them. For $b = 1$ the non-vanishing matrix elements can have $U^F_{n-1} = 0$ and $U^F_{n-1}=1$. Thus, in this case there are $N^n_0 + N^n_1$ matrix elements.
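The rank formula can be illustrated numerically for $n = 4$. The sketch below is our own construction: it assumes the sequential coupling order $((u_1\otimes u_2)\otimes u_3)\otimes u_4$ for the final-state doublets, builds the matrix $[C_{\bar m^F \bar U^F}]$ from products of SymPy CG coefficients with columns restricted to $U^F_{n-1}\le b$, and checks its rank against $\sum_{U\le b} N^n_U$:

```python
from itertools import product
from sympy import Matrix, Rational
from sympy.physics.quantum.cg import CG

half = Rational(1, 2)

def U_chains(n):
    """All sets (U^F_1, ..., U^F_{n-1}) for n sequentially coupled doublets."""
    chains = [(Rational(0),), (Rational(1),)]
    for _ in range(n - 2):
        chains = [ch + (U,) for ch in chains
                  for U in (ch[-1] - half, ch[-1] + half) if U >= 0]
    return chains

def coeff(ms, Us):
    """C*(u^F; m^F, U^F): product of CG coefficients for sequential coupling."""
    c = CG(half, ms[0], half, ms[1], Us[0], ms[0] + ms[1]).doit()
    M = ms[0] + ms[1]
    for k in range(2, len(ms)):
        c *= CG(Us[k - 2], M, half, ms[k], Us[k - 1], M + ms[k]).doit()
        M += ms[k]
    return c

def rank_up_to_b(n, b):
    # rows: physical-basis states with M^F = 0; columns: chains with U^F_{n-1} <= b
    rows = [ms for ms in product((half, -half), repeat=n) if sum(ms) == 0]
    cols = [ch for ch in U_chains(n) if ch[-1] <= b]
    return Matrix([[coeff(ms, ch) for ch in cols] for ms in rows]).rank(simplify=True)

# n = 4 doublets: N^4_0 = 2, N^4_1 = 3, N^4_2 = 1
assert [rank_up_to_b(4, b) for b in (0, 1, 2)] == [2, 5, 6]
```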
The fact that the rank of $\left[C_{j\alpha}\right]$ is determined by the rank of $\left[C_{\bar{m}^F\bar{U}^F}\right]$ can easily be understood. Let us consider two non-zero matrix elements $X_{\alpha_1}$ and $X_{\alpha_2}$ with a fixed set $\bar U^F$, such that
\begin{equation}
\alpha_1 = \{\bar{U}^F, \bar{U}_1, b_1^\prime\}, \qquad \alpha_2 = \{\bar{U}^F, \bar{U}_2, b_2^\prime\},
\end{equation}
where $b_1^\prime < b_2^\prime$. Note that in order for both matrix elements to exist, $b_1^\prime$ and $b_2^\prime$ must have the same parity because of Eq.~\eqref{eq:b-parity}. Now, according to Eq.~\eqref{eq:Cjalpha-special-def}, the two matrix elements always enter the decompositions of amplitudes in the same linear combinations (since $C^*(\bar{u}^H;\bar{m}^H,\bar{U})$ is independent of $\bar{m}^F$) and thus the addition of the matrix element $X_{\alpha_2}$ at the higher order $b_2^\prime$ does not change the rank of $\left[C_{j\alpha}\right]$.
The matrix elements that change the rank come from the highest irrep in the tensor product $H_\varepsilon^{\otimes b_2^\prime}$, that is the terms for which $U_{b_2^\prime-1} = b_2^\prime$. Such matrix elements will be also described by new sets $\bar{U}^F$ with $U^F_{b_2^\prime-1} = b_2^\prime$ that were not generated at lower orders in breaking.
Equivalently, one can say that the terms for which $U_{b^\prime-1} = U^F_{n-1} < b^\prime$ can be always absorbed into the terms at lower orders of breaking and when performing the counting only the terms with $U_{b^\prime-1} = U^F_{n-1} = b^\prime$ can provide new $U$-spin structure.
\subsection{The number of sum rules}
Combining the results Eqs.~\eqref{eq:nSR_gen},~\eqref{eq:nA_basis} and~\eqref{eq:rank} we arrive at the following expression for the number of sum rules up to the considered order $b$ for a system of $n$ doublets
\begin{equation}\label{eq:nSR_doublets}
n_{SR}^{(b)} = \sum_{U = b + 1}^{{n}/{2}} N^n_U = \frac{n!}{\left({n/2} + b + 1\right)!\left({n/2} - b - 1\right)!},
\end{equation}
where $N^n_U$ is defined in Eq.~\eqref{eq:Nnu}. This expression gives the number of sum rules for any system of $n$ doublets, no matter if they are in the initial state, final state or represent operators. From Eq.~\eqref{eq:nSR_doublets}, the maximum order of breaking when there are still sum rules between amplitudes of a $U$-spin system is $b_{\rm max} = {n}/{2} - 1$. There is exactly one sum rule at this order.
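The closed form above can be cross-checked against the direct sum of multiplicities with a short script (an illustration; the function names are ours):

```python
from math import comb, factorial

def N_U(n, U):
    """Multiplicity of the irrep U in the tensor product of n doublets
    (n even, U integer)."""
    return factorial(n) * (2 * U + 1) // (
        factorial(n // 2 - U) * factorial(n // 2 + U + 1))

def n_SR(n, b):
    """Closed-form number of sum rules up to order b for n doublets."""
    return comb(n, n // 2 - b - 1)

for n in range(2, 17, 2):
    for b in range(n // 2):
        # closed form agrees with the sum over multiplicities
        assert n_SR(n, b) == sum(N_U(n, U) for U in range(b + 1, n // 2 + 1))
    # exactly one sum rule survives at b_max = n/2 - 1
    assert n_SR(n, n // 2 - 1) == 1
```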
\section{Proof of the sum rule theorem for a system of $n$ doublets}\label{app:ThII_doublets}
In what follows we prove a Theorem and two Corollaries that together establish the algorithm of finding the sum rules for systems of $n$ doublets without writing explicitly the group-theoretical decomposition of amplitudes.
In this Appendix we consider a $U$-spin set of processes that can be described by $n$ doublets. We assume that all doublets belong to the final state and only consider CKM-free amplitudes.
\subsection{Notations and definitions}
We start by listing definitions and notations that are used in the formulation and the proof of the theorem.
\begin{itemize}
\item $b$ is the order of breaking. The highest order at which there are still sum rules in the system is $b_{\text{max}} = {n/2} - 1$, see Eq.~\eqref{eq:nSR_doublets}. As in Appendix~\ref{app:SR_counting_doublets} we use $b^\prime$ as a generic index that may take values between $0$ and $b$.
\item
$a_i$ and $s_i$ are $a$-type and $s$-type amplitudes respectively.
\item
We recall the definition of a set of subsets of $U$-spin pairs of the system $S^{(k)} = \{S_j^{(k)}\}$. Each $U$-spin pair is defined by the corresponding $n$-tuple, which in our convention starts with a minus sign. A subset $S_j^{(k)}$ contains all the $U$-spin pairs whose $n$-tuples share $k$ minuses at the same positions, including the first minus. The index $j$ is used to enumerate all such subsets. The number of subsets $S_j^{(k)}$ is denoted by $n_S^{(k)}$ and the number of amplitude pairs in each subset by $n_A^{(k)}$. In Section~\ref{sec:n_doublet_system} we show that
\begin{equation}\label{eq:nSk-nAk-def}
n_S^{(k)} = \binom{n-1}{k-1}, \hspace{25pt} n_A^{(k)} = \binom{n-k}{{n/2}-k}.
\end{equation}
\item
We use the notation $a_i \in S^{(k)}_j$ and $s_i \in S^{(k)}_j$ to denote the $a$- and $s$-type amplitudes corresponding to $U$-spin pairs that are in $S^{(k)}_j$. For shortness, instead of saying ``the amplitudes that correspond to $U$-spin pairs from the subset $S_j^{(k)}$'' we simply say ``amplitudes from $S_j^{(k)}$''.
\item
We define $S_1^{(k)}$ to be a specific subset of the amplitude pairs which can be schematically written as follows \begin{equation}\label{eq:amp}
S_1^{(k)} = (\underbrace{-, -, ..., -}_{k}; \{\underbrace{-, ..., -}_{{n/2}-k}\underbrace{+, ..., +}_{{n/2}}\}).
\end{equation}
That is, the first $k$ signs before the semicolon are fixed to be minuses and the curly brackets indicate all possible orderings of minuses and pluses after the semicolon.
For example, with $n = 6$ and $k = 2$, the subset of interest is $S_1^{(2)}$ and is given by
\begin{equation} \label{eq:S1-examp}
S_1^{(2)} = \left\{\left(-, -; -, +, +, + \right), \left(-, -; +, -, +, + \right),
\left(-, -; +, +, -, + \right), \left(-, -; +, +, +, - \right)\right\}.
\end{equation}
\item
$N_{U}^{n}$ is the number of different irreps $U$ in the decomposition of the tensor product of $n$ doublets which is given in Eq.~\eqref{eq:Nnu}.
\item $n_{X-a}^{(b)}$ and $n_{X-s}^{(b)}$ are the numbers of linearly independent combinations of matrix elements that enter the decompositions of the $a$- and $s$-type amplitudes of subset $S_1^{(k)}$, respectively, up to an order of breaking $b$. As we show in step 4 of the proof below, these numbers are the same for all subsets from the set $S^{(k)}$.
\item The sets $\bar u^{I, F, H}$, $\bar U^{I, F}$, $\bar U$, $\bar{m}^{I,F,H}$ and $u$, $m$, $M^{I, F}$ are defined in Appendix~\ref{app:RMEdecomposition}. For a system described by $n$ doublets in the final state we have \begin{align}
\bar{u}^I &= \emptyset, \qquad u = m = 0, \nonumber \\
\bar{u}^F &= \{u^F_1, \dots, u^F_n\}, \qquad u^F_j = \frac{1}{2}, \qquad 1 \le j\le n,\nonumber \\
\bar{u}^H &= \{u^H_1, u^H_2, \dots, u^H_{b+1}\}, \qquad u^H_1 = 0,\qquad u^H_j = 1,\qquad 2 \le j \le b+1, \nonumber\\
\bar m^H &= \{m^H_1, m^H_2, \dots, m^H_{b+1}\}, \qquad m^H_j = 0, \qquad 1 \le j \le b+1.
\end{align}
\item
When performing the tensor product of the representations $\bar u^F$ in the final state we choose to divide the set into two subsets
\begin{equation}
\bar u^F = \bar u^{(1)} \cup \bar u^{(2)},
\end{equation}
such that set $\bar u^{(1)}$ contains $k$ irreps and set $\bar u^{(2)}$ contains the remaining $n-k$ irreps. Since for the system under consideration all irreps are doublets, we have
\begin{equation}
u^{(1)}_j = \frac{1}{2} \qquad \text{for } 1\le j \le k, \qquad u^{(2)}_j = \frac{1}{2}\qquad \text{for } 1\le j \le n-k.
\end{equation}
The corresponding sets of $m$-QNs are denoted as $\bar m^{(1)}$ and $\bar m^{(2)}$ and we have
\begin{equation}
\bar m^F = \bar m^{(1)} \cup \bar m^{(2)}.
\end{equation}
For the subset of $n$-tuples $S_1^{(k)}$ defined in Eq.~\eqref{eq:amp} we can choose the set $\bar u^{(1)}$ such that
\begin{equation}
m^{(1)}_j = -\frac{1}{2}\qquad \text{for } 1 \le j \le k.
\end{equation}
In this case we have
\begin{equation}
M^{(1)} = \sum_{j=1}^k m^{(1)}_j = -\frac{k}{2}, \qquad M^{(2)} = \sum_{j=1}^{n-k} m^{(2)}_j = \frac{k}{2}.
\end{equation}
We use $\bar U^{(1)}$ and $\bar{U}^{(2)}$ to denote the sets of $U$-type QNs. For the final state we choose to perform the tensor product in the following order:
\begin{equation}\label{eq:tensor_prod}
\mathop{\otimes}_i^n d_i = \underbrace{\left( d_1 \otimes d_2 \otimes ... d_{k} \right)}_{k} \otimes \underbrace{\left( d_{k+1} \otimes d_{k + 2} \otimes ... d_{n} \right)}_{n - k},
\end{equation}
where $d_i$ are doublets. We use $\bar{U}^{(1)}$ to denote the $U$-type QNs for the tensor product of the first $k$ doublets, and $\bar{U}^{(2)}$ to denote the tensor product of the last $n-k$ doublets. Thus we write the set of $U$-type QNs in the final state as follows:
\begin{equation}
\bar{U}^F = \{U^{(1)}_1, U^{(1)}_2, \dots, U^{(1)}_{k-1}, U^{(2)}_1, U^{(2)}_2, \dots, U^{(2)}_{n-k-1}, U^F_{n-1}\} = \bar{U}^{(1)} \cup \bar{U}^{(2)} \cup U^F_{n-1}.
\end{equation}
\end{itemize}
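As a cross-check of Eq.~\eqref{eq:nSk-nAk-def}, the subsets $S_j^{(k)}$ can be enumerated directly. The following Python sketch is illustrative only and not part of the derivation; the function name \texttt{subsets\_S} is ours. It builds all subsets for $n=6$ and reproduces the example $S_1^{(2)}$ of Eq.~\eqref{eq:S1-examp}.

```python
from itertools import combinations
from math import comb

def subsets_S(n, k):
    """Build every subset S_j^(k): fix k minus positions (always
    including position 0, the first minus of the n-tuple), then
    distribute the remaining n/2 - k minuses over the free positions."""
    out = []
    for extra in combinations(range(1, n), k - 1):
        fixed = (0,) + extra
        free = [p for p in range(n) if p not in fixed]
        members = []
        for rest in combinations(free, n // 2 - k):
            minuses = set(fixed) | set(rest)
            members.append(tuple('-' if p in minuses else '+'
                                 for p in range(n)))
        out.append(members)
    return out

n = 6
for k in range(1, n // 2 + 1):
    S = subsets_S(n, k)
    assert len(S) == comb(n - 1, k - 1)                       # n_S^(k)
    assert all(len(m) == comb(n - k, n // 2 - k) for m in S)  # n_A^(k)

# S_1^(2): the first two signs fixed to minus, cf. Eq. (S1-examp)
print(subsets_S(6, 2)[0])
```

For $k=2$ the first subset returned consists of exactly the four $n$-tuples listed in Eq.~\eqref{eq:S1-examp}.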
\subsection{Formulation of the sum rule theorem}
Consider a $U$-spin set of $n$ doublets in the final state, where $n$ is even.
\textbf{Theorem:}
\begin{enumerate}
\item
For any even $n$, an even (odd) order of breaking $b \le {n}/{2} - 1$, and for any $k = {n}/{2}-b$, there exists exactly one $a$($s$)-type sum rule among the amplitudes from each $S_j^{(k)}$. As a result, there are $n_S^{(k)}$ $a$($s$)-type sum rules in total, one for each subset $S_j^{(k)}$, each relating the $n_A^{(k)}$ amplitudes of that subset.
\item These sum rules are valid up to order of breaking $b$ and are broken by corrections of order $b+1$.
\item The sum rules are given by
\begin{equation}\label{eq:SR1}
\sum_{a_i \in S^{(k)}_j} a_i = 0 \hspace{25pt} \text{and} \hspace{25pt} \sum_{s_i \in S^{(k)}_j} s_i = 0,
\end{equation}
where for even (odd) $b$ the sums are taken over all $a$($s$)-type amplitudes from subsets $S^{(k)}_j$. Note that for even $b$ there are $a$-type sum rules but no $s$-type sum rules. For odd $b$ there are $s$-type sum rules but no $a$-type sum rules.
\end{enumerate}
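As a quick illustration of the theorem, consider the smallest nontrivial case $n=4$. The helper below (a hypothetical name, for illustration only) evaluates the predicted pattern of sum rules from Eq.~\eqref{eq:nSk-nAk-def}: at $b=0$ each of the three $a$-type amplitudes vanishes by itself, while at $b=1$ there is a single $s$-type sum rule relating three amplitudes.

```python
from math import comb

def theorem_counts(n, b):
    """Sum-rule pattern predicted by the theorem at order b
    (doublets-only system, n even)."""
    k = n // 2 - b
    n_S = comb(n - 1, k - 1)            # number of sum rules
    n_A = comb(n - k, n // 2 - k)       # amplitudes per sum rule
    kind = 'a' if b % 2 == 0 else 's'
    return kind, n_S, n_A

assert theorem_counts(4, 0) == ('a', 3, 1)
assert theorem_counts(4, 1) == ('s', 1, 3)
```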
\textbf{Corollary I.} The sum rules described by the Theorem provide the full set of sum rules for the system. For a system described by $n$ doublets the number of sum rules at order of breaking $b \le {n/2} - 1$ is
\begin{equation}
n_{SR}^{(b)} = n_S^{({n}/{2}-b)} + n_S^{({n}/{2}-b-1)},
\end{equation}
where $n_{SR}^{(b)}$ is given in Eq.~\eqref{eq:nSR_doublets}.
\textbf{Corollary II.} All the sum rules for a system of $n$ doublets at any order $b \le {n}/{2}-1$ can be found using Table~\ref{tab:sum-rules-rule}.
\begin{table}[t]
\centering
\begin{tabular}{|c|c|}
\hline
$a$-type & $s$-type \\
\hline
$n_{SR-a}^{(b)} = n_S^{(k)}$ & $n_{SR-s}^{(b)} = n_S^{(k)}$\\
$j$th sum rule: \(\displaystyle \sum_{a_i \in S^{(k)}_j} a_i = 0\), &
$j$th sum rule: \(\displaystyle \sum_{s_i \in S^{(k)}_j} s_i = 0\),
\\ ~~with $i = 1, 2, ..., n_A^{(k)}$, $j = 1, 2, ..., n_S^{(k)}$~~ & ~~ with $i = 1, 2, ..., n_A^{(k)}$, $j = 1, 2, ..., n_S^{(k)}$~~\\
\hline
\multicolumn{2}{|c|}{even $b$}\\
\hline
$k = \frac{n}{2}-b$ & $k = \frac{n}{2} - b - 1$\\
\hline
\multicolumn{2}{|c|}{odd $b$}\\
\hline
$k = \frac{n}{2}-b-1$ & $k = \frac{n}{2} - b$\\
\hline
\end{tabular}
\caption{Summary of the $a$- and $s$-type sum rules at even and odd orders of $U$-spin breaking~$b$. Note that $k$ takes different values for $a$- and $s$-type sum rules; specifically, the respective sums of the $a_i$ and $s_i$ go over different sets $S_j^{(k)}$. \label{tab:sum-rules-rule}}
\end{table}
\textbf{Outline of the proof:}
\begin{itemize}[leftmargin=2.2cm]
\item[\textbf{Step 1:}] For an arbitrary $1 \le k \le {n/2}$, we consider one specific subset of amplitude pairs $S^{(k)}_1 \in S^{(k)}$ that we define below. For the subset $S_1^{(k)}$, we count the numbers of linearly independent combinations of matrix elements that enter the group-theoretical decompositions of the $a$- and $s$-type amplitudes up to order of breaking $b$. We denote these numbers as $n_{X-a}^{(b)}$ and $n_{X-s}^{(b)}$ respectively.
\item[\textbf{Step 2:}] We consider $S_1^{(k)}$ for $k = {n/2}-b$ and show that in this case the following relations hold for even $b$
\begin{equation}
n_A^{(k)} - n_{X-a}^{(b)} = 1, \hspace{25pt} n_A^{(k)} - n_{X-s}^{(b)} = 0, \hspace{25pt} \forall n, b,
\end{equation}
and for odd $b$
\begin{equation}
n_A^{(k)} - n_{X-a}^{(b)} = 0, \hspace{25pt} n_A^{(k)} - n_{X-s}^{(b)} = 1, \hspace{25pt} \forall n, b.
\end{equation}
This means that at $b$ even (odd) there is exactly one $a$($s$)-type sum rule and no $s$($a$)-type sum rules among the amplitudes from the subset $S_1^{(k)}$, where $k = {n/2}-b$.
\item[\textbf{Step 3:}] We show that the $a$($s$)-type sum rule among the amplitudes from $S_1^{(k)}$, with $k={n/2}-b$, has the symmetric form given in Eq. \eqref{eq:SR1}.
\item[\textbf{Step 4:}] We show that the results of steps 2 and 3 hold for all subsets from the set $S^{(k)}$. This proves the statement of the theorem.
\end{itemize}
\subsection{Step 1}
Step 1 follows very closely the counting performed in Appendix~\ref{app:SR_counting_doublets} with minor differences due to the specific basis choice and the fact that we focus on the $S_1^{(k)}$ subset.
\subsubsection{Linearly independent combinations of matrix elements}
We consider the subset $S_1^{(k)}$ defined in Eq.~\eqref{eq:amp}. This subset contains $n_A^{(k)}$ amplitude pairs. Our aim is to perform the counting of $n_{X-a}^{(b)}$ and $n_{X-s}^{(b)}$. That is, the numbers of linearly independent combinations of matrix elements that enter the decompositions of the $a$- and $s$-type amplitudes from the subset under consideration at order $b$.
As we already mentioned, without loss of generality we choose the initial state and the Hamiltonian to be singlets. All the matrix elements of interest, for both $a$- and $s$-type amplitudes, have the form
\begin{equation}\label{eq:me_gen}
\mel{\bar U^F}{H(\bar{U},b^\prime)}{0}, \qquad \text{where} \qquad 0 \le U_{b^\prime} \le b^\prime, \qquad 0 \le b^\prime \le b.
\end{equation}
In order for a matrix element to be non-vanishing the relation $U^F_{n-1} = U_{b^\prime}$ must be satisfied. Matrix elements with even $b^\prime$ contribute only to $s$-type amplitudes and matrix elements with odd $b'$ contribute only to $a$-type amplitudes, see the discussion of the decoupling of $a$- and $s$-type sum rules in Section~\ref{sec:U-spin-amp-pairs}.
We consider an $a$($s$)-type amplitude from the subset $S_1^{(k)}$ defined in Eq.~\eqref{eq:amp}. This is equivalent to considering a certain ordering of signs in the $n$-tuple. Since the subset is such that all $n$-tuples have their first $k$ signs fixed, to define an amplitude from the subset it is enough to indicate the ordering of the remaining $n-k$ signs. This ordering is given by the set $\bar m^{(2)}$. Below we write an expression for the coefficients $C_{j \alpha}$ that enter the decompositions of the amplitudes from the subset $S^{(k)}_1$:
\begin{equation}\label{eq:me_coeff}
C_{j \alpha} = C^*(\bar u^{(1)}; \bar{m}^{(1)}, \bar U^{(1)}) \times C^*(\bar u^{(2)}; \bar{m}^{(2)}, \bar U^{(2)}) \times \mathop{C_{U^{(1)}_{k-1}, -U^{(1)}_{k-1}}}_{\hspace{12pt} U^{(2)}_{n-k-1}, U^{(1)}_{k-1}}^{\hspace{-10pt} U^F_{n-1}, 0} \times C^*(\bar{u}^H; \bar{m}^H, \bar U).
\end{equation}
Similarly to the case considered in Appendix~\ref{app:SR_counting_doublets} we conclude that the rank of the matrix that connects the amplitudes from the subset $S_1^{(k)}$ in the physical basis with RMEs is determined by $C^*(\bar u^{(2)}; \bar{m}^{(2)}, \bar{U}^{(2)})$. This is the case since $C^*(\bar u^{(2)}; \bar{m}^{(2)}, \bar{U}^{(2)})$ is the only part of $C_{j \alpha}$ that depends on $\bar m^{(2)}$ and thus cannot be absorbed via re-definitions of RMEs. Note that, by construction, $\bar{m}^{(1)}$ is the same for all RMEs.
The rank of $C_{j \alpha}$ is equal to the number of linearly independent combinations of RMEs entering the group-theoretical decomposition of the $a$($s$)-type amplitudes. Thus to find $n_{X-a}^{(b)}$($n_{X-s}^{(b)}$) all we need to do is to count the number of different sets $\bar{U}^{(2)}$ that result in a non-zero value of Eq.~\eqref{eq:me_coeff}.
Note that due to the decoupling of the $a$-type and $s$-type amplitudes, the counting needs to be done separately in order to find both $n_{X-a}^{(b)}$ and $n_{X-s}^{(b)}$.
\subsubsection{Counting the number of sets $\bar{U}^{(2)}$}
To count the number of different sets $\bar{U}^{(2)}$ that could appear at order $b$ for the subset under consideration we use Eq.~\eqref{eq:Nnu}. We need to be careful when imposing the limits in which the elements of $\bar U^{(2)}$ can vary for specific $k$, $n$, and $b$.
In order to count the number of different sets $\bar{U}^{(2)}$ we consider the tensor product of $n$ doublets in the order defined in Eq.~\eqref{eq:tensor_prod}, that is:
\begin{equation}\label{eq:basis_proof}
\mathop{\otimes}_i^n d_i = \underbrace{\left( d_1 \otimes d_2 \otimes ... d_{k} \right)}_{k} \otimes \underbrace{\left( d_{k+1} \otimes d_{k + 2} \otimes ... d_{n} \right)}_{n-k}.
\end{equation}
$\bar{U}^{(1)}$ is fixed for all amplitudes in the subset $S^{(k)}_{1}$:
\begin{equation}
U^{(1)}_1 = 1, \qquad U^{(1)}_2 = \frac{3}{2}, \qquad \dots \quad, \quad U^{(1)}_{k-1} = \frac{k}{2}.
\end{equation}
This is because for all intermediate tensor products of the first $k$ doublets the absolute value of the total $m$-QN equals the highest available irrep, so that each intermediate $U$-type QN is forced to its maximal value. There could, however, be several different sets $\bar{U}^{(2)}$. All of them are such that the last element, $U^{(2)}_{n-k-1}$, satisfies the following:
\begin{equation}\label{eq:J2_limits}
\frac{k}{2}\le U^{(2)}_{n-k-1} \le \frac{k}{2} + b.
\end{equation}
\begin{itemize}
\item $U^{(2)}_{n-k-1} \ge {k/2}$ since the total $m$-QN of the doublets after the semicolon is $M^{(2)} = {k/2}$.
\item $U^{(2)}_{n-k-1} \le \frac{k}{2} + b$ since at order $b$ we consider up to $b$ insertions of the spurion operator, meaning that the maximum value of $U^F_{n-1}$ is equal to $b$. Now, as $U^F_{n-1}$ is constructed from adding the angular momenta $U^{(1)}_{k-1}$ and $U^{(2)}_{n-k-1}$, we have
\begin{equation}
\left|U^{(1)}_{k-1}-U^{(2)}_{n-k-1}\right| \leq U^F_{n-1} \leq U^{(1)}_{k-1}+U^{(2)}_{n-k-1}.
\end{equation}
Since $U^{(2)}_{n-k-1} \geq U^{(1)}_{k-1}$, it follows that $U^{(2)}_{n-k-1}-U^{(1)}_{k-1}\leq U^F_{n-1}$. Therefore, the values of $U^{(2)}_{n-k-1}$ such that $U^F_{n-1} \le b$ can be found from
\begin{equation}
U^{(2)}_{n-k-1} - U^{(1)}_{k-1} \le b \hspace{10pt} \Rightarrow \hspace{10pt} U^{(2)}_{n-k-1} \le U^{(1)}_{k-1} + b
\end{equation}
Greater values of $U^{(2)}_{n-k-1}$ give rise only to $U^F_{n-1} > b$.
\end{itemize}
Using Eq.~\eqref{eq:Nnu} we can find the number of different sets $\bar{U}^{(2)}$ for every fixed value of $U^{(2)}_{n-k-1}$ from the interval above; the sum of these numbers gives the number of all sets $\bar{U}^{(2)}$ that one can have for the chosen $k$, $n$, and $b$.
In the next subsection we use this fact in order to obtain the explicit expressions for the numbers of matrix elements that enter the decompositions of the $a$- and $s$-type amplitudes.
\subsubsection{Counting $n_{X-a}^{(b)}$ and $n_{X-s}^{(b)}$}
The counting of $n_{X-a}^{(b)}$ and $n_{X-s}^{(b)}$ is different for even and odd $b$ because matrix elements with even $b^\prime$ enter the decompositions of the $s$-type amplitudes, while matrix elements with odd $b^\prime$ enter the decompositions of the $a$-type amplitudes.
We start the discussion for the case of even $b$. In that case,
$n_{X-s}^{(b)}$ can be found as the number of all the sets $\bar{U}^{(2)}$ that satisfy the condition in Eq.~\eqref{eq:J2_limits}. This is the case, since all $U^{(2)}_{n-k-1}$ from the interval given in Eq.~(\ref{eq:J2_limits}) can appear in the decompositions of the $s$-type amplitudes. Thus, using Eq.~\eqref{eq:Nnu}, for even $b$ we find
\begin{equation}\label{eq:n_X-s^b_even}
n_{X-s}^{(b)} = \sum_{U = {k/2}}^{{k/2}+b} N^{n-k}_{U}.
\end{equation}
Here, $U$ takes the values $k/2,\, k/2+1,\, \dots,\, k/2+b$.
The matrix elements with even $b^\prime$, however, do not contribute to the $a$-type amplitudes; thus only the values $k/2 \le U^{(2)}_{n-k-1} \le k/2 + b - 1$ enter the decompositions of the $a$-type amplitudes:
\begin{equation}\label{eq:n_X-a^b_even}
n_{X-a}^{(b)} = \sum_{U = {k/2}}^{{k/2}+b - 1} N^{n-k}_{U}.
\end{equation}
For odd $b$ the situation is reversed and we have
\begin{equation}\label{eq:n_X-s-a^b_odd}
n_{X-a}^{(b)} = \sum_{U = {k/2}}^{{k/2}+b} N^{n-k}_{U}, \hspace{25pt} n_{X-s}^{(b)} = \sum_{U = {k/2}}^{{k/2}+b - 1} N^{n-k}_{U}.
\end{equation}
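The counting above is easy to verify numerically. The sketch below assumes that Eq.~\eqref{eq:Nnu} takes the standard form $N^n_U = \binom{n}{n/2-U} - \binom{n}{n/2-U-1}$ for the multiplicity of total $U$-spin $U$ in the product of $n$ doublets; this assumed form, and the function names, are ours. Doubled quantum numbers are used so that half-integers stay integer.

```python
from math import comb

def N_irreps(n, two_U):
    """Multiplicity of total U-spin U (= two_U/2) in the product of n
    doublets; assumed form of Eq. (Nnu)."""
    if (n - two_U) % 2 or two_U < 0 or two_U > n:
        return 0
    j = (n - two_U) // 2
    return comb(n, j) - (comb(n, j - 1) if j >= 1 else 0)

def n_X(n, k, b, kind):
    """Number of independent matrix-element combinations entering the
    a- or s-type amplitudes of S_1^(k) up to order b."""
    # the full range k/2 <= U <= k/2 + b applies to s-type (even b)
    # and to a-type (odd b); the other type stops at k/2 + b - 1
    full = kind == ('s' if b % 2 == 0 else 'a')
    two_U_max = k + 2 * b if full else k + 2 * b - 2
    return sum(N_irreps(n - k, t) for t in range(k, two_U_max + 1, 2))

# n = 6, b = 2 (even), k = n/2 - b = 1:
print(n_X(6, 1, 2, 's'), n_X(6, 1, 2, 'a'))   # 10 and 9
```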
\subsection{Step 2}
Using Eqs.~\eqref{eq:nSk-nAk-def},~\eqref{eq:Nnu},~\eqref{eq:n_X-a^b_even} and \eqref{eq:n_X-s^b_even} and setting $k = {n}/{2} - b$ we see that for even $b$
\begin{equation}
\begin{gathered}
n_A^{({n/2} - b)} - n_{X-a}^{(b)} = 1, \hspace{15pt} \forall n, b,\\
n_A^{({n/2} - b)} - n_{X-s}^{(b)} = 0, \hspace{15pt} \forall n, b.\\
\end{gathered}
\end{equation}
This means that at order $b$ there is exactly one sum rule among the $a$-type amplitudes of the subset $S_1^{(k)}$. There are no $s$-type sum rules among the amplitudes from $S_1^{(k)}$ at this order.
Similarly for odd $b$ we use Eqs.~\eqref{eq:nSk-nAk-def},~\eqref{eq:Nnu} and~\eqref{eq:n_X-s-a^b_odd} to find
\begin{equation}
\begin{gathered}
n_A^{({n/2} - b)} - n_{X-a}^{(b)} = 0, \hspace{15pt} \forall n, b,\\
n_A^{({n/2} - b)} - n_{X-s}^{(b)} = 1, \hspace{15pt} \forall n, b.\\
\end{gathered}
\end{equation}
Summing up, we have obtained the following result. For any $n$ and order of breaking $b\leq {n}/{2}-1$ there exists a subset $S_1^{(k)}$ with $k = {n/2} - b$, such that there is exactly one sum rule among the amplitudes of the subset. It is an $a$-type sum rule for even $b$ and an $s$-type sum rule for odd $b$.
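The key relations of this step follow from a telescoping sum over the multiplicities $N^{n-k}_U$, and can be confirmed numerically for a range of $n$ and $b$. As in the previous sketch, the closed form of Eq.~\eqref{eq:Nnu} is an assumption on our side; the check below is a cross-check, not part of the proof.

```python
from math import comb

def N_irreps(n, two_U):
    """Assumed multiplicity formula of Eq. (Nnu), doubled QNs."""
    j = (n - two_U) // 2
    return comb(n, j) - (comb(n, j - 1) if j >= 1 else 0)

for n in range(4, 17, 2):
    for b in range(n // 2):
        k = n // 2 - b
        n_A = comb(n - k, n // 2 - k)
        # "full" range k/2..k/2+b vs. range stopping at k/2+b-1:
        full  = sum(N_irreps(n - k, t) for t in range(k, k + 2*b + 1, 2))
        short = sum(N_irreps(n - k, t) for t in range(k, k + 2*b - 1, 2))
        # even b: full = n_X-s, short = n_X-a; odd b: the reverse.
        # Either way there is exactly one sum rule and no sum rule
        # of the other type among the amplitudes of S_1^(k):
        assert n_A - full == 0 and n_A - short == 1
```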
\subsection{Step 3}
We found that there exists exactly one sum rule between amplitudes in the subset under consideration. A change of basis does not affect the number of sum rules between amplitudes from the subset nor the form of the sum rules. By a change of basis for the subset, particularly the change of the order in which one takes the tensor product of the $n-k$ doublets after the semicolon in Eq.~\eqref{eq:amp}, any amplitude in the subset can be exchanged with any other amplitude in the subset. This means that the sum rules must be symmetric under the exchange of any two amplitudes. This implies the following symmetric form for sum rules
\begin{equation}\label{eq:SR}
\sum_{a_i \in S_1^{(k)}} a_i = 0 \hspace{15pt} \text{and} \hspace{15pt} \sum_{s_i \in S_1^{(k)}} s_i = 0
\end{equation}
for even and odd $b$ respectively, and $k = {n/2} - b$. The sums are taken over all the amplitudes in $S_1^{(k)}$.
\subsection{Step 4}
Now we are ready to discuss the entire set of subsets of the $U$-spin pairs $S^{(k)}$ with $k = {n/2} - b$. Eq.~\eqref{eq:nSk-nAk-def} defines $n_S^{(k)}$ and $n_A^{(k)}$ which are the number of subsets in $S^{(k)}$ and the number of $U$-spin pairs in each subset, respectively.
As in step 3 we use the fact that the choice of the specific $U$-spin basis, that is, the choice of the order in which the tensor product is taken, does not affect the number of sum rules between amplitudes nor the form of the sum rules. For any subset there is a basis choice such that the group theoretical decomposition of amplitudes takes the same form as the decomposition for the subset $S_1^{(k)}$ in the basis of Eq.~\eqref{eq:basis_proof}. Thus each of the $n_S^{(k)}$ subsets has exactly one sum rule among its amplitudes. The sum rule has the symmetric form given in Eq.~\eqref{eq:SR}.
In other words, we have proven that for any $n$ and even(odd) $b\leq {n/2}-1$ there are $n_S^{\left({n/2}-b\right)}$ $a$($s$)-type sum rules, that are valid up to order $b$ and broken at order $b+1$, and that are given in Eq.~\eqref{eq:SR1}.
\subsection{Corollary I}
Let us start by considering even $b$. As we have shown above, at even~$b$ the system has at least $n_S^{(\frac{n}{2}-b)}$ $a$-type sum rules. Since $b+1$ is odd, there also exist at least $n_S^{({n/2} - b - 1)}$ $s$-type sum rules valid up to order $b+1$. These $s$-type sum rules hold at order $b$ as well, since only matrix elements with even $b^\prime$ contribute to the $s$-type amplitudes. Together this implies that, according to the theorem, the number of $a$- and $s$-type sum rules for the case of even $b$ is at least $n_S^{(\frac{n}{2}-b)} + n_S^{(\frac{n}{2} -b - 1)}$. Consideration of odd $b$ leads to the same result.
Now, we can compare this number with the total number of sum rules $n_{SR}^{(b)}$ given in Eq.~(\ref{eq:nSR_doublets}). Using Eq.~\eqref{eq:nSk-nAk-def} for $n_S^{(k)}$, we find
\begin{equation}
n_{SR}^{(b)} = n_S^{({n}/{2} - b)} + n_S^{({n}/{2}-b-1)} \hspace{25pt} \forall n,b\,.
\end{equation}
We see that the counting of sum rules predicted by the theorem at order $b$ is the same as the counting of all sum rules of the system. This means that all sum rules of the $U$-spin system at any order of breaking can be found as symmetric sums of amplitudes from the subsets of sets $S^{(k)}$.
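By Pascal's rule, the sum $n_S^{({n}/{2}-b)} + n_S^{({n}/{2}-b-1)}$ collapses to a single binomial coefficient, $\binom{n}{n/2-b-1}$, which we take to be the content of Eq.~\eqref{eq:nSR_doublets} (an assumption on our side, not restated in this chunk). A short numerical sketch:

```python
from math import comb

def n_S(n, k):
    """Number of subsets S_j^(k), Eq. (nSk-nAk-def); zero for k < 1."""
    return comb(n - 1, k - 1) if k >= 1 else 0

for n in range(4, 21, 2):
    for b in range(n // 2):
        total = n_S(n, n // 2 - b) + n_S(n, n // 2 - b - 1)
        # Pascal's rule: C(n-1, j) + C(n-1, j-1) = C(n, j)
        assert total == comb(n, n // 2 - b - 1)
```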
\subsection{Corollary II}
Table~\ref{tab:sum-rules-rule} summarizes the statements of the Theorem and Corollary I. At any order of breaking $b \le {n/2} - 1$ the $U$-spin system has $a$- and $s$-type sum rules which are given as sums over the amplitudes of subsets $S^{(k)}$, where $k$ takes different values for $a$- and $s$-type sum rules, depending on the parity of $b$.
This difference in the definition of $k$ is due to the decoupling of the $a$- and $s$-type amplitudes, that is, the fact that the $a$-type amplitudes contain only contributions from matrix elements with odd $b^\prime$, while the $s$-type amplitudes contain only contributions from matrix elements with even $b^\prime$.
\section{Generalization to the case of arbitrary irreps \label{app:mu-factor}}
As we outline in Section~\ref{sec:sym}, the key ideas that are used in obtaining the sum rules for arbitrary systems from the sum rules of systems of doublets are basis rotation and symmetrization.
In this appendix we provide extra details and proofs for that process.
\subsection{Basis rotation for amplitudes}
Consider a system of $r$ arbitrary $U$-spin irreps $u_0, u_1, \dots, u_{r-1}$. We would like to obtain the sum rules for this system from the sum rules of the underlying system of $n$ would-be doublets, where $n$ is given in Eq.~\eqref{eq:would-be-n-def}. As in Section~\ref{sec:sym} we denote the system of $n$ would-be doublets as ``the original system'' and the system of $r$ arbitrary representations as ``the new system.''
Each irrep $u_j$ of the new system is constructed as the highest irrep in the tensor product of $n_j = 2u_j$ doublets of the original system. Being the highest irrep implies that the irrep is totally symmetric
with respect to the interchange of the representations in the original system.
We denote the amplitudes of the original system as $A^{(d)}(\bar{m}_0, \bar{m}_1, \dots, \bar{m}_{r-1})$, where the label $(d)$ indicates that this is an amplitude of a system of doublets.
Note that in the main text we only considered one symmetrization, while here we consider the more general case of $r$ symmetrizations.
Each $\bar m_j$ is a set of $2u_j$ $m$-QNs of the doublets of the original system that we use in order to build the irrep $u_j$ of the new system. For each of the tensor products of the $n_j$ doublets of the original system we perform a basis rotation according to Eq.~\eqref{eq:basis_rot_def}. We arrive at the following result
\begin{equation}\label{eq:doublet_to_u_basis_rot_r_irreps}
A^{(d)}(\bar{m}_0, \dots, \bar{m}_{r-1}) = \sum_{\bar{U}_0, \dots, \bar{U}_{r-1}} \left(\prod_{j=0}^{r-1} C^*\left(\bar{m}_j,\bar{U}_j\right) \right) A\left(\bar{U}_0, \dots, \bar{U}_{r-1}, M_0, \dots, M_{r-1}\right),
\end{equation}
where $M_j$ is the sum of the $m$-QNs of the set $\bar m_j$.
Each set $\bar U_j$ contains $n_j-1$ elements. Note that as in Eq.~\eqref{eq:basis_rot_def}, the sum in Eq.~\eqref{eq:doublet_to_u_basis_rot_r_irreps} goes over all the possible sets $\bar{U}_0, \dots, \bar{U}_{r-1}$, and not over the particular elements in these sets.
Now, consider one specific representation $u_j$ of the new system. Among all the possible sets of $U$-type QNs $\bar U_j$ only one has the total $U$-spin $u_j$. We denote this set of $U$-type QNs as $\bar{U}_j^{(h)}$, where the label $(h)$ highlights that the set corresponds to the highest possible representation in the tensor product. Thus, out of all the terms that enter the RHS of Eq.~\eqref{eq:doublet_to_u_basis_rot_r_irreps} only one amplitude belongs to the new system that we are interested in. We denote this amplitude as $A(M_0, M_1, \dots, M_{r-1})$.
The procedure of obtaining the sum rules for the new system can be performed in two steps. First, we perform the basis rotation as in Eq.~\eqref{eq:doublet_to_u_basis_rot_r_irreps} and rewrite the sum rules of the original system in terms of the amplitudes in the RHS of Eq.~\eqref{eq:doublet_to_u_basis_rot_r_irreps}. Second, we manipulate the sum rules to obtain relations that only contain the amplitudes with the highest irreps, that is, $A(M_0, M_1, \dots, M_{r-1})$. We show below that the latter is guaranteed to be realizable due to the decoupling of sum rules corresponding to different combinations of sets $\bar U_j$.
As a consequence of the decoupling, in order to obtain the sum rules of the new system from the sum rules of the original system it is enough to just perform the following substitution inside the sum rules of the original system
\begin{equation}\label{eq:doublet_to_u_basis_subst_r_irreps}
A^{(d)}(\bar{m}_0, \dots, \bar{m}_{r-1}) \, \longrightarrow \,\, \left(\prod_{j=0}^{r-1} C^*\left(\bar{m}_j,\bar{U}_j^{(h)}\right) \right) A\left(M_0, \dots, M_{r-1}\right).
\end{equation}
Note that the mapping in Eq.~(\ref{eq:doublet_to_u_basis_subst_r_irreps}) is not injective, \emph{i.e.}~in general several $A^{(d)}(\bar{m}_0, \dots, \bar{m}_{r-1})$ are mapped onto the same $A\left(M_0, \dots, M_{r-1}\right)$.
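The non-injectivity is easy to see for a single irrep: all sign assignments $\bar m_j$ with the same total $m$-QN $M_j$ are mapped onto the same $A(M_j)$, and their number is the binomial coefficient derived in Sec.~\ref{eq:deriving-the-symmetry-factor} below. A minimal sketch, for a would-be $u_j = 3/2$ irrep built from three doublets:

```python
from itertools import product
from collections import Counter
from math import comb

n_j = 3                                  # number of would-be doublets, u_j = 3/2
# group the doublet amplitudes A^(d)(m_j) by the total m-QN M_j:
counts = Counter(sum(signs) / 2 for signs in product((1, -1), repeat=n_j))
# the number mapped onto a single A(M_j) is C(2u_j, u_j - M_j)
for two_M in range(-n_j, n_j + 1, 2):
    assert counts[two_M / 2] == comb(n_j, (n_j - two_M) // 2)
print(dict(counts))
```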
In the next subsection we explain why the decoupling takes place.
\subsection{Decoupling}
Below we expand on why the sum rules in the symmetrized basis decouple. In short, the decoupling takes place because different RMEs cannot cancel each other, \emph{i.e.}~the respective contributions have to cancel separately in each sum rule.
Note first that in order to show that a system of sum rules decouples, it is enough to show that this is manifestly the case in one specific basis. In any other basis the decoupling might not be manifest; it is nevertheless still an underlying feature of the system. Therefore, for the argument of this subsection, we construct a specific basis of RMEs in which the decoupling is manifest.
Recall that different bases are generated by changing the order of the tensor products.
When we are given a system of $n$ doublets and perform the decomposition of the amplitudes in terms of RMEs, as described in detail in Appendix~\ref{app:RMEdecomposition}, we can always choose a basis of RMEs such that it mimics the new system. To construct such a basis we perform the tensor product of the doublets of the original system in a specific order as follows. We first separately multiply the doublets of each subset of $n_j$ doublets that are eventually used to build the irrep $u_j$. For each $u_j$ we use the same order of the tensor product that is encoded in the corresponding set $\bar U_j$ that is used to define the amplitudes in the RHS of Eq.~\eqref{eq:doublet_to_u_basis_rot_r_irreps}. The remaining tensor products can be arbitrary.
For the purpose of the argument, we introduce the following notations
\begin{align}
\bar U &= \{\bar{U}_0, \dots, \bar{U}_{r-1}\}, \\ \bar M & = \{M_0, \dots, M_{r-1}\}, \\ A\left(\bar U, \bar{M}\right) &\equiv A\left(\bar{U}_0, \dots, \bar{U}_{r-1}, M_0, \dots, M_{r-1}\right).
\end{align}
In the basis constructed above, the amplitudes $A\left(\bar U, \bar M\right)$ can be written only in terms of RMEs that have $\bar U$ as a part of their multi-index $\alpha$ and no other RME could be present in the decomposition. That is, we see that sets of amplitudes with different $\bar{U}$ are decomposed in terms of different non-overlapping sets of RMEs. This shows the decoupling: Any sum rule that holds between the amplitudes of the original system must also hold when the amplitudes of the original system are replaced by the amplitudes $A\left(\bar U, \bar M\right)$ with the factors as in Eq.~\eqref{eq:doublet_to_u_basis_rot_r_irreps}. Thus the substitution in Eq.~\eqref{eq:doublet_to_u_basis_subst_r_irreps} is justified.
\subsection{The symmetry factor \label{eq:deriving-the-symmetry-factor}}
To perform the substitution in Eq.~\eqref{eq:doublet_to_u_basis_subst_r_irreps} we need to know the coefficients $C^*(\bar m_j, \bar U^{(h)}_j)$. A straightforward way to find these coefficients would be to use the definition of the coefficients $C^*$ given in Eq.~\eqref{eq:basis_rot_def}. This approach requires us to specify a $U$-spin basis and evaluate the coefficients of interest in this chosen basis. However, since the coefficients $C^*(\bar m_j, \bar U^{(h)}_j)$ that appear in the RHS of Eq.~\eqref{eq:doublet_to_u_basis_subst_r_irreps} correspond to the highest representation in the tensor product of $n_j$ doublets, one can find them using simple combinatorics. In what follows we use the symmetry of the highest representation in the tensor product of doublets to derive a basis-independent expression for the coefficients $C^*(\bar m_j, \bar U_j^{(h)})$.
Consider a state $\ket{u_j,M}$. As above we build this state from the product of $n_j$ doublets such that $n_j = 2u_j$, and thus $u_j$ is the highest irrep in the tensor product. The highest representation is totally symmetric, which means that the coefficients in the basis rotation analogous to Eq.~(\ref{eq:basis_rot_def})
should be the same for all sets $\bar m_j$ that appear in the decomposition of $\ket{u_j, M}$. We define this universal prefactor $C_\text{sym}(u_j,M)$ such that
\begin{align}
\label{eq:C*_Csym_relation}
\ket{u_j, M} = \frac{1}{\sqrt{C_\text{sym}(u_j, M)}}\sum_{\bar m_j} \delta_{M_j, M}\ket{\bar m_j}\,.
\end{align}
Note that $\ket{\bar m_j}$ represents a tensor product of $n_j$ doublets with $m$-QNs from the set $\bar m_j$.
Note further that all $\bar m_j$ we sum over in Eq.~(\ref{eq:C*_Csym_relation}) have the same number of elements, namely $n_j=2u_j$, and that the Kronecker delta ensures that the sum goes only over the sets $\bar m_j$ with total $m$-QN equal to $M$. The normalization condition then implies that the coefficient $C_\text{sym}(u_j,M)$ is equal to the number of different sets $\bar{m}_j$ with the fixed value $M_j = M$, so we arrive at
\begin{equation}\label{eq:counting-mj}
C_\text{sym}(u_j, M_j) = C(2u_j, u_j-M_j) = C(2u_j, y_j),
\end{equation}
where $C(*,*)$ is a binomial coefficient. The first binomial coefficient is written in terms of the $m$-QN $M_j$ of the representation $u_j$, while the latter uses the number of minus signs $y_j$ from the $y_j$-notation introduced in Section~\ref{sec:gen_1d}.
Eqn.~(\ref{eq:counting-mj}) can be seen as follows. All $\ket{\bar{m}_j}$ that contribute to Eq.~\eqref{eq:C*_Csym_relation} can be represented using $2u_j$ signs. We count the number of ways to arrange minus signs such that we have a total $m$-QN $M_j$. There are two possibilities:
\begin{itemize}
\item $M_j\geq 0$: In order to contribute, $\ket{\bar{m}_j}$ must contain $2M_j$ \lq\lq{}$+$\rq\rq{}-signs, and on top of that an equal number, $\frac{2 u_j-2 M_j}{2}$, of \lq\lq{}$+$\rq\rq{}- and \lq\lq{}$-$\rq\rq{}-signs. Thus the total number of \lq\lq{}$-$\rq\rq{}-signs is $y_j = u_j-M_j$, chosen from a total of $2u_j$ signs, from which Eq.~(\ref{eq:counting-mj}) follows.
\item $M_j<0$: In order to contribute, $\ket{\bar{m}_j}$ must contain $2\vert M_j\vert$ \lq\lq{}$-$\rq\rq{}-signs, and on top of that an equal number, $\frac{2 u_j-2 \vert M_j\vert}{2}$, of \lq\lq{}$+$\rq\rq{}- and \lq\lq{}$-$\rq\rq{}-signs. This makes a total of $y_j = u_j+\vert M_j\vert = u_j-M_j$ minus signs, from which Eq.~(\ref{eq:counting-mj}) again follows.
\end{itemize}
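The counting in Eq.~\eqref{eq:counting-mj} can be confirmed by brute-force enumeration of sign strings. The sketch below (illustrative only) uses doubled quantum numbers so that half-integer $u_j$ and $M_j$ stay integer.

```python
from itertools import product
from math import comb

def c_sym(two_u, two_M):
    """Count sign strings of n_j = 2u_j doublets with total m-QN M_j;
    each doublet contributes m = +1/2 or -1/2 (stored doubled as +-1)."""
    return sum(1 for signs in product((1, -1), repeat=two_u)
               if sum(signs) == two_M)

for two_u in range(1, 9):
    for two_M in range(-two_u, two_u + 1, 2):
        y = (two_u - two_M) // 2        # y_j = u_j - M_j minus signs
        assert c_sym(two_u, two_M) == comb(two_u, y)   # Eq. (counting-mj)
```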
Finally, we relate our results in Eqs.~\eqref{eq:C*_Csym_relation} and~\eqref{eq:counting-mj} to the coefficient $C^*(\bar{m}_j, \bar U_j^{(h)})$ in Eq.~(\ref{eq:doublet_to_u_basis_subst_r_irreps}). To do this we take Eq.~\eqref{eq:C*_Csym_relation} and multiply it by $\bra{u_j, M}$ on both sides
\begin{align}
1 &= \frac{1}{\sqrt{C_{\text{sym}}(u_j,M)}} \sum_{\bar{m}_j} \delta_{M_j,M} \braket{u_j, M}{\bar{m}_j}\\
&=\frac{C^*(\bar{m}_j, \bar U_j^{(h)})}{\sqrt{C_\text{sym}(u_j, M)}}C_\text{sym}(u_j, M) \\
&= C^*(\bar{m}_j, \bar U_j^{(h)}) \sqrt{C_\text{sym}(u_j, M)},
\end{align}
where we also used, following from Eq.~\eqref{eq:basis_rot_def},
\begin{align}
\braket{u_j, M}{\bar{m}_j} &= C^*(\bar{m}_j, \bar U_j^{(h)})\,.
\end{align}
It follows:
\begin{equation}
C^*(\bar{m}_j, \bar U_j^{(h)}) = \frac{1}{\sqrt{C_\text{sym}(u_j, M)}}.
\end{equation}
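This result can be checked independently of any basis choice by constructing the highest-weight multiplet explicitly with the lowering operator $J_- = \sum_i j_-^{(i)}$, using $J_-\ket{u,M} = \sqrt{u(u+1)-M(M-1)}\,\ket{u,M-1}$. The sketch below (illustrative; the construction, not the paper's algorithm) verifies for $u_j = 2$ that every basis ket enters $\ket{u_j, M}$ with the same coefficient $1/\sqrt{C_\text{sym}(u_j, M)}$.

```python
from math import comb, isclose, sqrt

def lower(state, n):
    """Apply J_- = sum_i j_-^(i) to a state given as {sign_tuple: coeff}
    over n spin-1/2 factors; j_-|+> = |-> with coefficient 1."""
    out = {}
    for signs, c in state.items():
        for i in range(n):
            if signs[i] == +1:
                new = signs[:i] + (-1,) + signs[i + 1:]
                out[new] = out.get(new, 0.0) + c
    return out

n = 4                                   # u_j = 2 built from four doublets
u = n / 2
state = {tuple([+1] * n): 1.0}          # highest-weight state |u, u>
M = u
while M > -u:
    norm = sqrt(u * (u + 1) - M * (M - 1))   # J_-|u,M> = norm |u,M-1>
    state = {s: c / norm for s, c in lower(state, n).items()}
    M -= 1
    # every basis ket enters with the same coefficient 1/sqrt(C_sym):
    expected = 1.0 / sqrt(comb(n, int(u - M)))
    assert all(isclose(c, expected) for c in state.values())
print("C* = 1/sqrt(C_sym) verified for u =", u)
```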
\subsection{The $\mu$-factor \label{sec:mu-Factor-Appx}}
In Section~\ref{sec:halves-lattice} we study the geometrical approach to writing sum rules for doublets-only systems. The method can be straightforwardly generalized to the case of systems with at least one doublet, while the rest of the representations are arbitrary.
For the lattice in the general case, each node is assigned a $\mu$-factor. Note that the $\mu$-factor used in the lattice formalism is different from, but related to, the symmetry factor derived in Sec.~\ref{eq:deriving-the-symmetry-factor} above.
The reason for the difference is as follows. When performing the replacement in Eq.~(\ref{eq:doublet_to_u_basis_subst_r_irreps}), several different amplitudes on the LHS are mapped onto the same amplitude on the RHS. While in the purely algebraic algorithm these are automatically added together, in the lattice algorithm we have to count explicitly how many amplitudes contribute in this way to the same symmetrized amplitude.
There are therefore two sources that contribute to the $\mu$-factors of lattice nodes. One comes from the symmetry factor that we derive above. As we showed, when obtaining the sum rules for the new system from the sum rules for the original system of doublets all we need to do is to perform the substitution given in Eq.~\eqref{eq:doublet_to_u_basis_subst_r_irreps}. Thus each amplitude/node of the lattice for the new system gains a factor of $1/\sqrt{C(2u_j, y_j)}$ for each higher representation, see Eqs.~(\ref{eq:doublet_to_u_basis_subst_r_irreps}, \ref{eq:counting-mj}).
The other contribution to the $\mu$-factor is due to the fact that when transitioning from the system of doublets to the system of arbitrary representations, different $n$-tuples of the doublets-only system can be mapped onto a single generalized $n$-tuple of the system of arbitrary representations. For each representation $u_j$ and for a fixed $y_j$ (which corresponds to a fixed $M_j$) the contribution to the $\mu$-factor is given by the number of different amplitudes of the system of doublets that are mapped into a single amplitude of the general system.
What we actually need to count in the geometrical method is how many nodes of the lattice for the doublets-only system are mapped onto a given node of the lattice that corresponds to the system of arbitrary representations. Consider the example of the lattice point
\begin{equation}\label{eq:example-lattice-point}
(\underset{0}{-},\underset{1}{---++}, \underset{2}{+ +}) = (1, 1, 1)\,,
\end{equation}
where we consider the case $2 u_1=5$, and $M_1=-1/2$, i.e. $y_1=3$.
How many lattice points, \emph{i.e.}~amplitudes, correspond to this node in the lattice for the doublets? To start the counting, the number of ways three minus signs can be assigned to the five available positions is given by the binomial coefficient $C(5,3)=10$. However, in order to get the total number of doublet lattice points that correspond to Eq.~(\ref{eq:example-lattice-point}), we also have to account for the fact that in the lattice there are points that correspond to identical amplitudes when the labels of the three minus signs (in the doublet lattice) are interchanged, see Eq.~(\ref{eq:example-permutations-lattice-points}). This gives another factor $3!$ on top of the binomial coefficient.
Therefore, the total number of lattice points in the doublet lattice that are mapped onto the same point in the new lattice Eq.~(\ref{eq:example-lattice-point}) is $C(5,3)\times 3! = 60$.
Alternatively, we can perform the counting by directly counting the number of arrangements of three distinguishable minus-signs into five numbered positions, irrespective of the permutation of the plus-signs. This is equal to the total number of permutations, $5!$, divided by the number of permutations of the plus-signs, $2!$, leading of course to the same result $5!/2!= 60$.
In the general case this translates to the number of ordered arrangements of $y_j$ minus signs into the $2 u_j$ positions, given by
\begin{align}
P(2u_j,y_j) = C(2u_j,y_j) \times y_j! = \frac{(2u_j)!}{(2u_j-y_j)!}\,.
\end{align}
In words: the number of lattice points of the doublet lattice that correspond to one lattice point in the lattice of the higher representation $u_j$ (and number of minus signs $y_j$, determined by the value of $M_j$) is given by the number of unordered ways $C(2u_j, y_j) $ to put the $y_j$ minus signs into the $2u_j$ positions, times the number of ways one can order the minus signs, which is $y_j!$.
Thus, we conclude that each irrep $u_j$ contributes a factor of
\begin{equation}
\mu_j=
\sqrt{C(2u_j,y_j)} \times y_j!\,. \label{eq:complete-mu-factor}
\end{equation}
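As a cross-check with the example above: for the representation $u_1$ of Eq.~(\ref{eq:example-lattice-point}) ($2u_1=5$, $y_1=3$), the $P(5,3)=60$ doublet nodes are weighted by the symmetry factor $1/\sqrt{C(5,3)}$, so that the two contributions combine to

```latex
\mu_1
= \frac{P(5,3)}{\sqrt{C(5,3)}}
= \frac{60}{\sqrt{10}}
= \sqrt{C(5,3)}\times 3!
= 6\sqrt{10}\,.
```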
The total $\mu$-factor of a node is then given by
\begin{equation}
\mu[y_1,y_2,...,y_{r-1}]=\prod_{j=1}^{r-1} \mu_j\,.
\end{equation}
\section{Decomposition of the $C_b \to L_b P^- P^+$ system in terms of RMEs}\label{app:CbtoLbPP}
In this Appendix we perform the decomposition of the CKM-free amplitudes of the $C_b\to L_b P^- P^+$ set in terms of RMEs. We perform the decomposition up to $b_{\text{max}} = 2$. Table~\ref{tab:CbtoLPP-b01} shows the decompositions of the CKM-free amplitudes in terms of RMEs with $b =0$ and $b = 1$, which we list below:
\begin{align}\label{eq:RMEb01}
X_1 &= \mel{\frac{3}{2}}{_{1}1}{\frac{1}{2}}, & X_2 &= \mel{\frac{1}{2}}{_{1}1}{\frac{1}{2}}, & X_3 &= \mel{\frac{1}{2}}{_{0}1}{\frac{1}{2}} \nonumber \\
X_4 &= \mel{\frac{3}{2}}{_1 \left(1 \times 1_\varepsilon\right)_2}{\frac{1}{2}}, & X_5 &= \mel{\frac{3}{2}}{_1 \left(1 \times 1_\varepsilon \right)_1}{\frac{1}{2}}, & X_6 &= \mel{\frac{1}{2}}{_1 \left(1\times 1_\varepsilon\right)_1}{\frac{1}{2}} \nonumber \\
X_7 &= \mel{\frac{1}{2}}{_0 \left(1 \times 1_\varepsilon \right)_1}{\frac{1}{2}}, & X_8 &= \mel{\frac{1}{2}}{_1 \left(1 \times 1_\varepsilon\right)_0}{\frac{1}{2}}, & X_9 &= \mel{\frac{1}{2}}{_0 \left(1 \times 1_\varepsilon\right)_0}{\frac{1}{2}}.
\end{align}
For the states we use the notation of Ref.~\cite{Grossman:2018ptn}, where the subindex of the final state is the intermediate representation in the tensor product of three doublets. The Hamiltonian operator $H^1$ is denoted by ``$1$'', the spurion operator is denoted by ``$1_\varepsilon$'', and expressions of the form $\left(u_1 \times u_2\right)_{u_3}$ denote the $u_3$ irrep in the tensor product of $u_1$ and $u_2$.
Table~\ref{tab:CbtoLPP-b2} shows the contributions to the decompositions from RMEs with $b = 2$. All $b = 2$ RMEs are listed below:
\begin{align}\label{eq:RMEb2}
X_{10} & = \mel{\frac{3}{2}}{_1 \left(\left(1 \times 1_\varepsilon\right)_2 \times 1_\varepsilon\right)_2}{\frac{1}{2}}, & X_{11} & = \mel{\frac{3}{2}}{_1 \left(\left(1\times1_\varepsilon\right)_2\times 1_\varepsilon\right)_1}{\frac{1}{2}}, \nonumber\\
X_{12} &= \mel{\frac{1}{2}}{_1 \left(\left(1\times1_\varepsilon\right)_2\times 1_\varepsilon\right)_1}{\frac{1}{2}}, & X_{13} & = \mel{\frac{1}{2}}{_0 \left(\left(1 \times 1_\varepsilon\right)_2 \times 1_\varepsilon\right)_1}{\frac{1}{2}}, \nonumber\\
X_{14} & = \mel{\frac{3}{2}}{_1 \left(\left(1\times1_\varepsilon\right)_1\times1_\varepsilon\right)_2}{\frac{1}{2}}, & X_{15} & = \mel{\frac{3}{2}}{_1 \left(\left(1\times 1_\varepsilon\right)_1 \times 1_\varepsilon\right)_1}{\frac{1}{2}}, \nonumber\\
X_{16} & = \mel{\frac{1}{2}}{_1 \left(\left(1\times1_\varepsilon\right)_1 \times 1_\varepsilon\right)_1}{\frac{1}{2}}, & X_{17} & = \mel{\frac{1}{2}}{_0 \left(\left(1 \times 1_\varepsilon\right)_1 \times 1_\varepsilon\right)_1}{\frac{1}{2}}, \nonumber \\
X_{18} &= \mel{\frac{3}{2}}{_1 \left(\left(1\times 1_\varepsilon\right)_0\times 1_\varepsilon\right)_1}{\frac{1}{2}}, & X_{19} &= \mel{\frac{1}{2}}{_1 \left(\left(1\times1_\varepsilon\right)_0\times1_\varepsilon\right)_1}{\frac{1}{2}},\nonumber\\
X_{20}& = \mel{\frac{1}{2}}{_0 \left(\left(1 \times 1_\varepsilon\right)_0\times 1_\varepsilon\right)_1}{\frac{1}{2}}. & &
\end{align}
When writing the decompositions of the amplitudes of the $C_b \to L_b P^- P^+$ set in Tables~\ref{tab:CbtoLPP-b01} and~\ref{tab:CbtoLPP-b2}, we perform the tensor products in the final state in the order $\left(\left(L_b \otimes P^-\right)\otimes P^+\right)$. The tensor products for the Hamiltonian operators are taken in the order shown in Eqs.~\eqref{eq:RMEb01} and~\eqref{eq:RMEb2}.
To find the $b=0$ sum rules one needs to consider only the three RMEs $X_{1}$, $X_2$, $X_3$. For the $b=1$ sum rules one needs to also include the RMEs $X_4$ through $X_9$. In order to write the sum rules that hold up to $b = 2$, all the RMEs $X_1$--$X_{20}$ must be considered. Note that not all the RMEs are linearly independent: the rank of the combined matrix that includes all 20 RMEs is equal to 13, thus resulting in one sum rule that holds at order $b = 2$. One can check explicitly that all the sum rules listed in Section~\ref{sec:CbtoLbPP} are indeed in agreement with Tables~\ref{tab:CbtoLPP-b01} and~\ref{tab:CbtoLPP-b2}. We also checked that for $b=3$ the rank of the matrix saturates, i.e.~there are no higher-order sum rules. We do not write the explicit table here.
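In terms of counting, with the $14$ decay amplitudes listed in Tables~\ref{tab:CbtoLPP-b01} and~\ref{tab:CbtoLPP-b2}, the number of sum rules valid up to a given order equals the number of amplitudes minus the rank of the corresponding truncated RME matrix; at order $b=2$ this gives

```latex
N_{\text{SR}}(b \leq 2)
\;=\; N_{A} - \operatorname{rank}
\;=\; 14 - 13
\;=\; 1\,.
```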
\begingroup
\squeezetable
\begin{table}
\centering
\begin{tabular}{c|c|c|c|c|c|c|c|c|c}
\hline\hline
\text{Decay amplitude} & ~~$X_1$~~ & ~~$X_2$~~ & ~~$X_3$~~ & ~~$X_4$~~ & ~~$X_5$~~ & ~~$X_6$~~ & ~~$X_7$~~ & ~~$X_8$~~ & ~~$X_9$~~ \\
\hline\hline
$A\left(\Lambda_c^+ \to \Sigma^+ K^- K^+\right)$ & $\frac{1}{3}$ & $-\frac{2}{3}$ & $0$ & $\frac{1}{\sqrt{10}}$ & $-\frac{1}{3 \sqrt{2}}$ & $\frac{\sqrt{2}}{3}$ & $0$ & $0$ & $0$\\
$A\left(\Xi_c^+\to p\pi^- \pi^+\right)$ & $\frac{1}{3}$ & $-\frac{2}{3}$ & $0$ & $-\frac{1}{\sqrt{10}}$ & $\frac{1}{3 \sqrt{2}}$ & $-\frac{\sqrt{2}}{3}$ & $0$ & $0$ & $0$ \\
$A\left(\Lambda_c^+\to \Sigma^+ \pi^- \pi^+\right)$ & $\frac{1}{3}$ & $\frac{1}{3}$ & $-\frac{1}{\sqrt{3}}$ & $\frac{1}{\sqrt{10}}$ & $-\frac{1}{3 \sqrt{2}}$ & $-\frac{1}{3 \sqrt{2}}$ & $\frac{1}{\sqrt{6}}$ & $0$ & $0$ \\
$A\left(\Xi_c^+\to p K^- K^+\right)$ & $\frac{1}{3}$ & $\frac{1}{3}$ & $-\frac{1}{\sqrt{3}}$ & $-\frac{1}{\sqrt{10}}$ & $\frac{1}{3 \sqrt{2}}$ & $\frac{1}{3 \sqrt{2}}$ & $-\frac{1}{\sqrt{6}}$ & $0$ & $0$ \\
$A\left(\Lambda_c^+\to \Sigma^+ \pi^- K^+\right)$ & $\frac{\sqrt{2}}{3}$ & $-\frac{1}{3 \sqrt{2}}$ & $-\frac{1}{\sqrt{6}}$ & $\frac{2}{3 \sqrt{5}}$ & $0$ & $0$ & $0$ & $\frac{1}{3 \sqrt{2}}$ & $\frac{1}{\sqrt{6}}$ \\
$A\left(\Xi_c^+\to p K^- \pi^+\right)$ & $\frac{\sqrt{2}}{3}$ & $-\frac{1}{3 \sqrt{2}}$ & $-\frac{1}{\sqrt{6}}$ & $-\frac{2}{3 \sqrt{5}}$ & $0$ & $0$ & $0$ & $-\frac{1}{3 \sqrt{2}}$ & $-\frac{1}{\sqrt{6}}$ \\
$A\left(\Lambda_c^+\to p K^- \pi^+ \right)$ & $\frac{1}{3}$ & $\frac{1}{3}$ & $\frac{1}{\sqrt{3}}$ & $\frac{1}{\sqrt{10}}$ & $-\frac{1}{3 \sqrt{2}}$ & $-\frac{1}{3 \sqrt{2}}$ & $-\frac{1}{\sqrt{6}}$ & $0$ & $0$ \\
$A\left(\Xi_c^+\to \Sigma^+ \pi^- K^+\right)$ & $\frac{1}{3}$ & $\frac{1}{3}$ & $\frac{1}{\sqrt{3}}$ & $-\frac{1}{\sqrt{10}}$ & $\frac{1}{3 \sqrt{2}}$ & $\frac{1}{3 \sqrt{2}}$ & $\frac{1}{\sqrt{6}}$ & $0$ & $0$ \\
$A\left(\Lambda_c^+\to p K^- K^+\right)$ & $\frac{\sqrt{2}}{3}$ & $-\frac{1}{3 \sqrt{2}}$ & $\frac{1}{\sqrt{6}}$ & $\frac{2}{3 \sqrt{5}}$ & $0$ & $0$ & $0$ & $\frac{1}{3 \sqrt{2}}$ & $-\frac{1}{\sqrt{6}}$ \\
$A\left(\Xi_c^+\to \Sigma^+ \pi^- \pi^+\right)$ & $\frac{\sqrt{2}}{3}$ & $-\frac{1}{3 \sqrt{2}}$ & $\frac{1}{\sqrt{6}}$ & $-\frac{2}{3 \sqrt{5}}$ & $0$ & $0$ & $0$ & $-\frac{1}{3 \sqrt{2}}$ & $\frac{1}{\sqrt{6}}$ \\
$A\left(\Lambda_c^+\to p \pi^- \pi^+\right)$ & $\frac{\sqrt{2}}{3}$ & $\frac{\sqrt{2}}{3}$ & $0$ & $\frac{2}{3 \sqrt{5}}$ & $0$ & $0$ & $0$ & $-\frac{\sqrt{2}}{3}$ & $0$ \\
$A\left(\Xi_c^+\to \Sigma^+ K^- K^+\right)$ & $\frac{\sqrt{2}}{3}$ & $\frac{\sqrt{2}}{3}$ & $0$ & $-\frac{2}{3 \sqrt{5}}$ & $0$ & $0$ & $0$ & $\frac{\sqrt{2}}{3}$ & $0$ \\
$A\left(\Lambda_c^+\to p \pi^- K^+\right)$ & $1$ & $0$ & $0$ & $\frac{1}{\sqrt{10}}$ & $\frac{1}{\sqrt{2}}$ & $0$ & $0$ & $0$ & $0$ \\
$A\left(\Xi_c^+\to \Sigma^+ K^- \pi^+\right)$ & $1$ & $0$ & $0$ & $-\frac{1}{\sqrt{10}}$ & $-\frac{1}{\sqrt{2}}$ & $0$ & $0$ & $0$ & $0$ \\
\hline\hline
\end{tabular}
\caption{RME decomposition of $C_b \to L_bP^-P^+$ amplitudes up to first order $U$-spin breaking ($b=0$ and $b=1$). The corresponding RMEs $X_\alpha$ are defined in Eq.~\eqref{eq:RMEb01}.\label{tab:CbtoLPP-b01}}
\end{table}
\endgroup
\begingroup
\squeezetable
\begin{table}
\centering
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c}
\hline\hline
\text{Decay amplitude} & ~~$X_{10}$~~ & ~~$X_{11}$~~ & ~~$X_{12}$~~ & ~~$X_{13}$~~ & ~~$X_{14}$~~ & ~~$X_{15}$~~ & ~~$X_{16}$~~ & ~~$X_{17}$~~ & ~~$X_{18}$~~&~~$X_{19}$~~&~~$X_{20}$~~\\
\hline\hline
$A\left(\Lambda_c^+\to \Sigma^+ K^- K^+\right)$ & $-\frac{1}{2 \sqrt{15}}$ & $-\frac{1}{2 \sqrt{15}}$ & $\frac{1}{\sqrt{15}}$ & $0$ & $-\frac{1}{2 \sqrt{5}}$ & $\frac{1}{6}$ & $-\frac{1}{3}$ & $0$ & $0$ & $0$ & $0$ \\
$A\left(\Xi_c^+\to p\pi^- \pi^+\right)$ & $-\frac{1}{2 \sqrt{15}}$ & $-\frac{1}{2 \sqrt{15}}$ & $\frac{1}{\sqrt{15}}$ & $0$ & $-\frac{1}{2 \sqrt{5}}$ & $\frac{1}{6}$ & $-\frac{1}{3}$ & $0$ & $0$ & $0$ & $0$ \\
$A\left(\Lambda_c^+\to \Sigma^+ \pi^- \pi^+\right)$ & $-\frac{1}{2 \sqrt{15}}$ & $-\frac{1}{2 \sqrt{15}}$ & $-\frac{1}{2 \sqrt{15}}$ & $\frac{1}{2 \sqrt{5}}$ & $-\frac{1}{2 \sqrt{5}}$ & $\frac{1}{6}$ & $\frac{1}{6}$ & $-\frac{1}{2 \sqrt{3}}$ & $0$ & $0$ & $0$ \\
$A\left(\Xi_c^+\to p K^- K^+\right)$ & $-\frac{1}{2 \sqrt{15}}$ & $-\frac{1}{2 \sqrt{15}}$ & $-\frac{1}{2 \sqrt{15}}$ & $\frac{1}{2 \sqrt{5}}$ & $-\frac{1}{2 \sqrt{5}}$ & $\frac{1}{6}$ & $\frac{1}{6}$ & $-\frac{1}{2 \sqrt{3}}$ & $0$ & $0$ & $0$ \\
$A\left(\Lambda_c^+\to \Sigma^+ \pi^- K^+\right)$ & $0$ & $-\frac{2}{3} \sqrt{\frac{2}{15}}$ & $\frac{1}{3}\sqrt{\frac{2}{15}}$ & $\frac{1}{3}\sqrt{\frac{2}{5}}$ & $0$ & $0$ & $0$ & $0$ & $-\frac{1}{3}\sqrt{\frac{2}{3}}$ & $\frac{1}{3 \sqrt{6}}$ & $\frac{1}{3 \sqrt{2}}$ \\
$A\left(\Xi_c^+\to p K^- \pi^+\right)$ & $0$ & $-\frac{2}{3} \sqrt{\frac{2}{15}}$ & $\frac{1}{3}\sqrt{\frac{2}{15}}$ & $\frac{1}{3}\sqrt{\frac{2}{5}}$ & $0$ & $0$ & $0$ & $0$ & $-\frac{1}{3}\sqrt{\frac{2}{3}}$ & $\frac{1}{3 \sqrt{6}}$ & $\frac{1}{3 \sqrt{2}}$ \\
$A\left(\Lambda_c^+\to p K^- \pi^+\right)$ & $-\frac{1}{2 \sqrt{15}}$ & $-\frac{1}{2 \sqrt{15}}$ & $-\frac{1}{2 \sqrt{15}}$ & $-\frac{1}{2 \sqrt{5}}$ & $-\frac{1}{2 \sqrt{5}}$ & $\frac{1}{6}$ & $\frac{1}{6}$ & $\frac{1}{2 \sqrt{3}}$ & $0$ & $0$ & $0$ \\
$A\left(\Xi_c^+\to \Sigma^+ \pi^- K^+\right)$ & $-\frac{1}{2 \sqrt{15}}$ & $-\frac{1}{2 \sqrt{15}}$ & $-\frac{1}{2 \sqrt{15}}$ & $-\frac{1}{2 \sqrt{5}}$ & $-\frac{1}{2 \sqrt{5}}$ & $\frac{1}{6}$ & $\frac{1}{6}$ & $\frac{1}{2 \sqrt{3}}$ & $0$ & $0$ & $0$ \\
$A\left(\Lambda_c^+\to p K^- K^+\right)$ & $0$ & $-\frac{2}{3} \sqrt{\frac{2}{15}}$ & $\frac{1}{3}\sqrt{\frac{2}{15}}$ & $-\frac{1}{3}\sqrt{\frac{2}{5}}$ & $0$ & $0$ & $0$ & $0$ & $-\frac{1}{3}\sqrt{\frac{2}{3}}$ & $\frac{1}{3 \sqrt{6}}$ & $-\frac{1}{3 \sqrt{2}}$ \\
$A\left(\Xi_c^+\to \Sigma^+ \pi^- \pi^+\right)$ & $0$ & $-\frac{2}{3} \sqrt{\frac{2}{15}}$ & $\frac{1}{3}\sqrt{\frac{2}{15}}$ & $-\frac{1}{3}\sqrt{\frac{2}{5}}$ & $0$ & $0$ & $0$ & $0$ & $-\frac{1}{3}\sqrt{\frac{2}{3}}$ & $\frac{1}{3 \sqrt{6}}$ & $-\frac{1}{3 \sqrt{2}}$\\
$A\left(\Lambda_c^+\to p \pi^- \pi^+\right)$ & $0$ & $-\frac{2}{3} \sqrt{\frac{2}{15}}$ & $-\frac{2}{3} \sqrt{\frac{2}{15}}$ & $0$ & $0$ & $0$ & $0$ & $0$ & $-\frac{1}{3}\sqrt{\frac{2}{3}}$ & $-\frac{1}{3}\sqrt{\frac{2}{3}}$ & $0$ \\
$A\left(\Xi_c^+\to \Sigma^+ K^- K^+\right)$ & $0$ & $-\frac{2}{3} \sqrt{\frac{2}{15}}$ & $-\frac{2}{3} \sqrt{\frac{2}{15}}$ & $0$ & $0$ & $0$ & $0$ & $0$ & $-\frac{1}{3}\sqrt{\frac{2}{3}}$ & $-\frac{1}{3}\sqrt{\frac{2}{3}}$ & $0$ \\
$A\left(\Lambda_c^+\to p \pi^- K^+\right)$ & $\frac{1}{2 \sqrt{15}}$ & $-\frac{1}{2}\sqrt{\frac{3}{5}}$ & $0$ & $0$ & $\frac{1}{2 \sqrt{5}}$ & $\frac{1}{2}$ & $0$ & $0$ & $0$ & $0$ & $0$ \\
$A\left(\Xi_c^+\to \Sigma^+ K^- \pi^+\right)$ & $\frac{1}{2 \sqrt{15}}$ & $-\frac{1}{2}\sqrt{\frac{3}{5}}$ & $0$ & $0$ & $\frac{1}{2 \sqrt{5}}$ & $\frac{1}{2}$ & $0$ & $0$ & $0$ & $0$ & $0$ \\
\hline\hline
\end{tabular}
\caption{Contributions to the RME decomposition $C_b \to L_b P^- P^+$ from second order $U$-spin breaking ($b=2$). The corresponding RMEs $X_\alpha$ are defined in Eq.~(\ref{eq:RMEb2}).\label{tab:CbtoLPP-b2}}
\end{table}
\endgroup
\section{The $n=6$, three triplets system}\label{app:3t}
In this appendix we derive in detail the sum rules for the system of three triplets. The sum rules for this system can be obtained from the sum rules for the system of two doublets and two triplets. Thus we introduce the following definitions:
\begin{itemize}
\item System I: a $U$-spin set described by two doublets and two triplets, $u_0 = u_1 = 1/2$ and $u_2 = u_3 = 1$. The amplitudes and the corresponding nodes of the lattice for this system are as follows:
\begin{align}
(1,2): \qquad A_7^\text{(I)} &= \left(-,-,-+,++\right), & A_{52}^\text{(I)} &= \left(+,+, -+,--\right), \nonumber\\
(1,3): \qquad A_{13}^\text{(I)} &= \left(-,-, ++,-+\right), & A_{49}^\text{(I)} &= \left(+,+, --,-+\right),\nonumber\\
(2,2): \qquad A_{19}^\text{(I)} &= \left(-,+,--,++\right), & A_{44}^\text{(I)}& = \left(+,-,++,-- \right), \nonumber\\
(2,3): \qquad A_{21}^\text{(I)} &= \left(-,+,-+,-+\right), & A_{37}^\text{(I)} &=\left(+,-,-+,-+\right),\nonumber\\
(3,3): \qquad A_{28}^\text{(I)}&= \left(-,+,++,--\right), & A_{35}^\text{(I)} & = \left(+,-,--,++\right).
\end{align}
\item System II: a $U$-spin set described by three triplets, $u_0 = u_1 = u_2 = 1$. The amplitudes for this system are:
\begin{align}
A_7^\text{(II)} & = \left(--,-+,++\right), & A_{52}^\text{(II)} & = \left(++, -+,--\right), \nonumber\\
A_{13}^\text{(II)} & = \left(--, ++,-+\right), & A_{49}^\text{(II)} & = \left(++, --,-+\right),\nonumber\\
A_{19}^\text{(II)} &= \left(-+,--,++ \right), & A_{28}^\text{(II)} & = \left(-+,++,--\right),\nonumber\\
A_{21}^\text{(II)} & = \left(-+,-+,-+\right). & &
\end{align}
\end{itemize}
The sum rules for System I can be read off Fig.~\ref{fig:n6-2d-2t}. We obtain the following trivial $a$-type sum rules valid at $b=0$:
\begin{equation}\label{eq:3t-a-b0}
a^\text{(I)}_{(1,2)} = a^\text{(I)}_{(1,3)} = a^\text{(I)}_{(2,2)} = a^\text{(I)}_{(2,3)} = a^\text{(I)}_{(3,3)} = 0.
\end{equation}
The $s$-type sum rules up to $b = 1$ are given by
\begin{equation}\label{eq:3t-s-b1}
s_{(1,2)}^\text{(I)} + s_{(1,3)}^\text{(I)} = 0, \qquad s_{(1,2)}^\text{(I)} + \sqrt{2}s_{(2,2)}^\text{(I)} + \sqrt{2} s_{(2,3)}^\text{(I)} = 0, \qquad s_{(1,3)}^\text{(I)} + \sqrt{2}s_{(2,3)}^\text{(I)} + \sqrt{2}s_{(3,3)}^\text{(I)} = 0.
\end{equation}
Finally, the $b=2$ $a$-type sum rule is
\begin{equation}\label{eq:3t-a-b2}
\sqrt{2}a_{(1,2)}^\text{(I)} + \sqrt{2}a_{(1,3)}^\text{(I)} + a_{(2,2)}^\text{(I)} + 2 a_{(2,3)}^\text{(I)} + a_{(3,3)}^\text{(I)} = 0.
\end{equation}
To obtain the sum rules for System~II we perform the substitutions as described in Section~\ref{sec:sym}. We have
\begin{align}
&(1,2): &
A_{7}^{\text{(I)}} &\rightarrow A_7^{\text{(II)}}, &
A_{52}^{\text{(I)}} &\rightarrow A_{52}^{\text{(II)}}, \nonumber\\
&(1,3): &
A_{13}^{\text{(I)}} &\rightarrow A_{13}^{\text{(II)}}, &
A_{49}^{\text{(I)}} &\rightarrow A_{49}^{\text{(II)}}, \nonumber\\
&(2, 2): &
A_{19}^{\text{(I)}} &\rightarrow \frac{1}{\sqrt{2}}A^\text{(II)}_{19}, &
A^\text{(I)}_{44} &\rightarrow \frac{1}{\sqrt{2}}A^{\text{(II)}}_{28}, \nonumber \\
&(2,3): &
A_{21}^\text{(I)} &\rightarrow \frac{1}{\sqrt{2}} A_{21}^{\text{(II)}}, &
A^\text{(I)}_{37} &\rightarrow \frac{1}{\sqrt{2}} A^{\text{(II)}}_{21}, \nonumber \\
&(3,3): &
A_{28}^{\text{(I)}} &\rightarrow \frac{1}{\sqrt{2}} A^{\text{(II)}}_{28}, &
A_{35}^\text{(I)} &\rightarrow \frac{1}{\sqrt{2}} A_{19}^{\text{(II)}}.
\end{align}
As System~II is an all-integer system, we have $p=n/2$, i.e.~for $n=6$ it follows that $p=3$.
As $p$ is odd, for the mapping onto the self-conjugate amplitude of System~II we have
\begin{align}
a_{(2,3)}^{\text{(I)}} &\rightarrow 2\times \frac{1}{\sqrt{2}} A_{21}^{\text{(II)}} = \sqrt{2}A_{21}^{\text{(II)}},\\
s_{(2,3)}^{\text{(I)}} &\rightarrow 0.
\end{align}
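The factor of $2$ above can be traced through the substitution table: both members of the $(2,3)$ pair, $A_{21}^{\text{(I)}}$ and $A_{37}^{\text{(I)}}$, are mapped onto the same amplitude $\frac{1}{\sqrt{2}}A_{21}^{\text{(II)}}$, so that (with the $a$- and $s$-type combinations understood as the sum and the difference of the pair, up to the sign conventions of Section~\ref{sec:sym})

```latex
a_{(2,3)}^{\text{(I)}}
\;\rightarrow\; \frac{1}{\sqrt{2}} A_{21}^{\text{(II)}} + \frac{1}{\sqrt{2}} A_{21}^{\text{(II)}}
\;=\; \sqrt{2}\, A_{21}^{\text{(II)}}\,,
\qquad
s_{(2,3)}^{\text{(I)}}
\;\rightarrow\; \frac{1}{\sqrt{2}} A_{21}^{\text{(II)}} - \frac{1}{\sqrt{2}} A_{21}^{\text{(II)}}
\;=\; 0\,.
```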
Furthermore, regardless of whether the representations belong to the initial state, the final state, or the Hamiltonian, we can rewrite the $a$-type sum rules of Eq.~\eqref{eq:3t-a-b0} in terms of the amplitudes of the system of three triplets as
\begin{equation}\label{eq:3t-b0}
A_7^\text{(II)} + A_{52}^\text{(II)} = A_{13}^{\text{(II)}} + A_{49}^\text{(II)} = A_{19}^\text{(II)} + A_{28}^\text{(II)} = A_{21}^\text{(II)} = 0\,.
\end{equation}
Note that the $a$-type sum rules $a_{(2,2)}^{\text{(I)}} = 0$ and $a_{(3,3)}^{\text{(I)}} = 0$ in Eq.~\eqref{eq:3t-a-b0} result in an identical sum rule for the system of three triplets. Thus the total number of $b=0$ $a$-type sum rules for the system of three triplets is one less than the corresponding number of $a$-type sum rules for the system of two doublets and two triplets given in Eq.~\eqref{eq:3t-a-b0}.
For the $s$-type sum rules in Eq.~\eqref{eq:3t-s-b1} we obtain
\begin{align}\label{eq:3t-b1}
(-1)^{q_7} \left( A_7^{\text{(II)}} - A_{52}^\text{(II)}\right) + (-1)^{q_{13}} \left(A_{13}^\text{(II)} - A_{49}^{\text{(II)}}\right) = 0, \nonumber\\
(-1)^{q_7} \left( A_7^{\text{(II)}} - A_{52}^\text{(II)}\right) + (-1)^{q_{19}} \left(A^{\text{(II)}}_{19} - A_{28}^\text{(II)}\right) = 0,
\end{align}
where we list only two sum rules since only two out of the three are linearly independent.
The $b = 2$ $a$-type sum rule in Eq.~\eqref{eq:3t-a-b2} takes the following form in terms of the amplitudes of the system of three triplets
\begin{equation}\label{eq:3t-b2}
(-1)^{q_7}\left(A_7^\text{(II)} + A_{52}^\text{(II)}\right) + (-1)^{q_{13}}\left( A_{13}^{\text{(II)}} + A_{49}^\text{(II)} \right) + (-1)^{q_{19}} \left(A_{19}^\text{(II)} + A_{28}^\text{(II)}\right) + (-1)^{q_{21}}2A_{21}^\text{(II)} = 0.
\end{equation}
The $(-1)^{q_i}$ factors in Eqs.~\eqref{eq:3t-b1} and~\eqref{eq:3t-b2} are to be found for each specific system using Eq.~\eqref{eq:qi-gen}.
\end{appendix}
\section*{Introduction}
In this note, we continue the study of special-Hermitian structures on compact complex manifolds, in view of a possible (yet far from being obtained) classification of compact complex non-K\"ahler manifolds.
In particular, we focus on the existence of degenerate special-Hermitian metrics. We investigate degenerate K\"ahler metrics and degenerate locally conformally K\"ahler metrics, by introducing the notion of special-Hermitian ranks, as a development of the K\"ahler rank studied by R. Harvey and H.~B. Lawson and by I. Chiose in the non-K\"ahler setting.
We then investigate classes of non-K\"ahler examples.
\medskip
R. Harvey and H. B. Lawson provided an intrinsic characterization of the K\"ahler condition. In \cite[Theorem (14)]{harvey-lawson}, they proved that, on a compact complex manifold $X$, one and only one of the following facts holds:
\begin{inparaenum}[\itshape (i)]
\item there is a closed positive $(1,1)$-form (namely, a K\"ahler metric);
\item there is a (non-trivial) positive bidimension-$(1, 1)$-current which is the component of a boundary.
\end{inparaenum}
This led them to introduce a notion of K\"ahler rank for compact complex surfaces, in terms of the foliated set
$$ \mathcal B(X) \;:=\; \left\{ x \in X \;:\; \exists \varphi \in P^\infty_{\text{bdy}}(X) \text{ such that }\varphi_x\neq 0 \right\} \;, $$
where $P^\infty_{\text{bdy}}(X)$ denotes the subcone of smooth currents in the cone of positive bidimension-$(1,1)$-currents on $X$ being a boundary.
By \cite[Corollary 4.3]{chiose-toma}, the K\"ahler rank of a compact complex surface $X$ is equal to the maximal rank that a non-negative closed $(1, 1)$-form may attain at some point of $X$. (See also \cite[Definition 1.2]{fino-grantcharov-verbitsky}.) This allows I. Chiose to extend the notion of K\"ahler rank to higher-dimensional compact complex manifolds, \cite[Definition 1.1]{chiose}, by setting
\begin{equation}\label{eq:def-Kr}
\mathrm{Kr}(X) \;:=\; \max \left\{ k\in\mathbb N \;:\; \exists \omega\in \wedge^{1,1}X \text{ s.t. } \omega\geq0,\; d\omega=0, \text{ and }\omega^k\neq 0 \right\} \;.
\end{equation}
\medskip
\begin{comment}
{\color{blue}{
By studying compact complex surfaces of K\"ahler rank one, I. Chiose and M. Toma proved that the K\"ahler rank of compact complex surfaces is a bimeromorphic invariant, \cite[Corollary 4.1]{chiose-toma}, as conjectured in \cite[page 187]{harvey-lawson}.
We study here how the K\"ahler rank changes under blow-up. See also the methods and the results in \cite[Theorem 0.1]{chiose} as regards blow-up of compact complex non-K\"ahler surfaces at a point.
\renewcommand{}{\ref{prop:blow-up}}
\begin{prop*}
Let $X$ be a compact complex manifold, and consider $\tilde X$ the blow-up of $X$ along a compact complex submanifold $Y$.
Assume that $X$ admits a closed non-negative $2$-form being positive on some splitting for the normal bundle $N_{Y|X}$.
Then
$$ \mathrm{Kr}(\tilde X) \;\geq\; \mathrm{Kr}(X) \geq \codim Y \;. $$
\end{prop*}
}}
\medskip
\end{comment}
By relaxing the K\"ahler condition, several notions of special-Hermitian metrics can be defined: {\itshape e.g.} Hermitian-symplectic, balanced in the sense of Michelsohn \cite{michelsohn}, pluri-closed \cite{bismut}, astheno-K\"ahler \cite{jost-yau}, Gauduchon \cite{gauduchon}, strongly-Gauduchon \cite{popovici}, and others.
The notion in Equation \eqref{eq:def-Kr} can be restated for some of these metrics: we will consider {\itshape e.g.} the SKT case \eqref{eq:SKT-rank}.
In particular, we introduce and study the {\em Hermitian locally conformally K\"ahler rank}.
It is defined in Equation \eqref{eq:HlcK-rank}. Essentially, we replace the condition $d\omega=0$ in Equation \eqref{eq:def-Kr} by $d\omega-\vartheta\wedge\omega=0$ for some $d$-closed $1$-form $\vartheta$. By the Poincar\'e Lemma, $\vartheta$ is locally $d$-exact: $\vartheta\stackrel{\text{loc}}{=}dg$. Then $\exp(-g)\omega$ is a local conformal change of $\omega$ which is K\"ahler.
Both the K\"ahler and the lcK conditions are cohomological in nature. Moreover, cohomologies of nilmanifolds (namely, compact quotients of connected simply-connected nilpotent Lie groups,) can be often reduced as Lie algebra invariants. It follows that the K\"ahler rank and the lcK rank of nilmanifolds is often encoded in the Lie algebra, see Lemma \ref{lem:inv-ranks}.
This allows us to study explicitly the K\"ahler rank and the lcK rank of $6$-dimensional nilmanifolds. (With the possible exception of nilmanifolds associated to the Lie algebra $\mathfrak h_7=(0,0,0,12,13,23)$ in the notation of Salamon \cite{salamon}, see \cite{rollenske-survey}.) This is done in Section \ref{sec:nilmanifolds-ranks}. Compare also \cite[Section 4.2]{fino-grantcharov-verbitsky}, where the same results have been obtained independently.
As a further example, we consider a non-K\"ahler manifold obtained as a torus-suspension in \cite{magnusson}.
The investigation of these manifolds was suggested by Valentino Tosatti and may deserve further study in non-K\"ahler geometry.
\bigskip
\noindent{\sl Acknowledgments.}
The authors would like to thank Valentino Tosatti for useful discussions.
Many thanks also to the anonymous Referee for valuable comments.
\section{Hermitian ranks}
In this section, we recall the definitions of K\"ahler rank for compact complex surfaces by R. Harvey and H.~B. Lawson, and for compact complex manifolds by I. Chiose. On the same lines, we also introduce the notions of lcK rank and pluri-closed rank.
\subsection{K\"ahler rank}
Let $X$ be a compact complex surface.
Denote by $P_{\text{bdy}}(X)$ the cone of positive bidimension-$(1,1)$-currents on $X$ being a boundary, and denote by $P^\infty_{\text{bdy}}(X)$ the subcone of smooth currents.
On a compact complex surface $X$, the cone $P^\infty_{\text{bdy}}(X)$ coincides with the cone $P^\infty_{\text{bdy}_{1,1}}(X)$ of positive bidimension-$(1,1)$-currents on $X$ being component of a boundary, and any form $\varphi\in P^\infty_{\text{bdy}}(X)$ is simple ({\itshape i.e.} of rank less than or equal to one) at every point of $X$, \cite[Proposition (37)]{harvey-lawson}.
Set:
$$ \mathcal B(X) \;:=\; \left\{ x \in X \;:\; \exists \varphi \in P^\infty_{\text{bdy}}(X) \text{ such that }\varphi_x\neq 0 \right\} \;. $$
The open subset $\mathcal B(X)\subseteq X$ carries an intrinsically defined complex analytic foliation $\mathcal F$, which is characterized by the property that $\varphi\lfloor_{\mathcal F}=0$ for any $\varphi\in P^\infty_{\text{bdy}}(X)$, \cite[Theorem 40]{harvey-lawson}.
The {\em K\"ahler rank} of the compact complex surface $X$ \cite[Definition 41]{harvey-lawson} is defined to be:
\begin{inparaenum}[\itshape (a)]
\item two, when $X$ admits K\"ahler metrics; (that is, the open subset $\mathcal B(X)$ in $X$ is empty;)
\item one, when the complement of the open subset $\mathcal B(X)$ in $X$ is contained in a complex curve and non-empty;
\item zero, otherwise.
\end{inparaenum}
The K\"ahler rank of compact complex surfaces is a bimeromorphic invariant, \cite[Corollary 4.1]{chiose-toma}.
Surfaces with even first Betti number have K\"ahler rank two, \cite{lamari, buchdahl}.
Elliptic non-K\"ahler surfaces have K\"ahler rank one, \cite[page 187]{harvey-lawson}.
Non-elliptic non-K\"ahler surfaces are in class VII: under the GSS conjecture, their minimal model is one of the following:
\begin{inparaenum}[\itshape (i)]
\item Inoue surfaces: K\"ahler rank is one, \cite[\S10]{harvey-lawson};
\item Hopf surface: K\"ahler rank is one or zero according to the type, \cite[\S9]{harvey-lawson};
\item Kato surfaces: K\"ahler rank is zero, \cite{chiose-toma}.
\end{inparaenum}
\medskip
Now, let $X$ be a compact complex manifold of complex dimension $n\geq2$. Notice that, when $n=2$, the K\"ahler rank, as in \cite[Definition 41]{harvey-lawson}, is equal to the maximal rank that a non-negative closed $(1,1)$-form may attain at some point of $X$, thanks to \cite[Corollary 4.3]{chiose-toma}. Then, the following definition by I. Chiose is consistent. Compare also \cite[Definition 1.2]{fino-grantcharov-verbitsky}.
\begin{defi}[{\cite[Definition 1.1]{chiose}}]
Let $X$ be a compact complex manifold of complex dimension $n$. The {\em K\"ahler rank} of $X$ is defined to be
$$ \mathrm{Kr}(X) \;:=\; \max \left\{ k\in\mathbb N \;:\; \exists \omega\in \wedge^{1,1}X \text{ s.t. } \omega\geq0,\; d\omega=0, \text{ and }\omega^k\neq 0 \right\} \;\in\; \left\{0,\ldots,n\right\} \;. $$
\end{defi}
\begin{rmk}
Note that, by \cite[Theorem 0.2]{chiose}, for compact complex manifolds of complex dimension $3$ with maximal K\"ahler rank, the ``tamed-to-compatible'' conjecture, \cite[page 678]{li-zhang}, \cite[Question 1.7]{streets-tian}, holds.
\end{rmk}
\subsection{Locally conformally K\"ahler rank}
Now, we consider {\em locally conformally K\"ahler} structures \cite{dragomir-ornea}. Such a structure is given by $(\vartheta, \omega)$, where $\vartheta$ is a closed $1$-form and $\omega$ is a Hermitian metric satisfying $d_\vartheta\omega=0$, where
$$ d_\vartheta \;:=\; d-\vartheta\wedge \;. $$
Note that, locally, $\vartheta\stackrel{\text{loc}}{=}df$ for some smooth function $f$. Therefore $\exp(-f)\omega$ is a local K\"ahler metric, obtained from $\omega$ by a local conformal transformation.
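Indeed, the local K\"ahler property follows from a one-line computation:

```latex
d\left(e^{-f}\omega\right)
= e^{-f}\left(d\omega - df\wedge\omega\right)
= e^{-f}\left(d\omega - \vartheta\wedge\omega\right)
= e^{-f}\, d_\vartheta\omega
= 0 \,,
```

showing that the lcK equation $d_\vartheta\omega=0$ is precisely the condition making the local conformal rescalings of $\omega$ K\"ahler.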
We now admit degenerate metrics, and we introduce the following rank.
(We add the adjective ``Hermitian'' in order to avoid confusion with the notion of lcK rank introduced in \cite{gini-ornea-parton-piccinni}, which regards the Lee form $\vartheta$.)
\begin{defi}
Let $X$ be a compact complex manifold of complex dimension $n$. The {\em Hermitian locally conformally K\"ahler rank} of $X$ is defined to be
\begin{eqnarray}\label{eq:HlcK-rank}
\mathrm{HlcKr}(X) &:=& \max \left\{ k\in\mathbb N \;:\; \exists \vartheta \in \wedge^1X \text{ s.t. } d\vartheta=0,\right.\\[5pt]
&&\left.\exists\omega\in \wedge^{1,1}X \text{ s.t. } \omega\geq0,\; d_{\vartheta}\omega=0, \text{ and }\omega^k\neq 0 \right\} \nonumber\\[5pt]
&\in& \left\{0,\ldots,n\right\} \;.\nonumber
\end{eqnarray}
\end{defi}
Clearly, since $d_{0}=d$, it holds $\mathrm{Kr}(X)\leq\mathrm{HlcKr}(X)\leq \dim X$. Moreover, if $X$ admits a locally conformally K\"ahler metric, then clearly $\mathrm{HlcKr}(X)=\dim X$.
\subsection{Pluri-closed rank}
Finally, we consider {\em pluri-closed metrics} \cite{bismut}, namely, Hermitian metrics $\omega$ such that $\partial\overline\partial\omega=0$, also called {\em SKT metrics}. We define the following.
\begin{defi}\label{def:skt-rank}
Let $X$ be a compact complex manifold of complex dimension $n$. The {\em pluri-closed rank} of $X$ is defined to be
\begin{eqnarray}\label{eq:SKT-rank}
\mathrm{SKTr}(X) &:=& \max \left\{ k\in\mathbb N \;:\; \exists \omega\in \wedge^{1,1}X \text{ s.t. } \omega\geq0,\; \partial\overline\partial\omega=0, \text{ and }\omega^k\neq 0 \right\} \\[5pt]
&\in& \left\{0,\ldots,n\right\} \;.\nonumber
\end{eqnarray}
\end{defi}
Note that $\mathrm{Kr}(X)\leq\mathrm{SKTr}(X)$. Moreover, by \cite[Théorème 1]{gauduchon}, any compact Hermitian manifold admits a unique {\em Gauduchon metric} in each conformal class up to scaling, that is, a metric $\omega$ satisfying $\partial\overline\partial\omega^{n-1}=0$, where $n=\dim X$. In particular, it follows that, on compact complex surfaces, the pluri-closed rank is always maximal, equal to $2$.
\begin{comment}
{\color{blue}{
\section{K\"ahler rank and blow-up}
In \cite[Section 6]{blanchard}, see also \cite[Proposition 4.3.1]{lascoux-berger}, it is proven that the blow-up of a compact complex manifold admitting K\"ahler metrics still admits K\"ahler metrics. The same result in the context of pluri-closed metrics is proven in \cite[Theorem 3.2]{fino-tomassini-SKT}.
As for the property of admitting lcK metrics, F. Tricerri proved that it is invariant under blow-up in a point, \cite[Proposition 4.2, Proposition 4.3]{tricerri} and \cite[Theorem 1]{vuletescu}; see also \cite{ornea-verbitsky-vuletescu-IMRN}.
By studying compact complex surfaces of K\"ahler rank one, M. Toma and I. Chiose prove in \cite[Corollary 4.1]{chiose-toma} that the K\"ahler rank for compact complex surfaces is a bimeromorphic invariant. This confirms a conjecture by Harvey and Lawson \cite[page 187]{harvey-lawson}.
This is not true in higher dimension, see \cite[Example 1.7]{chiose}.
Here, we adapt the argument in \cite{fino-tomassini-SKT} to show the following behaviour of the K\"ahler rank under blow-up.
Notice that, by the Weak Factorization Theorem for bimeromorphic maps between compact complex manifolds \cite[Theorem 0.3.1]{AKMW}, \cite{wlodarczyk}, any bimeromorphic map between compact complex manifolds of the same dimension can be functorially factored as a sequence of blow-ups and blow-downs with non-singular centres.
(Note that, by the technical assumption in the statement, it follows that $\mathrm{Kr}(X)\geq \codim Y$. In particular, when $Y$ is a point, we need to assume that $\mathrm{Kr}(X)=\dim X$ is maximum.)
\begin{prop}\label{prop:blow-up}
Let $X$ be a compact complex manifold, and consider $\tilde X$ the blow-up of $X$ along a compact complex submanifold $Y$.
Assume that $X$ admits a closed non-negative $2$-form being positive on some splitting for the normal bundle $N_{Y|X}$.
Then
$$ \mathrm{Kr}(\tilde X) \;\geq\; \mathrm{Kr}(X) \geq \codim Y \;. $$
\end{prop}
\begin{rmk}
Concerning Proposition \ref{prop:blow-up}, note that, if $\partial\overline\partial\omega=0$, then $\partial\overline\partial\omega_\kappa=0$.
Therefore the same statement holds true when the K\"ahler rank is replaced by the pluri-closed rank.
On the other hand, the same argument does not work for the twisted differential and the Hermitian lcK rank: for example, since $\omega_{h_L}$ is closed, in general $d_\vartheta\omega_{h_L}\neq0$. We refer to \cite{tricerri, vuletescu, ornea-verbitsky-vuletescu-IMRN} for the behaviour of the lcK condition under blow-up.
\end{rmk}
}}
\end{comment}
\section{Special-Hermitian ranks of homogeneous manifolds of solvable Lie groups}
In this section, we investigate the K\"ahler and the Hermitian lcK ranks of homogeneous manifolds of solvable Lie groups.
\medskip
Let $X = \left.\Gamma \middle\backslash G \right.$ be a solvmanifold, namely, a compact quotient of a connected simply-connected solvable Lie group $G$ by a co-compact discrete subgroup $\Gamma$. Assume that $X$ is endowed with an invariant complex structure, (that is, the complex structure is induced by a complex structure on $G$ being invariant with respect to the action of $G$ on itself given by left-translations.)
Then $\wedge^{\bullet,\bullet}\mathfrak{g}^\ast \hookrightarrow \wedge^{\bullet,\bullet}X$ is a sub-complex.
In the definition of special-Hermitian ranks, we can restrict to invariant metrics: set the {\em invariant K\"ahler rank} and the {\em invariant Hermitian lcK rank} to be
\begin{eqnarray*}
\mathrm{Kr}(\mathfrak g) &:=& \max \left\{ k\in\mathbb N \;:\; \exists \omega\in \wedge^{1,1}\mathfrak{g}^\ast \text{ s.t. } \omega\geq0,\; d\omega=0, \text{ and }\omega^k\neq 0 \right\} \;, \\[5pt]
\mathrm{HlcKr}(\mathfrak g) &:=& \max \left\{ k\in\mathbb N \;:\; \exists \vartheta \in \wedge^1\mathfrak{g}^\ast \text{ s.t. } d\vartheta=0,\right.\\[5pt]
&&\left.\exists\omega\in \wedge^{1,1}\mathfrak{g}^\ast \text{ s.t. } \omega\geq0,\; d_{\vartheta}\omega=0, \text{ and }\omega^k\neq 0 \right\} \;.
\end{eqnarray*}
They have the advantage of being easier to compute. In general, it holds
$$ \mathrm{Kr}(\mathfrak g) \;\leq\; \mathrm{Kr}(X) \qquad \text{ and } \qquad \mathrm{HlcKr}(\mathfrak g) \;\leq\; \mathrm{HlcKr}(X) \;. $$
In fact, we prove that equalities hold, under the assumption that the map $H^{\bullet,\bullet}_{\overline\partial}(\mathfrak g)\to H^{\bullet,\bullet}_{\overline\partial}(X)$ induced by the inclusion $\wedge^{\bullet,\bullet}\mathfrak{g}^\ast\to\wedge^{\bullet,\bullet}X$ is an isomorphism. The assumption holds true, {\itshape e.g.} when $G$ is nilpotent and the complex structure is either holomorphically-parallelizable, or Abelian, or nilpotent, or rational, \cite{console-survey, rollenske-survey}. It holds true also when $X$ is a compact complex surface being diffeomorphic to a solvmanifold, \cite{angella-dloussky-tomassini}.
\begin{lem}\label{lem:inv-ranks}
Let $X=\Gamma\backslash G$ be a solvmanifold. Assume that the map $H^{\bullet,\bullet}_{\overline\partial}(\mathfrak g)\to H^{\bullet,\bullet}_{\overline\partial}(X)$ induced by the inclusion $\wedge^{\bullet,\bullet}\mathfrak{g}^\ast\to\wedge^{\bullet,\bullet}X$ is an isomorphism.
Then the Hermitian locally conformally K\"ahler rank $\mathrm{HlcKr}(X)$ and the invariant Hermitian locally conformally K\"ahler rank $\mathrm{HlcKr}(\mathfrak g)$ are equal. In particular, the K\"ahler rank $\mathrm{Kr}(X)$ and the invariant K\"ahler rank $\mathrm{Kr}(\mathfrak{g})$ are equal.
\end{lem}
\begin{proof}
Clearly, $\mathrm{HlcKr}(\mathfrak g) \leq \mathrm{HlcKr}(X)$.
Let $\vartheta$ be a $d$-closed $1$-form, and $\omega\geq0$ be a $(1,1)$-form satisfying $d_\vartheta\omega=0$ such that $\omega^k\neq0$.
We show that there exist an invariant $d$-closed $1$-form $\hat\vartheta$ and an invariant $(1,1)$-form $\hat\omega\geq 0$ satisfying $d_{\hat\vartheta}\hat\omega=0$ such that $\hat\omega^k\neq0$.
By the assumption and by the Fr\"olicher spectral sequence, the average map
$$ \mu\colon \wedge^{\bullet,\bullet}X \to \wedge^{\bullet,\bullet}\mathfrak{g}^\ast \;,\qquad \mu(\alpha)\;:=\;\int_X \alpha\lfloor_m\,\eta(m) $$
(here, $\eta$ is a bi-invariant volume form, thanks to Milnor,) induces the identity in de Rham cohomology.
In particular, $\hat\vartheta:=\mu(\vartheta)$ is an invariant $d$-closed $1$-form, and there exists a smooth function $f$ such that $\hat\vartheta=\vartheta+df$.
Note that $d_{\vartheta+df} = \exp(f) \cdot d_\vartheta (\exp(-f)\cdot \text{\--})$.
Therefore $\tilde\omega:=\exp(f)\cdot \omega\geq 0$ is a $(1,1)$-form satisfying $d_{\hat\vartheta}\tilde\omega=0$ such that $\tilde\omega^k=\exp(kf)\omega^k\neq0$. In particular, $[\tilde\omega]\in H^{2}_{d_{\hat\vartheta}}(X)$.
By \cite{hattori}, the average map $\mu$ induces the identity also in the cohomology of the twisted differential $d_{\hat\vartheta}$.
So we get that $\hat\omega:=\mu(\tilde\omega)\geq0$ is an invariant $(1,1)$-form satisfying $d_{\hat\vartheta}\hat\omega=0$, and there exists $\alpha$ a $1$-form such that $\hat\omega=\tilde\omega+d_{\hat\vartheta}\alpha$.
Moreover, we have
$$ \hat\omega^k \;=\; \tilde\omega^k + d_{k\hat\vartheta} \varphi \qquad \text{ where } \varphi \;:=\; \sum_{\substack{s+t=k\\t\geq1}} {k \choose s} \cdot \tilde\omega^s\wedge \alpha\wedge (d_{(t-1)\hat\vartheta}\alpha)^{t-1} \;. $$
That is, $[\hat\omega^k]=[\tilde\omega^k]$ in $H^{2k}_{d_{k\hat\vartheta}}(X)$. Therefore, since $\hat\omega^k$ is invariant, it follows that $\mu(\tilde\omega^k)=\hat\omega^k$.
Since $\tilde\omega^k\geq0$ and $\tilde\omega^k\neq0$, then $\hat\omega^k\neq0$.
As for the case of K\"ahler rank, it suffices to note that $[\vartheta]=0$ if and only if $[\hat\vartheta]=0$.
\end{proof}
As a direct consequence, we have the following.
\begin{cor}
Let $X=\Gamma\backslash G$ be a solvmanifold. Assume that the map $H^{\bullet,\bullet}_{\overline\partial}(\mathfrak g)\to H^{\bullet,\bullet}_{\overline\partial}(X)$ induced by the inclusion $\wedge^{\bullet,\bullet}\mathfrak{g}^\ast\to\wedge^{\bullet,\bullet}X$ is an isomorphism.
Then the K\"ahler rank $\mathrm{Kr}(X)$, (respectively, the Hermitian locally conformally K\"ahler rank $\mathrm{HlcKr}(X)$,) is maximum if and only if there exists a Hermitian metric being K\"ahler, (respectively, locally conformally K\"ahler.)
\end{cor}
\section{Special-Hermitian ranks of \texorpdfstring{$6$}{6}-dimensional nilmanifolds}\label{sec:nilmanifolds-ranks}
By using Lemma \ref{lem:inv-ranks}, we can compute the K\"ahler and Hermitian lcK ranks of $6$-dimensional nilmanifolds with invariant complex structures, except possibly for the nilmanifolds associated to the Lie algebra $\mathfrak h_7=(0,0,0,12,13,23)$ in the notation of Salamon \cite{salamon}.
In fact, the assumption of the map $H^{\bullet,\bullet}_{\overline\partial}(\mathfrak g)\to H^{\bullet,\bullet}_{\overline\partial}(X)$ induced by the inclusion being an isomorphism is satisfied, see \cite{console-survey, rollenske-survey}.
\medskip
It is well-known \cite{ugarte, ceballos-otal-ugarte-villacampa} that, up to (linear-)equivalence, the invariant complex structures on $6$-dimensional nilmanifolds are parametrized into the following families: there exists a global co-frame $\{\varphi^1,\varphi^2,\varphi^3\}$ of invariant $(1,0)$-forms such that the structure equations are
\begin{description}
\item[(P)] $d\varphi^1=0$, $d\varphi^2=0$, $d\varphi^3=\rho\varphi^1\wedge\varphi^2$,\\
where $\rho\in\{0,1\}$;
\item[(I)] $d\varphi^1=0$, $d\varphi^2=0$, $d\varphi^3=\rho\varphi^1\wedge\varphi^2+\varphi^1\wedge\bar\varphi^1+\lambda\varphi^1\wedge\bar\varphi^2+D\varphi^2\wedge\bar\varphi^2$,\\
where $\rho\in\{0,1\}$, $\lambda\in\mathbb R^{\geq0}$, $D\in\mathbb C$ with $\Im D\geq 0$;
\item[(II)] $d\varphi^1=0$, $d\varphi^2=\varphi^1\wedge\bar\varphi^1$, $d\varphi^3=\rho\varphi^1\wedge\varphi^2+B\varphi^1\wedge\bar\varphi^2+c\varphi^2\wedge\bar\varphi^1$,\\
where $\rho\in\{0,1\}$, $B\in\mathbb C$, $c\in\mathbb R^{\geq0}$, with $(\rho,B,c)\neq(0,0,0)$;
\item[(III)] $d\varphi^1=0$, $d\varphi^2=\varphi^1\wedge\varphi^3+\varphi^1\wedge\bar\varphi^3$, $d\varphi^3=\varepsilon\varphi^1\wedge\bar\varphi^1\pm i\left(\varphi^1\wedge\bar\varphi^2-\varphi^2\wedge\bar\varphi^1\right)$,\\
where $\varepsilon\in\{0,1\}$.
\end{description}
\subsection{K\"ahler rank of \texorpdfstring{$6$}{6}-dimensional nilmanifolds}
As for the K\"ahler rank, we have the following.
\begin{prop}\label{prop:kahler-rank-6-nilmfd}
On $6$-dimensional nilmanifolds endowed with invariant complex structures, (except possibly for the nilmanifolds associated to the Lie algebra $\mathfrak h_7$,) the K\"ahler rank takes the following values:
\begin{eqnarray*}
\text{\normalfont\bfseries (P):}&\mathrm{Kr}(X)\;=\;& 3 \quad \text{ if } \rho=0 \;, \\[5pt]
&\mathrm{Kr}(X)\;=\;& 2 \quad \text{ if } \rho=1 \;; \\[5pt]
\text{\normalfont\bfseries (I):}&\mathrm{Kr}(X)\;=\;& 2 \;; \\[5pt]
\text{\normalfont\bfseries (II):}&\mathrm{Kr}(X)\;=\;& 1 \quad \text{ if } (\rho,B,c)\neq(1,1,0)\;, \\[5pt]
&\mathrm{Kr}(X)\;\geq\;& 1 \quad \text{ if } (\rho,B,c)=(1,1,0)\;; \\[5pt]
\text{\normalfont\bfseries (III):}&\mathrm{Kr}(X)\;=\;& 1 \;;
\end{eqnarray*}
\end{prop}
\begin{proof}
Thanks to Lemma \ref{lem:inv-ranks}, we are reduced to compute the invariant ranks.
The arbitrary invariant $(1,1)$-form $\omega$ such that $\omega\geq0$ is
\begin{eqnarray}\label{eq:generic-metric}
\omega &=& ir^2\varphi^{1}\wedge\bar\varphi^1 + is^2\varphi^{2}\wedge\bar\varphi^2 + it^2\varphi^{3}\wedge\bar\varphi^3 \\[5pt]
\nonumber
&&+\left(u\varphi^1\wedge\bar\varphi^2-\bar u\varphi^2\wedge\bar\varphi^1\right)
+\left(v\varphi^2\wedge\bar\varphi^3-\bar v\varphi^3\wedge\bar\varphi^2\right)
+\left(z\varphi^1\wedge\bar\varphi^3-\bar z\varphi^3\wedge\bar\varphi^1\right) \;,
\end{eqnarray}
where $r,s,t\in\mathbb R$, $u,v,z\in\mathbb C$ satisfy
\begin{eqnarray}
\label{eq:condition-generic-metric}
&&r^2 \;\geq\; 0\;,\qquad
s^2 \;\geq\; 0\;,\qquad
t^2 \;\geq\; 0\;,\\[5pt]
\nonumber
&&r^2s^2 \;\geq\; |u|^2\;,\qquad
s^2t^2 \;\geq\; |v|^2\;,\qquad
r^2t^2 \;\geq\; |z|^2\;,\\[5pt]
\nonumber
&&r^2s^2t^2+2\Re(i\bar u \bar v z) \;\geq\; t^2|u|^2+r^2|v|^2+s^2|z|^2 \;.
\end{eqnarray}
We have
\begin{eqnarray*}
\frac12\omega^2 &=&
(r^2s^2-|u|^2) \, \varphi^{12\bar1\bar2}
+(-ir^2v-\bar uz) \, \varphi^{12\bar1\bar3}
+(is^2z-uv) \, \varphi^{12\bar2\bar3} \\[5pt]
&&+(ir^2\bar v-u\bar z) \, \varphi^{13\bar1\bar2}
+(r^2t^2-|z|^2) \, \varphi^{13\bar1\bar3}
+(-it^2u-\bar v z) \, \varphi^{13\bar2\bar3} \\[5pt]
&&+(-is^2\bar z-\bar u \bar v) \, \varphi^{23\bar1\bar2}
+(it^2\bar u-v\bar z) \, \varphi^{23\bar1\bar3}
+(s^2t^2-|v|^2) \, \varphi^{23\bar2\bar3}
\end{eqnarray*}
(for simplicity of notation, we shorten, {\itshape e.g.} $\varphi^{12\bar2\bar3}:=\varphi^{1}\wedge\varphi^{2}\wedge\bar\varphi^{2}\wedge\bar\varphi^{3}$,) and
\begin{eqnarray*}
\frac16\omega^3 &=&
(ir^2s^2t^2-ir^2|v|^2-is^2|z|^2-it^2|u|^2+uv\bar z-\bar u \bar v z) \, \varphi^{123\bar1\bar2\bar3} \;.
\end{eqnarray*}
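As a hedged cross-check of these computations (the matrix $H$ and all helper names below are ours, not from the text), note that $\omega\geq0$ exactly when the Hermitian coefficient matrix $H$ with diagonal $r^2,s^2,t^2$ and entries $H_{12}=-iu$, $H_{13}=-iz$, $H_{23}=-iv$ is positive semidefinite; the inequalities in \eqref{eq:condition-generic-metric} are its principal minors, and the coefficient of $\frac16\omega^3$ is $i\det H$. This can be verified numerically:

```python
# omega >= 0 iff its Hermitian coefficient matrix H (our notation) is
# positive semidefinite; comparing with the expression of omega above gives
# H = [[r^2, -i u, -i z], [i conj(u), s^2, -i v], [i conj(z), i conj(v), t^2]].
def coeff_matrix(r2, s2, t2, u, v, z):
    return [[r2, -1j * u, -1j * z],
            [1j * u.conjugate(), s2, -1j * v],
            [1j * z.conjugate(), 1j * v.conjugate(), t2]]

def det3(m):
    # determinant of a 3x3 matrix by cofactor expansion along the first row
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# Sample parameters: det H equals the difference of the two sides of the
# last positivity condition, and the coefficient of (1/6) omega^3 is i det H.
r2, s2, t2, u, v, z = 2.0, 3.0, 4.0, 1 + 2j, 0.5 - 1j, -0.3 + 0.7j
det = det3(coeff_matrix(r2, s2, t2, u, v, z))
condition = (r2 * s2 * t2 + 2 * (1j * u.conjugate() * v.conjugate() * z).real
             - t2 * abs(u) ** 2 - r2 * abs(v) ** 2 - s2 * abs(z) ** 2)
omega3 = (1j * (r2 * s2 * t2 - r2 * abs(v) ** 2 - s2 * abs(z) ** 2
                - t2 * abs(u) ** 2)
          + u * v * z.conjugate() - (u * v * z.conjugate()).conjugate())
assert abs(det - condition) < 1e-9
assert abs(1j * det - omega3) < 1e-9
```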
We compute:
\begin{eqnarray*}
\text{\bfseries(P):}&
\quad\partial\omega \;=\;&
-\bar z\rho \, \varphi^{12\bar1}
-\bar v\rho \, \varphi^{12\bar2}
+i t^2 \rho \, \varphi^{12\bar3} \;; \\[5pt]
\text{\bfseries(I):}&
\quad\partial\omega \;=\;&
(-v+\lambda z-\rho \bar z) \, \varphi^{12\bar1}
+ (-\bar v \rho + z\bar D) \, \varphi^{12\bar2}
+ (it^2\rho) \, \varphi^{12\bar3} \\[5pt]
&& + (-it^2) \, \varphi^{13\bar1}
+ (-it^2\lambda) \, \varphi^{23\bar1}
+ (-it^2\bar D) \, \varphi^{23\bar2}
\;; \\[5pt]
\text{\bfseries(II):}&
\quad\partial\omega \;=\;&
\left(-is^2+z\bar B-\bar z\rho\right) \, \varphi^{12\bar1}
+ \left(-cv-\bar v\rho\right) \, \varphi^{12\bar2}
+ \left(it^2\rho\right) \, \varphi^{12\bar3} \\[5pt]
&& + \left(\bar v\right) \, \varphi^{13\bar1}
+ \left(-it^2 c\right) \, \varphi^{13\bar2}
+ \left(-it^2\bar B\right) \, \varphi^{23\bar1}
\;; \\[5pt]
\text{\bfseries(III):}&
\quad\partial\omega \;=\;&
(\mp iz-\varepsilon v)\,\varphi^{12\bar1}
+ (\mp iv)\,\varphi^{12\bar2}
+ (u-\bar u-it^2\varepsilon)\,\varphi^{13\bar1} \\[5pt]
&& + (is^2\pm t^2)\,\varphi^{13\bar2}
+ (v)\,\varphi^{13\bar3}
+ (is^2\mp t^2)\,\varphi^{23\bar1}
\;.
\end{eqnarray*}
The statement follows.
\end{proof}
\begin{rmk}
The results in Proposition \ref{prop:kahler-rank-6-nilmfd} have been obtained independently in \cite[Section 4.2]{fino-grantcharov-verbitsky}.
\end{rmk}
\subsection{Hermitian locally conformally K\"ahler rank of \texorpdfstring{$6$}{6}-dimensional nilmanifolds}
As for the Hermitian lcK rank, we have the following.
\begin{prop}
On $6$-dimensional nilmanifolds endowed with invariant complex structures, (except possibly for the nilmanifolds associated to the Lie algebra $\mathfrak h_7$,) the Hermitian locally conformally K\"ahler rank takes the following values:
\begin{eqnarray*}
\text{\normalfont\bfseries (P):}&\mathrm{HlcKr}(X)\;=\;& 3 \quad \text{ if } \rho=0 \;, \\[5pt]
&\mathrm{HlcKr}(X)\;=\;& 2 \quad \text{ if } \rho=1 \;; \\[5pt]
\text{\normalfont\bfseries (I):}&\mathrm{HlcKr}(X)\;=\;& 3 \quad\text{ if } (\rho,\lambda,D)=(0,0,-1) \;, \\[5pt]
&\mathrm{HlcKr}(X)\;=\;& 2 \quad\text{ if } (\rho,\lambda,D)\neq(0,0,-1)\;; \\[5pt]
\text{\normalfont\bfseries (II):}&\mathrm{HlcKr}(X)\;=\;& 2 \;; \\[5pt]
\text{\normalfont\bfseries (III):}&\mathrm{HlcKr}(X)\;=\;& 1 \;;
\end{eqnarray*}
\end{prop}
\begin{proof}
By \cite[Main Theorem]{sawai}, a non-toral compact nilmanifold with a left-invariant complex structure has a locally conformally K\"ahler structure if and only if it is biholomorphic to a quotient of the Heisenberg group times $\mathbb R$. In particular, the only $6$-dimensional non-Abelian nilpotent Lie algebra admitting lcK structures is $\mathfrak{h}_3$, which appears in family {\bfseries (I)} with parameters $\rho=0$, $\lambda=0$, $D=-1$.
In case {\bfseries (II)}, consider the $d$-closed $1$-form $\vartheta:=\varphi^2+\bar\varphi^2$ and the $d_\vartheta$-closed $2$-form, $\Omega:=i\,\varphi^1\wedge\bar\varphi^1+i\,\varphi^2\wedge\bar\varphi^2\geq0$.
In case {\bfseries (III)}, the arbitrary $d$-closed $1$-form is $\vartheta=\vartheta_1\varphi^1+\vartheta_3\varphi^3+\bar\vartheta_1\bar\varphi^1+\vartheta_3\bar\varphi^3$, where $\vartheta_1\in\mathbb C$ and $\vartheta_3\in\mathbb R$.
By straightforward computations, which we performed with the aid of Sage \cite{sage}, we get that the arbitrary form $\omega$ in \eqref{eq:generic-metric} with conditions \eqref{eq:condition-generic-metric} is $d_\vartheta$-closed if and only if both $r^2=0$ and $s^2=0$.
\end{proof}
\subsection{Hermitian pluri-closed rank of \texorpdfstring{$6$}{6}-dimensional nilmanifolds}
Finally, we consider the pluri-closed rank $\mathrm{SKTr}(X)$ of $X$ as defined in \eqref{eq:SKT-rank} in Definition \ref{def:skt-rank}.
In case of solvmanifolds $X$ with associated Lie algebra $\mathfrak{g}$, a notion of {\em invariant pluri-closed rank} $\mathrm{SKTr}(\mathfrak{g})$ can be defined. Clearly, $\mathrm{SKTr}(\mathfrak{g})\leq \mathrm{SKTr}(X)$. Note that, in this case, the argument in the proof of Lemma \ref{lem:inv-ranks} does not apply. Indeed, there we make use of the map induced by the wedge product in the Morse-Novikov cohomology, $H^2_{d_\vartheta}(X)\times H^2_{d_\vartheta}(X) \to H^4_{d_{2\vartheta}}(X)$, which in turn is a consequence of the Leibniz rule for the twisted differential operator, namely $d_{k\vartheta}(\alpha\wedge\beta)=d_{h\vartheta}\alpha\wedge\beta+(-1)^{\mathrm{deg}\,\alpha}\,\alpha\wedge d_{(k-h)\vartheta}\beta$. But the $\partial\overline\partial$-operator, and the corresponding Aeppli cohomology, do not share these properties.
In the following Table \ref{table:ranks-nilmanifolds}, we show the invariant pluri-closed rank of $6$-dimensional nilmanifolds, summarizing also the ranks computed in the previous sections. The results follow by computing:
\begin{center}
\begin{table}[ht]
\centering
\begin{tabular}{>{\bfseries\bgroup}l<{\bfseries\egroup} | >{$}c<{$} || >{$}c<{$} | >{$}c<{$} | >{$}c<{$} ||}
\toprule
\multicolumn{2}{c||}{\bfseries class} & \mathbf{\mathrm{Kr}(X)} & \mathbf{\mathrm{HlcKr}(X)} & \mathbf{\mathrm{SKTr}(\mathfrak{g})} \\
\toprule
\multirow{2}{*}{(P)} & \rho=0 & 3 & 3 & 3 \\
& \rho=1 & 2 & 2 & 2 \\
\midrule
\multirow{3}{*}{(I)} & -\rho+D+\bar D-\lambda^2=0 & 2 & 2 & 3 \\
& -\rho+D+\bar D-\lambda^2\neq0, \; (\rho,\lambda,D)\neq(0,0,-1) & 2 & 2 & 2 \\
& (\rho,\lambda,D)=(0,0,-1) & 2 & 3 & 2 \\
\midrule
\multirow{2}{*}{(II)} & (\rho,B,c)\neq(1,1,0) & 1 & 2 & 2 \\
& (\rho,B,c)=(1,1,0) & \geq1 & 2 & 2 \\
\midrule
(III) & & 1 & 1 & 1 \\
\bottomrule
\end{tabular}
\caption{Special-Hermitian ranks for $6$-dimensional nilmanifolds endowed with invariant complex structures.}
\label{table:ranks-nilmanifolds}
\end{table}
\end{center}
\begin{eqnarray*}
\text{\bfseries(P):}&
\quad\partial\overline\partial\omega \;=\;& -i t^2\rho\, \varphi^{12\bar1\bar2} \;; \\[5pt]
\text{\bfseries(I):}&
\quad\partial\overline\partial\omega \;=\;&
\left(it^2(-\rho+D+\bar D-\lambda^2)\right) \,\varphi^{12\bar1\bar2} \;; \\[5pt]
\text{\bfseries(II):}&
\quad\partial\overline\partial\omega \;=\;&
\left(-it^2(\rho+c^2+|B|^2)\right)\, \varphi^{12\bar1\bar2} \;; \\[5pt]
\text{\bfseries(III):}&
\quad\partial\overline\partial\omega \;=\;&
(-2it^2)\, \varphi^{12\bar1\bar2}
+ (-2is^2)\, \varphi^{13\bar1\bar3} \;. \\[5pt]
\end{eqnarray*}
\begin{rmk}
From the results in Table \ref{table:ranks-nilmanifolds}, we note in particular the upper-semi-continuity of the Hermitian ranks.
We wonder whether this property holds in general.
\end{rmk}
\section{K\"ahler rank of a non-K\"ahler manifold constructed as suspension}
As another example, we consider here a non-K\"ahler manifold constructed as suspension over a torus.
We consider an explicit case of a more general construction which has been investigated by G.~\TH{}. Magnusson \cite{magnusson} to disprove the abundance and Iitaka conjectures for complex non-K\"ahler manifolds.
See also \cite[Example 3.1]{tosatti}, (and the references therein,) where V. Tosatti uses the same construction to get a complex non-K\"ahler manifold with vanishing first Bott-Chern class, whose canonical bundle is not holomorphically torsion.
\medskip
We first recall the construction by Yoshihara \cite[Example 4.1]{yoshihara} of a complex $2$-torus $X$ with an automorphism $f$ such that the induced automorphism on $H^0(X;K_X)\simeq\mathbb C$ has infinite order.
Consider the roots $\alpha\in\mathbb C$ and $\beta\in\mathbb C$ of the equation
$$ x^2-(1+\sqrt{-1})x+1 \;=\; 0 \;.$$
The minimal polynomial over $\mathbb Q$ of $\alpha$ and $\bar\beta$ is
$$ x^4-2x^3+4x^2-2x+1 \;.$$
In particular,
$$\left(\begin{array}{c}\alpha^4\\\bar\beta^4\end{array}\right)=2\left(\begin{array}{c}\alpha^3\\\bar\beta^3\end{array}\right)
-4\left(\begin{array}{c}\alpha^2\\\bar\beta^2\end{array}\right)+2\left(\begin{array}{c}\alpha\\\bar\beta\end{array}\right)-\left(\begin{array}{c}1\\1\end{array}\right)\;.$$
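As a quick numerical sanity check (not part of the original construction; variable names are ours), one can verify that the roots of $x^2-(1+\sqrt{-1})x+1$ indeed satisfy the quartic above, and that $\alpha\beta=1$:

```python
import cmath

# Roots alpha, beta of x^2 - (1 + i) x + 1 = 0.
b = 1 + 1j
disc = cmath.sqrt(b * b - 4)
alpha = (b + disc) / 2
beta = (b - disc) / 2

def quartic(x):
    # Claimed minimal polynomial of alpha (and of conj(beta)) over Q.
    return x ** 4 - 2 * x ** 3 + 4 * x ** 2 - 2 * x + 1

assert abs(quartic(alpha)) < 1e-9            # alpha is annihilated by the quartic
assert abs(quartic(beta.conjugate())) < 1e-9 # so is conj(beta)
assert abs(alpha * beta - 1) < 1e-9          # product of the quadratic's roots is 1
```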
Consider the following lattice in $\mathbb C^2$:
$$ \Gamma \;:=\; \mathrm{span}_\mathbb Z \left\{
\left(\begin{array}{c}1\\1\end{array}\right),\;
\left(\begin{array}{c}\alpha\\\bar\beta\end{array}\right),\;
\left(\begin{array}{c}\alpha^2\\\bar\beta^2\end{array}\right),\;
\left(\begin{array}{c}\alpha^3\\\bar\beta^3\end{array}\right) \right\} \;.$$
Consider the torus
$$ X \;:=\; \left. \mathbb C^2 \middle\slash \Gamma \right. \;.$$
The automorphism
$$ f\colon \mathbb C^2\to \mathbb C^2 \;, \qquad f\left(\begin{array}{c}z_1\\z_2\end{array}\right) \;:=\; \left(\begin{array}{cc}\alpha&\\&\bar\beta\end{array}\right) \cdot \left(\begin{array}{c}z_1\\z_2\end{array}\right) $$
induces an automorphism of $X$.
Now, we recall the construction by G.~\TH{}. Magnusson \cite{magnusson} of the non-K\"ahler manifold $M$.
Let
$$ C\;:=\;\left.\mathbb C \middle\slash (\mathbb Z\oplus\tau\mathbb Z)\right. $$
be an elliptic curve. Then $M$ is the total space of a holomorphic fibre bundle $M\to C$ with fibre $X$ as follows:
$$ M \;:=\; \left. X \times \mathbb C \middle \slash \mathbb Z^2\right. $$
where $\mathbb Z^2\circlearrowleft X\times\mathbb C$ acts as
$$ (\ell,m) \cdot (z,\,w) \;:=\; \left(f^m(z),\,w+\ell+m\,\tau\right)\;.$$
Note that $M$ is not K\"ahler, \cite[Proposition 1.2]{magnusson}, because of \cite[Corollary 4.10]{fujiki}.
\medskip
We claim that the K\"ahler rank of $M$ is equal to $1$.
In fact, note that the form
$$ d w \wedge d\bar w $$
on $\mathbb C$ yields a $d$-closed $(1,1)$-form of rank $1$ on $M$. Whence the K\"ahler rank of $M$ is greater than or equal to $1$.
On the other side, assume that there exists $\omega$ a $d$-closed $(1,1)$-form of rank at least $2$ on $M$.
It corresponds to a $d$-closed $\mathbb Z^2$-invariant $(1,1)$-form of rank at least $2$ on $X\times\mathbb C$. By the inclusion $\iota\colon X \ni x \mapsto (x,0) \in X\times \mathbb C$, it yields a $d$-closed $f$-invariant $(1,1)$-form of rank at least $1$ on $X$ --- say $\omega$ again.
Notice that $f$ sends invariant forms (with respect to the action of $\mathbb C^2$ on $X$) to invariant forms.
We have $\omega=\omega_{\text{inv}}+d\eta$ where $\omega_{\text{inv}}$ is invariant, and $\eta$ is a $1$-form. Then $f^*\omega=f^*\omega_{\text{inv}}+df^*\eta$, where $f^*\omega_{\text{inv}}$ is invariant. We get that $f^*\omega_{\text{inv}}=\omega_{\text{inv}}\mod d(\wedge^1\mathfrak{g}^\ast)$. Since the Lie algebra $\mathfrak{g}$ of $X$ is Abelian, we get $f^*\omega_{\text{inv}}=\omega_{\text{inv}}$.
We get that $\omega_{\text{inv}}$ is a $d$-closed invariant $f$-invariant $(1,1)$-form of rank at least $1$ on $X$. But this is not possible, since the only invariant $f$-invariant $(1,1)$-forms on $X$ are generated over $\mathbb C$ by $dz^1\wedge d\bar z^2$ and $dz^2\wedge d\bar z^1$.
\section{Introduction}
\label{intro}
\blfootnote{
\hspace{-0.65cm}
This work is licensed under a Creative Commons
Attribution 4.0 International License.
License details:
\url{http://creativecommons.org/licenses/by/4.0/}.
}
Users' activity on social media is increasing at a fast rate. Unfortunately, many people misuse these online platforms to harass, threaten, and bully other users. This growing aggression against social media users has serious effects on victims, and can even lead them to harm themselves. The TRAC 2018 Shared Task on Aggression Identification \cite{trac2018report} aims at developing a classifier that performs a 3-way classification of a given data instance as ``Overtly Aggressive'', ``Covertly Aggressive'', or ``Non-aggressive''.
We present here the different systems we submitted to the shared task, which mainly use lexical and semantic features to distinguish different levels of aggression over multiple datasets from Facebook and other social media that cover both English and Hindi texts.
\section{Related Work}
In recent years, several studies have been done towards detecting abusive and hateful language in online texts. Some of these works target different online platforms like Twitter~\cite{waseem2016hateful}, Wikipedia~\cite{Wulczyn:2016}, and ask.fm~\cite{samghabadi2017detecting} to encourage other research groups to contribute to aggression identification in these sources.
Most of the approaches proposed to detect offensive language in social media make use of multiple types of hand-engineered features. ~\newcite{nobata2016abusive} use n-grams, linguistic, syntactic and distributional semantic features to build a hate speech detection framework over Yahoo! Finance and News and get an F-score of 81\% for a combination of all features.~\newcite{davidson2017automated} combine n-grams, POS-colored n-grams, and sentiment lexicon features to detect hate speech on Twitter data.~\newcite{VanHee15detect} use word and character n-grams along with sentiment lexicon features to identify nasty posts in ask.fm.~\newcite{samghabadi2017detecting} build a model based on lexical, semantic, sentiment, and stylistic features to detect nastiness in ask.fm. They also show the robustness of the model by applying it to the dataset from different other sources.
Based on \newcite{malmasi2018challenges}, distinguishing hate speech from profanity is not a trivial task and requires features that capture deeper information from the comments. In this paper, we try different combinations of lexical, semantic, sentiment, and lexicon-based features to identify various levels of aggression in online texts.
\section{Methodology and Data}
\subsection{Data}
\begin{table}[h!]
\centering
\begin{tabular}{|l|r|r|r|r|}
\hline
\textbf{Data} & \multicolumn{1}{l|}{\textbf{Training (FB)}} & \multicolumn{1}{l|}{\textbf{Validation (FB)}} & \multicolumn{1}{l|}{\textbf{Test (FB)}} & \multicolumn{1}{l|}{\textbf{Test (SM)}} \\ \hline
\textbf{English} & 12000 & 3001 & 916 & 1257 \\ \hline
\textbf{Hindi} & 12000 & 3001 & 970 & 1194 \\ \hline
\end{tabular}
\caption{Data distribution for English and Hindi corpus}
\label{table1}
\end{table}
The datasets were provided by \newcite{trac2018dataset}. Table~\ref{table1} shows the distribution of training, validation and test (Facebook and social media) data for English and Hindi corpora. The data has been labeled with one out of three possible tags:
\begin{itemize}
\item \textbf{Non-aggressive (NAG):} There is no aggression in the text.
\item \textbf{Overtly aggressive (OAG):} The text contains either aggressive lexical items or certain syntactic structures.
\item \textbf{Covertly aggressive (CAG):} The text contains an indirect attack against the target, in most cases using polite expressions.
\end{itemize}
\subsection{Data Pre-processing}
Data from social media is generally noisy: grammatical and syntactic errors are common, along with many ad-hoc spellings, which makes it hard to analyze. Therefore, we first clean and prepare the data before feeding it to our systems.
For the English dataset, we lowercased the data and removed URLs, Email addresses, and numbers. We also did minor stemming by removing ``ing'' and plural and possessive ``s'', and replaced a few common abbreviated grammatical forms with their formal versions.
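A minimal sketch of such a cleaning pass is given below; the exact regular expressions and replacement lists are illustrative assumptions, not the precise rules used:

```python
import re

def clean_english(text):
    """Illustrative cleaning in the spirit described above; the exact rules
    and replacement lists are not reproduced here."""
    text = text.lower()
    text = re.sub(r"https?://\S+|www\.\S+", " ", text)  # URLs
    text = re.sub(r"\S+@\S+", " ", text)                # e-mail addresses
    text = re.sub(r"\d+", " ", text)                    # numbers
    text = re.sub(r"\b(\w+)ing\b", r"\1", text)         # minor "-ing" stemming
    text = re.sub(r"\b(\w+)'s\b", r"\1", text)          # possessive "'s"
    text = re.sub(r"\b(\w{3,})s\b", r"\1", text)        # very crude plural "s"
    return re.sub(r"\s+", " ", text).strip()
```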
On manual inspection of the training data for Hindi, we found that some of the instances are Hindi-English code-mixed, some use Roman script for Hindi and others are in Devanagari. Only 26\% of the training data is in Devanagari script. We normalize the data by transliterating instances in Devanagari to Roman script. These instances are identified using Unicode pattern matching and are transliterated to Roman script using \textit{indic-trans} transliteration tool\footnote{\url{https://github.com/libindic/indic-trans}}. For further analysis, we run an in-house word-level language identification system on the training data \cite{mave2018lid}. This CRF system is trained on Facebook posts and has an F1-weighted score of 97\%. Approximately 60\% of the training data is code-mixed, 39\% is only Hindi and $0.42\%$ is only English.
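The script-identification step can be sketched with a simple Unicode range check (the transliteration itself is delegated to the indic-trans tool; the helper below, with names of our choosing, only illustrates how Devanagari instances are flagged):

```python
import re

# Devanagari occupies the Unicode block U+0900 - U+097F.
DEVANAGARI = re.compile(r"[\u0900-\u097F]")

def needs_transliteration(text):
    """Flag instances written (at least partly) in Devanagari, to be passed
    to the indic-trans Devanagari-to-Roman transliterator."""
    return bool(DEVANAGARI.search(text))
```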
\subsection{Features} \label{sec:feautures}
We make use of the following features:
\noindent\textbf{Lexical: }Words are a powerful medium to convey feelings and to describe or express ideas. With this notion, we use word $n$-grams (n=1, 2, 3), char $n$-grams (n=3, 4, 5), and $k$-skip $n$-grams (k=2, n=2, 3) as features. We weigh each term with its term frequency-inverse document frequency (TF-IDF). We also consider an alternative weighting scheme using binary word $n$-grams (n=1, 2, 3).
~\\
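Among these, $k$-skip $n$-grams are the least standard (definitions vary in the literature; here we take ``at most $k$ skipped tokens in total''). A self-contained helper of our own, with the TF-IDF weighting left to a standard vectorizer, could look like:

```python
from itertools import combinations

def skip_ngrams(tokens, n, k):
    """All n-grams over `tokens` allowing up to k skipped tokens in total,
    so that ordinary n-grams (k = 0) are included as a special case."""
    grams = set()
    for start in range(len(tokens) - n + 1):
        # the remaining n - 1 positions lie within a window of n - 1 + k tokens
        window = range(start + 1, min(start + n + k, len(tokens)))
        for rest in combinations(window, n - 1):
            grams.add((tokens[start],) + tuple(tokens[i] for i in rest))
    return grams
```

For example, the $2$-skip bigrams of ``a b c d'' under this convention are all six ordered pairs (a,b), (a,c), (a,d), (b,c), (b,d), (c,d).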
\noindent\textbf{Word Embeddings: } The idea behind this approach is to use a vector space model to extract semantic information from the text~\cite{le2014distributed}. As the embedding model, we use pre-trained vectors trained on part of the Google News dataset, covering about 3 million words\footnote{\url{https://code.google.com/archive/p/word2vec/}}. We compute the word embedding feature vector of each comment by averaging the word vectors of all its words, skipping words that are not in the vocabulary of the pre-trained model. This representation is only used for the English data, for which the coverage of the Google word embeddings is 63\%.
\noindent\textbf{Sentiment: }We use the Stanford Sentiment Analysis tool~\cite{socher2013recursive}\footnote{\url{https://nlp.stanford.edu/sentiment/code.html}} to extract the fine-grained sentiment distribution of each comment. For every message, we calculate the mean and standard deviation of the sentiment distribution over all sentences and use them as the feature vector.
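The aggregation step can be sketched as follows (the per-sentence distributions would come from the Stanford tool; the helper name is ours):

```python
from statistics import mean, pstdev

def sentiment_features(sentence_dists):
    """Given one 5-class sentiment distribution per sentence (very negative
    ... very positive), return per-class means and standard deviations over
    all sentences, concatenated into a 10-dimensional feature vector."""
    per_class = list(zip(*sentence_dists))
    return [mean(c) for c in per_class] + [pstdev(c) for c in per_class]
```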
\noindent\textbf{LIWC (Linguistic Inquiry and Word Count): }LIWC2007 \cite{Pennebaker07liwc} includes around 70 word categories to analyze different language dimensions. In our approach, we only use the categories related to positive or negative emotions and self-references. To build the feature vectors, we use normalized counts of the words falling into each of these categories. This feature is only applicable to the English data.
\noindent\textbf{Gender Probability: }Following the approach in \newcite{waseem2016you}, we use the Twitter-based lexicon presented in \newcite{sap2014developing} to calculate a gender probability. We also convert these probabilities to a binary gender by labeling the positive cases as female and the rest as male. The feature vector consists of the gender probability and the binary gender for each message. This feature is not applicable to the Hindi corpus.
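A heavily simplified sketch of this feature; the lexicon entries, the intercept and the logistic link below are illustrative assumptions, not the actual weights of the Sap et al. lexicon:

```python
import math

# Illustrative stand-in for the gender lexicon of Sap et al. (2014):
# positive weights lean female, negative lean male; real weights differ.
lexicon = {"love": 0.8, "football": -0.6, "omg": 1.1}
INTERCEPT = -0.06   # hypothetical bias term

def gender_features(message):
    """Return (probability of female, binary gender) for one message."""
    score = INTERCEPT + sum(lexicon.get(w, 0.0) for w in message.lower().split())
    prob_female = 1.0 / (1.0 + math.exp(-score))   # squash the score to [0, 1]
    return prob_female, 1 if prob_female > 0.5 else 0

p, b = gender_features("omg love this")
```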
\section{Experiments and Results}
\label{sec:results}
\subsection{Experimental Settings}
For both datasets, we trained several classification models using different combinations of the features discussed in Section~\ref{sec:feautures}. Since this is a multi-class classification task, we use a one-versus-rest classifier, which trains a separate classifier for each class and labels each comment with the class that has the highest predicted probability across all classifiers. We tried Logistic Regression and linear SVM as the estimator for the classifier and chose Logistic Regression for our final systems, since it performed better in the validation phase. We implemented all models using the scikit-learn toolkit\footnote{\url{http://scikit-learn.org/stable/}}.
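A minimal sketch of this setup with scikit-learn; the corpus below is a toy example and our real feature set is larger:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline, make_union

# Word unigrams/bigrams plus character 4-grams, each TF-IDF weighted
features = make_union(
    TfidfVectorizer(analyzer="word", ngram_range=(1, 2)),
    TfidfVectorizer(analyzer="char", ngram_range=(4, 4)),
)
# One-vs-rest: one Logistic Regression per class, highest probability wins
model = make_pipeline(features, OneVsRestClassifier(LogisticRegression()))

texts = ["have a nice day", "nice work friend",         # NAG-like
         "you sneaky coward", "sneaky lies again",      # CAG-like
         "i will hurt you", "hurt and destroy you"]     # OAG-like
labels = ["NAG", "NAG", "CAG", "CAG", "OAG", "OAG"]
model.fit(texts, labels)
pred = model.predict(["nice day friend"])[0]
```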
\subsection{Results}
To build our best systems for both the English and Hindi data, we experimented with several models using different combinations of the available features. Table~\ref{table-dev} shows the validation results on the training and validation sets.
\begin{table*}[h]
\centering
\begin{tabular}{l|l|l|}
\cline{2-3}
& \multicolumn{2}{c|}{\textbf{F1-weighted}} \\ \hline
\multicolumn{1}{|l|}{\textbf{Feature}} & \multicolumn{1}{c|}{\textbf{English}} & \multicolumn{1}{c|}{\textbf{Hindi}} \\ \hline
\multicolumn{1}{|l|}{Unigram (U)} & 0.5804 & 0.6159 \\
\multicolumn{1}{|l|}{Bigram (B)} & 0.4637 & 0.5195 \\
\multicolumn{1}{|l|}{Trigram (T)} & 0.3846 & 0.4300 \\ \hline
\multicolumn{1}{|l|}{Char 3gram (C3)} & 0.5694 & 0.6065 \\
\multicolumn{1}{|l|}{Char 4gram (C4)} & 0.5794 & 0.6212 \\
\multicolumn{1}{|l|}{Char 5gram (C5)} & 0.5758 & 0.6195 \\ \hline
\multicolumn{1}{|l|}{Word Embeddings (W2V)} & 0.5463 & N/A \\ \hline
\multicolumn{1}{|l|}{Sentiment (S)} & 0.3961 & N/A \\ \hline
\multicolumn{1}{|l|}{LIWC} & 0.4350 & N/A \\ \hline
\multicolumn{1}{|l|}{Gender Probability (GP)} & 0.3440 & N/A \\ \hline
\multicolumn{1}{|l|}{BU + U + C4 + C5 + W2V} & \textbf{0.5875} & N/A \\
\multicolumn{1}{|l|}{C3 + C4 + C5} & 0.5494 & 0.6207 \\
\multicolumn{1}{|l|}{U + C3 + C4 + C5} & 0.5541 & \textbf{0.6267 } \\ \hline
\end{tabular}
\caption{Validation results for different features for the English and Hindi datasets using Logistic Regression model. In this table BU stands for Binary Unigram.}
\label{table-dev}
\end{table*}
Table~\ref{table2} shows the results of our three submitted systems on the English Facebook and Social Media data. All three systems use the same set of features: binary word unigrams, word unigrams, character $n$-grams of length 4 and 5, and word embeddings. In the first system, we used both the training and validation sets for training the classifier. In the second system, we only used the training data. The only difference between the second and third systems is that in the third we corrected misspellings using the PyEnchant\footnote{\url{https://pypi.org/project/pyenchant}} spell-checking tool. Unfortunately, we could not try applying the sentiment and lexicon-based features after spell correction due to the restriction on the total number of submissions; however, we believe they could further improve the performance of the system.
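The correction step can be sketched as follows; to keep the example self-contained we use a stub checker exposing the same check()/suggest() interface as the enchant.Dict objects our actual system used:

```python
class StubChecker:
    """Stand-in for enchant.Dict: only check() and suggest() are used."""
    corrections = {"helo": ["hello"], "wrld": ["world"]}

    def check(self, word):
        return word not in self.corrections

    def suggest(self, word):
        return self.corrections.get(word, [])

def correct_tokens(text, checker):
    """Replace each misspelled token with the checker's first suggestion."""
    out = []
    for tok in text.split():
        if checker.check(tok) or not checker.suggest(tok):
            out.append(tok)                    # known word or nothing to suggest
        else:
            out.append(checker.suggest(tok)[0])
    return " ".join(out)

fixed = correct_tokens("helo cruel wrld", StubChecker())
```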
\begin{table*}[h!]
\centering
\begin{tabular}{l|l|l|}
\cline{2-3}
& \multicolumn{2}{c|}{\textbf{F1 (weighted)}} \\ \hline
\multicolumn{1}{|l|}{\textbf{System}} & \multicolumn{1}{c|}{\textbf{FB}} & \multicolumn{1}{c|}{\textbf{SM}} \\ \hline
\multicolumn{1}{|l|}{Random Baseline} & 0.3535 & 0.3477 \\ \hline
\multicolumn{1}{|l|}{System 1} & 0.5673 & 0.5453 \\
\multicolumn{1}{|l|}{System 2} & 0.5847 & 0.5391 \\
\multicolumn{1}{|l|}{System 3} & \textbf{0.5921} & \textbf{0.5663} \\ \hline
\end{tabular}
\caption{Results for the English test set. FB: Facebook and SM: Social Media.}
\label{table2}
\end{table*}
\begin{table*}[h!]
\centering
\begin{tabular}{l|l|l|}
\cline{2-3}
& \multicolumn{2}{c|}{\textbf{F1 (weighted)}} \\ \hline
\multicolumn{1}{|l|}{\textbf{System}} & \multicolumn{1}{c|}{\textbf{FB}} & \multicolumn{1}{c|}{\textbf{SM}} \\ \hline
\multicolumn{1}{|l|}{Random Baseline} & 0.3571 & 0.3206 \\ \hline
\multicolumn{1}{|l|}{System 1} & \textbf{0.6451} & \textbf{0.4853} \\
\multicolumn{1}{|l|}{System 2} & 0.6292 & 0.4689 \\ \hline
\end{tabular}
\caption{Results for the Hindi test set. FB: Facebook and SM: Social Media.}
\label{table3}
\end{table*}
Table~\ref{table3} shows the performance of our systems on the Hindi Facebook and Social Media data. For the Hindi dataset, the combination of word unigrams and character $n$-grams of length 3, 4 and 5 gives the best performance on the validation set. These features capture the word usage distribution across classes. Both System 1 and System 2 use these features; System 1 is trained on the training set only, while System 2 is trained on both the training and validation sets.
\subsection{Analysis}
Looking at the instances mislabeled during the validation phase, we found two main reasons for the classifier's mistakes:
\begin{enumerate}
\item The perceived level of aggression can be subjective. There are some examples in the validation set whose label is CAG but which are more likely OAG, and vice versa. Table~\ref{table5} shows some of these examples.
\item There are several typos and misspellings in the data that affect the performance.
\end{enumerate}
\begin{table}[h!]
\footnotesize
\begin{tabular}{ll|l|l|}
\cline{3-4}
& & \multicolumn{2}{c|}{\textbf{Label}} \\ \hline
\multicolumn{1}{|l|}{\textbf{Language}} & \textbf{Example} & \textbf{Actual} & \textbf{Predicted} \\ \hline
\multicolumn{1}{|l|}{\multirow{1}{*}{\textbf{English}}} & \begin{tabular}[c]{@{}l@{}}What has so far Mr.Yechuri done for this Country. Ask him to shut down his bloody\\ piehole for good or I if given the chance will crap on his mouth hole.\end{tabular} & CAG & OAG \\ \cline{2-4}
\multicolumn{1}{|l|}{} & \begin{tabular}[c]{@{}l@{}}The time you tweeted is around 3 am morning,,which is not at all a namaz time.,As\\ you bollywood carrier is almost finished, you are preparing yourself for politics by\\ these comments.\end{tabular} & OAG & CAG \\ \hline
\multicolumn{1}{|l|}{\multirow{2}{*}{\textbf{Hindi}}} & ajeeb chutya hai.... kahi se course kiya hai ya paida hee chutya hua tha & CAG & OAG \\ \cline{2-4}
\multicolumn{1}{|l|}{} & \begin{tabular}[c]{@{}l@{}}Salman aur aamir ki kounsi movie release huyee jo aandhi me dub gaye?? ?Bikau\\ chatukar media\end{tabular} & OAG & CAG \\ \hline
\end{tabular}
\caption{Examples misclassified with respect to the aggression level}\label{table5}
\end{table}
\normalsize
Figure~\ref{fig:datafig} also shows that the Hindi corpus is more balanced than the English one with respect to OAG and CAG instances, which may explain why the lexical features perform better on the Hindi data.
\begin{figure*}[h!]
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=0.90\textwidth, height=0.20\textheight]{train.png}
\caption{Data distribution for training sets}
\label{fig:1data}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=0.90\textwidth, height=0.20\textheight]{dev.png}
\caption{Data distribution for evaluation sets}
\label{fig:2data}
\end{subfigure}
\caption{Label distribution comparison between training and evaluation sets}
\label{fig:datafig}
\end{figure*}
Table \ref{top_n_hin} lists the most informative features learned by the classifier for all three classes in the Hindi data. We observe that word unigrams and character trigrams are the most important features for the system. The top features for OAG are mostly Hindi swear words and character $n$-grams of those swear words. More English words appear in the top list for NAG than for the other two classes, and there is no overlap between these features and the top features of either CAG or OAG. Our system has difficulty differentiating between OAG and CAG when a comment contains no strong swear word.
\begin{table*}[h!]
\centering
\begin{tabular}{|c|c|c|}
\hline
\textbf{NAG} & \textbf{CAG} & \textbf{OAG} \\
\hline
unigram\_: &unigram\_?&char\_tri\_gram\_kut\\
unigram\_mera &unigram\_baki&unigram\_bc\\
unigram\_bike &unigram\_??&char\_tri\_gram\_cho\\
unigram\_jai &unigram\_pm&char\_4\_gram\_ kut\\
unigram\_main &unigram\_o&unigram\_chutiya\\
unigram\_sahi &char\_tri\_gram\_ ky&unigram\_maa\\
unigram\_........... &unigram\_badla&unigram\_mc\\
unigram\_launch &char\_tri\_gram\_yad&unigram\_gand \\
unigram\_jay &char\_5\_gram\_e...&char\_tri\_gram\_tiy\\
char\_tri\_gram\_mer &unigram\_3&char\_tri\_gram\_chu\\
\hline
\end{tabular}
\caption{Top 10 features learned by System 1 for each class for the Hindi dataset.}
\label{top_n_hin}
\end{table*}
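The per-class rankings shown in Table \ref{top_n_hin} can be reproduced from the weights of a fitted model; a sketch with scikit-learn on a toy corpus (texts and labels are ours, not from the shared-task data):

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["peace and calm", "calm peaceful peace",       # NAG-like
         "sly jab again", "what a sly jab",             # CAG-like
         "vile slur curse", "curse and slur"]           # OAG-like
labels = ["NAG", "NAG", "CAG", "CAG", "OAG", "OAG"]

vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(texts), labels)

# Vocabulary in feature-column order, then the top-3 weights per class
vocab = np.array(sorted(vec.vocabulary_, key=vec.vocabulary_.get))
top = {cls: list(vocab[np.argsort(clf.coef_[i])[::-1][:3]])
       for i, cls in enumerate(clf.classes_)}
```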
Figure~\ref{fig:1} shows the confusion matrix of our best model for all three classes on the English Facebook corpus. Most notably, the classifier mislabeled several NAG instances as CAG. Since our system is mostly based on lexical features, we conclude that CAG instances contain far fewer profanities than OAG ones, which makes them hard to distinguish from NAG examples without considering the sentiment of the messages. This is also supported by Figure~\ref{fig:2}: on the English Social Media corpus, the classifier likewise confused CAG instances both with and without profanities.
\begin{figure*}[h!]
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=0.85\textwidth, height=0.25\textheight]{EN-FB_task__na14_03.pdf}
\caption{EN-FB task}
\label{fig:1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=0.85\textwidth, height=0.25\textheight]{EN-TW_task__na14_03.pdf}
\caption{EN-SM task}
\label{fig:2}
\end{subfigure}
\caption{Plots of confusion matrices of our best performing systems for English Facebook and Social Media data}
\label{fig:fig}
\end{figure*}
Figure~\ref{fig:3} shows that for the Hindi Facebook data, the biggest challenge is to distinguish OAG instances from CAG ones. Since our proposed system is, in this case, built entirely on lexical features, it can be inferred from the figure that even covertly aggressive comments in Hindi contain many profanities. For the Hindi Social Media corpus, we observe the same issue as for the English data.
\begin{figure*}[h!]
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=0.85\textwidth, height=0.25\textheight]{HI-FB_task__na14_01.pdf}
\caption{HI-FB task}
\label{fig:3}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=0.85\textwidth, height=0.25\textheight]{HI-TW_task__na14_01.pdf}
\caption{HI-SM task}
\label{fig:4}
\end{subfigure}
\caption{Plots of confusion matrices of our best performing systems for Hindi Facebook and Social Media data}
\label{fig:fig*}
\end{figure*}
\section{Conclusion}
In this paper, we present our approaches for identifying the aggression level of English and Hindi comments in two different datasets, one from Facebook and one from other social media. Our best performing systems use a combination of lexical and semantic features for the English corpus, and lexical features for the Hindi data.
Future work on the English data includes exploring more sentiment features to capture implicitly hateful comments and adding more pre-processing steps. For instance, removing non-English characters should help, since our proposed model is mainly based on lexical features and is therefore very sensitive to unknown characters and words. For the Hindi dataset, identifying Hindi-English code-mixed instances and processing them separately from the monolingual Hindi instances is a promising direction. Since the perceived level of aggression is subjective in most scenarios, adding sentiment features to the lexical information might also improve model performance on the Hindi data.
\color{black}
\section{Evolution equations and equivalence with mesh lattices}
Here we show that the evolution of pulses in the fiber loops [Eqs.~(1)] can be transformed into the standard evolution equations of the optical mesh lattice investigated in detail in Ref.~\cite{Miri2012a}. We show this equivalence for the Hermitian case (when no gain or loss is involved); the $\mathcal{PT}$-symmetric case can be treated in a similar fashion. Consider Eqs.~(1) for even round-trips:
\begin{subequations}\label{eqs:1}
\begin{align}
u_{2k}^{2l+2} & =\frac{1}{\sqrt{2}}\left(u_{2k+1}^{2l+1}+iv_{2k+1}^{2l+1}\right),\\
v_{2k}^{2l+2} & =\frac{1}{\sqrt{2}}\left(v_{2k-1}^{2l+1}+iu_{2k-1}^{2l+1}\right)e^{i\varphi_{2k}}.
\end{align}
\end{subequations}
After rewriting the same equations for odd round-trips, we obtain:
\begin{subequations}\label{eqs:2}
\begin{align}
u_{2k+1}^{2l+1} & =\frac{1}{\sqrt{2}}\left(u_{2k+2}^{2l}+iv_{2k+2}^{2l}\right),\\
v_{2k+1}^{2l+1} & =\frac{1}{\sqrt{2}}\left(v_{2k}^{2l}+iu_{2k}^{2l}\right)e^{i\varphi_{2k+1}},
\end{align}
\end{subequations}
and
\begin{subequations}
\begin{align}
u_{2k-1}^{2l+1} & =\frac{1}{\sqrt{2}}\left(u_{2k}^{2l}+iv_{2k}^{2l}\right),\\
v_{2k-1}^{2l+1} & =\frac{1}{\sqrt{2}}\left(v_{2k-2}^{2l}+iu_{2k-2}^{2l}\right)e^{i\varphi_{2k-1}}.
\end{align}
\end{subequations}
By combining these equations we arrive at:
\begin{subequations}\label{eq:4}
\begin{align}
u_{2k}^{2l+2} & = \frac{1}{2}\left(\left(-u_{2k}^{2l}+iv_{2k}^{2l}\right)e^{i\varphi_{2k+1}}+\left(u_{2k+2}^{2l}+iv_{2k+2}^{2l}\right)\right),\\
v_{2k}^{2l+2} & =\frac{1}{2}\left(\left(-v_{2k}^{2l}+iu_{2k}^{2l}\right)e^{i\varphi_{2k}}+\left(v_{2k-2}^{2l}+iu_{2k-2}^{2l}\right)e^{i(\varphi_{2k-1}+\varphi_{2k})}\right).
\end{align}
\end{subequations}
We now apply the following gauge transformation:
\begin{subequations}\label{eq:gauge}
\begin{align}
(-1)^ma_n^m & =-u_{-2k}^{2l},\\
(-1)^mb_n^m & =+v_{-2k}^{2l}.
\end{align}
\end{subequations}
In addition we assume:
\begin{equation}\label{eq:phi}
\varphi_n=\varphi_{-2k}=\varphi_{-2k-1}.
\end{equation}
Finally, by substituting Eqs.~(\ref{eq:gauge}) and Eq.~(\ref{eq:phi}) into Eqs.~(\ref{eq:4}), we obtain
\begin{subequations}\label{eq:meshiteration}
\begin{align}
a_{n}^{m+1} & = \frac{1}{2}\left(\left(a_{n}^{m}+ib_{n}^{m}\right)e^{i\varphi_{n}}+\left(-a_{n-1}^{m}+ib_{n-1}^{m}\right)\right),\\
b_{n}^{m+1} &
=\frac{1}{2}\left(\left(b_{n}^{m}+ia_{n}^{m}\right)e^{i\varphi_{n}}+\left(-b_{n+1}^{m}+ia_{n+1}^{m}\right)e^{i(\varphi_{n}+\varphi_{n+1})}\right),
\end{align}
\end{subequations}
which are the evolution equations of optical mesh lattices \cite{Miri2012a}.
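As a numerical cross-check of Eqs.~(\ref{eq:meshiteration}), the following sketch (our own code) iterates the map with a periodic boundary in $n$ as a simplifying assumption and verifies that the Hermitian lattice conserves the total power $\sum_n(|a_n^m|^2+|b_n^m|^2)$:

```python
import numpy as np

def mesh_step(a, b, phi):
    """One iteration m -> m+1 of the mesh-lattice evolution equations."""
    a_l, b_l = np.roll(a, 1), np.roll(b, 1)      # a_{n-1}, b_{n-1}
    a_r, b_r = np.roll(a, -1), np.roll(b, -1)    # a_{n+1}, b_{n+1}
    phase = np.exp(1j * phi)                     # e^{i phi_n}
    phase_pair = np.exp(1j * (phi + np.roll(phi, -1)))  # e^{i(phi_n + phi_{n+1})}
    a_new = 0.5 * ((a + 1j * b) * phase + (-a_l + 1j * b_l))
    b_new = 0.5 * ((b + 1j * a) * phase + (-b_r + 1j * a_r) * phase_pair)
    return a_new, b_new

rng = np.random.default_rng(1)
n_sites = 16
phi = rng.uniform(-np.pi, np.pi, n_sites)        # arbitrary real phase potential
a = rng.normal(size=n_sites) + 1j * rng.normal(size=n_sites)
b = rng.normal(size=n_sites) + 1j * rng.normal(size=n_sites)
power0 = np.sum(np.abs(a)**2 + np.abs(b)**2)
for _ in range(20):
    a, b = mesh_step(a, b, phi)
power_drift = abs(np.sum(np.abs(a)**2 + np.abs(b)**2) - power0)
```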
For a periodic arrangement of the phase potential, the band structure as well as the corresponding Floquet-Bloch modes are studied in Ref.~\cite{Miri2012a}. In the presence of defects, however, there is no systematic approach for finding defect states in general. On the other hand, as we show here, for the specific case of a single defect site in a passive empty lattice, an analytical expression for the defect mode can be obtained. To this end, assume a single defect imposed on an empty lattice, so that the total phase potential can be written as:
\begin{equation}
\varphi_n=\begin{cases}
\varphi_d \text{ for } n=0 \\
0 ~~ \text{ for } n\neq 0.
\end{cases}
\end{equation}
We then assume the following form of solution for a defect mode:
\begin{subequations}
\begin{align}
a_n^m& =e^{i\theta m}e^{-\alpha|n|}
\begin{cases}
E & n\leq -1, \\
E & n=0, \\
Fe^{-i\varphi_d} & n\geq +1
\end{cases}\\
b_n^m& =e^{i\theta m}e^{-\alpha|n|}
\begin{cases}
F & n\leq -1, \\
E & n=0, \\
Ee^{-i\varphi_d} & n\geq +1
\end{cases}
\end{align}
\end{subequations}
where $\theta$, $\alpha$, $E$ and $F$ are unknown constants to be determined. Note that this solution propagates along $m$ but decays exponentially along both positive and negative $n$, and is therefore strongly localized to the site $n=0$. Substituting this ansatz into the evolution equations yields the transcendental equations
\begin{subequations}
\begin{align}
\cos(\theta_d) & =\frac{1}{2}-\frac{1}{2}\cosh{(\alpha)},\\
\left(e^{i\varphi_d}+ie^{i\varphi_d}\right)e^{-i\theta_d}+2e^{i\theta_d} & = 1+\left(e^{i\varphi_d}+ie^{i\varphi_d}\right)-e^{-\alpha},
\end{align}
\end{subequations}
which simultaneously determine the confinement factor $\alpha$ and the propagation constant $\theta_d$ for a given defect strength $\varphi_d$. The resulting defect mode is plotted in Fig.~2(b) of the main text.
\section{Band structure and Floquet-Bloch modes in optical mesh lattices}
In this section we obtain the band dispersion relation and the continuum of Floquet-Bloch modes associated with optical mesh lattices. Here we restrict our study to the Hermitian passive lattice described by the evolution equations~(\ref{eq:meshiteration}); the analysis can be easily extended to the case of $\mathcal{PT}$-symmetric lattices. Consider the following periodic arrangement of the phase modulation $\varphi_n$:
\begin{equation}
\varphi_{n}=
\begin{cases}
+\varphi_{0}, & n~\text{even},\\
-\varphi_{0}, & n~\text{odd}.
\end{cases}
\end{equation}
In this case Eqs.~(\ref{eq:meshiteration}) reduce to:
\begin{subequations}\label{eq:reducediteration}
\begin{align}
a_{n}^{m+1}=\frac{1}{2}\left(\left(a_{n}^{m}+ib_{n}^{m}\right)e^{i\varphi_{n}}+\left(-a_{n-1}^{m}+ib_{n-1}^{m}\right)\right)\\
b_{n}^{m+1}=\frac{1}{2}\left(\left(b_{n}^{m}+ia_{n}^{m}\right)e^{i\varphi_{n}}+\left(-b_{n+1}^{m}+ia_{n+1}^{m}\right)\right)
\end{align}
\end{subequations}
Now let us assume plane wave solutions of the form:
\begin{equation}\label{eq:planewaves}
\binom {a_{n}^{m}}{b_{n}^{m}}=\binom {A_{n}}{B_{n}}e^{iQn} e^{i\theta m}.
\end{equation}
Since $\varphi_n$ is periodic with period $N=2$, the discrete Bloch functions $A_n$ and $B_n$ are also periodic in $n$ with the same period, i.e., $A_{n+2}=A_n$ and $B_{n+2}=B_n$. They are therefore completely determined by their values at $n=0,1$. Using the ansatz of Eq.~(\ref{eq:planewaves}) in Eqs.~(\ref{eq:reducediteration}) leads to the following eigenvalue problem:
\begin{equation}\label{eq:eigenvalue}
\frac{1}{2}\begin{pmatrix}
e^{i\varphi_0} &ie^{i\varphi_0} &-e^{-iQ} &ie^{-iQ} \\
ie^{i\varphi_0} &e^{i\varphi_0} &ie^{iQ} &-e^{iQ} \\
-e^{-iQ} &ie^{-iQ} &e^{-i\varphi_0} &ie^{-i\varphi_0} \\
ie^{iQ} &-e^{iQ} &ie^{-i\varphi_0} &e^{-i\varphi_0}
\end{pmatrix}
\begin{pmatrix}
A_0\\
B_0\\
A_1\\
B_1
\end{pmatrix}
=e^{i\theta}\begin{pmatrix}
A_0\\
B_0\\
A_1\\
B_1
\end{pmatrix}
\end{equation}
This can be solved to find the dispersion relation:
\begin{equation}
\cos{(2Q)}=8\cos^2{(\theta)}-8\cos{(\varphi_0)}\cos{(\theta)}+4\cos^2{(\varphi_0)}-3
\end{equation}
while the corresponding eigenvectors can be readily obtained.
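Both the unitarity of the one-step Bloch matrix of Eq.~(\ref{eq:eigenvalue}) and the dispersion relation can be verified numerically; a short sketch (parameter values are arbitrary):

```python
import numpy as np

def mesh_bloch_matrix(phi0, Q):
    """One-step 4x4 Bloch matrix acting on (A0, B0, A1, B1)."""
    ep, em = np.exp(1j * phi0), np.exp(-1j * phi0)
    qp, qm = np.exp(1j * Q), np.exp(-1j * Q)
    return 0.5 * np.array([
        [ep,      1j * ep, -qm,     1j * qm],
        [1j * ep, ep,      1j * qp, -qp],
        [-qm,     1j * qm, em,      1j * em],
        [1j * qp, -qp,     1j * em, em],
    ])

phi0, Q = 0.7, 1.3
M = mesh_bloch_matrix(phi0, Q)
unitarity_err = np.max(np.abs(M @ M.conj().T - np.eye(4)))
theta = np.angle(np.linalg.eigvals(M))           # quasi-energies e^{i theta}
dispersion_err = np.max(np.abs(
    np.cos(2 * Q) - (8 * np.cos(theta)**2
                     - 8 * np.cos(phi0) * np.cos(theta)
                     + 4 * np.cos(phi0)**2 - 3)))
```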
\section{Passive defects (Fig. 2 of main paper)}
\expfigure{figs/ptarray_phaseshifts_passivedefect}{Optical mesh lattice
with elemental phase defect $\varphi_d\neq0$ at central positions
$n={0,1}$ embedded in a passive lattice ($G_p=0$, $G_d=0$) with no
periodic phase potential ($\varphi_p=0$). Steps $m$ label the rows of
couplers, while positions $n$ label couplers in horizontal
direction. The phase shift and the values of gain/loss of a fiber are
assigned to the position $n$ of the coupler below it. From this
spatial picture, it becomes clearly visible that every second position
$n$ is not accessible in each round-trip: Two neighboring couplers have
a distance of 2 positions $n$ and each row of couplers is shifted by 1
position $n$ with respect to the rows above and below it. All
amplitudes at these empty positions $n$ have thus been set to zero in
all measurements and simulations.}
\expfigure{figs/defect54-messnr_01_suppl}{
Data in both loops and comparison to simulations of measurements shown
in Fig.~2(c). In the main paper, only measurement data recorded in
the short loop is displayed.}
\expfigure{figs/defect54-messnr_02_suppl}{
Data in both loops and comparison to simulations of measurements shown
in Fig.~2(d).}
\expfigure{figs/defect54-messnr_03_suppl}{
Data in both loops and comparison to simulations of measurements shown
in Fig.~2(e).}
\section{Phase defect in periodic PT lattice (Fig. 3 of main paper)}
\expfigure{figs/ptarray_phaseshifts_ptdefect}{
Optical mesh lattice with elemental phase defect $\varphi_d\neq0$ at
central positions $n={0,1}$ embedded in a periodic PT lattice
($G_p\neq0$) with phase potential $\varphi_p\neq0$. The distribution
of gain/loss has no defect ($G_d=0$).}
\expfigure{figs/defect59-messnr_02_suppl}{
Data in both loops and comparison to simulations of measurements shown
in Fig.~3(c). In the main paper, only measurement data recorded in
the short loop is displayed.}
\expfigure{figs/defect59-messnr_04_suppl}{
Data in both loops and comparison to simulations of measurements shown
in Fig.~3(d).}
\expfigure{figs/defect59-messnr_06_suppl}{
Data in both loops and comparison to simulations of measurements shown
in Fig.~3(e).}
\section{Gain/loss and phase defect in empty lattice (Figs. 4 and 5 of main paper)}
\expfigure{figs/ptarray_phaseshifts_bicdefect}{
Optical mesh lattice with gain/loss defect $G_d\neq0$ and phase defect
$\varphi_d \neq0$ at 14 positions $n={-6,..,7}$ embedded in a passive
lattice ($G_p=0$) with no periodic phase potential ($\varphi_p=0$). In
case of gain/loss defects ($G_d\neq0$), some couplers at the
boundaries have active (red/blue) fibers on one side and passive
(black) fibers on the other port in this spatial equivalent picture to
fulfill the conditions of PT symmetry. To describe this with the
iteration equations (1) in the main paper, the gain/loss value in the
short loop is shifted by one position. Therefore, $\widetilde{G}(n+1)$
is evaluated in the short loop (amplitudes $u_n^m$) while
$\widetilde{G}(n)$ is evaluated in the long loop (amplitudes
$v_n^m$).}
\expfigure{figs/bic27-messnr_03_suppl}{
Data in both loops and comparison to simulations of measurements shown
in Fig.~4(c).}
\expfigure{figs/bic27-messnr_06_suppl}{
Data in both loops and comparison to simulations of measurements shown
in Fig.~4(d). A breakdown of localization is observed in experiment after 80 steps in $m$ due to gain saturation of amplifiers.}
\expfigure{figs/bic36-messnr_03_suppl}{
Data in both loops and comparison to simulations of measurements shown
in Figs.~5(b--d).}
\section{Experimental setup\label{sec:exp}}
The following sections give a comprehensive description of the experimental methods used to obtain the results presented in the publication. For a more accessible and less technical explanation of our setup, see the Supplemental Material of Refs.~\cite{Regensburger2012} and \cite{Regensburger2011}.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{figs/double-loop-detail-withamps}
\caption{Scheme of the fiber setup. Two loops of optical fibers are connected by a central 50\% fiber coupler. $\Delta L$: Length difference between the loops; ISO: Optical Faraday isolator; PC: Polarization controller; BPF: Tunable bandpass filter; SOA: Semiconductor optical amplifier; AOM: Acousto-optic modulator; PM: Phase modulator with integrated polarizer; PD: Photodiode; LA: Logarithmic voltage amplifier.\label{fig:exp}}
\end{figure}
The experimental setup depicted in Fig. \ref{fig:exp} is assembled entirely of standard equipment for C-band optical telecommunication systems. It is based on two loops of standard single-mode fiber that have a measured average length of $L=\SI{23406.6}{\ns} \cdot c_\text{fiber}$, with $c_\text{fiber}$ being the speed of light in the fibers at the signal wavelength. The physically relevant quantity of our measurement is not the actual fiber length, but the optical round-trip times that are used to extract the pulse intensities from the signal (see below). Both loops are connected by a central 50/50 fiber-optical coupler. The length differential $\Delta L=\SI{276.1}{\ns} \cdot c_\text{fiber}$ introduces a transverse coupling between adjacent pulse positions $n$ via time multiplexing (see Section \ref{sec:time-mult} below). An additional 50/50 coupler is used in each loop for signal monitoring. In one loop, this coupler also injects the signal pulse from a DFB laser diode (\textit{SEI SLT5411-CB-F400}) with direct current modulation. It is powered by a \textit{Hytek HY 6510} laser diode driver chip and a \textit{Hytek HY 5610} TEC controller chip. The laser pulses at \SI{1545.3}{\nm} have a close to rectangular shape with duration of \SI{200}{\ns} such that it fits well into the time slot of \SI{276.1}{\ns} fixed by the loop length differential.
Two common telecommunication monitoring photodiodes are connected to the 50/50 monitor couplers in both loops. Their time-varying electrical signals are acquired with a fast digital real-time oscilloscope after being amplified by high-bandwidth logarithmic amplifiers (\textit{Femto HLVA100}). The \SI{100}{\MHz} total bandwidth of the diagnostic system is large enough to resolve the temporal shape and peak intensity of all circulating optical pulses.
An electro-optical lithium-niobate phase modulator (\textit{Covega LN65S}) is driven by a \SI{100}{\MHz} arbitrary waveform generator (\textit{Tektronix AFG3102}) to apply the real-valued potential function $\varphi (n,m)$ on the pulse phases in the long loop. This way, a computer-generated time-dependent voltage signal is effectively translated into the temporal equivalent of a transverse distribution of the refractive index in the optical mesh lattice (see Section \ref{sec:ptsymm} and Fig. \ref{fig:timing-scheme}). The voltage function is programmed such that it also contains the phase defects required for studying light localization around defects in passive and non-Hermitian optical mesh lattices. It is stored in the integrated memory of the waveform generator and repeated at a rate of \SI{25}{\Hz} at which the whole experiment is operated. All other electrical signals necessary to control the setup operation are also synthesized by waveform generators (\textit{Agilent 33522A}).
Two semiconductor optical amplifiers (\textit{Finisar G111}) do not only counterbalance all round-trip losses stemming from component losses and leakage at the monitor couplers, but also provide a constant amount of excess gain. With the acousto-optic modulators (AOM, \textit{Neos 26027} operated in zeroth diffraction order) in their transparent state, the variable attenuators are used to adjust the round-trip losses such that all pulse intensities are multiplied by a residual gain factor of $1.46$. Depending on the gain/loss modulation parameters, the AOMs attenuate the signals in the loops to have a certain amount of net gain or net loss. This enables the temporal implementation of arbitrary distributions of optical amplification and attenuation for each pulse that are prepared in the way necessary to realize $\mathcal{PT}$ symmetry in the structure. At the boundaries between two different steps $m$ and $m+1$, the AOMs are set to a highly absorptive state to prevent uncontrolled noise build-up and signal cross-talk between two steps (see Section \ref{sec:ptsymm} and Fig. \ref{fig:timing-scheme}). Between two successive measurements, the AOMs were set to maximum absorption to erase all circulating optical signals from the loops.
Several additional passive optical fiber components are used for polarization and gain control as well as for noise filtering. One fiber-coupled optical Faraday isolator is present in each loop to filter out counter-propagating noise and back reflections. In addition, a single bandpass filter in the long loop (\textit{DiCon TF-1565-0.8-FC/APC-0.3-1}) and a combination of two bandpass filters in the short loop (\textit{JDS Uniphase VCF050-Z001} and \textit{JDS Fitel TB1500B}) suppress excess noise due to amplified spontaneous emission of the amplifiers. Several fiber-quenching polarization controllers and another monitoring photodiode at the rejection port of a fiber-coupled polarizing beam splitter in the short loop are used to adjust and monitor the state of polarization. Additionally, the phase modulator has an integrated polarizer that preserves a clean state of polarization in the long loop. The static round-trip losses of the short loop are tuned with a precision variable optical attenuator (\textit{JDS Fitel}). In the other loop, a polarization controller in front of the polarizer of the phase modulator serves the same purpose.
Directly after each signal acquisition, which is averaged over a sufficient number of realizations, the residual amplifier noise is recorded in a dark frame that is realized under exactly the same conditions except for the laser source being switched off. It is averaged the same way as the signal frame and subsequently subtracted point by point. Afterwards, the intensities of pulse peaks are extracted from the data (see Section \ref{sec:time-mult}). Here, an additional arithmetic mean along several points of the temporal waveform is applied in the evaluation procedure.
All measurements within the same figure in the Letter were recorded in a subsequent measurement series where the parameter variation was realized by a change of the modulation waveform only. No manual intervention e.g. to adjust the static net gain of the setup was performed in between the measurements in a series.
\section{Time multiplexing\label{sec:time-mult}}
\begin{figure}\centering
\includegraphics[width=0.8\textwidth]{figs/timeline4}
\caption{Time multiplexing. Schematic drawing of the time trace of the optical power $U(t)$ as recorded by the photodiode attached to the short loop. Step number $m$ and position $n$ are indicated. Source: Ref. \cite{Regensburger2012}.\label{fig:timeline}}
\end{figure}
In our experimental approach, time multiplexing is applied to realize a $\mathcal{PT}$-symmetric optical lattice in the temporal domain. Figure \ref{fig:timeline} shows a sketch of the signal waveform $U(t)$ as detected by the photodiode attached to the monitor coupler in the short loop. At first, the single pulse that is initially inserted into the long loop is detected after having completed one trip around the short loop (step $m=1$). One loop round-trip later (after a time interval $\Delta T=L/c_\text{fiber}$, with $L$ being the average length of the two loops), two neighboring pulses with a significantly shorter relative time offset $\Delta t=|\Delta L|/c_\text{fiber} \ll \Delta T$ are recorded by the photodiode. The pulse that arrives first went around the short loop twice, while the second pulse first travelled through the long and afterwards through the short loop. After the pulses have circled the loops one more time ($m=3$), a sequence of three pulses is observed, with the central one being subject to wave interference. The coherent superposition arises because this temporal position can be reached via two alternative paths through the two-loop network (long--short or short--long), see Fig.~1(b) of the main paper.
The pulse intensities in the acquired signal $U(t)$ are then extracted to the discrete $m \times n$ coordinate system of the mesh lattice via $\left|u_n^m \right|^2=U\left(m \Delta T + \frac{n}{2} \Delta t \right)$ with the above-mentioned read-out timings $\Delta T$ and $\Delta t$. Following this notation, only every second position $n$ can be physically accessed by the pulses. This simplifies the analytical treatment and results in a symmetric behavior of the system. The initial pulse starts at step $m = 0$ and the central position $n = 0$. In all round-trips where the step $m$ is an even number ($m = 0, 2, 4,\ldots$), only even positions $n$ are physically relevant. In all other round-trips ($m=1,3,5,\ldots$) only odd positions $n$ can be reached physically. Signals at physically non-relevant positions are set to zero. Finally, the extracted data is visualized on the discrete 2D $m \times n$ grid in a logarithmic color scale as indicated next to all plots.
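The read-out rule can be sketched numerically; the trace, the pulse position and the parameter values below are synthetic stand-ins for the recorded photodiode signal:

```python
import numpy as np

# Assumed read-out parameters: round-trip time dT, slot spacing dt, sample rate fs
dT, dt, fs = 23406.6e-9, 276.1e-9, 1e9
M, N = 4, 4                                  # number of steps and position half-width

# Synthetic photodiode trace with one pulse at step m = 1, position n = 1
trace = np.zeros(int((M + 1) * dT * fs))
trace[int(round((1 * dT + 0.5 * 1 * dt) * fs))] = 0.7

grid = np.zeros((M + 1, 2 * N + 1))
for m in range(M + 1):
    for n in range(-N, N + 1):
        if (m + n) % 2:                      # only even m+n is physically reachable
            continue
        t_idx = int(round((m * dT + 0.5 * n * dt) * fs))
        if 0 <= t_idx < len(trace):
            grid[m, n + N] = trace[t_idx]    # |u_n^m|^2 = U(m*dT + n/2*dt)
```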
To conduct the simulations of signal propagation shown in this Supplemental Material, Eq. (1) of the main paper is evaluated numerically with the initial condition of a single pulse having amplitude $1$ in the long loop and the parameters stated in the respective figure captions.
\section{Experimental implementation of parity-time symmetry\label{sec:ptsymm}}
\begin{figure}\centering
\includegraphics[width=0.8\textwidth]{figs/timing-scheme}
\caption{Timing scheme of modulation to impose a $\mathcal{PT}$-symmetric optical potential. Simplified traces are displayed; pulse power levels do not account for amplification or attenuation of signal. The phase modulation contains a central defect. Source: Adapted from Ref. \cite{Regensburger2012}.\label{fig:timing-scheme}}
\end{figure}
For the realization of time-dependent gain/loss in the system, the AOMs in both loops are modulated to switch the net gain or net loss between pulses, thus enabling a completely arbitrary distribution of the imaginary optical potential throughout all lattice points of the time-multiplexed optical mesh.
The operation of the experiment is based on a computer-programmed modulation scheme of the loop losses by AOMs and of the signal phase by the phase modulator, as displayed in Fig. \ref{fig:timing-scheme}. Here, $U(t)$ and $V(t)$ illustrate the temporal traces detected by the monitoring photodiodes in the short and long loops. The phase modulator applies the same phase function $\varphi(n)$ in every loop round-trip. In the case shown, it contains a central phase defect around which localized modes will arise during propagation. Finally, the AOM modulation signals, used to obtain net gain and net loss obeying parity-time symmetry, are indicated for both loops. As required by $\mathcal{PT}$ symmetry, the two loops are repeatedly switched between gain and loss at every loop round-trip.
\section{Introduction}
Fractional quantum Hall effects \cite{laughlinoriginal,girvin,yoshoika} are among the most profound collections of phenomena to emerge in interacting quantum many-body systems. The elementary excitations in these systems do not act like bosons or fermions; rather, they are {\em anyons}, which in some cases can be used for a robust form of quantum computing \cite{kitaev2003,nayaksimon}. All physical examples of fractional quantum Hall effects are in two-dimensional (2D) electron gases. Here we propose a method for linking standard qubit designs that will realize a \textit{bosonic} fractional quantum Hall effect. The rich theoretical literature on bosonic fractional quantum Hall effects suggests that there will be a large number of interesting states \cite{cooperwilkin,hafezi,sorensen,cooper,moller,hormozimoller,juliadiaz2,juliadiaz} that could be explored in our system.
These include `Pfaffians' and their generalizations. Furthermore, one could anticipate that some important experiments (such as directly braiding quasiparticles) may be simpler in a qubit array than in a GaAs layer surrounded by AlGaAs.
There are several competing approaches to engineering bosonic fractional quantum Hall effects. One proposal uses Raman lasers to simulate the magnetic vector potential in neutral cold atoms \cite{dalibardRMP,bloch}.
The technical challenges are, however, quite daunting: new cooling methods need to be designed to offset heating from the Raman lasers, and the most natural probes are indirect.
Another scheme is to use lattices of tiny superconducting grains (charge qubits, \cite{choi,stern,fazio,vanderzant,makhlinschoen,clarkewilhelm}) connected through Josephson junctions.
Suitably low temperatures can be reached in a dilution refrigerator, and the system is readily studied using transport measurements. Unfortunately, random charge noise, which scales linearly with the interaction strength, would prevent the quantum Hall regime from being reached without significant local tuning of the potentials on hundreds or thousands of lattice sites. Other proposals include superconducting Jaynes-Cummings lattices \cite{nunnenkamp} and ``photon lattices" of coupled optical waveguides \cite{hafezidemler,umucalilarcarusotto,hafezilukin}, each of which have their own advantages and shortcomings.
We here propose a new and promising approach. Namely, we consider a circuit of qubits, with a geometry which naturally maps onto a system of charged bosons hopping in a magnetic field. In order to produce complex hopping matrix elements we propose a lattice of coupled asymmetrical pairs of qubits, which we label as $A$ or $B$. We choose device parameters so that excitation energy $\omega_A$ of the $A$ qubits is significantly smaller than the excitation energy of the $B$ qubits, and place a $B$ qubit on each link between neighboring $A$ qubits. Further, we couple them to each other through alternating hopping ($\sigma_{A}^{+} \sigma_{B}^{-} + H.C.$, henceforth referred to as a ``$\pm$" coupling) and potential ($\sigma_{A}^{z} \sigma_{B}^{z}$, a ``$zz$" coupling) terms. We also apply an external oscillating electromagnetic field of frequency $\omega$ to each qubit, with the relative phase of the signal applied to the $B$ qubits shifted relative to that of the $A$ qubits by a locally tunable $\varphi_s$. Since the $B$ qubits are higher energy than the $A$ qubits, we can integrate them out, leading to complex tunneling matrix elements (the amplitude of a process where the states of neighboring qubit pairs are exchanged) between $A$ qubits with phases that can be tuned to any value by adjusting $\varphi_s$.
As we will describe below, a particularly attractive realization of this architecture would be to use three junction ``flux qubits" (FQs) \cite{mooijorlando,orlandomooij,chiorescunakamura,lyakhovbruder,majerpaauw,gracjar,matsuo,kakuyanagi,bourassa,jiangkane}. The flux qubits are mesoscopic superconducting rings interrupted by three Josephson junctions, placed in a magnetic field which is tuned so that nearly 1/2 of a magnetic flux quantum penetrates the ring. The energies of the flux qubits can be tuned by adjusting this magnetic field, or by varying the areas of the Josephson junctions, so that the $B$ qubits are higher energy than the $A$ qubits as outlined above. We then capacitively couple all the flux qubits to an external, oscillating voltage $V_0 \sin \omega t$, and arrange the couplings so that the phases of the voltage applied to the $B$ qubits are shifted relative to the $A$ qubits. The subtle interplay of the oscillating applied voltages with the mix of charge (capacitive) and phase (Josephson) couplings introduces phase shifts which make these hopping matrix elements complex, mimicking the Peierls phases found for charged particles in magnetic fields.
All of our flux qubits are operated in the regime where the Josephson energy $E_{\rm J}$ is large compared to the charging energy $E_{\rm C}$, so charge noise effects are exponentially suppressed. The system is therefore almost completely insensitive to stray low-frequency electric fields. The many-body excitation gap, a key feature of anyon states, can be measured through the single-qubit response to applied oscillating voltages. The large nonlinearities of the flux qubit devices imply that the first excited states experience an effectively infinite on-site repulsion. We note also that our scheme is not intended to function as a circuit QED architecture (in contrast to the recent work of Koch \textit{et al} \cite{kochhouck,nunnenkampkoch} and others); the device parameters should be chosen so that the external voltages can be treated as purely classical sources, with no dynamical photons present in our system.
The remainder of this paper is organized as follows. In section \ref{general}, we write down the basic coupled qubit Hamiltonian, and outline the conditions under which arbitrary external gauge fields can be simulated. In section \ref{qubits}, we describe three junction flux qubits, and how they can be coupled to obtain the arbitrary complex hopping phases derived in section \ref{general}. Having derived these phases, in section \ref{LLL} we show how the circuits of the two previous sections can be used as building blocks for exotic boson fractional quantum Hall states. Finally, in section \ref{ex}, we show how a simple arrangement of four flux qubits could experimentally demonstrate a nonzero effective gauge field, and offer concluding remarks.
\section{General Formalism}\label{general}
\begin{figure}
\vspace{0.5in}
\psfrag{qa}{$A$}
\psfrag{qb}{$B$}
\psfrag{va}{$\widehat{V} \sin \omega t$}
\psfrag{vb}{$\widehat{V} \sin \of{\omega t + \varphi_{s}}$}
\psfrag{cpm}{$D^{\pm}$}
\psfrag{cz}{$D^{z}$}
\psfrag{dpm}{$D^{\pm}$}
\psfrag{dz}{$D^{z}$}
\includegraphics[width=3.0in]{simplepairs.eps}
\caption{Basic coupling structure for the $A$ and $B$ qubits. Each site in our many-body lattice would correspond to a single $A$ qubit, which couples to its neighbors through one $B$ qubit per link, joined through alternating hopping ($\pm$) and potential ($zz$) couplings as described in section~\ref{general}. Though drawn in one dimension in the figure, we ultimately intend to construct 2d lattices in this manner, and generalizations to even higher dimensions are also possible.}\label{ABfig}
\end{figure}
\subsection{Berry's Phase of a Rotating Spin}
Before outlining the physics of our qubit array, we would first like to discuss a simple example that elucidates the origin of the complex hopping phases. We will consider a pair of spins and examine the Berry's phase effects generated during a process where an excitation is transferred from one spin to its neighbor (whose eigenstates lie on a different axis from the first spin) by rotating both spins about $z$, and then transferred back by rotating both spins about $y$. Specifically, let us consider two initially uncoupled spin-$\frac{1}{2}$ degrees of freedom, with the Hamiltonian,
\begin{eqnarray}
H = \sigma_{A}^{x} + \cos \theta \sigma_{B}^{z} + \sin \theta \of{\cos \varphi \sigma_{B}^{x} + \sin \varphi \sigma_{B}^{y} }.
\end{eqnarray}
Let us assume that initially spin $A$ is excited and spin $B$ is in its ground state. We first act with the operator $\sigma_{A}^{z} \sigma_{B}^{z}$ to transfer the excitation from $A$ to $B$, and assume energy is conserved in this process so that the final state after acting with $\sigma_{A}^{z} \sigma_{B}^{z}$ is $\ket{0_A 1_B}$. We then act with $\sigma_{A}^{y} \sigma_{B}^{y}$ to transfer the excitation back; the resulting matrix element $\MM$ for the entire process is
\begin{eqnarray}
\MM &=& \bra{1_{A} 0_{B}} \sigma_{A}^{y} \sigma_{B}^{y} \ket{0_{A} 1_{B} } \bra{0_{A} 1_{B}} \sigma_{A}^{z} \sigma_{B}^{z} \ket{ 1_{A} 0_{B} } \\
&=& \sin \theta \of{ \cos \varphi + i \sin \varphi \cos \theta }. \nonumber
\end{eqnarray}
For $\theta \neq \pi/2$ and $\varphi \neq 0,\pi$, $\MM$ is complex, and the resulting phase can be understood as a consequence of the Berry's phase acquired by a rotating spin, though we note of course that the Berry's phase discussed here is only an analogy, since we are considering the action of pairs of operators and not continuous, adiabatic changes to the system's wavefunction. When a spin $m$ is rotated along a closed path, the resulting phase is equal to $m$ times the area subtended by the path on the unit sphere. In this case, we have two spins which rotate, but both end in the same states in which they started, so we obtain a gauge-invariant phase equal to the sum of the phases picked up by both spins. The area subtended by $A$ is just $\pi$, but the area subtended by $B$ depends on the projection of $\sigma_{y}$ and $\sigma_{z}$ onto its quantization axis, and thus depends on $\varphi$, yielding the result above. Note that if we had acted with $\sigma_{A}^{z} \sigma_{B}^{z}$ or $\sigma_{A}^{y} \sigma_{B}^{y}$ twice instead of using a combination of the two, the outcome would necessarily be real, since $\MM$ would be the product of a matrix element and its Hermitian conjugate. In the Berry's phase picture, the phase is zero simply because the path of each spin's rotation would be a 1d line, and thus each area is zero. Both inequivalent eigenstates and anisotropic operations are necessary for the spin-transfer matrix element to be complex.
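The matrix element $\MM$ derived above can be checked numerically. The short script below is an illustrative sketch: it diagonalizes the two single-spin Hamiltonians, forms the product of matrix elements defining $\MM$, and verifies the $\theta$ and $\varphi$ dependence. Because the product is invariant under the arbitrary phases of the eigenvectors but the overall sign conventions may differ, we compare only the magnitudes of the real and imaginary parts against $\sin \theta \cos \varphi$ and $\sin \theta \cos \theta \sin \varphi$.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def transfer_matrix_element(theta, phi):
    """Product <1A 0B|sy_A sy_B|0A 1B><0A 1B|sz_A sz_B|1A 0B> for
    H = sx_A + n.sigma_B, n = (sin(t)cos(p), sin(t)sin(p), cos(t))."""
    HB = np.cos(theta) * sz + np.sin(theta) * (np.cos(phi) * sx + np.sin(phi) * sy)
    _, vA = np.linalg.eigh(sx)   # columns sorted by eigenvalue: ground, excited
    _, vB = np.linalg.eigh(HB)
    gA, eA = vA[:, 0], vA[:, 1]
    gB, eB = vB[:, 0], vB[:, 1]
    psi_10 = np.kron(eA, gB)     # |1_A 0_B>
    psi_01 = np.kron(gA, eB)     # |0_A 1_B>
    yy = np.kron(sy, sy)
    zz = np.kron(sz, sz)
    return (psi_10.conj() @ yy @ psi_01) * (psi_01.conj() @ zz @ psi_10)
```

For generic $\theta$ and $\varphi$ the returned value has a nonzero imaginary part, while at $\varphi = 0$ it is purely real, exactly as argued above.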
It is precisely this effect--the phase picked up by a spin which rotates as it propagates in space--which we will use to engineer artificial hopping phases in our lattice. Specifically, imagine the case in which we had two (identical) $A$ spins with a $B$ spin in between them, and after acting with $\sigma_{A1}^{z} \sigma_{B}^{z}$ to pass an excitation from $A1$ to the $B$ spin, we then act with $\sigma_{A2}^{y} \sigma_{B}^{y}$ to transfer the excitation to the second $A$ qubit instead of sending it back to the first. Since the $A$ spins are identical, the matrix element $\MM$ should be the same as the one derived above, and therefore by letting $B$ spins mediate a hopping coupling, we can introduce tunable phases in a lattice of $A$ spins.
Engineering this structure in a real spin (or qubit) lattice is by no means trivial. For real spins, one could introduce a spatially varying magnetic field to generate the inequivalent local eigenstates, but adding the anisotropic spin-spin interactions ($\sigma_{A}^{z} \sigma_{B}^{z}$ or $\sigma_{A}^{y} \sigma_{B}^{y}$ instead of $\mathbf{S}_{A} \cdot \mathbf{S}_{B}$) is very difficult. Conversely, for a more general lattice of qubits, generating passive anisotropic couplings is often straightforward, but generating inequivalent local eigenstates is not. We here demonstrate that coupling the qubits to a continuously oscillating monochromatic external field can introduce the required rotations, provided that the phases of the signals applied to the $B$ qubits are different from those applied to the $A$ qubits. By adjusting these phases at a local level, we can independently tune the tunneling phase between any linked sites on the lattice, and can thus simulate any desired external gauge field, at least in principle.
\subsection{Qubit Coupling Hamiltonian}
We will consider a lattice of qubits, arranged such that there is an asymmetric pair of qubits $A$ and $B$ at each site. We shall assume throughout that the following conditions hold:
(1) The nonlinearities of each physical system which we use as a qubit are large enough that we can consider them to be purely two-level systems, and ignore all eigenstates besides $\ket{0}$ and $\ket{1}$. This requirement ultimately constrains the magnitudes of the couplings between qubits, which must be small compared to the physical devices' absolute nonlinearities.
(2) The qubits can be coupled to an external electromagnetic field. We shall further require that the electromagnetic field operator $\widehat{V}$ (which could represent the coupling to magnetic fields as well) has no expectation value in either state, so $\bra{0}\widehat{V}\ket{0} = \bra{1}\widehat{V}\ket{1} = 0$. These fields will always be present in the qubit array Hamiltonian, and we will treat them in the standard rotating wave approximation.
(3) We must be able to introduce two types of coupling between the qubits, so that the qubit-qubit Hamiltonian takes the form
\begin{eqnarray}
H_{int} = D^{\pm} \of{ \sigma_{A}^{+} \sigma_{B}^{-} + \sigma_{A}^{-} \sigma_{B}^{+} } + D^{z} \sigma_{A}^{z} \sigma_{B}^{z}.
\end{eqnarray}
We must have independent control over both $D^\pm$ and $D^z$ for our method to succeed. Note that any physical coupling between the qubits will typically include terms which violate number conservation. However, once we transform to the rotating frame with the external oscillating voltage applied, the terms in $H_{int}$ are unchanged, while anomalous terms such as $\sigma_{A}^{-} \sigma_{B}^{z}$ or $\sigma_{A}^{+} \sigma_{B}^{+}$ become rapidly oscillating and can be dropped from the low-energy Hamiltonian.
(4) We must be able to tune the relative phase $\varphi_s$ of the external electromagnetic field applied to $B$ qubits relative to the $A$ qubits, as shown in fig.~\ref{ABfig}. If $\varphi_s \neq 0,\pi$ then time reversal symmetry is broken, since we cannot choose a zero point for the time $t$ so that both $V_{A} \of{t}= V_{A} \of{-t}$ and $V_{B} \of{t} = V_{B} \of{-t}$. Breaking time reversal symmetry is a basic requirement for obtaining nontrivial effective gauge fields.
These requirements could be fulfilled by a large number of physical systems, including spin qubits, trapped ions, and superconducting devices, which will be the focus of this work. Let us now consider the Hamiltonian of a given qubit pair, $H_{AB}$. Before turning on the oscillating fields, our qubit Hamiltonian is
\begin{eqnarray}
H_{AB}^{0} &=& \omega_{A} \sigma_{A}^{z} + \omega_{B} \sigma_{B}^{z} \\
& & + \cuof{ D^{\pm} \of{ \sigma_{A}^{+} \sigma_{B}^{-} + \sigma_{A}^{-} \sigma_{B}^{+} } \; {\rm{or}} \; D^{z} \sigma_{A}^{z} \sigma_{B}^{z} }. \nonumber
\end{eqnarray}
We now turn on the oscillating fields. When acting on $A$ or $B$, we have:
\begin{eqnarray}
\widehat{V} = 2 \Omega_{A/B} \sigma_{A/B}^{y},
\end{eqnarray}
with $2 \Omega_{A/B} = \bra{1_{A/B}} \widehat{V} \ket{0_{A/B}}$, which we choose to be real (in the flux qubits described below, the full matrix element for the $\widehat{V}$ operator is imaginary, but we have absorbed those factors of $i$ into $\sigma^{y}$). We now examine
\begin{eqnarray}\label{actV}
\widehat{V} \sin \omega t &=& \Omega_{A/B} \of{e^{i \omega t} \sigma_{A/B}^{-} + e^{- i \omega t} \sigma_{A/B}^{+}} \\
& & + \Omega_{A/B} \of{e^{-i \omega t} \sigma_{A/B}^{-} + e^{ i \omega t} \sigma_{A/B}^{+}}. \nonumber
\end{eqnarray}
We now transform to the rotating frame by applying the unitary transformation $\ket{\psi} \to e^{-i \omega \of{\sigma_{A}^{z} + \sigma_{B}^{z}} t} \ket{\psi}$. The time dependence of the terms on the first line of (\ref{actV}) is cancelled out, leaving us with $\Omega_{A/B} \sigma_{A/B}^{x}$ plus a set of terms which are rapidly oscillating with frequency $2\omega$. We now make the rotating wave approximation (RWA) to neglect these terms; if further accuracy is required we can treat them through a second order perturbation theory in $\of{\Omega_{A/B}/\omega}$ and obtain a small correction to the $\sigma_{A/B}^{z}$ terms. After transforming to the rotating frame and invoking the RWA, $H_{AB}$ is:
\begin{eqnarray}\label{HAB}
H_{AB} &=& \of{\omega_{A} - \omega} \sigma_{A}^{z} + \of{\omega_{B} - \omega} \sigma_{B}^{z} \\ & & + \Omega_{A} \sigma_{A}^{x} + \Omega_{B} \of{ \cos \varphi_{s} \sigma_{B}^{x} + \sin \varphi_{s} \sigma_{B}^{y} } \nonumber \\ & & + \cuof{ D^{\pm} \of{ \sigma_{A}^{+} \sigma_{B}^{-} + \sigma_{A}^{-} \sigma_{B}^{+} } \; {\rm{or}} \; D^{z} \sigma_{A}^{z} \sigma_{B}^{z} }. \nonumber
\end{eqnarray}
From now on we will assume $\omega$ is tuned to resonance with the $A$ qubits, so that $\omega = \omega_{A}$ and the single-site Hamiltonian for the $A$ qubits is just $\Omega_{A} \sigma_{A}^{x}$.
To construct the full qubit lattice, we wire the qubits as in fig.~\ref{ABfig}, so that the connection between any pair of neighboring $A$ qubits consists of a $zz$ coupling to a $B$ qubit followed by a $\pm$ coupling to the neighboring $A$ qubit. For simplicity, we will ignore cases where $A$ qubits are coupled directly; such couplings will produce either neighbor-neighbor potential interactions or real-valued hopping matrix elements, depending on their structure. We assume that the energy difference $E_{B} - E_{A} = 2 \sqrt{\of{\omega_{B} - \omega_{A} }^{2} + \Omega_{B}^{2} } - 2 \Omega_{A} \equiv \delta E$ is large compared to $D^{\pm}$ and $D^{z}$, so that we can treat the $A-B$ coupling perturbatively. We now eliminate the $B$ qubits using second order perturbation theory; noting that all $A$ qubits are identical, the resulting Hamiltonian, to order $D^{2}/\delta E$, is given by:
\begin{eqnarray}\label{Hlat}
H &=& \sum_{ij} \of{J_{ij} a_{i}^{\dagger} a_{j} + H. C.} + 2 \tilde{\Omega}_{A} \sum_{i} a_{i}^{\dagger} a_{i}, \\
J_{ij} &=& -\frac{D_{ij}^{z} D_{ij}^{\pm}}{ \delta E} \sin \theta \of{ \cos \varphi_{s(ij)} + i \cos \theta \sin \varphi_{s(ij)} }, \nonumber \\
\cos \theta &=& \frac{\omega_{B} - \omega_{A} }{ \sqrt{\of{\omega_{B} - \omega_{A} }^{2} + \Omega_{B}^{2} } }. \nonumber
\end{eqnarray}
Here $a_{i}^{\dagger}/a_{i}$ creates/annihilates an excitation in the $A$ qubit at site $i$, and $\tilde{\Omega}_{A}$ is equal to $\Omega_{A}$ plus $O \of{J}$ shifts which depend on the coordination number of the lattice and magnitudes of the couplings. Since the qubits are spin-$\frac{1}{2}$, we have an effective hard-core constraint, so $a_{i}^{\dagger} \ket{1_i} = 0$. If we now identify
\begin{eqnarray}\label{Peierls}
{\rm{arg}}J_{ij} \equiv q \int_{r_{i}}^{r_{j}} \mathbf{A} \cdot d\mathbf{r},
\end{eqnarray}
we see that the complex phases of $J$ are identical to the Peierls phases of a charged particle moving on a lattice in an external gauge field $\mathbf{A}$. Further, if we choose parameters so that the $B$ qubits are far off-resonance, $\theta$ will be small and
\begin{eqnarray}\label{Jphase}
J_{ij} \to -\frac{D_{ij}^{z} D_{ij}^{\pm}}{ \delta E} \theta e^{i \varphi_{s(ij)} } + O \of{\theta^{3}}.
\end{eqnarray}
In this regime, we can freely adjust the phase of $J$ without significantly altering its magnitude, and can thus simulate any time-dependent external gauge field configuration we desire, simply by adjusting the $B$ qubit phase shifts $\varphi_{s(ij)}$ at each link.
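As a quick numerical illustration of this tunability, the function below evaluates the expression for $J_{ij}$ above (all rates in the same angular-frequency units; the parameter values in the check are illustrative, not device values) and confirms that, far off resonance, the phase of $J$ tracks $\varphi_s$ while its magnitude is nearly independent of $\varphi_s$.

```python
import numpy as np

def hopping_J(Dz, Dpm, omega_A, omega_B, Omega_A, Omega_B, phi_s):
    """Effective A-A hopping matrix element J, following the perturbative
    expression derived above (second order in the A-B couplings)."""
    root = np.sqrt((omega_B - omega_A) ** 2 + Omega_B ** 2)
    dE = 2.0 * root - 2.0 * Omega_A          # rotating-frame A-B energy splitting
    cos_t = (omega_B - omega_A) / root       # cos(theta)
    sin_t = Omega_B / root                   # sin(theta)
    return -(Dz * Dpm / dE) * sin_t * (np.cos(phi_s) + 1j * cos_t * np.sin(phi_s))
```

In the far-detuned regime $\cos \theta \to 1$, so $J \to -\of{D^z D^\pm / \delta E} \sin \theta \, e^{i \varphi_s}$ and the magnitude fluctuations with $\varphi_s$ are suppressed, as claimed.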
The ability to engineer artificial gauge fields of any desired configuration has tremendous potential to unlock new physics, and we will discuss the most natural application, simulating a uniform magnetic field to realize strongly interacting bosons in the quantum Hall regime, later in the work. Before doing so, however, we will first describe a possible implementation of this architecture in superconducting flux qubits. While flux qubits are certainly not the only-- or even necessarily the best-- qubits to use for this purpose, our proposal will demonstrate that a fairly robust implementation of our architecture can be realized using device parameters from previous experiments. Thus, small lattices should be within reach of current technology.
\begin{widetext}
\begin{figure}
\psfrag{r1}{$(1)$}
\psfrag{r2}{$(2)$}
\psfrag{Aej}{$\alpha E_{\rm J}, \alpha C,f$}
\psfrag{ejc}{$E_{\rm J},C$}
\psfrag{qj}{$(L)$}
\psfrag{qk}{$(R)$}
\psfrag{Kc}{$\kappa E_{\rm J}, \kappa C$}
\psfrag{Bej}{$\beta E_{\rm J}, \beta C,f$}
\psfrag{Ac}{$\gamma C$}
\psfrag{Bc}{$g C$}
\psfrag{ve}{$V_{0} \sin \of{\omega t + \varphi_{s} }$}
\psfrag{sve}{$V_{0} \sin \omega t $}
\psfrag{gamc}{$\eta C$}
\psfrag{fp1}{$f'$}
\psfrag{fp2}{$f''$}
\includegraphics[width=6in]{auxqbit_IV.eps}
\caption{(Color online) Basic circuit architecture. The regions enclosed in dashed boxes are three-junction flux qubits, which are connected to a physical ground. The blue ($A$, left and right) and red ($B$, center) qubits differ from each other by a rescaling of the area of the central Josephson junction, which is tuned so that the $B$ qubits have higher energy excitations. A magnetic field penetrates the plane so that $f$ flux quanta are enclosed by each ring. An oscillating voltage $V_{E} \of{t}$ is applied near resonant transitions to both qubits, mixing their ground and first excited states. Excitations in the $A$ flux qubits can tunnel through the $B$ qubits to each other; the oscillating voltage will make this transition matrix element complex. The qubit properties and the couplings between them are discussed in section~\ref{qubits}.}\label{qubitfig}
\end{figure}
\end{widetext}
\section{Flux Qubits}\label{qubits}
The three-junction flux qubit consists of a superconducting ring interrupted by three Josephson junctions as shown in fig.~\ref{qubitfig}, with one junction whose area is rescaled by $\alpha$ relative to the other two. A constant, tunable magnetic flux bias of $f \approx 1/2$ flux quanta is applied through the loop. We choose the bottom third of the ring to be ground (which will be a physical ground in our case) with phase $\phi=0$; the two remaining degrees of freedom of the flux qubit are then the phases $\phi_1$ and $\phi_2$ of the other two superconducting regions. The derivation of the flux qubit Hamiltonian is described in detail in Orlando \textit{et al} \cite{orlandomooij}; in terms of the phases $\phi_1$ and $\phi_2$, the flux qubit Hamiltonian $H_{FQ}$ is
\begin{eqnarray}\label{HFQ}
H_{FQ} &=& \frac{ \of{1+ \alpha + \eta} \of{Q_{1}^{2} + Q_{2}^{2} } + 2 \alpha Q_{1} Q_{2} }{\of{1+\eta} \of{1 + 2 \alpha + \eta} C } \\
& & - E_{\rm J} \sqof{ \cos \phi_1 + \cos \phi_2 + \alpha \cos \of{ 2 \pi f + \phi_1 - \phi_2 } } \nonumber \\
& & + \frac{2 \eta \of{\alpha Q_{2} + \of{1+\alpha + \eta} Q_{1} } V_{0} \sin \omega t }{\of{1+\eta} \of{1 + 2 \alpha + \eta}}. \nonumber
\end{eqnarray}
Here, $Q_{j} = - 2 e i \partial/ \partial \phi_j$, $E_{\rm J}$ is the Josephson energy of the Josephson junctions and $f$ is the total magnetic flux through the loop in units of the magnetic flux quantum $\Phi_0$. The terms on the third line of (\ref{HFQ}) represent the coupling of the flux qubit to the applied voltage $V_0 \sin \omega t$. For the moment, we will consider this Hamiltonian with $V_0 = 0$.
We let $\phi_\pm = \of{\phi_1 \pm \phi_2 }/2$. For $f \neq 0$, the symmetry between $\phi_1$ and $\phi_2$ is broken, and for $f$ close to 1/2, the ground and first excited states are distinguished by their behavior along the $\phi_-$ direction, as excitations along $\phi_+$ are significantly more expensive. The typical excitation energy for $0.4 < \alpha < 0.6$ and $0.5 < f < 0.55$ is $\omega_{FQ}/2\pi = 12 - 30 \rm{GHz}$ for $E_{\rm J}/h \sim 200 \rm{GHz}$ and $E_{\rm C} =e^2/2C = E_{\rm J}/40$, and the nonlinearities of the spectrum are all reasonably large. In this work we will only consider flux qubits operated at the symmetry point of $f=1/2$, in which case the ground and first excited states are both even along $\phi_+$ and even or odd, respectively, along $\phi_-$. From this, we can readily translate operators in the phase basis to Pauli matrices acting in the qubit basis. We will define the following compact notation for matrix elements:
\begin{eqnarray}\label{Mdef}
\MM_{\widehat{O},s}^{ij} \equiv \bra{i_{s}} \widehat{O} \ket{j_{s}} \; \rm{e.g.} \; \MM_{Q_{1},A}^{01} = \bra{0_A} Q_{1} \ket{1_A}.
\end{eqnarray}
In this notation, we have:
\begin{eqnarray}\label{relations}
Q_{j} &\to & 2 e \of{-1}^{j} \MM_{\partial_{\phi_{-}}}^{01} \sigma^{y}, \\
\sin \phi_{j} &\to & \of{-1}^{j} \MM_{\sin \phi_{1}}^{01} \sigma^{x}, \nonumber \\
\cos \phi_{j} & \to & \frac{\MM_{\cos \phi_1}^{11}+\MM_{\cos \phi_1}^{00}}{2} \mathbf{1} + \frac{\MM_{\cos \phi_1}^{11}-\MM_{\cos \phi_1}^{00}}{2} \sigma^{z}. \nonumber
\end{eqnarray}
For consistency, all matrix elements $\MM$ are calculated between the $V_0=0$ (non-rotating) eigenstates of the flux qubit Hamiltonian.
Let us now turn to the coupling Hamiltonian between the qubits shown in fig.~\ref{qubitfig}. We label the two $A$ qubits by $L$ and $R$. The coupling of the $B$ qubit to the right qubit is a simple capacitive coupling, and so is given by a constant times $\sigma_{B}^{y} \sigma_{R}^{y}$, which becomes a $\pm$ coupling in the rotating frame:
\begin{eqnarray}\label{capterm}
H_{BR} = \frac{8 E_{\rm C} \MM_{\partial_{\phi_{1}}, B}^{01} \MM_{\partial_{\phi_{1}}, R}^{10}}{ \of{1+2\alpha + \eta} \of{1+2\beta + \eta}} \of{\sigma_{B}^{+} \sigma_{R}^{-} + \sigma_{B}^{-} \sigma_{R}^{+}}.
\end{eqnarray}
It is important to note that both $\sigma^{x} \sigma^{x}$ and $\sigma^{y} \sigma^{y}$ become $\pm$ couplings in the rotating frame (up to overall signs), as the components of them which lead to net creation or destruction of excitations are rapidly oscillating and should be dropped. The coupling between the left qubit and the $B$ qubit consists of two Josephson junctions; since these junctions define closed loops through ground, they pick up flux biases $f'$ and $f''$ from the external magnetic field. For simplicity, we choose the wiring geometry so that these biases are both zero mod $2\pi$. When we write the coupling between $L$ and $B$ as a set of Pauli matrices, the $\pm$ terms vanish due to the sign flips in (\ref{relations}), but the $zz$ term survives:
\begin{eqnarray}
H_{LB} &=& -2 \kappa E_{J} \of{\MM_{\cos \phi_1,L}^{11}-\MM_{\cos \phi_1,L}^{00}} \\ & & \times \of{\MM_{\cos \phi_1,B}^{11}-\MM_{\cos \phi_1,B}^{00}} \sigma_{L}^{z} \sigma_{B}^{z}. \nonumber
\end{eqnarray}
Alternately, one could obtain a pure $zz$ coupling by simply placing a single Josephson junction between a pair of regions, and choosing the wiring geometry so that the flux bias $f'$ is nonzero, leading to an interaction term of the form $- \kappa E_{J} \cos \of{\phi_{L2} - \phi_{B1} + 2\pi f'}$ plus a capacitive term with the same structure as (\ref{capterm}). One could then tune $f'$ so that the $\pm$ components of the $xx$ and $yy$ terms from these couplings interfere with each other, leaving only the $zz$ part of the coupling.
We are now in a position to plug in numbers and evaluate $J$ for this architecture. Consider flux qubits wired as in fig.~\ref{qubitfig}. Choosing the realistic device parameters listed below and taking into account the single-qubit energy shifts from the $D^{z}$ coupling, we obtain:
\begin{eqnarray}\label{Jpars}
I_{c} &=& 400 {\rm{nA}}, \; C = 3.25 {\rm{fF}}, \; \alpha = 0.5, \; f=0.5, \\
\eta &=& 0.1, \; \beta = 0.45, \; \kappa = 0.2, \; g = 0.2 \nonumber \\
E_{\rm J}/h & = & 200 {\rm{GHz}} = 33 E_{\rm C}/h, \; \omega_{A} = 2\pi \times 15.5 {\rm{GHz}}, \nonumber \\
\omega_{B} &=& 2\pi \times 18 \rm{GHz}, \; \frac{\Omega_{A}}{V_{0}} \simeq \frac{\Omega_{B}}{V_{0}} = 2 \pi \times 4.6 \rm{\frac{GHz}{mV}}, \nonumber \\
D^{z} &=& 2 \pi \times 1.0 \rm{GHz}, \nonumber \\
D^{\pm} &=& 2\pi \times 1.4 {\rm{GHz}}.
\end{eqnarray}
A plot of $J$ for $V_{0} = 0.5$ and $1 \rm{mV}$ is shown in fig.~\ref{Jvsphifig}, calculated from (\ref{Hlat}). For small values of $V_{0}$, $\abs{J}$ is almost completely independent of $\varphi_{s}$, but for larger $V_{0}$ the magnitude fluctuations become significant. $\abs{J}$ can be further increased by up to an order of magnitude by choosing device parameters to work in the regime where $f > 1/2$, but the relative qubit nonlinearities are smaller and the system becomes more susceptible to fluctuations in the external magnetic field. We emphasize that the parameters listed above certainly do not represent the best possible choice for many-body physics, and indeed, it may ultimately turn out that other types of qubits may be superior for reaching the bosonic fractional quantum Hall regime described below. Nonetheless, they demonstrate that our system could be engineered with current technology, and that it achieves hopping matrix elements which are around three orders of magnitude larger than the typical flux qubit decay and dephasing rates (around a MHz).
\begin{figure}
\psfrag{xlabel}{$\varphi_{s}/\pi$}
\psfrag{ylabel}{$\abs{J}/h \of{\rm{GHz}},\arg J/\pi$}
\includegraphics[width=3in]{Jvsphi.eps}
\caption{(Color online) Magnitude and phase of $J$ for the device parameters given in eq. (\ref{Jpars}) at the resonance point $\omega = \omega_{A}$. The blue and purple curves are $\abs{J}$ and $\arg{J}/\pi$, respectively, for $V_{0} = 0.5 \rm{mV}$; the yellow and green curves are the same quantities for $V_{0} = 1 \rm{mV}$. $J$ can be made significantly larger by increasing $\alpha$ and working away from the $f=1/2$ symmetry point, but the physical device nonlinearities are smaller in that regime and the system becomes more susceptible to fluctuations in the external magnetic field.}\label{Jvsphifig}
\end{figure}
\section{Many-body States and the Lowest Landau Level}\label{LLL}
By considering a 2d lattice of these qubits, requiring that $\abs{J}/(E_{B} - E_{A}) \ll 1$, and ignoring inaccessible higher excited states, we arrive at the final hopping Hamiltonian (\ref{Hlat}). Previous studies \cite{hofstadter,kohmoto,assaad,hafezi,palmerkleinjaksch,sorensen,zhang,oktel,kapitmueller,onur,hormozimoller,kapitbraiding} have shown that the square lattice version of this Hamiltonian is analogous to the 2d lowest Landau level problem of strongly interacting bosons, and realizes abelian and non-abelian fractional ground states at the appropriate fixed densities. We expect that small arrays should be sufficient to observe quantum Hall physics, since the magnetic length $l_{B} = 1/\sqrt{2 \pi \Psi}$ (where $\Psi$ is the gauge-invariant phase accumulated when a particle circulates around a plaquette) can be less than a lattice spacing \footnote{We calculate the magnetic length by analogy to the mapping to the lowest Landau level in \cite{kapitmueller}; the coefficient of the Gaussian in the Landau level wavefunctions sets $l_{B}$.}. Connections between flux qubits beyond nearest neighbors can reproduce the exact lowest Landau level of the continuum \cite{kapitmueller,kapitbraiding} and lead to more robust fractional quantum Hall states, but they may not be necessary to observe the Laughlin state at $\nu = 1/2$ \cite{hafezi}. Here we adopt the standard definition of the filling fraction $\nu$ as the ratio of particle to flux density. A wide range of other possible quantum spin-1/2 models with 2-body interactions, both with complex phases and without, could be studied in this device architecture; we find quantum Hall systems to be the most intriguing, due to the existence of abelian anyons at $\nu = 1/2$ and the existence (with tuning) of non-abelian anyons at $\nu = 1$ and 3/2 \cite{nayaksimon,kapitbraiding}, along with other exotic states at different filling fractions. 
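To make the lattice analogy concrete, the sketch below builds the single-particle hopping matrix with Landau-gauge Peierls phases on a small torus. It is a minimal, self-contained illustration (the gauge choice and function name are ours): for flux $\Psi = 1/4$ per plaquette on an $8\times 8$ lattice, the lowest Landau band contains $N_x N_y \Psi = 16$ states separated from the rest of the spectrum by a gap.

```python
import numpy as np

def hofstadter_hamiltonian(Nx, Ny, Psi, J=1.0):
    """Single-particle hopping matrix on an Nx x Ny torus with a uniform flux
    of Psi quanta per plaquette, using Landau-gauge Peierls phases on the
    y-links. Psi*Nx should be an integer for the gauge to close on the torus."""
    N = Nx * Ny
    H = np.zeros((N, N), dtype=complex)
    idx = lambda x, y: (x % Nx) * Ny + (y % Ny)
    for x in range(Nx):
        for y in range(Ny):
            i = idx(x, y)
            H[idx(x + 1, y), i] += -J                                # x-hop: real
            H[idx(x, y + 1), i] += -J * np.exp(2j * np.pi * Psi * x) # y-hop: Peierls
    H += H.conj().T  # add the Hermitian-conjugate (reverse) hops
    return H
```

The product of hopping amplitudes around any elementary plaquette carries the gauge-invariant phase $e^{2\pi i \Psi}$, the lattice analogue of the flux in Eq.~(\ref{Peierls}).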
The boson density could be controlled by using a second external field at frequency $\omega'$ near $2 \Omega_{A}$ to populate the lattice; the $\omega'$ dependence of the system's response to this field could be used to measure the gaps of the many-body states.
The incoherent particle gain and loss rates in our array from single qubit decay and dephasing effects should not be a significant obstacle to studying strongly correlated many-body states. Using values from the previous section and from the superconducting qubit literature \cite{yoshihara,clarkewilhelm}, a typical hopping parameter would be $J/\hbar = 1 \rm{GHz}$. It is important to note here that the particles in our case are rotating frame antisymmetric superpositions of the ground and first excited states of the physical qubits, so their decay would be from one type of superposition to the other. This rate would be roughly given by the dephasing rate of the qubits, which for flux qubits is of order $1 \mathrm{MHz}$. With a Landau band spacing of $\omega_{\rm LLL} \simeq 3 J$ in a square lattice at $\Psi = 1/4$ quanta per plaquette, the relative correction to the Landau bandwidth from this decay process would thus be insignificant. We expect that this loss rate by itself will not prevent quantum Hall states from forming in our array \cite{hafezilukin}. Likewise, a small number of ``dead" sites (where a qubit is defective and cannot be excited) should also be relatively harmless-- the many-body wavefunction can eliminate these defects simply by nucleating a quasihole at each site. So long as the density of flux quanta is large compared to the defect density, these defects will simply make small shifts in the gap energy and particle density of the gapped states, but will not have any other qualitative effects on the system.
More worrisome is the issue of time-independent random variations in the qubit properties at every site, which could disrupt the formation of topological states if these variations became large enough. To quantify this issue, we numerically simulated the broadening of the lowest Landau level in our model as a function of three static (quenched) noise sources: random fluctuations in the on-site potential (shifts in the rotating frame excitation energy of a given $A$ qubit), random fluctuations in the magnitude of $J$, and random fluctuations in the phase of $J$ between neighboring sites. In a real system, these noise sources would be correlated, but as the details of those correlations would depend in part on the physical implementation of the qubits, we have assumed that each type of quenched disorder is applied randomly to every site with no dependence on the other types or on the disorder at nearby sites. To determine the broadening from each noise source, we numerically diagonalized the single-particle hopping matrix on 8$\times$8 and 12$\times$12 lattices with periodic boundary conditions, given by the Hamiltonian:
\begin{eqnarray}\label{deltaH}
H_{LLL} &=& \sum_{ij} F_{ij} J_{ij} \of{ e^{i \of{ \phi_{ij} + \pi \delta \phi_{ij}}} a_{i}^{\dagger} a_{j} + {\rm H.c.}} \\ & & + \sum_{i} J_{NN} \delta U_{i} n_{i}. \nonumber
\end{eqnarray}
Here, the hopping matrix elements are restricted to nearest and next nearest neighbors with relative magnitudes chosen as in \cite{kapitmueller}, $\delta U_{i}$ and $\delta \phi_{ij}$ are dimensionless parameters which are Gaussian distributed about 0, $J_{NN}$ is the average nearest neighbor hopping energy, and $F_{ij}$ is a dimensionless parameter which is Gaussian distributed about 1. We diagonalized (\ref{deltaH}) for 25 random distributions of noise for each data point, and from the spectrum we extracted the lowest Landau level broadening $\Delta$, which is the ratio of the energy splitting between the lowest and highest LLL states to the splitting between the highest LLL state and the bottom of the first excited band. We then fit $\Delta \of{\sigma_{U/J/\Psi}}$ as a function of the standard deviation of each noise source with the other two sources set to zero; this relationship was linear in each case for small fluctuations. The results of our simulations are shown in table~\ref{LLLtable}; note that $\Delta_0$ is nonzero even without defects, as a consequence of truncating the Hamiltonian in \cite{kapitmueller} to nearest and next nearest neighbor hopping.
It is important to note that this calculation only captures distortions to the single particle spectrum and that the many-body response to noise of this type is a subtle problem beyond the scope of this work. However, we qualitatively expect that the topological states should be disrupted when the normalized Landau level splitting $\Delta$ approaches the dimensionless quasiparticle excitation gap $\Delta_{qp}/J_{NN}$. In numerical studies of this system in the clean limit with hard-core 2-body repulsion (largely unpublished), $\Delta_{qp}/J_{NN}$ typically ranged between $0.2$ and $1$ for correlated states at different flux and particle densities, and tended to be larger at smaller filling fractions. This suggests that many-body quantum Hall states should exist in our system when noise is sufficiently well-controlled.
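As a rough illustration of this procedure, the sketch below diagonalizes a disordered single-particle hopping matrix and extracts the normalized lowest-band splitting. It is a simplified stand-in for (\ref{deltaH}), not the actual simulation: it keeps only nearest-neighbour Hofstadter hopping in the Landau gauge rather than the truncated Kapit--Mueller form with the relative magnitudes of \cite{kapitmueller}, so the clean-limit value of $\Delta$ differs, but the growth of $\Delta$ with quenched disorder is reproduced. All function names are ours.

```python
import numpy as np

def hofstadter_h(L, flux, sigma_u=0.0, sigma_phi=0.0, seed=0):
    """Nearest-neighbour hopping matrix on an L x L torus with `flux`
    quanta per plaquette (Landau gauge), plus quenched Gaussian disorder
    in the on-site energies (sigma_u, units of J) and hopping phases
    (sigma_phi, units of pi).  L * flux must be an integer."""
    rng = np.random.default_rng(seed)
    H = np.zeros((L * L, L * L), dtype=complex)
    idx = lambda x, y: (x % L) * L + (y % L)
    for x in range(L):
        for y in range(L):
            i = idx(x, y)
            # visit each bond once: hops in +x (phase 0) and +y (phase 2*pi*flux*x)
            for j, phase in ((idx(x + 1, y), 0.0),
                             (idx(x, y + 1), 2 * np.pi * flux * x)):
                phase += np.pi * sigma_phi * rng.standard_normal()
                H[j, i] += -np.exp(1j * phase)
                H[i, j] += -np.exp(-1j * phase)
            H[i, i] += sigma_u * rng.standard_normal()
    return H

def lll_broadening(L, flux, **noise):
    """Ratio of the lowest-band splitting to the gap above it (Delta)."""
    E = np.linalg.eigvalsh(hofstadter_h(L, flux, **noise))
    n = round(L * L * flux)  # number of states in the lowest band
    return (E[n - 1] - E[0]) / (E[n] - E[n - 1])
```

For example, on an $8\times 8$ torus at flux $1/4$, `lll_broadening(8, 0.25)` gives the clean-limit splitting of this toy model, and passing `sigma_u=0.5` visibly broadens the lowest band, mirroring the fits reported in table~\ref{LLLtable}.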
\begin{table}
\begin{tabular}{| c | c | c | c | c |}
\hline
Flux Density & $\Delta_{0}$ & $C_{U}$ & $C_{J}$ & $C_{\Psi}$ \\ \hline
1/4 & 0.015 & 0.41 & 1.42 & 1.75 \\
1/3 & 0.018 & 0.72 & 1.21 & 2.36 \\
3/8 & 0.08 & 0.35 & 0.99 & 1.94 \\
\hline
\end{tabular}
\caption{Robustness of the lowest Landau level to external noise sources. For the random noise simulations described in the text, we fit the normalized splitting $\Delta$ of the lowest Landau level to the function $\Delta = \Delta_{0} + C_{U} \sigma_{U} + C_{J} \sigma_{J} + C_{\Psi} \sigma_{\Psi}$, where the $\sigma$'s are the standard deviation of each noise source (local potential, hopping magnitude, and hopping phase) which is applied randomly to every site ($\sigma_{U}$) and link between sites ($\sigma_{J}$ and $\sigma_{\Psi}$). As seen in the Hamiltonian (\ref{deltaH}), the potential fluctuations are in units of $J_{NN}$ and the phase fluctuations are in units of $\pi$. Above the flux density $\Psi = 1/3$, truncation to nearest and next nearest neighbor hopping introduces significant broadening even in the clean system, so flux densities of 1/3 or less should be the focus of experiments on our design.}\label{LLLtable}
\end{table}
\section{A Simple Experiment to Demonstrate the Gauge Field}\label{ex}
While the ultimate purpose of this proposal is to study exotic many-body states in an array of hundreds or thousands of flux qubits, the existence of a nontrivial gauge field can be demonstrated by studying an arrangement of four flux qubits, connected in a loop. Consider a square loop of four flux qubits labeled (1-4), where qubit 1 sits at the top left corner and qubit 4 at the bottom right, as shown in Fig.~\ref{4Qfig}. For this choice, any hop through a $D^{z}$ coupling will accumulate a phase $\psi$, giving a total of $\Psi = 2 \psi$ for a complete circuit of the loop. Conversely, if the phases of the voltages applied to the $B$ qubits are shifted by $\pi$ from one FQ-FQ pair to the next, the magnitude of the hopping matrix element will be unchanged but there will be no complex phase accumulation. In this case, the $B$ qubits have identical rotating frame energies to the $A$ qubits, and differ from them through the relative phases $\varphi_{si}$ of the applied voltages. We will assume for simplicity that the magnitudes of the hopping matrix elements from the $D^{z}$ and $D^{\pm}$ couplings are both equal to $J$.
To demonstrate that the alternating voltages generate a nonzero effective flux through the four-qubit loop, we first initialize the array by letting all four qubits relax to their ground states. At time $t=0$, we apply a microwave pulse to qubit 1 to excite it into the fluxon state $\ket{1}$, and then at time $t$ we measure the state of qubit 4. The probability of qubit 4 being occupied by the fluxon is given by
\begin{eqnarray}\label{P4}
P_{4} \of{t} &=& \abs{ \bra{0_{1} 0_{2} 0_{3} 1_{4} } e^{i H t/\hbar} \ket{1_{1} 0_{2} 0_{3} 0_{4} } }^{2} \\
&=& \frac{1}{4} \of{ \cos \of{ \frac{2 t J}{\hbar} \cos \frac{\Psi}{4} } - \cos \of{ \frac{2 t J}{\hbar} \sin \frac{\Psi}{4} } }^{2}. \nonumber
\end{eqnarray}
This interference pattern is particularly striking when $\Psi$ is nearly equal to $\pi$. If we let $\Psi = \pi + \epsilon$, the probability distribution becomes
\begin{eqnarray}
P_{4} \of{t} = \of{ \sin \frac{\sqrt{2} J t}{\hbar} }^{2} \of{ \sin \frac{J t \epsilon}{2 \sqrt{2} \hbar} }^{2}.
\end{eqnarray}
In the limit of $\epsilon \to 0$, the probability of qubit 4 being occupied becomes zero at all times, due to the perfect interference of the two paths. This is a dramatic effect, and while field fluctuations and fabrication defects would prevent perfect interference in a real device, the strong slowing of the occupation periodicity of qubit 4 as $\Psi$ approaches $\pi$ would be readily observable. Such interference is only possible if there is a gauge-invariant phase difference between the two paths, and would therefore demonstrate that nontrivial effective gauge fields are realized in our architecture.
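The closed form (\ref{P4}) can be cross-checked by exact numerical evolution on the four-site loop. In the sketch below (function names ours) the total flux $\Psi$ is spread uniformly over the four bonds, $\Psi/4$ per hop, rather than placed only on the $D^{z}$ couplings; since $P_{4}$ depends only on the gauge-invariant loop flux, the two gauges give the same answer.

```python
import numpy as np

def p4_numeric(J, Psi, t):
    """Probability of finding the fluxon on qubit 4 at time t (hbar = 1),
    from exact evolution on a 4-site loop with total gauge flux Psi,
    spread uniformly over the bonds (Psi/4 per hop)."""
    ring = [0, 1, 3, 2]                       # loop order: qubits 1-2-4-3
    H = np.zeros((4, 4), dtype=complex)
    for a, b in zip(ring, ring[1:] + ring[:1]):
        H[b, a] = J * np.exp(1j * Psi / 4)
        H[a, b] = J * np.exp(-1j * Psi / 4)
    E, V = np.linalg.eigh(H)
    psi0 = np.zeros(4, dtype=complex)
    psi0[0] = 1.0                             # fluxon starts on qubit 1
    psit = V @ (np.exp(-1j * E * t) * (V.conj().T @ psi0))
    return abs(psit[3]) ** 2                  # occupation of qubit 4

def p4_closed(J, Psi, t):
    """Closed form of the interference pattern quoted in the text."""
    return 0.25 * (np.cos(2 * t * J * np.cos(Psi / 4))
                   - np.cos(2 * t * J * np.sin(Psi / 4))) ** 2
```

At $\Psi = \pi$ the closed form vanishes identically, reproducing the perfect destructive interference discussed above.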
\section{Conclusion}
We have demonstrated a method for realizing a quantum Hall state of bosons using asymmetric qubit pairs, driven by applied oscillating electric fields. We also demonstrated that our model could be implemented in a lattice of flux qubits. With appropriate protocols for stabilizing the average particle density and measuring the conductivity, we expect that conductivity quantization could be observed on small arrays, though we note that the details of how to measure the conductivity are beyond the scope of this article. The statistics of anyonic collective modes could be determined through similar methods \cite{dassarma,bondersonkitaev,willett,anbraiding}.
Further, the dynamical tunability of our model could be exploited to realize exotic combinations of states that would be difficult or impossible to study in cold atom or solid state systems. One could locally adjust the applied voltage $V_{0} \sin \of{\omega t + \varphi_s}$ and flux bias $f$ to change the gauge field density and effective chemical potential in a given region, creating islands of arbitrary shape which could be at a different filling fraction than the surrounding lattice and thus have different anyonic modes. Alternately, by reversing the signs of all the phase shifts $\varphi_s$ in a region, one can create a sharp boundary between regions with effective gauge fields of equal magnitude but opposite sign. In both cases, we expect physics along the boundaries to be rich.
Finally, by locally tuning $V_{0}$, $\varphi_s$ and $f$ to manipulate vortices in the qubit lattice, arrays of ordinary qubits could be used to construct a topological non-abelian anyon qubit \cite{kitaev2003,hormozi,nayaksimon}, trading information density for topological protection against decoherence. In that sense our proposal is similar to the surface code and cluster state \cite{yaowang,fowlersurface} ideas developed in recent years, and provides a new potential mechanism for reducing decoherence in superconducting quantum information devices.
\section{Acknowledgments}
We would like to thank John Chalker, Greg Fuchs, Chris Henley, Matteo Mariantoni, John Martinis and Dan Ralph for useful discussions related to this project.
We are indebted to Paul Ginsparg for his critical comments and advice, and to Steve Simon for his assistance in the final preparation of this manuscript. Most of all, we would like to thank Erich Mueller for his guidance at many stages of this project. This material is based on work supported in part by the National Science Foundation Graduate Program, EPSRC Grant No. EP/I032487/1, and Oxford University.
\begin{figure}
\psfrag{Kc}{$\kappa E_{\rm J}, \kappa C$}
\psfrag{Bej}{$\beta E_{\rm J}, \beta C$}
\psfrag{Ac}{$\gamma C$}
\psfrag{Bc}{$g C$}
\psfrag{ve}{$V_{0} \sin \of{\omega t + \varphi_{s} }$}
\psfrag{sve}{$V_{0} \sin \omega t $}
\psfrag{l1}{$(1)$}
\psfrag{l2}{$(2)$}
\psfrag{l3}{$(3)$}
\psfrag{l4}{$(4)$}
\psfrag{la}{(a)}
\psfrag{lb}{(b)}
\vspace{0.5in}
\psfrag{qa}{$A$}
\psfrag{qb}{$B$}
\psfrag{va}{$V_{0} \sin \omega t $}
\psfrag{vb}{$V_{0} \sin \of{\omega t + \varphi_{s} }$}
\psfrag{cz}{$C^{z}$}
\psfrag{dpm}{$D^{\pm}$}
\includegraphics[width=3.25in]{4qbitring_II.eps}
\caption{(Color online) Configuration to demonstrate the artificial gauge field, as outlined in section~\ref{ex}. Appropriately tuning the phase offsets $\varphi_{si}$ produces a gauge-invariant phase difference between the two paths that the mobile fluxon excitation can take from qubit 1 to qubit 4. The resulting interference of these two paths can be detected in the time-dependent probability $P_{4} \of{t}$ of qubit 4 being in its excited state.}\label{4Qfig}
\end{figure}
\section{Introduction}
Let $\Delta$ be the open unit disc in the complex plane $\mathbb{C}$. Let $\mathcal{H}$ be the family of all analytic functions and $\mathcal{A}\subset \mathcal{H}$ be the family of all normalized functions in $\Delta$. We denote by $\mathcal{U}$ the class of all univalent functions in $\Delta$ and by $\mathcal{LU}\subset \mathcal{H}$ the class of all locally univalent functions in $\Delta$. For $f\in\mathcal{LU}$, we consider the following norm
\begin{equation*}
||f||=\sup_{z\in\Delta}(1-|z|^2)\left|\frac{f''(z)}{f'(z)}\right|,
\end{equation*}
where the quantity $f''/f'$ is often referred to as the pre--Schwarzian derivative of $f$ and in the theory of Teichm\"{u}ller spaces is considered as an element of complex Banach spaces. We remark that $||f||<\infty$ if, and only if, $f$ is uniformly locally univalent in $\Delta$. Note that $||f||\leq 6$ if $f$ is univalent in $\Delta$ and, conversely, $f$ is univalent in $\Delta$ if
$||f||\leq 1$. Both of these bounds are sharp, see \cite{BecPom}. For more geometric properties of the function $f$ related to the norm, see \cite{choi, kimsug, PonSS} and the references therein.
We say that a function $f$ is subordinate to $g$, written $f(z)\prec g(z)$ or $f\prec g$, where $f$ and $g$ belong to the class $\mathcal{A}$, if there exists a Schwarz function $w(z)$, analytic in $\Delta$, with
\begin{equation*}
w(0)=0\quad{\rm and}\quad |w(z)|<1\quad(z\in\Delta),
\end{equation*}
such that $f(z)=g(w(z))$ for all $z\in\Delta$.
In the sequel, we recall two definitions of certain subclasses of the class $\mathcal{A}$ of analytic and normalized functions. First, we say that a function $f\in\mathcal{A}$ belongs to the class $\mathcal{S}(\alpha,\beta)$ if it satisfies the following two--sided inequality
\begin{equation*}
\alpha<{\rm Re}\left\{\frac{zf'(z)}{f(z)}\right\}<\beta\quad(z\in\Delta),
\end{equation*}
where $0\leq \alpha<1$ and $\beta>1$. The class $\mathcal{S}(\alpha,\beta)$ was introduced by Kuroki and Owa (cf. \cite{KO2011}). Also, we say that a function $f\in\mathcal{A}$ belongs to the class $\mathcal{V}(\alpha,\beta)$ if
\begin{equation*}
\alpha<{\rm Re}\left\{\left(\frac{z}{f(z)}\right)^2f'(z)\right\}<\beta\quad(z\in\Delta).
\end{equation*}
The class $\mathcal{V}(\alpha,\beta)$ was first introduced by Kargar et al., see \cite{KES(Siberian)}.
Since the convex univalent function
\begin{equation}\label{P}
P_{\alpha,\beta}(z)=1+\frac{(\beta-\alpha)i}{\pi}\log\left(\frac{1-e^{i\phi}z}{1-z}\right)
\quad(z\in\Delta),
\end{equation}
where
\begin{equation}\label{phi}
\phi:=\frac{2\pi(1-\alpha)}{\beta-\alpha},
\end{equation}
maps $\Delta$ onto the domain $\Omega=\{\omega: \alpha<{\rm Re}\{\omega\}<\beta\}$ conformally, we have the following two lemmas.
\begin{lemma}\label{lem S alpha beta}
{\rm(}\cite[Lemma 1.3]{KO2011}{\rm )}
Let $0\leq \alpha<1$ and $\beta>1$. Then $f\in\mathcal{S}(\alpha,\beta)$ if, and only if,
\begin{equation*}
\frac{zf'(z)}{f(z)}\prec 1+\frac{(\beta-\alpha)i}{\pi}\log\left(\frac{1-e^{i\phi}z}{1-z}\right)
\quad(z\in\Delta),
\end{equation*}
where $\phi$ is defined in \eqref{phi}.
\end{lemma}
\begin{lemma}\label{lem V alpha beta}
{\rm(}\cite[Lemma 1.1]{KES(Siberian)}{\rm )}
Let $0\leq \alpha<1$ and $\beta>1$. Then $f\in\mathcal{V}(\alpha,\beta)$ if, and only if,
\begin{equation*}
\left(\frac{z}{f(z)}\right)^2f'(z)\prec 1+\frac{(\beta-\alpha)i}{\pi}\log\left(\frac{1-e^{i\phi}z}{1-z}\right)
\quad(z\in\Delta),
\end{equation*}
where $\phi$ is defined in \eqref{phi}.
\end{lemma}
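As a quick numerical sanity check of these lemmas, one can sample points of $\Delta$ and verify that $P_{\alpha,\beta}$, evaluated with the principal branch of the logarithm, indeed takes values in the strip $\alpha<{\rm Re}\{\omega\}<\beta$. The sketch below uses the illustrative values $\alpha=0.25$ and $\beta=1.5$; the function names are ours.

```python
import cmath
import math
import random

def P(z, alpha, beta):
    """The function P_{alpha,beta} of (P), via the principal logarithm."""
    phi = 2 * math.pi * (1 - alpha) / (beta - alpha)
    return 1 + (beta - alpha) * 1j / math.pi * cmath.log(
        (1 - cmath.exp(1j * phi) * z) / (1 - z))

def sample_disc(rng, rmax=0.99):
    """Uniform random point of the disc |z| <= rmax."""
    return cmath.rect(rmax * math.sqrt(rng.random()),
                      2 * math.pi * rng.random())
```

Note that $P_{\alpha,\beta}(0)=1$, which lies in the strip since $\alpha<1<\beta$.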
Rahmatan et al. (see \cite{RNE}) estimated the norm of the pre--Schwarzian derivative of functions $f$ belonging to the classes $\mathcal{S}(\alpha,\beta)$ and $\mathcal{V}(\alpha,\beta)$. Both estimates and their proofs are incorrect. Indeed, the estimates of $||f||$ wrongly proven by Rahmatan, Najafzadeh and Ebadian are of the following form:\\ \\
{\bf Theorem A:} For $0\leq \alpha<1<\beta$, if $f\in\mathcal{S}(\alpha,\beta)$, then
\begin{equation*}
||f||\leq \frac{2(\beta-\alpha)}{\pi}\left(1-e^{2\pi i\frac{1-\alpha}{\beta-\alpha}}\right).
\end{equation*}
{\bf Theorem B:}
For $0\leq \alpha<1<\beta$, if $f\in\mathcal{V}(\alpha,\beta)$, then
\begin{equation*}
||f||\leq \frac{3(\beta-\alpha)}{\pi}\left(1-e^{2\pi i\frac{1-\alpha}{\beta-\alpha}}\right).
\end{equation*}
First, note that both bounds are complex numbers.
In this paper we give the best estimate for $||f||$ when $f\in\mathcal{S}(\alpha,\beta)$ and disprove Theorem B. However, we show that $||f||<\infty$ when $f\in\mathcal{V}(\alpha,\beta)$.
\section{The Main Results}
The first result of the paper is the following.
\begin{theorem}
Let $0\leq \alpha<1$ and $\beta>1$. If a function $f$ belongs to the class $\mathcal{S}(\alpha,\beta)$, then
\begin{equation}\label{norm S alpha beta}
||f||\leq \frac{2(\beta-\alpha)}{\pi}\sqrt{4 \sin^2(\phi/2)+2\pi^2}-\frac{4\sin(\phi/2)}{\sqrt{4 \sin^2(\phi/2)+2\pi^2}},
\end{equation}
where $\phi$ is defined in \eqref{phi}.
The result is sharp.
\end{theorem}
\begin{proof}
Let $0\leq \alpha<1$, $\beta>1$ and let $\phi$ be given by \eqref{phi}. If
$f\in\mathcal{S}(\alpha,\beta)$, then by Lemma \ref{lem S alpha beta} we have
\begin{equation}\label{z f prime f sub P alpha beta}
\frac{zf'(z)}{f(z)}\prec 1+\frac{(\beta-\alpha)i}{\pi}\log\left(\frac{1-e^{i\phi}z}{1-z}\right)\quad(z\in\Delta).
\end{equation}
The above subordination relation \eqref{z f prime f sub P alpha beta} implies that
\begin{equation*}
\frac{zf'(z)}{f(z)}= 1+\frac{(\beta-\alpha)i}{\pi}\log\left(\frac{1-e^{i\phi}w(z)}{1-w(z)}\right)\quad(z\in\Delta),
\end{equation*}
or equivalently
\begin{equation}\label{log z f prime f equal P alpha beta w}
\log\left\{\frac{zf'(z)}{f(z)}\right\}=\log\left\{ 1+\frac{(\beta-\alpha)i}{\pi}\log\left(\frac{1-e^{i\phi}w(z)}{1-w(z)}\right)\right\}\quad(z\in\Delta),
\end{equation}
where $w(z)$ is the Schwarz function.
From \eqref{log z f prime f equal P alpha beta w}, differentiating on both sides, after simplification, we obtain
\begin{equation}\label{f second f prime}
\frac{f''(z)}{f'(z)}=\frac{(\beta-\alpha)i}{\pi}\left[\frac{1}{z}\log\left(\frac{1-e^{i\phi}w(z)}
{1-w(z)}\right)+\frac{(1-e^{i\phi})w'(z)}{(1-w(z))(1-e^{i\phi}w(z))\left(1+\frac{(\beta-\alpha)i}
{\pi}\log\left(\frac{1-e^{i\phi}w(z)}
{1-w(z)}\right)\right)}\right].
\end{equation}
It is well--known that $|w(z)|\leq|z|$ (cf. \cite{Duren}) and, by the Schwarz--Pick lemma, for a Schwarz function the following inequality
\begin{equation}\label{w prime}
|w'(z)|\leq \frac{1-|w(z)|^2}{1-|z|^2}\quad(z\in\Delta),
\end{equation}
holds. Also, we know that if $\log$ is the principal branch of the complex logarithm, then we have
\begin{equation}\label{log z}
\log z= \ln |z|+i \arg z\quad(z\in\Delta\setminus\{0\}, -\pi<\arg z\leq \pi).
\end{equation}
Therefore, by the above equation \eqref{log z}, it is well--known that if $|z|\geq1$, then
\begin{equation}\label{ineq log 1}
|\log z|\leq \sqrt{|z-1|^2+\pi^2},
\end{equation}
while for $0<|z|<1$, we have
\begin{equation}\label{ineq log 2}
|\log z|\leq \sqrt{\left|\frac{z-1}{z}\right|^2+\pi^2}.
\end{equation}
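Both bounds follow from \eqref{log z} together with the elementary estimates $|\ln|z||\leq|z-1|$ for $|z|\geq1$ and $|\ln|z||\leq\left|(z-1)/z\right|$ for $0<|z|<1$, and can be spot-checked numerically; the helper below (name ours) returns the appropriate right-hand side.

```python
import math

def log_upper_bound(z):
    """RHS of (ineq log 1) for |z| >= 1 and of (ineq log 2) for 0 < |z| < 1."""
    if abs(z) >= 1:
        return math.sqrt(abs(z - 1) ** 2 + math.pi ** 2)
    return math.sqrt(abs((z - 1) / z) ** 2 + math.pi ** 2)
```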
Thus, it is natural to distinguish the following cases.\\
{\bf Case 1:} $\left|\frac{1-e^{i\phi}w(z)}
{1-w(z)}\right|\geq 1$.\\
By \eqref{ineq log 1}, we have
\begin{align}\label{estimate abs log geq 1}
\left|\log\left(\frac{1-e^{i\phi}w(z)}
{1-w(z)}\right)\right|&\leq \sqrt{\left|\frac{1-e^{i\phi}w(z)}
{1-w(z)}-1\right|^2+\pi^2}\nonumber\\
&=\frac{\sqrt{|1-e^{i\phi}|^2|w(z)|^2+\pi^2|1-w(z)|^2}}{|1-w(z)|}\nonumber\\
&\leq\frac{\sqrt{4 \sin^2(\phi/2)|w(z)|^2+\pi^2(1+|w(z)|^2)}}{1-|w(z)|}\nonumber\\
&\leq\frac{\sqrt{4 \sin^2(\phi/2)|z|^2+\pi^2(1+|z|^2)}}{1-|z|}
\end{align}
for all $z\in\Delta$.
We note that the above inequality is well defined also for $z=0$.
Thus from \eqref{f second f prime}, \eqref{w prime} and \eqref{estimate abs log geq 1}, we get
\begin{align*}
&\quad\left|\frac{f''(z)}{f'(z)}\right|\\ &=\left|\frac{(\beta-\alpha)i}{\pi}\left[\frac{1}{z}\log\left(\frac{1-e^{i\phi}w(z)}
{1-w(z)}\right)+\frac{(1-e^{i\phi})w'(z)}{(1-w(z))(1-e^{i\phi}w(z))\left(1+\frac{(\beta-\alpha)i}
{\pi}\log\left(\frac{1-e^{i\phi}w(z)}
{1-w(z)}\right)\right)}\right]\right| \\
&\leq \frac{(\beta-\alpha)}{\pi}\left[\frac{1}{|z|}\left|\log\left(\frac{1-e^{i\phi}w(z)}
{1-w(z)}\right)\right|+\frac{\left|1-e^{i\phi}\right||w'(z)|}{\left|1-w(z)\right|
\left|1-e^{i\phi}w(z)\right|\left(1-\frac{(\beta-\alpha)}
{\pi}\left|\log\left(\frac{1-e^{i\phi}w(z)}{1-w(z)}\right)\right|\right)}\right]\\
&\leq\frac{(\beta-\alpha)}{\pi}\left[\frac{1}{|z|}\left\{\frac{\sqrt{4 \sin^2(\phi/2)|z|^2+\pi^2(1+|z|^2)}}{1-|z|}
\right\}+\frac{2\sin(\phi/2)}{1-|z|-\frac{(\beta-\alpha)}{\pi}\sqrt{4 \sin^2(\phi/2)|z|^2+\pi^2(1+|z|^2)}
}\cdot\frac{1+|z|}
{1-|z|^2}\right].
\end{align*}
Therefore, we obtain
\begin{align*}
||f||&=\sup_{z\in\Delta}(1-|z|^2)\left|\frac{f''(z)}{f'(z)}\right|\\
&\leq\sup_{z\in\Delta}\left\{\frac{(\beta-\alpha)}{\pi}\left[
\frac{1+|z|}{|z|}\sqrt{4 \sin^2(\phi/2)|z|^2+\pi^2(1+|z|^2)}+\frac{2\sin(\phi/2)(1+|z|)}{1-|z|-\frac{(\beta-\alpha)}{\pi}\sqrt{4 \sin^2(\phi/2)|z|^2+\pi^2(1+|z|^2)}
}\right]\right\}\\
&=\frac{2(\beta-\alpha)}{\pi}\sqrt{4 \sin^2(\phi/2)+2\pi^2}-\frac{4\sin(\phi/2)}{\sqrt{4 \sin^2(\phi/2)+2\pi^2}}
\end{align*}
which yields the inequality \eqref{norm S alpha beta}.\\
{\bf Case 2:} $\left|\frac{1-e^{i\phi}w(z)}
{1-w(z)}\right|< 1$.\\
By \eqref{ineq log 2}, we have
\begin{align*}
\left|\log\left(\frac{1-e^{i\phi}w(z)}
{1-w(z)}\right)\right|&\leq \sqrt{\left|\frac{\frac{1-e^{i\phi}w(z)}
{1-w(z)}-1}{\frac{1-e^{i\phi}w(z)}
{1-w(z)}}\right|^2+\pi^2}\\
&=\frac{\sqrt{|1-e^{i\phi}|^2|w(z)|^2+\pi^2|1-e^{i\phi}w(z)|^2}}{|1-e^{i\phi}w(z)|}\\
&\leq\frac{\sqrt{4 \sin^2(\phi/2)|w(z)|^2+\pi^2(1+|w(z)|^2)}}{1-|w(z)|}\quad(|e^{i\phi}|=1)\\
&\leq\frac{\sqrt{4 \sin^2(\phi/2)|z|^2+\pi^2(1+|z|^2)}}{1-|z|}.
\end{align*}
Since in Cases 1 and 2 we have the same estimate for
\begin{equation*}
\left|\log\left(\frac{1-e^{i\phi}w(z)}
{1-w(z)}\right)\right|,
\end{equation*}
therefore the desired result follows in this case as well.
For the sharpness, consider the function $f_{\alpha,\beta}(z)$ as follows
\begin{align*}
f_{\alpha,\beta}(z)&=z\exp\left\{\frac{(\beta-\alpha)i}{\pi}\int_{0}^{z}\frac{1}{\xi}
\log\left(\frac{1-e^{i\phi}\xi}{1-\xi}\right)d\xi\right\}\\
&=z+\frac{(\beta-\alpha)i}{\pi}\left(1-e^{i\phi}\right)z^2+\cdots,
\end{align*}
where $\phi$ is defined in \eqref{phi}, $0\leq \alpha<1$ and $\beta>1$. A simple calculation gives us
\begin{equation*}
\frac{zf'_{\alpha,\beta}(z)}{f_{\alpha,\beta}(z)}= 1+\frac{(\beta-\alpha)i}{\pi}\log\left(\frac{1-e^{i\phi}z}{1-z}\right)\quad(z\in\Delta)
\end{equation*}
and thus $f_{\alpha,\beta}(z)\in\mathcal{S}(\alpha,\beta)$. By the same argument as above we get the desired result. Also, the result is sharp for a rotation of the function $f_{\alpha,\beta}(z)$ as follows:
\begin{equation*}
\mathfrak{f}_{\alpha,\beta}(z)=z\exp\left\{\frac{(\beta-\alpha)i}{\pi}\int_{0}^{z}\frac{1}{\xi}
\log\left(\frac{1-e^{i\phi}\xi}{1-e^{-i\phi}\xi}\right)d\xi\right\}.
\end{equation*}
This completes the proof.
\end{proof}
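To make the contrast with Theorem A concrete, both expressions can be evaluated numerically; the function names below are ours. For instance, at $\alpha=0.2$ and $\beta=1.5$ the expression of Theorem A has imaginary part $\approx 0.55$, while the bound \eqref{norm S alpha beta} evaluates to $\approx 3.21$.

```python
import cmath
import math

def phi(alpha, beta):
    """The angle (phi) from the text."""
    return 2 * math.pi * (1 - alpha) / (beta - alpha)

def bound_thm_A(alpha, beta):
    """Bound claimed in Theorem A of [RNE]; note it is complex-valued."""
    return 2 * (beta - alpha) / math.pi * (1 - cmath.exp(1j * phi(alpha, beta)))

def bound_corrected(alpha, beta):
    """Real-valued bound of Theorem 2.1."""
    s = math.sin(phi(alpha, beta) / 2)
    r = math.sqrt(4 * s * s + 2 * math.pi ** 2)
    return 2 * (beta - alpha) / math.pi * r - 4 * s / r
```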
\begin{remark}
In Theorem B, the authors estimated $||f||$ for $f\in\mathcal{V}(\alpha,\beta)$. However, in the proof of this theorem \cite[p.~160]{RNE} they wrongly used the equation
\begin{equation*}
\frac{zf'(z)}{f(z)}=P_{\alpha,\beta}(w(z)),
\end{equation*}
where $P_{\alpha,\beta}$ is defined in \eqref{P}. This would mean that $f$ belongs simultaneously to the classes $\mathcal{S}(\alpha,\beta)$ and $\mathcal{V}(\alpha,\beta)$. Next, we show that the best estimate for $||f||$ when $f\in\mathcal{V}(\alpha,\beta)$ does not exist.
\end{remark}
\begin{theorem}
Let $0\leq \alpha<1$ and $\beta>1$. If a function $f$ belongs to the class $\mathcal{V}(\alpha,\beta)$, then
$||f||<\infty$.
\end{theorem}
\begin{proof}
Let $0\leq \alpha<1$, $\beta>1$ and $f\in\mathcal{V}(\alpha,\beta)$. Then by Lemma \ref{lem V alpha beta} and the definition of subordination, we have
\begin{equation}\label{p th 2.2 1}
\left(\frac{z}{f(z)}\right)^2f'(z)=P_{\alpha,\beta}(w(z))= 1+\frac{(\beta-\alpha)i}{\pi}\log\left(\frac{1-e^{i\phi}w(z)}{1-w(z)}\right)\quad(z\in\Delta),
\end{equation}
where $w$ is Schwarz function and $\phi$ is defined in \eqref{phi}. Taking logarithm on both sides of \eqref{p th 2.2 1} and differentiating, we get
\begin{equation}\label{p th 2.2 2}
\frac{f''(z)}{f'(z)}=2\left(\frac{f'(z)}{f(z)}-\frac{1}{z}\right)+\frac{(\beta-\alpha)i}{\pi}
\left[\frac{(1-e^{i\phi})w'(z)}{(1-w(z))(1-e^{i\phi}w(z))\left(1+\frac{(\beta-\alpha)i}
{\pi}\log\left(\frac{1-e^{i\phi}w(z)}
{1-w(z)}\right)\right)}\right].
\end{equation}
With a simple calculation, \eqref{p th 2.2 1} implies that
\begin{equation}\label{identity}
\left(\frac{f'(z)}{f(z)}-\frac{1}{z}\right)=\frac{f(z)}{z}\left(\frac{ P_{\alpha,\beta}(w(z))}{z}-1\right).
\end{equation}
Combining \eqref{p th 2.2 2} and \eqref{identity} gives
\begin{equation*}
\frac{f''(z)}{f'(z)}=2\left(\frac{f(z)}{z}\left(\frac{ P_{\alpha,\beta}(w(z))}{z}-1\right)\right)+\frac{(\beta-\alpha)i}{\pi}
\left[\frac{(1-e^{i\phi})w'(z)}{(1-w(z))(1-e^{i\phi}w(z))\left(1+\frac{(\beta-\alpha)i}
{\pi}\log\left(\frac{1-e^{i\phi}w(z)}
{1-w(z)}\right)\right)}\right].
\end{equation*}
It was proved in \cite[Theorem 2.2]{KES(Siberian)} that if $f\in\mathcal{V}(\alpha,\beta)$ where $0<\alpha\leq 1/2$ and $\beta>1$, then
\begin{equation*}
1-\frac{1}{\alpha}<{\rm Re}\left\{\frac{f(z)}{z}\right\}<\infty\quad(z\in\Delta).
\end{equation*}
Since ${\rm Re}\{z\}\leq|z|$, the last two--sided inequality means that $|f(z)/z|<\infty$ when $f\in\mathcal{V}(\alpha,\beta)$. Thus from the above we deduce that
\begin{equation*}
\left|\frac{f''(z)}{f'(z)}\right|<\infty\quad(z\in\Delta)
\end{equation*}
concluding the proof.
\end{proof}
\section{Introduction}
The surge of interest in deep learning has been fuelled by the availability of agile software packages that enable researchers and practitioners alike to quickly experiment with different architectures for their problem setting \citep{paszke2019pytorch, abadi2016tensorflow} by providing modular abstractions for automatic differentiation and gradient-based learning.
While there has been similarly growing interest in uncertainty estimation for deep neural networks, in particular following the Bayesian paradigm \citep{mackay1992practical,neal2012bayesian}, a comparable toolbox of software packages has mostly been missing.
A major barrier to entry for the use of Bayesian Neural Networks (BNNs) is the large overhead in required code and additional mathematical abstractions compared to stochastic maximum likelihood estimation as commonly performed in deep learning.
Moreover, BNNs typically have intractable posteriors, necessitating the use of various approximations when performing inference, which depending on the problem may perform better or worse and frequently require complex bespoke implementations.
This oftentimes leads to the development of inflexible small libraries or repetitive code creation that can lack essential ``tricks of the trade'' for performant BNNs, such as appropriate initialization schemes, gradient variance reduction \citep{kingma2015variational,tran2018simple}, or may only provide limited inference strategies to compare outcomes.
Even though various general purpose probabilistic programming packages have been built on top of those deep learning libraries (Pyro\xspace~\citep{bingham2019pyro} for Pytorch\xspace, Edward2~\citep{tran2018simple} for Tensorflow), software linking those to BNNs has only been released recently \citep{tran2019bayesian} and provides substitutes for Keras' layers~\citep{chollet2015keras} to construct BNNs from scratch.
In this work we describe TyXe\xspace (Greek: chance), a package linking the expressive computational capabilities of Pytorch\xspace with the flexible model and inference design of Pyro\xspace \citep{bingham2019pyro} in service of providing a simple, agile, and useful abstraction for BNNs targeted at Pytorch\xspace practitioners.
Specifically, we highlight the following contributions we make through TyXe\xspace:
\begin{itemize}
\item We provide an intuitive, object-oriented interface that abstracts away Pyro\xspace to facilitate turning Pytorch\xspace-based neural networks into BNNs with minimal changes to existing code.
\item Crucially, our design deviates from prior approaches, e.g. \citep{tran2019bayesian}, to avoid bespoke layer implementations, making TyXe\xspace applicable to arbitrary Pytorch\xspace architectures.
\item We make essential techniques for well-performing BNNs that are missing from Pyro\xspace, such as local reparameterization, available as flexible program transformations.
\item TyXe\xspace is compatible with architectures from libraries both native and non-native to the Pytorch\xspace ecosystem, such as torchvision ResNets and DGL graph neural networks.
\item Leveraging TyXe\xspace, we show that a Bayesian treatment of Pytorch\xspace{3d}-based Neural Radiance Fields improves their out-of-distribution robustness at a minimal coding overhead.
\item Our modular design handily supports variational continual learning through updating the prior to the posterior. Such abstractions are also currently not available in Pyro\xspace.
\end{itemize}
In the following we give an overview of our library, with an initial focus on the API design followed by a range of research settings where TyXe\xspace greatly simplifies `Bayesianizing' an existing workflow.
We provide experimental details in \Cref{app:experiments}, specifically discuss advancements upon Pyro\xspace in \Cref{app:comparison} and provide an in-depth overview of the codebase in \Cref{app:code}.
\section{TyXe\xspace by example: non-linear regression}
The core components that users interact with in TyXe\xspace are our BNN classes.
These wrap deterministic Pytorch\xspace \inlinepython{nn.Module} neural networks.
We then leverage Pyro\xspace to formulate a probabilistic model over the neural network parameters, in which we perform approximate inference.
There are two primary BNN classes with identical interfaces: \inlinepython{tyxe.VariationalBNN} and \inlinepython{tyxe.MCMC_BNN}.
Both offer a unified workflow of constructing a BNN, fitting it to data and then making predictions.
A more low-level class, \inlinepython{tyxe.PytorchBNN}, which can act as a drop-in BNN replacement for a \inlinepython{nn.Module} but lacks some of the high-level functionality of the other two classes, will be introduced in \Cref{sec:nerf}.
We stress that the former two classes only require using a Pyro\xspace optimizer in place of a Pytorch\xspace one, while the latter hides Pyro\xspace entirely, making its functionality accessible to Pytorch\xspace users without prior experience of using Pyro\xspace.
In this section we provide more details on each of the modelling steps, using the example of a synthetic one-dimensional non-linear regression dataset.
We use the setup from \citep{foong2019between} with two clusters of inputs $x_1 \sim \mathcal{U}[-1, -0.7]$, $x_2 \sim \mathcal{U}[0.5, 1]$ and $y \sim \mathcal{N}(\cos(4x + 0.8), 0.1^2)$.
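For reference, this toy dataset can be generated in a few lines. The sketch below (function name ours) uses numpy for self-containedness; in practice the arrays would be converted to Pytorch tensors before training.

```python
import numpy as np

def make_regression_data(n=50, seed=0):
    """Two input clusters x1 ~ U[-1, -0.7], x2 ~ U[0.5, 1] with targets
    y ~ N(cos(4x + 0.8), 0.1^2), following the setup quoted above."""
    rng = np.random.default_rng(seed)
    x = np.concatenate([rng.uniform(-1.0, -0.7, size=n // 2),
                        rng.uniform(0.5, 1.0, size=n - n // 2)])
    y = np.cos(4 * x + 0.8) + 0.1 * rng.standard_normal(n)
    return x[:, None], y[:, None]
```

The gap between the two input clusters is what makes the in-between uncertainty visible in \Cref{fig:regression}.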
\subsection{Defining a BNN}
\begin{figure*}[t]
\centering
\subfigure[Local reparameterization]{\label{subfig:lr}
\includegraphics[width=0.3\linewidth]{toy_ffg_lr}
}%
\hfill
\subfigure[Shared weight samples]{\label{subfig:nolr}
\includegraphics[width=0.3\linewidth]{toy_ffg}
}%
\hfill
\subfigure[HMC]{\label{subfig:hmc}
\includegraphics[width=0.3\linewidth]{toy_hmc}
}
\vspace{-0.75em}
\caption{Bayesian nonlinear regression using the setup from \Cref{lst:bnn} and fit using \Cref{lst:fitpredict}. \Cref{subfig:lr} wraps the call to \inlinepython{bnn.predict} in the local reparameterization context with the call to \inlinepython{fit}, \Cref{subfig:nolr} does not. Switching between the two is as simple as adapting the indentation of the call to \inlinepython{predict} to be in- or outside the \inlinepython{local_reparameterization} context. Both use the same bnn object with the same approximate posterior. \Cref{subfig:hmc} uses \inlinepython{pyro.infer.mcmc.HMC} as guide factory. The shaded area indicates up to three standard deviations from the predictive mean.}
\label{fig:regression}
\vspace{-1.25em}
\end{figure*}
A TyXe\xspace BNN has four components: a Pytorch\xspace neural network, a data likelihood, a weight prior and a guide\footnote{Following Pyro\xspace's terminology, we refer to programs drawing approximate posterior samples as ``guides''.} factory for the posterior.
We describe their signature and our instantiations below.
As seen in \Cref{lst:bnn}, turning a Pytorch\xspace network into a TyXe\xspace BNN requires as little as five lines of code.
\subsubsection{Network architecture}
Pytorch\xspace provides a range of classes that facilitate the construction of neural networks, ranging from simple linear or convolutional layers and nonlinearities as building blocks to higher-level classes that compose these, e.g. by chaining them together in the \inlinepython{nn.Sequential} module.
A simple regression network on $1d$ data with one layer of $50$ hidden units and a $\tanh$ nonlinearity, as commonly used for illustration in works on Bayesian neural networks, can be defined in a single line of code (first line of \Cref{lst:bnn}).
More generally, any neural network in Pytorch\xspace is described by the \inlinepython{nn.Module} class, which provides functionalities such as easy composition, parameter and gradient handling, and many more conveniences for neural network researchers and practitioners that have contributed to the wide adoption of this framework.
Further, the \inlinepython{torchvision} package implements various modern architectures, such as ResNets \citep{he2016deep}.
TyXe\xspace can also work on top of architectures from third-party libraries, such as DGL \citep{wang2019dgl}, that derive from \inlinepython{nn.Module}.
Pyro\xspace inherits the elegant abstractions for neural networks from Pytorch\xspace through its \inlinepython{PyroModule} class, which extends \inlinepython{nn.Module} to allow for instance attributes to be modified by Pyro\xspace effect handlers, making it easy to replace \inlinepython{nn.Parameter}s with Pyro\xspace sample sites.
We adopt the \inlinepython{PyroModule} class under the hood to provide a seamless interface between TyXe\xspace and Pytorch\xspace networks.
\subsubsection{Prior}
At this time, we restrict the probabilistic model definition to weight space priors.
Our classes take care of constructing distribution objects that replace the network parameters as \inlinepython{PyroSample}s.
One such prior class is an \inlinepython{IIDPrior} which takes a Pyro\xspace distribution as argument, such as a \inlinepython{pyro.distributions.Normal(0., 1.)}, applying a standard normal prior over all network parameters.
We further implement \inlinepython{LayerwiseNormalPrior}, a per-layer Gaussian prior that sets the variance to the inverse of the number of input units as recommended in \citep{neal1996priors}, or analogous to the variance used for weight initialization in~\citep{glorot2010understanding, he2015delving} when using the flag \inlinepython{method={"radford", "xavier", "kaiming"}}, respectively.
Crucially, we do not require users to set priors for each layer by hand; this is dealt with automatically by our framework.
Our prior classes accept arguments that allow for certain layers or parameters to be excluded from a Bayesian treatment.
The prior in our ResNet example in \Cref{sec:resnet} receives \inlinepython{hide_module_types=[nn.BatchNorm2d]} to hide the parameters of the BatchNorm modules.
Those parameters stay deterministic and are fit to minimize the log likelihood part of the ELBO.
\subsubsection{Guide}
The guide argument is the only place where the initialization of our \inlinepython{VariationalBNN} and \inlinepython{MCMC_BNN} differs.
\inlinepython{tyxe.VariationalBNN} expects a function that automatically constructs a guide for the network weights, e.g. a \inlinepython{pyro.infer.autoguide}, and an optional second such function for variables in the likelihood if present (e.g. an unknown variance in a Gaussian likelihood).
To facilitate local reparameterization and computation of KL-divergences in closed form, we implement an \inlinepython{AutoNormal} guide, which samples all unobserved sites in the model from a diagonal Normal.
This is similar to Pyro\xspace's \inlinepython{AutoNormal} autoguide, which constructs an auxiliary joint latent variable with a factorized Gaussian distribution.
Variational parameters can be initialized as for autoguides by sampling from the prior/estimating statistics like the prior median, or through additional convenience functions that we provide, such as sampling the means from distributions with variances depending on the numbers of units in the corresponding layers, akin to how deterministic layers are typically initialized.
This also permits initializing means to the values of pre-trained networks, which is particularly convenient when converting a deep network into a BNN.
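The closed-form KL-divergence that such a factorized Normal guide makes available is the standard Gaussian one. As a plain-Python sketch (the function name is ours; TyXe\xspace and Pyro\xspace compute this on tensors):

```python
import math

def kl_factorized_gaussians(mu_q, sd_q, mu_p, sd_p):
    """Closed-form KL(q || p) between two factorized Gaussians,
    summed over all weight dimensions."""
    total = 0.0
    for mq, sq, mp, sp in zip(mu_q, sd_q, mu_p, sd_p):
        total += (math.log(sp / sq)
                  + (sq ** 2 + (mq - mp) ** 2) / (2.0 * sp ** 2)
                  - 0.5)
    return total
```

The KL vanishes exactly when the guide matches the prior, which is also a useful sanity check when initializing variational parameters from the prior.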
The \inlinepython{tyxe.MCMC_BNN} class expects an MCMC kernel as guide, either HMC~\citep{neal2012bayesian} or NUTS~\citep{hoffman2014no}, and runs Pyro\xspace's MCMC on the full dataset to obtain samples from the posterior.
For both \inlinepython{BNN} classes, arguments to the guide constructor can be passed via \inlinepython{partial} from Python's built-in \inlinepython{functools} module.
\Cref{lst:resnet} shows an example of this.
\subsubsection{Likelihood}
Our likelihoods are wrappers around Pyro\xspace's \inlinepython{distributions}, expecting a \inlinepython{dataset_size} argument to correctly scale the KL term when using mini-batches.
Specifically, we provide Bernoulli, Categorical, HomoskedasticGaussian and HeteroskedasticGaussian likelihoods.
Implementing a new likelihood requires a \inlinepython{predictive_distribution(predictions)} method returning a Pyro\xspace distribution for sampling.
Further, it should provide a method for calculating an error estimate for evaluation, such as the squared error for Gaussian models or classification error for discrete models.
Hence it is easy to add new likelihoods based on existing distributions, e.g. a Poisson likelihood.
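To illustrate the interface just described, a minimal homoskedastic Gaussian likelihood could look as follows. This is a standard-library sketch of the pattern, not TyXe\xspace's actual class, which wraps Pyro\xspace distributions:

```python
import math

class HomoskedasticGaussianLikelihood:
    """Sketch of the likelihood interface: a predictive distribution
    (here just its log density) plus an error measure for evaluation."""

    def __init__(self, sd):
        self.sd = sd

    def log_prob(self, prediction, target):
        # Gaussian log density of the target under the prediction.
        z = (target - prediction) / self.sd
        return -0.5 * z * z - math.log(self.sd * math.sqrt(2.0 * math.pi))

    def error(self, prediction, target):
        # Squared error is the natural error measure for Gaussian models.
        return (prediction - target) ** 2
```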
\subsection{Fitting a BNN}
Our BNN class provides a scikit-learn-style \inlinepython{fit} function to run inference for a given number of passes over an \inlinepython{Iterable}, e.g. a PyTorch \inlinepython{DataLoader}.
Each element is a length-two tuple, where the first element contains the network inputs (and may be a list) and the second is the likelihood targets, e.g. class labels.
The \inlinepython{VariationalBNN} class further requires a Pyro\xspace optimizer.
\inlinepython{tyxe.VariationalBNN} runs stochastic variational inference~\citep{ranganath2014black, wingate2013automated}, a popular training algorithm for Bayesian Neural Networks, e.g.~\citep{blundell2015weight} based on maximizing the evidence lower bound (ELBO).
Our implementation automatically handles the correct scaling of the KL term against the log likelihood in the ELBO.
\inlinepython{tyxe.MCMC_BNN} provides a compatible interface to Pyro\xspace's \inlinepython{MCMC} class.
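The KL scaling in question is simple but easy to get wrong by hand: with mini-batches, the batch log likelihood must be scaled up to the full dataset before subtracting the KL. A sketch in plain Python (names ours):

```python
def minibatch_elbo_estimate(batch_log_liks, kl, dataset_size):
    """Mini-batch ELBO estimate: scale the batch log likelihood up to
    the full dataset so it is weighed correctly against the KL term."""
    scale = dataset_size / len(batch_log_liks)
    return scale * sum(batch_log_liks) - kl
```

Omitting the scale factor effectively multiplies the KL penalty by the number of batches, which typically leads to severe underfitting.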
\begin{wrapfigure}{l}{.5\textwidth}
\vspace{-1.5em}
\centering
\inputpython{snippets/fit_predict.py}
\vspace{-0.75em}
\captionof{listing}{Regression fit and predict example with local reparameterization enabled for training only.}
\label{lst:fitpredict}
\vspace{-0.75em}
\end{wrapfigure}
\Cref{lst:fitpredict} shows a call to \inlinepython{fit}.
Besides the data loader and number of epochs or samples, it is possible to pass in a \inlinepython{callback} function to the \inlinepython{VariationalBNN}, which is invoked after every epoch with the average value of the ELBO over the epoch and can be used e.g. to check the log likelihood of a validation data set.
By returning \inlinepython{True}, the callback function can stop training.
The \inlinepython{MCMC_BNN} passes any keyword arguments on to Pyro\xspace's \inlinepython{MCMC} class.
\subsection{Predicting with a BNN}
The \inlinepython{predict} method returns predictions for a given number of weight samples from the approximate posterior.
\Cref{lst:fitpredict} invokes \inlinepython{predict} at the bottom.
By default it aggregates the sampled predictions, i.e. averages them.
Via \inlinepython{aggregate=False} the sampled predictions can be returned in a stacked tensor.
We further implement an \inlinepython{evaluate} method that expects test labels and returns their log likelihood along with an error measure depending on the model, e.g. squared error for Gaussian likelihoods and classification error for Categorical or Binary ones.
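By way of illustration, the default aggregation for a classifier amounts to averaging the sampled class-probability vectors, from which a predictive entropy can be computed as an uncertainty measure (plain-Python sketch; the actual method operates on stacked tensors):

```python
import math

def aggregate_predictions(sampled_probs):
    """Average the class-probability vectors sampled for one input,
    i.e. what aggregation does by default."""
    n = len(sampled_probs)
    return [sum(p[j] for p in sampled_probs) / n
            for j in range(len(sampled_probs[0]))]

def predictive_entropy(probs):
    """Entropy of the aggregated prediction, a common uncertainty measure."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)
```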
\subsection{Transformations via effect handlers} \label{sec:reparam}
One crucial component missing from Pyro\xspace that TyXe\xspace provides is BNN-specific effect handlers~\citep{plotkin2009handlers,bingham2019pyro}, specifically local reparameterization \citep{kingma2015variational} and flipout \citep{wen2018flipout} for gradient variance reduction.
For factorized Gaussian approximate posteriors over the weights and layers performing linear mappings, such as dense or convolutional layers, local reparameterization samples the pre-activations of each data point rather than a single weight matrix shared across a mini-batch.
Flipout, on the other hand, samples a rank-one matrix of signs per data point, which allows for using distinct weights in a computationally efficient manner in linear operations, if the weights are sampled from a factorized symmetric distribution.
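The mathematics behind local reparameterization is compact: for $y = x W$ with elementwise $W_{ij} \sim \mathcal{N}(\mu_{ij}, \sigma_{ij}^2)$, each pre-activation is itself Gaussian with mean $\sum_i x_i \mu_{ij}$ and variance $\sum_i x_i^2 \sigma_{ij}^2$, so it can be sampled directly without materializing a weight sample. A stdlib sketch (function name ours; the real handler intercepts \inlinepython{F.linear} on tensors):

```python
import math
import random

_rng = random.Random(0)

def local_reparam_linear(x, w_mean, w_sd, rng=_rng):
    """Sample pre-activations y = x @ W for W ~ N(w_mean, w_sd^2)
    elementwise, without ever sampling W itself."""
    n_in, n_out = len(w_mean), len(w_mean[0])
    y = []
    for j in range(n_out):
        mu = sum(x[i] * w_mean[i][j] for i in range(n_in))
        var = sum((x[i] ** 2) * (w_sd[i][j] ** 2) for i in range(n_in))
        y.append(mu + math.sqrt(var) * rng.gauss(0.0, 1.0))
    return y
```

Because the noise is drawn per data point rather than shared across the batch, the gradient estimator has lower variance than sampling a single weight matrix.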
\begin{listing}[t]
\centering
\inputpython{snippets/resnet.py}
\vspace{-0.75em}
\caption{Bayesian ResNet. Line $1$ loads a ResNet with pre-trained parameters from \inlinepython{torchvision}. The prior in lines $2{-}3$ excludes BatchNorm layers, keeping their parameters deterministic. Arguments to the guide are passed with \inlinepython{partial} as in lines $5{-}6$. We show how to set the Gaussian means to the pre-trained weights and only fit the variances, which are initialized to be small. The \inlinepython{BNN} object in line $7$ is constructed exactly the same way as in the regression example. Lines $9{-}11$ show an alternative prior that only applies to the final fully-connected layer alongside a Pyro\xspace autoguide.}
\label{lst:resnet}
\vspace{-1.25em}
\end{listing}
Typically, these are implemented as separate layer classes, e.g. \citep{tran2019bayesian}.
This creates an unnecessary redundancy in the code base, since there are now two versions of the same model differing only in sampling approaches for gradient estimation at each linear mapping.
From a probabilistic modeling point of view it is preferable to separate model and inference explicitly to facilitate reuse of models and inference approaches.
Fortunately, Pyro\xspace provides an expressive module for effect handling, which we can leverage to modify the computation as required.
Specifically, we implement a \inlinepython{LocalReparameterizationMessenger} which marks linear functions called by Pytorch\xspace modules, such as \inlinepython{F.linear}, as effectful in order to modify how linear computations are performed.
The Messenger maintains references from samples to their distributions and, when a linear function is called in a \inlinepython{local_reparameterization} context on weights from a factorized Gaussian, samples the output from the Gaussian over the result of the linear mapping.
\Cref{lst:fitpredict} calls \inlinepython{fit} in such a context.
The call to \inlinepython{predict} could be wrapped too, but the purpose of local reparameterization and flipout is to reduce gradient variance.
As they double the computational cost, we omit them for testing.
\section{Large-scale vision classification} \label{sec:resnet}
The biggest advantage resulting from our choice not to implement bespoke layer classes is that implementations of popular architectures can immediately be turned into their Bayesian counterparts.
While implementing the two-layer network from the regression example with Bayesian layers is of course not complicated, writing the code for a modern computer vision architecture, e.g. a ResNet \citep{he2016deep}, is significantly more cumbersome and error-prone.
With TyXe\xspace, users can use the ResNet implementation available through \inlinepython{torchvision} as shown in \Cref{lst:resnet}.
In this example we further highlight the flexibility of TyXe\xspace to only perform inference over some parameters while keeping others deterministic by excluding \inlinepython{nn.BatchNorm2d} layers from a Bayesian treatment.
\begin{figure}[t]
\centering
\subfigure{
\includegraphics[width=\linewidth]{legend}
}\\[0pt]
\vspace{-1.25em}
\setcounter{subfigure}{0}
\subfigure[Test calibration.]{\label{subfig:calibration}
\includegraphics[width=0.45\linewidth]{calibration_curves}
}%
\hfill
\subfigure[Test vs. OOD uncertainty.]{\label{subfig:ood}
\includegraphics[width=0.45\linewidth]{ood}
}
\vspace{-0.75em}
\caption{Calibration curves and empirical cumulative density of the entropy of the predictive distribution on test and OOD data for Bayesian ResNet-18 with different inference approaches on CIFAR10 (OOD: SVHN).}
\label{fig:bayesian_resnet}
\vspace{-1.25em}
\end{figure}
To showcase how the clean separation of network architecture, prior, guide and likelihood in TyXe\xspace facilitates an experimental workflow, we investigate the predictive uncertainty of different inference strategies for a Bayesian ResNet.
In \Cref{lst:resnet} we define a fully factorized Gaussian guide that fixes the means to the values of pre-trained weights and only fits the variances as parameters.
While we would usually want the approximate posterior to be as flexible as possible, it has been observed in the literature \citep{louizos2017multiplicative,trippe2018overpruning} that such restrictions can improve the predictive performance of a BNN.
We further investigate a mean-field guide where we similarly initialize the means to pre-trained weight values, but do not fix them for optimization, and restrict the variance of the variational distribution to a maximum of $0.1$ to prevent underfitting.
Finally we test performing inference in only the final classification layer with a Gaussian guide with either a diagonal or low-rank plus diagonal covariance matrix (also shown in the Listing) while using the pre-trained weights for the previous layers.
Switching between these options is easy, with typically only one or two lines of code differing.
As baselines we compare to maximum likelihood (ML) and maximum a-posteriori (MAP).
For the full code see \inlinepython{examples/resnet.py}.
\begin{wraptable}{r}{0.5\linewidth}
\centering
\vspace{-0.75em}
\caption{Bayesian ResNet-18 predictive perf.}
\vspace{-0.75em}
\resizebox{0.9\linewidth}{!}{%
\begin{tabular}{l|cccc}
\toprule
Inference & NLL$\downarrow$ & Acc.$\uparrow$(\%) & ECE$\downarrow$(\%) & OOD$\uparrow$\\
\midrule
ML & 0.33 & 94.29 & 4.10 & 0.78 \\
MAP & 0.29 & 92.14 & 4.44 & 0.82 \\
MF (sd only) & 0.27 & 93.66 & 3.14 & 0.93 \\
MF & 0.20 & 93.28 & 0.97 & 0.94 \\
LL MF & 0.35 & 93.36 & 3.62 & 0.89 \\
LL low rank & 0.34 & 93.31 & 3.75 & 0.89 \\
\bottomrule
\end{tabular}
}
\label{tab:resnet}
\vspace{-0.75em}
\end{wraptable}
\Cref{fig:bayesian_resnet} compares calibration and entropy of the predictive distributions on test and out-of-distribution (OOD) data.
Mean-field (MF) with learned means leads to better calibrated predictions than variants (re-)using point estimates.
It best distinguishes test from OOD data as measured by the area under the ROC curve based on the maximum predicted probability and has the lowest expected calibration error (ECE) and negative log likelihood (NLL), see \Cref{tab:resnet}.
We provide a pure Pyro\xspace snippet for a variational ResNet in \Cref{app:comparison} for a direct comparison.
The implementation requires knowledge of a range of Pyro\xspace constructs to avoid pitfalls such as incorrectly scaling prior and likelihood, yet the code ends up being significantly lengthier and somewhat convoluted.
In contrast, TyXe\xspace provides a clean object-oriented interface that will be intuitive for most users with a basic understanding of Bayesian statistics and accessible for pure Pytorch\xspace users who do not want to have to learn Pyro\xspace.
Crucially, essential features for achieving good discriminative performance with a BNN, such as local reparameterization and clipping the variance of the approximate posterior, are not available in Pyro\xspace.
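The expected calibration error reported above is straightforward to compute from maximum predicted probabilities and correctness indicators. A plain-Python sketch of the standard binned estimator (names ours):

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin predictions by confidence and average the absolute gap
    between accuracy and mean confidence, weighted by bin size."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        bins[min(int(conf * n_bins), n_bins - 1)].append((conf, ok))
    n = len(confidences)
    ece = 0.0
    for b in bins:
        if b:
            avg_conf = sum(c for c, _ in b) / len(b)
            acc = sum(1.0 for _, ok in b if ok) / len(b)
            ece += (len(b) / n) * abs(acc - avg_conf)
    return ece
```

A perfectly calibrated model scores zero; a confidently wrong one approaches its average confidence.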
\section{Compatibility with external libraries}
TyXe\xspace is compatible with libraries outside of the native Pytorch\xspace ecosystem and with problems beyond classical settings such as classification of i.i.d. images or regression, as long as the networks build on top of \inlinepython{nn.Module}.
Below, we demonstrate this on a semi-supervised node classification example with a graph neural network from the DGL \citep{wang2019dgl} tutorials, as well as a 3D rendering example in Pytorch\xspace{3D}.
\subsection{Bayesian graph neural networks with DGL}
\label{sec:gnn}
\begin{listing}
\centering
\vspace{0.3em}
\begin{minipage}[t]{.49\linewidth}
\inputpython{snippets/gcn_layer.py}
\end{minipage}\hfill%
\begin{minipage}[t]{.49\linewidth}
\inputpython{snippets/gnn_net.py}
\end{minipage}\\[0.1em]
\begin{minipage}[b]{.99\linewidth}
\inputpython{snippets/gnn.py}
\end{minipage}
\vspace{-0.5em}
\captionof{listing}{GNN example. The graph convolutional layer definition (top left) relies on DGL's graph functionality and is used for the GNN (top right). The Bayesian GNN can be constructed in line $1$ with the exact same prior, guide and likelihood options as previously. The \inlinepython{selective_mask} in line $3$ ensures that only predictions on labelled nodes contribute to the log likelihood when calling \inlinepython{fit} in line $4$. The input data now consists of a graph and node features.}
\label{lst:gnn}
\vspace{-1.25em}
\end{listing}
We extend an example from the DGL tutorials\footnote{\url{https://docs.dgl.ai/en/0.5.x/tutorials/models/1_gnn/1_gcn.html}} to train a Bayesian graph neural network~(GNN) on the Cora dataset.
Graph datasets are often semi-supervised, where an entire graph of nodes is provided, but only some of them are labelled.
Hence we need a mechanism for preventing unlabelled nodes from contributing to the log likelihood.
We combine Pyro\xspace's \inlinepython{block} and \inlinepython{mask} poutines to implement the \inlinepython{selective_mask} effect handler, which can wrap the call to \inlinepython{fit} as a context manager as shown in \Cref{lst:gnn} and mask out data in the likelihood.
The network is taken from the DGL tutorial without change.
As it utilizes \inlinepython{nn.Linear}, it is compatible with flipout.
Prior, guide, likelihood and BNN can be constructed exactly as in the previous examples, see \inlinepython{examples/gnn.py} for the code.
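The effect of the \inlinepython{selective_mask} handler on the likelihood term can be summarized in a few lines of plain Python (function name ours; the real handler operates on Pyro\xspace sample sites):

```python
def masked_log_likelihood(per_node_log_liks, labelled):
    """Sum per-node log likelihoods over labelled nodes only, mimicking
    masking out unlabelled nodes in the likelihood."""
    return sum(ll for ll, keep in zip(per_node_log_liks, labelled) if keep)
```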
\begin{wraptable}{r}{0.5\linewidth}
\vspace{-1.45em}
\centering
\caption{Performance of deterministic and Bayesian GNNs on the Cora dataset. We report the lowest validation NLL along with the test accuracy and ECE at the corresponding epoch (mean and two standard errors over five runs).}
\vspace{-0.75em}
\resizebox{0.95\linewidth}{!}{%
\begin{tabular}{l|ccc}
\toprule
Inference & NLL$\downarrow$ & Acc.$\uparrow$ & ECE$\downarrow$ \\
\midrule
ML & $1.01 \pm .04$ & $75.64 \pm 1.28$ & $15.38 \pm 0.97$ \\
MAP & $0.93 \pm .03$ & $75.94 \pm 0.73$ & $12.78 \pm 0.96$ \\
MF & $0.77 \pm .02$ & $78.02 \pm 1.00$ & $10.22 \pm 1.31$ \\
\bottomrule
\end{tabular}
}
\label{tab:cora}
\end{wraptable}
In \Cref{tab:cora} we report NLLs, accuracies and ECE for ML, MAP and MF.
ML leads to overfitting and requires the use of early stopping.
Further, it suffers from overconfident predictions, which can be mitigated to a degree by the use of variational inference, although not to the same extent as in the image classification example.
Bayesian GNNs have only recently started to be investigated in a few works \citep{zhang2019bayesian,hasanzadeh2020bayesian,luo2020learning,lamb2020bayesian} and we believe that TyXe\xspace can be a valuable tool for putting Bayesian inference at the disposal of the graph neural network community.
\subsection{Custom losses: Bayesian NeRF with Pytorch\xspace{3D}} \label{sec:nerf}
Next, we adapt a more complex example on Neural Radiance Fields (NeRF) \citep{mildenhall2020nerf} from the Pytorch\xspace{3D} repository\footnote{\url{https://github.com/facebookresearch/pytorch3d/blob/master/docs/tutorials/fit_simple_neural_radiance_field.ipynb}} to train a Bayesian NeRF.
The loss function does not straightforwardly correspond to a probabilistic likelihood and is calculated as a custom error function of rendered image and silhouette.
Hence there is no suitable likelihood class to implement for TyXe\xspace and it is not clear how the prior or KL term should be weighted relative to the error.
Therefore this example is not Bayesian in the proper sense, as a `posterior' in the form of a product of likelihood and prior does not exist, but it demonstrates that the uncertainty of a pseudo-Bayesian variational BNN can still improve the robustness on unseen data.
\begin{listing}[t]
\centering
\inputpython{snippets/nerf.py}
\vspace{-0.75em}
\caption{Bayesian NeRF example. Constructing a \inlinepython{PytorchBNN} is similar to a \inlinepython{VariationalBNN} in line $1$ but without the likelihood. No downstream changes except for parameter collection for the Pytorch\xspace optimizer in line $2$ -- which requires a batch of data to trace parameters on a call to the net's forward method -- are needed. The \inlinepython{nerf_bnn} can be passed into the Pytorch\xspace{3D} renderer in line $4$ as a drop-in replacement for the \inlinepython{nerf_net}. The loss can be calculated as before in line $5$, with the possible addition of the KL regularizer in line $6$. Automatic differentiation and parameter updates can be performed as in standard Pytorch\xspace code in line $7$.}
\label{lst:nerf}
\end{listing}
Specifically, we introduce a more low level \inlinepython{PytorchBNN} class that does not require a likelihood and can be used to directly wrap a Pytorch\xspace neural network.
It is constructed similarly to \inlinepython{VariationalBNN} with a variational guide factory, but due to the absence of the likelihood does not provide convenience functions such as \inlinepython{fit} or \inlinepython{predict}.
Instead, it is intended to serve as a drop-in replacement of the deterministic neural network in a Pytorch\xspace-based workflow.
The output of the \inlinepython{forward} method corresponds to predictions of the network made with a single Monte Carlo sample from the variational posterior.
The corresponding KL penalty term can be accessed through the \inlinepython{cached_kl_loss} attribute and added to the loss.
It is updated on every forward pass, i.e. when a sample is drawn from the approximate posterior.
The key difference to a regular Pytorch\xspace neural network is that since Pyro\xspace initializes parameters lazily, we cannot provide a \inlinepython{parameters} method.
Instead, optimizable parameters are collected via \inlinepython{pytorch_parameters}, which takes a batch of data to pass through the network for tracing the parameters.
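The sample-per-forward-pass pattern with a cached KL penalty can be sketched for a single scalar weight in plain Python. The class and attribute names below mirror the text but are purely illustrative, not TyXe\xspace's implementation:

```python
import math
import random

class TinyStochasticLinear:
    """Sketch of the PytorchBNN pattern: every forward pass draws a fresh
    weight sample and caches the KL of the weight posterior from a
    standard normal prior."""

    def __init__(self, mu, sd, rng=None):
        self.mu, self.sd = mu, sd
        self.rng = rng or random.Random(0)
        self.cached_kl_loss = None

    def forward(self, x):
        w = self.mu + self.sd * self.rng.gauss(0.0, 1.0)
        # Closed-form KL(N(mu, sd^2) || N(0, 1)), refreshed on every call.
        self.cached_kl_loss = (0.5 * (self.sd ** 2 + self.mu ** 2 - 1.0)
                               - math.log(self.sd))
        return w * x
```

A training step would then minimize \inlinepython{data_loss + scale * layer.cached_kl_loss}, in the spirit of line $6$ of the listing.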
\begin{wrapfigure}{r}{0.5\textwidth}
\vspace{-2.75em}
\centering
\subfigure{
\includegraphics[width=0.32\linewidth]{ml_render_seen}}%
\hspace{-1em}
\subfigure{
\includegraphics[width=0.32\linewidth]{vi_render_seen}}%
\hspace{-1em}
\subfigure{
\includegraphics[width=0.32\linewidth]{vi_uncertainty_seen}}
\\[-1.1em]%
\setcounter{subfigure}{0}
\subfigure[Det. NeRF]{
\includegraphics[width=0.32\linewidth]{ml_render_unseen}}%
\hspace{-1em}
\subfigure[Bay. NeRF]{
\includegraphics[width=0.32\linewidth]{vi_render_unseen}}%
\hspace{-1em}
\subfigure[Uncertainty]{
\includegraphics[width=0.32\linewidth]{vi_uncertainty_unseen}}
\caption{Pytorch\xspace{3D} example. Top row seen during training, bottom row excluded. Bayesian NeRF achieves an error of $8.1{\times}10^{-3}$ on a set of $10$ held-out angles, while the error is $9.4{\times}10^{-3}$ for the deterministic version. Uncertainty visualizes variance across different weight samples.}
\label{fig:nerf}
\vspace{-1.25em}
\end{wrapfigure}
We provide a code snippet in \Cref{lst:nerf}.
We emphasize that parameters are trained with the original Pytorch\xspace instead of a Pyro\xspace optimizer, further reducing the required changes to the original workflow.
The renderer is a Pytorch\xspace{3D} object and uses the Bayesian NeRF object instead of the original Pytorch\xspace network.
The data-dependent loss is then calculated as before and the KL-divergence of the approximate posterior from the prior on the weights can be added to the objective as a regularizer, possibly weighted by some scalar \inlinepython{scale}.
The full code can be found in \inlinepython{examples/nerf.py} and is identical to the original notebook for the most part, with only a few lines needing to be modified to adapt it to TyXe\xspace, as well as some additional plotting code for visualizing the predictive uncertainty.
In the original example, the network is trained to render views of a cow from $360^\circ$.
We hold out $90^\circ$ as out-of-distribution data.
As \Cref{fig:nerf} shows, this leads to many artifacts and discontinuities with a deterministic net.
The pseudo-Bayesian NeRF averages many of these out, and provides helpful measures of uncertainty in form of the variances of the predicted images (right column).
\section{Variational continual learning}
\begin{listing}
\centering
\inputpython{snippets/vcl.py}
\vspace{-0.75em}
\caption{Updating the prior of a \inlinepython{BNN} for variational continual learning. Line $1$ collects all weights over which we perform inference, line $2$ extracts the corresponding variational distributions from the guide, and line $3$ uses these to update the BNN's prior.}
\label{lst:vcl}
\vspace{-1.25em}
\end{listing}
Finally, we show how our separation of prior, guide and network architecture enables an elegant implementation of variational continual learning (VCL) \citep{nguyen2017vcl}.
Having set up and trained a BNN on a first task as in the previous examples, we only need to construct a new prior from the guide distributions over the weights to update the previous BNN prior.
We show example code for this process in \Cref{lst:vcl} and the full implementation can be found in \inlinepython{examples/vcl.py}.
Training on the following task can then be conducted as usual with the \inlinepython{fit} method on the current dataset.
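The core idea -- the posterior after one task becomes the prior for the next -- is exact in conjugate models, which makes for a compact illustration. Below, sequential Gaussian-mean updates over two "tasks" recover the same posterior as seeing all data at once (plain Python; names ours):

```python
def gaussian_posterior(prior_mu, prior_var, obs, noise_var):
    """Conjugate update of a Gaussian mean with known observation noise."""
    post_var = 1.0 / (1.0 / prior_var + len(obs) / noise_var)
    post_mu = post_var * (prior_mu / prior_var + sum(obs) / noise_var)
    return post_mu, post_var

# VCL in miniature: after each task, the posterior becomes the next prior.
mu, var = 0.0, 1.0  # initial prior
for task_data in ([1.0, 1.2], [0.8]):
    mu, var = gaussian_posterior(mu, var, task_data, noise_var=0.25)
```

VCL approximates this recursion with variational distributions over network weights, where the equivalence no longer holds exactly.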
\begin{wrapfigure}{r}{0.5\textwidth}
\vspace{-0.75em}
\centering
\includegraphics[width=\linewidth]{split_vcl}
\vspace{-2em}
\caption{Mean accuracy and two standard errors on tasks seen so far for VCL and ML on Split-MNIST and -CIFAR.}
\label{fig:vcl}
\vspace{-3.5em}
\end{wrapfigure}
In \Cref{fig:vcl} we show the test accuracy across the observed tasks after training on each one on the classical Split-MNIST and Split-CIFAR benchmarks~\citep{zenke2017continual}.
We do not use coresets as in \citep{nguyen2017vcl}, but this would only require some boilerplate code for creating the coresets prior to training and then fine-tuning on each coreset prior to testing by calling \inlinepython{fit} and restoring the state of the Pyro\xspace parameter store.
As previously reported in the literature, deterministic networks suffer from forgetting on previous tasks, which can be mitigated by using a Bayesian approach such as VCL.
\section{Related work}
The most closely related piece of recent work is Bayesian Layers \citep{tran2019bayesian}, which extends the layer classes of Keras with the aim of them being usable as drop-in replacements for their deterministic counterparts.
This forces the user to modify the code where the network is defined or write their own boilerplate code.
Bayesian Layers are currently more general in scope, providing an abstraction over uncertainty over composable functions including normalizing flows and Gaussian Process mappings per layer, while at this point we have consciously limited ourselves to weight space uncertainty in neural networks and treat networks holistically rather than per layer.
For Pytorch\xspace, PyVarInf\footnote{\url{https://github.com/ctallec/pyvarinf}} provides functionality for turning \inlinepython{nn.Module}s into BNNs in a similar spirit to TyXe\xspace.
As it is not backed by a probabilistic programming framework, the choice of prior distributions is limited, inference is restricted to variational factorized Gaussians, sampling tricks such as local reparameterization are not implemented and MCMC-based inference is not available.
More recently, BLiTZ\footnote{\url{https://github.com/piEsposito/blitz-bayesian-deep-learning}} \citep{esposito2020blitzbdl} provides variational counterparts to Pytorch\xspace's linear, convolutional and some recurrent layers.
Networks need to be constructed manually based on those, with no support of other layer types.
Priors are limited to mixtures of up to two Gaussians and inference is performed with a factorized Gaussian without support for gradient variance reduction techniques.
Subsequent to the release of TyXe\xspace, UQ360 \citep{uq360-june-2021} and BNNPriors \citep{bnnpriors} were released. BNNPriors provides support for a range of different weight priors, restricting inference to (stochastic) MCMC-based methods; UQ360 provides a general treatment of uncertainty techniques.
\section{Conclusion}
We have presented TyXe\xspace, a Pyro\xspace-based library that facilitates a seamless integration of Bayesian neural networks for uncertainty estimation and continual learning into Pytorch\xspace-based workflows.
We have demonstrated the flexibility of TyXe\xspace with applications based on 3rd-party libraries, ranging from modern deep image classification architectures over graph neural networks to neural radiance fields.
TyXe\xspace avoids implementing bespoke layer classes and instead leverages and expands on Pyro\xspace's powerful effect handler module, resulting in a flexible design that cleanly separates architecture definition, prior, inference, likelihood and sampling logic.
TyXe\xspace's choices of variational distributions are currently pragmatic, focused on serving practitioners and researchers interested in generating uncertainty estimates for downstream tasks that will benefit from the improvements offered by standard variational families or HMC over maximum likelihood.
Recent work has even argued that mean-field may be sufficient for inference in deep networks \citep{farquhar2020liberty}.
However, we are highly interested in further developing TyXe\xspace to support more complex recent approaches and become a tool for Bayesian deep learning research with its backing by Pyro facilitating extensions; see \Cref{app:directions} for an in-depth outlook.
We would expect techniques with structured covariance matrices \citep{louizos2016structured,ritter2018scalable} as well as hierarchical weight models \citep{louizos2017multiplicative,karaletsos2018probabilistic,inducing:weights} to be feasible to express within TyXe\xspace, with the latter possibly requiring additional abstractions.
Nevertheless, we believe that similar to Bayesian Layers \citep{tran2019bayesian} TyXe\xspace already makes a valuable contribution to the ML software ecosystem, filling the gap of easy-to-use uncertainty estimation for Pytorch\xspace.
\section{Introduction}
Sol-gel methods are commonly used for the fabrication of oxide materials with a wide range of functionalities. These methods involve the synthesis of a precursor solution (known as `sol') containing oligomeric chains of metal ions and oxygen atoms. Treatment of the sol, for example by the addition of water or by heating, causes the formation of a continuous metal-oxygen network, leading to gelation of the sol. The sol may be processed into a variety of products, such as bulk powders (by simply heating the gel), thin films by deposition of the sol onto a substrate (\textit{e.g.} by spin coating) or a multitude of other forms.\cite{Danks2016, Bassiri-Gharb2014} Employing a sol-gel-type synthesis for the production of oxide materials facilitates the control of composition and doping, making high homogeneity and short fabrication cycles possible.\cite{Danks2016} Furthermore, when used to fabricate thin films, it can give rise to smooth films covering a large surface area with a wide range of film thicknesses up to several micrometers.
One material which may be produced using a sol-gel method is the well-known lead zirconate titanate solid solution (\ch{PbZr_{1-x}Ti_xO_3}, also known as PZT), which is ferroelectric and, thus, piezoelectric, allowing for its use in sensors and actuators. The PZT composition with x=0.48 lies at a phase boundary between two different crystal structures with tetragonal (for Ti-rich compositions) and rhombohedral (for Zr-rich compositions) symmetries, where monoclinic structures have also been observed\cite{Noheda1999}. At this boundary, known as the morphotropic phase boundary (MPB), the piezoelectric coefficients are maximized.\cite{Jaffe1971} The piezoelectric parameters of PZT can be further improved through chemical doping with elements such as niobium (\ch{PbNb_y(Ti_xZr_{1-x})_{1-y}O_3} or PNZT).\cite{Damjanovic1999}
Traditional sol-gel methods used for the production of thin films of PZT, first reported by Budd, Dey and Payne\cite{Budd1985}, make use of the highly toxic 2-methoxyethanol as a solvent and alkoxides and acetates as the precursors for lead, zirconium and titanium. These methods rely on hydrolysis and condensation reactions of the alkoxide precursors to form a polymeric network of metal-oxygen-metal bonds. These methods use water for the initiation of the hydrolysis reaction. Hence, sols produced using such an approach tend to be sensitive to the presence of water.\cite{Danks2016} As a result, these sols require storage and processing in an oxygen-free and water-free environment, such as a glovebox.
More recently, there has been an interest in the development of chemical solution deposition (CSD) methods which are not based on hydrolysis-condensation reactions, instead relying on different types of reactions.\cite{Danks2016, Niederberger2007, Vioux1997, Debecker2012} One example of such a non-aqueous CSD method is based on ethylene glycol as a bridging ligand and common alkoxides and acetates as reagents. This method was reported to be nontoxic, to be more stable to atmospheric moisture and to have a more straightforward synthesis procedure.\cite{De-Qing2007} However, an investigation of the ferroelectric and piezoelectric properties of materials derived from this CSD method has, to our knowledge, not been reported.
We have studied the properties of both bulk and thin film products fabricated using the ethylene glycol-based CSD method.\cite{De-Qing2007} We show that multilayer stacks of thin films can be produced without cracks, voids or parasitic phases by carefully designing the deposition and heat treatment procedures, despite the presence of a large amount of organic material in the as-deposited film, which is commonly known to reduce film quality.\cite{Damjanovic1997} The piezoelectric behavior of the bulk and thin film products is comparable to reported values for samples of similar characteristics. Finally, we have investigated the sensitivity of the solution to moisture.
\section{Results and discussion}
\label{sec:res}
Properties of the sol as well as structural and ferroelectric properties of the PNZT films and bulk ceramic pellets have been investigated.
Figure \ref{fig:dsc} contains plots of the differential thermal analysis (DTA) and thermo-gravimetric analysis (TGA) data collected from the sol dried at 230\textcelsius\ on a hotplate. Initial weight loss occurs around 300\textcelsius\ and is associated with a peak in the DTA trace. This peak corresponds to the loss of ethylene glycol groups. Further weight loss occurs between 320\textcelsius\ and 400\textcelsius, corresponding to a large exothermic peak in the DTA trace. This peak is presumably the result of the removal of remaining organic material and the onset of crystallization of PNZT.\cite{Livage1994, Tu1995a} A total weight loss of approximately 23\% was observed up to 400\textcelsius. The final peak present at 842\textcelsius\ is possibly due to melting of lead oxide in the sample. These results are broadly consistent with those reported in the literature for a similar ethylene glycol-based solution deposition method\cite{Livage1994}.
The sensitivity of the sol to the presence of water (for example, from the atmosphere) was determined by directly adding various concentrations of water to 1 mL aliquots of the sol. The sols were left at room temperature in a dark location; no gelation occurred within the first month, even in sols to which 10 vol.\% water had been added. After a month, some gelation was observed, but there was no correlation between the gelation time and the concentration of water that was added to the sol. Sols with up to 5 vol.\% water were used to produce pellets as described in the `Materials and Methods' section below. These pellets were analyzed by x-ray diffraction and scanning electron microscopy to assess the influence of the addition of water to the sol on the structural and microstructural properties of the product. No trends could be discerned in either the structural or the microstructural properties as the concentration of water was increased. These results show that the ethylene glycol-based sol is highly stable towards moisture.
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\columnwidth]{DTA1}
\caption{DTA and TGA traces of the sol dried at 230\textcelsius.}
\label{fig:dsc}
\end{figure}
\subsection{Bulk}
PNZT pellets were produced from the PNZT sol with a 20\% excess of lead precursor. The sol was dried at 230\textcelsius\ and pyrolyzed at 420\textcelsius. Pellets with a nominal diameter of 10 mm were pressed from this powder at 6.4 ton/cm$^2$.
One pellet was sintered at 800\textcelsius\ for 2 hours. X-ray diffraction analysis of this pellet suggested that the pellet was in the perovskite phase. Nevertheless, the peak splittings expected for either the rhombohedral or tetragonal phases of PNZT were not present and no ferroelectric behavior was measured in this pellet. A second pellet was sintered at 1200\textcelsius\ for 2 hours. After sintering, the pellet had a diameter of 8.16 mm, a thickness of 1.39 mm and a density of 6936 $\mathrm{kg/m^3}$, that is 86.9\% of the theoretically predicted density\cite{Jaffe1954}.
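As a quick consistency check, the quoted relative density can be reproduced from the pellet dimensions. This is a sketch in Python; the theoretical density of roughly 7980 kg/m$^3$ used below is back-calculated from the stated 86.9\% rather than taken from ref. \citenum{Jaffe1954} directly, so it is an assumption of this illustration:

```python
import math

# Pellet geometry and density after sintering at 1200 C (values from the text)
diameter = 8.16e-3   # m
thickness = 1.39e-3  # m
density = 6936.0     # kg/m^3, measured

# Cylinder volume and the pellet mass this density implies
volume = math.pi * (diameter / 2) ** 2 * thickness
mass_g = density * volume * 1e3  # grams

# Relative density; 7980 kg/m^3 is an assumed stand-in for the theoretical
# density, back-calculated from the stated 86.9% figure
rho_theoretical = 7980.0  # kg/m^3 (assumption)
relative_density = density / rho_theoretical
```

The implied pellet mass of about half a gram and a relative density of 86.9\% are mutually consistent with the numbers quoted above.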
Figure \ref{fig:xrdbulk} shows an x-ray diffraction pattern of the pellet after sintering at 1200\textcelsius. A good fit of this pattern was obtained using a combination of a rhombohedral PNZT phase and a $\mathrm{\beta}$-PbO phase. This indicates that the excess of lead precursor in the sol is too high, leading to the formation of an impurity phase. Nevertheless, good quality PNZT pellets could not be obtained using a lower lead excess. Blown-up versions of the (111), (200) and (220) peaks of the pattern are shown in figure \ref{fig:xrdblowup}. The splitting of the peaks indicates that the material is in a mostly rhombohedral phase, with a small admixture of a tetragonal or possibly a monoclinic phase\cite{Noheda1999, Noheda2000, Noheda2000b}. Hence, the material is approaching the morphotropic phase boundary between the rhombohedral and tetragonal phases known to present the best piezoelectric response.\cite{Jaffe1971} The slight deviation from the exact composition at the morphotropic phase boundary may result from the loss of titanium precursor during synthesis due to its high reactivity with atmospheric moisture, or from impurity of the precursor itself.
\begin{figure}[htb]
\centering
\includegraphics[width=0.7\columnwidth]{bulkxrdfinal}
\caption{X-ray diffraction pattern of the pellet and a fit of the profile using rhombohedral PNZT and $\mathrm{\beta}$-PbO phases.}
\label{fig:xrdbulk}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=0.7\columnwidth]{blowups}
\caption{Blow-ups of the (a) (111), (b) (200) and (c) (220) peaks of the pattern in figure \ref{fig:xrdbulk}. Peaks originating from a tetragonal or monoclinic phase are indicated using green arrows.}
\label{fig:xrdblowup}
\end{figure}
Scanning electron microscopy (SEM) of the pellet (figure \ref{fig:pelletsem}) shows a dense grain structure with PNZT grains of 500-1000 nm. Additionally, large, plate-like crystals are present in the PNZT matrix. These crystals were determined to be lead oxide by energy dispersive spectroscopy (EDS), confirming the presence of a lead oxide phase in the pellet.
\begin{figure}[htb]
\centering
\includegraphics[width=0.7\columnwidth]{pelletsem1}
\caption{Scanning electron microscopy image of a pellet sintered at 1200\textcelsius\ showing PNZT grains and larger lead oxide crystals (orange arrows).}
\label{fig:pelletsem}
\end{figure}
The pellet was poled in a silicone oil bath at 100\textcelsius\ with an electric field of 29 kV/cm for 30 minutes to align the dipoles in the material. Ferroelectric property measurements of the bulk ceramic pellet were performed, yielding the polarization-electric field hysteresis loops and strain-electric field ("butterfly") loops expected for a ferroelectric, as displayed in figure \ref{fig:piezopellet}. The remnant polarization measured for this pellet is P$_r$= 9.5 $\mathrm{\mu C/cm^2}$, the coercive field E$_c$= 7.78 kV/cm and the longitudinal piezoelectric coefficient d$_{33}$= 441 pm/V. The d$_{33}$ coefficient obtained here is compared to literature values in table \ref{tab:piezocomp}. Our PNZT pellet has piezoelectric properties in line with those found in literature, even competing with commercially available piezoelectric elements. We expect that the ferroelectric and piezoelectric parameters can be further increased by bringing the composition closer to the morphotropic phase boundary and by improved densification of the pellet by, for example, hot pressing. This work shows that the ethylene glycol CSD method is capable of producing a high-quality material despite the simplicity of the method.
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\columnwidth]{bulkpiezo}
\caption{Polarization and strain loops of sol-gel derived PNZT pellet sintered at 1200\textcelsius.}
\label{fig:piezopellet}
\end{figure}
\begin{table}[ht]
\centering
\begin{tabular}{l l l l}
$d_{33}$ \textit{(pm/V)} & \textit{Doping element} & \textit{Production method} & \textit{Reference} \\
\hline
441 & Nb & CSD & This work\\
\hline
500 & Commercial & n.a. & \citet{Hinterstein2011}\\
475 & Commercial & n.a. & \citet{Xu2013}\\
420 & Undoped & Sol-gel & \citet{Sharma2001}\\
155 & Undoped & Wet chemical & \citet{Choy1997}\\
300 & Undoped & Wet chemical & \citet{Guiffard1998}\\
569 & La & Sol-gel & \citet{Shannigrahi2004}\\
269 & Nd & Sol-gel & \citet{Shannigrahi2004}\\
325 & La & Wet chemical & \citet{Sahoo2013}\\
236 & \ch{BiFeO3}/\ch{BaCu_{0.5}W_{0.5}O3}/\ch{CuO} & Solid-state & \citet{Dong1993}\\
338 & La/Nb & Solid-state & \citet{Singh2006}\\
520 & Sr/Nb & Solid-state & \citet{Zheng2001}\\
255 & Nb & Solid-state & \citet{Garcia2007}\\
\hline
\end{tabular}
\caption{Comparison of the longitudinal piezoelectric coefficient of the PNZT pellet fabricated using the ethylene glycol CSD method with literature values. }
\label{tab:piezocomp}
\end{table}
\subsection{Thin films}
A nine-layer PNZT film was produced from the 1.5 M PNZT sol by spinning at 5000 rpm followed by drying on the hotplate, with pyrolysis and annealing steps performed after every third layer. During heat treatment of these films, lead can be lost through evaporation at the film surface and through diffusion into the silicon substrate. This leads to the formation of a layer of lead-deficient pyrochlore phase at the film surface or at the film-electrode interface. An excess of lead precursor can be added to the PNZT sol to compensate for this loss. However, too large an excess can cause the formation of voids in the film due to evaporation of the excess lead species. Therefore, careful control of the excess is required.
To achieve such control, an alternative method was used here. A relatively small excess of lead of 10\% was added to the sol, compensating for diffusion but not evaporation. Additionally, a layer of pure lead oxide sol was deposited before the final pyrolysis step, compensating for evaporation from the film surface (see ref. \citenum{Brennecka2010}). The resulting film shows a dense structure with few voids and grains with sizes from several hundred nanometers up to 1 micrometer (figure \ref{fig:sem}). Using this procedure, no lead-deficient pyrochlore phase was found and no cracks or leakage paths are visible. These observations show that the combination of a lead excess in the sol with a lead oxide overcoat is effective at producing high quality thin films. A columnar grain structure is commonly observed in PZT thin films derived from traditional sol-gel methods based on 2-methoxyethanol, due to bottom-up growth of the grains after heterogeneous nucleation at the film-electrode interface. Such structure is not present in these films (figure \ref{fig:sem2}), indicating more homogeneous nucleation. This may be the result of the high organic content of the as-deposited films compared to traditional sol-gel-derived films.
\begin{figure}[ht]
\centering
\begin{subfigure}[h]{\columnwidth}
\centering
\includegraphics[width=0.7\columnwidth]{sem1}
\refstepcounter{subfigure}
\label{fig:sem1}
\end{subfigure}
~
\begin{subfigure}[h]{\columnwidth}
\centering
\includegraphics[width=0.7\columnwidth]{sem2}
\refstepcounter{subfigure}
\label{fig:sem2}
\end{subfigure}
\caption{(a) Plan-view and (b) cross-section SEM images of a nine-layer PNZT thin film stack with a PbO overcoat. The total thickness of the film is 440 nm.}
\label{fig:sem}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\columnwidth]{filmfinalxrd}
\caption{XRD pattern of the nine-layer thin film stack.}
\label{fig:xrd1}
\end{figure}
An x-ray diffraction pattern of the same film is shown in figure \ref{fig:xrd1}. The pattern shows a pure perovskite PNZT phase with no impurity peaks, except those originating from the platinized silicon substrate. No peak splitting is observed due to the broadening of the peaks. A small Pt(200) peak is present due to the top electrode, which is not perfectly (111) oriented. The preferential orientation of the PNZT film can be quantified by normalizing the integrated peak intensities with the intensities of the x-ray diffraction patterns of a powdered sample using the following expression:\cite{Balma2014}
\begin{equation}
\label{eq:preforient}
P(h_ik_il_i) = \frac{\dfrac{I(h_ik_il_i)}{I^*(h_ik_il_i)}}{\displaystyle \sum\limits_{hkl}\dfrac{I(hkl)}{I^*(hkl)}}
\end{equation}
where $P(h_ik_il_i)$ is a texture index quantifying the preferred orientation of the sample, $I(h_ik_il_i)$ is the intensity in the thin film sample and $I^*(h_ik_il_i)$ is the intensity in the powdered sample. The values in table \ref{tab:preforient} were obtained using the data in figure \ref{fig:xrd1}.
\begin{table}[ht]
\caption{Texture index values of the thin film sample}
\label{tab:preforient}
\centering
\begin{tabular}{l l}
$<h_ik_il_i>$ & $P(h_ik_il_i)$ \\
\hline
<100> & 0.25\\
<110> & 0.0086\\
<111> & 0.59\\
<200> & 0.11\\
<211> & 0.020\\
<220> & 0.016\\
\hline
\end{tabular}
\end{table}
A <111> orientation is preferred in these films due to the <111> texture of the underlying platinum electrode, showing that at least some of the film nucleates heterogeneously at the film-electrode interface. However, it is evident that some of the film nucleates homogeneously, resulting in a decreased <111> texture of the film. This is in agreement with the lack of columnar grains in the film.
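The normalization in equation \eqref{eq:preforient} amounts to dividing each film intensity by the corresponding powder intensity and rescaling the ratios to sum to one. A minimal Python sketch of the calculation follows; the intensity values are illustrative placeholders, not the measured data behind table \ref{tab:preforient}:

```python
def texture_index(film, powder):
    """Texture indices P(hkl): powder-normalized intensities, rescaled to sum to 1."""
    ratios = {hkl: film[hkl] / powder[hkl] for hkl in film}
    total = sum(ratios.values())
    return {hkl: r / total for hkl, r in ratios.items()}

# Illustrative integrated intensities (arbitrary units) -- NOT the measured data
film_I = {"100": 250.0, "110": 40.0, "111": 900.0, "200": 120.0}
powder_I = {"100": 300.0, "110": 1000.0, "111": 450.0, "200": 280.0}

P = texture_index(film_I, powder_I)
# By construction the indices sum to 1; a randomly oriented film would give
# equal P values for every reflection.
```

With these placeholder values the <111> reflection dominates, mimicking the preferential orientation observed in the film.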
\begin{figure}[ht]
\centering
\begin{subfigure}[h]{\columnwidth}
\centering
\includegraphics[width=0.7\columnwidth]{piezofilmfinal}
\refstepcounter{subfigure}
\label{fig:piezofilmfinal}
\end{subfigure}
~
\begin{subfigure}[h]{\columnwidth}
\centering
\includegraphics[width=0.7\columnwidth]{fatiguefilmfinal}
\refstepcounter{subfigure}
\label{fig:fatiguefilmfinal}
\end{subfigure}
\caption{(a) Ferroelectric and strain loop of the nine-layer PNZT thin film stack. The longitudinal piezoelectric coefficient is obtained from the slope of the strain loop at zero electric field, as indicated by the red tangent line. (b) Fatigue response of the film up to $\mathrm{10^7}$ cycles.}
\label{fig:piezofilm}
\end{figure}
Figure \ref{fig:piezofilmfinal} shows the ferroelectric hysteresis loop and strain loop of a nine-layer PNZT thin film stack.
A double-beam laser interferometer, which corrects for substrate bending to extract the true deformation of the film, was used to collect these loops. The loops were collected by sweeping the potential applied to the top electrode between +800 kV/cm and --800 kV/cm using a triangular waveform at a frequency of 100 Hz. The longitudinal piezoelectric coefficient is extracted from the strain loop by determining its slope at zero applied field. The film showed a remnant polarization of 10.5 $\mathrm{\mu C/cm^2}$, an average coercive field of 61.3 kV/cm, a longitudinal piezoelectric coefficient of 50 pm/V and a maximum deformation of 1.41 nm, that is 0.3\% of the thickness of the film. These values are again compared to those found in the literature, see table \ref{tab:piezocompfilm}, which displays the wide range of piezoelectric parameter values reported depending on the synthesis technique. Our piezoelectric coefficient is on the low end of this range, but improvements can likely be made. For example, fabrication of thicker films will improve piezoelectric behavior due to reduced clamping from the substrate.
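The extraction of $d_{33}$ from the slope of the strain loop at zero field can be sketched numerically. The toy response below is illustrative only (a linear piezoelectric term plus a quadratic, electrostriction-like term; the coefficients are not the measured loop):

```python
# Toy strain-field response: linear piezoelectric term plus a quadratic
# (electrostriction-like) term. Coefficients are illustrative, not measured.
d33_true = 50e-12   # m/V, i.e. 50 pm/V (the value reported for the film)
quad = 1e-20        # m^2/V^2, illustrative curvature

def strain(field):
    return d33_true * field + quad * field * field

# Central difference at zero field: the even (quadratic) term cancels,
# leaving the linear coefficient d33
dE = 1e4  # V/m
d33_est = (strain(dE) - strain(-dE)) / (2 * dE)
d33_pm_per_V = d33_est * 1e12
```

In practice the slope is obtained by fitting a line to the measured loop over a narrow window around zero field, which additionally averages out measurement noise.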
\begin{table}[ht]
\centering
\begin{tabular}{l l l}
$d_{33}$ \textit{(pm/V)} & \textit{Production method} & \textit{Reference} \\
\hline
50 & CSD & This work\\
\hline
50 & Sol-gel & \citet{Balma2014}\\
77 & Sol-gel & \citet{Taylor2000}\\
200 & OMCVD & \citet{Lefki1994}\\
400 & Sol-gel & \citet{Lefki1994}\\
85 & Sol-gel & \citet{Ledermann2003}\\
70-80 & Sol-gel & \citet{Zavala1997}\\
200 & Sol-gel & \citet{Chen1996}\\
25 & Sol-gel & \citet{Wang2002}\\
57.6 & Sol-gel & \citet{Lian2000}\\
164* & PLD & \citet{Nguyen2014}\\
106 & PLD & \citet{Goh2005}\\
\hline
\end{tabular}
\caption{Comparison of the longitudinal piezoelectric coefficient of the PNZT thin film fabricated using the ethylene glycol CSD method with literature values. These results are for undoped PZT, unless otherwise noted. *: 1 \% Nb doping, OMCVD = organometallic chemical vapor deposition, PLD = pulsed laser deposition.}
\label{tab:piezocompfilm}
\end{table}
Figure \ref{fig:fatiguefilmfinal} shows the fatigue response of the film. The film was switched at a frequency of 200 Hz with an electric field amplitude of 114 kV/cm, that is above the coercive field. Ferroelectric hysteresis loops were collected at 3 points/decade with a field amplitude of 800 kV/cm and a frequency of 100 Hz. The film is stable to fatigue for at least $\mathrm{10^7}$ cycles. These results are similar to those reported in the literature (see, e.g., refs. \citenum{Balma2014, Klissurska1997}).
To summarize, a sol-gel method was developed based on ethylene glycol as a solvent and bridging ligand. This sol was used for the production of pellets and thin films of ferroelectric niobium-doped lead zirconate titanate. This sol offers the advantages of a less toxic solvent, improved stability during storage, decreased sensitivity to atmospheric moisture and applicability to the synthesis of both bulk and thin film products. Furthermore, the synthesis of the sol is less complex than that of traditional, 2-methoxyethanol-based sols. DTA-TGA of the dried sol shows that decomposition of the gel is complete by 400\textcelsius, with crystallization of the desired PNZT phase occurring at higher temperatures.
Pellets of bulk PNZT were produced, having a density of 86.9\% of the theoretical density and a small lead oxide impurity. These show good properties with a coercive field of 7.78 kV/cm, a remnant polarization of 9.5 $\mathrm{\mu C/cm^2}$ and a piezoelectric coefficient of 441 pm/V, in line with literature values for similar PZT compositions.\cite{Damjanovic1999}
In addition, a nine-layer stack of PNZT thin films was fabricated from the sol by spin-coating with a thickness of 440 nm. An excess of lead was supplied to the thin films to compensate for evaporation and diffusion by combining the addition of an excess of lead precursor to the sol and the application of an overcoat of pure lead oxide. This method proved effective at suppressing the appearance of lead-deficient phases or voids in the stack. The final stack shows a dense perovskite grain structure with a weak (111) out-of-plane texture. Ferroelectric and piezoelectric characterization of the film shows ferroelectric coefficients close to literature values for thin films, with a remnant polarization of 10.5 $\mathrm{\mu C/cm^2}$, a coercive field of 61.3 kV/cm, a piezoelectric coefficient of 50 pm/V and a maximum deformation of 0.3\% of the thickness of the film. Furthermore, the film shows good stability to fatigue up to $\mathrm{10^7}$ cycles. This sol-gel method provides a safer, more water-stable alternative to traditional sol-gel methods based on 2-methoxyethanol for the fabrication of bulk and thin film products.
\section{Materials and Methods}
\subsection{Sol synthesis}
7.5 g of freeze dried lead acetate (\ch{Pb(CH_3COO)_2}, \ch{PbAc_2}, 23 mmol, 10 mol\% excess, $\mathrm\geq$ 99\%, Sigma Aldrich) and 9.4 mL ethylene glycol (\ch{(CH_2OH)_2}, EG) were added to a three-necked flask under a 0.5 lpm argon flow. An excess of lead acetate was used to compensate for losses due to evaporation and diffusion during the heat treatment steps. The suspension was heated to 90\textcelsius\ while stirring to dissolve the solids, then to 110\textcelsius\ to expel any remaining water from the solution. The sol was subsequently cooled to 90\textcelsius. 2.857 mL of titanium isopropoxide (\ch{Ti(OCH(CH_3)_2)_4}, 2.743 g, 9.65 mmol, 97\%, Sigma Aldrich), 0.210 mL niobium ethoxide (\ch{Nb(OCH_2CH_3)_5}, 0.266 g, 0.836 mmol, 99.95\%, Sigma Aldrich) and 4.686 mL of a 70 wt.\% solution of zirconium n-propoxide in 1-propanol (\ch{Zr(OCH_2CH_2CH_3)_4}, 3.425 g \ch{Zr(OCH_2CH_2CH_3)_4}, 10.5 mmol, Sigma Aldrich) were dissolved in 6.1 mL 1-propanol under inert atmosphere. The Ti/Nb/Zr solution was added to the lead sol slowly, limiting exposure to air. Some precipitate formed upon addition. A further 15.7 mL of EG was added, yielding 30 mL of solution with a total metal-ion concentration of 1.5 M and a nominal composition of \ch{Pb_{1.1}Nb_{0.04}(Zr_{0.52}Ti_{0.48})_{0.96}O_3}. The suspension was stirred at 90\textcelsius\ until all precipitate had redissolved. The sol was cooled to room temperature and 4 vol.\% formamide (\ch{HCONH_2}, $\mathrm\geq$ 99\%, Sigma Aldrich) was added as a drying control chemical additive to limit the formation of cracks in the films.\cite{Hench1990} The sol was stored under inert atmosphere, where it is stable for at least 3 months. In air, the lifetime of the sol is shorter, but it is still stable for 1-2 weeks. A second sol was made in the same way, using a 20\% excess of lead precursor. This sol was used for the preparation of bulk PNZT (see below).
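The quoted amounts can be cross-checked against the nominal composition \ch{Pb_{1.1}Nb_{0.04}(Zr_{0.52}Ti_{0.48})_{0.96}O_3} and the 1.5 M target concentration. The sketch below uses only the molar quantities stated above:

```python
# Molar quantities from the synthesis (mol); Pb includes the 10% excess
pb = 23e-3
ti = 9.65e-3
nb = 0.836e-3
zr = 10.5e-3
volume_L = 30e-3  # final sol volume in litres

# Total metal-ion concentration, nominally 1.5 M
conc = (pb + ti + nb + zr) / volume_L

# B-site fractions, nominally Nb = 0.04 and Zr/(Zr+Ti) = 0.52
b_site = ti + nb + zr
nb_frac = nb / b_site
zr_frac = zr / (zr + ti)
```

The totals come out within a few percent of the nominal 1.5 M concentration, Nb fraction of 0.04 and Zr/(Zr+Ti) ratio of 0.52.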
A separate lead oxide (PbO) sol was fabricated by dissolving 9.76 g of freeze dried lead acetate (\ch{Pb(CH_3COO)_2}, \ch{PbAc_2}, 30 mmol, $\mathrm\geq$ 99\%, Sigma Aldrich) in 30 mL of ethylene glycol (\ch{(CH_2OH)_2}, EG) while stirring, to a final concentration of 1 M.
\subsection{Substrate preparation}
The substrates used for deposition of the PNZT sol were prepared from a (001) oriented silicon wafer without thermal oxide (Ted Pella) diced into 1x1 cm squares. The silicon substrates were cleaned ultrasonically in acetone, demineralized water and ethanol for ten minutes each. The substrates were subsequently blow dried using compressed air and loaded into a Kurt J. Lesker sputtering system. The substrates were \ch{O_2} plasma cleaned (0.15 mbar, 200 W, 5 min.) after which a titanium adhesion layer of 5/10 nm was DC sputtered (200 W, 0.2 nm/s) without breaking the vacuum. Subsequently, a 100 nm thick electrode of platinum was DC sputtered (200 W, 1.61 nm/s) onto the adhesion layer. The full electrode stack was annealed in a box furnace in air (450\textcelsius, 90 min., ramp rate 14.2 \textcelsius/s).
\subsection{Deposition procedure and heat treatment}
The platinized silicon substrates prepared as described above were again cleaned ultrasonically in acetone, demineralized water and ethanol for 10 minutes each. The substrates were blow dried using compressed air and UV/\ch{O_3} treated in an Ossila UV ozone cleaner to remove any residual organic contamination from the surface. The substrates were immediately placed in the center of the vacuum chuck of a spin coater. 75 $\mathrm{\mu L}$ of the 1.5 M sol was deposited onto the substrates. The spin coater was subsequently ramped up to the desired speed at 1000 rpm/s. It was held at this speed for 30 s, then slowed to a stop at 1000 rpm/s. The film was then placed on a hotplate at 230\textcelsius\ for drying. Additional layers were deposited after drying for the production of multilayer films. After the deposition of up to three layers, the films were pyrolyzed at 380\textcelsius\ on the hotplate and annealed by placing them in a preheated box furnace at 650\textcelsius\ for 10 minutes. Multilayer stacks of up to nine single deposited layers were produced, for which pyrolysis and annealing steps were performed every third layer. Multiple annealed layers are required to prevent the formation of leakage paths through the film. A 4x4 grid of circular top electrodes of 100 nm of platinum was sputter deposited onto the films using a hard mask.
\subsection{Pellet preparation}
For the preparation of bulk pellets of PNZT, 10 mL of the sol with a 20\% excess of lead precursor was stirred and heated at 230\textcelsius\ on a hotplate. Some of the resulting gel was used for thermogravimetric analysis (TGA) and differential thermal analysis (DTA). The remaining gel was heated in a box furnace to 420\textcelsius\ to remove the organic groups. The resulting powder was ground using a pestle and mortar and heated again at 450\textcelsius\ for 30 minutes. The amorphous PNZT powder was pressed into 10 mm pellets under a load of 6.4 ton/cm$\mathrm{^2}$. These pellets were sintered in a box furnace at 800\textcelsius\ or 1200\textcelsius\ for two hours.
\subsection{Characterization}
Grain structures of the films and pellets and film thicknesses were studied using an FEI Nova Nano\-SEM 650 scanning electron microscope. X-ray diffraction data was collected using a PanAnalytical X'Pert Pro MRD or a Bruker D8 Advance diffractometer (both in Bragg-Brentano geometry) for the films and pellets respectively. DTA-TGA data was collected in argon from 200\textcelsius\ to 1200\textcelsius\ at a heating rate of 10\textcelsius/minute using a TA instruments SDT 2960 differential scanning calorimeter. Finally, ferroelectric and piezoelectric properties of the films and pellets were measured using a state-of-the-art AixACCT TF analyzer 2000 ferroelectric-piezoelectric characterization system with an AixACCT double beam (films) or a Sios single beam (pellets) interferometer. The use of a double beam interferometer eliminates the contribution of the bending of the substrate to the measured deformation of the film.
\begin{acknowledgement}
We gratefully acknowledge the invaluable help of Jacob Baas and Henk Bonder in the lab. M.A. acknowledges financial support of a FOM-f Fellowship of the Dutch Research Council (NWO).
\end{acknowledgement}
\section{Introduction}
Physical systems are typically modeled by differential equations. For instance, the aerodynamics of an airplane can be represented by the Navier--Stokes equations~\cite{NavierSt76:online}, which are too complex to solve analytically.
Since analytical solutions are intractable for most practical problems of interest, numerical solutions are sought in a discretized domain. The process of discretization in space and time results in approximate solutions to the governing equations.
A numerical scheme is called \textit{convergent}, if in the limit of infinitesimal discretization, the bound on the discretization error is also infinitesimally small. Under these conditions, the numerical solution converges or approaches the analytic solution. This idea is formally articulated by the Lax equivalence theorem~\cite{lax1956survey}, which states that if a numerical method is \textit{consistent} and \textit{stable}, then it is \textit{convergent}.
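The notion of convergence can be made concrete with a deliberately simple numerical illustration: the forward-difference approximation of a derivative is first-order accurate, so halving the step size roughly halves the discretization error. This Python sketch is illustrative only; the subject of this paper is machine-checked proofs of such error bounds, not numerical experiments:

```python
import math

def forward_diff(f, x, h):
    """First-order forward-difference approximation of f'(x)."""
    return (f(x + h) - f(x)) / h

x0 = 1.0
exact = math.cos(x0)  # d/dx sin(x) at x0

# For this consistent scheme, halving h should roughly halve the error:
# first-order convergence toward the analytic derivative.
steps = [1e-2, 5e-3, 2.5e-3]
errors = [abs(forward_diff(math.sin, x0, h) - exact) for h in steps]
ratios = [errors[i] / errors[i + 1] for i in range(len(errors) - 1)]
```

The successive error ratios cluster around 2, the signature of a first-order convergent approximation.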
Proofs of consistency, stability, and convergence are typically performed by hand, making them prone to error.
Formal verification of such mathematical proofs provides a much higher level of confidence in their correctness. Further, formal verification offers a pathway to leverage the mathematical constructs involved, and to extend these proofs to more complex scenarios.
Recently, much effort has been dedicated to the definition of mathematical structures such as metric spaces, normed spaces, derivatives, limits etc. in a formal setting using proof assistants such as Coq \cite{o2008certified,boldo2015coquelicot,garillot2009packaging,martin2013certified}. Using automatic provers and proof assistants, a number of works have emerged in the formalization of numerical analysis \cite{boldo2013wave}. Pasca has formalized the properties of the Newton method~\cite{pasca2010formal}. Mayero et al. \cite{mayero2002using} presented a formal proof, developed in the Coq system, of the correctness of an automatic differentiation algorithm. Besides Coq, numerical analysis of ordinary differential equations has also been done in Isabelle/HOL~\cite{immler2012numerical}. Immler et al.~\cite{Immler,immler2016flow,immler2019flow} present a formalization of ordinary differential equations and the verification of rigorous (with guaranteed error bounds) numerical algorithms in the interactive theorem prover Isabelle/HOL. The formalization comprises the flow and the Poincar\'e map of dynamical systems. Immler~\cite{10.1007/978-3-319-06200-6_9} implements a functional algorithm that computes enclosures of solutions of ODEs in the interactive theorem prover Isabelle/HOL. In~\cite{brehard2019certificate}, Brehard et al. present a library to verify rigorous approximations of univariate functions over the real numbers, developed with the Coq proof assistant. Brehard~\cite{brehard2019calcul} worked on rigorous numerics, which aims at providing certified representations for solutions of various problems, notably in functional analysis. Work has also been done in formalizing real analysis for polynomials~\cite{cohen2010formalizing}. Boldo and co-workers \cite{boldo2013wave,boldo2014trusting,boldo2010formal} have made important contributions to the formal verification of finite difference schemes. 
They proved consistency, stability and convergence of a second-order centered scheme for the wave equation.
However, the Lax equivalence theorem -- sometimes referred to as the fundamental theorem of numerical analysis -- which is central to finite difference schemes, has not been formally proven in the general case.
In this paper, we present a formal proof of the Lax equivalence theorem for a general family of finite difference schemes. We use the definitions of consistency and stability and prove convergence. To prove the consistency of a second-order centered scheme for the wave equation, Boldo et al.~\cite{boldo2014trusting} made assumptions on the regularity of the exact solution. This regularity is expressed as the existence of Taylor approximations of the exact solution up to some appropriate order. Our formalization instead uses the Taylor--Lagrange theorem of \cite{martin2013certified} to prove the consistency of a finite difference scheme of any order. It should be noted that the order of accuracy of an explicit finite difference scheme depends on the number of points in the discretized domain (called \textit{stencils}) appearing in the numerical derivative. Our approach is to carry out the Taylor series expansion for each of those stencils using the Taylor--Lagrange theorem, and to appropriately instantiate the order of the truncated polynomial, to achieve the desired order of accuracy. By incorporating the discretization error into the Lagrange remainder and proving an upper bound for the Lagrange remainder, we propose a rigorous method of proving consistency of a finite difference scheme.
Since the Lax equivalence theorem is an essential tool in the analysis of numerical schemes using finite differences, its formalization in the general case opens the door to the formalization and certification of finite difference-based numerical software.
The present work will enable the formalization of convergence properties for a large class of finite difference numerical schemes, thereby providing formal proofs of convergence properties usually proved by hand, making explicit the underlying assumptions, and increasing the level of confidence in these proofs.
Overall this paper makes the following contributions:
\begin{itemize}
\item We provide a formalization in the Coq proof assistant of a general form of the Lax equivalence theorem.
\item We prove consistency and stability of a second order accurate finite difference scheme for the example differential equation $\frac {d^{2}u}{dx^{2}}=1$.
\item We formally apply the Lax equivalence theorem on this finite difference scheme for the example differential equation, thereby formally proving convergence for this scheme.
\item We also provide a generalized framework for a symmetric tri-diagonal (sparse) matrix in Coq. We define its eigensystem and provide an explicit formulation of its inverse in Coq. We show that since the symmetric tri-diagonal matrix is normal, one can perform the stability analysis by just uniformly bounding the eigenvalues of the inverse. This is important because discretizations of mathematical models of physical systems are usually sparse~\cite{KIRK2013217}.
\end{itemize}
This paper is structured as follows.
In Section~\ref{Lax_section}, we review the definitions of consistency, stability and convergence, state the Lax equivalence theorem~\cite{lax1956survey,sanz1985general}, and discuss its formalization in the Coq proof assistant.
In Section~\ref{finite}, we discuss the consistency of a finite difference scheme. In particular, we consider the central difference approximation of the second derivative and formally prove the order of accuracy using the Taylor--Lagrange theorem in the Coq proof assistant. We also relate the pointwise consistency of the finite difference scheme with the Lax equivalence theorem, by instantiating it with an example. In Section~\ref{stability_section}, we discuss the generalized formalization of a symmetric tri-diagonal matrix and later instantiate it with the scheme to prove stability of the scheme. In Section~\ref{Lax_apply}, we apply the Lax equivalence theorem to the concrete finite difference scheme that we are considering.
In Section~\ref{conclusion}, we conclude by summarizing key takeaways from the paper, and discussing future work.
\section{Lax equivalence theorem}
\label{Lax_section}
In this section, we review the definitions of consistency, stability and convergence, discuss the problem set up and state the Lax equivalence theorem~\cite{lax1956survey}.
In this paper and for the formalization, we choose to follow the presentation of Sanz-Serna and Palencia~\cite{sanz1985general}.
We also discuss the proof of the Lax equivalence theorem which is then formalized in the Coq proof assistant.
\subsection{Consistency, Stability and Convergence}
\begin{definition}[The Continuous Problem~\cite{sanz1985general}]
Let $X$ (the space of solutions) and $Y$ (the space of data) be normed spaces, both real or both complex. We consider a linear operator $A$ with domain $D \subset X$ and range $R\subset Y$. The problem to be solved is of the form
\begin{equation} \label{true}\small
Au=f, \qquad f\in Y
\end{equation}
\end{definition}
Here $A$ is not assumed to be bounded, so that unbounded differential operators are included. The problem~(\ref{true}) is assumed to be well-posed, i.e., there exists a \textit{bounded, linear operator}, $E\in B(Y,X)$, such that $EA=I$ in $D$, and that for $f\in Y$, equation (\ref{true}) has a unique solution, $u=Ef$. Furthermore, the solution $u$ depends continuously on the data.
\begin{definition}[The Approximate Problem~\cite{sanz1985general}]
Let $H$ be a set of positive numbers such that $0$ is the unique limit point of $H$. For each $h \in H$ , let $X_{h}, Y_{h}$ be normed spaces and consider the approximate or discretized problem
\begin{equation}\label{approximate}\small
A_{h}u_{h}=f_{h},\qquad f_{h} \in Y_{h}
\end{equation}
where $A_{h}$ is a linear operator $A_{h}: X_{h}\longrightarrow Y_{h}$.
\end{definition}
We assume that for each $h \in H$, problem (\ref{approximate}) is well-posed and there exists a solution operator, $E_{h}=A_{h}^{-1}$, i.e. $u_{h}=E_{h}f_{h}$. The true solution $u$ and the approximate solution $u_{h}$ can be related with each other by defining a \textit{bounded, linear operator}, $r_{h}: X \to X_{h}$ for each $h \in H$. Similarly, data $f \in Y$ can be related to data in a discrete space, $f_{h} \in Y_{h}$ by defining a restriction operator $s_{h}$. For each $h \in H$, $s_{h}: Y \to Y_{h}$ is also a \textit{bounded, linear operator}. We assume that the operator norms can be \textcolor{black}{uniformly} bounded:
\begin{equation}\small
||r_{h}||\leq C_{1}, \qquad ||s_{h}||\leq C_{2},
\end{equation}
where the constants $C_{1},C_{2}$ are independent of $h$. The true solution $u=Ef$ is compared with the discrete solution $u_{h}=E_{h}s_{h}f$ corresponding to the discretized datum $f$.
The family $(X_{h}, Y_{h}, A_{h},r_{h},s_{h})$ defines a \textit{method} for the solution of (\ref{true})~\cite{sanz1985general}.
\begin{definition}[Convergence~\cite{sanz1985general}]
Let $f$ be a given element in $Y$. The method $(X_{h},Y_{h},A_{h},r_{h},s_{h})$ is convergent for the problem (\ref{true}) if
\begin{equation}\small \label{convergence}
\lim_{h \to 0} ||r_{h}Ef-E_{h}s_{h}f||_{X_{h}}=0
\end{equation}
We say that the method is convergent if it is convergent for each problem (\ref{true}) for any $f$ in $Y$.
\end{definition}
Intuitively, this means that in the limit of the discretization step, $h$, tending to zero, the numerical solution $E_{h}s_{h}f$ approaches the analytical solution $r_{h}Ef$. The analytical solution $r_{h}Ef$ is the restriction of the true (analytical) solution, $u=Ef$, onto the grid of size $N=1/h$, and $E_{h}s_{h}f$ is the discrete solution, $u_{h}=E_{h}f_{h}$ computed on the grid of size $N$.
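To make this definition concrete, the following Python sketch (our illustration, not part of the Coq development) assembles the interior-point central-difference system for a two-point boundary value problem $u''=f$ on $(0,1)$ with homogeneous Dirichlet boundary conditions, and checks that the global discretization error decays as $h$ shrinks; a smooth non-polynomial datum is used so that the error is visible, and all helper names are ours.

```python
# Global discretization error ||r_h E f - E_h s_h f|| for u'' = f on (0,1),
# u(0) = u(1) = 0, using the interior-point matrix A_h = tridiag(1,-2,1)/h^2.
# Datum chosen so that the exact solution is u(x) = sin(pi x).
import math

def solve_tridiag(sub, diag, sup, rhs):
    """Thomas algorithm for a tridiagonal linear system."""
    n = len(rhs)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = sup[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / m
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / m
    sol = [0.0] * n
    sol[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        sol[i] = dp[i] - cp[i] * sol[i + 1]
    return sol

def global_error(N):
    """Max-norm difference between the discrete and restricted exact solution."""
    h = 1.0 / N
    xs = [i * h for i in range(1, N)]                  # interior grid points
    rhs = [-math.pi ** 2 * math.sin(math.pi * x) for x in xs]
    n = N - 1
    uh = solve_tridiag([1 / h ** 2] * n, [-2 / h ** 2] * n,
                       [1 / h ** 2] * n, rhs)
    return max(abs(uh[i] - math.sin(math.pi * xs[i])) for i in range(n))

errors = [global_error(N) for N in (8, 16, 32)]
print(errors)  # decreases roughly by a factor of 4 per halving of h
```

Halving $h$ divides the error by roughly $2^{2}=4$, consistent with the second-order scheme analyzed later in the paper.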
\begin{definition}[Consistency~\cite{sanz1985general}]
Let $u$ be a given element in $D$. The method is consistent at $u$ if
\begin{equation}\small \label{consistency}
\lim_{h \to 0} ||A_{h}r_{h}u - s_{h}Au||_{Y_{h}} = 0
\end{equation}
A method is consistent if it is consistent at each $u$ in a set $D_{o}$ such that the image $A(D_{o})$ is dense in $Y$.
\end{definition}
Intuitively, this means that in the limit of the discretization step, $h$, tending to zero, the finite difference scheme $A_{h}u_{h}=f_{h}$ approaches the differential equation $Au=f$, i.e., we are discretizing the right differential equation.
\begin{definition}[Stability~\cite{sanz1985general}]
The method is stable if there exists a constant $K$ such that
\begin{equation}\small \label{stability}
||E_{h}||_{B(Y_{h},X_{h})} \leq K
\end{equation}
\end{definition}
Intuitively, stability of the numerical scheme means that a small numerical perturbation does not allow the solution to blow up. Uniform boundedness of the inverse $E_{h}=A_{h}^{-1}$ is a check on the conditioning of matrices (sensitivity to small perturbations), i.e., it ensures that the matrix $A_{h}$ is not ill-conditioned. Thus, if the numerical problem~(\ref{approximate}) were unstable, even though we were trying to solve the right differential equation, we would never converge to the true solution. Hence, both stability and consistency are sufficient for proving convergence of the numerical scheme.
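For the model matrix used later in this paper, the uniform bound can be observed directly. The sketch below (ours) uses the classical closed form of the inverse of the interior-point matrix $A_{h}=\frac{1}{h^{2}}\,\mathrm{tridiag}(1,-2,1)$ of size $n=N-1$, namely $(A_{h}^{-1})_{ij}=-h^{2}\,\frac{\min(i,j)\,(n+1-\max(i,j))}{n+1}$, to check that $||A_{h}^{-1}||_{\infty}$ does not grow as $h\to 0$:

```python
# Stability check: the max-norm of A_h^{-1} stays bounded as h -> 0,
# using the explicit inverse of the scaled second-difference matrix
# tridiag(-1,2,-1): M^{-1}_ij = min(i,j) * (n+1-max(i,j)) / (n+1).

def inv_inf_norm(N):
    """Infinity norm of A_h^{-1} for A_h = tridiag(1,-2,1)/h^2, n = N-1."""
    h = 1.0 / N
    n = N - 1
    return max(
        sum(h * h * min(i, j) * (n + 1 - max(i, j)) / (n + 1)
            for j in range(1, n + 1))
        for i in range(1, n + 1))

norms = [inv_inf_norm(N) for N in (8, 16, 32, 64)]
print(norms)  # all equal to 1/8: a uniform stability constant K
```

The row sums equal $x_{i}(1-x_{i})/2$, the discrete solution of $-u''=1$, so the bound $K=1/8$ is uniform in $h$; this mirrors the eigenvalue-based stability argument developed later for symmetric tri-diagonal matrices.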
The quantities within the norms (\ref{convergence}) and (\ref{consistency}) are, respectively, the \textit{global} and \textit{local} discretization errors.
\begin{theorem}[Lax equivalence theorem~\cite{sanz1985general}]\label{Lax}
Let
$(X,Y,A,X_{h},Y_{h},A_{h},r_{h},s_{h})$ be as above. If the method is consistent and stable, then it is convergent.
\end{theorem}
\begin{proof}\small
We start with the definition of \textit{convergence} in (\ref{convergence}),
\begin{align*}\small
& \lim_{h \to 0}||r_{h}Ef-E_{h}s_{h}f||_{X_{h}}\\
&= \lim_{h \to 0}||r_{h}u-E_{h}s_{h}f||_{X_{h}}\quad (u\overset{\Delta}{=} Ef)\\
& = \lim_{h \to 0}||r_{h}u-E_{h}s_{h}Au||_{X_{h}}\quad (f \overset{\Delta}{=} Au)\\
& =
\lim_{h \to 0}||Ir_{h}u-E_{h}s_{h}Au||_{X_{h}}\quad(r_{h}u=Ir_{h}u)\\
& =
\lim_{h \to 0}||E_{h}A_{h}r_{h}u-E_{h}s_{h}Au||_{X_{h}}\quad (E_{h}A_{h}\overset{\Delta}{=}I)\\
& \leq \lim_{h \to 0}||E_{h}||_{B(Y_{h},X_{h})}||(A_{h}r_{h}u-s_{h}Au)||_{Y_{h}} \\
& \leq K \lim_{h \to 0}||(A_{h}r_{h}u-s_{h}Au)||_{Y_{h}} \quad (\text{From stability: } (\ref{stability}))\\
& = 0 \quad (\text{From Consistency: } (\ref{consistency}))
\end{align*}
\end{proof}
\subsection{Formalization in the Coq Proof Assistant}
In this section, we show how we formalized the proof of the Lax equivalence theorem \cite{sanz1985general} in the Coq proof assistant.
All of the Coq formal proofs mentioned in this paper, containing the proofs of consistency, stability and convergence of finite difference schemes, and of the Lax equivalence theorem, are available at \url{http://www-personal.umich.edu/~jeannin/papers/NFM21.zip}.
The \texttt{Coquelicot} library~\cite{boldo2015coquelicot,Coquelic9:online} defines mathematical structures required for implementing the proof. \textcolor{black}{Since we use Coquelicot and standard reals library which are based on classical axiomatization of reals, our proofs are also non-constructive~\cite{boldo2015coquelicot}.} We define the \textit{Banach spaces} (complete normed spaces, complete in the metric defined by the norm \cite{kreyszig1978introductory}) $(X,Y,X_{h},Y_{h})$ using a canonical structure, \texttt{CompleteNormedModule}, in Coq \cite{garillot2009packaging}.
The definitions of the true problem (\ref{true}) and the approximate problem (\ref{approximate}) require that the mappings $A: X \to Y $ and $A_{h}: X_{h} \to Y_{h}$ be linear, and the solution operators $E: Y \to X$ and $E_{h}: Y_{h} \to X_{h}$ be linear and bounded. The linear mappings $A_{h}$ and $E_{h}$ are defined as functions of $h \in \mathbb{R}$.
Boldo et al.~\cite{boldo2017coq} have defined linear mapping in the context of a \texttt{ModuleSpace} and bounded linear mapping in the context of a \texttt{NormedModule} in their formalization of the \textit{Lax Milgram Theorem} in Coq~\cite{httpswww42:online,FlorianF30:online}. We extended these definitions in the context of \texttt{CompleteNormedModule}.
The definitions of \textit{consistency} (\ref{consistency}) and \textit{convergence} (\ref{convergence}) hold in the limit of $h$ tending to zero. Thus, an important step in the proof is to express these limits in Coq. Formally, the notion of $f$ tending to $l$ at the limit point $x$ requires, for any $\epsilon > 0$, to find a neighborhood $V$ of $x$ such that any point $u$ of $V$ satisfies $|f(u)-l|<\epsilon$ \cite{boldo2015coquelicot}. This notion has been formalized in \texttt{Coquelicot} \cite{Coquelic9:online} using the concept of \textit{filters}. In topology, a filter is a set of sets, which is nonempty, upward closed, and closed under intersection \cite{cohen2017formal}. It is commonly used to express the notion of convergence in topology. We have used the filter \texttt{locally x}~\cite{lelay2015express} to denote an open neighborhood of $x$, and the predicate \texttt{filterlim}~\cite{lelay2015express} to formalize the notion of convergence (in the context of limits) of $f$ towards $l$ at the limit point $a$, i.e. $\lim_{x \to a} f(x) = l$. Therefore, the definition of consistency (\ref{consistency}) is expressed as:
\begin{small}
\begin{verbatim}
is_lim (fun h:R => norm (minus (Ah h (rh h u)) (sh h (A u)))) 0 0
\end{verbatim}
\end{small}
where limits of functions are expressed using \texttt{is\_lim}~\cite{boldo2015coquelicot}.
We next discuss the formalization of the statement of convergence of a finite difference scheme in Coq. We note that from Theorem~\ref{Lax}, \textit{consistency} and \textit{stability} imply \textit{convergence}. This notion is expressed in Coq as follows:
\begin{small}
\begin{verbatim}
(is_lim (fun h:R => norm (minus (Ah h (rh h u)) (sh h (A u)))) 0 0
(*Consistency*) /\
(exists K:R , forall (h:R), operator_norm(Eh h)<=K ) (* Stability*) ->
is_lim(fun h:R=>norm (minus (rh h (E(f))) (Eh h (sh h (f))))) 0 0)
(*Convergence*).
\end{verbatim}
\end{small}
where the \textit{operator norm} is defined as $||f||_{\phi}=\sup_{u \neq 0_{E}\land \phi(u)}\frac {||f(u)||_{F}}{||u||_{E}}$ and has been formally defined in \cite{boldo2017coq}.
The basic idea is that we bound the \textit{global discretization error} ($||r_{h}Ef - E_{h}s_{h}f||$) from above using the stability criterion, i.e. $||r_{h}Ef-E_{h}s_{h}f|| \leq K ||A_{h}r_{h}u - s_{h}Au||$, and then prove that as the \textit{local discretization error} ($||A_{h}r_{h}u - s_{h}Au||$) tends to zero in the limit of $h$ tending to zero, the upper bound on the global discretization error tends to zero (using the properties of limits). Using the non-negativity of the norm, i.e. $0 \leq ||r_{h}Ef- E_{h}s_{h}f||$, we arrive at the inequality
\begin{equation*}\small
0 \leq ||r_{h}Ef- E_{h}s_{h}f|| \leq K ||A_{h}r_{h}u - s_{h}Au||
\end{equation*}
In Coq, we define the lower bound of the inequality as a constant function with value $0$ as: \texttt{fun\;\_ => 0}.
Since
the limit of a constant function is the constant itself, i.e. $\lim_{h \to 0} 0 =0$,
and
$\lim_{h\to 0}||A_{h}r_{h}u-s_{h}Au|| = 0$ (Consistency), using the \textit{sandwich theorem} for limits,
$\lim_{h\to 0}||r_{h}Ef- E_{h}s_{h}f||=0$. The \textit{sandwich theorem} states that if we have functions obeying the inequality $f(x)\leq g(x) \leq h(x)$ on some open neighborhood of $x=a$, and $\lim_{x \to a}f(x)=L \land \lim_{x \to a}h(x)=L$, then $\lim_{x \to a}g(x)=L$. This proves convergence in the sense of (\ref{convergence}) and completes the proof of the Lax equivalence theorem.
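The inequality chain at the heart of this proof can also be observed numerically. The following Python sketch (ours; it uses the interior-point central-difference matrix for $u''=f$ on $(0,1)$ with a sine datum, so that both errors are nonzero) checks that the global error is bounded by $K$ times the local error, where $K=1/8$ is an upper bound on $||E_{h}||_{\infty}$ for this matrix:

```python
# Check of ||r_h E f - E_h s_h f|| <= K ||A_h r_h u - s_h A u|| for the
# problem u'' = f on (0,1), u(0) = u(1) = 0, with exact u(x) = sin(pi x).
import math

def solve_tridiag(sub, diag, sup, rhs):
    """Thomas algorithm for a tridiagonal linear system."""
    n = len(rhs)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = sup[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / m
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / m
    sol = [0.0] * n
    sol[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        sol[i] = dp[i] - cp[i] * sol[i + 1]
    return sol

N = 16
h = 1.0 / N
u = [math.sin(math.pi * i * h) for i in range(N + 1)]      # r_h E f
f = [-math.pi ** 2 * math.sin(math.pi * i * h) for i in range(N + 1)]
# local discretization error ||A_h r_h u - s_h A u|| at interior points
local = max(abs((u[i - 1] - 2 * u[i] + u[i + 1]) / h ** 2 - f[i])
            for i in range(1, N))
# discrete solution E_h s_h f and global error ||r_h E f - E_h s_h f||
n = N - 1
uh = solve_tridiag([1 / h ** 2] * n, [-2 / h ** 2] * n,
                   [1 / h ** 2] * n, f[1:N])
glob = max(abs(uh[i - 1] - u[i]) for i in range(1, N))
K = 0.125      # uniform bound on ||A_h^{-1}||_inf for this matrix
print(glob, K * local)  # global error is below K times the local error
```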
\section{Proof of consistency of a sample finite difference scheme}
\label{finite}
A finite difference (FD) scheme approximates a differential equation with a difference equation. The derivatives are expressed in terms of function values at a finite number of points in the discretized domain. For instance, consider a simple differential equation, $\frac {d^{2} u}{d x^{2}}=1$ on a domain $x \in (0,L)$ with boundary conditions $u(0)=0$ and $u(L)=0$, where $L$ is the length of the domain. A second-order accurate finite difference approximation would be $\frac {u(x+\Delta x)-2u(x)+u(x-\Delta x)}{\Delta x ^2}=1$, where $\Delta x$ is the discretization step and $x$ is the point at which the difference equation is evaluated. We will refer to this as the numerical scheme $\mathcal{N}_h$. Since we are computing a numerical approximation to the actual derivatives, we are interested in knowing the order of the discretization error.
\begin{definition}[Discretization error]\small
Let $D(u)$ denote the true derivative of a function $u:\mathbb{R} \to \mathbb{R}$ and $N(u)$ denote the finite difference approximation of the true derivative. The discretization error (commonly referred to as the truncation error) ($\tau$) is then defined as:
\begin{equation}
\tau \overset{\Delta}{=} D(u)-N(u)
\end{equation}
\end{definition}
If the function $u$ is \textit{analytic}, it can be expressed as a \textit{Taylor series expansion} at the point of evaluation. The truncation error is then evaluated by expressing the numerical derivatives in terms of a truncated Taylor polynomial and then taking a difference of the true derivative and the numerical derivative. This gives us an upper bound on the discretization error. If a numerical method is consistent, the truncation error can be expressed as:
\begin{equation*}\small
\tau = \mathcal{O}(\Delta x ^{n})
\end{equation*}
when $\Delta x$ tends to zero, and where $n$ is the order of the truncated Taylor polynomial. We use this idea to formalize the proof of consistency of a finite difference scheme. This requires the use of an important theorem from calculus, the Taylor--Lagrange theorem.
\begin{theorem}[Taylor--Lagrange theorem]\label{Taylor_Lagrange}\small
Suppose that $f$ is $n+1$ times differentiable on some interval containing the center of convergence $c$ and $x$, and let $P_{n}(x)= f(c)+\frac {f^{(1)}(c)}{1!}(x-c)+\frac{f^{(2)}(c)}{2!}(x-c)^{2}+\ldots+\frac{f^{(n)}(c)}{n!}(x-c)^{n}$ be the $n^{th}$ order Taylor polynomial of $f$ at $x=c$. Then $f(x)=P_{n}(x)+E_{n}(x)$, where $E_{n}(x)=f(x)-P_{n}(x)$ is the error term of $P_{n}(x)$ from $f(x)$, and for some $\xi$ between $c$ and $x$, the Lagrange remainder form of the error $E_{n}$ is given by the formula $E_{n}(x)=\frac{f^{(n+1)}(\xi)}{(n+1)!} (x-c)^{(n+1)}$.
\end{theorem}
Martin-Dorel et al. \cite{martin2013certified} proved the Taylor--Lagrange theorem formally in Coq, and it is available in the \texttt{Coq.Interval} library \cite{Interval66:online,brisebarre2012rigorous}. We used this formalization of the Taylor--Lagrange theorem to prove the consistency of a finite difference scheme.
We will specifically prove that for the central difference approximation of the second derivative, $\frac {d^{2}u}{dx^{2}}$, expressed as $\frac {u(x+\Delta x)-2 u(x)+u(x-\Delta x)}{(\Delta x)^{2}}$, the truncation error $\tau$ is quadratic in $\Delta x$:
\begin{equation*}\small
\tau = \left | \frac{d^{2}u}{dx^{2}}- \frac {u(x+\Delta x)-2u(x)+u(x-\Delta x)}{(\Delta x)^2} \right | = \mathcal{O}(\Delta x ^2)
\end{equation*}
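Before giving the formal proof, the claimed order can be checked empirically. The short Python sketch below (ours, not part of the Coq development) measures the truncation error of the central difference for $u=\sin$ at a sample point and verifies that halving $\Delta x$ divides $\tau$ by roughly $2^{2}=4$:

```python
# Empirical order of the central-difference truncation error
# tau(dx) = |u''(x) - (u(x+dx) - 2u(x) + u(x-dx))/dx^2| for u = sin.
import math

def tau(dx, x=0.5):
    second_diff = (math.sin(x + dx) - 2 * math.sin(x)
                   + math.sin(x - dx)) / dx ** 2
    return abs(-math.sin(x) - second_diff)   # u''(x) = -sin(x)

ratios = [tau(dx) / tau(dx / 2) for dx in (0.1, 0.05, 0.025)]
print(ratios)  # each ratio is close to 4, i.e. tau = O(dx^2)
```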
\subsection{Proof of consistency for the finite difference scheme}\label{point_const}
We want to prove that for a central difference approximation of the second derivative in the numerical scheme $\mathcal{N}_h$, the truncation error, $\tau= \mathcal{O}(\Delta x^2)$.
By invoking the definition of Big-O notation, the theorem statement can be stated as:
\begin{equation}\label{cons_2}\small
\exists \gamma >0, \Gamma>0, \left | \frac{d^{2}u}{dx^{2}} - \frac { u(x+\Delta x)-2u(x)+ u(x-\Delta x)}{(\Delta x)^2}\right |\leq \Gamma (\Delta x ^2), \; 0<|\Delta x|<\gamma.
\end{equation}
The equation (\ref{cons_2}) is stated formally in Coq as:
\begin{small}
\begin{verbatim}
Theorem taylor_FD (x:R): Oab x ->exists gamma:R, gamma >0 /\ exists G:R,
G>0/\ forall dx:R, dx>0 -> Oab (x+dx) -> Oab (x-dx)->(dx< gamma ->
Rabs((D 0 (x+dx)- 2*(D 0 x) + D 0 (x-dx))*/(dx * dx)- D 2 x)<= G*(dx^2)).
\end{verbatim}
\end{small}where \texttt{Oab x} means $a < x < b$ and \texttt{D k x} denotes the $k^{th}$ derivative of $u$ with respect to $x$. \\
We start by introducing the following lemmas required to complete the proof.
\begin{small}
\begin{lemma}
[$|F(x)| = \mathcal{O}((\Delta x)^{4})$]
\label{lem:lem1}
$\forall x \in (a,b),\exists\; \eta \in \mathbb{R}, \eta>0 \land \exists \;M \in \mathbb{R}, M>0 \land\\ \forall \Delta x \in \mathbb{R}, \Delta x >0
\to (x+\Delta x) \in (a,b) \to \Delta x < \eta \to |F(x)|\leq M(\Delta x)^4.$
\end{lemma}
Here, $F(x)$ is the Lagrange remainder in the expansion of $u(x+\Delta x)$ up to degree 3 and is defined as:
\begin{equation}\label{def_1}
F(x) \overset{\Delta}{=} u(x+\Delta x)-u(x)-\Delta x \frac{du}{dx}\Big|_x -\frac{1}{2!}(\Delta x)^{2}\frac{d^{2}u}{dx^2}\Big|_x-
\frac{1}{3!}(\Delta x)^{3}\frac{d^{3}u}{dx^{3}}\Big|_x
\end{equation}
Thus, Lemma~\ref{lem:lem1} states that the Lagrange remainder $F(x)= \frac{1}{4!}(\Delta x)^4 \frac{d^4 u(\xi)}{dx^4}$, for some $\xi \in (x, x+\Delta x)$, is of order $(\Delta x)^4$.
\begin{lemma}
[$|G(x)| = \mathcal{O}((\Delta x)^{4})$]
\label{lem:lem2}
$\forall x \in (a,b), \exists\; \delta \in \mathbb{R}, \delta>0 \land \exists \;K \in \mathbb{R}, K>0 \land \\
\forall \Delta x \in \mathbb{R}, \Delta x > 0 \to (x-\Delta x) \in (a,b)\to \Delta x < \delta \to |G(x)|\leq K(\Delta x)^4.$
\end{lemma}
\end{small}
Here, $G(x)$ is the Lagrange remainder in the expansion of $u(x-\Delta x)$ up to degree 3 and is defined as:
\begin{equation}\label{def_2}\small
G(x) \overset{\Delta}{=}u(x-\Delta x)-u(x)+\Delta x \frac{du}{dx}\Big|_x-\frac{1}{2!}(\Delta x)^{2}\frac{d^{2}u}{dx^{2}}\Big|_x+\frac{1}{3!}(\Delta x)^{3}\frac{d^{3}u}{dx^{3}}\Big|_x
\end{equation}
Thus, Lemma~\ref{lem:lem2} states that the Lagrange remainder $G(x)= \frac{1}{4!}(\Delta x)^4 \frac{d^4 u(\xi)}{dx^4}$, for some $\xi \in (x-\Delta x, x)$, is of order $(\Delta x)^4$.
Both lemmas are a straightforward application of the Taylor--Lagrange theorem (Theorem~\ref{Taylor_Lagrange}), and are crucial to the formalization of the proof of consistency of the finite difference scheme.
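As an informal sanity check of these bounds (ours, for the particular choice $u=\sin$, where $\max|u^{(4)}|\leq 1$ so that one may take $M=1/4!$), the degree-3 Lagrange remainder indeed stays below $M(\Delta x)^{4}$:

```python
# Check of the Lemma 1 bound for u = sin: the degree-3 Taylor remainder
# F(x) = u(x+dx) - P_3(x+dx) satisfies |F(x)| <= (max|u''''|/4!) dx^4,
# and max|u''''| <= 1 for the sine function.
import math

def remainder(x, dx):
    p3 = (math.sin(x) + math.cos(x) * dx
          - math.sin(x) * dx ** 2 / 2
          - math.cos(x) * dx ** 3 / 6)         # degree-3 Taylor polynomial
    return abs(math.sin(x + dx) - p3)

bound_holds = [remainder(x, dx) <= dx ** 4 / 24 + 1e-15
               for x in (0.1, 0.7, 1.3) for dx in (0.2, 0.1, 0.05)]
print(bound_holds)
```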
Next, we present an informal proof of the theorem followed by a discussion on the formal proof of the consistency theorem.
\begin{proof}\small
\begin{equation}\label{lemma_1}\small
|F(x)|\leq M (\Delta x )^4 \quad \text{[From Lemma~\ref{lem:lem1}]}
\end{equation}
\begin{equation}\label{lemma_2}\small
|G(x)| \leq K (\Delta x)^4 \quad \text{[From Lemma~\ref{lem:lem2}]}
\end{equation}
Adding equation (\ref{lemma_1}) and (\ref{lemma_2}), we get:
\begin{align}\small
&|F(x)|+|G(x)| \leq (M+K) (\Delta x)^{4} \nonumber \\
\implies&|F(x)+ G(x)| \leq (M+K) (\Delta x)^{4} \nonumber \\
&\text{[Using the triangle inequality, $(|F(x)+G(x)| \leq |F(x)|+|G(x)|)$ ]}\nonumber \\
\implies& |F(x)+G(x)| \leq \Gamma (\Delta x)^ {4} \quad(\text{instantiating } \Gamma := M+K) \label{cons_res}
\end{align}
Unfolding the definitions $F(x)$ and $G(x)$, and doing the algebra we get:
\begin{align}
&\Big |u(x+\Delta x)-2u(x)+u(x-\Delta x)-(\Delta x)^{2}\frac{d^{2}u}{dx^{2}}\Big| \leq \Gamma (\Delta x ^4) \nonumber\\
\implies &\Big |\frac{u(x+\Delta x)-2u(x)+u(x-\Delta x)}{(\Delta x)^{2}}-\frac{d^{2}u}{dx^{2}}\Big| \leq \Gamma (\Delta x ^2)\label{final_FD} \quad \textbf{[QED]}
\end{align}
\end{proof}
An important point to note is that the condition $|F(x)|+|G(x)|\leq M(\Delta x)^4+K(\Delta x)^4$ holds when $0<|\Delta x|<\gamma$, where $\gamma$ is as defined in (\ref{cons_2}). We therefore choose $\gamma = \min(\eta, \delta)$, where $\eta$ is such that $|F(x)|\leq M(\Delta x)^4$ holds when $0<|\Delta x|<\eta$, and $\delta$ is such that $|G(x)|\leq K(\Delta x)^4$ holds when $0<|\Delta x|<\delta$.
\subsection{Formalization in the Coq Proof assistant}
We followed the proof above and formalized it in the Coq proof assistant.
To apply the Taylor--Lagrange theorem \cite{martin2013certified} to the consistency analysis of a central difference approximation, we broke down the theorem statement into two lemmas as discussed in the previous section. Therefore, in this section, we will discuss the proof of Lemma~\ref{lem:lem1} and~\ref{lem:lem2}.
\subsubsection{Proof of Lemma~\ref{lem:lem1}:}
Formally Lemma~\ref{lem:lem1}
is stated in Coq as:
\begin{small}
\begin{verbatim}
Lemma taylor_uupper (x:R): Oab x-> exists eta: R, eta>0 /\
exists M :R, M>0 /\ forall dx:R, dx>0 -> Oab (x+dx) ->
(dx<eta -> Rabs(D 0 (x+dx)- Tsum 3 x (x+dx))<=M*(dx^4)).
\end{verbatim}
\end{small}
In the proof of the Lemma, the existential quantification associated with $\eta$ and $M$ has to be addressed. We chose $\eta$ as $b-x$, since the interval in which we are studying the Taylor--Lagrange theorem for $u(x+\Delta x)$ is $[x,b]$. Since $x+\Delta x \in (x,b)$ and $\Delta x < \eta$, it is natural to choose $\eta=b-x$. For the choice of $M$, we obtained extreme bounds in the interval. Since the function $u$ and its derivatives are continuous in the compact set $[x,b]$, we are guaranteed to get maximum and minimum values. In Coq, we applied the lemma \texttt{continuity\_ab\_max} to obtain a maximum value, $\left(\frac{d^{4}u}{dx^{4}}\right)_{max}=\frac{d^{4}u(F)}{dx^{4}}$ such that $\frac{d^{4}u(\xi)}{dx^{4}}\leq \frac{d^{4}u(F)}{dx^{4}}, \forall \xi \in [x,b]$. Similarly, we apply the lemma \texttt{continuity\_ab\_min} to obtain a minimum value, $\left(\frac{d^{4}u}{dx^{4}}\right)_{min}=\frac{d^{4}u(G)}{dx^{4}}$ such that $\frac{d^{4}u(G)}{dx^{4}}\leq \frac{d^{4}u(\xi)}{dx^{4}}, \forall \xi \in [x,b]$. \\
Thus, $M$ is chosen as $M=\max\left(\left|\frac{d^{4}u(G)}{dx^{4}}\right|,\left|\frac{d^{4}u(F)}{dx^{4}}\right|\right)$. With this choice of $M$, we can bound the Lagrange remainder, or the truncation error, from above and thus prove Lemma~\ref{lem:lem1}.
\subsubsection{Proof of Lemma~\ref{lem:lem2}:}
Formally Lemma~\ref{lem:lem2}
is stated in Coq as:
\begin{small}
\begin{verbatim}
Lemma taylor_ulower (x:R): Oab x -> exists delta: R, delta>0 /\
exists K :R, K>0 /\ forall dx:R, dx>0 ->Oab (x-dx) ->
(dx<delta -> Rabs(D 0 (x-dx)-Tsum 3 x (x-dx))<=K*(dx^4)).
\end{verbatim}
\end{small}
The proof of Lemma~\ref{lem:lem2} follows the same approach as that of Lemma~\ref{lem:lem1}. Here, we chose $\delta$ as $x-a$, since the interval in which we are studying the Taylor--Lagrange theorem for $u(x-\Delta x)$ is $[a,x]$: $x-\Delta x \in (a,x)$ and $\Delta x < \delta$. We chose $K$ in the same way as we chose $M$ in Lemma~\ref{lem:lem1}, except that the interval in which we obtain maximum and minimum values for $\frac{d^{4}u}{dx^{4}}$ is $[a,x]$ in this case. Thus, $\left(\frac{d^{4}u}{dx^{4}}\right)_{min}=\frac{d^{4}u(G)}{dx^{4}}$, $\left(\frac{d^{4}u}{dx^{4}}\right)_{max}=\frac{d^{4}u(F)}{dx^{4}}$, and $K=\max\left(\left|\frac{d^{4}u(G)}{dx^{4}}\right|,\left|\frac{d^{4}u(F)}{dx^{4}}\right|\right), \forall c \in [a,x]$.
To prove the main theorem statement on consistency, we break the statement into Lemma~\ref{lem:lem1} and~\ref{lem:lem2}, by instantiating $\Gamma = M+K$, and $\gamma = \min(\eta, \delta)$, where $(M,\eta)$ and $(K,\delta)$ have been defined as in Lemma~\ref{lem:lem1} and~\ref{lem:lem2} respectively, in the manner shown in section~(\ref{point_const}). To implement this instantiation, we have to carefully \textit{destruct} the lemmas introduced in the theorem statement. Then, we simply apply lemma~\ref{lem:lem1} and~\ref{lem:lem2}, to complete the main proof.
\subsection{Relating pointwise consistency to the Lax equivalence theorem}
In this section, we relate the proof of consistency from Section~\ref{point_const} with the Lax equivalence Theorem~\ref{Lax}. The numerical discretization of the differential equation can be expressed in the discrete domain as:
\begin{equation}\label{FD_scheme}
\small
\underbrace{
\frac{1}{h^{2}}
\begin{bmatrix}
1 & 0 & 0 & 0 & \hdots & 0\\
1 &-2 & 1& 0 & \hdots & 0\\
\vdots& \ddots &\ddots&\ddots& &\vdots\\
0 & \hdots & 1 & -2 & 1 & 0\\
0& \hdots & 0 & 1 & -2 & 1\\
0 &\hdots & 0 & 0 & 0 &1
\end{bmatrix}
}_\text{$A_{h}$}
\underbrace{
\begin{bmatrix}
u_{o}\\
u_{1}\\
\vdots\\
u_{N-2}\\
u_{N-1}\\
u_{N}
\end{bmatrix}
}_\text{$r_{h}u$}=
\underbrace{
\begin{bmatrix}
0\\
1\\
\vdots\\
1\\
1\\
0
\end{bmatrix}
}_\text{$s_{h}Au$}
\end{equation}
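As a quick sanity check of this system (our illustration, outside the Coq development), note that for $\frac{d^{2}u}{dx^{2}}=1$ with $u(0)=u(L)=0$ and $L=1$ the exact solution is the quadratic $u(x)=x(x-1)/2$, and the central difference reproduces second derivatives of quadratics exactly; hence every component of $A_{h}r_{h}u - s_{h}Au$ vanishes up to roundoff for this particular problem, while the general $\mathcal{O}(h^{2})$ bound is what the consistency proof establishes.

```python
# For u'' = 1 on (0,1), u(0) = u(1) = 0, the exact solution is the
# quadratic u(x) = x(x-1)/2; the interior rows of A_h r_h u - s_h A u
# are then zero up to roundoff (boundary rows vanish by the BCs).
N = 16
h = 1.0 / N
u = [x * (x - 1) / 2 for x in (i * h for i in range(N + 1))]
residual = [(u[i - 1] - 2 * u[i] + u[i + 1]) / h ** 2 - 1.0
            for i in range(1, N)]
print(max(abs(r) for r in residual))  # on the order of machine epsilon
```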
Comparing with the statement of consistency (\ref{consistency}), we have
\begin{small}
\begin{equation}\label{FD1}
\small
\lim_{h \to 0}\left|\left |
\frac{1}{h^{2}}
\begin{bmatrix}
1 & 0 & 0 & 0 & \hdots & 0\\
1 &-2 & 1& 0 & \hdots & 0\\
\vdots& \ddots &\ddots&\ddots& &\vdots\\
0 & \hdots & 1 & -2 & 1 & 0\\
0& \hdots & 0 & 1 & -2 & 1\\
0 &\hdots & 0 & 0 & 0 &1
\end{bmatrix}
\begin{bmatrix}
u_{o}\\
u_{1}\\
\vdots\\
u_{N-2}\\
u_{N-1}\\
u_{N}
\end{bmatrix}-
\begin{bmatrix}
0\\
1\\
\vdots\\
1\\
1\\
0
\end{bmatrix}
\right| \right|
= \lim_{h \to 0}\left|\left|
\begin{bmatrix}
\frac{u_{o}}{h^2}\\
\frac{u_{o}-2u_{1}+u_{2}}{h^{2}}-1\\
\frac{u_{1}-2u_{2}+u_{3}}{h^{2}}-1\\
\vdots\\
\frac{u_{N-2}-2u_{N-1}+u_{N}}{h^{2}}-1\\
\frac{u_{N}}{h^2}
\end{bmatrix}
\right| \right|=0
\end{equation}
\end{small}
Taking the vector norm in the $L_{1}$ sense, $||.||_{1}$, equation (\ref{FD1}) can be written as:
\begin{equation}
\lim_{h \to 0} \Big[\left|\frac{u_{o}}{h^2}\right|+\left|\frac{u_{o}-2u_{1}+u_{2}}{h^{2}}-1\right| +\ldots+ \left|\frac{u_{N-2}-2u_{N-1}+u_{N}}{h^{2}}-1\right|+ \left|\frac{u_{N}}{h^2}\right|\Big]=0\label{term_1}
\end{equation}
$\lim_{h \to 0} \frac{u_{o}}{h^2}=0$ and $\lim_{h \to 0} \frac{u_{N}}{h^2}=0$ hold trivially because of the boundary conditions we imposed, i.e. $u_{o}=0$ and $u_{N}=0$. The norm used in~(\ref{FD1}) is the norm of the space $Y_h$, i.e., $||.||_{Y_h}$.\\
This reduces to proving:
\begin{equation} \label{FD2}\small
\sum_{i=1}^{N-1} \lim_{h \to 0} \left| \frac {u_{i-1}-2u_{i}+u_{i+1}}{h^{2}}-1\right|=0
\end{equation}
But from the Taylor--Lagrange analysis discussed in section~(\ref{point_const}), we have
\begin{equation}\label{FD3}\small
\left |\frac {u_{i-1}-2u_{i}+u_{i+1}}{h^{2}}- \frac{d^{2}u}{dx^{2}}\Big|_{x_i} \right| \leq Ch^2
\end{equation}
where $C$ is a constant, and $u_{i}=u(x_i), u_{i-1}=u(x_i -h), u_{i+1}=u(x_i +h)$. Substituting $\left. \frac{d^{2}u}{dx^{2}} \right|_{x_{i}}=1$, and using the inequality (\ref{FD3}) and equation~(\ref{FD2}), we get
\begin{equation}\small
\sum_{i=1}^{N-1} 0 \leq \sum_{i=1}^{N-1} \lim_{h \to 0} \left| \frac {u_{i-1}-2u_{i}+u_{i+1}}{h^{2}}-1\right| \leq \sum_{i=1}^{N-1} \lim_{h \to 0} |C h^2|
\end{equation}
But,
$ \sum_{i=1}^{N-1} \lim_{h \to 0} |C h^2|=0$. Hence, using the sandwich theorem, we prove that
\begin{small}
\begin{equation*}
\sum_{i=1}^{N-1} \lim_{h \to 0} \left| \frac {u_{i-1}-2u_{i}+u_{i+1}}{h^{2}}-1\right|=0 \qquad \textbf{[QED]}
\end{equation*}
\end{small}
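As an informal cross-check of this argument outside Coq, one can evaluate the second difference of a smooth test function and watch the error shrink quadratically, as the Taylor--Lagrange bound~(\ref{FD3}) predicts. The following Python sketch is purely illustrative (the choice of $u=\sin$, the evaluation point, and the step sizes are ours):

```python
import math

def second_diff(u, x, h):
    # Central second difference: (u(x - h) - 2 u(x) + u(x + h)) / h^2
    return (u(x - h) - 2.0 * u(x) + u(x + h)) / h**2

u, x0 = math.sin, 1.0                # smooth test function with u'' = -sin
err = lambda h: abs(second_diff(u, x0, h) - (-math.sin(x0)))

# Halving h cuts the error by roughly 4, consistent with the C h^2 bound.
e1, e2 = err(1e-2), err(5e-3)
print(round(e1 / e2))  # 4
```

The observed factor of four on halving $h$ is exactly the second-order behavior that the constant-times-$h^{2}$ bound asserts.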
\subsection{Formalization in Coq}
In order to represent $x_{i},\; i=0..N$,
we define $x$ of type \texttt{nat $\to$ R}.
The boundary conditions are imposed as hypothesis statements:
\begin{small}
\begin{verbatim}
Hypothesis u_0 : (D 0 (x 0))= 0.
Hypothesis u_N: (D 0 (x N)) =0.
\end{verbatim}
\end{small}
The differential equation is defined as:
\begin{small}
\begin{verbatim}
Hypothesis u_2x: forall i:nat, (D 2 (x i)) =1.
\end{verbatim}
\end{small}
Equation (\ref{FD2}) is formalized as a lemma statement:
\begin{small}
\begin{verbatim}
Lemma lim_sum: is_lim (fun h:R =>
sum_n_m (fun i:nat => Rabs (( D 0 (x i -h) -2* (D 0 (x i))
+ D 0 (x i +h))*/(h^2) -1)) 1 (N-1)) 0 0.
\end{verbatim}
\end{small}
This is where we integrate the proof of pointwise consistency of the FD scheme from section (\ref{finite}).
The main theorem statement, which instantiates the statement of consistency required in the proof of the Lax equivalence theorem from section~(\ref{Lax_section}), is as follows:
\begin{small}
\begin{verbatim}
Theorem consistency_inst: forall (U:X) (f:Y) (h:R) (uh: Xh h)
(rh: forall (h:R), X -> (Xh h)) (sh: forall (h:R), Y->(Yh h))
(E: Y->X) (Eh:forall (h:R),(Yh h)->(Xh h)),
is_lim (fun h:R => norm (minus (Ah h (rh h U)) (sh h (A U)))) 0 0.
\end{verbatim}
\end{small}
We note here that the above-mentioned formalization is not unique to the second order scheme that we discussed. The approach we discuss can easily be generalized to verify consistency of any finite difference scheme. The crucial step in such a generalization is the appropriate instantiation of the $A_{h}$ matrix and the vectors $r_{h}u$ and $s_{h}Au$.
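To see such an instantiation concretely, the following Python sketch (our illustration, not part of the Coq development) assembles the action of $A_{h}$, the restriction $r_{h}u$ of the exact solution, and the vector $s_{h}Au$, and evaluates the residual in the $L_{1}$ norm. Since the exact solution of $u''=1$ with homogeneous boundary conditions is the quadratic $u(x)=x(x-L)/2$, the second difference reproduces $u''$ exactly and the residual vanishes up to rounding:

```python
# Exact solution of u'' = 1 on [0, L] with u(0) = u(L) = 0 is the
# quadratic u(x) = x (x - L) / 2, so the second difference is exact and
# the residual || A_h (r_h u) - s_h A u ||_1 is zero up to rounding.
L_dom, N = 1.0, 8
h = L_dom / N
x = [i * h for i in range(N + 1)]
u = [xi * (xi - L_dom) / 2.0 for xi in x]          # r_h u

def apply_Ah(v):
    out = [v[0] / h**2]                            # first boundary row
    out += [(v[i-1] - 2.0*v[i] + v[i+1]) / h**2 for i in range(1, N)]
    out.append(v[N] / h**2)                        # last boundary row
    return out

s = [0.0] + [1.0] * (N - 1) + [0.0]                # s_h A u
residual = sum(abs(a - b) for a, b in zip(apply_Ah(u), s))
print(residual < 1e-10)  # True
```

For a generic smooth right-hand side the residual would instead decay at the $O(h^{2})$ rate established above.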
\section{Stability of the scheme}\label{stability_section}
In this section we discuss the stability of the scheme $\mathcal{N}_h$. From section~\ref{Lax_section}, stability of a numerical scheme requires the solution operator $E_{h}=A_{h}^{-1}$ to be uniformly bounded. We prove this by bounding the eigenvalues of $E_{h}$ uniformly. The eigenvalues of $E_{h}$ are simply the inverses of the eigenvalues of $A_{h}$; a formal proof of this can be found in Appendix~\ref{inverse_spectrum}.
We will first discuss a generalized framework for the formalization of stability for a symmetric tri-diagonal matrix in Coq. We denote this matrix with $A_{h}(a,b,c)$, with $c=a$ for symmetry. This notation means that $b$ is on the diagonal, $c$ is on the upper diagonal and $a$ is on the lower diagonal. All the other entries are zero. Since we are treating stability from a spectral viewpoint, we next discuss the formalization of the eigensystem for $A_{h}(a,b,a)$.
\subsection{Lemma to verify that the eigenvalues and eigenvectors belong to the spectrum of $A_{h}(a,b,a)$}
Analytical expressions for the eigenvalues and eigenvectors of $A_{h}(a,b,c)$ are given by:
\begin{small}
\begin{equation*}
\lambda_{m}=b+2\sqrt{ac}\cos{\left[\frac{m\pi}{N+1}\right]}; \quad \left(s_{m}\right)_{j}=\left[\frac{a}{c}\right]^{j-1/2}\sqrt{\frac{2}{N+1}}\sin{\left[j \frac{m\pi}{N+1}\right]}\;
\end{equation*}
\end{small} $ \forall m,j = 1..N$.
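These expressions can be sanity-checked numerically. The Python sketch below (our illustration, using the symmetric choice $a=c$ so that the $\left[a/c\right]$ prefactor equals $1$) applies the tridiagonal matrix to $s_{m}$ and confirms $A_{h}s_{m}=\lambda_{m}s_{m}$ up to rounding for every $m$:

```python
import math

# Symmetric case a = c, so the [a/c] prefactor in s_m equals 1.
N, a, b, c = 5, 1.0, -2.0, 1.0

def tridiag_apply(v):
    # y = A_h(a, b, c) v: sub-diagonal a, diagonal b, super-diagonal c
    y = []
    for i in range(N):
        acc = b * v[i]
        if i > 0:
            acc += a * v[i - 1]
        if i < N - 1:
            acc += c * v[i + 1]
        y.append(acc)
    return y

max_res = 0.0
for m in range(1, N + 1):
    lam = b + 2.0 * math.sqrt(a * c) * math.cos(m * math.pi / (N + 1))
    s_m = [math.sqrt(2.0 / (N + 1)) * math.sin(j * m * math.pi / (N + 1))
           for j in range(1, N + 1)]
    res = max(abs(y - lam * v) for y, v in zip(tridiag_apply(s_m), s_m))
    max_res = max(max_res, res)
print(max_res < 1e-12)  # True: A_h s_m = lambda_m s_m for every m
```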
In Coq, we defined $\lambda_m$ and $s_m$ as follows:
\begin{small}
\begin{verbatim}
Definition Eigen_vec (m N:nat) (a b c:R):= mk_matrix N 1
(fun i j:nat =>
sqrt ( 2 / INR (N+1))*(Rpower (a */c) (INR i +1 -1*/2))*
sin(((INR i +1)*INR(m+1)*PI)*/INR (N+1))).
Definition Lambda (m N:nat) (a b c:R):= mk_matrix 1 1
(fun i j:nat =>
b + 2* sqrt(a*c)* cos ( (INR (m+1) * PI)*/INR(N+1))).
\end{verbatim}
\end{small}Since naturals in Coq start with 0, we write \texttt{INR (m+1)} and \texttt{INR i+1}.
We then formally verify that the analytical expressions for the pair $(\lambda_{m}, s_{m})$ indeed belong to the spectrum of $A_{h}$. From now on, we will refer to $A_{h}(a,b,a)$ as $A_{h}$ for the sake of brevity.
In Coq, we state this formally as:
\begin{small}
\begin{verbatim}
Lemma eigen_belongs (a b c:R): forall (m N:nat), (2 < N)%nat ->
(0 <= m < N)%nat -> LHS = RHS.
\end{verbatim}
\end{small}
where $LHS\overset{\Delta}{=}A_{h}s_{m}$ and $RHS\overset{\Delta}{=}s_{m}\lambda_{m}$. Here we used the defining property of an eigenvalue--eigenvector pair, i.e., $A_{h} s_{m}=\lambda_{m}s_{m}$. Formalizing the proof of the lemma \texttt{eigen\_belongs} was challenging due to the structure of the matrix $A_{h}$. $A_{h}$ is a tri-diagonal matrix with non-zero entries on the diagonal, sub-diagonal and super-diagonal. The other entries are zero and hence the matrix is sparse.
\begin{equation}\small
\label{sparse_sum}
\therefore\;
\underbrace{
\sum_{j=0}^{N-1} {A_{h} (i,j) s_{m}(j)}
}_\text{$A_{h}(i,j) \neq 0$} +
\underbrace{
\sum_{j=0}^{N-1} {A_{h} (i,j) s_{m}(j)}
}_\text{$A_{h}(i,j) =0$} = \lambda_{m} s_{m}(i); \quad 0\leq i \leq N-1
\end{equation}
In Coq, we have to carefully destruct the matrix $A_{h}$ to separate the non-zero and zero sums in the LHS of equation~(\ref{sparse_sum}). The idea is to do a case analysis on the row-index $i$, and has been illustrated in figure~(\ref{tridiagonal}) in the Appendix~\ref{lemma_eigen}. Details on the formal proof of the zero and non-zero cases are presented in Appendix~\ref{lemma_eigen}.
Next, we discuss the formalization of the boundedness of the matrix norm of $E_{h}=A_{h}^{-1}$. We have used an explicit formulation of $A_{h}^{-1}$~\cite{hu1996analytical} in our formalization, and we verify it formally using the definition: $A_{h}^{-1}A_{h}=I \; \land \; A_{h}A_{h}^{-1}=I$. Details on the proof can be found in Appendix~\ref{invertible_check}.
\subsection{Lemma on the boundedness of the matrix norm for scheme $\mathcal{N}_{h}$}
Here, we have used the definition of the spectral norm (2-norm): $||A||_{2} = \rho(A)$,
where $\rho(A)$ is the spectral radius of $A$, defined as the largest eigenvalue of $A$ in absolute value, i.e. $\rho(A)=\max_{m} |\lambda_{m}(A)|$.
For the symmetric tri-diagonal matrix $A_{h}$, we apply this with $A=E_{h}$, where $\lambda_{m}(E_{h})= 1/ \lambda_{m}(A_{h})$.
Since each $\lambda_{m} (A_{h}) < 0$, $\max_{m} |\lambda_{m}(E_{h})|= 1/ |\lambda_{min}(A_{h})|$, where $\lambda_{min}(A_{h})$ denotes the eigenvalue of $A_{h}$ of smallest magnitude. Hence, we define the matrix norm in Coq as follows:
\begin{small}
\begin{verbatim}
Definition matrix_norm (N:nat):= 1/ Rabs (Lambda_min N).
\end{verbatim}
\end{small} To show that the matrix norm is uniformly bounded, we need to show that $1/ |\lambda_{min}(A_{h})|$ is uniformly bounded. This is where we instantiate the tri-diagonal matrix $A_{h}$ with the scheme $\mathcal{N}_{h}$. Thus, we prove the following lemma in Coq:
\begin{small}
\begin{verbatim}
Lemma spectral: forall (N:nat), (2 < N)%nat ->
matrix_norm N <= (L^2)/4.
\end{verbatim}
\end{small}where $L$ is the length of the domain, independent of $h$, and is constant throughout. \texttt{Lambda\_min} is the minimum-magnitude eigenvalue of the instantiated matrix,
$A_{h}'= A_{h}(\frac{1}{h^2}, \frac{-2}{h^2}, \frac{1}{h^2})$. We provide a paper proof of this bound in the Appendix~\ref{paper_proof}.
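As an informal complement to the Appendix proof, the bound can also be observed numerically. For $A_{h}'$ with $h=L/(N+1)$, the eigenvalue of smallest magnitude is $-4\sin^{2}\left(\frac{\pi}{2(N+1)}\right)/h^{2}$, and Jordan's inequality $\sin x \geq 2x/\pi$ yields a uniform bound of the form $1/|\lambda_{min}(A_{h}')| \leq L^{2}/4$ (one bound of this form; the constant derived in the Appendix may differ). The Python sketch below evaluates $1/|\lambda_{min}|$ over a range of $N$:

```python
import math

L = 1.0   # domain length (any fixed value; the bound scales with L^2)

def inv_lambda_min(N):
    # 1 / |lambda_min(A_h')| with h = L/(N+1); lambda_min is the
    # eigenvalue of smallest magnitude, -4 sin(pi/(2(N+1)))^2 / h^2.
    h = L / (N + 1)
    lam = -4.0 * math.sin(math.pi / (2 * (N + 1)))**2 / h**2
    return 1.0 / abs(lam)

vals = [inv_lambda_min(N) for N in range(3, 2000)]
print(max(vals) < L**2 / 4)   # True: uniformly bounded in N (i.e., in h)
print(vals[-1])               # approaches L^2 / pi^2 as h -> 0
```

The sequence is bounded uniformly in $N$ and converges to $L^{2}/\pi^{2}$, the reciprocal of the smallest eigenvalue magnitude $\pi^{2}/L^{2}$ of the continuous operator.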
To show that all the eigenvalues have the same bound, we prove that $\frac{1}{\lambda_{min}(A_{h}')}$ is the maximum eigenvalue of $E_{h}'$. The lemma statement is as follows:
\begin{small}
\begin{verbatim}
Lemma eigen_relation: forall (i N:nat), (2 < N)%nat -> (i < N)%nat ->
Rabs (lam i N) <= 1/ Rabs( Lambda_min N).
\end{verbatim}
\end{small}This completes the proof on the boundedness of the eigenvalues of $E_{h}'$. The lemma \texttt{eigen\_relation} also shows that the spectral radius of $E_{h}'$ is $\frac{1}{|\lambda_{min}(A_{h}')|}$, and justifies the definition of \texttt{matrix\_norm}.
We note that the definition of the matrix norm of $A_{h}^{-1}$ is valid only if $A_{h}^{-1}$ is a normal matrix. We therefore verify that $A_{h}^{-1}$ is normal. The lemma statement is provided in the Appendix~\ref{inverse_normal}.
We also provide the proof that $A_{h}$ is diagonalizable in the Appendix~\ref{diagonalization}. This helps us formally establish that the eigenvectors are orthogonal and hence that the eigenspace is complete.
\subsection{Main stability theorem}
In this section, we integrate all of the previous lemmas to prove the main stability theorem (\ref{stability}).
\begin{small}
\begin{verbatim}
Theorem stability: forall (u:X) (f:Y) (h:R) (uh: Xh h)
(rh: forall (h:R), X -> (Xh h))(sh: forall (h:R), Y->(Yh h))
(E: Y->X) (Eh:forall (h:R), (Yh h)->(Xh h)),
exists K:R , forall (h:R), operator_norm(Eh h)<=K.
\end{verbatim}
\end{small}where the operator norm is instantiated with the matrix norm using the following hypothesis:
\begin{small}
\begin{verbatim}
Hypothesis mat_op_norm: forall (u:X) (f:Y) (h:R) (uh: Xh h)
(rh: forall (h:R), X -> (Xh h))(sh: forall (h:R), Y->(Yh h))
(E: Y->X) (Eh:forall (h:R),(Yh h)->(Xh h)),
operator_norm (Eh h) = matrix_norm m.
\end{verbatim}
\end{small}
\section{Application of the Lax equivalence theorem to the example problem}
\label{Lax_apply}
In this section, we apply the Lax equivalence theorem that we proved in Section~\ref{Lax_section} to a concrete differential equation $\frac {d^{2}u}{dx^{2}}=1$ and the numerical scheme $\mathcal{N}_h$ given by $\frac { u_{i+1}-2u_{i}+u_{i-1}}{\Delta x^2} = 1$. We recall that the proof of convergence using the Lax equivalence theorem requires that the difference scheme is consistent with respect to the differential equation and is stable. We discussed the proof of consistency of the scheme in Section~\ref{finite} and the stability in Section~\ref{stability_section}. Thus, we apply these proofs to complete the proof of convergence for the scheme. We provide the theorem statement to verify convergence of the scheme in the Appendix~\ref{appendix_A}.
\section{Conclusion and Future work}\label{conclusion}
This work investigated the formalization of convergence, stability and consistency of a finite difference scheme in the Coq proof assistant. Any continuously differentiable function can be approximated by a Taylor polynomial. The Lagrange remainder of a Taylor series provides an estimate of the \textcolor{black}{truncation} error, and we formally proved that this error can be bounded by the $n^{th}$ power of the discretization step, $\Delta x$, where $n-1$ is the order of the Taylor
polynomial. We implemented the proof of the consistency of a finite difference scheme by breaking down the theorem statement into lemmas, each corresponding to function values at points neighboring the point of evaluation. These lemmas were proved individually by applying the Taylor--Lagrange theorem, the proof of which is already formalized in the \texttt{Coq.Interval} library \cite{martin2013certified}.
Consistency and stability guarantee convergence, as stated by the Lax equivalence theorem. Following the proof of the Lax equivalence theorem, we formally proved convergence of a specific finite difference scheme. Specifically, we proved that the global discretization error can be bounded above by a constant times the local discretization error.
Then, by applying the sandwich theorem for limits, we proved that the convergence condition is satisfied in the limit $\Delta x \to 0$. In the process of formalizing the proof of stability for the numerical scheme, we also developed tools for linear algebra and spectral theory, for the \texttt{Coquelicot} definition of matrices in Coq, which can be reused. As noted earlier, the approach we follow is not specific to the sample numerical scheme, but can be easily extended to other numerical schemes with appropriate \textcolor{black}{instantiation} of the matrix $A_{h}$, and vectors, $r_{h}u$, $s_{h}Au$. Formalization of the proof of orthogonality of the eigenvectors helped us report the missing constant $\sqrt{\frac{2}{N+1}}$ in $s_{m}$ that occurs in most textbooks/literature on numerical analysis.
This work considered the impact of the discretization error on the convergence of a numerical method to the exact solution. In a practical setting, floating point errors also have to be accounted for, as an accumulation of such errors can lead to deviations from the true solution. In future work, we will extend our results to incorporate floating point errors and their impact on the convergence of finite difference numerical schemes. We also plan on working with iterative solvers, which would be an extension of our current work on direct solvers (explicit inversion of the matrix $A_{h}$). We also plan on working with the Frama-C toolkit~\cite{10.1007/978-3-642-33826-7_16} for verification \textcolor{black}{of existing programs} and to discharge the generated verification conditions using the Coq proofs we present in this paper.
\subsection{Effort and challenges}
The total length of the Coq code and proofs is about 14,000 lines, \textcolor{black}{of which about 1200 lines are specific to the scheme. The rest of the formalization can be reused for a generic symmetric tridiagonal matrix}. \textcolor{black}{It took us about 15 months for the entire formalization.} Much of the effort was spent on destructing the matrices and developing the linear algebra tools required to handle the matrix manipulation. Since we are treating stability from a spectral point of view, the lack of spectral theory for the \texttt{Coquelicot} definition of matrices was a challenge for us.
For the proof of consistency, the primary challenge was the right placement of the quantifiers to bound the Lagrange remainder using the definition of big-$O$ notation.
To instantiate $\Gamma =M+K$, we had to carefully integrate the lemmas into the main theorem. \textcolor{black}{We believe that a generic library with an automated implementation of the big-O definitions would save considerable effort here.} We also encountered issues in selecting appropriate instantiations for other existential parameters. In the proof of convergence, we had to carefully construct the application of properties of limits with filters of neighborhoods.
\newpage
\bibliographystyle{splncs04}
\section{Introduction}
Procedural content generation via machine learning (PCGML)~\cite{summerville2017procedural} denotes a subgroup of PCG techniques that learn models of the type of content to be generated and then sample from those models to create new instances of the content (e.g. learn from a set of example game levels and then generate new levels having characteristics and properties of the example levels). Common challenges of PCGML approaches are the generalizability of trained models across domains and finding or creating the training data needed for a given domain. As such, most PCGML level generation approaches have only explored a handful of level domains (predominantly, \textit{Super Mario Bros.}~\cite{summerville2016mariostring,guzdial2016game,snodgrass2017learning}, \textit{Kid Icarus}~\cite{snodgrass2017learning,snodgrass2016approach,sarkar2018blending,sarkar2019blending}, and \textit{The Legend of Zelda}~\cite{summerville2015samplinghyrule}).
Recent work has begun exploring ways of addressing the above challenges. Some have explored methods for leveraging existing training data to build models that generalize across several domains. These methods either try to supplement a new domain's training data with examples from other domains~\cite{snodgrass2016approach}, build multiple models and blend them together~\cite{guzdial2018automated,sarkar2018blending}, or directly build a model trained on multiple domains~\cite{sarkar2019blending}. Such approaches are pushing the field towards more generally applicable PCGML techniques, and open the door for more creative PCGML~\cite{guzdial2018combinatorial}. We propose an approach to level blending that falls in the latter category. Our approach blends levels from different domains together by finding and leveraging structural similarities between domains.
We build on existing PCGML research by combining two methods for generating levels, variational autoencoders (VAEs) and example--driven binary space partitioning (EDBSP). We leverage these approaches to model and generate levels at two levels of abstraction: one abstraction layer captures the structural information of the levels, and the other captures the finer domain--specific details such as object, enemy, and item placements. We test and evaluate our proposed approach across $7$ platforming games, $3$ of which have not been used as training or test domains in prior PCGML research, to the best of our knowledge.
The main contributions of this paper are:
\begin{enumerate}
\item A new PCGML approach for domain blending that combines two previous techniques, VAEs for modeling and generating structural level layouts and EDBSP for filling in those generated layouts by blending details from various domains.
\item A multi-domain evaluation of the proposed approach exploring a broader range of domains than previous work.
\end{enumerate}
\section{Related Work}
Procedural content generation via machine learning (PCGML)~\cite{summerville2017procedural} describes a family of approaches for PCG that first learn a model of a domain from a set of training examples and then use that learned model to generate new content. Much PCGML research has focused on building models of individual domains in order to create new content within the chosen domain. A variety of approaches have been explored in pursuit of this goal (e.g., LSTMs~\cite{summerville2016mariostring}, DBNs~\cite{guzdial2016game}, Markov Models~\cite{snodgrass2017learning,dahlskog2014linear}, GANs~\cite{volz2018evolving}, VAEs~\cite{thakkar2019autoencoder}), and each has shown its ability to generate levels within a chosen domain. However, these techniques are only applicable in the domains in which they are trained, and rely on the existence of training data from the target domain. For this work, among the above approaches, we chose to use VAEs. Prior work has demonstrated their potential for blending domains \cite{sarkar2019blending} by learning continuous, latent models of input domains. Additionally, unlike GANs, VAEs also learn the mapping from the input domain to the latent domain which may make it more suitable in a co-creative design context. This is particularly useful since we hope to develop our approach into a mixed-initiative tool in the future. Moreover, VAEs also offer potential for controllability in the form of conditional VAEs.
Recently there has been work exploring PCGML approaches for blending domains and domain transfer. Guzdial and Riedl~\cite{guzdial2016learning} proposed a level blending model that blended different level styles within a single domain. Our work differs from theirs in that ours aims to blend between multiple domains. Guzdial and Riedl~\cite{guzdial2018automated} have also proposed a method for blending and combining complete games via conceptual expansion on learned graph representations of games. Our work instead focuses on blending levels by finding structural similarities between training domains and an input level sketch. Snodgrass and Ontan{\'o}n~\cite{snodgrass2016approach} presented a domain transfer approach for supplementing one domain with translated levels from another domain by finding mappings between the representations. In our work, we instead define a uniform abstract representation across domains which we use for finding structural similarities. Sarkar and Cooper~\cite{sarkar2018blending} trained separate LSTMs on multiple domains, and created blended levels by switching between the trained models. While the abstract level generation stage of our approach is trained separately on different domains, our full resolution level generation stage, which performs the blending, need not be retrained.
In blending and generating levels by combining together parts of different domains, our work, like the past work cited above \cite{guzdial2016game,guzdial2016learning,guzdial2018automated,guzdial2018combinatorial,sarkar2018blending, sarkar2019blending}, also falls under combinational creativity \cite{boden2004creative}, the branch of creativity where new ideas and concepts are generated by combining existing ones in novel ways. Such methods can help in producing and exploring new design domains via blending and combination, as we attempt to do in this work by blending existing platformer domains to create new ones.
The approaches that are most relevant to our proposed work are Sarkar et al.'s~\cite{sarkar2019blending} use of VAEs for level generation and blending and Snodgrass'~\cite{snodgrass2019levels} example-driven BSP approach for generating levels from an input sketch. We present a hybrid model that combines these methods into a single pipeline allowing for the creation of new sketches by sampling from VAEs to create structural level sketches, and generating fully realized blended levels by using EDBSP with access to multiple domains to fill in the details of the sketches. This work extends previous EDBSP work by using multiple domains, allowing for domain blending; and by using sketches generated by VAEs, thus highlighting the versatility of the EDBSP approach.
\section{Methods}
At a high level, our proposed approach is composed of two stages. First, we use a variational autoencoder (VAE) to model and generate the abstracted structural patterns from a set of training levels in a given domain. Next, we pass a generated structural level sketch to an example-driven extension to the binary space partitioning algorithm. This algorithm generates a fully realized level by finding matching structural patterns in a set of training levels across multiple domains, and using those level sections to fill in the details resulting in a blended level. Below we describe how we represent our levels, and each stage of our approach in more detail.
\begin{figure*}[t]
\centering
\includegraphics[width=.8\textwidth]{LR_1.pdf}
\caption{This figure shows a \textit{Lode Runner} level (a), that same level represented with the full resolution representation (b), and that level represented with the sketch resolution representation (c).}\label{fig:levelRep}
\end{figure*}
\subsection{Level Representations}
We demonstrate our approach using a set of NES platforming games (described in Section \ref{sec:dom}). We represent game levels with a tile grid where a cell can take a value from a set of tile types corresponding to elements of the domain. Figure \ref{fig:levelRep} (a-b) shows an example of such a representation. This style of representation is commonly used in PCGML approaches~\cite{summerville2017procedural} and is also used by the Video Game Level Corpus (VGLC)~\cite{summerville2016vglc}. Using this tile-based representation, we represent levels at two layers of abstraction, a \textbf{Full Resolution} layer and a \textbf{Sketch Resolution} layer. The tile types composing the full resolution layer differ between domains and correspond to specific structural components, interactive elements, enemies, and items in that domain. The sketch resolution layer, however, consists of the same three tile types across all domains:
\begin{enumerate}
\item \textbf{\#}, representing a solid/impassable element;
\item \textbf{-}, representing empty space or otherwise passable elements;
\item \textbf{?}, representing a wildcard that can be interpreted as either solid or empty.
\end{enumerate}
\noindent The wildcard tile extends the previous sketch resolution representation~\cite{snodgrass2019levels}, and was included in this work to more easily capture structures that are not clearly represented by the empty or solid types (e.g., ladders). Figure \ref{fig:levelRep} shows a \textit{Lode Runner} level represented in these two abstractions.
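The wildcard semantics can be stated as a small matching predicate. The sketch below (our illustration; the tile characters follow the list above) treats \texttt{?} as compatible with both solid and empty tiles when comparing two sketch-resolution regions:

```python
def tiles_match(a, b):
    # '?' is a wildcard compatible with both solid '#' and empty '-'.
    return a == '?' or b == '?' or a == b

def regions_match(sketch_region, candidate_region):
    # Regions are lists of equal-length strings (rows of sketch tiles).
    return all(tiles_match(s, c)
               for srow, crow in zip(sketch_region, candidate_region)
               for s, c in zip(srow, crow))

print(regions_match(["#?", "--"], ["##", "--"]))  # True
print(regions_match(["#-"], ["--"]))              # False
```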
\subsection{Generating Sketch Resolution Levels}
Variational autoencoders (VAEs) \cite{kingma2013autoencoding} are generative models that learn continuous, latent representations of training data which can then be sampled to produce novel outputs. Such models consist of an encoder which maps the input data to a latent space and a decoder which maps from points in this latent space to outputs. While vanilla autoencoders \cite{hinton2006reducing} learn lower-dimensional latent representations of training data by only minimizing reconstruction error, VAEs additionally enforce the learned latent representation to model a continuous, probability distribution by minimizing the KL divergence between the latent distribution and a known prior (usually a Gaussian). Thus, similar to GANs, VAEs can generate novel variations of the training data in addition to being able to perform reconstruction. In this work, we used VAEs to generate levels at the sketch resolution layer, training a separate generative model for each domain.
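The KL term that distinguishes a VAE from a vanilla autoencoder has a closed form when the posterior is a diagonal Gaussian and the prior is standard normal: $KL\left(\mathcal{N}(\mu,\sigma^{2})\,\|\,\mathcal{N}(0,1)\right)=\frac{1}{2}\sum\left(\mu^{2}+\sigma^{2}-1-\log\sigma^{2}\right)$. A minimal sketch of this term (plain Python rather than the PyTorch used in this work; the function name is ours):

```python
import math

def kl_to_standard_normal(mu, log_var):
    # Closed-form KL( N(mu, sigma^2) || N(0, 1) ), summed over latent
    # dimensions; mu and log_var are per-dimension encoder outputs.
    return 0.5 * sum(m**2 + math.exp(lv) - 1.0 - lv
                     for m, lv in zip(mu, log_var))

# When the posterior matches the prior (mu = 0, sigma = 1) the KL is zero,
# and it grows as the posterior drifts away from the prior.
print(kl_to_standard_normal([0.0, 0.0], [0.0, 0.0]))  # 0.0
print(kl_to_standard_normal([1.0], [0.0]))            # 0.5
```

Minimizing this term alongside the reconstruction error is what keeps the learned latent space continuous and sampleable.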
\subsection{Generating Full Resolution Levels}\label{sec:BSP}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Pipeline.pdf}
\caption{This figure shows the basic pipeline of the EDBSP algorithm. First, an input sketch is provided (a). This sketch can be chosen from the training data (as in Section \ref{sec:EDBSP-TS}) or generated with a VAE (as in Sections \ref{sec:VAE-GS} and \ref{sec:EDBSP-GS}). Next, BSP is used to split the sketch into regions (b). Finally, structural matches for those sketch regions are found in the training data (c), and are used to create a full-resolution level (d).}
\label{fig:EDBSP}
\end{figure}
Binary Space Partition (BSP)~\cite{togelius2016introduction} is a partitioning algorithm classically used in PCG for dungeon generation. The standard BSP algorithm recursively splits regions of a map into two smaller regions using a random orientation (vertical or horizontal) and positioning within the region until some end condition is met (e.g., a specified number of regions are created). Another process then takes those regions and converts them into a level (e.g., connects regions with doors, places enemies and keys, etc.). We use an extension of BSP called Example-driven Binary Space Partition (EDBSP)~\cite{snodgrass2019levels} which uses training data to fill in the details of the produced regions. Specifically, EDBSP is given an input sketch for a level (Figure \ref{fig:EDBSP}.a), and a set of training levels represented in both sketch and full resolution. BSP is then used to split the input sketch into regions (Figure \ref{fig:EDBSP}.b). For each region in the sketch, all the matching sketch resolution regions in the training levels are found, and one is chosen randomly from the set for that region (Figure \ref{fig:EDBSP}.c). The corresponding full resolution regions from the training set are then stitched together to produce the full resolution generated level (Figure \ref{fig:EDBSP}.d).
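The partitioning step of EDBSP can be sketched as follows (an illustrative reimplementation, not the authors' code; the minimum region size is an arbitrary choice). The matching and stitching steps then operate on each returned rectangle:

```python
import random

def bsp_regions(x, y, w, h, min_size):
    """Recursively split a w x h rectangle at (x, y) into regions."""
    can_v = w >= 2 * min_size          # room for a vertical cut
    can_h = h >= 2 * min_size          # room for a horizontal cut
    if not (can_v or can_h):
        return [(x, y, w, h)]
    if can_v and (not can_h or random.random() < 0.5):
        cut = random.randint(min_size, w - min_size)
        return (bsp_regions(x, y, cut, h, min_size) +
                bsp_regions(x + cut, y, w - cut, h, min_size))
    cut = random.randint(min_size, h - min_size)
    return (bsp_regions(x, y, w, cut, min_size) +
            bsp_regions(x, y + cut, w, h - cut, min_size))

random.seed(0)
regions = bsp_regions(0, 0, 16, 11, min_size=3)
# The regions partition the sketch exactly, and each is at least 3x3;
# EDBSP would next look up a structurally matching training region for
# each rectangle and copy its full-resolution tiles into place.
print(sum(w * h for _, _, w, h in regions))  # 176 (= 16 * 11)
```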
\section{Experiments}
\subsection{Domains}\label{sec:dom}
We test our level blending approach across seven domains chosen from NES platforming games: \textit{Castlevania (CV)} \cite{castlevania}, \textit{Kid Icarus (KI)} \cite{kidicarus}, \textit{Lode Runner (LR)} \cite{loderunner}, \textit{Mega Man (MM)} \cite{megaman}, \textit{Metroid (MT)} \cite{metroid}, \textit{Ninja Gaiden (NG)} \cite{ninjagaiden}, and \textit{Super Mario Bros. (SM)} \cite{supermario:nes}. Each of these domains differs from the others in the number of levels available and the size and shape of those levels (e.g., \textit{LR} has $150$ levels in the VGLC and \textit{KI} has $6$). This results in imbalanced data sets, which could lead to one domain being over represented in the generated levels simply by having more examples to draw from. To better investigate the relationships between the domains and the capabilities of our approach, we standardize the amount of training data from each domain. Specifically, we use a subset of levels from each domain such that training data from each domain is composed of approximately $18,000$ tiles\footnote{The set of training levels used in each domain can be found here: \url{https://bitbucket.org/FDG2020-Sketch/level-data/}}. Note, this value was chosen as it is the smallest number of tiles in our domains when using all data (i.e., the sum of tiles in all the \textit{CV} levels is $17,728$).
We divide our domains according to the presence of wildcards:
\begin{itemize}
\item \textbf{WildCards (\textit{WC})}: This set contains domains with wildcard tiles in their sketch representations. This set includes \textit{CV} ($6$ levels), \textit{LR} ($25$ levels), \textit{MM} ($4$ levels), and \textit{NG} ($8$ levels).
\item \textbf{No WildCards ($\neg$\textit{WC})}: This set contains the domains that do not have wildcard tiles in their sketch representations. This set includes \textit{KI} ($5$ levels), \textit{MT} ($1$ section of the map split according to locked doors), and \textit{SM} ($6$ levels).
\item \textbf{All Domains (\textit{ALL})}: This set is the union of the above sets.
\end{itemize}
\noindent
\subsection{Experimental Setup}
We test our proposed approach on its ability to generate sketches and full resolution levels. We evaluate each of the stages of our approach individually, and then the full pipeline.
\subsubsection{Sketch Generation}\label{sec:VAE}
To test the sketch generation stage of our approach on its own, we trained a separate VAE on each of the domains, using the same overall architecture for each domain except for the dimensions of the input and output segments which we varied to suit each individual domain. For each VAE, the encoder consisted of 2 strided convolutional layers with batch normalization and leaky ReLU activation while the decoder consisted of 3 convolutional layers which were strided or non-strided as required by the dimensions of the specific domain. The decoder also used batch normalization but with ReLU activation. All models used a 32-dimensional latent space and were trained for 5000 epochs using the Adam optimizer and a learning rate of 0.001. For generation, we selected the model from the epoch which best minimized reconstruction error. All models were implemented using PyTorch \cite{paszke2017automatic}. Note that we use fixed-size windows instead of full levels for training and generation. This is to account for the variation in level sizes both across and within domains and for the fact that convolutional generative models work with fixed-size inputs and outputs. Thus, like prior work using such models for level generation \cite{volz2018evolving,sarkar2019blending}, we generated our training data by sliding a fixed-size window across the levels in each domain and trained our models using those segments obtained after filtering out ones that contained any empty space. We used the following dimensions for each domain:
\begin{itemize}
\item \textit{CV}: 11x16
\item \textit{KI}: 16x16
\item \textit{MM}: 15x16
\item \textit{SM}: 14x14
\item \textit{LR}: 11x16
\item \textit{MT}: 15x16
\item \textit{NG}: 11x16
\end{itemize}
\noindent Note, we use different dimensions for the domains based on the height and width of the training levels.
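The training-data preparation described above can be sketched as a simple window-extraction routine (our illustration; levels are lists of tile rows, and the domain-specific filtering step is omitted):

```python
def windows(level, win_h, win_w, stride=1):
    """Slide a fixed-size window over a tile grid (a list of strings),
    yielding every fully in-bounds win_h x win_w segment."""
    rows, cols = len(level), len(level[0])
    for top in range(0, rows - win_h + 1, stride):
        for left in range(0, cols - win_w + 1, stride):
            yield [row[left:left + win_w]
                   for row in level[top:top + win_h]]

level = ["####",
         "#--#",
         "#--#",
         "####"]
segs = list(windows(level, 2, 2))
print(len(segs))          # 9 windows from a 4x4 grid
print(segs[0])            # ['##', '#-']
```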
For each domain, we then generated $100$ sketch resolution sections of the fixed-size for that domain. For evaluating these sections, we computed the following metrics for each segment:
\begin{itemize}
\item \textit{Density}: the proportion of solid tiles in a region.
\item \textit{Non--Linearity}: how well a segment's topology fits to a line. It is the mean squared error of running linear regression on the highest point of each of the columns in a segment. A zero value indicates perfectly linear topology.
\item \textit{Plagiarism}: a pairwise metric which counts the number of rows and columns a segment shares with another segment.
\item \textit{E--Distance}: a measure of the distance between two distributions introduced by \cite{szekely2013energy} and suggested as a suitable metric for evaluating generative models by \cite{summerville2018expanding} due to certain desirable properties. The lower the E-distance, the more similar are the distributions being compared. For our evaluations, we computed E-distance using the \textit{Density} and \textit{Non-Linearity} of each of the 100 generated segments and that of a random sampling of 100 training segments, per domain.
\end{itemize}
\noindent Notice that we also computed these metrics for the training levels in order to compare against the generated set. The \textit{density}, \textit{non--linearity}, and \textit{E--distance} metrics measure how well the VAE can capture and replicate the structural patterns from the training levels. The \textit{plagiarism} metric measures how much the VAE copies from the training domain, and gives insight into whether the model is able to generate new sections or just replicate existing ones. Additionally, we computed \textit{self-plagiarism} i.e. how much pairs of training segments plagiarize from each other, as a means of understanding how well or poorly the plagiarism detected in the generative model compares with that which already exists in the training data. Due to the large number of training segments compared to the 100 generated segments per domain, for our evaluations, we computed plagiarism and self-plagiarism values using a random sampling of 100 training segments. Additionally, statistical comparisons between generated and training segments were also performed using this sampling.
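These metrics can be computed directly from a segment's tile grid. The sketch below assumes a simple solid (\texttt{X})/empty (\texttt{-}) alphabet; the plagiarism counting convention is one reading of the description above, and the energy distance follows the standard Székely--Rizzo formulation.

```python
import numpy as np

def density(seg):
    """Proportion of solid tiles ('X') in a segment (list of strings)."""
    tiles = "".join(seg)
    return tiles.count('X') / len(tiles)

def non_linearity(seg):
    """MSE of a least-squares line fit to the highest solid point of
    each column; 0 indicates perfectly linear topology."""
    n_rows, n_cols = len(seg), len(seg[0])
    heights = []
    for c in range(n_cols):
        solid = [r for r in range(n_rows) if seg[r][c] == 'X']
        heights.append(n_rows - min(solid) if solid else 0)
    x = np.arange(n_cols)
    slope, intercept = np.polyfit(x, heights, 1)
    return float(np.mean((np.array(heights) - (slope * x + intercept)) ** 2))

def plagiarism(seg_a, seg_b):
    """Count of seg_a's rows and columns that also occur in seg_b."""
    column = lambda seg, c: "".join(row[c] for row in seg)
    rows_b = set(seg_b)
    cols_b = {column(seg_b, c) for c in range(len(seg_b[0]))}
    shared_rows = sum(1 for row in seg_a if row in rows_b)
    shared_cols = sum(1 for c in range(len(seg_a[0]))
                      if column(seg_a, c) in cols_b)
    return shared_rows + shared_cols

def e_distance(X, Y):
    """Energy distance between two samples of per-segment feature
    vectors, e.g. rows of [density, non-linearity]."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    pd = lambda A, B: np.linalg.norm(
        A[:, None, :] - B[None, :, :], axis=-1).mean()
    return 2 * pd(X, Y) - pd(X, X) - pd(Y, Y)

stairs = ["---X", "--XX", "-XXX", "XXXX"]
print(density(stairs))                  # 0.625
print(round(non_linearity(stairs), 6))  # 0.0
```

The staircase segment has perfectly linear topology, hence zero non-linearity; identical samples have zero energy distance.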
\subsubsection{Conditional Sketch Generation} In addition to training a standard VAE on each sketch domain, we also trained a conditional VAE (CVAE)~\cite{sohn2015learning,yan2015conditional} on sketches from all domains taken together, with each sketch labeled with its corresponding domain. Conditional generative models~\cite{mirza2014conditional}, as the name suggests, enable generation of outputs conditioned on some given input. Such models are trained simply by concatenating training data instances with the data to be used for conditioning such as a class label, for example. Thus a CVAE trained as described above could enable generating sketches of a desired domain allowing for greater control in the generation process. For our CVAE, we used a different architecture than the regular VAEs described above, with the encoder and decoder both consisting of 2 linear layers, though the latent space was still 32-dimensional. The conditioning input was a one-hot encoded vector indicating the domain of the corresponding input sketch. For training, we used segments of dimension 11x16 for all domains as this was the largest window size that could accommodate all domains. The 11x16 segments were flattened to a single-dimensional input vector for the linear layers. Unfortunately, we did not obtain strong results using this approach and did not use CVAE-generated sketches as inputs to EDBSP for full level generation. However, conditioning the generation process still resulted in interesting outputs and opens up directions to consider for future work.
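Conditioning by concatenation, as described above, can be sketched with a hypothetical helper; the dimensions follow the flattened 11x16 segments and 7 domains used here.

```python
import numpy as np

def condition(batch, domain_ids, n_domains=7):
    """Concatenate a one-hot domain label onto each flattened 11x16
    sketch; in a CVAE the same label is also appended to the latent
    code before decoding."""
    one_hot = np.eye(n_domains)[np.asarray(domain_ids)]
    return np.concatenate([batch, one_hot], axis=1)

batch = np.zeros((4, 11 * 16))        # four flattened sketches
cond = condition(batch, [0, 2, 2, 5])
print(cond.shape)  # (4, 183)
```

Each conditioned input is the 176 segment values followed by the 7-dimensional label.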
\subsubsection{Full Resolution Generation}
To test the full resolution generation stage of our approach on its own, we used each of the domains separately as input sketches to the EDBSP algorithm paired with different subsets of domains as the levels used for filling in the details and blending. For this, we chose a domain, then generated a total of $100$ full resolution levels for that domain divided evenly amongst the sketches (e.g., \textit{LR} has $25$ sketches, and therefore EDBSP generates $4$ full resolution levels for each sketch; \textit{SM} has $6$ sketches, and EDBSP generates $16$--$17$ full resolution levels for each sketch). We perform this process for each domain, using each defined subset of domains (i.e., \textit{WC}, $\neg$\textit{WC}, \textit{ALL}) as the example full resolution levels to EDBSP. While using a given domain for its sketches, we removed it from its respective training data subset. This resulted in $300$ generated levels per domain, $100$ for each subset of domains.
To test our full pipeline for level blending and generation (i.e., the full resolution generation stage combined with the sketch generation stage), we follow a similar procedure as above. We use the sketch sections generated for each domain using the VAE described in Section \ref{sec:VAE} as input to the EDBSP algorithm. For each domain we generate $10$ full resolution sections from each of the $100$ generated sketches. We perform this process with each defined subset of domains (\textit{WC}, $\neg$\textit{WC}, \textit{ALL}) as example full resolution levels for EDBSP, while removing the current sketch domain from the subsets. This results in $3000$ full resolution sections for each domain, $1000$ for each defined subset of domains.
We evaluated the generator and generated levels by computing:
\begin{itemize}
\item \textit{Domain Proportion}: the proportion of the generated level that was generated using a given domain. This is computed as $\frac{\text{tiles from a domain in the level}}{\text{total tiles in the level}}$.
\item \textit{Element Distribution Similarity}: the distribution of common level elements in the generated level (i.e., empty space, solid objects, enemies, items, hazardous objects, and climbable objects). We compute the KL divergence~\cite{kullback1951information} between this distribution in the generated levels and the training levels.
\end{itemize}
\noindent The \textit{domain proportion} measure gives insight into the biases of our generator and representation. It can also help us understand which domains are structurally similar to one another and which contain more diverse structures. The \textit{element distribution similarity} measures if the generator is able to approximate a domain using examples from other domains. KL divergence has been used by others to guide level generators~\cite{lucas2019tile,volz2018evolving} and we use it here to measure relatedness between generated levels and the target domain.
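Both metrics admit short implementations. In this sketch the per-tile domain map and the example element distributions are illustrative stand-ins for the generator's actual bookkeeping.

```python
import numpy as np

def domain_proportion(domain_map, domain):
    """Fraction of a generated level's tiles whose details were filled
    in from a given domain; domain_map records each tile's source."""
    return float(np.mean(np.asarray(domain_map).ravel() == domain))

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two element distributions, e.g. frequencies
    of empty, solid, enemy, item, hazardous and climbable tiles."""
    p = np.asarray(p, float); q = np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

print(domain_proportion([["LR", "LR"], ["MM", "LR"]], "LR"))  # 0.75
print(round(kl_divergence([0.5, 0.5], [0.25, 0.75]), 3))      # 0.144
```

A KL divergence of zero indicates identical element distributions between generated and training levels.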
\begin{table*}[tbh]
\centering
\caption{Computed metrics for VAE generated level sections. $\dagger$ on the generated values indicates statistically significant differences between the generated sections and the training levels in terms of the corresponding metric (using the Wilcoxon test with $p\leq0.05$). Metric values for generated sections that are not significantly different from those for training levels are preferred since they indicate that the learned distribution is not significantly different than the distribution of the training domain. Similarly, the lower the E-distance, the closer the learned distribution is to the training distribution.}
\resizebox{\textwidth}{!}{
\begin{tabular}{c||cc|cc|cc|c}
& \multicolumn{2}{c|}{\textbf{Density}} & \multicolumn{2}{c|}{\textbf{Non--Linearity}} &\multicolumn{2}{c|}{\textbf{Plagiarism}} & \textbf{E--Distance} \\\hline
\textbf{Domain} & \textit{Training} & \textit{Generated} & \textit{Training} & \textit{Generated} & \textit{Training} & \textit{Generated} & --\\ \hline\hline
\textit{CV} & $18.71 \pm 11.02$ & $15.47 \pm 6.47$ & $3.55 \pm 3.16$ & $4.74 \pm 3.21^\dagger$ & $4.5 \pm 5.22$ & $4.32 \pm 3.34^\dagger$ & $1.63$ \\
\textit{LR} & $36.61\pm16.39$ & $28.44\pm9.43^\dagger$ & $5.64\pm5.26$ & $6.23\pm3.62$ & $0.52\pm2.76$ & $0.26\pm0.69^\dagger$ & $2.80$ \\
\textit{MM} & $41.25\pm14.74$ & $32.06\pm9.85^\dagger$ & $9.90\pm9.51$ & $16.53\pm10.73^\dagger$ & $2.18\pm4.41$ & $1.12\pm2.01^\dagger$ & $4.71$\\
\textit{NG} & $18.07\pm10.83$ & $14.48\pm5.38$ & $4.18\pm4.35$ & $3.77\pm2.49$ & $4.96\pm4.10$ & $5.44\pm2.85^\dagger$ & $0.71$ \\
\textit{KI} & $23.74\pm11.48$ & $15.69\pm5.41^\dagger$ & $12.28\pm11.77$ & $23.19\pm10.98^\dagger$ & $2.02\pm3.67$ & $1.61\pm1.76^\dagger$ & $9.72$ \\
\textit{MT} & $43.36\pm13.07$ & $34.63\pm10.63^\dagger$ & $7.26\pm8.65$ & $14.74\pm10.49^\dagger$ & $1.67\pm3.56$ & $0.47\pm0.97$ & $9.03$ \\
\textit{SM} & $10.81\pm4.74$ & $7.17\pm2.52^\dagger$ & $4.32\pm3.96$ & $2.59\pm2.96^\dagger$ & $15.16\pm5.45$ & $15.79\pm5.08^\dagger$ & $1.93$ \\
\end{tabular}}
\label{tab:vae}
\end{table*}
\section{Results and Discussion}
\begin{table}[tbh]
\centering
\caption{E--distance between CVAE-generated sketches and VAE-generated sketches from the corresponding domain and between 100 random sketches from the corresponding training domain. E-distances between the respective VAEs and training domains are also given for comparison.}
\resizebox{\columnwidth}{!}{
\begin{tabular}{c||c|c|c}
\textbf{Domain} & \textbf{CVAE vs VAE} & \textbf{CVAE vs Train} & \textbf{VAE vs Train}\\ \hline \hline
\textit{CV} & $7.97$ & $7.81$ & $1.56$\\
\textit{LR} & $0.99$ & $3.91$ & $2.86$\\
\textit{MM} & $4.61$ & $10.14$ & $5.87$\\
\textit{NG} & $7.11$ & $3.91$ & $0.79$\\
\textit{KI} & $14.44$ & $2.22$ & $11.38$\\
\textit{MT} & $2.78$ & $14.49$ & $7.13$\\
\textit{SM} & $10.36$ & $4.39$ & $2.41$\\
\end{tabular}}
\label{tab:cvae}
\end{table}
\subsection{Sketch Generation using VAEs}\label{sec:VAE-GS}
Table \ref{tab:vae} depicts the results of our evaluations of the sketch sections generated using the VAEs. The results suggest that the VAE performs best in learning the distribution of \textit{NG}, as exhibited by its having the lowest \textit{E-distance}, followed by \textit{CV}, \textit{SM} and \textit{LR}, with the models for \textit{MM} and especially \textit{MT} and \textit{KI} performing worse with respect to these metrics. Generated sketch sections for \textit{NG} were the only ones to not be significantly different from the training set in terms of both \textit{Density} and \textit{Non--Linearity}; those for \textit{CV} and \textit{LR} were significantly different in terms of one of these, while those for the more E-distant \textit{MM}, \textit{MT} and \textit{KI} differed in terms of both. The outlier here is \textit{SM}, which has the third lowest \textit{E-distance} but is significantly different in terms of both metrics. One possible explanation is that while the sections have similar mean values for the metrics, the individual values for the metrics on the generated and training sections may be very different from one another.
Overall, the VAEs seem to do better in domains with less dense level structures such as \textit{SM}, \textit{CV} and \textit{NG} as opposed to those with higher density like \textit{MM}, \textit{MT} and \textit{KI}. This makes sense as sparser domains require the model to learn less complex structural elements. Note that we used the same architecture for each domain, so it is likely that the denser domains could have been better learned using more complex models. In a similar vein, domains with more uneven in-segment topology (i.e., highly non-linear segments) are more difficult to learn than those with more linear segments. Since we trained our generators using fixed-size segments rather than whole levels, global level structure did not impact how well the generators were able to learn the input distribution. \textit{CV}, \textit{MT}, \textit{MM} and \textit{NG} progress both horizontally and vertically, while \textit{SM} and \textit{KI} progress only horizontally and only vertically, respectively, but differences in VAE performance were not detected along these lines. Rather, as our results show, it is the more local segment-based properties in the training sketches that influence the quality of generated sketches. To better depict the capabilities of the generators, as per the recommendations of \cite{summerville2018expanding}, for each domain, we show pairs of training and generated segments that were nearest and furthest with respect to each metric in Figure \ref{fig:VAEcomp} in the Appendix.
\begin{figure*}
\centering
\begin{tabular}{ccccccc}
\includegraphics[width=.11\textwidth]{CV_z.png}&
\includegraphics[width=.11\textwidth]{LR_z.png}&
\includegraphics[width=.11\textwidth]{MM_z.png}&
\includegraphics[width=.11\textwidth]{NG_z.png}&
\includegraphics[width=.11\textwidth]{KI_z.png}&
\includegraphics[width=.11\textwidth]{MT_z.png}&
\includegraphics[width=.11\textwidth]{SM_z.png}\\
(a) \textit{CV} & (b) \textit{LR} &(c) \textit{MM} &(d) \textit{NG} &(e) \textit{KI} &(f) \textit{MT} &(g) \textit{SM}\\
\end{tabular}
\caption{Sketch sections generated by the CVAE using the same input vector but different domains as the conditioning input.}
\label{fig:CVAE}
\end{figure*}
As stated previously, our conditional sketch generation efforts did not produce strong results, with most metrics differing from the input domains with statistical significance. Table \ref{tab:cvae} shows E-distances between 100 CVAE-generated sketches vs. 100 sketches generated using the VAE of that corresponding domain and vs. 100 sketches sampled randomly from the training set. All distances are higher than those between the VAE-generated sketches and the training domain, with the exception of \textit{KI}, which proved to have the lowest E-distance for the CVAE case while having the highest for the VAE. However, seeing how the CVAE does not do well in other domains, we regard this as circumstantial and leave further investigation of the CVAE to future work. As exemplars of what is possible using this approach, Figure \ref{fig:CVAE} shows segments generated using the same random vector conditioned on different domains.
\begin{table*}[tbh]
\caption{Distribution of domain proportions in full resolution levels generated from existing sketches}
\begin{center}
\resizebox{\textwidth}{!}{
\begin{tabular}{c||ccccccc}
& \multicolumn{7}{c}{\textbf{ALL}}\\ \hline\hline
\textbf{Input} & \textit{CV} & \textit{LR} &\textit{MM} & \textit{NG} & \textit{KI} & \textit{MT} & \textit{SM} \\ \hline
\textit{CV} & -- & $\mathbf{0.263 \pm 0.066}$ & $0.148 \pm 0.025$ & $0.163 \pm 0.028$ & $0.110 \pm 0.026$ & $0.150 \pm 0.030$ & $0.166 \pm 0.035$ \\
\textit{LR} & $0.156 \pm 0.102$ & -- & $\mathbf{0.270 \pm 0.120}$ & $0.154 \pm 0.093$ & $0.109 \pm 0.079$ & $0.164 \pm 0.109$ & $0.146 \pm 0.138$ \\
\textit{MM} & $0.157 \pm 0.025$ & $\mathbf{0.323 \pm 0.063}$ & -- & $0.176 \pm 0.020$ & $0.146 \pm 0.028$ & $0.049 \pm 0.105$ & $0.148 \pm 0.020$ \\
\textit{NG} & $0.150 \pm 0.064$ & $\mathbf{0.358 \pm 0.140}$ & $0.134 \pm 0.056$ & -- & $0.078 \pm 0.043$ & $0.134 \pm 0.053$ & $0.146 \pm 0.062$ \\
\textit{KI} & $0.113 \pm 0.034$ & $\mathbf{0.394 \pm 0.128}$ & $0.141 \pm 0.029$ & $0.119 \pm 0.032$ & -- & $0.127 \pm 0.027$ & $0.106 \pm 0.036$ \\
\textit{MT} & $0.123\pm 0.005$ & $0.254 \pm 0.008$ & $\mathbf{0.262 \pm 0.007}$ & $0.135 \pm 0.006$ & $0.125 \pm 0.006$ & -- & $0.102 \pm 0.005$ \\
\textit{SM} & $0.171 \pm 0.036$ & $\mathbf{0.287 \pm 0.109}$ & $0.151 \pm 0.036$ & $0.172 \pm 0.047$ & $0.096 \pm 0.044$ & $0.123 \pm 0.033$ & -- \\ \hline
& \multicolumn{4}{c||}{\textbf{WC}} &\multicolumn{3}{c}{$\neg$\textbf{WC}}\\ \hline\hline
\textbf{Input} & \textit{CV} & \textit{LR} &\textit{MM} &\multicolumn{1}{c||}{\textit{NG}} & \textit{KI} & \textit{MT} & \textit{SM} \\ \hline
\textit{CV} & -- & $\mathbf{0.426 \pm 0.069}$ & $0.273 \pm 0.033$ &\multicolumn{1}{c||}{$0.301 \pm 0.052$} & $0.293 \pm 0.042$ & $0.348 \pm 0.05$ & $\mathbf{0.359 \pm 0.057}$ \\
\textit{LR} & $0.276 \pm 0.144$ & -- & $\mathbf{0.463 \pm 0.172}$ &\multicolumn{1}{c||}{$0.260 \pm 0.119$} & $0.311 \pm 0.139$ & $\mathbf{0.367 \pm 0.168}$ & $0.323 \pm 0.189$ \\
\textit{MM} & $0.249 \pm 0.025$ & $\mathbf{0.468 \pm 0.037}$ & -- &\multicolumn{1}{c||}{$0.283 \pm 0.024$} & $0.263 \pm 0.023$ & $\mathbf{0.465 \pm 0.028}$ & $0.272 \pm 0.026$ \\
\textit{NG} & $0.262 \pm 0.073$ & $\mathbf{0.496 \pm 0.123}$ & $0.242 \pm 0.088$ &\multicolumn{1}{c||}{--} & $0.263 \pm 0.068$ & $0.354 \pm 0.077$ & $\mathbf{0.383 \pm 0.083}$ \\ \hline
\textit{KI} & $0.151 \pm 0.053$ & $\mathbf{0.498 \pm 0.132}$ & $0.192 \pm 0.046$ &\multicolumn{1}{c||}{$0.160 \pm 0.045$} & -- & $\mathbf{0.569 \pm 0.050}$ & $0.431 \pm 0.050$ \\
\textit{MT} & $0.169 \pm 0.006$ & $0.321 \pm 0.009$ & $\mathbf{0.327 \pm 0.007}$ &\multicolumn{1}{c||}{$0.183 \pm 0.006$} & $\mathbf{0.634 \pm 0.008}$ & -- & $0.366 \pm 0.008$ \\
\textit{SM} & $0.227 \pm 0.046$ & $\mathbf{0.346 \pm 0.088}$ & $0.193 \pm 0.032$ &\multicolumn{1}{c||}{$0.234 \pm 0.050$} & $0.435 \pm 0.093$ & $\mathbf{0.565 \pm 0.093}$ & -- \\
\end{tabular}}
\end{center}
\label{tbl:dstr}
\end{table*}
\begin{table*}[tbh]
\caption{Distribution of domain proportions in full resolution levels generated from generated sketches}
\begin{center}
\resizebox{\textwidth}{!}{
\begin{tabular}{c||ccccccc}
& \multicolumn{7}{c}{\textbf{ALL}}\\ \hline\hline
\textbf{Input} & \textit{CV} & \textit{LR} &\textit{MM} & \textit{NG} & \textit{KI} & \textit{MT} & \textit{SM} \\ \hline
\textit{CV} & -- & $\mathbf{0.652\pm0.235}$ & $0.105\pm0.143$ & $0.080\pm0.129$ & $0.032\pm0.678$ & $0.516\pm0.101$ & $0.080\pm0.130$ \\
\textit{LR} & $0.148\pm0.143$ & -- & $\mathbf{0.280\pm0.191}$ & $0.192\pm0.169$ & $0.165\pm0.152$ & $0.086\pm0.109$ & $0.128\pm0.163$\\
\textit{MM} & $0.053\pm0.078$ & $\mathbf{0.607\pm0.182}$ & -- & $0.087\pm0.100$ & $0.049\pm0.068$ & $0.133\pm0.0115$ & $0.070\pm0.098$\\
\textit{NG} & $0.077\pm0.121$ & $\mathbf{0.598\pm0.248}$ & $0.112\pm0.170$ & -- &$0.029\pm0.065$ & $0.051\pm0.101$ & $0.133\pm0.173$ \\
\textit{KI} & $0.033\pm$ & $\mathbf{0.814\pm0.122}$ & $0.055\pm0.070$ & $0.041\pm0.062$ & -- & $0.020\pm0.041$ & $0.037\pm0.060$\\
\textit{MT} & $0.039\pm0.058$ & $\mathbf{0.646\pm0.160}$ & $0.183\pm0.132$ & $0.057\pm0.074$ & $0.047\pm0.059$ & -- & $0.028\pm0.051$\\
\textit{SM} & $0.147\pm0.185$ & $\mathbf{0.485\pm0.281}$ & $0.116\pm0.179$ & $0.128\pm0.174$ & $0.021\pm0.052$ & $0.102\pm0.172$ & --\\\hline
& \multicolumn{4}{c||}{\textbf{WC}} &\multicolumn{3}{c}{$\neg$\textbf{WC}}\\ \hline\hline
\textbf{Input} & \textit{CV} & \textit{LR} &\textit{MM} &\multicolumn{1}{c||}{\textit{NG}} & \textit{KI} & \textit{MT} & \textit{SM} \\ \hline
\textit{CV} & -- & $\mathbf{0.719\pm0.221}$ & $0.151\pm0.170$ & \multicolumn{1}{c||}{$0.120\pm0.161$} & $0.306\pm0.179$ & $0.270\pm0.197$ & $\mathbf{0.423\pm0.223}$\\
\textit{LR} & $0.246\pm0.175$ & -- & $\mathbf{0.436\pm0.206}$ & \multicolumn{1}{c||}{$0.318\pm0.199$} & $\mathbf{0.443\pm0.197}$ & $0.254\pm0.168$ & $0.303\pm0.209$ \\
\textit{MM} & $0.098\pm0.103$ & $\mathbf{0.753\pm0.153}$ & -- & \multicolumn{1}{c||}{$0.149\pm0.124$} & $0.316\pm0.143$ & $\mathbf{0.430\pm0.162}$ & $0.254\pm0.163$\\
\textit{NG} & $0.134\pm0.166$ & $\mathbf{0.692\pm0.234}$ & $0.173\pm0.188$ & \multicolumn{1}{c||}{--} & $0.266\pm0.176$ & $0.251\pm0.182$ & $\mathbf{0.483\pm0.221}$\\\hline
\textit{KI} & $0.042\pm0.063$ & $\mathbf{0.839\pm0.114}$ & $0.068\pm0.075$ & \multicolumn{1}{c||}{$0.052\pm0.065$} & -- & $0.480\pm0.138$ & $\mathbf{0.520\pm0.138}$\\
\textit{MT} & $0.051\pm0.069$ & $\mathbf{0.688\pm0.149}$ & $0.195\pm0.127$ & \multicolumn{1}{c||}{$0.067\pm0.075$} & $\mathbf{0.689\pm0.131}$ & -- & $0.312\pm0.131$\\
\textit{SM} & $0.161\pm0.187$ & $\mathbf{0.532\pm0.274}$ & $0.149\pm0.194$ & \multicolumn{1}{c||}{$0.158\pm0.190$} & $0.261\pm0.189$ & $\mathbf{0.739\pm0.189}$ & -- \\
\end{tabular}}
\end{center}
\label{tbl:dstrGen}
\end{table*}
\subsection{EDBSP with Training Sketches}\label{sec:EDBSP-TS}
Table \ref{tbl:dstr} shows the results of the \textit{domain proportion} for each domain, across sets of levels generated with existing sketches. What is immediately apparent is that \textit{Lode Runner} (\textit{LR}) dominates many of the generated levels when it is included in the example set, particularly in the \textit{WC} set where there are fewer example domains. This is likely because \textit{LR} levels have a large proportion of wildcard tiles in their sketches as compared to the other domains ($12\%$ of tiles in \textit{LR} while the next highest is $\sim3\%$ of tiles in \textit{Mega Man} (\textit{MM})). Due to how EDBSP performs pattern matching with the wildcard tiles, this causes many more viable matches for \textit{LR} than for other domains, and an inflation of the prominence of \textit{LR} in the generated levels. An example of this is shown in a generated \textit{KI} level in Figure \ref{fig:KIKL} (right).
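The inflating effect of wildcards can be illustrated with a toy matcher, a hedged simplification of EDBSP's actual pattern matching:

```python
def matches(sketch_region, example_region, wildcard='*'):
    """A sketch region matches an example region if every non-wildcard
    sketch tile agrees with the example; wildcards match anything, so
    wildcard-heavy sketches (as in LR) admit many more matches."""
    return all(s == wildcard or s == e
               for s_row, e_row in zip(sketch_region, example_region)
               for s, e in zip(s_row, e_row))

print(matches(["X*", "*-"], ["XX", "X-"]))  # True
print(matches(["X-"], ["--"]))              # False
```

Every wildcard tile removes a constraint, so regions with many wildcards match a larger share of candidate examples.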
The only generated set which uses \textit{LR}, but does not have it as the most common domain is \textit{Metroid} (\textit{MT}), where \textit{MM} is the most common. This may be due to the similarity in the structural layouts of \textit{MM} and \textit{MT} levels (i.e., both domains' levels consist of large sections of horizontal and vertical traversals with smaller obstacles mixed in). Additionally, as mentioned above, \textit{MM} has the second highest proportion of wildcard tiles. However, this relationship is not reciprocal. When using \textit{ALL} domains, \textit{MT} is the least frequent domain in the \textit{MM} levels. This shows that the wildcard tiles are important when finding matching examples in the training data, but when present in the input sketch they lead to more matches in all domains. When generating with the $\neg$\textit{WC} example set, we see that \textit{MT} is typically the most prevalent (or near to the most prevalent) domain, displaying the structural diversity of the domain. Lastly, while the generated blended levels in Figures \ref{fig:CVKL} and \ref{fig:KIKL} may not be playable using rules from the input sketch domain, we are not expecting the final levels to replicate a single domain and so playability within that domain is not required. Further, recall that the end goal is a mixed-initiative tool where the user could be controlling for final quality of the blended levels.
\begin{table}[tbh]
\centering
\caption{KL divergence between the training levels and levels generated from existing sketches using the distribution of game elements in the levels.}
\resizebox{\columnwidth}{!}{
\begin{tabular}{c||c|ccc}
\textbf{Domain} & \textbf{Uniform} & \textbf{ALL} & \textbf{WC} & $\neg$ \textbf{WC}\\ \hline \hline
\textit{CV} & $1.098$ & $\mathbf{0.062}$ & $0.089$ & $0.136$\\
\textit{LR} & $0.790$ & $0.280$ & $\mathbf{0.252}$ & $1.027$\\
\textit{MM} & $0.930$ & $\mathbf{0.067}$ & $0.163$ & $0.276$\\
\textit{NG} & $1.192$ & $\mathbf{0.087}$ & $0.119$ & $\mathbf{0.087}$\\\hline
\textit{KI} & $1.195$ & $0.178$ & $0.208$ & $\mathbf{0.040}$\\
\textit{MT} & $0.873$ & $0.098$ & $0.111$ & $\mathbf{0.081}$\\
\textit{SM} & $1.374$ & $0.088$ & $0.105$ & $\mathbf{0.045}$\\
\end{tabular}}
\label{tab:tileKLdiv}
\end{table}
\begin{table}[tbh]
\centering
\caption{KL divergence between the training levels and levels generated from VAE-generated sketch sections using the distribution of game elements in the levels.}
\resizebox{\columnwidth}{!}{
\begin{tabular}{c||c|ccc}
\textbf{Domain} & \textbf{Uniform} & \textbf{ALL} & \textbf{WC} & $\neg$ \textbf{WC}\\ \hline \hline
\textit{CV} & $1.098$ & $0.248$ & $0.264$ & $\mathbf{0.124}$\\
\textit{LR} & $0.790$ & $0.242$ & $\mathbf{0.222}$ & $1.151$\\
\textit{MM} & $0.930$ & $\mathbf{0.196}$ & $0.345$ & $0.260$\\
\textit{NG} & $1.192$ & $0.219$ & $0.254$ & $\mathbf{0.075}$\\\hline
\textit{KI} & $1.195$ & $0.567$ & $0.581$ & $\mathbf{0.061}$\\
\textit{MT} & $0.873$ & $0.415$ & $0.434$ & $\mathbf{0.146}$\\
\textit{SM} & $1.374$ & $0.170$ & $0.182$ & $\mathbf{0.045}$\\
\end{tabular}}
\label{tab:tileKLdivGen}
\end{table}
\subsection{Training Sketches vs Generated Sketches}\label{sec:EDBSP-GS}
Table \ref{tab:tileKLdiv} shows the KL divergence between the \textit{element distributions} of the training levels, generated levels, and a uniform distribution. Here we can see the impact the choice of training data has on generating full resolution levels. Specifically, the last three rows containing the $\neg$\textit{WC} domains show that KL divergence is lowest when using the associated $\neg$\textit{WC} training set. Alternatively, in the first four rows, the \textit{WC} domains tend to have the lowest KL divergence with the levels generated using all the training data. This result shows that generally, the \textit{WC} domain sketches benefit from a variety of training data with different properties, while the $\neg$\textit{WC} domain sketches are best filled with details from more similar domains. The outlier in this table is again \textit{LR}, which has a higher KL divergence across all generated sets than any other domain, and when using the $\neg$\textit{WC} training domains, has a higher KL divergence than when compared with the uniform distribution of elements. This is due to the high frequency of special structures in \textit{LR}.
Table \ref{tbl:dstrGen} shows the distribution of the domains in the full resolution sections generated using VAE-generated sketch sections. In this table we can see the same trends as when generating with existing sketches, but more exaggerated. Specifically, we see that \textit{LR} dominates the generated sections to a higher degree. This likely results from the interaction between the size of the sections generated, and the way the partitioning algorithm divides the regions. EDBSP splits sections using the minimum dimension of the input as the maximum size of a region. In smaller areas, this can result in the large portions of the section being assigned one domain, which is likely to be assigned to \textit{LR} given its large number of wildcards.
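The partitioning constraint just described can be approximated with a simplified stand-in; the actual EDBSP split policy may differ, and the random cut points here are illustrative.

```python
import random

def bsp_partition(h, w, max_dim, seed=0):
    """Binary-space-partition an h x w area into rectangular regions
    whose sides are at most max_dim, mirroring EDBSP's use of the
    input's minimum dimension as the maximum region size."""
    rng = random.Random(seed)
    pending, regions = [(0, 0, h, w)], []
    while pending:
        r0, c0, rh, rw = pending.pop()
        if rh <= max_dim and rw <= max_dim:
            regions.append((r0, c0, rh, rw))
        elif rh >= rw:                        # split along the taller axis
            cut = rng.randint(1, rh - 1)
            pending += [(r0, c0, cut, rw), (r0 + cut, c0, rh - cut, rw)]
        else:
            cut = rng.randint(1, rw - 1)
            pending += [(r0, c0, rh, cut), (r0, c0 + cut, rh, rw - cut)]
    return regions

regions = bsp_partition(30, 20, 10)
print(sum(rh * rw for _, _, rh, rw in regions) == 30 * 20)  # True
```

In small input sections, a single region can cover most of the area, so whichever domain it is assigned (often \textit{LR}, given its wildcards) dominates the output.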
Table \ref{tab:tileKLdivGen} shows the KL divergence between \textit{element distributions} in the training levels, the levels generated from VAE-generated sketch sections, and a uniform distribution. This table reflects the disproportionate representation of \textit{LR} in the generated sections. Notably, the KL divergence has increased by a large proportion in the \textit{ALL} and \textit{WC} generation sets, with much less variation in the $\neg$\textit{WC} generated sets. Additionally, the lowest KL divergences are different from those in Table \ref{tab:tileKLdiv} for \textit{CV} and \textit{NG}, the \textit{WC} domains with lower wildcard proportions.
The results all point towards the importance of the choice of domains when blending. If approximating a specific domain or style of level is desired, then the domains with levels similar to the desired style should be chosen. For example, approximating \textit{KI} using \textit{MT} and \textit{SM} leads to similar \textit{element distributions}. On the other hand, if replicating a specific domain or style is not the goal, but instead exploration of new potential domains, then mixing a variety of different domains and examples can result in levels that have vastly different properties from the input domains. For example, blending \textit{MT}, \textit{KI}, and \textit{SM} with a sketch from \textit{MM} results in levels with \textit{element distributions} very different from the sketch domain.
\section{Conclusions}
We presented a novel, hybrid PCGML approach that combines the use of Example-Driven Binary Space Partitioning and VAEs to generate and blend levels using multiple domains. Our results demonstrate that different level generation and blending style goals (integrity vs. novelty, for example) can be traded off using different choices of domains. We consider several avenues for future work.
The experiments revealed that the choice of training domain representation can have a large impact on the resulting generated levels when blending. One avenue we would like to explore is intelligent automatic grouping of training domains. For example, if we know a priori that a set of domains has similar structures and game element distributions vs a set of domains that has similar structures but very different element distributions, we can better leverage the training data to guide the generator towards the users' goals (e.g., novelty vs replication).
Similarly, future work could also explore different choices of abstractions. In this work, the solid/empty sketch resolution abstraction allowed us to blend domains based on structural similarities but other abstractions could be defined based on other affordances such as those given in the Video Game Affordances Corpus \cite{bentley2019videogame}. Abstractions based on such affordances could potentially enable blending across different genres that do not share the same structural patterns and properties.
Our conditional sketch generation results were not optimal and conditioning a combined model failed to approximate the distributions of individual domains. It is likely the architecture was not well suited to the problem but even so, the results depicted in Figure \ref{fig:CVAE} suggest that this may be a promising direction to pursue. Successfully training such models would eliminate reliance on separate models for each domain for sketch generation. We would also like to explore other established blending and style transfer approaches. For example, how would CycleGAN~\cite{zhu_unpaired_2017} or pix2pix~\cite{isola_image_2017} perform on tile-resolution data instead of pixel resolution?
Lastly, we are interested in developing this approach into a mixed-initiative tool for level design and blending by allowing users to select their input domains, and create sketches for the EDBSP algorithm to fill in. By leveraging VAEs to generate new sketches, we have shown that the EDBSP approach is able to handle unseen sketches well, and therefore user generated sketches should be usable by the algorithm. Furthermore, the inner workings of the EDBSP algorithm are straightforward and explainable; and we would like to perform a user study to determine if that explainability increases usability in a mixed-initiative setting.
\section*{Appendix}\label{sec:app}
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{CV_ALL_minKL.png}
\includegraphics[width=\columnwidth]{CV_notWC_maxKL.png}
\caption{The generated \textit{CV} level with the lowest KL divergence ($0.044$) in the \textit{ALL} generated set (above); and the generated \textit{CV} level with the highest KL--divergence ($0.152$) in the $\neg$\textit{WC} generated set (below). Both are cropped for space.}
\label{fig:CVKL}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=.2\columnwidth]{KI_notWC_minKL.png}
\includegraphics[width=.2\columnwidth]{KI_WC_maxKL.png}
\caption{The generated \textit{KI} level with the lowest KL divergence ($0.038$) in the $\neg$\textit{WC} generated set (left); and the generated \textit{KI} level with the highest KL--divergence ($0.378$) in the \textit{WC} generated set (right). Both are cropped for space.}
\label{fig:KIKL}
\end{figure}
\begin{figure*}[h]
\centering
\begin{tabular}{ccc}
\multicolumn{3}{c}{(a) Comparison of generated \textit{CV} sketch sections and training sketch sections}\\
\includegraphics[width=0.28\textwidth]{CV-D.png} & \includegraphics[width=0.28\textwidth]{CV-L.png} &
\includegraphics[width=0.28\textwidth]{CV-P.png}\\
\multicolumn{3}{c}{(b) Comparison of generated \textit{LR} sketch sections and training sketch sections}\\
\includegraphics[width=0.28\textwidth]{LR-D.png} & \includegraphics[width=0.28\textwidth]{LR-L.png} &
\includegraphics[width=0.28\textwidth]{LR-P.png}\\
\multicolumn{3}{c}{(c) Comparison of generated \textit{MM} sketch sections and training sketch sections}\\
\includegraphics[width=0.28\textwidth]{MM-D.png} & \includegraphics[width=0.28\textwidth]{MM-L.png} &
\includegraphics[width=0.28\textwidth]{MM-P.png}\\
\multicolumn{3}{c}{(d) Comparison of generated \textit{NG} sketch sections and training sketch sections}\\
\includegraphics[width=0.28\textwidth]{NG-D.png} & \includegraphics[width=0.28\textwidth]{NG-L.png} &
\includegraphics[width=0.28\textwidth]{NG-P.png}\\
\end{tabular}
\end{figure*}
\begin{figure*}[h]
\centering
\begin{tabular}{ccc}
\multicolumn{3}{c}{(e) Comparison of generated \textit{KI} sketch sections and training sketch sections}\\
\includegraphics[width=0.28\textwidth]{KI-D.png} & \includegraphics[width=0.28\textwidth]{KI-L.png} &
\includegraphics[width=0.28\textwidth]{KI-P.png}\\
\multicolumn{3}{c}{(f) Comparison of generated \textit{MT} sketch sections and training sketch sections}\\
\includegraphics[width=0.28\textwidth]{MT-D.png} & \includegraphics[width=0.28\textwidth]{MT-L.png} &
\includegraphics[width=0.28\textwidth]{MT-P.png}\\
\multicolumn{3}{c}{(g) Comparison of generated \textit{SM} sketch sections and training sketch sections}\\
\includegraphics[width=0.28\textwidth]{SM-D.png} & \includegraphics[width=0.28\textwidth]{SM-L.png} &
\includegraphics[width=0.28\textwidth]{SM-P.png}\\
\end{tabular}
\caption{VAE-generated sketch sections for each domain, compared with their nearest and furthest counterparts in the training levels according to the evaluation metrics.}
\label{fig:VAEcomp}
\end{figure*}
\clearpage
\newpage
\bibliographystyle{acm}
\section*{Acknowledgments} \hspace{5mm}This work was
supported in part by the National Natural Science Foundation of
China under Grants No.11275088 and No.11545012, and by the Natural Science Foundation of the Liaoning Scientific Committee
(No. 2014020151).
\vspace{5mm}
\section{Introduction}\label{sec:intro}
\setcounter{equation}{0}
Despite the initial scepticism expressed by Einstein, Podolsky and Rosen (EPR)~\cite{Einstein:1935rr} concerning the completeness of Quantum Mechanics (QM), the non-local nature of the quantum-mechanical wavefunction has by now been vindicated in a vast number of experiments through the violation of the famous Bell inequalities~\cite{Bell:1964kc}. An astounding physical consequence of the property of non-locality in QM is the emergence of so-called quantum entanglement between quantum states, which has become a primary engineering principle in many applications of modern quantum theory, including quantum information, quantum technology and particle physics~\cite{Feynman:1981tf,Alonso:2022oot}.
With the advent of the more complete Quantum Field Theory (QFT), physical observables associated with scattering processes are encoded in the so-called S-matrix~\cite{Lehmann:1954rq, Eden:1966dnq}. The development of a unitary S-matrix theory allowed us to make accurate predictions for (differential) cross sections of $2\to n$ processes in momentum space, which have been tested with great success in collider experiments (for a review, see~\cite{Workman:2022ynf}). However, in its standard formulation, the S-matrix provides no information on the space-time dependence encoded in the non-local form of the Feynman propagators, thus limiting considerably its degree of applicability. For instance, the production and decay of long-lived particles, like those that occur in $K$-, $B$- and $D$-meson systems, would necessitate the knowledge of the production and detection vertices, along with the momenta of the particles in both the initial and final state of such processes. Likewise, describing the observed phenomenon of neutrino oscillations in space within the framework of QFT~\cite{Giunti:1993se,Grimus:1996av,Campagne:1997fu,Kiers:1997pe,Ioannisian:1998ch,Cardall:1999bz,Beuthe:2001rc,Kopp:2009fa,Akhmedov:2009rb,Akhmedov:2010ms,Akhmedov:2010ua,Naumov:2013bea,Naumov:2020yyv,Grimus:2019hlq,Cheng:2022lys,Naumov:2022kwz} would require the development of a non-local S-matrix theory. In addition, such a non-local S-matrix theory might be useful to regulate $t$-channel kinematic singularities of tree-level transition amplitudes~\cite{Peierls:1961zz,Coleman:1965xm,Nowakowski:1993iu,Ginzburg:1995bc} that appear in the physical region of the phase space, without appealing, for example, to statistical uncertainties of the particle momenta in the colliding muon beams~\cite{Kotkin:1992bj,Melnikov:1996na,Melnikov:1996iu,Grzadkowski:2021kgi}. 
Other applications of a non-local S-matrix theory may include new-physics searches for displaced vertices during the hadronization process at high-energy colliders like LHC~\cite{LHCb:2014osd,Bondarenko:2019tss}, which may lead to an improved interpretation of the experimental~data.
In this paper we aim to formulate a non-local S-matrix theory in which localisation effects in particle interactions are consistently taken into account in scattering processes. Such a non-local extension of the S-matrix receives its standard form in the well defined limit in which the spread of localisation of all interaction vertices is taken to be infinite. To illustrate the key features of the non-local S-matrix formalism, we will consider a $2\to 2$ scattering process within a solvable QFT model in which the production and detection vertices are assumed to be not sharply localised in space, but they have a spatial spread of Gaussian form.
The aforementioned solvable QFT model was previously introduced in~\cite{Ioannisian:1998ch}, where several basic properties of propagation and oscillation of neutrinos were analysed in two physical regions that depend on the distance $|\vecl|\xspace$ of the detector from the source. Here, we further consolidate these earlier findings by borrowing a terminology known from light diffraction in classical optics. Exactly as~in diffractive optics, depending on $|\vecl|\xspace$, we have two regions which we call the near-field and far-field zones, or the Fresnel and Fraunhofer regions. These two regions depend on the spatial localisation spread of the production or detection vertices, which we generically denote as $\delta \ell\xspace$, and on the magnitude~$|\vecp|\xspace$ of the net three-momentum ${\bf p}$ of all particles in the initial or final state. Hence, the Fresnel (near-field) zone typically refers to distances $|\vecl|\xspace$ in the interval ${0 \le |\vecl|\xspace \lesssim |\vecp|\xspace\,\delta \ell\xspace^2}$, whilst the Fraunhofer (far-field) regime fully sets in when~${|\vecl|\xspace \gg |\vecp|\xspace\, \delta \ell\xspace^2}$.
In this article we also study in more detail all emerging quantum phenomena that result from our non-local S-matrix formalism in the context of the solvable QFT model presented in~\cite{Ioannisian:1998ch}. In particular, we re-examine the question of whether mediators of a particle-mixing system like neutrinos produce an oscillating pattern when they are detected in the Fresnel region. As well as confirming certain earlier results~\cite{Ioannisian:1998ch} concerning the analytic behaviour of the S-matrix amplitude in the forward Fresnel and Fraunhofer zones, we find several novel features with respect to its angular dependence which have not been discussed in adequate detail before in the literature. Most remarkably, we obtain a ``quantum obliquity factor'' in the transition amplitude that suppresses the propagation of the mediator in the backward direction, when the latter satisfies the on-mass-shell (OS) condition. This suppression is achieved without imposing the restrictions of the Huygens--Fresnel principle, but is rather a consequence of the inherent boundary conditions that the Feynman propagator obeys. Thus, an alternative quantum field-theoretic explanation can be obtained for the origin of the obliquity factor in diffractive optics~\cite{Optics_1999}.
The non-local S-matrix theory that we will be developing here could be utilised at high-energy colliders to describe hadronization in a framework consistent with quantum mechanics, beyond the so-called Lund model~\cite{Andersson:1983ia,Ferreres-Sole:2018vgo}. Likewise, short and long baseline neutrino experiments would benefit from the development of a non-local S-matrix theory that will provide a more accurate interpretation of the low-energy neutrino oscillation data.
The paper is organised as follows. After this introductory section, we briefly review in Section~\ref{sec:non_local_intro} the basic results emanating from the conventional S-matrix theory by considering a $2\to2$ process in a simple scalar field theory. We then discuss a non-local extension of this standard S-matrix theory, and present exact analytic results within a solvable QFT model. In~close analogy with diffractive optics, we present in Section~\ref{sec:approx} approximate analytic expressions of the S-matrix amplitude in the near- and far-field zones as functions of the distance $|\vecl|\xspace$ of the detector from the source.
In Section~\ref{sec:numerics} we give exact results by analysing numerical examples, which confirm explicitly the validity of the Fresnel and Fraunhofer approximations discussed in the previous section. In addition, we show the complete angular dependence of the transition amplitude in the polar coordinates $(|\vecl|\xspace,\theta)$, where $\theta$ (with $0\le \theta \le \pi$) is the angle
between the distance vector~$\pmb{\ell}\xspace$ and the total three-momentum ${\bf p}$ of the colliding particles in the initial state of the scattering process. Finally, Section~\ref{sec:summary} provides a succinct summary of our results and discusses possible future research directions. Technical details of the calculation of the non-local S-matrix amplitude within a solvable QFT model are given in Appendices~\ref{app:amplitude}, \ref{app:angular} and~\ref{app:radial}.
\section{Non-Locality in QFT}\label{sec:non_local_intro}
\setcounter{equation}{0}
In a local QFT, the notion of non-locality enters through the Feynman propagator, which we denote here as~$\Delta_{\rm F} (x,y)$. A remarkable property of the Feynman propagator is that it has non-zero support for two space-time points, $x$ and $y$, which lie at space-like separations, i.e.~$\Delta_{\rm F} (x,y) \neq 0$, for $(x-y)^2 < 0$. In fact, this property encodes the counter-intuitive non-local phenomenon of quantum entanglement in QM, which Einstein, in a letter to Max Born in 1947, famously called ``spooky action at a distance''. But exactly as happens with quantum entanglement in QM, no true information between any two space-like separated points,~$x$ and~$y$, can be transferred faster than the speed of light, and as such, QFT respects causality~\cite{Maiani:1994zi,Eden:1966dnq,Donoghue:2020mdd}.
In the standard S-matrix theory emerging from QFT~\cite{Eden:1966dnq,Pokorski:1987ed,Weinberg:1995mt,Peskin:1995ev,Srednicki:2007qs,Schwartz:2014sze}, the transition amplitudes resulting from the so-called Lehmann--Symanzik--Zimmermann (LSZ) formalism~\cite{Lehmann:1954rq} do not depend on space-time, but only on the four-momenta of all particles in the initial and final state of a scattering process. Hence, any information concerning the locality of the interactions in a $2\to n$ process is lost after integration over an infinite space-time volume. However, for reasons mentioned in the introduction, if these interactions are restricted locally within a space-time volume of finite size, the resulting S-matrix will become non-local. It will depend on the space-time coordinates and other quantities that parameterise the spread of localisation due to coherent QM uncertainties at the interaction vertices. It is the formulation of such a non-local S-matrix theory that we wish to put forward in this paper$\,$\footnote{For other attempts along this research direction, see~\cite{Giunti:1993se,Grimus:1996av,Campagne:1997fu,Kiers:1997pe,Cardall:1999bz,Beuthe:2001rc,Kopp:2009fa,Akhmedov:2009rb,Akhmedov:2010ms,Akhmedov:2010ua,Naumov:2013bea,Naumov:2020yyv,Grimus:2019hlq,Cheng:2022lys,Naumov:2022kwz,Nowakowski:1993iu,Ginzburg:1995bc,Kotkin:1992bj,Melnikov:1996iu,Melnikov:1996na,Grzadkowski:2021kgi}.}.
In the remainder of the section, we consider a $2\to 2$ scattering process in a local QFT model with scalar fields. We first recall the simple derivation of the ordinary S-matrix element for such a process in the Born approximation. We then turn our attention to the non-local modifications introduced in this S-matrix element, when the interaction vertices occur in a confined region of space-time. Finally, we revisit the analytic results obtained in the solvable QFT model of~\cite{Ioannisian:1998ch}.
\subsection{Standard S-Matrix Theory}
Let us consider a simple scalar field theory consisting of five real scalar fields: $S_{1,2}$, $\chi_{1,2}$, and~$\Phi$.
The interactions of this QFT model are governed by the local Lagrangian,
\begin{eqnarray}
{\cal L}_{\rm int} (x)\, =\, \lambda \ S_1(x) \chi_1(x)\, \Phi(x)\, +\, g \ S_2(x) \chi_2(x)\, \Phi(x) \;,
\label{eq:Lint_local}
\end{eqnarray}
where $\lambda$ and $g$ are two real couplings. In the Born approximation of this local QFT model, any scattering between $S_{1,2}$ and $\chi_{1,2}$ will be mediated by the exchange of a particle $\Phi$ involving a single Feynman diagram.
\begin{figure}[t!]
\centering\includegraphics[width=0.6\textwidth ]{local_FeynD.pdf}
\caption{The Feynman diagram for the process $S_1(p_1) \ \chi_1(p_2) \to S_2(k_1) \ \chi_2(k_2)$.}
\label{fig:local_FeynD}
\end{figure}
To set the stage for our formalism, let us for definiteness consider the scattering process: $S_1(p_1) \ \chi_1(p_2) \to S_2(k_1) \ \chi_2(k_2)$. At the tree level, this $2\to 2$ process may be represented by the $s$-channel Feynman diagram shown in Figure~\ref{fig:local_FeynD}. For later convenience, we define the total four-momenta, $p = p_1 + p_2$ and $k = k_1 + k_2$, of all particles in the initial and final state, respectively.
Applying the LSZ formalism~\cite{Lehmann:1954rq}, the transition amplitude $T_{\rm F}$, for the aforementioned process, may be evaluated as
\begin{align}
T_{\rm F}\ =&\, -\lambda \, g \,\displaystyle\int \xspace \dfrac{d^4 q}{(2\pi)^4} \ \dfrac{1}{q^2 - m_{\Phi}^2 + i \epsilon}
\displaystyle\int \xspace d^4 x \ d^4 y \ e^{-i (p - q ) \cdot x } e^{i (k - q) \cdot y}\nonumber\\
=&\ \ \lambda \, g \,
\dfrac{\lrb{2 \pi}^4}{|\vecp|\xspace^2 - \tilde{q}\xspace^2 - i \epsilon} \ \delta^{(4)}(p-k) \;,
\label{eq:amplitude_std}
\end{align}
with $\tilde{q}\xspace^2 = (p^0)^2 - m_{\Phi}^2$. Note that energy-momentum conservation, $p = k$, arises as a result of Lorentz invariance and the local nature of the interactions. If the particle $\Phi$ becomes OS in the $s$-channel of Figure~\ref{fig:local_FeynD}, we may naively incorporate its
decay width $\Gamma_\Phi$ by complexifying its squared mass~(e.g., see~\cite{Nowakowski:1993iu,Campagne:1997fu}), which amounts to making the substitution, $m^2_\Phi \to m^2_\Phi + i m_\Phi\Gamma_\Phi$, in~\eqref{eq:amplitude_std}.
For simplicity, we ignore possible finite width effects in this work, by setting $\Gamma_\Phi = 0$.
We should emphasise here that the transition amplitude $T_{\rm F}$ is not only Lorentz invariant, but also enjoys the fundamental property of analyticity, which in turn implies the so-called crossing symmetry~\cite{Eden:1966dnq}. Specifically, the transition amplitude for the ($t$-channel) process $S_1(p_1) \ S_2(p_2) \to \chi_1(p^{\prime}_1) \ \chi_2(p^{\prime}_2)$ can be recovered from the $s$-channel amplitude given in~\eqref{eq:amplitude_std}. In this case, the relevant four-momenta, $p$ and $k$, are defined as: $p = p_1 - p_1^{\prime}$ and $k = p_2^{\prime} - p_2$. As a consequence of the analyticity of the S-matrix, we can only change the signs of the momenta of the incoming and outgoing particles, but the analytic structure of the amplitude in~\eqref{eq:amplitude_std} remains intact. It is this property of analyticity that we wish to preserve in our formulation of a non-local extension of the S-matrix, which we discuss below.
\subsection{Analytic Non-Local Extension of the S-Matrix}
As stated earlier, the amplitude $T_{\rm F}$ in~\eqref{eq:amplitude_std}, as derived from the usual S-matrix theory, pertains to a scattering of particles with definite four-momenta. Hence, by virtue of the uncertainty principle, no information about its space-time dependence is available.
However, both the particles themselves and their interactions may be localised in a finite space-time volume. Following~\cite{Kiers:1997pe,Ioannisian:1998ch}, we will assume the latter and regard all particles in the initial and final state of a process as being well described asymptotically by plane waves to a very good approximation. Such a consideration will be equivalent to the more often discussed wave-packet approach~(e.g., see~\cite{Akhmedov:2010ua}), since the localised interactions may be viewed as intersections of the wave packets of the initial and final particles at the vertices of a scattering process.
Let us first consider the {\em production vertex} at some generic space-time point~$x$. To~introduce a finite non-zero localisation spread at $x$, we define the Lorentz-invariant Gaussian function~\cite{Ioannisian:1998ch,Naumov:2020yyv}
\begin{equation}
G(x;\svev{x},\Delta p) = e^{ -\lrb{x-\svev{x}}^{\mu} \Delta p_{\mu \nu} \lrb{x-\svev{x}}^{\nu} } \;,
\label{eq:general_smearing_def}
\end{equation}
where the momentum correlation tensor, $\Delta p_{\mu \nu}$, is defined through the relation: $\Delta p^{\rho\mu}\Delta x_{\mu\sigma} = \delta^{\rho}_{\sigma}$, with
\begin{equation}
\Delta x_{\mu \nu} = \svev{x_{\mu}x_{\nu}} - \svev{x_{\mu}} \svev{x_{\nu}} \;.
\label{eq:correlation_tensor}
\end{equation}
Here, the parameters $\svev{x^{\mu}}$ and $\svev{x^{\mu}x^{\nu}}$ characterise the uncertainties in the four-position $x$. As we will see below, such finite uncertainties will trigger a violation of four-momentum conservation. Since the exponent of \eqref{eq:general_smearing_def} describes a complicated four-dimensional ellipsoid, we may simplify the analysis by assuming the factorisable form: $\Delta p^{\mu\nu} = \delta p^\mu \delta p^\nu$. In this case, we have
\begin{equation}
G(x;\svev{x}\xspace,\delta p) = e^{- \lrsb{(x-\svev{x}\xspace)\cdot\delta p}^{2} } \;,
\label{eq:spherical_smearing_x_def}
\end{equation}
where $\svev{x}\xspace$ is the centre of the production vertex or the source, and $\delta p$ is the would-be four-momentum uncertainty. We note that $\delta p$ may naively be associated with an effective interaction radius $\delta x$ as $\delta x\xspace^\mu = 1/\delta p^\mu$.
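As a simple numerical illustration (not part of the original analysis), one may verify in one dimension that the Fourier transform of the Gaussian localisation factor $e^{-x^2/\delta x^2}$ is again a Gaussian, of momentum width $\sim 2/\delta x$, which is the sense in which $\delta p = 1/\delta x$ plays the role of a would-be momentum uncertainty. The function name and parameter values below are purely illustrative:

```python
import cmath
import math

def gaussian_vertex_ft(p, dx, n=4000, span=8.0):
    """Numerically Fourier transform the 1D localisation factor exp(-x^2/dx^2):
    int dx' exp(-x'^2/dx^2) exp(i p x') = sqrt(pi) * dx * exp(-p^2 dx^2 / 4).
    Composite Simpson rule over [-span*dx, span*dx], outside of which the
    integrand is negligible."""
    a = -span * dx
    h = 2.0 * span * dx / n
    f = lambda x: cmath.exp(-(x / dx) ** 2 + 1j * p * x)
    acc = 0j
    for i in range(n):
        x0 = a + i * h
        acc += (f(x0) + 4 * f(x0 + h / 2) + f(x0 + h)) * h / 6
    return acc
```

For $\delta x = 1.5$ and $p = 2$, the quadrature reproduces $\sqrt{\pi}\,\delta x\, e^{-p^2\delta x^2/4}$ to high accuracy: sharper localisation in space (smaller $\delta x$) implies a larger momentum spread, which is precisely what smears the momentum-conserving delta functions in the amplitudes below.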
By analogy, we may introduce the following localisation function for the {\em detection vertex} at a generic four-position~$y$:
\begin{equation}
G (y;\svev{y}\xspace,\delta k) = e^{- \lrsb{(y-\svev{y}\xspace)\cdot\delta k}^{2} } \;,
\label{eq:spherical_smearing_y_def}
\end{equation}
where $\svev{y}\xspace$ and $\delta k$ are the centre of the detection vertex and its would-be four-momentum uncertainty, respectively, with the effective interaction radius $\delta y$ defined as $\delta y\xspace^\mu = 1/\delta k^\mu$.
Taking into account the localisation functions in~\eqref{eq:spherical_smearing_x_def} and~\eqref{eq:spherical_smearing_y_def} for the four-positions $x$ and~$y$, the non-local amplitude $T_{\rm NL}$ for the process ${S_1\chi_1\to \Phi^*\to S_2\chi_2}$ takes on the form
\begin{align}
T_{\rm NL}(p,k; \svev{x}\xspace,\svev{y}\xspace,\delta x,\delta y)\, =&\ - \lambda \, g \displaystyle\int \xspace d^4 x \, d^4 y \ e^{- \lrsb{(x-\svev{x}\xspace)\cdot \delta p}^{2} } \, e^{- \lrsb{(y-\svev{y}\xspace)\cdot \delta k}^{2} } \nonumber \\
&\ \times e^{-i p \cdot x + i k \cdot y} \displaystyle\int \xspace \dfrac{d^4 q}{(2\pi)^4} \ \dfrac{e^{i q \cdot (x-y)}}{q^2 - m_{\Phi}^2 + i \epsilon} \;,
\label{eq:amplitude_init}
\end{align}
which, up to a constant, coincides with~\cite{Ioannisian:1998ch}. To be specific, the amplitude $T_{\rm NL}$ describes the annihilation of the particles $S_1$ and $\chi_1$ with a sum of four-momenta $p$, at a mean four-position~$\langle x\rangle$ with an uncertainty~$\delta x$, and the subsequent creation of the particles $S_2$ and $\chi_2$ with a sum of four-momenta $k$, at a mean four-position~$\langle y\rangle$ with an uncertainty $\delta y$. In fact, such a setting may be applied equally well to describe particles that are forced to go through the restricted area of an aperture whose shape is described by a function with a given localisation profile, e.g.~of the Gaussian form in~\eqref{eq:spherical_smearing_x_def}. Hence, the non-local S-matrix that we have been developing here can be viewed as another equivalent approach that allows us to describe particle diffraction, such as light diffraction, within the framework of~QFT.
\begin{figure}[t!]
\centering\includegraphics[width=0.8\textwidth ]{non-local_FeynD.pdf}
\caption{A schematic representation of the non-local scattering process ${S_1\chi_1\to \Phi^*\to S_2\chi_2}$, which corresponds to the transition amplitude $T_{\rm NL}(p,k;\pmb{\ell}\xspace ,\delta x\xspace,\delta y\xspace)$ in~\eqref{eq:amplitude_shift}.}
\label{fig:non-local_FeynD}
\end{figure}
In several applications of interest, the production and detection times, $x^0$ and $y^0$, contain large time uncertainties,
$\delta x^0$ and $\delta y^0$. We may therefore simplify our computations by adopting the working hypothesis that these time uncertainties are much bigger than the corresponding spatial uncertainties, $\delta {\bf x}$ and $\delta {\bf y}$,
and so take the infinite limit: $\delta x^0,\,\delta y^0 \to \infty$, or equivalently the zero limit: $\delta p^0, \, \delta k^0 \to 0$. As illustrated in Figure~\ref{fig:non-local_FeynD}, a further simplification occurs if all spatial uncertainties are taken to be equal, i.e.~$\delta x^i = \delta x\xspace$ and $\delta y^i = \delta y\xspace$, for all $i=1,2,3$. Then, up to an overall frame-dependent phase factor $e^{i (\bvec p\xspace \cdot \svev{ \bvec x}\xspace - \bvec k\xspace \cdot \svev{ \bvec y}\xspace)}$, the non-local amplitude for the process ${S_1\chi_1\to \Phi^*\to S_2\chi_2}$ becomes
\begin{align}
T_{\rm NL}(p,k;\pmb{\ell}\xspace ,\delta x\xspace,\delta y\xspace)\, = &\ -2\pi\, \delta(p^0 - k^0) \ \lambda \, g \displaystyle\int \xspace d^3 \bvec x \, d^3 \bvec y \ e^{-\bvec{x}^2/\delta x\xspace^2 } \, e^{-\bvec{y}^2/\delta y\xspace^2 } \nonumber \\
&\ \times e^{i (\bvec p\xspace \cdot \bvec x - \bvec k\xspace \cdot \bvec y)} \displaystyle\int \xspace \dfrac{d^3 \bvec q}{(2\pi)^3} \ \dfrac{e^{-i \bvec q \cdot (\bvec x- \bvec y - \pmb{\ell}\xspace)}}{-|\bvec{q}|^2 + \tilde{q}\xspace^2 + i \epsilon} \;,
\label{eq:amplitude_shift}
\end{align}
where we have introduced the average distance vector $\pmb{\ell}\xspace \equiv \svev{ \bvec y}\xspace - \svev{ \bvec x}\xspace$. Notice that, in addition to the momentum dependence of the standard amplitude $T_{\rm F}$, the non-local amplitude $T_{\rm NL}$ now depends on the distance vector $\pmb{\ell}\xspace$ between the production and detection vertices, as well as on their uncertainties, $\delta x\xspace,\, \delta y\xspace$. The amplitude $T_{\rm NL}$ remains frame-independent, as a consequence of the Poincar\'e invariance of the theory. Moreover, it is not difficult to see that in the infinite limits $\delta x\xspace, \delta y\xspace \to \infty $, the non-local amplitude $T_{\rm NL}$ in~\eqref{eq:amplitude_shift} becomes, up to an overall phase factor $e^{i{\bvec p\xspace\cdot \pmb{\ell}\xspace}}$, identical to the ordinary S-matrix amplitude $T_{\rm F}$ in~\eqref{eq:amplitude_std}.
As shown in~\cite{Ioannisian:1998ch} and in Appendices~\ref{app:amplitude},~\ref{app:angular} and~\ref{app:radial} using a different method,
the various integrations over angular and radial variables in~\eqref{eq:amplitude_shift} can be performed analytically, yielding the amplitude
\begin{align}
T_{\rm NL}(p,k; \pmb{\ell}\xspace,\delta x\xspace,\delta y\xspace)\, =&\ 2\pi\, \delta(p^0 - k^0) \ \lambda\, g\, \dfrac{\pi^2}{8}\,
\dfrac{ \delta x\xspace^3 \, \delta y\xspace^3}{|\vecL|\xspace}\;
e^{-\lrsb{\lrb{|\bvec p\xspace|^2+\tilde{q}\xspace^2} \delta x\xspace^2 +\lrb{|\bvec k\xspace|^2+\tilde{q}\xspace^2} \delta y\xspace^2 }/4} \nonumber \\
&\ \times \bigg( e^{i \tilde{q}\xspace\,|\vecL|\xspace} \, {\rm Erfc}\xspace\,z_-\: -\: e^{-i \tilde{q}\xspace\,|\vecL|\xspace} \, {\rm Erfc}\xspace\,z_+ \bigg)\;.
\label{eq:amplitude_final}
\end{align}
In the above, we have used the shorthand notation: $\bvec L\xspace = \pmb{\ell}\xspace- \frac{i}{2} \big(\bvec p\xspace\,\delta x\xspace^2 + \bvec k\xspace\, \delta y\xspace^2\big)$, $|\vecL|\xspace \equiv \sqrt{\bvec L\xspace \cdot \bvec L\xspace}$, and $z_\pm = - \dfrac{i}{2} \tilde{q}\xspace \, \delta \ell\xspace\, \pm\, \dfrac{|\vecL|\xspace}{\delta \ell\xspace}$, with $\delta \ell\xspace^2 = \delta x\xspace^2 + \delta y\xspace^2$. In~\eqref{eq:amplitude_final}, ${\rm Erfc}\xspace\,z$ is the complementary error function analytically continued with a complex argument $z \in \mathbb{C}$ as follows:
\begin{equation}
\label{eq:Erfc}
{\rm Erfc}\xspace\,z \: =\: 1\, -\, \frac{2}{\sqrt{\pi}}\,
\int_0^z\,dt\, e^{-t^2}\, .
\end{equation}
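Since the analytically continued ${\rm Erfc}\xspace$ is central to all expressions that follow, a minimal numerical sketch may be useful. The routine below (illustrative code, not the authors') evaluates the defining integral along the straight segment from $0$ to $z$ with a composite Simpson rule; because the integrand is entire, the choice of path is immaterial. It can be checked against {\tt math.erfc} on the real axis and against the reflection identity ${\rm Erfc}\xspace\,z + {\rm Erfc}\xspace(-z) = 2$.

```python
import cmath
import math

def erfc_c(z, n=20000):
    """Complementary error function for complex z:
    Erfc z = 1 - (2/sqrt(pi)) * integral_0^z exp(-t^2) dt,
    with the integral taken along the straight segment t = z*s, s in [0, 1]
    (the integrand is entire, so the path does not matter)."""
    if z == 0:
        return 1.0 + 0j
    f = lambda s: cmath.exp(-(z * s) ** 2)
    h = 1.0 / n
    acc = 0j
    for i in range(n):  # composite Simpson rule on [i*h, (i+1)*h]
        s0 = i * h
        acc += (f(s0) + 4 * f(s0 + h / 2) + f(s0 + h)) * h / 6
    return 1.0 - (2.0 / math.sqrt(math.pi)) * z * acc
```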
We should point out that the non-local amplitude $T_{\rm NL}$
for the process ${S_1\chi_1\to \Phi^*\to S_2\chi_2}$ remains finite in the OS kinematic region, $|\bvec p\xspace| = |\bvec k\xspace| = \tilde{q}\xspace$, as long as $\delta \ell\xspace$ is finite, even if one assumes a vanishing $\Phi$-decay width $\Gamma_\Phi$. On the other hand, due to the analyticity of~$T_{\rm NL}$, the same analytic expression in~\eqref{eq:amplitude_final} may be used to regulate the $t$-channel singularities~\cite{Coleman:1965xm,Peierls:1961zz,Ginzburg:1995bc},
which can occur in the crossing symmetric non-local process ${S_1S_2 \to \Phi^* \to \chi_1\chi_2}$ in the physical region. Unlike other methods that model the finite size of the interacting beams~\cite{Kotkin:1992bj,Melnikov:1996na,Melnikov:1996iu}, our
non-local S-matrix approach takes into account the finite size of the interaction volume coherently, with all spatial uncertainties implemented at the level of the amplitude~$T_{\rm NL}$, and not at the level of its square~$|T_{\rm NL}|^2$. As well as being devoid of $t$-channel infinities, the amplitude $T_{\rm NL}$ also contains information about the
distance between the source and the detector, through the distance vector $\pmb{\ell}\xspace$. The latter can shed light on phenomena that may take place at both microscopic and macroscopic distances, like neutrino oscillations~\cite{Ioannisian:1998ch}, which we discuss in more detail in Sections~\ref{sec:approx} and~\ref{sec:numerics}.
We note that the non-local amplitude $T_{\rm NL}$ given in~\refs{eq:amplitude_final} is finite at $|\vecL|\xspace = 0$. This can be easily deduced by observing that $T_{\rm NL}$ is an even function with respect to $|\vecL|\xspace$, implying that $|\vecL|\xspace\ T_{\rm NL}$ is an odd one. Then, by means of a Taylor series expansion, one may verify that $|\vecL|\xspace\ T_{\rm NL}$ approaches zero at least as fast as $|\vecL|\xspace$, so $T_{\rm NL}$ is finite at $|\vecL|\xspace = 0$.
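The parity argument above can also be confirmed numerically. The sketch below (illustrative stdlib code; the values $\tilde q = 2$ and $\delta\ell = 1$ are arbitrary) evaluates the bracketed combination in~\eqref{eq:amplitude_final} divided by $|\vecL|\xspace$ for decreasing real $|\vecL|\xspace$, with ${\rm Erfc}\xspace$ computed by Simpson integration of its defining integral, and finds a finite limit:

```python
import cmath
import math

def erfc_c(z, n=20000):
    """Erfc for complex z, by Simpson integration of its defining
    integral along the straight segment from 0 to z."""
    if z == 0:
        return 1.0 + 0j
    f = lambda s: cmath.exp(-(z * s) ** 2)
    h = 1.0 / n
    acc = 0j
    for i in range(n):
        s0 = i * h
        acc += (f(s0) + 4 * f(s0 + h / 2) + f(s0 + h)) * h / 6
    return 1.0 - (2.0 / math.sqrt(math.pi)) * z * acc

def bracket_over_L(q, dl, L):
    """( e^{iqL} Erfc z_-  -  e^{-iqL} Erfc z_+ ) / L, with
    z_pm = -i q dl / 2 +- L / dl.  The numerator is an odd analytic
    function of L, so the ratio has a finite limit as L -> 0."""
    z_m = -0.5j * q * dl - L / dl
    z_p = -0.5j * q * dl + L / dl
    return (cmath.exp(1j * q * L) * erfc_c(z_m)
            - cmath.exp(-1j * q * L) * erfc_c(z_p)) / L
```

Halving $|\vecL|\xspace$ changes the ratio only at quadratic order, confirming that the apparent $1/|\vecL|\xspace$ pole of~\eqref{eq:amplitude_final} is spurious.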
We conclude this section by presenting two more interesting limits concerning the non-local amplitude $T_{\rm NL}$ stated in~\refs{eq:amplitude_final}.
\subsubsection{Zero-spread localisation limit}
In the limit of vanishing spread of the production and detection vertices, {i.e.}\xspace $\delta x\xspace,\, \delta y\xspace \to 0$, the complementary error functions take the values: ${\rm Erfc}\xspace\, z_+ \to 0$ and ${\rm Erfc}\xspace\,z_- \to 2$. In this vanishing limit of $\delta \ell\xspace$, the non-local amplitude~\refs{eq:amplitude_final} simplifies to
\begin{equation}
T_{\rm NL}(p,k; \pmb{\ell}\xspace)\, =\, 2\pi \, \delta(p^0 - k^0) \ \lambda \, g\; \dfrac{\pi^2}{4} \delta x\xspace^3 \, \delta y\xspace^3 \ \dfrac{ e^{i \tilde{q}\xspace |\vecl|\xspace}}{|\vecl|\xspace}\;.
\label{eq:amplitude_zero_volume}
\end{equation}
This form of the amplitude implies that the exchanged particle passes through definite points in space, $\svev{ \bvec x}\xspace$ and $\svev{ \bvec y}\xspace$. In fact,
$T_{\rm NL}$ becomes proportional to the outgoing-wave Green's function of the Helmholtz equation in Euclidean 3D space, $e^{i \tilde{q}\xspace |\vecl|\xspace}/|\vecl|\xspace$. Since the momentum
uncertainties diverge when $\delta \ell\xspace\to 0$, the resulting incoming and outgoing momenta, ${\bf p}$ and ${\bf k}$, will be unrelated to each other and so arbitrary. However, the three-momentum of the mediator may be identified by its wavenumber, $\tilde{q}\xspace$. If $\tilde{q}\xspace^2 \geq 0$, this would correspond to a real particle with momentum $\tilde{q}\xspace$. Instead, for $\tilde{q}\xspace^2 < 0$, the amplitude $T_{\rm NL}$ would fall off exponentially as $e^{-|\tilde{q}\xspace|\, |\vecl|\xspace}$, representing a decaying mode that travels an effective mean distance of~$1/|\tilde{q}\xspace|$ from a point source.
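The identification of the zero-spread amplitude~\eqref{eq:amplitude_zero_volume} with an outgoing spherical wave can be checked directly: away from the origin, $f(r) = e^{i\tilde{q}\xspace r}/r$ obeys the radial Helmholtz equation $f'' + (2/r)f' + \tilde{q}\xspace^2 f = 0$, while for $\tilde{q}\xspace^2 < 0$ its modulus decays as $e^{-|\tilde{q}\xspace| r}/r$. The finite-difference sketch below uses illustrative parameter values and is not from the original work:

```python
import cmath
import math

def helmholtz_residual(q, r, h=1e-4):
    """Central finite differences for f(r) = exp(i*q*r)/r, checking that
    f'' + (2/r) f' + q^2 f = 0 away from the origin (r > 0)."""
    f = lambda x: cmath.exp(1j * q * x) / x
    d1 = (f(r + h) - f(r - h)) / (2 * h)
    d2 = (f(r + h) - 2 * f(r) + f(r - h)) / h ** 2
    return d2 + (2.0 / r) * d1 + q * q * f(r)
```

For $\tilde{q}\xspace^2 < 0$ one sets $q = i|\tilde{q}\xspace|$ in the same expression, and $|f(r)| = e^{-|\tilde{q}\xspace| r}/r$ reproduces the decaying mode with mean range $1/|\tilde{q}\xspace|$ quoted in the text.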
\subsubsection{Momentum conservation limit}
For most experimental settings, we expect $|\vecp|\xspace \, \delta \ell\xspace , \; |\veck|\xspace \, \delta \ell\xspace \gg 1$, so that the violation of energy-momentum conservation is marginal, with $\bvec p\xspace \simeq \bvec k\xspace$. In principle, we may enforce a total four-momentum conservation limit by assuming a translationally invariant localisation of the form
$e^{-\lrb{\bvec{x}-\bvec{y}-\pmb{\ell}\xspace}^2/\delta \ell\xspace^2},$
instead of two independent Gaussians centred at $\langle {\bf x}\rangle$ and $\langle {\bf y}\rangle$.
Upon integration over the coordinates, the above restricted form of localisation gives rise to the 3D $\delta$-function, $\delta^{(3)}\lrb{\bvec p\xspace -\bvec k\xspace}$, in~\refs{eq:amplitude_final}. This ensures four-momentum conservation between the incoming
and outgoing particles, i.e.~$\bvec p\xspace = \bvec k\xspace$, even though one still has in general ${\bf q} \neq \bvec p\xspace$ for the momentum ${\bf q}$ of the exchanged particle $\Phi$. In addition, the overall constant changes by a factor $\dfrac{1}{\pi^{3/2}}\dfrac{\delta \ell\xspace^3}{\delta x\xspace^3 \delta y\xspace^3}$.
Putting everything together, the non-local amplitude $T_{\rm NL}$ reads
\begin{align}
T_{\rm NL}(p,k; \pmb{\ell}\xspace,\delta \ell\xspace)\, =&\ (2\pi)^4 \, \delta(p^0 - k^0)\ \delta^{(3)}(\bvec p\xspace - \bvec k\xspace)\ \lambda \, g\ \dfrac{\sqrt{\pi}}{8}\, \dfrac{ \delta \ell\xspace^3 }{|\vecL|\xspace}\;
e^{-\delta \ell\xspace^2 \lrb{|\bvec k\xspace|^2+\tilde{q}\xspace^2}/4 } \nonumber \\
&\ \times \bigg( e^{i \tilde{q}\xspace\, |\vecL|\xspace} \, {\rm Erfc}\xspace\,z_-\: -\: e^{-i \tilde{q}\xspace\, |\vecL|\xspace} \, {\rm Erfc}\xspace\,z_+ \bigg)\;,
\label{eq:amplitude_k=p_final}
\end{align}
where $\bvec L\xspace = \pmb{\ell}\xspace- \frac{i}{2} \,\bvec p\xspace\, \delta \ell\xspace^2$, and $\delta \ell\xspace$ and the complex arguments $z_\pm$ are defined after~\eqref{eq:amplitude_final}. Without compromising the main features of our non-local S-matrix formalism, we shall employ the
simplified amplitude $T_{\rm NL}$ given by~\eqref{eq:amplitude_k=p_final}. To further simplify matters, we strip off an overall factor of $(2\pi)^4\, \delta^{(4)}(p - k)\, \lambda\,g$ from $T_{\rm NL}$ in~\eqref{eq:amplitude_k=p_final},
and define a corresponding non-local matrix element~$M$ as follows:
\begin{equation}
M(|\vecp|\xspace,\tilde{q}\xspace; \pmb{\ell}\xspace,\delta \ell\xspace)\ =\ \dfrac{\sqrt{\pi}}{8}\, \dfrac{ \delta \ell\xspace^3 }{|\vecL|\xspace}\;
e^{-\delta \ell\xspace^2 \lrb{|\bvec p\xspace|^2+\tilde{q}\xspace^2}/4}\,
\bigg( e^{i \tilde{q}\xspace\, |\vecL|\xspace} \, {\rm Erfc}\xspace\,z_-\: -\: e^{-i \tilde{q}\xspace\, |\vecL|\xspace} \, {\rm Erfc}\xspace\,z_+ \bigg)\;.
\label{eq:matrix_element_def}
\end{equation}
Our analysis in the following two sections will utilise this last form of the matrix element $M$.
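To make the closed form of $M$ concrete, the sketch below (illustrative code with hypothetical kinematics, in units $\hbar = c = 1$) evaluates it at the on-shell point $|\vecp|\xspace = \tilde{q}\xspace$, where the local amplitude $\propto 1/(|\vecp|\xspace^2 - \tilde{q}\xspace^2 - i\epsilon)$ would diverge. The exponent is implemented as $e^{-\delta \ell\xspace^2(|\vecp|\xspace^2 + \tilde{q}\xspace^2)/4}$, the normalisation consistent with the overall factor in~\eqref{eq:amplitude_k=p_final}, and ${\rm Erfc}\xspace$ is evaluated by Simpson integration of its defining integral:

```python
import cmath
import math

def erfc_c(z, n=20000):
    """Erfc for complex z, by Simpson integration of its defining
    integral along the straight segment from 0 to z."""
    if z == 0:
        return 1.0 + 0j
    f = lambda s: cmath.exp(-(z * s) ** 2)
    h = 1.0 / n
    acc = 0j
    for i in range(n):
        s0 = i * h
        acc += (f(s0) + 4 * f(s0 + h / 2) + f(s0 + h)) * h / 6
    return 1.0 - (2.0 / math.sqrt(math.pi)) * z * acc

def M_nl(p, q, l, cos_theta, dl):
    """Non-local matrix element M(|p|, q~; l, dl) in the k = p limit.
    L = l_vec - (i/2) p_vec dl^2, hence
    L.L = l^2 - i l p dl^2 cos(theta) - p^2 dl^4 / 4."""
    absL = cmath.sqrt(l * l - 1j * l * p * dl * dl * cos_theta
                      - (p * dl * dl / 2.0) ** 2)
    z_p = -0.5j * q * dl + absL / dl
    z_m = -0.5j * q * dl - absL / dl
    pref = (math.sqrt(math.pi) / 8.0 * dl ** 3 / absL
            * cmath.exp(-dl * dl * (p * p + q * q) / 4.0))
    return pref * (cmath.exp(1j * q * absL) * erfc_c(z_m)
                   - cmath.exp(-1j * q * absL) * erfc_c(z_p))
```

With $|\vecp|\xspace = \tilde{q}\xspace = 2$, $\delta \ell\xspace = 5$ and $|\vecl|\xspace = 1000$ (so that $|\vecp|\xspace\,\delta \ell\xspace \gg 1$ and $|\vecl|\xspace \gg |\vecp|\xspace\,\delta \ell\xspace^2$), the forward amplitude is finite and halves when the distance doubles, while the backward amplitude is strongly suppressed, in line with the ``quantum obliquity factor'' discussed in the introduction.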
\section{Near- and Far-Field Approximations}\label{sec:approx}
\setcounter{equation}{0}
Although the matrix element $M$ in~\refs{eq:matrix_element_def} for a generic non-local process ${S_1\chi_1\to \Phi^*\to S_2\chi_2}$ is given in a closed form, it still remains difficult to deduce its main physical implications from it. To better understand these, we derive in this section analytical approximations of $M$ as a function of the distance vector $\pmb{\ell}\xspace$ between the production of the $\Phi$-mediator and its detection, the spatial distance uncertainty $\delta \ell\xspace$, and the total three-momentum ${\bf p}$ of the incoming particles $S_1$ and $\chi_1$. In all our approximations, we consider that $|\vecp|\xspace \, \delta \ell\xspace \gg 1$, which is a valid assumption for most realistic situations. In close analogy with diffractive optics, we distinguish two regions: (i)~the Fraunhofer or far-field zone, where $|\vecl|\xspace \gg |\vecp|\xspace \, \delta \ell\xspace^2$, and (ii)~the Fresnel or near-field zone, in~which $|\vecl|\xspace \ll |\vecp|\xspace \, \delta \ell\xspace^2$.
Depending on the magnitude of $z_{\pm}$ of the complementary error function ${\rm Erfc}\xspace\,z$ defined in~\eqref{eq:Erfc}, we may use either a Taylor series expansion~\cite{HandbookCF:2008qt},
\begin{equation}
{\rm Erfc}\xspace \, z\, \simeq\, 1- \dfrac{2}{\sqrt{\pi}} z \,, \label{eq:erfc_taylor}
\end{equation}
for $|z| \ll 1$, or an asymptotic expansion~\cite{HandbookCF:2008qt},
\begin{equation}
{\rm Erfc}\xspace\,z\, \simeq\ \dfrac{e^{-z^2}}{\sqrt{\pi} \ z} \ ,
\label{eq:erfc_asymptotic}
\end{equation}
when $|z| \gg 1$ and $\arg z < 3\pi/4$. If $\arg z \geq 3 \pi/4$, the asymptotic expansion may be obtained after applying first the identity: ${\rm Erfc}\xspace\,z = 2 - {\rm Erfc}\xspace(-z)$.
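Both limiting forms are easy to validate for real arguments; the Python sketch below (SciPy assumed) compares them against \texttt{scipy.special.erfc}, with the stated leading corrections of $\mathcal{O}(z^3)$ and $\mathcal{O}(1/z^2)$, respectively.

```python
import numpy as np
from scipy.special import erfc

# small |z|: Taylor expansion Erfc(z) ~ 1 - 2 z / sqrt(pi)
z_small = 0.02
taylor = 1.0 - 2.0/np.sqrt(np.pi)*z_small
rel_small = abs(taylor - erfc(z_small))/erfc(z_small)

# large |z| (arg z < 3 pi/4): asymptotic Erfc(z) ~ exp(-z^2)/(sqrt(pi) z)
z_large = 5.0
asymp = np.exp(-z_large**2)/(np.sqrt(np.pi)*z_large)
rel_large = abs(asymp - erfc(z_large))/erfc(z_large)

print(rel_small, rel_large)   # ~3e-6 and ~2%: corrections O(z^3) and O(1/z^2)
```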
\subsection{Fraunhofer Zone}
In the Fraunhofer or far-field region, the distance is $|\vecl|\xspace \gg |\vecp|\xspace \, \delta \ell\xspace^2$, so the complex vector norm $|\vecL|\xspace$ may then be approximated as
\begin{equation}
|\vecL|\xspace\, \simeq\, |\vecl|\xspace - \dfrac{i}{2} \cos\theta \ |\vecp|\xspace \ \delta \ell\xspace^2\, -\,
\dfrac{\sin^2\theta \ |\vecp|\xspace^2\, \delta \ell\xspace^4}{8\,|\vecl|\xspace} \, ,
\label{eq:L_far}
\end{equation}
where $\theta$ is the angle between $\pmb{\ell}\xspace$ and $\bvec p\xspace$. In this limit, we can ignore the exponentially suppressed term of~\refs{eq:erfc_asymptotic}. Thus, ${\rm Erfc}\xspace\,z_- \simeq 2$ and ${\rm Erfc}\xspace\,z_+ \simeq 0$, and the matrix element reads
\begin{equation}
M\, \simeq\ \dfrac{\sqrt \pi}{4} \; \delta \ell\xspace^3 \ \dfrac{e^{i \tilde{q}\xspace |\vecl|\xspace}}{|\vecl|\xspace}\; e^{-\lrb{\bvec p\xspace - \tilde{q}\xspace \;
\pmb{\hat \ell}\xspace}^2\delta \ell\xspace^2/4} \;,
\label{eq:M_far}
\end{equation}
with $\pmb{\hat \ell}\xspace$ being a unit vector along the distance vector $\pmb{\ell}\xspace$. The square of the matrix element $M$ in~\eqref{eq:M_far} obeys the expected inverse-square law $1/|\pmb{\ell}\xspace|^2$ for $\tilde{q}\xspace^2 \geq 0$, {i.e.}\xspace~for physical intermediate states. This result is in agreement with the so-called Grimus-Stockinger theorem~\cite{Grimus:1996av}, which is only applicable in the Fraunhofer regime~\cite{Ioannisian:1998ch,Akhmedov:2009rb,Akhmedov:2010ms,Naumov:2013bea}.
For off-shell particle virtualities with $\tilde{q}\xspace^2 < 0$, the approximate matrix element $M$ in~\eqref{eq:M_far} can be analytically continued via $\tilde{q}\xspace \to i|\tilde{q}\xspace|$, and so one can show that $M$ falls off exponentially, i.e.~$M \propto e^{- |\tilde{q}\xspace| |\vecl|\xspace} / |\vecl|\xspace$. We~note that this exponential fall-off of $M$ with increasing distance $|\vecl|\xspace$ is much stronger than the generic weaker scaling behaviour of $M \propto |\vecl|\xspace^{-2}$, claimed in~\cite{Grimus:1996av,Grimus:2019hlq}. Furthermore, it is not difficult to see from~\eqref{eq:M_far} that for $\tilde{q}\xspace^2 <0$, one has $M \propto \exp\big[\,i|\tilde{q}\xspace|\, \bvec p\xspace \cdot \pmb{\hat \ell}\xspace\, \delta \ell\xspace^2/2\big]$, but this sole surviving phase implies no preferred direction for particle propagation in the Fraunhofer zone, i.e.~$|M|$ is completely independent of~$\theta$.
We should also observe that for finite spatial uncertainties $\delta \ell\xspace$, the non-local matrix element~$M$ is devoid of the $s$-channel singularity haunting the ordinary S-matrix amplitude $T_{\rm F}$ in~\eqref{eq:amplitude_std}, in the OS limit $|\vecp|\xspace \to \tilde{q}\xspace$. Furthermore, for a given angle $\theta_*$, there is a characteristic momentum, which we call~$|\vecp|_{\rm *}\xspace$, that maximizes the norm of~$M$, $|M|$. In particular, we find that $|\vecp|_{\rm *}\xspace$ is shifted from its OS value $|\vecp|_{\rm *}\xspace = \tilde{q}\xspace$ in the forward direction to smaller values, according to the simple relation: $|\vecp|_{\rm *}\xspace = \tilde{q}\xspace \, \cos\theta_*$. Such shifts may be probed in observations involving non-zero angles $\theta$, and as such, they may provide a non-trivial test of the non-local S-matrix theory under study.
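The shift $|\vecp|_{\rm *}\xspace = \tilde{q}\xspace\cos\theta_*$ follows directly from maximising the modulus of~\eqref{eq:M_far} over $|\vecp|\xspace$, and can be checked numerically; a minimal Python sketch (NumPy assumed), using the illustrative GeV-unit values adopted later in Section~\ref{sec:numerics}:

```python
import numpy as np

q, dl, ell = 5.0, 1.0, 100.0   # illustrative values (far zone: ell >> q dl^2)

def absM_far(p, theta):
    """|M| in the Fraunhofer zone, Eq. (M_far), using
    (p - q*lhat)^2 = p^2 - 2 p q cos(theta) + q^2."""
    expo = -0.25*(p**2 - 2.0*p*q*np.cos(theta) + q**2)*dl**2
    return np.sqrt(np.pi)/4.0 * dl**3/ell * np.exp(expo)

p_grid = np.linspace(0.0, 10.0, 100001)
theta_star = np.pi/4
p_star = p_grid[np.argmax(absM_far(p_grid, theta_star))]
print(p_star, q*np.cos(theta_star))   # both ~ 3.5355: |p|_* = q cos(theta_*)
```

For backwards-hemisphere angles ($\cos\theta < 0$) the same expression is monotonically decreasing in $|\vecp|\xspace$, so no maximum occurs at non-zero momentum.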
Finally, it is interesting to remark that if the $\Phi$-mediator has real momentum ($\tilde{q}\xspace^2>0$), the non-local amplitude $M$ will be suppressed away from the forward ($\theta=0$) direction, because of the exponential factor $e^{-\lrb{\bvec p\xspace - \tilde{q}\xspace \; \pmb{\hat \ell}\xspace}^2\delta \ell\xspace^2/4}$ in~\eqref{eq:M_far}. This factor also disfavours propagation in the backwards hemisphere for angles~$\theta \ge \pi/2$, and so it resembles the engineered {\em obliquity factor} that features in the well-known Helmholtz--Kirchhoff diffraction formula (see, {e.g.}\xspace~\cite{Optics_1999}).
But~unlike the classical case, the non-local S-matrix theory naturally provides the necessary ``quantum obliquity factor'' which, although it suppresses, does not prohibit particle propagation in a classically forbidden region. As we will see in the next subsection, this property holds true in the Fresnel region as well.
\subsection{Fresnel Zone}
\begin{table}[t]
\centering
\begin{tabular}{|c||l|c|}
\hline
Subregion & \hspace{25mm}Conditions &
Magnitude of $z_-$ \\
\hline\hline
\textbf{I} & $\big||\vecp|\xspace -\tilde{q}\xspace\big|\,\delta \ell\xspace \gg {\rm max}\big(1,\;
|\bvec{\hat p}\xspace\cdot\pmb{\ell}\xspace|/\delta \ell\xspace \big)$ & $|z_-| \gg 1$ \\ \hline
\textbf{II} &
$|\bvec{\hat p}\xspace\cdot\pmb{\ell}\xspace| \ll \delta \ell\xspace$ ~and~ $\big||\vecp|\xspace -\tilde{q}\xspace\big|\,\delta \ell\xspace \ll 1$ & $|z_-| \ll 1$ \\ \hline
\textbf{III} & $|\bvec{\hat p}\xspace\cdot\pmb{\ell}\xspace| \gg \delta \ell\xspace$ ~and~ $\big||\vecp|\xspace-\tilde{q}\xspace\big|\,\delta \ell\xspace^2 \ll |\bvec{\hat p}\xspace\cdot\pmb{\ell}\xspace|$ & $|z_-| \gg 1$ \\ \hline
\end{tabular}
\caption{The three Fresnel subregions as described in more detail in the text. The conditions which hold in all subregions are: $|\vecp|\xspace \, \delta \ell\xspace^2 \gg |\vecl|\xspace$ and $|\vecp|\xspace \,\delta \ell\xspace \gg 1$. Note that the latter entails $||\vecp|\xspace + \tilde{q}\xspace |\delta \ell\xspace \gg 1$, which in turn implies $|z_+| \gg 1$ for all subregions.}
\label{tab:fresnel_Subregion}
\end{table}
In the Fresnel or near-field region, in which $|\vecl|\xspace \ll |\vecp|\xspace\, \delta \ell\xspace^2$, the complex norm $|\vecL|\xspace$ may be expanded as
\begin{equation}
|\vecL|\xspace\: \simeq\: -\, \dfrac{i}{2} |\vecp|\xspace \, \delta \ell\xspace^2\, +\,
|\vecl|\xspace \, \cos\theta\, +\, \dfrac{i\,|\vecl|\xspace^2}{|\vecp|\xspace \, \delta \ell\xspace^2} \, \sin^2\theta\, ,
\label{eq:L_near}
\end{equation}
when $\cos\theta <0$. Although $|\vecL|\xspace$ should be multiplied by $-1$ for $\cos\theta \geq 0$, we can still use \refs{eq:L_near}, since the amplitude~\refs{eq:amplitude_final} is an even function of $|\vecL|\xspace$.
Given the central working hypothesis $|\vecp|\xspace \, \delta \ell\xspace \gg 1$, it follows that $\big||\vecp|\xspace+\tilde{q}\xspace\big|\delta \ell\xspace \gg 1$. This in turn implies that $|z_+| \gg 1$. Consequently, ${\rm Erfc}\xspace \,z_+$ can be expanded as in~\refs{eq:erfc_asymptotic}. On the other hand, the size of $|z_-|$ depends on the magnitude of the dimensionless quantities: $|\bvec{\hat p}\xspace\cdot\pmb{\ell}\xspace|/\delta \ell\xspace$ and $\big||\vecp|\xspace-\tilde{q}\xspace\big|\,\delta \ell\xspace$, where $\bvec{\hat p}\xspace \equiv \bvec p\xspace/|\vecp|\xspace$ is a unit vector along the three-momentum $\bvec p\xspace$. The first quantity, $|\bvec{\hat p}\xspace\cdot\pmb{\ell}\xspace|/\delta \ell\xspace$, gives a measure of the projection of $\pmb{\ell}\xspace$ onto the direction of $\bvec p\xspace$ in units of $\delta \ell\xspace$. The second one, $\big||\vecp|\xspace-\tilde{q}\xspace\big|\,\delta \ell\xspace$, quantifies the degree of ``off-shellness'' of the exchanged $\Phi$ particle.
The various possible hierarchies of the two quantities, $|\bvec{\hat p}\xspace\cdot\pmb{\ell}\xspace|/\delta \ell\xspace$ and $\big||\vecp|\xspace-\tilde{q}\xspace\big|\,\delta \ell\xspace$,
form three subregions. These are succinctly summarized in Table~\ref{tab:fresnel_Subregion}.
In detail, Subregion~\textbf{I} is defined by the constraint: $\big||\vecp|\xspace -\tilde{q}\xspace\big|\,\delta \ell\xspace \gg {\rm max}\lrb{ 1,\;\big|\bvec{\hat p}\xspace\cdot\pmb{\ell}\xspace\big|/\delta \ell\xspace }$, which implies that $|z_-| \gg 1$.
Subregion~\textbf{II} corresponds to $|\bvec{\hat p}\xspace\cdot\pmb{\ell}\xspace| \ll \delta \ell\xspace$ and $\big||\vecp|\xspace-\tilde{q}\xspace\big|\delta \ell\xspace \ll 1$, and so $|z_-| \ll 1$.
Finally, Subregion~{\bf III} is given by $\big|\bvec{\hat p}\xspace\cdot\pmb{\ell}\xspace\big| \gg \delta \ell\xspace$ and $\big||\vecp|\xspace-\tilde{q}\xspace\big|\delta \ell\xspace^2 \ll |\bvec{\hat p}\xspace\cdot\pmb{\ell}\xspace|$, which results in $|z_-| \gg 1$.
Notice that $\big||\vecp|\xspace-\tilde{q}\xspace\big|\,\delta \ell\xspace \ll 1$ defines a resonant region for the $\Phi$ mediator. But as in the Fraunhofer zone, the maximum of the modulus of the matrix element, $|M|$, is not guaranteed to occur at the resonance point, $|\vecp|\xspace=\tilde{q}\xspace$, as the angle $\theta$ varies from 0 to $\pi$.
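For orientation, the defining inequalities of Table~\ref{tab:fresnel_Subregion} can be cast as a rough numerical classifier. In the Python sketch below, the factor \texttt{big} is an illustrative stand-in for the strong orderings ``$\gg$'' and ``$\ll$''; the subregion boundaries are of course not sharp.

```python
def fresnel_subregion(p, q, dl, ell_par, big=10.0):
    """Classify a Fresnel-zone configuration (assuming p*dl >> 1 and
    p*dl^2 >> |l|) into Subregions I, II, III of Table (fresnel_Subregion).

    p       : |p|            q       : qtilde
    dl      : delta-ell      ell_par : |phat . l|
    big     : illustrative stand-in for the strong orderings >> and <<
    """
    off = abs(p - q)*dl        # degree of off-shellness of Phi
    proj = ell_par/dl          # projection of l onto phat, in units of dl
    if off > big*max(1.0, proj):
        return "I"
    if proj < 1.0/big and off < 1.0/big:
        return "II"
    if proj > big and abs(p - q)*dl**2 < ell_par/big:
        return "III"
    return "boundary"          # none of the strong orderings applies

print(fresnel_subregion(p=25.0,  q=5.0, dl=1.0,  ell_par=0.5))    # I
print(fresnel_subregion(p=5.001, q=5.0, dl=1.0,  ell_par=0.05))   # II
print(fresnel_subregion(p=5.001, q=5.0, dl=10.0, ell_par=150.0))  # III
```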
\subsubsection{Subregion~\textbf{I}:
\texorpdfstring{$\big||\vecp|\xspace -\tilde{q}\xspace\big|\,\delta \ell\xspace \gg {\rm max}\big(1,\;|\bvec{\hat p}\xspace\cdot\pmb{\ell}\xspace|/\delta \ell\xspace\big)$}{subI}
}
In this subregion, ${\rm Erfc}\xspace \, z_\pm$ can be approximated as in~\refs{eq:erfc_asymptotic}. Then, the matrix element becomes
\begin{align}
M\, \simeq&\ \: \dfrac{\delta \ell\xspace^3}{8\,|\vecL|\xspace} \lrb{\dfrac{1}{z_-}-\dfrac{1}{z_+}} e^{i \bvec p\xspace \cdot \pmb{\ell}\xspace\: -\: \lrb{|\vecl|\xspace/\delta \ell\xspace}^2 }\, \simeq\,
\dfrac{\delta \ell\xspace^2 \ e^{i \bvec p\xspace \cdot \pmb{\ell}\xspace - \lrb{|\vecl|\xspace/\delta \ell\xspace}^2 }}{\Big(|\vecp|\xspace^2 - \tilde{q}\xspace^2\Big) \delta \ell\xspace^2\:
+\: 4 \Big[ i \bvec p\xspace \cdot \pmb{\ell}\xspace - \big(|\vecl|\xspace/\delta \ell\xspace\big)^2\Big]}\ .
\label{eq:M_sub-I}
\end{align}
We should note that this approximation is accurate up to corrections $\mathcal{O}\big[\delta \ell\xspace^2\,|\bvec p\xspace \times \pmb{\ell}\xspace|^4/(|\vecp|\xspace \, \delta \ell\xspace)^6 \big]$. There are other relevant higher order terms that depend on the sign of $\tilde{q}\xspace^2$ and $\bvec p\xspace \cdot \pmb{\ell}\xspace$, as well as on their relative size. For example, when both $\bvec{\hat p}\xspace\cdot \pmb{\ell}\xspace$ and $\tilde{q}\xspace^2$ are negative with $|\tilde{q}\xspace| \delta \ell\xspace^2/2 < |\bvec{\hat p}\xspace\cdot \pmb{\ell}\xspace|$, higher order terms $\mathcal{O}\Big(\exp\big[ |\tilde{q}\xspace| \, \bvec{\hat p}\xspace\cdot \pmb{\ell}\xspace + \tilde{q}\xspace^2\delta \ell\xspace^2/2\big] \Big)$ that may potentially appear are suppressed by their negative exponent.
Finally, like in the Fraunhofer region for $\tilde{q}\xspace^2 < 0$, there is no directional constraint on $|M|$ in this subregion.
In Subregion~{\bf I}, for $\tilde{q}\xspace \, \delta \ell\xspace^2 \gg |\vecl|\xspace$, the characteristic momentum $|\vecp|_{\rm *}\xspace$ that maximizes $|M|$ obeys the relation: $|\vecp|_{\rm *}\xspace \simeq \tilde{q}\xspace + 2(1-2\cos^2\theta)\,|\vecl|\xspace^2/(\tilde{q}\xspace\;\delta \ell\xspace^4)$. Thus, we have $|\vecp|_{\rm *}\xspace > \tilde{q}\xspace$ ($|\vecp|_{\rm *}\xspace < \tilde{q}\xspace$) for $|\cos\theta|<1/\sqrt{2}$ ($|\cos\theta|>1/\sqrt{2}$). On the other hand, if $\tilde{q}\xspace \, \delta \ell\xspace^2 \ll |\vecl|\xspace$, the momentum that maximizes the matrix element turns out to be: $|\vecp|_{\rm *}\xspace \simeq \sqrt{|\cos 2\theta|}\,|\vecl|\xspace/\delta \ell\xspace^2$, which does not respect Fresnel's central constraint: $|\vecp|\xspace\,\delta \ell\xspace^2 \gg |\vecl|\xspace$. In this case, the matrix element in Subregion~{\bf I} does not exhibit a maximum. Instead, it decreases monotonically with the momentum, i.e.~${|M|\propto 1/|\vecp|\xspace^2}$. We note that for $\cos\theta=0$, the matrix element $M$ as approximated in~\refs{eq:M_sub-I} appears to have singularities when $|\vecp|\xspace^2 =\tilde{q}\xspace^2 +\, 4|\vecl|\xspace^2/\delta \ell\xspace^4$. However, these would-be singularities are not present. They originate from $z_{\pm} =0$, and so they violate the basic assumption $|z_\pm| \gg 1$ that underlies the validity of this approximation.
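The stationary-point relation $|\vecp|_{\rm *}\xspace \simeq \tilde{q}\xspace + 2(1-2\cos^2\theta)\,|\vecl|\xspace^2/(\tilde{q}\xspace\,\delta \ell\xspace^4)$ can be verified by maximising the modulus of~\refs{eq:M_sub-I} numerically; a Python sketch (NumPy assumed) with illustrative parameter values:

```python
import numpy as np

q, dl, ell, theta = 5.0, 1.0, 0.5, np.pi/3   # illustrative values, q dl^2 >> ell

def absM_I(p):
    """|M| from the Subregion-I form, Eq. (M_sub-I)."""
    num = dl**2*np.exp(-(ell/dl)**2)
    den = (p**2 - q**2)*dl**2 + 4.0*(1j*p*ell*np.cos(theta) - (ell/dl)**2)
    return num/np.abs(den)

p_grid = np.linspace(3.0, 7.0, 400001)
p_star = p_grid[np.argmax(absM_I(p_grid))]
# stationary point quoted in the text, valid for q dl^2 >> |l|:
p_pred = q + 2.0*(1.0 - 2.0*np.cos(theta)**2)*ell**2/(q*dl**4)
print(p_star, p_pred)   # both ~ 5.05
```

The exact stationary point of this form is $|\vecp|_{\rm *}\xspace^2 = \tilde{q}\xspace^2 + (4|\vecl|\xspace^2/\delta \ell\xspace^4)(1-2\cos^2\theta)$, of which the quoted relation is the leading expansion for $\tilde{q}\xspace\,\delta \ell\xspace^2 \gg |\vecl|\xspace$.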
\subsubsection{Subregion~\textbf{II}:
\texorpdfstring{$|\bvec{\hat p}\xspace\cdot\pmb{\ell}\xspace| \ll \delta \ell\xspace$ and $\big||\vecp|\xspace -\tilde{q}\xspace\big|\,\delta \ell\xspace \ll 1$}{subII}
}
If the momenta obey the resonant condition,
$\big||\vecp|\xspace -\tilde{q}\xspace\big|\,\delta \ell\xspace \ll 1$, and also $|\bvec{\hat p}\xspace\cdot\pmb{\ell}\xspace| \ll \delta \ell\xspace$, we then have $|z_-|\ll 1$. Making use of~\eqref{eq:erfc_taylor}, the matrix element may be approximated as
\begin{align}
M\, \simeq&\ \,
\dfrac{ i \sqrt \pi}{4} \dfrac{\delta \ell\xspace}{|\vecp|\xspace}\:
\bigg[1+\frac{2}{\sqrt{\pi}} \ \dfrac{ \bvec{\hat p}\xspace\cdot \pmb{\ell}\xspace }{\delta \ell\xspace} - \dfrac{i}{\sqrt{\pi}} \big(|\vecp|\xspace - \tilde{q}\xspace\big)\delta \ell\xspace\bigg] \nonumber \\[2mm]
&\ \times \exp\bigg[\!-\frac{1}{4}\big(|\vecp|\xspace-\tilde{q}\xspace\big)^2\delta \ell\xspace^2\, +\, \tilde{q}\xspace\,\bigg( i \, \bvec{\hat p}\xspace\cdot \pmb{\ell}\xspace - \frac{|\bvec{\hat p}\xspace \times \pmb{\ell}\xspace|^2 }{|\vecp|\xspace\,\delta \ell\xspace^2}\,\bigg)\bigg]
\nonumber \\[2mm]
&\ +\: \dfrac{1}{2}\, \dfrac{e^{i \bvec p\xspace \cdot \pmb{\ell}\xspace-(|\vecl|\xspace/\delta \ell\xspace)^2}}{ |\vecp|\xspace \, \big(|\vecp|\xspace+\tilde{q}\xspace\big)}
\;.
\label{eq:M_sub-II}
\end{align}
We observe that for a finite $\delta \ell\xspace$, the singularity of the S-matrix amplitude $T_{\rm F}$ in~\eqref{eq:amplitude_std}
at $|\vecp|\xspace = \tilde{q}\xspace$ is successfully regulated. Furthermore, the value of the matrix element $M$ in the OS limit, $|\vecp|\xspace \to \tilde{q}\xspace$, gets reduced as $\theta$ increases, with a minimum in the backwards direction $\theta =\pi$. On~the other hand, for $\theta \to 0$, $|M|$ increases slightly with the distance $|\vecl|\xspace$. Thus, there seems to be a focusing effect that makes the observation (or decay) more probable away from the origin, $|\vecl|\xspace=0$. Also, this phenomenon may affect the assumed flux for the $\Phi$ particles
at the source.
The value of the characteristic momentum $|\vecp|_{\rm *}\xspace$ that gives rise to a maximum $|M|$ is estimated to be
\begin{equation}
|\vecp|_{\rm *}\xspace\: \simeq\: \tilde{q}\xspace\, +\, \dfrac{2}{\tilde{q}\xspace\,\delta \ell\xspace^2} \ \lrb{\dfrac{|\bvec{\hat p}\xspace \times \pmb{\ell}\xspace|^2}{\delta \ell\xspace^2} - 1}\;.
\label{eq:pmax_sub-II}
\end{equation}
Although this estimate assumes $\big||\vecp|\xspace -\tilde{q}\xspace\big|\,\delta \ell\xspace \ll 1$, the resulting value of $|\vecp|_{\rm *}\xspace$ may lie outside or be at the boundary of this Fresnel subregion. In such a case, $M$ should be estimated numerically using~\refs{eq:matrix_element_def}, as done in Section~\ref{sec:numerics}. However, the above exercise is still useful as it shows that the maximum occurs at a momentum $|\vecp|_{\rm *}\xspace$ that may lie below or above $\tilde{q}\xspace$, for $|\bvec{\hat p}\xspace \times \pmb{\ell}\xspace| < \delta \ell\xspace$ and $|\bvec{\hat p}\xspace \times \pmb{\ell}\xspace| > \delta \ell\xspace$, respectively.
We must remark that the approximate matrix element in~\refs{eq:M_sub-II} offers a rather accurate description of the exact amplitude $M$ in Subregion~{\bf II}. The main higher order contribution is~$\mathcal{O}\big[|\bvec{\hat p}\xspace \cdot \pmb{\ell}\xspace|/(|\vecp|\xspace^2\,\delta \ell\xspace)\big]$. All other higher order corrections turn out to be subdominant.
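As a numerical check of~\refs{eq:pmax_sub-II}, one may maximise the modulus of the dominant first term of~\refs{eq:M_sub-II}, with the square bracket set to~1, since both of its corrections are small in Subregion~{\bf II} (a Python/NumPy sketch with illustrative values):

```python
import numpy as np

q, dl, ell_perp = 5.0, 1.0, 1.2    # |phat x l| slightly above dl (illustrative)

def absM_II(p):
    """Modulus of the dominant first term of Eq. (M_sub-II),
    with the square bracket approximated by 1."""
    return (np.sqrt(np.pi)/4.0 * dl/p
            * np.exp(-0.25*(p - q)**2*dl**2 - q*ell_perp**2/(p*dl**2)))

p_grid = np.linspace(4.0, 7.0, 300001)
p_star = p_grid[np.argmax(absM_II(p_grid))]
p_pred = q + 2.0/(q*dl**2)*(ell_perp**2/dl**2 - 1.0)   # Eq. (pmax_sub-II)
print(p_star, p_pred)   # maximum above q, since |phat x l| > dl
```

The agreement degrades as $|\bvec{\hat p}\xspace \times \pmb{\ell}\xspace|$ grows, since the predicted $|\vecp|_{\rm *}\xspace$ then drifts out of the resonant window $\big||\vecp|\xspace-\tilde{q}\xspace\big|\,\delta \ell\xspace \ll 1$ assumed in the derivation.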
\subsubsection{Subregion~\textbf{III}:
\texorpdfstring{$|\bvec{\hat p}\xspace\cdot\pmb{\ell}\xspace| \gg \delta \ell\xspace$ and $\big||\vecp|\xspace-\tilde{q}\xspace\big|\,\delta \ell\xspace^2 \ll |\bvec{\hat p}\xspace\cdot\pmb{\ell}\xspace|$}{subIII}
}
If the detection vertex obeys the restrictions: $|\bvec{\hat p}\xspace\cdot\pmb{\ell}\xspace| \gg \delta \ell\xspace$ and $\big||\vecp|\xspace-\tilde{q}\xspace\big|\,\delta \ell\xspace^2 \ll |\bvec{\hat p}\xspace\cdot\pmb{\ell}\xspace|$, the complementary error functions are then expanded as in~\refs{eq:erfc_asymptotic}. Although $|z_-| \gg 1$ holds in both Subregions~{\bf I} and~{\bf III}, we find that the matrix element assumes different forms as different terms dominate in the expansion of the arguments $z_\pm$. Hence, in Subregion~{\bf III} the matrix element will be approximated as
\begin{align}
M\, \simeq\ \; &
\pm \dfrac{i\delta \ell\xspace^2\, \sqrt{\pi}}{2}\; \dfrac{ \exp\bigg[\!-\dfrac{1}{4}\big(|\vecp|\xspace \mp \tilde{q}\xspace\big)^2\, \delta \ell\xspace^2\: \pm\: \tilde{q}\xspace\, \bigg(i\bvec{\hat p}\xspace\cdot\pmb{\ell}\xspace\, -\, \dfrac{|\bvec{\hat p}\xspace\times\pmb{\ell}\xspace|^2}{|\vecp|\xspace\, \delta \ell\xspace^2}\bigg)\bigg] } {|\vecp|\xspace \; \delta \ell\xspace\: +\: 2 i\,\bvec{\hat p}\xspace \cdot \pmb{\ell}\xspace/\delta \ell\xspace} \nonumber\\[2mm]
& +\
\dfrac{\delta \ell\xspace^2 \, e^{i\bvec p\xspace\cdot\pmb{\ell}\xspace - \lrb{|\vecl|\xspace/\delta \ell\xspace}^2 }}{\big(|\vecp|\xspace^2 - \tilde{q}\xspace^2\big)\,\delta \ell\xspace^2 \: +\: 4\lrsb{i\bvec p\xspace\cdot\pmb{\ell}\xspace\: -\: \big(\bvec{\hat p}\xspace\cdot\pmb{\ell}\xspace/\delta \ell\xspace\big)^2} } \;.
\label{eq:M_sub-III}
\end{align}
In the above, the upper (lower) sign corresponds to $\cos\theta > 0$ ($\cos\theta < 0$), {i.e.}\xspace~towards the forward (backward) direction, and originates from the second (first) term of~\refs{eq:matrix_element_def}. Instead, the last term
in~\eqref{eq:M_sub-III} remains the same in both directions. Evidently, as $|\bvec p\xspace \cdot \pmb{\ell}\xspace| \gg |\vecp|\xspace\,\delta \ell\xspace \gg 1$, it~is not difficult to verify that the matrix element in~\eqref{eq:M_sub-III} is finite in the resonant region $|\vecp|\xspace \simeq \tilde{q}\xspace$.
In this subregion, the angular dependence is more involved than that in the other two. However, backwards particle propagation gets strongly disfavoured within Subregion~\textbf{III}. We may elucidate this by first considering propagation in the forward direction, $\theta = 0$. In this case, the parameters satisfy: $|\vecl|\xspace \gg \delta \ell\xspace$, $|\vecl|\xspace \ll |\vecp|\xspace \, \delta \ell\xspace^2$, and $\big||\vecp|\xspace -\tilde{q}\xspace\big|\,\delta \ell\xspace^2 \ll |\vecl|\xspace$. Under these conditions,~\refs{eq:M_sub-III} is dominated by its first term which is only slightly reduced as the distance increases. On the other hand, for $\theta = \pi$, the second term in~\eqref{eq:M_sub-III} will become dominant. In~this case, however, propagation in the backwards direction will be disfavoured, because of the exponential suppression factor $e^{-(|\vecl|\xspace/\delta \ell\xspace)^2}$. These attributes will be discussed in more detail in Section~\ref{sec:numerics}, where the exact matrix element~\refs{eq:matrix_element_def} will be numerically evaluated.
In Subregion~{\bf III}, the characteristic momentum, $|\vecp|_{\rm *}\xspace$, that maximizes (locally) $|M|$ depends on both the angle~$\theta$ and the average distance~$|\vecl|\xspace$. To explicitly demonstrate this dependence, we consider again the forward and backward directions which have $\theta=0$ and $\theta=\pi$, respectively. In the former, we have $|\vecp|_{\rm *}\xspace \simeq \tilde{q}\xspace -2/(\tilde{q}\xspace \, \delta \ell\xspace^2)$, while it is $|\vecp|_{\rm *}\xspace \simeq \tilde{q}\xspace - 2|\vecl|\xspace^2/(\tilde{q}\xspace\,\delta \ell\xspace^4)$ in the latter. Although the shift of the maximum is negative, its magnitude in the backwards direction is enhanced by a factor of $(|\vecl|\xspace/\delta \ell\xspace)^2$. We note that this enhancement should not be fully trusted, as it originates from a $|\vecp|_{\rm *}\xspace$ whose value lies outside or is on the boundary of this subregion, such that $\big||\vecp|_{\rm *}\xspace -\tilde{q}\xspace\big|\,\delta \ell\xspace^2\geq |\bvec{\hat p}\xspace \cdot \pmb{\ell}\xspace|_{\rm *}$.
The matrix element~\refs{eq:M_sub-III} is obtained by ignoring several higher order corrections. For example, the first term is accurate up to $\mathcal{O} \big[|\vecl|\xspace/(|\vecp|\xspace^2 \, \delta \ell\xspace^3)\big]$. Although such terms are found to be subdominant, they need to be included in order to obtain an accurate numerical value for the matrix element~$M$. Nevertheless, we find that the relative numerical difference between the two expressions in~\refs{eq:M_sub-III} and~\refs{eq:matrix_element_def} is typically at the $20\%$ level.
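The forward-direction estimate $|\vecp|_{\rm *}\xspace \simeq \tilde{q}\xspace - 2/(\tilde{q}\xspace\,\delta \ell\xspace^2)$ is easy to reproduce numerically from the first term of~\refs{eq:M_sub-III}, keeping only the leading $|\vecp|\xspace\,\delta \ell\xspace$ piece of its denominator, as done in the estimate above (a Python/NumPy sketch):

```python
import numpy as np

q, dl = 5.0, 1.0

# forward direction (theta = 0): modulus of the first term of Eq. (M_sub-III),
# with only the leading |p|*dl piece of the denominator retained, as in the
# text's estimate |p|_* ~ q - 2/(q dl^2)
def absM_fwd(p):
    return np.sqrt(np.pi)/2.0 * dl**2 * np.exp(-0.25*(p - q)**2*dl**2) / (p*dl)

p_grid = np.linspace(3.0, 6.0, 300001)
p_star = p_grid[np.argmax(absM_fwd(p_grid))]
print(p_star, q - 2.0/(q*dl**2))   # ~4.56 (exact stationary point) vs 4.6 (linearised)
```

The exact stationary point of this simplified form solves $|\vecp|\xspace^2 - \tilde{q}\xspace\,|\vecp|\xspace + 2/\delta \ell\xspace^2 = 0$, of which the quoted value is the first-order expansion in the shift.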
\subsubsection{Standard S-matrix limit in the Fresnel region}
The standard S-matrix limit is a special case of Subregions~\textbf{I} and~\textbf{II}. For the former, taking the limit $|\vecl|\xspace/\delta \ell\xspace \to 0$ is a straightforward exercise. For the latter, thanks to the $\delta$-function representation
\begin{equation}
\displaystyle \lim_{\epsilon \to \infty} \epsilon\ e^{ - \epsilon^2 \, x^2 } \to \sqrt{\pi} \delta(x) \;,
\label{eq:delta_def}
\end{equation}
it can be shown that
\begin{equation}
M\, =\ \dfrac{ i \pi}{2} \dfrac{ e^{i \tilde{q}\xspace\, \bvec{\hat p}\xspace\cdot \pmb{\ell}\xspace }}{|\vecp|\xspace} \ \delta(|\vecp|\xspace - \tilde{q}\xspace) \;.
\label{eq:M_res_dl_inf_start}
\end{equation}
This last expression can be rewritten as
\begin{equation}
M\, =\ i \pi \ e^{i \bvec p\xspace \cdot \pmb{\ell}\xspace} \ \delta_{+}(|\vecp|\xspace^2 - \tilde{q}\xspace^2) \;,
\label{eq:M_res_dl_inf_final}
\end{equation}
where $\delta_{+}(|\vecp|\xspace^2 - \tilde{q}\xspace^2) \equiv \delta (|\vecp|\xspace^2 - \tilde{q}\xspace^2)\: \theta (|\vecp|\xspace )$.
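The representation~\eqref{eq:delta_def} underlying this limit can be confirmed numerically by integrating the Gaussian against a smooth test function (a Python/SciPy sketch; the test function is an arbitrary choice):

```python
import numpy as np
from scipy.integrate import quad

# check of eps * exp(-eps^2 x^2) -> sqrt(pi) delta(x): integrating against a
# smooth test function f must tend to sqrt(pi) * f(0) as eps grows
f = lambda x: np.cos(x)*np.exp(-x**2/8.0)    # arbitrary smooth choice, f(0) = 1
vals = []
for eps in (10.0, 100.0, 1000.0):
    val, _ = quad(lambda x: eps*np.exp(-(eps*x)**2)*f(x), -1.0, 1.0, points=[0.0])
    vals.append(val/np.sqrt(np.pi))
print(vals)   # tends to f(0) = 1 as eps -> infinity
```

The tails outside the finite integration window are exponentially negligible here, and the corrections scale as $\mathcal{O}(1/\epsilon^2)$, consistent with the $\delta \ell\xspace \to \infty$ limit taken in the text.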
Finally, as $\delta \ell\xspace / |\vecl|\xspace \to \infty$ for any finite value of $|\vecl|\xspace$, we may combine the approximate expressions in~\refs{eq:M_sub-I,eq:M_sub-II} in order to cast the matrix element into the more familiar form (e.g., see~\cite{Schwartz:2014sze}):
\begin{equation}
M\, =\, \ e^{i \bvec p\xspace \cdot \pmb{\ell}\xspace} \ \bigg[ i \pi \ \delta_{+}(|\vecp|\xspace^2 - \tilde{q}\xspace^2) + \mathcal{P}\bigg\{\dfrac{1}{|\vecp|\xspace^2 - \tilde{q}\xspace^2}\bigg\} \bigg] \;,
\label{eq:M_dl_inf}
\end{equation}
where $\mathcal{P}\{ \, \dots\}$ denotes the Cauchy principal value. Also, notice the appearance of an overall (unobservable) $\pmb{\ell}\xspace$-dependent phase in~\eqref{eq:M_dl_inf}, which is due to the spatial translation invariance of the original non-local amplitude in~\refs{eq:amplitude_shift}. Otherwise, the matrix element~$M$ in~\eqref{eq:M_dl_inf} matches exactly with the standard result of the S-matrix amplitude in~\eqref{eq:amplitude_std}.
\subsubsection{Oscillations of mixed mediators in the Fresnel region}
The approximations,~\refs{eq:M_sub-II,eq:M_sub-III}, of the matrix element in the Fresnel Subregions~\textbf{II} and~\textbf{III} indicate that the propagation of mixed mediators will also give rise to their oscillation in this regime, where ${|\vecp|\xspace \, \delta \ell\xspace^2 \gg |\vecl|\xspace}$. First, we should observe that exponential suppression factors, such as those mentioned in~\cite{Beuthe:2001rc}, play no role here, as they are direction-dependent and vanish not only in the forward direction ($\theta = 0$), but also in the backward direction ($\theta = \pi$).
Let us have a closer look at the phenomenon of oscillations within Subregion~\textbf{III} in the forward direction~(${\theta =0}$). As a mixed system of mediators, we may consider two exchanged particles, $\Phi_{1,2}$, with different masses, $m_{1,2}$, obeying the hierarchy: $0< \Delta m = m_2 - m_1 \ll m_{1,2}$. After setting all coupling constants of the theory to 1, for simplicity, the total matrix element $M$ will be the sum of two matrix elements, $M_{1,2}$, describing the exchange of the particles $\Phi_{1,2}$ in the $s$-channel, i.e.~$M = M_1 + M_2$. Each matrix element~$M_{1,2}$ can then be expanded according to~\refs{eq:M_sub-III}, with $\tilde{q}\xspace^2_{1,2} = (p^0)^2 - m^2_{1,2}$. Although the moduli $|M_{1,2}|$ individually show no significant dependence on $|\vecl|\xspace$, there is still a phase difference between $M_1$ and $M_2$, given by $\exp\big[ i(\tilde{q}\xspace_1 - \tilde{q}\xspace_2)\,\bvec{\hat p}\xspace\cdot \pmb{\ell}\xspace\big]$. Obviously, this phase difference induces an oscillating pattern in $|M|$ with oscillation length: ${L_\text{osc} = |\tilde{q}\xspace_1 - \tilde{q}\xspace_2|^{-1}}$. This pattern is exactly the same as in the frequently discussed Fraunhofer zone, but its amplitude $|M|$ stays almost constant with the distance $|\vecl|\xspace$, as for {\em plane waves}, as observed in~\cite{Ioannisian:1998ch,Naumov:2013bea}, rather than decreasing as $1/|\vecl|\xspace$, as for spherical waves. A~similar conclusion would be reached if we had considered Subregion~{\bf II}, which represents a region in the deep Fresnel zone, as it lies much closer to the QM center~$\svev{ \bvec x}\xspace$ of the source. Here, we must caution the reader that statistical uncertainties, $\sigma_{\pmb{\ell}\xspace}$, play a significant role in oscillations.
These are usually larger than~$\delta \ell\xspace$, i.e.~$\sigma_{\pmb{\ell}\xspace} \gtrsim \delta \ell\xspace$, and so they will reduce the amplitude of oscillations, at least by a factor $L_{\rm osc}/\sigma_{\pmb{\ell}\xspace} \ll 1$, in oscillation scenarios with $L_{\rm osc} \ll \delta \ell\xspace$~\cite{Ioannisian:1998ch,Beuthe:2001rc}.
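A minimal numerical illustration of this oscillation pattern (Python/NumPy, with illustrative masses and energy; equal moduli $|M_1| = |M_2|$ are assumed for simplicity, in line with the near-constant amplitudes found above):

```python
import numpy as np

p0 = 10.0                        # common energy p^0 (illustrative)
m1, m2 = 1.0, 1.1                # mediator masses, 0 < m2 - m1 << m1, m2
q1 = np.sqrt(p0**2 - m1**2)      # qtilde_1
q2 = np.sqrt(p0**2 - m2**2)      # qtilde_2
L_osc = 1.0/abs(q1 - q2)         # oscillation length L_osc = |q1 - q2|^-1

# equal moduli |M_1| = |M_2| = a assumed (forward direction of Subregion III);
# only the relative phase (q1 - q2) * l survives in |M_1 + M_2|^2
a = 1.0
ell = np.linspace(0.0, 4*np.pi*L_osc, 4001)
M2_tot = np.abs(a*np.exp(1j*q1*ell) + a*np.exp(1j*q2*ell))**2
# M2_tot = 2 a^2 [1 + cos((q1 - q2) l)]: full swing between 0 and 4 a^2
print(L_osc)
```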
In the backwards direction ($\theta = \pi$) of Subregion~{\bf III}, the last term on the RHS of~\refs{eq:M_sub-III} will dominate, and so $|M|=|M_1+M_2|$ will have a tiny oscillating amplitude. As a result, there will be no visible oscillations in this region. As for Subregion~{\bf I}, it is worth commenting that it is a kinematic region signifying a highly off-shell regime of particle propagation, since we have the condition: $\big||\vecp|\xspace - \tilde{q}\xspace\big| \gg |\vecl|\xspace/\delta \ell\xspace^2$, specifically~in the forward (backwards) direction where~${\theta = 0\,(\pi)}$. According to the analytic matrix-element approximation in~\eqref{eq:M_sub-I}, no oscillations from mixed mediators will take place in this subregion.
In conclusion, the predictions derived from our non-local S-matrix theory can be tested against experiments designed to measure directional dependence of interactions, as well as particle oscillations. The latter may not only take place in the usually considered Fraunhofer region which lies far away from the source, but also within the Fresnel zone as we have explicitly demonstrated here.
\section{Exact Results}\label{sec:numerics}
\setcounter{equation}{0}
Thus far, we have established that in both the Fresnel and Fraunhofer regions no kinematic singularities occur in the non-local matrix element $M$ given in~\refs{eq:matrix_element_def}. In addition, we have examined how the maximum of $|M|$ depends on the kinematic parameters, and also have shown that detection in the backwards direction is generally suppressed. In this section, we present typical numerical examples using the exact matrix element $M$ in~\refs{eq:matrix_element_def}, in order to analyse with greater accuracy its dependence on the momentum $|\vecp|\xspace$, the distance $|\pmb{\ell}\xspace|$, and the angle~$\theta$ (with $0\le \theta \le \pi$). For~illustration, we choose the radius of interaction $\delta \ell\xspace$ to be: $\delta \ell\xspace=1~{\rm GeV}\xspace^{-1}$, and fix the momentum of the exchanged $\Phi$ particle to have the value: $\tilde{q}\xspace=5~{\rm GeV}\xspace$. The~other parameters that appear in $M$ are varied independently, in order to showcase the various phenomena within the different near- and far-field regimes of interest, and at their interfacial regions.
\subsection{Momentum Dependence}
\begin{figure}[t!]
\centering \hspace*{-1.cm}
\begin{subfigure}[b]{0.5\textwidth}
\centering\includegraphics[width=1\textwidth ]{M_vs_p-far.pdf}
\caption{\label{fig:M_vs_p-far}}
\end{subfigure}%
\begin{subfigure}[b]{0.5\textwidth}
\centering\includegraphics[width=1\textwidth ]{M_vs_p-near.pdf}
\caption{\label{fig:M_vs_p-near}}
\end{subfigure}%
\caption{
{\bf (a)} The ratio $|M|/|M|_{|\vecp|\xspace=0}$ as a function of $|\vecp|\xspace$ for different angles $\theta$ in the Fraunhofer zone, with $|\vecl|\xspace=100~{\rm GeV}\xspace^{-1}$, $\delta \ell\xspace=1~{\rm GeV}\xspace^{-1}$, and $\tilde{q}\xspace = 5~{\rm GeV}\xspace$.
%
{\bf (b)} $|M|/|M|_{|\vecp|\xspace=0}$ versus~$|\vecp|\xspace$ for different angles $\theta$ in the Fresnel zone, with $|\vecl|\xspace=2~{\rm GeV}\xspace^{-1}$, $\delta \ell\xspace=1~{\rm GeV}\xspace^{-1}$, and $\tilde{q}\xspace = 5~{\rm GeV}\xspace$. In both (a) and (b) panels, the gray vertical line corresponds to $|\vecp|\xspace = \tilde{q}\xspace$.
}
\label{fig:M_vs_p}
\end{figure}
To start with, let us first consider the Fraunhofer region. In this region, the non-local matrix element $M$ may be approximated as in~\refs{eq:M_far} and is exponentially dependent on the initial momentum~$|\vecp|\xspace$. Like $|M|$ itself, its maximum also depends significantly on the angle $\theta$ between the vectors $\bvec p\xspace$ and $\pmb{\ell}\xspace$. In Figure~\ref{fig:M_vs_p-far}, we display a numerical example, which shows the value of the matrix element (over its value for $|\vecp|\xspace=0$) in the far-field regime for a set of different angles. The distance between production and detection is set to $|\vecl|\xspace=100~{\rm GeV}\xspace^{-1}$. As expected, the forward direction ($\theta=0$) corresponds to the maximum values of $|M|$, while larger observation angles $\theta$ give rise to lower values of $|M|$. The (global) maximum in the forward direction is obtained for $|\vecp|_{\rm *}\xspace=\tilde{q}\xspace$, while $|\vecp|_{\rm *}\xspace = \tilde{q}\xspace /\sqrt{2}$ when $\theta = \pi/4$. If observation occurs towards the backwards hemisphere ($\theta \geq \pi/2$), the matrix element $M$ suffers a monotonic exponential suppression with increasing $|\vecp|\xspace$.
We now turn our attention to the Fresnel zone, in which $|\vecp|\xspace \, \delta \ell\xspace^2 > |\vecl|\xspace$. To this end, we show in \Figs{fig:M_vs_p-near} how $|M|/|M|_{|\vecp|\xspace=0}$ changes with the momentum $|\vecp|\xspace$ in this zone. In this near-field region, the matrix element exhibits a more complicated dependence on $\bvec p\xspace$ and $\theta$. This is evident from the different forms we obtained in Section~3.2 for Subregions~\textbf{I},~\textbf{II}, and~\textbf{III} [cf.~\refs{eq:M_sub-I,eq:M_sub-II,eq:M_sub-III}]. Because of the specific choice of the spatial parameters ($|\vecl|\xspace = 2~{\rm GeV}\xspace^{-1}$, $\delta \ell\xspace = 1~{\rm GeV}\xspace^{-1}$), the Fresnel subregion being probed depends strongly on the observation angle~$\theta$.
For $\big||\vecp|\xspace - \tilde{q}\xspace\big|\delta \ell\xspace \gg {\rm max}\big( 1,\;|\bvec{\hat p}\xspace\cdot\pmb{\ell}\xspace|/\delta \ell\xspace\big)$, all angles fall in Subregion~\textbf{I}. To be more precise, we observe that as $|\vecp|\xspace$ surpasses $\tilde{q}\xspace$ all lines converge to the asymptotic curve of $|M| \propto 1/|\vecp|\xspace^2$, as expected from~\refs{eq:M_sub-I}.
If the mediator happens to be kinematically close to its mass shell, i.e.~when $\big||\vecp|\xspace - \tilde{q}\xspace\big| \, \delta \ell\xspace \ll 1$, according to the approximate matrix elements~\refs{eq:M_sub-II,eq:M_sub-III}, we expect to find maxima at initial momenta both above and below $\tilde{q}\xspace$.
In the perpendicular direction $(\theta =\pi/2)$, if the resonant condition, $|\vecp|\xspace \simeq \tilde{q}\xspace$, is satisfied, the observation vertex is always in Subregion~\textbf{II}. Hence, as $|\bvec{\hat p}\xspace \times \pmb{\ell}\xspace|>\delta \ell\xspace$, the maximum occurs when $|\vecp|_{\rm *}\xspace \gtrsim \tilde{q}\xspace$ [cf.~\refs{eq:pmax_sub-II}].
Unlike the case $\theta =\pi/2$, the angles $\theta = 0$ and $\theta = \pi$ lie entirely in Subregion~\textbf{III} when $|\vecp|\xspace \simeq \tilde{q}\xspace$. Since $|\vecl|\xspace = 2\delta \ell\xspace$, the projection of $\pmb{\ell}\xspace$ on $\hat\bvec p\xspace$ is not well beyond the interaction radius, $\delta \ell\xspace$. Therefore, the estimate of the matrix element in Subregion~\textbf{III} [cf.~\refs{eq:M_sub-III}] may not be accurate. Nevertheless, in the forward direction, we observe that $|\vecp|_{\rm *}\xspace$ is close to the position of the maximum of the matrix element~$|M|$ as approximated in~\refs{eq:M_sub-III}, {i.e.}\xspace $|\vecp|_{\rm *}\xspace \simeq \tilde{q}\xspace -2/(\tilde{q}\xspace\,\delta \ell\xspace^2) \simeq 4.6~{\rm GeV}\xspace$. On the other hand, for $\theta=\pi$, the maximum of $|M|$ in~\refs{eq:M_sub-III} occurs at $|\vecp|_{\rm *}\xspace \simeq \tilde{q}\xspace - 2|\vecl|\xspace^2/(\tilde{q}\xspace\delta \ell\xspace^4) \simeq 3.4~{\rm GeV}\xspace$. This results in $\big||\vecp|_{\rm *}\xspace -\tilde{q}\xspace\big|\,\delta \ell\xspace^2 \simeq 1.6~{\rm GeV}\xspace^{-1} \sim |\hat\bvec p\xspace \cdot \pmb{\ell}\xspace|_{\rm *}$, which indicates that this estimate may not be applicable in this case. Indeed, as shown in \Figs{fig:M_vs_p-near}, a numerical evaluation reveals a monotonic decrease of the exact matrix element~\refs{eq:matrix_element_def} as $|\vecp|\xspace$ increases.
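The two numerical estimates just quoted are simple arithmetic consequences of the Subregion~\textbf{III} formula with $|\vecl|\xspace = 2~{\rm GeV}\xspace^{-1}$, $\delta \ell\xspace = 1~{\rm GeV}\xspace^{-1}$ and $\tilde{q}\xspace = 5~{\rm GeV}\xspace$; the short check below reproduces them (variable names are ours, not the paper's notation):

```python
# Reproduce the Subregion III estimates quoted in the text,
# using |l| = 2 GeV^-1, dl = 1 GeV^-1 and q = 5 GeV.
q, dl, l = 5.0, 1.0, 2.0

p_star_forward  = q - 2.0 / (q * dl**2)         # theta = 0
p_star_backward = q - 2.0 * l**2 / (q * dl**4)  # theta = pi
gap = abs(p_star_backward - q) * dl**2          # |p* - q| dl^2, in GeV^-1

print(p_star_forward, p_star_backward, gap)     # expect ~4.6, ~3.4, ~1.6
```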
Directions with $\theta = \pi/4$ and $\theta = 3\pi/4$ turn out to lie close to the boundary between Subregions~\textbf{II} and~\textbf{III}, and have maxima at $|\vecp|_{\rm *}\xspace \lesssim \tilde{q}\xspace$. In general, we see that, apart from the successful regularisation of the singularity at $|\vecp|\xspace = \tilde{q}\xspace$, the matrix element exhibits distinguishable qualitative behaviour in the different zones. This may be exploited by experiments, both in searches for new particles and in studies of other potential implications of this formalism.
\begin{figure}[t!]
\centering\includegraphics[width=0.5\textwidth ]{p-res-shift.pdf}
\caption{
The characteristic momentum $|\vecp|_{\rm *}\xspace$, for which the maximum value of $|M|$ is attained, as a function of the mean distance $|\vecl|\xspace$, for different angles~$\theta$, with $\delta \ell\xspace = 1~{\rm GeV}\xspace^{-1}$, $\tilde{q}\xspace = 5~{\rm GeV}\xspace$. In the near-field region, in which $|\vecl|\xspace \ll |\vecp|\xspace \, \delta \ell\xspace^2$, all detection angles $\theta$ converge to the same $|\vecp|_{\rm *}\xspace$ value, which does not coincide with the usual OS momentum $\tilde{q}\xspace$. In the far-field region, where $|\vecl|\xspace \gg |\vecp|\xspace \, \delta \ell\xspace^2$, $|\vecp|_{\rm *}\xspace$ is highly $\theta$-dependent. In the forward direction, $|\vecp|_{\rm *}\xspace = \tilde{q}\xspace$ as expected, while $|\vecp|_{\rm *}\xspace < \tilde{q}\xspace$ for $\theta>0$. For angles $\theta > \pi/2$, $|\vecp|_{\rm *}\xspace \to 0$, since the matrix element $|M|$ decreases monotonically with $|\vecp|\xspace$. }
\label{fig:p-res-shift}
\end{figure}
The aforementioned shift of $|\vecp|_{\rm *}\xspace$ for various distances and angles is illustrated in~\Figs{fig:p-res-shift}. To be specific, \Figs{fig:p-res-shift} shows $|\vecp|_{\rm *}\xspace$ as a function of $|\vecl|\xspace$, for discrete choices of directions between $\theta = 0$ and $\theta = \pi$. The values of $|\vecp|_{\rm *}\xspace$ in the Fraunhofer zone agree with~\refs{eq:M_far}. That is, $|\vecp|_{\rm *}\xspace = 0$ for $\theta \geq \pi/2$, $|\vecp|_{\rm *}\xspace = \tilde{q}\xspace/\sqrt{2}$ for $\theta = \pi/4$, and $|\vecp|_{\rm *}\xspace = \tilde{q}\xspace$ for $\theta = 0$.
Because of the assumed values of the input parameters, the resulting $|\vecp|_{\rm *}\xspace$ cannot be estimated by~\refs{eq:M_sub-I,eq:M_sub-II,eq:M_sub-III}, for a wide range of~$|\vecl|\xspace$ values in the Fresnel zone. However, \Figs{fig:p-res-shift} still reflects the behaviour expected from these estimates. In particular, as $|\vecl|\xspace/\delta \ell\xspace \to 0$, we expect the maximum to occur at $|\vecp|_{\rm *}\xspace \lesssim \tilde{q}\xspace$, as the matrix element is described by~\refs{eq:M_sub-II}, as long as $\big||\vecp|_{\rm *}\xspace -\tilde{q}\xspace\big|\delta \ell\xspace \ll 1$.
At greater distances, $|\vecp|_{\rm *}\xspace$ can be both below and above $\tilde{q}\xspace$. Consider, for example, the numerical estimates of $|\vecp|_{\rm *}\xspace$, for $\theta=\pi/4$ in~\Figs{fig:p-res-shift}. As the distance between production and detection increases, the corresponding Fresnel subregion changes from~\textbf{II} to~\textbf{III}. This causes $|\vecp|_{\rm *}\xspace$ to increase between $|\vecl|\xspace \ll \delta \ell\xspace$ and $|\vecl|\xspace \simeq 2~{\rm GeV}\xspace^{-1}$. As the distance $|\vecl|\xspace$ grows even larger, $|\vecp|_{\rm *}\xspace$ moves towards its value found in the Fraunhofer region, {i.e.}\xspace it falls to $|\vecp|_{\rm *}\xspace \simeq \tilde{q}\xspace/\sqrt{2}$. This results in the maximum we observe in \Figs{fig:p-res-shift} around $|\vecl|\xspace = 2~{\rm GeV}\xspace^{-1}$.
In the perpendicular direction ($\theta=\pi/2$), if the resonant condition ($\big||\vecp|_{\rm *}\xspace - \tilde{q}\xspace\big|\delta \ell\xspace \ll 1$) is satisfied, observation occurs in Subregion~\textbf{II} for any distance $|\vecl|\xspace \lesssim |\vecp|_{\rm *}\xspace \, \delta \ell\xspace^2$. According to~\refs{eq:pmax_sub-II}, this means that $|\vecp|_{\rm *}\xspace$ starts lower than $\tilde{q}\xspace$, and increases as $|\bvec{\hat p}\xspace \times \pmb{\ell}\xspace|$ surpasses $\delta \ell\xspace$. Once $|\vecl|\xspace$ approaches the boundary with the Fraunhofer zone, $|\vecp|_{\rm *}\xspace$ drops to $|\vecp|_{\rm *}\xspace=0$, around $|\vecl|\xspace = 2.5~{\rm GeV}\xspace^{-1}$.
\subsection{Spatial Dependence}
One important aspect of the non-local S-matrix theory under consideration is its introduction of an explicit dependence on the average distance vector $\pmb{\ell}\xspace$ between the production and detection vertices. This explicit radial dependence has been used to model neutrino oscillations~\cite{Ioannisian:1998ch}, but it can also be used for other studies, such as displaced vertex searches~\cite{LHCb:2014osd,Bondarenko:2019tss} for long-lived particles. Therefore, it is worth examining the predictions derived from the exact matrix element $M$ in~\refs{eq:matrix_element_def} in the Fraunhofer zone, the three Fresnel subregions, and all their interfaces.
To illustrate the spatial dependence of the exact matrix element $M$~in~\refs{eq:matrix_element_def}, we show in \Figs{fig:M_vs_l_on-shell} the ratio $|M|/|M|_{|\vecl|\xspace=0}$ as a function of~$|\vecl|\xspace$ in the OS region where~${|\vecp|\xspace =\tilde{q}\xspace}$, for selected values of $\theta$ between $0$ and $\pi$.
\begin{figure}[t!]
\centering \hspace*{-1.cm}
\begin{subfigure}[b]{0.5\textwidth}
\centering\includegraphics[width=1\textwidth ]{M_vs_l_on-shell.pdf}
\caption{\label{fig:M_vs_l_on-shell}}
\end{subfigure}%
\begin{subfigure}[b]{0.5\textwidth}
\centering\includegraphics[width=1\textwidth ]{M_vs_l_off-shell.pdf}
\caption{\label{fig:M_vs_l_off-shell}}
\end{subfigure}%
\caption{
{\bf (a)} The ratio $|M|/|M|_{|\vecl|\xspace=0}$ versus distance $|\vecl|\xspace$, for $\delta \ell\xspace=1~{\rm GeV}\xspace^{-1}$, $|\vecp|\xspace=\tilde{q}\xspace=5~{\rm GeV}\xspace$ and discrete choices of the angle $\theta$ between $0$ and $\pi$.
%
{\bf (b)} The same as in the left frame~(a), but with $|\vecp|\xspace=10~{\rm GeV}\xspace$.
}
\label{fig:M_vs_l}
\end{figure}
As can be seen from this figure for $\theta = 0$ (in black), there is a maximum at a location away from the source. This implies a greater flux of outgoing particles in the forward direction. This is a distinct prediction that originates from our non-local S-matrix amplitude and might well be tested in dedicated experiments.
There is a significant dependence on the direction of $\pmb{\ell}\xspace$, and as $\theta$ gets larger, the matrix element $M$ displays no local maximum. In particular, for $\theta \geq \pi /2$, the exact $M$ decreases exponentially between $|\vecl|\xspace \simeq \delta \ell\xspace$ and $|\vecl|\xspace \simeq |\vecp|\xspace \, \delta \ell\xspace^2$ (the vertical gray lines). This means that propagation towards the backwards hemisphere~({$\theta \ge \pi/2$}) is suppressed, which is consistent with our findings for the near- and far-field approximations in~\refs{eq:M_far,eq:M_sub-II,eq:M_sub-III}. Finally, it is interesting to notice that in the Fraunhofer region ($|\vecl|\xspace \gg |\vecp|\xspace\,\delta \ell\xspace^2$), the matrix element evaluated at a point in the forward direction ($\theta = 0$) is more than ten orders of magnitude larger than its value at an equidistant point, lying in the backwards direction~($\theta =\pi$).
In \Figs{fig:M_vs_l_off-shell} we now show the spatial dependence of the exact matrix element in~\refs{eq:matrix_element_def} for an off-shell
kinematic configuration, with $|\vecp|\xspace=10~{\rm GeV}\xspace$, while the rest of the parameters are as in~\Figs{fig:M_vs_l_on-shell}. We observe the absence of a maximum away from the origin in any direction. Also, for $|\vecl|\xspace \lesssim 2~{\rm GeV}\xspace^{-1}$, all directions result in similar values of $|M|$, as expected from~\refs{eq:M_sub-I}. This means that off-shell propagation close to the interaction area can occur in all directions with equal probability. However, at greater distances, the forward direction is preferred, as $|M|$ falls off exponentially for larger angles. As in the on-shell case, at $|\vecl|\xspace \gg |\vecp|\xspace \, \delta \ell\xspace^2$, all $\theta$ angles predict a matrix element $|M| \propto 1/|\vecl|\xspace$.
\begin{figure}[t!]
\centering \hspace*{-0.35cm}
\begin{subfigure}[b]{0.5\textwidth}
\centering\includegraphics[width=1\textwidth ]{contour_norm_on-shell.pdf}
\caption{\label{fig:contour_norm_on-shell}}
\end{subfigure}%
\hspace*{0.2cm}
\begin{subfigure}[b]{0.5\textwidth}
\centering\includegraphics[width=1\textwidth ]{contour_on-shell.pdf}
\caption{\label{fig:contour_on-shell}}
\end{subfigure}%
\caption{
{\bf (a)} Numerical estimates of the exact matrix element $|M|$ in~(\ref{eq:matrix_element_def}) normalised by its value in the forward direction, $|M|_{\theta=0}$. The red contours show specific values for $|M|/|M|_{\theta=0} =0.8,\,0.4,\,4 \times 10^{-11},\text{ and }2 \times 10^{-11}$.
%
{\bf (b)} The same as in (a), but for $|M|$ normalised by its value $|M|_{|\vecl|\xspace=0}$ at the origin $|\vecl|\xspace=0$. The red contours delineate the curves on which the aforementioned ratio equals $1.5$ and $0.5$. The black point, lying just outside the circle $|\vecl|\xspace=\delta \ell\xspace$, indicates the maximum of $|M|/|M|_{|\vecl|\xspace=0}\simeq 1.7$.
%
In both panels (a) and (b), the values of the parameters are taken as in Figure~\ref{fig:M_vs_l_on-shell}. The gray circles show $|\vecl|\xspace = \delta \ell\xspace$, $|\vecl|\xspace = |\vecp|\xspace \, \delta \ell\xspace^2 $ (the boundary between the Fraunhofer and Fresnel regions), and $|\vecl|\xspace=10 \, |\vecp|\xspace \, \delta \ell\xspace^2$. The various colours show the order of magnitude of the ratios: $|M|/|M|_{\theta=0}$ in (a) and $|M|/|M|_{|\vecl|\xspace=0}$ in (b), for a given distance~$|\vecl|\xspace$ and angle~$\theta$.
}
\label{fig:contours_on-shell}
\end{figure}
In \Figs{fig:contours_on-shell,fig:contours_off-shell}, we present, as polar density plots, the radial and angular dependence of the exact matrix element $M$ in~\refs{eq:matrix_element_def} for on-shell and off-shell kinematic configurations of the mediator propagator.
More explicitly, in \Figs{fig:contour_norm_on-shell}, we show $|M|$ normalised with respect to its value at $\theta =0$ for $\delta \ell\xspace=1~{\rm GeV}\xspace^{-1}$ and $|\vecp|\xspace = \tilde{q}\xspace = 5~{\rm GeV}\xspace$. The radial parameter is $|\vecl|\xspace$, with the three grey concentric circles indicating $|\vecl|\xspace = \delta \ell\xspace$, $|\vecp|\xspace \, \delta \ell\xspace^2$, and $10\,|\vecp|\xspace \, \delta \ell\xspace^2$. The various colours represent the order of magnitude of $|M|/|M|_{\theta=0}$, and we explicitly show four curves (in red) with the values $|M|/|M|_{\theta=0} = 0.8,\,0.4,\,4\times10^{-11}$, and $2 \times 10^{-11}$. As expected, in the far-field region the radial dependence on~$|\vecl|\xspace$ cancels out, as $|M|/|M|_{\theta=0} \simeq e^{|\vecp|\xspace \tilde{q}\xspace \, \delta \ell\xspace^2 (\cos\theta - 1)/2}$. Furthermore, in the near-field region, the radial dependence cannot be factored out, as implied by~\refs{eq:M_sub-II} and~\eqref{eq:M_sub-III}. As a result, the spatial pattern in the Fresnel zone displays a strong angular dependence.
Focusing on angles $\theta \lesssim \pi/4$, {e.g.}\xspace~looking at the curve for $|M|/|M|_{\theta = 0} =0.8$, both $|\vecl|\xspace$ and~$\theta$ increase for $|\vecl|\xspace > \delta \ell\xspace$, in order to keep the ratio $|M|/|M|_{\theta = 0}$ constant. However, as $|\vecl|\xspace$ approaches $|\vecp|\xspace \, \delta \ell\xspace^2$, $\theta$~starts decreasing. Thus, the contour $|M|/|M|_{\theta = 0} =0.8$ has a non-trivial shape as observation moves from the near-field to the far-field zone.
In \Figs{fig:contour_on-shell}, we display the norm of the matrix element over its value at the origin, $|M|/|M|_{|\vecl|\xspace=0}$, for the same parameters as in~\Figs{fig:M_vs_l_on-shell}. The various colours represent the order of magnitude of $|M|/|M|_{|\vecl|\xspace=0}$, along with the two curves (in red) for $|M|/|M|_{|\vecl|\xspace=0} = 1.5$~and~$0.5$. We also indicate with a black dot the point where the global maximum occurs. This figure shows the overall $|\vecl|\xspace$ and $\theta$ dependence of $|M|$. We observe that in the far-field regime, the behaviour of $|M|/|M|_{|\vecl|\xspace=0}$ matches perfectly well with that predicted by~\refs{eq:M_far}. Like in~\Figs{fig:contour_norm_on-shell}, there is an effective boundary at $|\vecl|\xspace \simeq |\vecp|\xspace \, \delta \ell\xspace^2$ that severely restricts propagation in the backwards hemisphere ($\theta \ge \pi/2$). \Figs{fig:contour_on-shell} also shows how the maximum observed in \Figs{fig:M_vs_l_on-shell} changes for different angles and distances. We notice that $|M|$ only increases for $\theta<\pi/2$ and reaches a maximum indicated by a black dot at which $|M|/|M|_{|\vecl|\xspace = 0} \simeq 1.7$. This maximum occurs at a distance marginally larger than $\delta \ell\xspace$ at $\theta=0$.
\begin{figure}[t!]
\centering \hspace*{-0.35cm}
\begin{subfigure}[b]{0.5\textwidth}
\centering\includegraphics[width=1\textwidth ]{contour_norm_off-shell.pdf}
\caption{\label{fig:contour_norm_off-shell}}
\end{subfigure}%
\hspace*{0.2cm}
\begin{subfigure}[b]{0.5\textwidth}
\centering\includegraphics[width=1\textwidth ]{contour_off-shell.pdf}
\caption{\label{fig:contour_off-shell}}
\end{subfigure}%
\caption{
{\bf (a)} The ratio $|M|/|M|_{\theta=0}$ in polar coordinates $(|\vecl|\xspace,\, \theta)$ in the Fresnel zone, for the same input parameters as in Figure~\ref{fig:M_vs_l_off-shell}. The red curves correspond to $|M|/|M|_{\theta=0} = 0.5$ and $1.1$. The maximum value of this ratio is $|M|/|M|_{\theta=0} = 1.7$ (black dots), obtained approximately along the perpendicular direction $\theta \simeq \pi/2$.
%
{\bf (b)} The ratio $|M|/|M|_{|\vecl|\xspace=0}$ in the Fresnel region, for the same input parameters as in~(a).
%
The red curves correspond to $|M|/|M|_{\theta=0} = 0.5,\,0.1$, and $10^{-3}$. In contrast to the on-shell case, the maximum value of this ratio corresponds to $|\vecl|\xspace \simeq 0$ in the forward direction.
}
\label{fig:contours_off-shell}
\end{figure}
Although an off-shell mediator will generate a similar far-field pattern (for $\tilde{q}\xspace^2>0$), the Fresnel regime is quantitatively different. This is shown in \Figs{fig:contour_norm_off-shell}, where we compute $|M|/|M|_{\theta=0}$ for the same parameters as in~\Figs{fig:contour_norm_on-shell}, but for the off-shell point: ${|\vecp|\xspace=2\tilde{q}\xspace}$. In~contrast to the OS case ($|\vecp|\xspace = \tilde{q}\xspace$), this ratio exhibits maxima away from the origin, in the perpendicular direction~$(\theta = \pi/2)$, which are indicated symmetrically with two black dots. Interestingly, the ratio is larger than~$1$, even towards the backwards direction well within the Fresnel zone. However, close to the interface between the near- and far-field zone, there is a drastic exponential suppression when $\theta > \pi/2$, similar to the one we saw in~\Figs{fig:contour_norm_on-shell}. At larger distances, the matrix element falls off as $1/|\vecl|\xspace$, in agreement with our expectations in the Fraunhofer regime [cf.~\eqref{eq:M_far}].
\Figs{fig:contour_off-shell} shows the spatial profile of $|M|/|M|_{|\vecl|\xspace=0}$ for an off-shell mediator with $|\vecp|\xspace=2\tilde{q}\xspace$. We observe that there are significant differences with its on-shell counterpart in~\Figs{fig:contour_on-shell}. Specifically, the curves, for which the ratios $|M|/|M|_{|\vecl|\xspace=0}$ are being kept constant, are almost independent of the angle well within the Fresnel zone. This property is also reflected in the approximation~\refs{eq:M_sub-I}, as well as in \Figs{fig:M_vs_l_off-shell}. Thus, close to the source, no preferred direction exists and propagation happens at all angles with almost equal probabilities. As $|\vecl|\xspace$ approaches $|\vecp|\xspace \, \delta \ell\xspace^2$, the forward direction becomes more favourable, whereas the matrix element for angles $\theta \gtrsim \pi/2$ falls off exponentially, in line with our numerical estimates in~\Figs{fig:M_vs_l_off-shell}.
In summary, we find that the numerical estimates presented here
by utilising the exact non-local matrix element~$M$ stated in~\eqref{eq:matrix_element_def} give firm support to the validity of the more intuitive Fresnel and Fraunhofer approximations discussed in Section~\ref{sec:approx}.
\section{Summary and Future Directions}\label{sec:summary}
\setcounter{equation}{0}
Non-locality is an inherent property of QM and plays an instrumental role in understanding non-local phenomena in many applications of modern quantum theory, ranging from simple two-particle quantum-entangled systems, like those that occur in an EPR experiment~\cite{Einstein:1935rr}, to more complex situations in quantum information and quantum technology~\cite{Alonso:2022oot}. Here, our aim was to extend this notion of non-locality to the standard S-matrix of QFT. In particular, we put forward a non-local S-matrix theory in which each particle interaction in a scattering process is taken to be localised in a volume of finite size. Evidently, such a non-local S-matrix theory assumes its standard S-matrix form when the infinite-spread limit in the localisation of all interactions is taken.
To gain insight into the non-local S-matrix formalism, we have considered a simple $2\to 2$ scattering process within an analytically solvable QFT model that was previously discussed in~\cite{Ioannisian:1998ch}. This solvable QFT model is based on two working hypotheses. First, we have taken the QM uncertainty in time, $\delta t$, to be much bigger than the combined QM
uncertainty $\delta \ell\xspace$ of the detector and the source. In fact, we have worked in the limit of~${\delta t \to \infty}$, which in turn implies that energy is conserved at each interaction vertex of the scattering process. Second, we have assumed that both the production and detection points of interaction have spatial spreads with spherical Gaussian form. The latter hypothesis enables us to carry out most of the complex integrations that we encounter, and so arrive at an analytic result that only depends on well-tabulated complementary error functions with complex arguments. In spite of the above assumptions, we should expect that the results presented here for the different near- and far-field zones will still be generically valid, up to obvious amendments, for other scenarios with QM localisations that go beyond the spherical approximation we have been studying here.
In the context of a solvable QFT model discussed earlier in~\cite{Ioannisian:1998ch}, we have derived several analytic approximations of the non-local S-matrix amplitude for detection regions that are either quite close to the source or very far from it. Adopting a terminology known from light diffraction in classical optics, we called these two regions interchangeably the near-field and far-field zones, or the Fresnel and Fraunhofer regions. The Fresnel (near-field) zone is confined to distances $|\vecl|\xspace$ from the source that lie in the interval, $0 \le |\vecl|\xspace \lesssim |\vecp|\xspace\,\delta \ell\xspace^2$, where $\bvec p\xspace$ is the net three-momentum of all particles in the initial or final state of the process. Instead, the Fraunhofer (far-field) region characterises the area far from the source, for which $|\vecl|\xspace \gg |\vecp|\xspace\,\delta \ell\xspace^2$.
We have found that the Fresnel zone may be subdivided into three subregions according to the values of the two dimensionless quantities, $|\bvec{\hat p}\xspace\cdot \pmb{\ell}\xspace |/\delta \ell\xspace$ and $\big||\vecp|\xspace - \tilde{q}\xspace \big|\,\delta \ell\xspace$. A more detailed description of Subregions {\bf I}, {\bf II} and {\bf III} is given in~Table~\ref{tab:fresnel_Subregion}. For all these three Fresnel subregions, we observed that
the on-shell transition amplitude $M$ does {\em not} fall off as $1/|\vecl|\xspace$ as a function of the distance~$|\vecl|\xspace$ between the source and the detector, thereby confirming the non-dispersive, plane-wave behaviour of $M$ in the forward direction, in agreement with earlier observations made first in~\cite{Ioannisian:1998ch}, and subsequently in~\cite{Beuthe:2001rc,Naumov:2013bea,Naumov:2022kwz} in different settings. Remarkably enough, in the same forward direction of propagation, we have observed a novel focusing phenomenon manifesting itself with the appearance of a small area where the magnitude $|M|$ of the transition amplitude can be higher than its value at the origin, where~$|\vecl|\xspace=0$. As expected, in the Fraunhofer region, we recover the usual $1/|\vecl|\xspace$ reduction of $|M|$. In both the near- and far-field regions, we have confirmed the phenomenon of oscillations if the mediators form a mixed system of particles, as is the case for neutrino oscillations.
Another novelty of the present study is the analysis of the transition amplitude $M$ beyond the forward direction of propagation, as a function of the angle $\theta$ defined by the average distance vector~$\pmb{\ell}\xspace$ and the total three-momentum vector ${\bf p}$ of the particles in the initial state. An~important finding of such an analysis was the observation that in the backwards direction ($\theta = \pi$), the amplitude $M$ is extremely suppressed. One may therefore conclude that the Feynman propagator provides the necessary ``quantum obliquity factor'' to suppress the propagation of on-shell particles in the backwards direction. We must emphasize here that this desirable property of~$M$ is achieved without the need to impose certain boundary conditions on the system. In this way, the non-local S-matrix theory can provide a quantum field-theoretic explanation for the origin of the obliquity factor
in diffractive optics. In the same vein, it is appealing to suggest that the analytic result for $M$ (which depends on complexified error functions) represents an analytic QFT extension of the famous Euler--Cornu spiral~\cite{Optics_1999} in classical optics to the complete off-shell region of particle propagation.
In realistic situations, we expect that the temporal and spatial QM uncertainties due to finite space-time volume
effects on a non-local S-matrix amplitude, $M$, will depend on the experimental setup, including the preparation and detection of the initial and final states. In addition to the coherent QM uncertainties, one must therefore include {\em incoherent} statistical uncertainties to be added at the squared amplitude level, $|M|^2$, along with phase-space and other classical resolution effects~\cite{Giunti:1993se,Beuthe:2001rc,Akhmedov:2009rb,Naumov:2020yyv,Cheng:2022lys,Naumov:2022kwz}. In this context,
the non-local S-matrix formalism will offer one important element in a holistic construction of a more elaborate multi-local Wigner function~\cite{Wigner:1932eb,Hillery:1983ms} which may include all possible uncertainties for all realistic experimental settings. Hence, as well as both short and long baseline neutrino experiments, high-energy colliders have the potential to probe many of the predictions resulting from such a non-local S-matrix theory. For instance, one may exploit the crossing symmetry of the non-local S-matrix amplitude to regulate the notorious $t$-channel singularities at $\mu^+\mu^-$ colliders~\cite{Nowakowski:1993iu,Ginzburg:1995bc,Kotkin:1992bj,Melnikov:1996iu,Melnikov:1996na,Grzadkowski:2021kgi}. Other applications may include spatial analyses of parton showering and displaced vertices during the
hadronization process at high-energy colliders like the LHC~\cite{LHCb:2014osd,Bondarenko:2019tss}. We envisage that such analyses might also lead to improved interpretation of data from $B$-meson observables at the~LHCb.
In this paper we only laid out the foundations for an analytic non-local S-matrix theory. However, further work must be done if we wish to go beyond the Born approximation. For example, for the $2\to 2$ process under study, we expect that box contributions to the non-local transition amplitude $M$ will decay exponentially faster with increasing distance $|\vecl|\xspace$ from the source than the one-particle-reducible propagator effects. In this way, a physical separation between the irreducible (box) and reducible (self-energy) loop diagrams may be possible, thus enabling a better understanding of S-matrix diagrammatic approaches like those based on the pinch technique~\cite{Binosi:2009qm}. On the other hand, apart from scalar mediators that we have been studying here in a solvable QFT model, it should be straightforward to generalise the non-local S-matrix theory and include non-local exchange graphs with fermions and gauge bosons. We~shall return to address
some of the issues mentioned above in a future study.
\subsection*{Acknowledgements}
We wish to thank Cumrun Vafa and Vassilis Spanos for remarks and useful comments at the {\em XXIX International Conference on Supersymmetry and Unification of Fundamental Inter\-actions}
(SUSY~2022, 27~June~--~2~July~2022, University of Ioannina, Greece), where part of this work was first presented.
We also thank Bohdan Grzadkowski and Michal Iglicki for clarifying discussions with regards to~\cite{Grzadkowski:2021kgi}.
The work of AP and DK is supported in part by the Lancaster-Manchester-Sheffield Consortium for Fundamental Physics, under STFC Research Grant ST/T001038/1.
\newpage
\setcounter{section}{0}
\section*{Appendix}
\section{Introduction}
In electroencephalography (EEG), electrical currents are recorded at
different positions on the scalp to measure brain activity. The
correlations between the time series of these currents strongly depend
on the overall state of the brain. During an epileptic seizure, for
example, the correlations are much stronger than in normal
periods~\cite{pijn1991chaos, muller2005detection}. This time dependence of the correlations is the
kind of non--stationarity that we wish to address.
Non--stationarities are also seen when wave packets travel through
disordered systems. Even if the disorder is static, the correlations
between the wave intensities measured at different positions will
change in time when the direction or the composition of the
wave packet is altered~\cite{hohmann2010freak, metzger2014statistics, degueldre2015random}. Finance provides
another important example for this type of non--stationarity. The
correlations between stock price time series change in time, just as
the business relations between the firms and the traders' market
expectations~\cite{bekaert1995time,Longin1995,Onnela2003,Zhang2011,Sandoval2012,muennix2012}. Similar
non--stationarities exist in many complex systems, including velocity
fluctuations in turbulent flows, heartbeat dynamics, series of waiting
times, etc. \cite{Ghasemi2008, PhysRevE.87.062139, PhysRevE.75.060102,
Schafer20103856}.
A system showing non--stationary correlations may be interpreted as
being out of equilibrium, implying that some of the key tools in
statistical physics are not applicable. Yet, the challenges are
similar to those faced for equilibrium systems: Is there generic or
universal behavior? --- How can we identify it? --- Can we set up
statistical models for these non--stationarities? --- In the context
of finance, we recently put forward a random matrix approach to tackle
these issues~\cite{Schmitt2013}. We also successfully applied it in a
study of credit risk and its impact on systemic
stability~\cite{schmitt2014credit}. In spite of the conceptual differences,
random matrix theory~\cite{meh04,guhr98} formally has much in common
with statistical mechanics. Observables are averaged over an ensemble;
in statistical mechanics, it usually is the microcanonical, canonical
or macrocanonical one, in random matrix models, it is an ensemble of those
matrices which describe or characterize the system. In the context of
the present discussion, random matrix models can be divided into two
classes:
\begin{enumerate}\setlength{\itemsep}{-2pt}
\item The ensemble is fictitious. It comes into play via an ergodicity
argument only.
\item The ensemble really exists and can be identified in the
system. The issue of ergodicity does not arise.
\end{enumerate}
The vast majority of random matrix models in, \textit{e.g.}, quantum
chaos falls into class 1, for a review see Ref.~\cite{guhr98}. One is
interested in the spectral statistics of one individual system. Its
Hamiltonian is viewed as a random matrix, whose dimension is
eventually sent to infinity. Ergodicity holds in this limit, meaning
that a smoothing energy average of an observable over one individual
spectrum equals the average over an ensemble of random matrices. A
noticeable exception are random matrix applications to quantum
chromodynamics~\cite{Jac00}. In lattice gauge theory, the quarks first
propagate in frozen configurations of the gauge fields, before an
average over the gauge fields, modeled by random matrices, is carried
out as second step. This clearly belongs in class 2. The fluctuating
gauge fields truly exist, the partition function involves an integral
over them. Ergodicity reasoning is not evoked.
There are numerous applications of random matrix theory in
finance~\cite{lal99,lal99b,laloux00,ple99,ple02,pafka04,potters2005financial,drozdz2008empirics,kwapien2006bulk,biroli2007student,burda2011applying} which address statistical properties of
correlation matrices. Many of them also deal with non--Gaussian
ensembles. To the best of our knowledge, all of these applications
fall into class 1, because one is interested in the statistics of one
individual correlation matrix, measured at one particular instant in
time. In our study~\cite{Schmitt2013}, we put forward a first
application of random matrices in finance that belongs in class 2.
Non--stationarity makes the covariances fluctuate and thereby creates
an ensemble of covariance matrices which we approximated by a Gaussian
Wishart ensemble of random matrices~\cite{Wishart1928}. We derived how
the multivariate distribution of dimensionless price changes, referred
to as returns, acquires heavy tails due to the
non--stationarity. Hence, we showed that the non--stationarities
indeed have universal features.
Here, we have three goals: First, we present in Sec.~\ref{sec2} a
statistically significant way to construct a proper and analytically
tractable random matrix ensemble from the data. We emphasize that this
is an important issue for random matrix models in the context of
correlations. In contrast to quantum chaos, where universality holds
on the scale of the mean level spacing, there is no such local
scale when studying statistical properties of correlation
matrices. Thus, a Gaussian assumption is not always justified and it
does matter what the ensemble looks like in reality. In particular,
realistic ensembles considerably help to understand and model large
events. Our construction is general and not tied to any specific
system. Its merit lies in the fact that once the ensemble is known, it
can be used to work out generic statistical properties of any
observable depending on the correlation matrices, see
Ref.~\cite{schmitt2014credit} for an example. Second, we apply our approach
to financial data in Sec.~\ref{sec3}. We identify an algebraic
ensemble, which is quite relevant for risk estimation. Third, we
discuss two issues arising in our general construction in
Secs.~\ref{sec4} and~\ref{sec5}, namely a certain conceptual caveat
and yet a further extension, respectively. Conclusions are given in
Sec.~\ref{sec6}.
\section{Constructing a Proper Random Matrix Ensemble}
\label{sec2}
After setting up the general problem in Sec.~\ref{sec21}, we introduce
the deformed Wishart ensemble and derive the corresponding amplitude
distribution in Sec.~\ref{sec22}. The determination of the deformation
functions which characterize the ensemble and the amplitude
distribution is discussed in Sec.~\ref{sec23}. Here, we derive the
approach for the general case; for the sake of illustration, the reader is
referred to Ref.~\cite{Schmitt2013} and Sec.~\ref{sec3}.
\subsection{Non--Stationary Covariances}
\label{sec21}
Suppose we have measured in a system with randomness $K$ amplitudes as
time series $R_k(t), k=1,\ldots,K$ over a long interval
$T_\textrm{tot}$ of time $t=1,\ldots,T_\textrm{tot}$. For example,
these amplitudes can be electric or magnetic fields at $K$ different
points in a disordered system, positions of $K$ randomly moving
particles or financial returns, \textit{i.e.} dimensionless price
changes for $K$ stocks. Importantly, we assume that there are
correlations between the time series. In complex systems, one often
encounters the situation that crucial system parameters, in particular
the covariances or correlations, are seemingly random functions of
time~\cite{ball2000stochastic,burtschell2005beyond,ankirchner2012cross}.
To be more precise, we consider a time window of
length $T$ that is much shorter than the total interval, $T\ll
T_\textrm{tot}$. We now want to average over the subinterval $[t-T+1,t]$ of
length $T$ whose position in the total interval is determined by the
time $t$. Sample averages of a function $f(t)$ in this subinterval
are then written as
\begin{equation}
\langle f \rangle_T(t) = \frac{1}{T} \sum_{t'=t-T+1}^t f(t') \ .
\label{sampleav}
\end{equation}
We are particularly interested in the covariances
\begin{eqnarray}
\Sigma_{kl}(t) &=& \langle R_k R_l\rangle_T(t) -
\langle R_k\rangle_T(t) \langle R_l\rangle_T(t) \nonumber\\
&=& \langle r_k r_l\rangle_T(t) \ ,
\label{covar}
\end{eqnarray}
where we introduced the amplitudes normalized to zero mean value,
\begin{equation}
r_k(t) = R_k(t)-\langle R_k \rangle_T(t) \ .
\label{returns}
\end{equation}
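As a minimal numerical sketch of this sliding-window estimation (Python with NumPy; an illustration under the definitions above, not part of the original analysis):

```python
import numpy as np

def rolling_covariances(R, T):
    """Covariance matrices Sigma(t), estimated as <r_k r_l>_T in the
    sliding window [t-T+1, t] for every admissible position t.

    R : (K, T_tot) array of amplitude time series R_k(t).
    Returns an array of shape (T_tot - T + 1, K, K)."""
    K, T_tot = R.shape
    out = np.empty((T_tot - T + 1, K, K))
    for i in range(T_tot - T + 1):
        window = R[:, i:i + T]
        r = window - window.mean(axis=1, keepdims=True)  # zero-mean amplitudes
        out[i] = r @ r.T / T                             # sample covariance
    return out

rng = np.random.default_rng(0)
R = rng.standard_normal((4, 500))
sigmas = rolling_covariances(R, T=25)
print(sigmas.shape)   # (476, 4, 4): one matrix per window position
```

Moving the window through the data produces the whole fluctuating family of covariance matrices discussed in the sequel.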
We keep in mind that the resulting $K\times K$ covariance matrix
$\Sigma(t)$ is calculated from time series of length $T$. As we move
this time window of length $T$ through the data, the resulting
covariances $\Sigma_{kl}(t)$ fluctuate. This non--stationarity has an
important impact on other statistical observables. In the present
study, we focus on the distribution of the amplitudes. We now consider
a time interval $T$ as short as possible such that the covariance
matrix $\Sigma_s$ in this time interval is in good approximation
constant. We begin with addressing the case in which the distribution
of the amplitudes is, for a given time $t$, well approximated by a
multivariate Gaussian
\begin{equation}
g(r|\Sigma_s) = \frac{1}{\sqrt{\det{\left(2\pi\Sigma_s\right)}}}
\exp{\left(-\frac{1}{2}r^\dagger \Sigma_s^{-1} r \right)}
\label{gaussian}
\end{equation}
with the $K$--component vector $r=(r_1,\ldots,r_K)$ and the $K\times K$
covariance matrix $\Sigma_s$. We suppress the argument $t$ of $r$ and
use $\dagger$ to indicate the transpose. We refer to $g(r|\Sigma_s)$
as \textit{static amplitude distribution}. Due to the correlations, a
Gaussian assumption for the static distribution is not as restrictive
as it may seem. In the eigenbasis of $\Sigma_s$, the amplitudes only
appear in linear combinations. Thus, for large $K$, the mechanisms
that lead to the central limit theorem start working and drive the
distributions towards Gaussians. Later on in Sec.~\ref{sec5} we will
nevertheless relax the Gaussian assumption for the static amplitude
distribution and look at more general functional forms.
\subsection{Deformed Wishart Ensemble and its Amplitude Distribution}
\label{sec22}
How does the non--stationarity affect the amplitude distribution when
data from the total interval $T_\textrm{tot}$ are analyzed? --- As in
Ref.~\cite{Schmitt2013}, we model this by random matrices. As the
covariance matrix is different at each time $t$ where it is analyzed,
we replace the covariance matrix in the distribution (\ref{gaussian})
by the expression
\begin{equation}
\Sigma_s \longrightarrow \frac{1}{N}AA^\dagger \ ,
\label{repl}
\end{equation}
where $A$ is a real rectangular $K\times N$ random matrix without any
symmetries. The right hand side of Eq.~(\ref{repl}) has to have the
form given to ensure that it can model a properly defined covariance
matrix. This follows directly from the definition~(\ref{covar}).
While the first dimension $K$ is fixed, the second one, $N$, is
for the time being a free model parameter. It can be viewed as the
length of the model time series. Further clarifications will follow.
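The replacement can be probed numerically: drawing the columns of $A$ i.i.d. from a multivariate Gaussian with covariance matrix $\Sigma$, which corresponds to the Gaussian Wishart case introduced below, makes $AA^\dagger/N$ fluctuate about $\Sigma$. A Python/NumPy sketch, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
K, N = 3, 5000
Sigma = np.array([[1.0, 0.5, 0.2],
                  [0.5, 1.0, 0.3],
                  [0.2, 0.3, 1.0]])

# draw the columns of A i.i.d. from N(0, Sigma); this realizes the Gaussian
# Wishart weight exp(-Tr A^T Sigma^{-1} A / 2)
C = np.linalg.cholesky(Sigma)
A = C @ rng.standard_normal((K, N))

model_cov = A @ A.T / N     # the replacement for the covariance matrix
print(np.abs(model_cov - Sigma).max())   # small for large N, of order 1/sqrt(N)
```

For finite $N$ the model covariance matrix scatters about the empirical one, which is precisely the fluctuation mechanism exploited in the following.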
To obtain the amplitude distribution for the total interval, we average
over the random matrices
\begin{equation}
\langle g\rangle (r|\Sigma,N) = \int d[A] \overline{w}(A|\Sigma,N)
g\left(r\left|\frac{1}{N}AA^\dagger\right.\right) \ ,
\label{average}
\end{equation}
where $d[A]$ is the volume element, \textit{i.e.}, the product of all
independent variables in $A$. Following
Wishart~\cite{Wishart1928,Muirhead1982}, the Gaussian distribution
\begin{equation}
w(A|\Sigma) = \frac{1}{\det^{N/2}{\left(2\pi\Sigma\right)}}
\exp{\left(-\frac{1}{2}\Tr A^\dagger\Sigma^{-1}A\right)}
\label{uswis}
\end{equation}
was assumed for the random matrices in Ref.~\cite{Schmitt2013}. It
describes the Gaussian fluctuations of the model covariance matrices
$AA^\dagger/N$ about the given empirical covariance matrix $\Sigma$,
which is evaluated over the total time interval of length
$T_\textrm{tot}$. It should not be confused with the empirical
covariance matrix $\Sigma(t)$ calculated in the subintervals
$[t-T+1,t]$. The crucial difference compared to
Ref.~\cite{Schmitt2013} is a generalization of the
ensemble~(\ref{uswis}). We introduce the deformed Wishart ensemble
\begin{equation}
\overline{w}(A|\Sigma,N) = \int\limits_0^{\infty}d\eta f(\eta)
w\left(A\left|\frac{N\Sigma}{\eta}\right.\right)
\label{defwis}
\end{equation}
which is defined by the \textit{ensemble deformation function}
$f(\eta)$ with the properties
\begin{align}
\int\limits_0^\infty f(\eta) d\eta = 1 \quad \textrm{and} \quad f(\eta) \ge 0 \ .
\label{fconstraints}
\end{align}
For later convenience, $\Sigma$ on the right hand side of
Eq.~(\ref{defwis}) is rescaled with $N$. The fluctuations of the model
covariance matrices $AA^\dagger/N$ deviate from Gaussian, but
always about the empirical covariance matrix $\Sigma$. The meaning of
the model parameter $N$ now becomes clearer. It sets the variance for
these fluctuations. The above rescaling only changes the functional
dependencies, but not the r\^ole of $N$. We emphasize once more that
$\Sigma$ is evaluated over the total time interval. Similar
deformations of random matrix ensembles but in a Hamiltonian, not
Wishart setting were apparently first put forward in
Refs.~\cite{Bertuola2004,MuttalibKlauder2005}.
After inserting the ansatz~(\ref{defwis}) into Eq.~(\ref{average}),
we may use the result~\cite{Schmitt2013}
\begin{equation}
\int w(A|\Sigma) g\left(r\left|\frac{1}{N}AA^\dagger\right.\right) d[A]
= \int\limits_0^\infty \chi^2_N(z) g\left(r\left|\frac{z}{N}\Sigma\right.\right) dz \ ,
\label{gaver}
\end{equation}
which reformulates the whole random matrix average as a univariate
average over the $\chi^2$ distribution
\begin{equation}
\chi_N^2(z) = \frac{1}{2^{N/2}\Gamma(N/2)} z^{N/2-1} \exp\left(-\frac{z}{2}\right)
\label{chi2}
\end{equation}
of $N$ degrees of freedom. On the mathematical side, there are
connections between formula~(\ref{gaver}) and the calculation of
certain distributions in scattering theory~\cite{Poli09,Yan12}. Using
the result~(\ref{gaver}), the amplitude distribution reduces to the
double integral
\begin{equation}
\langle g\rangle (r|\Sigma,N) = \int\limits_0^{\infty}d\eta f(\eta)
\int\limits_0^\infty dz \chi^2_N(z)
g\left(r\left|\frac{z}{\eta}\Sigma\right.\right) \ .
\label{averagedouble}
\end{equation}
Again, we point out the rescaling of $\Sigma$ with $N$, \textit{cf.}
Eq.~(\ref{gaver}). It is useful to rewrite this as a single integral
\begin{equation}
\langle g\rangle (r|\Sigma,N) = \int\limits_0^{\infty} p(x)
g\left(r|x\Sigma\right) dx
\label{averagesingle}
\end{equation}
by introducing the variable $x=z/\eta$ and its distribution
\begin{equation}
p(x) = \int\limits_0^{\infty}d\eta f(\eta)
\int\limits_0^\infty dz \chi^2_N(z) \delta\left(x-\frac{z}{\eta}\right) \ .
\label{aampldef}
\end{equation}
We refer to it as \textit{amplitude distribution deformation function}. One easily
obtains
\begin{equation}
p(x) = \frac{x^{N/2-1}}{2^{N/2}\Gamma(N/2)}
\int\limits_0^{\infty}d\eta f(\eta) \eta^{N/2}\exp\left(-\frac{\eta x}{2}\right) \ ,
\label{pdis}
\end{equation}
which establishes the relation between the two deformation functions.
We notice that the ansatz~(\ref{defwis}) restricts the form of the
deformed distribution $\overline{w}(A|\Sigma,N)$ to functions of $\Tr
A^\dagger\Sigma^{-1}A$ only. Even though the inclusion of further
terms such as $\Tr(A^\dagger\Sigma^{-1}A)^2$ is likely to improve the
quality of the data fits, we stick to the ansatz~(\ref{defwis}). Its
considerable advantage is the guaranteed but otherwise questionable
analytical tractability as will be shown in the sequel. Moreover,
further terms will also increase the number of deformation functions
which will hamper their unambiguous determination.
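The construction of the variable $x=z/\eta$ and its distribution $p(x)$ can be checked by Monte Carlo sampling. The snippet below (Python/NumPy, illustrative only) uses a toy choice $f(\eta)=\chi^2_M(\eta)$, for which the integral in Eq.~(\ref{pdis}) yields a beta prime density with parameters $N/2$ and $M/2$ and mean $(N/2)/(M/2-1)$:

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 6, 8             # M parametrizes the toy choice f(eta) = chi^2_M
n = 200_000

# x = z / eta: z ~ chi^2_N from the random matrix average, eta ~ f(eta)
z = rng.chisquare(N, n)
eta = rng.chisquare(M, n)
x = z / eta

# for this choice p(x) is a beta prime density with parameters N/2 and
# M/2, whose mean (N/2)/(M/2 - 1) equals 1 for the values chosen here
print(x.mean())
```

The empirical mean of the sampled $x$ agrees with the analytical first moment, confirming the double-integral representation.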
\subsection{Determination of the Deformation Functions}
\label{sec23}
Apart from the deformation functions, the distributions
$\overline{w}(A|\Sigma,N)$ and $\langle g\rangle (r|\Sigma,N)$ depend
on the usual covariance matrix $\Sigma$ analyzed by sampling over the
total interval. We notice that the corresponding covariance matrix
$\Sigma^{(d)}$ in the deformed ensemble slightly differs from that.
By definition we have
\begin{eqnarray}
\Sigma^{(d)} &=& \langle \frac{1}{N} AA^\dagger \rangle \nonumber\\
&=& \int \frac{1}{N}AA^\dagger \overline{w}(A|\Sigma,N) d[A] \ .
\label{sigmad}
\end{eqnarray}
Inserting Eq.~(\ref{defwis}), we can do the ensemble average in the
Gaussian case which yields the covariance matrix $N\Sigma/\eta$. Thus,
only the $\eta$ integral remains and we have
\begin{equation}
\Sigma^{(d)} = N \Sigma \int\limits_0^\infty \frac{f(\eta)}{\eta} d\eta
= N \Sigma \overline{\eta^{-1}}
\label{sigmadd}
\end{equation}
implying that the two covariance matrices differ by the average of $1/\eta$.
Alternatively, one can calculate $\Sigma^{(d)}$ from the amplitude distribution,
\begin{equation}
\Sigma^{(d)} = \langle rr^\dagger \rangle
= \int rr^\dagger \langle g\rangle (r|\Sigma,N) d[r] \ ,
\label{sigmada}
\end{equation}
which yields
\begin{equation}
\Sigma^{(d)} = \Sigma \int\limits_0^\infty x p(x) dx
= \Sigma \overline{x} \ .
\label{sigmadad}
\end{equation}
Here, the two covariance matrices differ by the first moment of $x$.
With the help of Eq.~(\ref{aampldef}), the
results~(\ref{sigmadd},\ref{sigmadad}) are easily seen to coincide.
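The coincidence $N\,\overline{\eta^{-1}}=\overline{x}$ is quickly verified by sampling; the uniform deformation function below is a toy choice for illustration (Python/NumPy, not part of the original analysis):

```python
import numpy as np

rng = np.random.default_rng(3)
N, n = 6, 500_000

# any normalized ensemble deformation function works; a uniform density
# on [1, 3] serves as a toy example
eta = rng.uniform(1.0, 3.0, n)
z = rng.chisquare(N, n)
x = z / eta

lhs = N * (1.0 / eta).mean()   # N times the average of 1/eta
rhs = x.mean()                 # first moment of x
print(lhs, rhs)                # the two estimates agree
```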
Having extracted the covariance matrix for the total time interval
from the data, we can proceed with the determination of the
deformation functions. The exponential function in the integrand of
Eq.~(\ref{pdis}) allows us to interpret it as a Laplace transform,
\begin{equation}
\frac{\Gamma(N/2)}{2} \frac{p(x)}{x^{N/2-1}}
= \mathcal{L}\left(\tilde{\eta}^{N/2}f(2\tilde{\eta})\right) \ ,
\label{laplace}
\end{equation}
where we introduced $\tilde{\eta}=\eta/2$ to avoid inconvenient factors
of two. Thus, the ensemble deformation function is the inverse
Laplace transform
\begin{equation}
f(2\tilde{\eta}) = \frac{\Gamma(N/2)}{2}\frac{1}{\tilde{\eta}^{N/2}}
\mathcal{L}^{-1}\left(\frac{p(x)}{x^{N/2-1}}\right)
\label{invlaplace}
\end{equation}
of the amplitude distribution deformation function divided by a power
of $x$. This makes it possible to determine $f(\eta)$ by extracting
$p(x)$ from the amplitude time series and carrying out the inverse
Laplace transform. In contrast, extracting $f(\eta)$ directly from
the data is cumbersome and burdened by limited statistics, as the
following discussion shows. The rows of $A$ are the model time series
of length $N$ and cannot easily be identified with the amplitude time
series $r_k(t)$ of length $T$. However, the matrices $AA^\dagger/N$
form the ensemble of model covariance matrices and can be compared
with the empirical ones. As a certain sample length is required for
meaningful results, it is out of the question to compare the matrices
directly, \textit{i.e.} their individual matrix elements. A better
observable is the distribution
\begin{equation}
q(s) = \int \delta\left(s-\frac{1}{N}\Tr AA^\dagger\right)\overline{w}(A|\Sigma,N) d[A]
\label{trace}
\end{equation}
of the traces, which can easily be written as a single integral
involving the ensemble deformation function $f(\eta)$. The
distribution~(\ref{trace}) is empirically obtained by moving a time
window through the amplitude time series and calculating the empirical
covariance matrices and their traces. This then gives $f(\eta)$.
The problem with the above procedure is its still limited statistical
significance. Instead, extracting the amplitude distribution
deformation function $p(x)$ from the data gives much more meaningful
results. As we discuss in the sequel, the number of data points is
larger by a factor of $K$. The amplitudes $r_k$ appear in
Eq.~(\ref{averagesingle}) only via the bilinear form $r^\dagger\Sigma
r$. We rotate the amplitude vector $r$ into the eigenbasis of the
empirically obtained covariance matrix $\Sigma$. By definition, the
eigenvalues of $\Sigma$ are strictly positive, provided
the length of the sampling interval is larger than $K$. We divide each
component of the rotated amplitude vector by the square root of the
corresponding eigenvalue and denote the resulting vector by
$\tilde{r}$. Within our model, all components of $\tilde{r}$ should
be identically distributed. We integrate out all but one, $\tilde{r}_k$,
and arrive at the distribution
\begin{equation}
\langle \tilde{g}\rangle (\tilde{r}_k) = \int\limits_0^{\infty} p(x)
\frac{1}{\sqrt{2\pi x}}\exp\left(-\frac{\tilde{r}_k^2}{2x}\right) dx \ .
\label{rotscale}
\end{equation}
Thus, $p(x)$ may be identified with the distribution of the variances
$x$ of the Gaussian distributed random variables
$\tilde{r}_k$. Conceptually, this is our main result. It provides a
simple and statistically significant method to obtain the amplitude
distribution deformation function $p(x)$ which then yields upon
inverse Laplace transform the ensemble distribution function
$f(\eta)$. As we have $K$ time series $r_k(t)$, we gain a factor of
$K$ by aggregation.
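The rotate-and-rescale procedure just described can be sketched as follows (Python/NumPy illustration; the self-test uses stationary correlated Gaussian data, for which $p(x)$ collapses to $\delta(x-1)$ and the rescaled amplitudes are standard normal):

```python
import numpy as np

def rescaled_amplitudes(r, Sigma):
    """Rotate the amplitude vectors into the eigenbasis of Sigma, divide
    by the square roots of the eigenvalues and aggregate all K series.

    r : (K, T_tot) zero-mean amplitudes, Sigma : (K, K) covariance matrix."""
    vals, vecs = np.linalg.eigh(Sigma)
    rotated = vecs.T @ r
    return (rotated / np.sqrt(vals)[:, None]).ravel()

# self-test on stationary correlated Gaussian data: there p(x) collapses
# to delta(x - 1), so the rescaled amplitudes are standard normal
rng = np.random.default_rng(4)
K, T_tot = 5, 100_000
C = np.linalg.cholesky(np.full((K, K), 0.5) + np.eye(K) * 0.5)
r = C @ rng.standard_normal((K, T_tot))
tilde_r = rescaled_amplitudes(r, C @ C.T)
print(tilde_r.var())    # close to 1
```

For non-stationary data, a histogram of the aggregated components $\tilde{r}_k$, binned by their local variance, provides the estimate of $p(x)$.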
\section{Application to Financial Data}
\label{sec3}
We now apply our method to stock market data. This is of particular
interest as heavy tails are ubiquitous in finance. A better modeling
for multivariate distributions is urgently called for to improve risk
estimation. We present the data in Sec.~\ref{sec31}, extract the
deformation functions in Sec.~\ref{sec32} and calculate the ensemble
and return distributions in Sec.~\ref{sec33}.
\subsection{Data Set}
\label{sec31}
We analyze the $K=306$ continuously traded stocks with
prices $S_k(t), \ k=1,\ldots,K$ in the S\&P 500\textsuperscript{\textregistered}~ index
between 1992 and 2012~\cite{yahoo}, which we previously analyzed with
a purely Gaussian, \textit{i.e.}, non--deformed Wishart
ensemble~\cite{Schmitt2013}. The amplitudes are here the dimensionless
price changes
\begin{equation}
R_k(t) = \frac{S_k(t+\Delta t)-S_k(t)}{S_k(t)} \ ,
\label{returng}
\end{equation}
which are referred to as returns. They depend on the chosen return
horizon $\Delta t$. According to Eq.~(\ref{returns}), we calculate the
returns $r_k(t)$ normalized to zero mean. To make our presentation
self--contained, we show once more how strongly the whole $K\times K$
correlation matrix $C$ for this data set changes in time. In
Fig.~\ref{fig1}, it is displayed for subsequent three--months time
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.235\textwidth]{fig3a.pdf}
\includegraphics[width=0.235\textwidth]{fig3b.pdf}
\end{center}
\caption{Correlation matrices of $K=306$ companies in S\&P
500\textsuperscript{\textregistered}~ for the fourth quarter of 2005 and the first
quarter of 2006. The darker, the stronger the correlation. The
companies are sorted according to industrial sectors. Taken from
Ref.~\cite{Schmitt2013}.}
\label{fig1}
\end{figure}
windows. In most random matrix approaches, the ensemble is fictitious
and enters only by means of an ergodicity argument. This is not so
here, as Fig.~\ref{fig1} illustrates. Our ensemble exists in reality,
it is the whole set of matrices analyzed by moving a sample time
window through the data. In Fig.~\ref{fig1}, one also sees rather
stable stripes in these correlation matrices which are due to the
different industrial sectors, see, \textit{e.g.},
Ref.~\cite{muennix2012}. Obviously, basis invariance is not present in
this data set, and probably neither in any other real data set. Hence,
a direct extraction of the ensemble deformation function $f(\eta)$
which preserves the basis invariance of the random matrix ensemble is
problematic. Yet, there is still another reason: market states were
identified which reveal a fine structure of the
ensemble~\cite{muennix2012,chetalova2015zooming}. As every random matrix
ensemble has an effective character, one is advised to analyze
quantities which already reflect this. In the present case, such
quantities are the amplitude, \textit{i.e.}, return, distribution
and the corresponding deformation function $p(x)$.
\subsection{Deformation Functions}
\label{sec32}
We use daily data, \textit{i.e.}, $\Delta t=1$ trading day. Rotation
of the return vector $r$ into the eigenbasis of the empirical
covariance matrix $\Sigma$, normalization to the square roots of the
eigenvalues and aggregation on a five--day window yield the empirical
distributions of variances shown in Fig.~\ref{fig2}. Aggregation on a
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.4\textwidth]{voldistSP500yahoodt1T5.pdf}
\end{center}
\begin{center}
\includegraphics[width=0.4\textwidth]{voldistlogSP500yahoodt1T5.pdf}
\end{center}
\caption{Aggregated distribution of variances as dots, calculated
on a five--day time window on a linear (top) and logarithmic
(bottom) scale. Fit to a beta prime distribution as solid
line. For comparison, a $\chi^2$ distribution as dashed dotted
line.}
\label{fig2}
\end{figure}
ten--day window gives similar results; hence, the estimation noise does not have a major impact on the distribution. A variety of functions is
capable of describing the data. In finance, where the standard
deviations are referred to as volatilities, one often employs the
log--normal distribution to model them, see \textit{e.g.}
Ref.~\cite{Micciche2002756}. However, the log--normal distribution
fails to capture the empirically found tail behavior. More suitable is
the beta prime distribution
\begin{equation}
p(x|N,L) = \frac{\Gamma(N+L/2)}
{\Gamma(N/2)\Gamma((N+L)/2)} \frac{x^{N/2-1}}{(1+x)^{N+L/2}}
\label{betaprime}
\end{equation}
with two positive parameters $N$ and $L$. Anticipating the following
discussion, we choose their combination in the
expression~(\ref{betaprime}) in such a way that $N$ coincides with the
parameter $N$ introduced in Sec.~\ref{sec2}. The fit is depicted in
Fig.~\ref{fig2}, the agreement with the data is much better than for a
$\chi^2$ distribution corresponding to the ensemble of
Ref.~\cite{Schmitt2013} which is formally obtained by setting
$f(\eta)=\delta(\eta-1)$ or $f(\eta)=\delta(\eta-N)$, respectively,
depending on the rescaling with $N$. We carry out fits for different
return horizons $\Delta t$. The results for $N$ and $L$ are shown in
Fig.~\ref{fig3}. While $L$ stays constant around two, $N$ increases
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.4\textwidth]{nldeltatdep.pdf}
\end{center}
\caption{Fitted values of $N$ (upper, ascending dots) and $L$
(lower dots) for the beta prime distribution versus the return
horizon $\Delta t$. Triangles and diamonds for the fits with
constraint to integer values, squares and circles without
constraint.}
\label{fig3}
\end{figure}
from about seven for daily data to about 23 for $\Delta t=19$ trading
days. We postpone the interpretation up to the evaluation of the
ensemble and return distribution.
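Such a fit can be reproduced with standard tools. In the parametrization of \texttt{scipy.stats.betaprime}, the shape parameters are $a=N/2$ and $b=(N+L)/2$, hence $N=2a$ and $L=2(b-a)$. The sketch below uses synthetic samples instead of the empirical data (illustration only):

```python
import numpy as np
from scipy import stats

# synthetic variance samples from the beta prime law with N = 8, L = 2,
# i.e. scipy shape parameters a = N/2 = 4 and b = (N + L)/2 = 5
rng = np.random.default_rng(5)
x = stats.betaprime.rvs(4.0, 5.0, size=100_000, random_state=rng)

# location and scale are pinned, as the model contains neither
a, b, loc, scale = stats.betaprime.fit(x, floc=0.0, fscale=1.0)
N_fit, L_fit = 2.0 * a, 2.0 * (b - a)
print(N_fit, L_fit)    # close to 8 and 2
```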
Having extracted the amplitude, \textit{i.e.}, return distribution
deformation function $p(x|N,L)$, we calculate the inverse Laplace
transform~(\ref{invlaplace}),
\begin{equation}
f(2\tilde{\eta}|N,L) =
\frac{\Gamma(N+L/2)}{2\tilde{\eta}^{N/2}\Gamma((N+L)/2)}
\mathcal{L}^{-1}\left((1+x)^{-N-L/2}\right) \ ,
\label{betaprime1}
\end{equation}
and find~\cite{Gradshteyn2007} with $\eta=2\tilde{\eta}$ for
the ensemble deformation function
\begin{eqnarray}
f(\eta|N,L) &=& \frac{\eta^{(N+L)/2-1}}{2^{(N+L)/2}\Gamma((N+L)/2)}
\exp{\left(-\frac{\eta}{2}\right)} \nonumber\\
&=& \chi_{N+L}^{2}(\eta) \ .
\label{betaprime2}
\end{eqnarray}
This is a $\chi^2_{N+L}$ distribution with $N+L$ degrees of
freedom. As required, $f(\eta|N,L)$ is a positive and normalized
function.
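The pair of deformation functions can be cross-checked numerically by inserting $f(\eta)=\chi^2_{N+L}(\eta)$ back into the integral relation~(\ref{pdis}) and comparing with the beta prime density (Python/SciPy sketch, illustrative):

```python
import numpy as np
from scipy import integrate, special, stats

N, L = 8.0, 2.0

def p_from_f(x):
    """p(x) obtained by inserting f(eta) = chi^2_{N+L}(eta) into the
    integral relation between the two deformation functions."""
    integrand = lambda eta: (stats.chi2.pdf(eta, N + L) * eta**(N / 2)
                             * np.exp(-eta * x / 2))
    val, _ = integrate.quad(integrand, 0.0, np.inf)
    return x**(N / 2 - 1) / (2**(N / 2) * special.gamma(N / 2)) * val

xs = np.array([0.3, 1.0, 2.5])
direct = stats.betaprime.pdf(xs, N / 2, (N + L) / 2)   # beta prime density
via_f = np.array([p_from_f(x) for x in xs])
print(np.abs(direct - via_f).max())   # numerically zero
```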
\subsection{Deformed Ensemble and Return Distribution}
\label{sec33}
Inserting Eq.~(\ref{betaprime2}) into Eq.~(\ref{defwis}) yields
after a straightforward calculation
\begin{eqnarray}
\overline{w}(A|\Sigma,N,L) &=& \frac{\Gamma((N+NK+L)/2)}
{\Gamma((N+L)/2)\det^{N/2}(\pi N\Sigma)} \nonumber\\
& & \left(1+\frac{\Tr A^\dagger\Sigma^{-1}A}{N}\right)^{-(N+NK+L)/2}
\label{wisal}
\end{eqnarray}
for the distribution of the random matrices $A$. Thus, we arrive at an
ensemble characterized by an algebraic distribution. For a similar
ensemble, but in the special case of $\Sigma=1_K$, spectral
correlation functions were studied in
Refs.~\cite{Akemann2008,Abul-Magd2009}. Here, however, we derived our
ensemble from data, and the dependence on a non--trivial $\Sigma$ is
essential in the present context. Anticipating the result~(\ref{wisal}), we
rescaled $\Sigma$ with $N$ as compared to Ref.~\cite{Schmitt2013}.
Thereby, $N$ and $L$ appear on equal footing in the formulae. To
obtain the ensemble averaged return distribution, we plug
Eq.~(\ref{betaprime}) into Eq.~(\ref{averagesingle}) and find
\begin{eqnarray}
& & \langle g\rangle (r|\Sigma,N,L) = \frac{\Gamma(N+L/2)\Gamma((N+K+L)/2)}
{\Gamma(N/2)\Gamma((N+L)/2)\sqrt{\det(\pi N\Sigma)}} \nonumber\\
& & \qquad \mathcal{U}\left(\frac{N+K+L}{2},\frac{K-N+2}{2},\frac{r^\dagger\Sigma^{-1}r}{2}\right)
\label{wisret}
\end{eqnarray}
with the confluent hypergeometric function~\cite{Abramowitz1972}
\begin{equation}
\mathcal{U}(a,b,z) = \frac{1}{\Gamma(a)}\int\limits_0^\infty y^{a-1} (1+y)^{b-a-1} \exp(-yz) dy
\label{hyper}
\end{equation}
for positive real parts of $a$ and $z$. From Eq.~(\ref{sigmadd}) or (\ref{sigmadad})
the covariance matrix
\begin{equation}
\Sigma^{(d)} = \frac{N}{N+L-2} \Sigma
\label{covard}
\end{equation}
for the deformed ensemble follows. To compare with the empirical
return distribution, we compute the integral~(\ref{rotscale}),
\begin{eqnarray}
& & \langle \tilde{g}\rangle (\tilde{r}_k|N,L) = \frac{\Gamma(N+L/2)\Gamma((N+L+1)/2)}
{\Gamma(N/2)\Gamma((N+L)/2)\sqrt{2\pi}} \nonumber\\
& & \qquad\qquad \mathcal{U}\left(\frac{N+L+1}{2},\frac{3-N}{2},\frac{\tilde{r}_k^2}{2}\right) \ .
\label{wisretmarginal}
\end{eqnarray}
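The marginal distribution can be evaluated with any implementation of the confluent hypergeometric function $\mathcal{U}$, \textit{e.g.} \texttt{mpmath.hyperu}. The sketch below checks the normalization for the daily fit values $N=8.13$ and $L=2.24$ obtained further down (illustration only):

```python
import math
import mpmath

N, L = 8.13, 2.24    # daily fit values obtained below in the text

def g_marginal(r):
    """Marginal distribution of a rescaled return component."""
    pref = (math.gamma(N + L / 2) * math.gamma((N + L + 1) / 2)
            / (math.gamma(N / 2) * math.gamma((N + L) / 2)
               * math.sqrt(2.0 * math.pi)))
    return pref * float(mpmath.hyperu((N + L + 1) / 2, (3 - N) / 2,
                                      r * r / 2))

# the density integrates to one, as it must for a variance mixture of
# normalized Gaussians
norm = 2.0 * float(mpmath.quad(g_marginal, [0, mpmath.inf]))
print(norm)    # close to 1
```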
A comment on the permissible values for the parameter $N$ is in
order. In the return distributions~(\ref{wisret})
and~(\ref{wisretmarginal}), $N$ can take any positive real value. In
the ensemble distribution~(\ref{wisal}), however, $N$ is the length of
the model time series or, equivalently, one of the dimensions of the
$K\times N$ matrices $A$ and thus restricted to integer values. It is
thus a matter of interpretation whether one wants to impose the
constraint that $N$ be integer. There is no such restriction for the
parameter $L$. In any case, we also carried out fits with the integer
constraint. The results shown in Fig.~\ref{fig3} do not indicate a
strong influence of this constraint.
The results of the data comparison are displayed in Fig.~\ref{fig4}
for daily returns, $\Delta t=1$ trading day. The fitted parameter values
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.4\textwidth]{distr1SP500yahoopeak.pdf}
\end{center}
\begin{center}
\includegraphics[width=0.4\textwidth]{distr1SP500yahootail.pdf}
\end{center}
\caption{Aggregated distribution of daily returns, $\Delta t=1$
trading day. Empirical results as dots, fit to the
distribution~(\ref{wisretmarginal}) as solid line. The
corresponding result of Ref.~\cite{Schmitt2013} as dashed
line. Center of the distribution on a linear scale (top), whole
distribution on a logarithmic scale (bottom).}
\label{fig4}
\end{figure}
are $N=8.13$ and $L=2.24$. The center of the empirical
distribution is slightly better described by employing the deformed
Wishart ensemble instead of the non--deformed one in
Ref.~\cite{Schmitt2013}. The heavy tails clearly reveal that the
deformed Wishart ensemble yields overall a better description, since
the result of Ref.~\cite{Schmitt2013} consistently underestimates the
large events. In Fig.~\ref{fig5} we present the same analysis for
returns with $\Delta t=20$ trading days, the fit
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.4\textwidth]{distr20SP500yahootail.pdf}
\end{center}
\caption{Same as Fig.~\ref{fig4} (bottom) for returns with $\Delta t=20$
trading days.}
\label{fig5}
\end{figure}
gives $N=20.98$ and $L=2.07$. Here, the tails are still strong, but
less pronounced than for daily data. For the interpretation of these
results, we recall the well--established fact that \textit{univariate}
distributions of returns for one stock acquire heavy tails as the
return horizon $\Delta t$ becomes smaller, see \textit{e.g.}
Ref.~\cite{doi:10.1080/713665670}. Here, however, we analyze the
\textit{multivariate} distribution of $K$ correlated stocks. Thus,
there are two competing effects. First, as discussed in general in
connection with Eq.~(\ref{gaussian}) and for the financial data in
Ref.~\cite{Schmitt2013}, the superposition of the amplitudes, in the
present case the returns, drives the multivariate distribution towards
a Gaussian, provided that the covariances are sufficiently
constant. Second, as observed in Ref.~\cite{Schmitt2013} and extended
here, the fluctuations of the non--stationary covariances lift the
tails of the distributions evaluated over long time intervals and make
them heavier. Not surprisingly, the heavier the tails of the
\textit{univariate} distributions, the heavier are also those of
the ensemble averaged multivariate ones shown above. This is nicely
reflected in the nearly linear increase of the parameter $N$ with the
return horizon $\Delta t$, see Fig.~\ref{fig3}. The smaller $N$, the
heavier are the tails in the ensemble distributions~(\ref{wisal}) and
in the ensemble averaged return distribution~(\ref{wisret}). As $N$
grows and $L$ is held fixed, the distribution~(\ref{wisal}) comes
closer to a Gaussian, \textit{i.e.}, to the non--deformed ensemble.
It is quite remarkable that the fit of $L$ always yields values close
to two. According to Eq.~(\ref{covard}), this implies
$\Sigma^{(d)}\approx\Sigma$. Put differently, the heavy tails in the
cases considered alter the measured covariances only slightly as
compared to a Gaussian assumption. When looking at financial risk,
however, the tails are very important.
Finally, we use the opportunity to discuss an issue of general
interest when presenting and fitting a multivariate distribution that
depends on the statistical variables only via a bilinear form such as
$r^\dagger\Sigma^{-1}r$. Instead of the above procedure which involves
rotation of $r$ into the eigenbasis of $\Sigma$ and aggregation, one
might also view the bilinear form as a generalized radius
\begin{equation}
\rho = \sqrt{r^\dagger\Sigma^{-1}r}
\label{rho1}
\end{equation}
in the $K$ dimensional space and study its distribution
\begin{equation}
\langle g_\textrm{rad}\rangle (\rho) =
\int \delta\left(\rho-\sqrt{r^\dagger\Sigma^{-1}r}\right)
\langle g\rangle (r|\Sigma,N,L) d[r] \ .
\label{rho2}
\end{equation}
A rather straightforward calculation yields
\begin{eqnarray}
& &\langle g_\textrm{rad}\rangle (\rho) = \frac{\Gamma(N+L/2)\Gamma((N+K+L)/2)}
{2^{K/2-1}\Gamma(N/2)\Gamma(K/2)\Gamma((N+L)/2)} \nonumber\\
& & \qquad \rho^{K-1} \mathcal{U}\left(\frac{N+K+L}{2},\frac{K-N+2}{2},\frac{\rho^2}{2}\right) \ .
\label{disrad}
\end{eqnarray}
The Jacobian $\rho^{K-1}$, typical for such a radial distribution, appears
and, because of $K=306$, dominates the functional form of the
distribution for small $\rho$, as can be seen in Fig.~\ref{fig6}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.4\textwidth]{distrho1SP500yahoofull.pdf}
\end{center}
\begin{center}
\includegraphics[width=0.4\textwidth]{distrholog1SP500yahoofull.pdf}
\end{center}
\caption{Radial distribution of daily returns, $\Delta t=1$ trading
day. Empirical results as dots, fit to the
distribution~(\ref{disrad}) as solid line. The corresponding
result of Ref.~\cite{Schmitt2013} as dashed line, on a linear
(top) and a logarithmic scale (bottom).}
\label{fig6}
\end{figure}
The theoretical result~(\ref{disrad}) describes the data much better
than the corresponding result of
Ref.~\cite{Schmitt2013}. Nevertheless, the dominance of $\rho^{K-1}$
gives a somewhat misleading picture and we infer that using the
distributions~(\ref{wisret}) and~(\ref{wisretmarginal}) is more
appropriate.
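The dominance of the Jacobian is easiest to see in the purely Gaussian limit, where $\rho^2=r^\dagger\Sigma^{-1}r$ follows a $\chi^2_K$ law, so that $\rho^{K-1}$ concentrates the radial density in a narrow peak near $\sqrt{K}$. A Monte Carlo sketch (Python/NumPy, illustration only):

```python
import numpy as np

rng = np.random.default_rng(6)
K = 306

# Gaussian limit: rho^2 = r^T Sigma^{-1} r is chi^2_K distributed, so the
# Jacobian rho^{K-1} concentrates the radial density near sqrt(K)
rho = np.sqrt(rng.chisquare(K, 200_000))
print(rho.mean())   # about sqrt(K), roughly 17.5
print(rho.std())    # narrow peak, width about 1/sqrt(2)
```

With $K=306$, essentially all radial weight sits in a window of width of order one around $\rho\approx 17.5$, which explains why the Jacobian masks the differences between the candidate distributions at small $\rho$.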
\section{Permissibility of Deformation Functions}
\label{sec4}
When extracting the return distribution deformation function $p(x)$
from the data, we encountered a puzzling problem that we wish to
report here. The log--logistic distribution
\begin{equation}
p(x|b,c) = \frac{b}{c} \frac{(x/c)^{b-1}}{(1+(x/c)^b)^2}
\label{logp}
\end{equation}
with $b=N/2$ yields a very good description of the data, even slightly
better than the beta prime distribution. The fitted $c$ values are
around one; $N=4$ for $\Delta t=1$ trading day, increasing for larger
$\Delta t$. However, the resulting ensemble deformation function can
take positive and negative values. For example, for $N=4$, we find
\begin{equation}
f(\eta|c) \sim \frac{\sin(\eta/2)-\eta\cos(\eta/2)/2}
{\eta^2} \ .
\label{logf}
\end{equation}
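The sign change of the expression~(\ref{logf}) is easily confirmed numerically; the sketch below drops the overall proportionality constant and the scale set by $c$, neither of which affects the sign.

```python
# Hedged illustration: the shape function in (logf) changes sign,
# so it cannot be a probability density.
import math

def f_shape(eta):
    # sin(eta/2) - (eta/2) cos(eta/2), divided by eta^2
    return (math.sin(eta / 2) - (eta / 2) * math.cos(eta / 2)) / eta ** 2

val_small = f_shape(1.0)   # positive near the origin
val_large = f_shape(10.0)  # negative further out
```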
Hence, it cannot be interpreted as a distribution. Indeed, we are
confronted with a problem of interpretation. While the discussion in
Sec.~\ref{sec23} clearly revealed that $p(x)$ is a well--defined
distribution of a random variable, namely of a variance, it is not
obvious that $f(\eta)$ also represents a well--defined
distribution. The corresponding random variables, the matrices $A$, do
not have a direct data interpretation. Thus, one might simply view
$f(\eta)$ as a continuous coefficient function for the expansion of
the distribution~(\ref{defwis}) in terms of Gaussians. Nevertheless,
even if one does not want to enforce an interpretation of $f(\eta)$ as
a distribution, the resulting ensemble distribution~(\ref{defwis})
must be positive definite. We test this by calculating the
distribution of the traces
\begin{equation}
u(s) = \int \delta\left(s-\frac{1}{N}\Tr A^\dagger\Sigma^{-1}A\right)\overline{w}(A|N,c) d[A] \ .
\label{tracesig}
\end{equation}
As we only wish to test positivity, it is convenient to choose it
differently from the above distribution~(\ref{trace}) by including the
covariance matrix. After some algebra, we can express it as a
high--order derivative involving the return distribution deformation
function,
\begin{eqnarray}
u(s) &=& \frac{(-1)^{(K-1)N/2}\Gamma(N/2)}{\Gamma(KN/2)}s^{KN/2-1} \nonumber\\
& & \qquad\qquad \frac{d^{(K-1)N/2}}{ds^{(K-1)N/2}}\frac{p(s|b,c)}{s^{N/2-1}} \ .
\label{u1}
\end{eqnarray}
This result, general in principle, yields for the log--logistic
distribution~(\ref{logp})
\begin{eqnarray}
u(s) &=& \frac{(-1)^{(K-1)N/2}Nc^{N/2}\Gamma(N/2)}{2\Gamma(KN/2)}s^{KN/2-1} \nonumber\\
& & \qquad\qquad \frac{d^{(K-1)N/2}}{ds^{(K-1)N/2}}\frac{1}{(c^{N/2}+s^{N/2})^2} \ .
\label{u2}
\end{eqnarray}
Restricting ourselves to even $N$, we may employ the theory of complex
functions to calculate the pole expansion
\begin{eqnarray}
& &\frac{1}{(c^{N/2}+s^{N/2})^2} = \sum_{n=1}^{N/2} \frac{1}{\prod_{m\neq n} (a_n-a_m)^2}\nonumber\\
& & \qquad\qquad \left(\frac{1}{(s-a_n)^2}-\frac{2}{s-a_n}\sum_{l\neq n}\frac{1}{a_n-a_l}\right)
\label{poleexp}
\end{eqnarray}
with the poles
\begin{equation}
a_n = c \exp\left(\frac{i2\pi}{N}(2n+1)\right) \ .
\label{poles}
\end{equation}
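The pole expansion~(\ref{poleexp}) with the poles~(\ref{poles}) can be verified numerically for even $N$; the parameter values in the sketch below are illustrative.

```python
# Hedged numerical check of the double-pole partial fraction
# decomposition (poleexp)/(poles) for 1/(c^{N/2}+s^{N/2})^2, even N.
import cmath

def pole_expansion(s, N, c):
    half = N // 2
    a = [c * cmath.exp(2j * cmath.pi * (2 * n + 1) / N)
         for n in range(1, half + 1)]
    total = 0j
    for n in range(half):
        prod = 1 + 0j
        for m in range(half):
            if m != n:
                prod *= (a[n] - a[m]) ** 2
        s1 = sum(1.0 / (a[n] - a[l]) for l in range(half) if l != n)
        total += (1.0 / prod) * (1.0 / (s - a[n]) ** 2
                                 - 2.0 / (s - a[n]) * s1)
    return total

errs = []
c = 1.3
for N in (4, 6):
    for s in (0.5, 1.7, 3.0):
        lhs = 1.0 / (c ** (N / 2) + s ** (N / 2)) ** 2
        rhs = pole_expansion(s, N, c)
        errs.append(abs(lhs - rhs))
max_err = max(errs)
```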
The derivatives in Eq.~(\ref{u2}) can now easily be evaluated and we
arrive at
\begin{eqnarray}
u(s) &=& \frac{Nc^{N/2}\Gamma(N/2)}{2\Gamma(KN/2)}s^{KN/2-1}\nonumber\\
& & \quad
\sum_{n=1}^{N/2} \frac{1}{\prod_{m\neq n} (a_n-a_m)^2}
\Biggl(\frac{\Gamma((K-1)N/2+2)}{(s-a_n)^{(K-1)N/2+2}}\nonumber\\
& & \qquad -
\frac{2\Gamma((K-1)N/2+1)}{(s-a_n)^{(K-1)N/2+1}}\sum_{l\neq n}\frac{1}{a_n-a_l}\Biggr) \ .
\label{u3}
\end{eqnarray}
In spite of the complex poles, this is by construction a real function.
Yet, it takes positive and negative values, which rules out an
interpretation of $u(s)$ and thus also of $\overline{w}(A|N,c)$ as
distributions. By means of this example we face the somewhat
surprising result that a well--defined distribution $p(x)$ does not
necessarily yield a well--defined ensemble. Each case has to be
investigated individually.
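The sign structure of~(\ref{u3}) can be made explicit numerically. The sketch below uses the illustrative values $N=4$, $K=3$, $c=1$ (much smaller than in the data analysis) and exhibits both signs of $u(s)$.

```python
# Hedged illustration with small parameters (our own choice N=4, K=3,
# c=1): a direct evaluation of (u3) shows that u(s) takes both signs.
import cmath, math

def u3(s, N, K, c):
    half = N // 2
    m = (K - 1) * N // 2
    a = [c * cmath.exp(2j * cmath.pi * (2 * n + 1) / N)
         for n in range(1, half + 1)]
    total = 0j
    for n in range(half):
        prod = 1 + 0j
        for l in range(half):
            if l != n:
                prod *= (a[n] - a[l]) ** 2
        s1 = sum(1.0 / (a[n] - a[l]) for l in range(half) if l != n)
        total += (1.0 / prod) * (math.gamma(m + 2) / (s - a[n]) ** (m + 2)
                                 - 2.0 * math.gamma(m + 1) * s1
                                 / (s - a[n]) ** (m + 1))
    pref = (N * c ** (N / 2) * math.gamma(N / 2)
            / (2.0 * math.gamma(K * N / 2)))
    return (pref * s ** (K * N // 2 - 1) * total).real

u_pos = u3(0.1, 4, 3, 1.0)  # positive near the origin
u_neg = u3(1.0, 4, 3, 1.0)  # negative
```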
\section{Further Extension by Deforming the Static Amplitude
Distribution}
\label{sec5}
We argued in Sec.~\ref{sec21} that the Gaussian
assumption~(\ref{gaussian}) for the static amplitude distribution is
not as restrictive as it might appear at first sight. At present, we do
not have data at our disposal in which the static amplitude distribution
is non--Gaussian, but we nevertheless extend our construction by assuming
more general functional forms, as this might be useful for future data
analyses. Moreover, we will come across some interesting
observations along the way. Instead of Eq.~(\ref{gaussian}), we now assume that the
static amplitude distribution can be expressed as an average over the
Gaussian~(\ref{gaussian}),
\begin{equation}
\overline{g}(r|\Sigma_s) = \int\limits_0^\infty h(\xi)
g\left(r\left|\frac{\Sigma_s}{\xi}\right.\right) d\xi
\label{static}
\end{equation}
with a new deformation function $h(\xi)$ that fulfils
\begin{align}
\int\limits_0^\infty h(\xi) d\xi = 1 \quad \textrm{and} \quad h(\xi) \ge 0 \ .
\label{hconstraints}
\end{align}
We proceed as in Sec.~\ref{sec22}. Instead of the ensemble
average~(\ref{average}), we now have
\begin{equation}
\langle\overline{g}\rangle (r|\Sigma,N) = \int d[A] \overline{w}(A|\Sigma,N)
\overline{g}\left(r\left|\frac{1}{N}AA^\dagger\right.\right) \ .
\label{averageg}
\end{equation}
This is readily cast into the form
\begin{equation}
\langle\overline{g}\rangle (r|\Sigma,N) = \int\limits_0^{\infty} p(x)
g\left(r|x\Sigma\right) dx \ ,
\label{averagesingleg}
\end{equation}
which differs from Eq.~(\ref{averagesingle}) only by the definition of
the amplitude distribution deformation function. It is now given by
\begin{equation}
p(x) = \int\limits_0^{\infty}d\xi h(\xi) \int\limits_0^{\infty}d\eta f(\eta)
\int\limits_0^\infty dz \chi^2_N(z) \delta\left(x-\frac{z}{\xi\eta}\right) \ .
\label{aampldefg}
\end{equation}
For fixed $\xi$, we introduce the new variable $\hat{\eta}=\eta\xi$ and find
\begin{equation}
p(x) = \int\limits_0^{\infty}d\hat{\eta} \hat{f}(\hat{\eta})
\int\limits_0^\infty dz \chi^2_N(z) \delta\left(x-\frac{z}{\hat{\eta}}\right) \ ,
\label{aampldefg2}
\end{equation}
which coincides with Eq.~(\ref{aampldef}), but now involves the new
ensemble deformation function
\begin{equation}
\hat{f}(\hat{\eta}) = \int\limits_0^{\infty} \frac{h(\xi)}{\xi}
f\left(\frac{\hat{\eta}}{\xi}\right) d\xi \ .
\label{aampldefg3}
\end{equation}
This integral is reminiscent of a convolution. Thus, the case of a
deformed, non--Gaussian static amplitude distribution is formally
traced back to the Gaussian case. The difference can be fully absorbed
into the ensemble deformation function. Importantly, this means that
all other results of Sec.~\ref{sec2} continue to hold, in particular
the Laplace transform~(\ref{laplace}) and its
inversion~(\ref{invlaplace}). Nevertheless, the following problem
remains. We can extract $h(\xi)$ and $p(x)$ from the data by using the
methods outlined in Sec.~\ref{sec23} for very short time intervals and
for the whole, long time interval, respectively. From the inverse
Laplace transform~(\ref{invlaplace}), we obtain $\hat{f}(\hat{\eta})$,
but to determine $f(\eta)$, we are left with the task to invert
Eq.~(\ref{aampldefg3}). Although that is definitely possible for some
special cases, a general inversion formula is lacking. In practical
applications, however, the extension sketched above is more likely to
be needed for consistency tests. For example, if some of the available
data for the same system permit the Gaussian assumption for the static
amplitude distribution and others do not, one can first determine
$f(\eta)$ as described in Sec.~\ref{sec2} and then turn to the data
which require an additional deformation function $h(\xi)$. Once both
of these deformation functions are known, one can evaluate
$\hat{f}(\hat{\eta})$ and check if it is consistent with the
inverse~(\ref{invlaplace}) of $p(x)$ which is independently extracted
from the data.
As an example, we consider the case that both $f(\eta)$ and $h(\xi)$
are $\chi^2$ distributions
\begin{equation}
f(\eta) = \chi_{N+L}^2(\eta) \qquad \textrm{and} \qquad
h(\xi) = \chi_M^{2}(\xi)
\label{aampldefg4}
\end{equation}
of $N+L$ and $M$ degrees of freedom, respectively. The choice for
$f(\eta)$ coincides with the result of Sec.~\ref{sec32}. With
Eq.~(\ref{aampldefg3}), we obtain
\begin{eqnarray}
\hat{f}(\hat{\eta}) = \frac{\sqrt{\hat{\eta}}^{(N+L+M)/2-2}\mathcal{K}_{(N+L-M)/2}(\sqrt{\hat{\eta}})}
{2^{(N+L+M)/2-1}\Gamma((N+L)/2)\Gamma(M/2)} \ ,
\label{wisretext}
\end{eqnarray}
where $\mathcal{K}_\nu$ is the modified Bessel function of the second
kind of order $\nu$. This function already appeared in the ensemble
averaged return distribution of Ref.~\cite{Schmitt2013}. According to
Eq.~(\ref{aampldefg2}), the distribution $p(x)$ is an integral
involving the modified Bessel function and the return distribution
averaged over the deformed ensemble is an integral over a product of
modified Bessel functions, but we do not give the formulae here.
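The result~(\ref{wisretext}) can be cross-checked by evaluating the integral~(\ref{aampldefg3}) numerically for the $\chi^2$ choices~(\ref{aampldefg4}); the degrees of freedom in the sketch below are illustrative.

```python
# Hedged numerical check with illustrative N+L=7, M=5: the
# convolution-type integral (aampldefg3) of two chi^2 densities
# is compared with the closed Bessel-K form (wisretext).
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, kv
from scipy.stats import chi2

NL, M = 7, 5  # N+L and M degrees of freedom

def fhat_closed(x):
    return (np.sqrt(x) ** ((NL + M) / 2 - 2) * kv((NL - M) / 2, np.sqrt(x))
            / (2 ** ((NL + M) / 2 - 1) * gamma(NL / 2) * gamma(M / 2)))

def fhat_numeric(x):
    integrand = lambda xi: chi2.pdf(xi, M) / xi * chi2.pdf(x / xi, NL)
    val, _ = quad(integrand, 0, np.inf)
    return val

rel_errs = [abs(fhat_numeric(x) - fhat_closed(x)) / fhat_closed(x)
            for x in (2.0, 5.0, 20.0)]
max_rel_err = max(rel_errs)
```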
\section{Conclusions}
\label{sec6}
Non--stationarity is an often encountered feature in complex systems.
Here, we addressed non--stationarity of correlations. We presented a
method to determine their distribution from the amplitude
distribution. Put differently, we showed how to extract the proper
ensemble of random covariance matrices from amplitude data. Carrying
out our analysis for the case of financial data, we found an algebraic
distribution of covariance matrices reflecting the heavy tails in the
amplitude, \textit{i.e.}, return distributions.
A conceptually important comment is in order. Consider any two
empirical covariance matrices $\Sigma(t_1)$ and $\Sigma(t_2)$. They
are certainly not independent, because, first, the correlation
structure due to, \textit{e.g.}, the industrial sectors in the
financial markets only changes on a large time scale and, second, one
would expect that they are the more dependent the smaller the time
difference $|t_1-t_2|$. The first cause is a built--in feature of our
model, as the random model covariance matrices fluctuate around the
empirical $\Sigma$. The second cause is effectively accounted for as
the length of our model time series $N$, \textit{i.e.}, the second
dimension of the random matrices $A$, is different from the length $T$
of the subintervals in which the empirical covariance matrices
$\Sigma(t)$ are evaluated. The values for $N$ resulting from the data
analysis are very small, $N\ll K$, while $T\ge K$ when measuring
$\Sigma(t)$. Our random matrix ansatz does not aim at modeling the
ensemble of the empirical covariance matrices $\Sigma(t)$ in a
one--to--one fashion. This is never the goal of a statistical
approach. Our model has an effective character which is obvious, for
example, in the relation $N\ll T$. Model time series much shorter than
the empirical ones suffice to properly grasp the statistical effects
induced by the fluctuations around the average covariance matrix. This
also reflects the mutual dependence of the empirical covariance
matrices. We demonstrated by our analysis of financial data how useful
our model is.
Furthermore, observables such as the amplitude distribution do not
resolve autocorrelations in time of any kind. Even reshuffling the
order of the empirical covariance matrices $\Sigma(t)$ in time does
not change the amplitude distribution for the total time interval. A
similar situation is encountered when analyzing statistical properties
of quantum chromodynamics. The gauge fields may be viewed as a really
existing ensemble over which an average is carried out --- this is the
very definition of the partition function. Although these gauge fields
are not independent either, an effect of this autocorrelation is only
seen if corresponding observables are used. Densities are not
affected. Random matrices as models for the highly non--trivial gauge
fields are applied with great success.
Yet another aspect is worth mentioning. In contrast to random matrix
applications for Hamiltonian systems, there is no local scale that
can enforce universal statistical behavior. Thus, it is important how
the covariances are actually distributed. Gaussian assumptions are
only acceptable if really justified by data analysis. The present
study extends a previous one in which we employed such a Gaussian
assumption in finance. Here, we reconsidered the same data set and
clearly demonstrated that the Gaussian assumption underestimates the
tails. The algebraic distributions that we found here are relevant for
risk estimation as they help to better understand large
events. Importantly, once the ensemble is properly extracted,
meaningful averages can be computed for all observables that depend on
non--stationary covariances.
When developing our construction, we came across a puzzling feature
which calls for a caveat. The deformation function extracted from the
amplitude distribution determines the ensemble uniquely, but this
ensemble is not necessarily
well--defined. Each case has to be studied individually. We do not expect
this to cause severe problems in applications, but conceptually it is
an interesting aspect. We also extended our construction by including
deformed static amplitude distributions. The additional freedom
accompanying this extension might offer a possibility to circumvent
the above mentioned puzzling problem. From a more general viewpoint,
we have to emphasize that our construction only includes functional
forms of the ensemble that depend on the trace over the product of the
random covariance matrix and the mean covariance matrix. Although this
is quite natural, as it guarantees a certain amount of basis
invariance which all random matrix models need, more general
functional forms pose an interesting and potentially important
challenge.
Hitherto, we only applied our method to finance. We plan applications
to other complex systems, too. This may be rewarding, as large events
and risk estimations are not only important in finance.
\section*{Acknowledgments}
We thank Desislava Chetalova, Tobias Nitschke, Thilo Schmitt and Yurij
Stepanov for fruitful discussions.
Let $\Omega$ be a bounded domain in $\mathbb{C}^n$ and $A^{2}(\Omega)$ denote the Bergman space
of square-integrable holomorphic functions on $\Omega.$ The Bergman projection on
$\Omega$ is the orthogonal projection from $L^2(\Omega)$ onto $A^{2}(\Omega).$
The Bergman projection is known to be regular, in the sense that it maps $W^s$ to
$W^s$ for all $s\geq 0$ where $W^s$ denotes the Sobolev space of order $s,$ on a
large class of smooth bounded pseudoconvex domains (throughout this paper a domain is
smooth if its boundary is a smooth manifold). Regularity is, usually,
established through the $\overline{\partial}$-Neumann problem, the solution operator for the
complex Laplacian $\Box=\overline{\partial}\dbar^*+\overline{\partial}^*\overline{\partial}$ on square integrable
$(0,1)$-forms. For more information on this matter we refer the reader to
\cite{BoasStraube99,StraubeBook} and the references therein.
Irregularity of the Bergman projection is not understood nearly as well as
regularity. The story of irregularity goes back to the discovery of the
worm domains in $\mathbb{C}^2$ by Diederich and Forn\ae{}ss \cite{DiederichFornaess77}.
Worm domains were constructed to show that the closure of some smooth
bounded pseudoconvex domains may not have Stein neighborhood bases (a compact
set $K\subset \mathbb{C}^n$ is said to have a Stein neighborhood basis if for every open
set $U$ containing $K$ there exists a pseudoconvex domain $V$ such that
$K\subset V \subset U$). Indeed, Diederich and Forn\ae{}ss in
\cite{DiederichFornaess77} showed that the closure of a worm domain does not
have a Stein neighborhood basis if the total winding is bigger than or equal to
$\pi.$ It turned out that worm domains are also counter-examples for regularity
of the Bergman projection. In 1991, Kiselman \cite{Kiselman91} showed that the
Bergman projection does not satisfy Bell's condition R on nonsmooth worm
domains (a domain $\Omega$ satisfies Bell's condition
R if the Bergman projection maps $C^\infty(\overline\Omega)$ to
$C^\infty(\overline\Omega)$). In 1992, the first author
\cite{Barrett92} showed that the Bergman projection on a smooth worm domain does not map
$W^s$ into $W^s$ if $s\geq \pi /(\text{total winding}).$ On the other hand, Boas
and Straube \cite{BoasStraube92} showed that the Bergman projection maps $W^k$
into $W^k$ if $k\leq \pi /(2\times \text{total winding})$ and $k$ is a positive
integer or $k=1/2.$ Finally, in 1996 Christ \cite{Christ96} showed that the
Bergman projections on smooth worm domains, with any positive
winding, do not satisfy Bell's condition R. Recently, Krantz and Peloso
\cite{KrantzPeloso08a,KrantzPeloso08b} studied the asymptotics
for the Bergman kernel on the model domains in $\mathbb{C}^2$ and derived $L^p$
(ir)regularity for the Bergman projection on worm domains in $\mathbb{C}^2.$
In this note we will construct smooth bounded pseudoconvex domains
$\Omega_{\alpha\beta}\subset \mathbb{C}^n$ that are higher dimensional generalizations of
the worm domains in $\mathbb{C}^2$ and study the irregularity of the Bergman projection
on these domains on $L^p$ Sobolev spaces for $1\leq p<\infty.$ We will use
the method developed by the first author in \cite{Barrett92} to show that
irregularity on $L^2$ Sobolev spaces depends only on the total winding whereas the
irregularity on $L^p$ spaces with $p\neq 2$ depends on the total winding as well as the
dimension $n$.
The two parameters $\alpha$ and $\beta$ in $\Omega_{\alpha\beta}$ represent the speed of
the winding and the thickness of the annulus, respectively. Both parameters play a role in
the proof of Theorem \ref{Theorem}, but we find it interesting to note that the actual
results depend only on the total winding whether this is achieved by fast winding along a
thin annulus or slow winding along a thick annulus.
The domains $\Omega_{\alpha\beta} \subset \mathbb{C}^n, n\geq 3,$ are defined by
\begin{equation*}\label{Domain}
\Omega_{\alpha\beta}=\left\{(z_{1}, z',z_{n})\in \mathbb{C}^{n}: r(z_1,z',z_n) <0\right\}
\end{equation*}
with
\[r(z_1,z',z_n) =\left|z_{1}-e^{2i\alpha \ln|z_{n}|}
\right|^{2}+|z'|^{2}-1 +\sigma(|z_n|^2-\beta^2)+ \sigma(1-|z_n|^2);\]
here $z'=(z_{2},\ldots,z_{n-1}), |z'|^{2}=|z_{2}|^{2}+\cdots+|z_{n-1}|^{2}$,
the constants $\alpha>0, \beta>1,$ and
$\sigma(t)=Me^{-1/t}$ for $t> 0,\,\sigma(t)=0$ for $t\leq 0$ for some $M>0$.
In section \ref{levi} below we show that $\Omega_{\alpha\beta}$
is smooth bounded pseudoconvex when $M$ is sufficiently large.
The main result of this paper is the following theorem.
\begin{theorem}\label{Theorem}
The Bergman projection for $\Omega_{\alpha\beta}$ does not map
$W^{p,s}\left(\Omega_{\alpha\beta}\right)$ into
$W^{p,s}\left(\Omega_{\alpha\beta}\right)$ where $1\leq p<\infty$ and
$s\ge \frac{\pi}{2\alpha\ln\beta}+n\left(\frac{1}{p}-\frac{1}{2}\right).$
\end{theorem}
Here $W^{p,s}\left(\Omega_{\alpha\beta}\right)$ is the Sobolev space of order
$s$ with exponent $p$ and when
$W^{p,s}\left(\Omega_{\alpha\beta}\right)\not\subset L^2\left(\Omega_{\alpha\beta}\right)$
we mean that the $W^{p,s}$ bounds do not hold for the Bergman projection on
$W^{p,s}\left(\Omega_{\alpha\beta}\right)\cap L^2\left(\Omega_{\alpha\beta}\right).$ The
denominator $2\alpha\ln\beta$ appearing above may be interpreted as the
total amount of winding along the annulus $1<|z_{n}|<\beta$
(see \eqref{LimitDomain} below).
If we choose $p=2$ then the amount of irregularity provided by a fixed amount
of winding is independent of the dimension.
\begin{corollary}\label{corollary1}
The Bergman projection for $\Omega_{\alpha\beta}$ does not map
$W^{2,s}\left(\Omega_{\alpha\beta}\right)$ to
$W^{2,s}\left(\Omega_{\alpha\beta}\right)$ when $s\geq
\frac{\pi}{2\alpha\ln\beta}$.
\end{corollary}
\begin{remark}
Assume that the Bergman projection $P_U$ of a domain $U$ is bounded on $L^p(U)$
for some $p> 2.$ Then the duality and self-adjointness of the Bergman
projection imply that $P_U$ is also bounded on $L^q(U)$
where $\frac{1}{p}+\frac{1}{q}=1.$ Furthermore, interpolation implies that
$P_U$ is bounded on $L^r$ for all $r\in [q,p].$
\end{remark}
Therefore, when $s=0$ and $n\alpha\ln\beta>\pi,$ the previous remark and Theorem
\ref{Theorem} imply the following corollary.
\begin{corollary}\label{corollary2}
The Bergman projection for $\Omega_{\alpha\beta}$ does not map
$L^p\left(\Omega_{\alpha\beta}\right)$ to
$L^p\left(\Omega_{\alpha\beta}\right)$ when
$0<\frac{1}{p}\le\frac{1}{2}-\frac{\pi}{2n\alpha\ln\beta}$ or
$\frac{1}{2}+\frac{\pi}{2n\alpha\ln\beta}\leq \frac{1}{p}<1$.
\end{corollary}
Theorem \ref{Theorem} is proved in section \ref{ProveThm} below. The proof is
based on model domain asymptotics developed in section \ref{Model}.
\section{Geometry of the Worm Domains} \label{levi}
\begin{proposition}
The domain $\Omega_{\alpha\beta}$ is smooth bounded and pseudoconvex whenever
$M$ is sufficiently large.
\end{proposition}
\begin{proof}
Start by requiring $M>e^2$. Then $\Omega \subset \{z\in \mathbb{C}^n:
|z_1|<3,|z'|<2,1/2 <|z_n|<\sqrt{\beta^2+1/2}\}$; in particular, $\Omega$ is bounded.
Also, by considering $z_1$-, $z'$-, and $z_n$-derivatives in order it is easy to
check that the gradient of $r(z)$ does not vanish on $\{z\in \mathbb{C}^n:r(z)=0\}$, so
$\Omega$ has smooth boundary.
It remains to show that $\Omega_{\alpha\beta}$ is pseudoconvex. It suffices to
check this locally. We focus on the case $|z_n|\geq (1+\beta)/2$, the case
$|z_n|\leq (1+\beta)/2$ being similar.
Multiplying $r(z)$ by $e^{\Arg(z^{2\alpha}_{n})}$
we obtain the new defining function
\begin{equation*}
r_1(z)=r_2(z)-2\,\re\left(z_1 z_n^{-2\alpha i}\right)
\end{equation*}
where
\[r_2(z) = \left( |z_1|^2 + |z'|^2+\lambda(z_n)\right) e^{\Arg(z^{2\alpha}_{n})}
\text{ and } \lambda(z_n)=\sigma\left(|z_n|^2-\beta^2\right).\]
Since $2\,\re\left(z_1 z_n^{-2\alpha i}\right)$ is pluriharmonic it will
suffice now to show that $r_2$ is plurisubharmonic.
To simplify the notation let $A(z)=|z_1|^2+|z'|^2+\lambda(z_n)$ and
$B(z)=\Arg(z^{2\alpha}_{n}).$ Let $W=\sum_{j=1}^nw_j \partial/\partial z_j$ with
$w_j$ constant. In the following calculations $H_{f}(W)$ denote the complex
Hessian of $f$ in the direction $W.$ Then $W(r_2)=e^B(W(A)+AW(B))$ and
Cauchy-Schwarz inequality implies that
\[-2\re\left( \overline w_n B_{\overline
z_n}\sum_{j=1}^{n-1} w_j\overline z_j \right)\leq
\sum_{j=1}^{n-1}|w_j|^2+|\overline w_nB_{\overline
z_n}|^2 \sum_{j=1}^{n-1}|z_j|^2.\]
Using the above inequality in the second line below we get
\begin{align*}
H_{r_2}(W)
&=e^B(H_A(W)+2\re(W(A)\overline{W}(B))+A|W(B)|^2+A H_B(W)) \\
&\geq |w_n|^2e^B(\lambda_{z_n\overline z_n}+2\re(\lambda_{z_n}B_{\overline
z_n})+\lambda |B_{\overline z_n}|^2).
\end{align*}
One can check that
$\lambda_{z_n}(z_n)=\overline{z}_n \sigma'(|z_n|^2-\beta^2),|B_{\overline
z_n}|=\frac{\alpha}{|z_n|},$ and
\begin{equation*}
\lambda_{z_n\overline{z}_n}(z_{n})= |z_n|^2\sigma''(|z_n|^2-\beta^2)
+\sigma'(|z_n|^2-\beta^2).
\end{equation*}
We note that since $\lambda
(z_n)=\lambda_{z_n}(z_n)=\lambda_{z_n\overline z_n}(z_n)=0$ for
$|z_n|\leq \beta,$ without loss of generality we can assume that
$|z_n|>\beta.$ Using the fact that $\beta<|z_n|<\sqrt{\beta^2+1/2}$ and
$t=|z_n|^2-\beta^2$ on the third line below we get
\begin{align*}
\lambda_{z_n\overline z_n}
+2\re(\lambda_{z_n}B_{\overline z_n})+\lambda |B_{\overline z_n}|^2 \geq &
\lambda_{z_n\overline z_n}
-\frac{2\alpha|\lambda_{z_n}|}{|z_n|}\\
\geq &|z_n|^2\sigma''(|z_n|^2-\beta^2)
+(1-2\alpha)\sigma'(|z_n|^2-\beta^2) \\
=& Me^{-1/t} \left(
\frac{\beta^2+t}{t^4}-\frac{2(\beta^2+t)}{t^3}+\frac{1-2\alpha}{t^2}
\right)\\
=&\frac{M(\beta^2+t)e^{-1/t}}{t^4} \Big( 1-2t+\frac{(1-2\alpha)t^2}{\beta^2+t}
\Big)
\end{align*}
We can choose $M$ sufficiently large so that $z\in \Omega_{\alpha\beta}\cap
\{z\in\mathbb{C}^n:|z_n|\geq \beta\}$ implies that $t$ is sufficiently small. In turn, this
implies that
\[1-2 t+\frac{(1-2\alpha)t^2}{\beta^2+t} > 0.\]
The last inequality above implies that
$\lambda_{z_n\overline z_n} +2\re(\lambda_{z_n}B_{\overline z_n})+\lambda
|B_{\overline z_n}|^2 \geq 0$ for
$z\in \Omega_{\alpha\beta}$ such that $|z_n|\geq (1+\beta)/2.$
Hence, the domain $\Omega_{\alpha\beta}$ is pseudoconvex for sufficiently large $M.$
\end{proof}
\begin{remark}
A similar calculation shows that the set of weakly pseudoconvex points in the
boundary is the set $\{(0,\ldots,0,z_n)\in \mathbb{C}^n:1\leq |z_n| \leq \beta \}.$
\end{remark}
\begin{remark}
We note that regularity of the $\overline{\partial}$-Neumann operator is closely connected
to regularity of the Bergman projection \cite{BoasStraube90}. In particular,
if the $\overline{\partial}$-Neumann operator of a smooth bounded pseudoconvex domain is
globally regular then the Bergman projection satisfies Bell's condition R.
One can show that on the set $\{(0,\ldots,0,z_{n})\in \mathbb{C}^n:1\leq |z_n| \leq
\beta \}$ the Levi form of $r$ has only one vanishing eigenvalue as the Levi
form has positive eigenvalues in the direction transversal to $z_{n}$-axis. In
this case Theorem 1 in \cite{SahutogluStraube06} applies and it implies that the
$\overline{\partial}$-Neumann operator is not compact on $(0,1)$-forms (compactness of the
$\overline{\partial}$-Neumann operator implies that it is globally regular
\cite{KohnNirenberg65}). However, to show irregularity of the Bergman
projection in Sobolev scale one needs to work harder.
\end{remark}
\section{Model Domains} \label{Model}
In this section we are going to define a family of simplified model domains and
calculate the asymptotics for the Bergman kernels of these model domains. We
use a modified version of the method developed by the first author
in \cite{Barrett92}.
For $\lambda>0$ let
\begin{align*}
\tau_{\lambda}(z_{1},z',z_{n})&=(2\lambda^{2}z_{1},\lambda z',z_{n}),\\
r_{\lambda}&=\lambda^{2} r \circ \tau_{\lambda}^{-1},\\
D_{\lambda}&=\tau_{\lambda}(\Omega_{\alpha\beta}).
\end{align*}
Then for $1\le|z_{n}|\le\beta$ we have $r_{\lambda}\searrow r_{\infty}$ as
$\lambda\to \infty$ where
\[r_{\infty}(z_{1},z',z_{n})=|z'|^{2}-\re\left(z_{1}e^{
-2\alpha i\ln|z_{n}|}\right);\]
for $|z_n|$ outside this range we have $r_{\lambda}\to\infty$. It follows that
the $D_{\lambda}$ converge in an appropriate sense to the limit domain
\begin{equation}\label{LimitDomain}
D=D_{\alpha\beta}= \left\{(z_{1},z',z_{n})\in \mathbb{C}^{n}:\re\left(
z_{1}e^{-2\alpha
i\ln|z_{n}|}\right) >|z'|^{2},1<|z_{n}|<\beta\right\},
\end{equation}
the limit being increasing over the annulus $1\le|z_{n}|\le\beta$.
The Bergman projection $P$ of $D$ is defined by $Pf(z)=\int_D K(z,w) f(w)\,dV(w)$
where $f\in L^{2}(D)$ and $K:D\times D\to \mathbb{C},$ is the Bergman kernel
characterized by the following conditions
\begin{itemize}
\item [i.] $K(z,w)\in A^2(D)$ for fixed $w\in D$,
\item[ii.] $K(w,z)=\overline{K(z,w)}$,
\item[iii.] $\int_D K(z,w) f(w)\,dV(w)=f(z)$ for $f\in A^{2}(D)$.
\end{itemize}
If $f_1, f_2,\dots$ is an orthonormal basis for $A^{2}(D)$ then we
have $K(z,w)=\sum_j f_j(z)\overline{f_j(w)}.$
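For a concrete illustration of this basis expansion (not needed in the sequel), the unit disc with the orthonormal basis $\sqrt{(j+1)/\pi}\,z^j$ recovers the familiar kernel $\pi^{-1}(1-z\overline w)^{-2}$:

```python
# Hedged side illustration: on the unit disc the sum
# K(z,w) = sum_j f_j(z) conj(f_j(w)) with f_j(z) = sqrt((j+1)/pi) z^j
# converges to 1/(pi (1 - z conj(w))^2).
import math

z, w = 0.4 + 0.2j, 0.1 - 0.3j
q = z * w.conjugate()

series = sum((j + 1) / math.pi * q ** j for j in range(200))
closed = 1.0 / (math.pi * (1 - q) ** 2)
err = abs(series - closed)
```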
To study the Bergman kernel of $D$ we begin by performing a Fourier
decomposition. We define
\begin{equation}\label{projection}
(P_{Jk}f)(z_{1},z',z_{n})=\frac{1}{2^{n-1}\pi^{n-1}}\int\limits_{[-\pi,\pi]^{n-1
}}f(z_{1},e^{iS}z',e^{it}z_{n})e^{-iJ S}e^{-ikt}dS\,dt,
\end{equation}
where
\begin{align*}
e^{iS}&=(e^{is_{1}},\ldots,e^{is_{n-2}}),\\
S&=(s_{1},\ldots,s_{n-2})\in [-\pi,\pi]^{n-2},\\
J&=(j_{1},\ldots,j_{n-2})\in \mathbb{N}^{n-2},\\
k&\in \mathbb{Z},\\
J S&=j_{1}s_{1}+\cdots+j_{n-2}s_{n-2},\\
dS&=ds_{1}\cdots ds_{n-2}.
\end{align*}
Let us define the mapping
$\rho_{St}(z_{1},z',z_{n})=(z_{1},e^{iS}z',e^{it}z_{n}).$
Then $P_{Jk}$ is the orthogonal projection from $A^{2}(D)$ onto
\[A_{Jk}^{2}(D)=\{f\in A^{2}(D): f\circ \rho_{St}=e^{iJS}e^{ikt}f \text{ for all
} S,t\}.\]
Therefore the Bergman space $A^{2}(D)$ can be written as an orthogonal sum
\[A^{2}(D)=\underset{\begin{subarray}{c}
J\in \mathbb{N}^{n-2},\,k\in\mathbb{Z} \end{subarray}}{\bigoplus} A_{Jk}^{2}(D)\]
and the Bergman kernel $K(z,w)$ for $D$ satisfies
\[K(z,w)=\sum\limits_{J\in \mathbb{N}^{n-2},\,k\in\mathbb{Z}}K_{Jk}(z,w)\]
where $K_{Jk}(z,w)$ is the kernel for $A_{Jk}^{2}(D).$
One can show that for $f\in A_{Jk}^{2}(D)$ the function
$f(z_{1},z',z_{n})z_{2}^{-j_1}\cdots z_{n-1}^{-j_{n-2}}z_{n}^{-k}$ is
locally independent of $(z',z_n)$. We regard such functions as
functions of $z_1$, where it is understood that $z_1$ ranges over the Riemann
domain described by $-\pi/2<\Arg z_1<2\alpha\ln\beta+\pi/2$.
Let $|J|=j_{1}+\cdots+j_{n-2}.$ Then a square integrable holomorphic function
$f$ on $D$ can be written as
\[ f(z)=\sum_{J\in \mathbb{N}^{n-2},\,k\in \mathbb{Z} }F_{Jk}(z)\,\]
where
\[F_{Jk}(z_{1},z',z_{n})=z_{1}^{-\frac{|J|+n}{2}}f_{Jk}(z_{1})z'^{J}z_{n}^{k}\]
and the sum converges locally uniformly.
Now we will calculate the $L^{2}$-norm of $F_{Jk}$ on $D.$ Let
$z_{1}=r_{1}e^{i\theta_{1}},$ $r_{j}=|z_{j}|$ for $j=1,\ldots,n,$
$r'=\sqrt{r_{2}^{2}+\cdots+r_{n-1}^{2}},$ and $s=\ln|z_{n}|^{2}.$ Then $D$ is described
by the inequalities
\begin{align*}
0&< r_{1}<\infty,\\
0&<s<2\ln\beta,\\
|\theta_{1}-\alpha s|&<\pi/2,\\
0&\leq r'<\sqrt{r_{1}\cos(\theta_{1}-\alpha s)}.
\end{align*}
We have
\begin{eqnarray} \displaystyle
\nonumber \|F_{Jk}\|^{2}_{D}&=&
\int\limits_{D}
|f_{Jk}(r_{1}e^{i\theta_{1}})|^{2}r_{1}^{-|J|-n+1}r_2^{2j_1+1}\cdots
r_{n-1}^{2j_{n-2}+1} r_{n}^{2k+1}d\theta_{1}\cdots d\theta_{n} dr_{1}\cdots
dr_{n} \\
\nonumber &=&
C_{nJ} \int\limits_{\substack{0< r_{1}<\infty \\
|\theta_{1}-\alpha s|<\pi/2 \\
0<s<2\ln \beta}}
|f_{Jk}(r_{1}e^{i\theta_{1}})|^{2}\cos^{|J|+n-2}(\theta_{1}-\alpha
s)e^{s(k+1)}r_{1}^{-1} d\theta_{1} dr_{1} ds \\
&=& \label{IntOnGamma}
\int\limits_{\substack{
0< |z_{1}|<\infty \\
-\pi /2<\arg(z_{1})<2\alpha \ln \beta +\pi /2 }}
|f_{Jk}(z_{1})|^{2}W_{Jk}(\theta_{1})|z_{1}|^{-2}\,dV(z_{1})
\end{eqnarray}
where $C_{nJ}$ is a positive constant,
\[W_{Jk}(\theta_{1})=C_{nJ}\int_{-\infty}^{\infty}\cos^{|J|+n-2}(\theta_{1}
-\alpha t)\chi_{\pi/2}(\theta_{1}-\alpha t)e^{t(k+1)}\chi_{\ln
\beta}(t-\ln\beta)\,dt, \]
and $\chi_a(t)$ is the characteristic function of the interval $[-a,a]$ for
$a>0.$ (The positivity of $C_{nJ}$ follows from the fact that we are only
integrating over positive values of $r_j$.)
Let us use a change of coordinates $z=\ln z_1$ in the last integral to obtain
\begin{align}\label{ChangeCoord}
\|F_{Jk}\|^{2}_{D}&= \int\limits_{\substack{
-\infty< x<\infty \notag\\
-\pi /2<y<2\alpha \ln \beta +\pi /2 }}|f_{Jk}(e^z)|^{2}W_{Jk}(y)\,dV(z) \\
&= \int\limits_{\substack{-\infty< x<\infty \\
-\pi /2<y<2\alpha \ln \beta +\pi /2 }}|\widetilde
f_{Jk}(z)|^{2}W_{Jk}(y)\,dV(z)
\end{align}
where $z=x+iy$ and $\widetilde f_{Jk}(z)=f_{Jk}(e^{z}).$
Then $\widetilde f_{Jk}$ is a square integrable holomorphic function on
$S_{\alpha\beta}=\{z\in \mathbb{C}:-\pi/2<\im(z)<\pi /2+2\alpha \ln \beta \}$ with
weight $W_{Jk}.$ Furthermore, the Bergman kernel $K_{Jk}$ for $A_{Jk}^{2}(D)$
can be calculated as
\begin{equation}\label{EqnKernelTransfrom}
K_{Jk}(z,w)=K_{Jk}^{\alpha\beta} (\ln z_{1},\ln
w_{1})\frac{z'^{J}z^{k}_{n}\overline w'^{J}\overline
w^{k}_{n}}{z_{1}^{\frac{|J|+n}{2}}\overline w^{\frac{|J|+n}{2}}_{1}}
\end{equation}
where $K_{Jk}^{\alpha\beta}$ is the Bergman kernel on $S_{\alpha\beta}$ with the
weight $W_{Jk}.$
(One way to see this is to note that \eqref{ChangeCoord} allows us to convert an
orthonormal basis for the Bergman space on $S_{\alpha\beta}$ with weight
$W_{Jk}$ to an orthonormal basis for $A_{Jk}^{2}$.)
Let $\mathcal{F}(f)$ denote the Fourier transform of $f$; thus
$\mathcal{F}(f)(\xi)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}f(t)e^{-i\xi t}dt$ and
$\mathcal{F}^{-1}(f)(x)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}f(\xi)e^{i\xi
t}d\xi$.
\begin{proposition}\label{JkKernel}
$K^{\alpha\beta}_{Jk}$ is given by the integral
\begin{equation}\label{IntFormula}
K^{\alpha\beta}_{Jk}(z,w)=\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}} \frac{ e^{i(z-\overline
w)\xi}}{\mathcal{F}(W_{Jk})(-2i\xi)}d\xi.
\end{equation}
\end{proposition}
\begin{proof}
See \cite{Barrett92} and \cite[Lemma 6.5.1]{ShawBook}.
\end{proof}
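As a sanity check on this Paley--Wiener type representation, consider the unweighted strip $|\im z|<\pi/2$, where $1/\mathcal{F}(W)(-2i\xi)$ is proportional to $\xi/\sinh(\pi\xi)$; up to overall normalization constants, which are immaterial for the asymptotics below, the integral reproduces the classical strip kernel proportional to $1/\cosh^{2}((z-\overline w)/2)$:

```python
# Hedged sanity check (normalization constants are not tracked):
# 2 * int_0^inf xi cos(u xi)/sinh(pi xi) d xi = (1/2) sech^2(u/2),
# the shape of the Bergman kernel of the strip |Im z| < pi/2.
import math
from scipy.integrate import quad

def strip_integral(u):
    integrand = lambda xi: xi * math.cos(u * xi) / math.sinh(math.pi * xi)
    val, _ = quad(integrand, 0, 40)
    return 2 * val

errs = [abs(strip_integral(u) - 0.5 / math.cosh(u / 2) ** 2)
        for u in (0.0, 1.0, 2.5)]
max_err = max(errs)
```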
Note also that $-\pi<\text{Im}(z-\overline w)<\pi+4\alpha\ln\beta $ for $
z,w\in S_{\alpha\beta}.$
\begin{proposition}\label{WeightTransform}
The Fourier transform of $W_{Jk}$ is given by
\begin{equation}\label{WeightTransformFormula}
\mathcal{F}(W_{Jk})(\xi)
=D_{nJ}e^{-\frac{i\xi\pi}{2}}\frac{E_{Jk}(\xi)}{
(\xi+|J|+n-2)(\xi+|J|+n-4)\ldots(\xi-|J|-n+2)}
\end{equation}
where
\[E_{Jk}(\xi)=\left(e^{i\xi\pi}-(-1)^{|J|+n}\right)\left(\frac{e^{
2(k+1-i\alpha\xi)\ln\beta}-1}{k+1-i\alpha\xi}\right). \]
\end{proposition}
We postpone the proof of this Proposition.
To apply residue methods to \eqref{IntFormula} we need to find the zeros of
$\mathcal{F}(W_{Jk})(-2i\xi)$. Let us denote the set $\{s\in \mathbb{Z}:-m\leq s\leq m\}$
by $\mathbb{I}(m).$ From Proposition \ref{WeightTransform} we see that if
$|J|+n$ is even then the zeros of $\mathcal{F}(W_{Jk})(-2i\xi)$ are located at
\[\left\{ mi:m\in
\mathbb{Z}\setminus\mathbb{I}\left(\frac{|J|+n-2}{2}\right)\right\} \bigcup
\left\{\frac{m\pi i}{2\alpha\ln\beta}+\frac{k+1}{2\alpha
}:m\in \mathbb{Z}\setminus\{0\}\right\} \]
and in case $|J|+n$ is odd they are located at
\begin{multline*}
\left\{ mi+\frac{ i}{2}:m\in
\mathbb{Z}\setminus\left(\mathbb{I}\left(\frac{|J|+n-3}{2}\right)\cup\{
-(|J|+n-1)/2\}\right)\right\} \\
\bigcup \left\{\frac{m\pi i}{2\alpha\ln\beta}+\frac{k+1}{2\alpha
}:m\in \mathbb{Z}\setminus\{0\}\right\}.
\end{multline*}
For simplicity we focus now on the case $J=0,\, k=-2$; note that this
guarantees that the zeros enumerated above are simple (see Remark \ref{kChoice}
below).
Let $\nu_{\alpha\beta}=\frac{\pi}{2\alpha\ln\beta}$ and
$\mu_{\alpha}=\frac{1}{2\alpha }>0$.
\begin{proposition} \label{Kernel}
The kernels $K_{0,-2}$ satisfy
\begin{multline}\label{BergmanKernel}
K_{0,-2}(z,w)= \sum_{\ell=0}^{[\nu_{\alpha\beta}-n/2]}
C_\ell z_{1}^{\ell}\overline w_{1}^{-\ell-n}
z_n^{-2}\overline w_n^{-2}
+Cz_{1}^{\nu_{\alpha\beta}-n/2-i\mu_{\alpha} }\overline
w_1^{-\nu_{\alpha\beta}-n/2+i\mu_{\alpha}} z_n^{-2}\overline w_n^{-2}
+\rem(z,w)
\end{multline}
where $\varepsilon>0$, the constants $C$ and $C_\ell$ are nonzero and the
remainder term $\rem(z,w)$ satisfies
\begin{equation*}
\left(\frac{\partial}{\partial z_{1}}\right)^{m}\rem(z,w) =
O\left(z_1^{\nu_{\alpha\beta}-n/2+\varepsilon-m}
\overline w_1^{-\nu_{\alpha\beta}-n/2-\varepsilon}\right)
\end{equation*}
uniformly on closed subannuli of $1<|z_{n}|<\beta$.
\end{proposition}
\begin{proof}
We apply the residue theorem to the integral in \eqref{IntFormula} along the
strip $-\nu_{\alpha\beta}-\varepsilon\le \im\xi\le 0$ to obtain
\begin{equation*}
K^{\alpha\beta}_{0,-2}(z,w) =
\sum_{\ell=0}^{[\nu_{\alpha\beta}-n/2]} C_\ell e^{(\ell+\frac{n}{2})( z-
\overline w)} + C e^{(\nu_{\alpha\beta}-i\mu_{\alpha})( z- \overline w)} +
\widetilde\rem(z,w)
\end{equation*}
for non-zero $C, C_\ell$, where $\widetilde\rem(z,w)$ and all of its derivatives
are $O\left( e^{(\nu_{\alpha\beta}+\varepsilon)( z-\overline w)}\right)$
on closed substrips of $S_{\alpha\beta}$.
Plugging this into \eqref{EqnKernelTransfrom} we obtain \eqref{BergmanKernel}.
\end{proof}
\begin{remark}\label{kChoice}
We have focused on the case $J=0,\, k=-2$ because this is the simplest choice
which avoids possible problems with double poles. Analogous formulae hold for
other values of $k$ in the absence of double poles. When double poles do occur
they contribute factors of $\ln(z_1-\overline w_1)$.
\end{remark}
\begin{lemma}\label{SumEval}
$\displaystyle{ \sum_{s=0}^{j}\binom{j}{s} \frac{(-1)^{s}}{\xi+\alpha(j-2s)}
= \frac{(-2\alpha)^j j!} {(\xi+\alpha j)(\xi+\alpha(j-2))\cdots(\xi-\alpha
j)}}.$
\end{lemma}
\begin{proof}
The statement is true for $j=0$.
Working inductively and recalling that
$\binom{j}{s}=\binom{j-1}{s-1}+\binom{j-1}{s}$ we have
\begin{align*}
\sum_{s=0}^{j}\binom{j}{s}\frac{(-1)^{s}}{\xi+\alpha(j-2s)}
&=
\sum_{s=0}^{j-1}\binom{j-1}{s}\frac{(-1)^{s}}{\xi+\alpha(j-2s)}
+
\sum_{s=1}^{j}\binom{j-1}{s-1}\frac{(-1)^{s}}{\xi+\alpha(j-2s)}
\\
&=
\frac{(-2\alpha)^{j-1} (j-1)!}
{(\xi+\alpha j)(\xi+\alpha(j-2))\cdots(\xi+\alpha(-j+2))}\\
&\qquad -\frac{(-2\alpha)^{j-1} (j-1)!}
{(\xi+\alpha(j-2))(\xi+\alpha(j-4))\cdots(\xi-\alpha j)}\\
&=
\frac{(-2\alpha)^{j-1} (j-1)!}
{(\xi+\alpha(j-2))\cdots(\xi+\alpha(-j+2))}
\left(
\frac{1}{\xi+\alpha j}-\frac{1}{\xi-\alpha j}
\right)\\
&=\frac{(-2\alpha)^j j!}
{(\xi+\alpha j)(\xi+\alpha(j-2))\cdots(\xi-\alpha j)}.
\end{align*}
\end{proof}
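The identity is easy to spot-check numerically. The sketch below (the helper names are ours, not from the text) compares both sides of Lemma \ref{SumEval} for small $j$ at generic values of $\xi$ and $\alpha$ chosen so that no denominator vanishes:

```python
import math

def lhs(j, xi, alpha):
    # left-hand side of Lemma SumEval
    return sum(math.comb(j, s) * (-1) ** s / (xi + alpha * (j - 2 * s))
               for s in range(j + 1))

def rhs(j, xi, alpha):
    # right-hand side: (-2 alpha)^j j! over the product of shifted factors
    denom = math.prod(xi + alpha * (j - 2 * s) for s in range(j + 1))
    return (-2 * alpha) ** j * math.factorial(j) / denom

# xi = 1.7, alpha = 0.3 keep every denominator xi + alpha*(j - 2s) nonzero
for j in range(7):
    assert abs(lhs(j, 1.7, 0.3) - rhs(j, 1.7, 0.3)) < 1e-10
```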
\begin{proof}[Proof of Proposition \ref{WeightTransform}]
Write
\[W_{Jk}(y)= C_{nJ} \Big( W_{Jk1} * W_{Jk2}\Big) (y/\alpha)\]
for $ -\pi /2<y<\pi/2+2\alpha \ln \beta$ where $f*g$ denotes the convolution of
$f$ and $g$ and
\begin{align*}
W_{Jk1}(t)&=\cos^{|J|+n-2}(\alpha t)\chi_{\pi/2}(\alpha t), \\
W_{Jk2}(t)&=e^{t(k+1)}\chi_{\ln \beta}(t-\ln\beta).
\end{align*}
To calculate the Fourier transform of $W_{Jk}$ we first calculate
\[\cos^{j}(t)=\frac{1}{2^{j}}\sum_{s=0}^{j}\binom{j}{s}e^{i(2s-j)t}.\]
One can calculate that
\[\mathcal{F}(\cos^{j}(t)\chi_{\pi/2}(t))(\xi)=\frac{1}{i\sqrt{2\pi}2^{j-1}}
\sum_{s=0}^{j}\binom{j}{s}\frac{\left(e^{\frac{i(\xi+j-2s)\pi}{2}}-e^{-\frac{i
(\xi+j-2s)\pi}{ 2 }}\right)}{2(\xi+j-2s)}.\]
Lemma \ref{SumEval} implies that
\begin{align*}
\mathcal{F}(\cos^{j}(\alpha t)\chi_{\pi /2}(\alpha t))(\xi)
&= \frac{1}{\alpha}\mathcal{F}(\cos^{j}( t)\chi_{\pi /2}(t))(\xi/\alpha)
\\
&=\frac{i^{j-1}\Big(e^{\frac{i\xi\pi}{2\alpha}}-(-1)^{j}e^{-\frac{i\xi\pi}{
2\alpha}}\Big)}{\sqrt{2\pi}2^{j}}\sum_{s=0}^{j}\binom{j}{s}\frac{(-1)^{s}}{
\xi+\alpha (j-2s)}\\
&=\frac{(-\alpha i)^j j!
\Big(e^{\frac{i\xi\pi}{2\alpha}}-(-1)^{j}e^{-\frac{i\xi\pi}{2\alpha}}\Big)}
{i\sqrt{2\pi}{(\xi+\alpha j)(\xi+\alpha(j-2))\cdots(\xi-\alpha j)}}.
\end{align*}
We also need to find the Fourier transform of $e^{kt}\chi_{a}(t-a)$:
\[\mathcal{F}(e^{kt}\chi_{a}(t-a))(\xi)= \frac{1}{\sqrt{2\pi}}\frac{e^{2a(k-i\xi)}-1}{
k-i\xi}.\]
Using $\mathcal{F}(f*g)=\sqrt{2\pi}\mathcal{F}(f)\mathcal{F}(g)$ we find that the Fourier transform of
$W_{Jk}$ is given by \eqref{WeightTransformFormula}.
\end{proof}
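As a sanity check on the key step of the proof, the closed form obtained above for $\mathcal{F}(\cos^{j}(\alpha t)\chi_{\pi /2}(\alpha t))(\xi)$ can be compared against direct numerical quadrature. A minimal sketch (our own helper names; composite Simpson's rule with an ad hoc number of nodes):

```python
import cmath, math

def ft_closed(j, alpha, xi):
    # closed form derived above for F(cos^j(alpha t) chi_{pi/2}(alpha t))(xi)
    num = (-alpha * 1j) ** j * math.factorial(j) * (
        cmath.exp(1j * xi * math.pi / (2 * alpha))
        - (-1) ** j * cmath.exp(-1j * xi * math.pi / (2 * alpha)))
    den = 1j * math.sqrt(2 * math.pi) * math.prod(
        xi + alpha * (j - 2 * s) for s in range(j + 1))
    return num / den

def ft_simpson(j, alpha, xi, n=4000):
    # direct quadrature of (2 pi)^{-1/2} * integral of cos^j(alpha t) e^{-i xi t}
    # over the support |alpha t| < pi/2 (composite Simpson, n even)
    a = math.pi / (2 * alpha)
    h = 2 * a / n
    total = 0j
    for m in range(n + 1):
        t = -a + m * h
        w = 1 if m in (0, n) else (4 if m % 2 else 2)  # Simpson weights
        total += w * math.cos(alpha * t) ** j * cmath.exp(-1j * xi * t)
    return total * h / 3 / math.sqrt(2 * math.pi)

for j in (0, 1, 2, 5):
    assert abs(ft_closed(j, 0.7, 1.3) - ft_simpson(j, 0.7, 1.3)) < 1e-8
```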
\section{Proof of Theorem \ref{Theorem}}\label{ProveThm}
The proof of Theorem \ref{Theorem} follows immediately from
Lemmas \ref{SupposeContrary} and \ref{Bad-f} below.
\begin{lemma}
If $P$ is continuous on $W^{p,s}(\Omega_{\alpha\beta})$ then
\begin{equation}\label{Estimate}
\left\| |r_{\lambda}|^{t} \left(\frac{\partial}{\partial z_1}\right)^m
P_{\lambda}f\right\|_{L^{p}(D_{\lambda})} \leq C \left\|
f\right\|_{W^{p,s}(D_{\lambda})}
\end{equation}
where $m$ is a nonnegative integer, $0\leq t<1$ such that $m=s+t$ and the
constant $C$ is independent of $\lambda$ and $f.$
\end{lemma}
\begin{proof}
Assume that $P$ maps $W^{p,s}(\Omega_{\alpha\beta})$ onto itself continuously
and let $T_{\lambda}f=f\circ \tau_{\lambda}.$ Then one can check that
\[\left\| \left(\frac{\partial}{\partial z}\right)^{P}
\left(\frac{\partial}{\partial \overline z}\right)^{Q}T_{\lambda}f
\right\|_{L^{p}(\Omega_{\alpha\beta})}=2^{p_{1}+q_{1}-2/p}\lambda^{2p_{1}
+2q_{1}+|P'|+|Q'|-2n/p} \left\| \left(\frac{\partial}{\partial z}\right)^{P}
\left(\frac{\partial}{\partial \overline z}\right)^{Q}f
\right\|_{L^{p}(D_{\lambda})} \]
where
$P=(p_{1},\ldots,p_{n})$, $Q=(q_{1},\ldots,q_{n})$, $P'=(p_{2},\ldots,p_{n-1})$,
$Q'=(q_{2},\ldots,q_{n-1})$, $|P'|=p_{2}+\cdots+p_{n-1},$ and $|Q'|=q_{2}+\cdots+q_{n-1}.$
Therefore we have
\[\| T_{\lambda}f \|_{W^{p,k}(\Omega_{\alpha\beta})}\leq
2^{k-2/p}\lambda^{2k-2n/p}\| f \|_{W^{p,k}(D_{\lambda})} .\]
By interpolation we also have
$\| T_{\lambda}f \|_{W^{p,s}(\Omega_{\alpha\beta})}\leq
2^{s-2/p}\lambda^{2s-2n/p}\| f \|_{W^{p,s}(D_{\lambda})}$ for all $s>0$.
Let $s=m-t$ where $m$ is a nonnegative integer and $0\leq t<1$. We have
\begin{equation}\label{HoloSob}
\left\| |r|^{t} \left(\frac{\partial}{\partial z_{1}}\right)^{m}
f\right\|_{L^{p}(\Omega_{\alpha\beta})} \le C_1
\left\|f\right\|_{W^{p,s}(\Omega_{\alpha\beta})}
\end{equation}
for $f$ holomorphic on $\Omega_{\alpha\beta}$ (see, for example,
\cite{LIgocka87}).
Let $P_{\lambda}$ be the Bergman projection for $D_{\lambda}$. Then
$P_{\lambda}=T_{\lambda}^{-1}PT_{\lambda}$ and
\begin{eqnarray*}
\left\| |r_{\lambda}|^{t} \left(\frac{\partial}{\partial z_1}\right)^m
P_{\lambda}f\right\|_{L^{p}(D_{\lambda})}
&=& \left\| |r_{\lambda}|^{t} \left(\frac{\partial}{\partial z_1}\right)^{m}
T_{\lambda}^{-1}PT_{\lambda}f\right\|_{L^{p}(D_{\lambda})}\\
&=&2^{2/p-m}\lambda^{2t+2n/p-2m}\left\| |r|^{t} \left(\frac{\partial}{\partial
z_{1}}\right)^{m} PT_{\lambda}f\right\|_{L^{p}(\Omega_{\alpha\beta})}\\
&\leq& C_2 \lambda^{2t+2n/p-2m}\left\|
PT_{\lambda}f\right\|_{W^{p,s}(\Omega_{\alpha\beta})}\\
&\leq& C_3 \lambda^{2n/p-2s}\left\|
T_{\lambda}f\right\|_{W^{p,s}(\Omega_{\alpha\beta})}\\
&\leq& C_4 \left\| f\right\|_{W^{p,s}(D_{\lambda})}
\end{eqnarray*}
where the constants are independent of $\lambda$.
\end{proof}
\begin{lemma} \label{SupposeContrary}
If the estimate \eqref{Estimate} holds on $D_{\lambda}$ then
$$\left\| |r_{\infty} |^{t} \left(\frac{\partial}{\partial z_1}\right)^m
P_{\infty}f\right\|_{L^{p}(D)} \leq C \left\| f\right\|_{W^{p,s}(D)} $$
where $P_{\infty}$ is the Bergman projection on $D$ and the constant $C$ is
independent of $f.$
\end{lemma}
The above lemma can be proved like Lemma 1 in \cite{Barrett92}.
\begin{lemma} \label{Bad-f}
Let $s\geq \nu_{\alpha\beta}+n\left(\frac{1}{p}-\frac{1}{2}\right)$ where
$\nu_{\alpha\beta}=\frac{\pi}{ 2\alpha\ln\beta}$ and $s=m-t$ as above. Then
there exists $f\in C^{\infty}_{0}(D)$ such that $|r_{\infty} |^{t}
\left(\frac{\partial}{\partial z_1}\right)^m P_{\infty}f$ is not in $L^p(D)$.
\end{lemma}
\begin{proof} Since $P_{Jk}$ maps $W^{p,\delta}(D)\cap A^p(D)$ onto
$W^{p,\delta}(D)\cap A^p_{Jk}(D)$ for all $\delta\geq 0$ it is sufficient to
prove that there exists $f\in C^{\infty}_0(D)$ such that
$P_{Jk}P_{\infty}f\not\in W^{p,s}(D).$ Fix $w\in D, J=0,$ and $k=-2$. Let $f$ be
a nonnegative smooth function with compact support in $D$ such that it depends
on $|z-w|$ and $\int_{D}f=1.$ Then $K_{0,-2}(\cdot,w)=P_{0,-2}P_{\infty}f.$ We
can write $s=m-t$ where $m$ is a nonnegative integer and $0\leq t<1.$ In view
of \eqref{HoloSob} above (adapted to $D$) it suffices to show that
$|r_{\infty}(z)|^{t}\frac{\partial^m}{\partial z_1^m}K_{0,-2}(z,w)\not\in
L^p(D)$ for fixed $w.$ Proposition \ref{Kernel} implies that
\[\frac{\partial^m}{\partial z_1^m}K_{0,-2}(z,w)=
C z_{1}^{\nu_{\alpha\beta}-n/2-i\mu_{\alpha} -m}
+O\left(z_1^{\nu_{\alpha\beta}-n/2+\varepsilon-m}
\right).\]
Let
\begin{multline*}
D'= \Big\{(z_{1},z',z_{n})\in \mathbb{C}^{n}:\re\left( z_{1}e^{-2\alpha
i\ln|z_{n}|}\right) >|z'|^{2},1+\delta<|z_{n}|<\beta-\delta,\\
|z_1|<\delta, \Big|\theta_1-2\alpha \ln|z_n|\Big|<\frac{\pi}{4}\Big\}
\end{multline*}
for suitably small $\delta>0$.
Then $|r_{\infty}|$ is comparable to $|z_1|$ on $D'$ and
\begin{align*}
\int\limits_{D}|r_{\infty}(z)|^{pt}\left|\frac{\partial^m}{\partial
z_1^m}K_{0,-2}(z,w)\right|^{p}dV(z)
&\ge \int\limits_{D'}|r_{\infty}(z)|^{pt}\left|\frac{\partial^m}{\partial
z_1^m}K_{0,-2}(z,w)\right|^{p}dV(z)\\
&\ge c \int_{0}^{\delta} r_1^{p\nu_{\alpha\beta}+pt-pm+n-1-pn/2} dr_1
\end{align*}
where $c$ is a positive constant. The last integral above is divergent if $s\geq
\nu_{\alpha\beta}+n\left(\frac{1}{p}-\frac{1}{2}\right).$ Therefore
\[|r_{\infty}(z)|^t\frac{\partial^m}{\partial z_1^m}P_{0,-2}P_\infty f
=|r_{\infty}(z)|^t\frac{\partial^m}{\partial z_1^m}K_{0,-2}(z,w)\not\in
L^p(D)\]
for $s\geq \nu_{\alpha\beta}+n\left(\frac{1}{p}-\frac{1}{2}\right).$
\end{proof}
\section{Acknowledgment}
We would like to thank the referee for pointing out a mistake in an
earlier version of this manuscript.
In ref.\deriv\ we claimed that a sequence of truncations of the field
dependence of the ERG\Wil\weg\ does not work in
general (in the sense of providing dependable and in principle
arbitrarily accurate results), because it does not converge beyond a
certain point, and because no completely reliable method exists
to reject the many spurious solutions that are also generated.
In this letter
we verify this claim for the relatively simple
$O(p^0)$ case described in the abstract. We compute the truncations to
high order ($n=25$), and show how this behaviour may be
understood, accurately and at a deeper level,
in terms of the analytic behaviour of the untruncated solutions
-- which is described here in much more detail than was possible in ref.\deriv.
(We must emphasise here the distinction between a {\sl momentum/derivative
expansion} in which higher space-time derivative terms are discarded but
{\sl no approximation} is made in the field dependence -- these approximations
{\sl do} appear to converge\erg\deriv\ -- and {\sl truncations} of the field
dependence of the lagrangian which in general do not converge and can give even
qualitatively wrong results).
We point out that the analytic properties of the untruncated solutions
allow for the possibility of searching, within the momentum/derivative
expansion \deriv--\truncm, {\sl systematically}
through the infinite dimensional space of non-perturbative lagrangians
for new continuum limits.
It need hardly be stated that, firstly very little is known about the
possible existence of continuum theories in such a space, and secondly,
if new fixed points were found, they could have profound implications.
We have done
such a search for $O(N)$ scalar field theory in $D=$ 4 and 3
dimensions, for the cases $N=1,2,3,4$, in the $O(p^0)$
approximation. However, we found there to be
only the known fixed points at this level:
Gaussian for $D=3$ and 4, and the Wilson fixed points in 3 dimensions.
Finally we construct two better methods of approximation by expansion.
The simplest way to
solve the $O(p^0)$ equations is, however, direct numerical
integration\deriv\ (see also later, and ref.\hashas).
The truncations, if carefully interpreted, can give moderately accurate
results -- thus the simpler low order truncations may be of some use
in situations where more reliable and
accurate calculations are prohibitively difficult to perform.
This situation seems very reminiscent of approximations (also
involving truncations of the operator basis) to the ``real space
renormalization group'' investigated in the late 1970's[\the\refno]\nref\realsp{See
for example the reviews in eds. T.W. Burkhardt and J.M.J. van Leeuwen,
``Real-Space Renormalization'' (1982), Springer, Berlin. }.
For recent work on approximations
to the ERG see for example refs.\deriv--\others. From Alford's work\al,
and the numerical solution, it is
clear that expansion of $\varphi$ around the semi-classical minimum of
the effective potential, results at low orders of truncation,
in convergence to three decimal places or more for the $O(p^0)$ $\nu$
(or the other exponents related by scaling).
It would be interesting to better understand
the reliability of this method and its limiting
accuracy (also for $\omega$). Its behaviour no doubt is otherwise
similar to the truncations we discuss, and for the same
underlying causes. Effectively this expansion was part of the
calculation in ref.\wet, where the authors claim to
obtain quite
accurate values for exponents of $3D$ $N$-vector models. At higher orders
in the momentum expansion we expect that truncations become more
limited in accuracy and reliability,
if only because it ought now to involve the much more complicated
calculation of the expansion of several functions\deriv\ in
polynomials, each with their own limitations in accuracy. Indeed,
even in ref.\truncm, where we will consider truncations in the field
dependence of higher momentum terms ($O(p^n)$ with $n>0$) only, making
no expansion in the potential, the results support this hypothesis.
Here we will concentrate on the case of sharp cutoff and $O(p^0)$, i.e.
the remarkable equation[\the\refno]\nref\orig{J.F. Nicoll, T.S. Chang and H.E.
Stanley, Phys. Lett. 57A (1976) 7.}\ first studied without further
approximation, by Hasenfratz and Hasenfratz\hashas. We start with a
simple alternative derivation.\foot{ For more information on this, see
ref\erg.}\ The equivalent smooth cutoff equation\deriv\ is
qualitatively very similar so that much the same analysis,
and all our general conclusions, apply
equally well to the smooth case. We work in $D$
euclidean dimensions with a single real scalar field $\varphi$. The
partition function is defined as
\eqn\zorig{\exp W_\Lambda[J]=\int\!{\cal D}\varphi\
\exp\{-\half\varphi.\Delta_\Lambda^{-1}\! .\varphi-S_{\Lambda_0}[\varphi]+J.\varphi\}\ \ .}
$\Delta_\Lambda\equiv\Delta(q,\Lambda)=\te q/q^2$ is a
free massless propagator times a smooth (everywhere positive)
infrared cutoff function $\te q$, smoothed over a width controlled by a
parameter $\varepsilon$. The sharp cutoff limit is given by
the Heaviside function:
\eqn\thelimit{\te q\to \theta(q-\Lambda)\ins11{as} \varepsilon\to0\ \ .}
{}From \zorig\ we derive the flow equation for $W_\Lambda$:
$${\partial\over\partial\Lambda}W_\Lambda[J]=
-{1\over2}\left\{ {\delta W_\Lambda\over
\delta J}.{\partial \Delta_\Lambda^{-1} \over\partial\Lambda}.{\delta
W_\Lambda\over\delta J} +
{\rm tr}\left({\partial \Delta_\Lambda^{-1} \over\partial\Lambda}.{\delta^2
W_\Lambda\over\delta J\delta J}\right)\right\}\quad ,$$
which on rewriting in terms of the interaction part of the Legendre effective
action via $\Gamma_\Lambda[\varphi]+\half\varphi.\Delta_\Lambda^{-1}\! .\varphi
=-W_\Lambda[J]+J.\varphi$,\quad $\varphi=\delta W_\Lambda/\delta J$, gives
\eqn\fgam{ {\partial\over\partial\Lambda}\Gamma_\Lambda[\varphi]=
-{1\over2}{\rm tr}\left[{1\over \Delta_\Lambda}{\partial
\Delta_\Lambda\over \partial\Lambda}
.\left( 1+\Delta_\Lambda.{\delta^2\Gamma_\Lambda\over\delta\varphi\delta\varphi}
\right)^{-1}\right]\ \ .}
$\Gamma_\Lambda$ generates the one particle irreducible parts of the Wilson
effective action (Wegner-Houghton effective action\weg\ in the sharp cutoff
limit) \erg.
If we now make the approximation of discarding all momentum
dependence from $\Gamma_\Lambda$ in \fgam, we obtain
\eqn\inter{{\partial\over\partial\Lambda}V(\varphi,\Lambda)=-{1\over2}
\int\!{d^Dq\over(2\pi)^D}\,\left\{ {\partial\te q\over\partial\Lambda}
{1\over \te q \left[ 1+\te q V''(\varphi,\Lambda)/q^2\right]}\right\}\ \ ,}
where we have introduced the potential through
$\Gamma_\Lambda =\int\!d^Dx\,
V(\varphi({\bf x}),\Lambda)$. Primes denote differentiation with respect to $\varphi$.
The integrand (in curly brackets) in \inter\ may be written
$${\partial\over\partial\Lambda}\left\{ \ln\te q-\ln\left[1+\te q
V''(\varphi,{\widetilde\Lambda})/q^2\right]
\right\}\Bigm|_{{\widetilde\Lambda}=\Lambda}\ \ .$$
The first term above yields a field independent vacuum energy which we drop
from $V$. Taking the sharp cutoff limit \thelimit\ we thus obtain for the
integrand
$$\delta(q-\Lambda)\ln(1+V''(\varphi,\Lambda)/q^2)\ \ .$$
The integral in \inter\ is now trivial. Factoring out the scale $\Lambda$
by writing $\varphi\mapsto\Lambda^{D/2-1}\varphi/\zeta$,
$V(\zeta^{-1}\varphi\Lambda^{D/2-1},\Lambda)
\mapsto \zeta^{-2}\Lambda^DV(\varphi,t)$, and $t=\ln(\Lambda_0/\Lambda)$, where
the factor $\zeta=(4\pi)^{D/4}\sqrt{\Gamma(D/2)}$ is chosen for convenience,
we obtain the advertised equation\orig\hashas:
\eqn\Veq{{\partial\over\partial t}V(\varphi,t)+(D/2-1)\varphi
V'(\varphi,t)-D V(\varphi,t)= \ln\left[1+V''(\varphi,t)\right]\ \ .}
The anomalous dimension $\eta=0$
in this case since a non-zero $\eta$ results from non-trivial
wavefunction renormalization. The reader may be puzzled as to why
we obtain the same equation as that for the Wilson effective potential\hashas\
given that ours is the Legendre
effective potential. In fact these are one and the same, since at zero momentum
all vertices of the Wilson effective action $S_\Lambda$
coincide with $\Gamma_\Lambda$ \erg.
Now we set $D=3$. From \Veq, a fixed point
effective potential $V(\varphi,t)\equiv V(\varphi)$ must satisfy
\eqn\fp{\eqalign{
{1\over2}\varphi V'(\varphi)-3 V(\varphi) &= \ln\left[1+V''(\varphi)\right]\ \ ,
\hskip 1cm\cr
{\rm and}\hskip 1cm V'(0)&=0}}
(by $\varphi\leftrightarrow-\varphi$ invariance).
At first sight these equations appear to have many solutions, which may be
parametrized by $V(0)=-\ln(1+\sigma)/3$, $\sigma\Lambda^2$ being the
semi-classical effective mass-squared.
Actually this is not the case\hashas,
because {\sl all but two solutions end at a singularity of
the form} $V(\varphi)\sim 2(1-\varphi/\varphi_c)\ln(\varphi_c-\varphi)$, $\varphi_c$ some
positive constant, or more precisely,
as a series in decreasingly singular terms,
\eqn\sing{\eqalign{V=&\ln(x)\left(x-{3\over8}x^2-{25\over432}x^3
-{5\varphi_c^2\over384}x^4+{3169\over27648}x^4+\cdots\right)\cr
&+\ln(x)^2\left({25\over288}x^3-{25\over1152}x^4-\cdots\right)
+O(\ln(x)^4 x^5)\ \ ,}}
where $x=1-\varphi^2/\varphi_c^2$. We now justify this statement, by dividing
the behaviour of $V(\varphi)$ into three classes.
First of all, if
$V$ ends at a singularity, it, or some derivative of it diverges there. By
considering how
the various terms can balance in \fp, one sees that a singularity
must be of the form above.
Secondly, if $V$ does not end at a singularity but instead it and its
first two derivatives tend to a limit ($\pm\infty$ included) as
$\varphi\to \infty$, then again, considering the balance of terms in
\fp\ one sees that either $V(\varphi)\to 0$ or $V(\varphi)$ satisfies
\eqn\asy{V(\varphi)=A\varphi^6-{4\over3}\ln\varphi -{2\over9} -{1\over3}\ln(30A)
-{1\over150A\varphi^4}+O(1/\varphi^6)\ \ ,}
for some positive constant $A$,
as $\varphi\to\infty$. Linearizing in \fp\
about the first possibility, i.e. setting
$V(\varphi)\mapsto V(\varphi) +\delta V(\varphi)$, one finds that $\delta V$ must behave
for large $\varphi$ as a linear combination of $\varphi^6$ and $\exp(\varphi^2/4)$.
Since both of these corrections are excluded
if we require also $V(\varphi)+\delta V(\varphi)\to0$ as $\varphi\to
\infty$, we conclude that $V(\varphi)$ is identically zero in this case,
which indeed is
the trivial Gaussian solution to \fp. (For a study of perturbations about
the Gaussian in eqn.\Veq, see ref.\hashas). Linearizing about the
second possibility, i.e. taking $V(\varphi)$ to be as in \asy, one finds that
$\delta V$ must behave for large $\varphi$ as a linear combination of $\varphi^6$
and $\exp(5A\varphi^6/2)$. Now the first correction merely perturbs the
coefficient $A$, while the second is excluded if $V+\delta V$ also satisfies
\asy. It follows that the space of
solutions obeying \asy\ divide into {\sl isolated} one-parameter subsets,
each parametrized by $A$. For both possibilities the
exponentially growing corrections were the result of linearizing the singular
behaviour \sing, since they involve balancing the same terms in \fp.
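One can check numerically that the asymptotic form \asy\ satisfies the fixed point equation \fp\ to the stated order: substituting the truncated series leaves a residual that falls off as $\varphi$ grows, coming from the neglected $O(1/\varphi^6)$ terms. A minimal sketch (the value of $A$ is the $A_*$ quoted below; any positive $A$ would do):

```python
import math

A = 0.003033   # the A_* quoted below for the Wilson fixed point

def V(p):      # asymptotic series, truncated at the 1/phi^4 term
    return (A * p ** 6 - 4.0 / 3.0 * math.log(p) - 2.0 / 9.0
            - math.log(30 * A) / 3.0 - 1.0 / (150 * A * p ** 4))

def Vp(p):     # dV/dphi
    return 6 * A * p ** 5 - 4.0 / (3.0 * p) + 2.0 / (75 * A * p ** 5)

def Vpp(p):    # d^2 V/dphi^2
    return 30 * A * p ** 4 + 4.0 / (3.0 * p ** 2) - 2.0 / (15 * A * p ** 6)

def residual(p):
    # mismatch in the fixed point equation: phi V'/2 - 3V - ln(1 + V'')
    return abs(0.5 * p * Vp(p) - 3.0 * V(p) - math.log(1.0 + Vpp(p)))

# the residual shrinks like a power of 1/phi as phi grows
assert residual(5) > residual(10) > residual(20)
```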
Thirdly, $V$ or one of its first two derivatives may be defined for all
finite real $\varphi$ but not tend to a limit as $\varphi\to\infty$. Studying
\fp\ one sees that this requires solutions to become infinitely
oscillatory as $\varphi\to\infty$. We do not supply a proof that this does
not happen, but we feel confident we can rule it out because we saw no hint
of it in our extensive numerical and analytic investigations (see below).
\midinsert
\centerline{
\psfig{figure=truncfig1.ps,width=4.3in}}
\vskip 0in
\centerline{\vbox{{\fam\bffam\tenbf Fig.1.} The solution $V(\varphi)$ which approximates the
Wilson fixed point.
}}
\endinsert
Therefore, apart from the trivial solution $V(\varphi)\equiv0$, any global
fixed point solution of \fp\ must satisfy {\sl two} boundary
conditions: \asy\ and $V'(0)=0$. We thus expect at most a countable
number of such solutions; we find only
one. It may be characterized by $\sigma=\sigma_*=-.46153372\cdots$
(or $A=A_*=.003033\cdots$) and is displayed in fig.\the\figno\nfig\fpV{ }. (We
describe the search
below).
This is the same solution as in ref.\hashas\ of
course and is a fair approximation to the Wilson fixed point.
To obtain the critical exponents and operator
spectrum one linearizes about this solution in \Veq, i.e. one writes
$V(\varphi,t)=V(\varphi)+\delta V(\varphi,t)$, with $\delta V(\varphi,t)\propto
v(\varphi) \exp(\lambda t)$. If one excludes the exponentially growing
solution (which is $\exp(5A\varphi^6/2)$ again) one obtains a discrete
spectrum, and deduces from the eigenvalues the critical exponents,
in reasonable agreement with the best estimates,
as already described in ref.\deriv\
(cf. also \hashas\ and later). It is surely the case that the
exponentially growing solution is again a linearization of singular
behaviour, this time in the flowing solution $V(\varphi,t)$.
Note that it is wrong to conclude from fig.\fpV, e.g. by na\"\i ve
semi-classical considerations, that the Wilson fixed point
describes a spontaneously broken theory: indeed this is inconsistent with
the fact that the field theory is scale invariant at this point. The
remaining quantum corrections for momenta $q<\Lambda$, which generically
give positive contributions to the mass-squared, in this case exactly cancel
the negative mass-squared $\sigma_* \Lambda^2$, and as $\Lambda\to 0$
one recovers the complete Legendre effective potential as
$V(\varphi)=A_*\varphi^6$ (from \asy\ in unscaled units).
Note also that it is not necessary to consider separately the question
of the physical stability of solutions to \fp. This is
because there are no solutions that are unbounded from below
(contrary to the statement in ref.\hashas), while
solutions that end on the singularity \sing\ are clearly physically
unacceptable: the potential does not {\sl
diverge} at $\varphi_c$, it simply ceases to exist -- or is complex thereafter.
(In the Gaussian case the stability is seen to hold automatically once
perturbations about the fixed point are considered, as a consequence of
the ln in \Veq).
The general structure of the solutions is as follows. For $\sigma$ close but
less than $\sigma_*$,
the solutions look very similar to fig.\fpV\ but end at some $\varphi=\varphi_c$
in a singularity \sing, as is most easily seen by plotting
$V''\sim 2/[\varphi_c(\varphi_c-\varphi)]$. As $\sigma$ approaches $\sigma_*$ from
below, $\varphi_c$ moves out along the real axis, but very slowly, so that
for $\varphi_c>3$ we require $\sigma_*-\sigma<\Delta$ where $\Delta\approx.005$,
while $\sigma$ must approximate $\sigma_*$ to high precision for say
$\varphi_c\gsim4$. For $\sigma$ at the same proximity but above
$\sigma_*$ the singularity splits into a complex conjugate pair close to
the real axis, with real positions Re$(\varphi_c)$ in approximately
the same place. These move closer to, and out along, the real axis as
$\sigma\to\sigma_*^+$. The distance from the real axis is also a sensitive
function of $\sigma-\sigma_*$.
(Perhaps the position of these singularities is given by
$\varphi_c\sim |\sigma_*-\sigma|^{-\tau}$, when $\sigma\approx\sigma_*$,
$\sigma{<\atop>}\sigma_*$, for some small positive
constant exponent $\tau$). On the real axis, at $\varphi\approx{\rm Re}(\varphi_c)$,
$V''$ turns over steeply and drops
rapidly to a value just greater than $-1$ and approximately constant over
a large range. In this range $V$ slowly ``rolls over'' as
\eqn\slorol{V(\varphi)=-{1\over3}\ln\delta +c\varphi-{1\over2}\varphi^2
+\delta\int^\varphi_0 \!d\psi\, (\varphi-\psi)\exp\{\psi^2-{5\over2}c
\psi\}\quad+\cdots}
where $\varphi=c+O(\delta)$
is the point inside the range where the potential reaches a maximum, $\delta$
is small ($\delta\to0$ as $\sigma\to\sigma_*^+$), and
$|\varphi-c|\ll\sqrt{\ln(1/\delta)}$.
Eventually, at some
position $\varphi=\varphi_c'$ outside this range, $V$ encounters another singularity
of the form \sing.
For $\sigma=-1+\delta$, $\delta\to0^+$, the solution
$V(\varphi)$ is governed by \slorol\ with $c=0$, ending in a singularity
$\varphi'_c\sim\sqrt{\ln(1/\delta)}$.
As $\sigma\to0^-$, the singularities move out to infinity
in such a way that $V(\varphi)$ tends pointwise to zero. For $\sigma>0$,
$V(\varphi)$ grows monotonically (with real increasing $\varphi$)
ending in a singularity \sing, which moves out slowly
as $\sigma$ is increased.
Returning to the true solution, $V(\varphi)$ at $\sigma=\sigma_*$, we note that
$V$ has a four-fold symmetry in the complex plane:
complex conjugation $\times$ ($\varphi\leftrightarrow-\varphi$).
Factoring out this symmetry, if one
carefully integrates out along rays $\varphi=r\,\e{i\vartheta}$, with $0\le \vartheta
\le \pi/2$, one can determine the position of the closest
singularity. It appears at $r=r_*=3.12$, $\vartheta=\vartheta_*=.257\pi$. (There are
others with $r>r_*$).
We see that it is possible to make a systematic search for new continuum
limits {\sl without making any assumption about the form of the bare
potential}.
Indeed if such a potential can be tuned to a critical point where a continuum
limit is recovered, then the corresponding Wilson effective potential must
satisfy \fp\ at that point. On the other hand, if we find an acceptable
solution of \fp, then because such a solution is scale invariant, this
equally well serves as the critical bare potential. Thus a search over the
infinite dimensional space of bare potentials reduces to a one dimensional
search for effective potentials obeying \fp\ that do not end in a singularity.
In fig.\the\figno\nfig\searchfp{ }\ we plot the inverse of the
position of the real singularity against $\sigma$. The Gaussian and
Wilson fixed points are clearly seen as sharp downward spikes, at
$\sigma=0$ and $\sigma=\sigma_*$ respectively. We have performed the
equivalent search in $O(N)$ scalar field theory in $D=3,4$ dimensions for
$N$ from 1 to 4, as mentioned in the introduction -- finding only the
expected fixed points. (The relevant equation for general $N$ is given for
example in ref.\hashas). Of course the restriction to considering only general
potentials is a result of our approximation; at higher orders in the
momentum expansion a larger space of lagrangians can be searched.
\midinsert
\centerline{
\psfig{figure=truncfig2.ps,width=4.3in}}
\vskip 0in
\centerline{\vbox{{\fam\bffam\tenbf Fig.2.} The inverse of the position of the real
singularity $\varphi_c$ as a function of $\sigma$, sampled with a stepsize
$\delta\sigma=1/120$. The downward spikes reach further towards zero with
a finer mesh. The various features are explained in the description of the
analytic structure given earlier.
}}
\endinsert
(Note that equation \fp\ is stiff[\the\refno]\nref\NR{ For a discussion of stiffness and
what to do about it see e.g. W.H. Press et al, ``Numerical Recipes .. The
Art of Scientific Computing'', 2nd edition (1992) C.U.P.}, the higher
order equations\deriv\ more severely so. To obtain an accurate representation
of the solution at $\sigma=\sigma_*$ one can binary chop between the slow
rollover behaviour \slorol\ for $\sigma>\sigma_*$ and the singular behaviour
\sing\ for $\sigma<\sigma_*$, but a more efficient method is to require
\asy\ at some large value of $\varphi$. One can then either shoot to the
origin\foot{For $N\ne1$ one would need to shoot to an intermediate fitting
point.}\ -- determining $A$ to zero $V'(0)$, or use relaxation.)
We can now give an intuitive explanation,
based on the simplified context of \fp, for why the truncations in fact
converge at first but then cease to further
converge beyond a certain maximum $n$.
The point is that, as well as the true solutions,
there are many `bad' solutions with singular field dependence on the real axis.
Very bad solutions have (real or complex)
singular field dependence very close to the origin $\varphi=0$,
causing the coefficients of $\varphi^m$ ($m$-point Green functions or vertices
in general) to diverge rapidly with $m$ according to the appropriate
radius of convergence. Naturally, the polynomial field dependence of the
truncations, for which the $2n+2$ vertex vanishes,\foot{For a discussion of
truncations in general see for example ref\erg.}\
tend therefore
to better approximate the Taylor series of a true solution. Increasing $n$
will tend at first to further improve the approximation, by in effect ensuring
that the singularities are forced further from the origin. However a
non-trivial true solution also has singularities for complex
$\varphi$ at (and in general beyond) some radius $|\varphi|=r_*$.
Therefore the truncations cannot be expected to converge to better results
than would be obtained from `moderately bad' solutions with singular field
dependence only at or beyond the radius $r_*$.
Now let us make this argument much more precise.
If we wish to ensure that the potential $V(\varphi)$ is an even function, then
the Taylor expansion must be done about the origin. It is helpful to write
\eqn\expa{V(\varphi)=-{1\over3}\ln(1+\sigma)+{1\over2}\sigma\varphi^2+4\sum_{k=2}
{\alpha_{2k}(\sigma)\over2k(2k-1)}\varphi^{2k}\quad.}
Plugging this into eqn.\fp\ we deduce $\alpha_4=-{1\over4}\sigma(1+\sigma)$,
$\alpha_6={1\over48}\sigma(1+\sigma)(1+7\sigma)$,
$\alpha_8=-{1\over48}\sigma^2(1+\sigma)(1+3\sigma)$, $\cdots$, and for $k\ge4$
that $\alpha_{2k}$ is given as $\sigma^2$ times $(1+\sigma)$ times a
polynomial in $\sigma$ -- as follows from $D=3$ being an upper critical
dimension\truncii, the slow rollover behaviour as $\sigma\to-1$, and the
recurrence relation
$${\alpha_{2k+2}\over1+\sigma}={k-3\over2k(2k-1)}\alpha_{2k}
-\sum_{m=2}^k{(-4)^{m-1}\over m}\!\!\!\!\!\!
\sum_{\scriptstyle k_1,\cdots,k_m\ge1
\atop \scriptstyle k_1+\cdots+k_m=k}{\alpha_{2k_1+2}
\cdots\alpha_{2k_m+2}\over(1+\sigma)^m}\quad,$$
respectively.
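As a concrete check on this recurrence, the coefficients can be generated in exact rational arithmetic and compared with the closed forms quoted above (a minimal sketch, not from the paper; the multiple sum is taken with weight $-(-4)^{m-1}/m$, the sign convention that reproduces the quoted $\alpha_4$, $\alpha_6$, $\alpha_8$):

```python
from fractions import Fraction
from itertools import product
from math import prod

def alphas(sigma, kmax):
    """alpha_{2k+2} for k = 1..kmax from the truncation recurrence.

    Exact Fraction arithmetic: the text notes that round-off already
    matters badly at truncation orders around n = 25.  The multiple sum
    enters with weight -(-4)^(m-1)/m, which reproduces the quoted
    alpha_6 and alpha_8."""
    A = {1: -sigma * (1 + sigma) / 4}          # alpha_4 = -sigma(1+sigma)/4
    for k in range(2, kmax + 1):
        val = Fraction(k - 3, 2 * k * (2 * k - 1)) * A[k - 1]
        for m in range(2, k + 1):
            # compositions k_1 + ... + k_m = k with every k_i >= 1
            comp = sum(prod(A[ki] for ki in ks)
                       for ks in product(range(1, k), repeat=m)
                       if sum(ks) == k)
            val -= Fraction((-4) ** (m - 1), m) * comp / (1 + sigma) ** m
        A[k] = (1 + sigma) * val
    return A

s = Fraction(-1, 2)
A = alphas(s, 3)
# closed forms quoted in the text
assert A[1] == -s * (1 + s) / 4                       # alpha_4
assert A[2] == s * (1 + s) * (1 + 7 * s) / 48         # alpha_6
assert A[3] == -s**2 * (1 + s) * (1 + 3 * s) / 48     # alpha_8
```

Since the recurrence is a polynomial identity in $\sigma$, agreement at one rational value of $\sigma$ per order is already a stringent check.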
The $n^{\rm th}$ truncation is defined by setting $\alpha_{2n+2}(\sigma)=0$.
We concentrate on large $n$, where
the solutions $\sigma$ that result from this, can be understood from
the asymptotic expressions for $\alpha_{2k}$. To leading order in $1/k$,
if the closest singularities \sing\ are
on the real axis at $\varphi_c=\pm r$
one has $\alpha_{2k}\sim1/r^{2k}$, while if they are complex
then there are four in the form $\varphi_c=\pm r\,\exp(\pm i\vartheta)$ and
one has $\alpha_{2k}\sim2\cos(2k\vartheta)/r^{2k}$.
(In fact the $\alpha_{2k}$
asymptote to these expressions even for quite small $k$; for
example one finds for $\sigma=\sigma_*$ that the
asymptotic expressions give the right sign for $k>2$ and are within
a factor 2 for $k\ge7$). Recalling the analytic behaviour
of $V(\varphi)$ as a function of $\sigma\approx\sigma_*$,
we see that for $\sigma<\sigma_*-\Delta$ the
coefficients $\alpha_{2n+2}(\sigma)$
are all positive, so there are no solutions $\sigma$ in this region for
large $n$. On the other hand for $\sigma=\sigma_*$
the coefficients are controlled by the
singularities $\varphi_c=\pm r_*\exp(\pm i\vartheta_*)$ and thus the
signs of the $\alpha_{2n+2}(\sigma_*)$
are very closely four-fold periodic in $n$
in the pattern $++--$, as a consequence of $\vartheta_*\approx\pi/4$.
For the negative pair, it follows
by continuity in $\sigma$ that $\alpha_{2n+2}(\sigma)$ must
vanish for some $\sigma$ in the range $\sigma_*-\Delta<\sigma<\sigma_*$.
For the positive pair, one has to look at
$\sigma>\sigma_*+\Delta$; here the closest singularities are complex
with an angle $\vartheta$ which is a rapidly increasing function of $\sigma$. This
implies that there is always a zero of $\alpha_{2n+2}(\sigma)$ close to
but greater than $\sigma_*+\Delta$. Together, these solutions $\sigma$ are
the real ones that best approximate the Wilson fixed point. They are shown in
fig.\the\figno\nfig\sigmas{sigmas}\ up to $n=25$. (For the higher $n$ one must work to
an accuracy of at least 20 significant figures to avoid round-off
errors). One clearly sees the four-fold
periodicity with amplitude $\approx\Delta$, and that, as expected,
the upper solutions are slightly worse approximations.
Indeed the average of the last four solutions gives $\sigma=-.4607$
which is greater than $\sigma_*$ by $\approx\Delta/6$.
\midinsert
\centerline{
\psfig{figure=truncfig3.ps,width=4.3in}}
\vskip 0in
\centerline{\vbox{{\fam\bffam\tenbf Fig.3.} The solutions $\sigma$ that best approximate
the exact answer ($\sigma=\sigma_*$, shown as a continuous horizontal line)
for the truncations $n$ up to $n=25$.
The solutions not displayed ($n=2,3$) lie outside the range on the $\sigma$
axis. The dotted lines are $\sigma=\sigma_*\pm.005$.
Recall that $\Delta\approx.005$.
}}
\endinsert
The four-fold
periodicity is transferred to the critical exponents, which may be
computed by linearizing, in \Veq, as
$\alpha_{2k}(t)=\alpha_{2k}(\sigma)+\varepsilon\beta_{2k}\,\e{\lambda t}$
(where we have used separation of variables). The
neatest method of computing the eigenvalues $\lambda$ is by the
linear recurrence relations between the $\beta_{2k}$, imposing
$\beta_{2n+2}=0$. All truncations except $n=23$ have one positive eigenvalue as
required, which yields\Wil\ $\nu=1/\lambda$.
The results are shown in fig.\the\figno\nfig\nus{nus}. One clearly sees again the
limiting periodic behaviour about the exact solution ($\nu=.6895$),
this time with
amplitude $\approx .008$. The exponent for the first correction to
scaling is given by $\omega=-\lambda$ where $\lambda$ is the least
negative eigenvalue. This depends much more sensitively on the
approximations and is even complex for $n=19,22,23$. It bounces
about the exact answer ($\omega=.5952$) with an amplitude $\approx .15$.
These truncations have been considered before, in
refs.\trunci\truncii\ to $n=11,8$ respectively, but without the
deeper understanding
it was possible at these orders to interpret the numerical results as
indicating convergence. (The much worse behaviour for $O(N)$
scalar field theory with $N=2,3$ \truncii\ is surely due to the exact solution
having complex singularities much closer to the origin).
\midinsert
\centerline{
\psfig{figure=truncfig4.ps,width=4.3in}}
\vskip 0in
\centerline{\vbox{{\fam\bffam\tenbf Fig.4.} The exponent $\nu$ for truncations up to
$n=25$. The exact answer is shown as a horizontal line.
}}
\endinsert
Of course these approximate solutions are not the only solutions for
$\sigma$. For $n$ too small the asymptotic pattern has not set in. Even
for $n$ large, there are many `spurious' solutions $\alpha_{2n+2}=0$
in the range $\sigma_*<\sigma<0$, which must be
there by continuity arguments;
away from the boundaries of this range, one finds that they slowly
drift leftwards with increasing $n$ (asymptoting towards
$\sigma=\sigma_*$), reflecting the fact that the closest singularities
have an angle $\vartheta$ which is a slowly
increasing function of $\sigma$ in this range.
Looking only at the truncations, how can one tell that
these solutions are spurious? There is surely no completely reliable method.
There is no good reason
to require the {\sl truncated} potential to be bounded below and
indeed the first cases to violate that criterion are truncations $n=6,7$
in figs.\sigmas,\nus; nor is it compelling to assume the
solutions must be real: the approximations are bad (compared to their
neighbours) for truncations 22 and 23, unless one chooses
certain solutions with small
imaginary parts ($\sigma=-.4572+.0059i$ and $-.4566+.0027i$ respectively --
only the real parts are shown in figs.\sigmas,\nus). For
$n=23$ even the requirement that the approximation should have
only one relevant direction, breaks down: there are no such solutions.
The case we chose has $\lambda=1.472+.0253i$, which we use to
compute $\nu=.6791-.0117i$, and a less relevant direction with
$\lambda=.509+1.695i$. The spurious solutions generally, {\sl but not always},
have about the same number of positive eigenvalues $\lambda$ as negative
eigenvalues -- of which there are many for large $n$.
The best one can do\trunci\ to eliminate spurious solutions
is to look numerically for convergence/stability with
increasing $n$. That this is not good
enough is nicely demonstrated in ref.\trunci\ where the slow drift of a
sequence of spurious solutions $\sigma$, with two relevant eigenvalues,
is mistaken for convergence to a tricritical point.
Finally we mention two better expansion methods. The first is the
analytic equivalent of shooting to an intermediate fitting point, namely
we require the Taylor expansion \expa\ and its derivative to agree
with the asymptotic expansion \asy\ and its derivative, at some given
intermediate point $\varphi_f$. In this way one obtains just one
solution as expected, with $\sigma\approx\sigma_*$ and $A\approx A_*$.
By comparing a pair of truncated Taylor expansions
with a pair of truncated asymptotic expansions over a range of $\varphi_f$
(to, at the same time, estimate the error and determine the `best' $\varphi_f$)
one can extract moderately accurate bounding values for $\sigma_*$,
for example within a range of width .04 by using Taylor series to $\varphi^8$
and $\varphi^{10}$ and asymptotic series to $\varphi^{-10}$ and $\varphi^{-14}$.
This method will be
reliable provided the asymptotic series is accurate for $\varphi>a$
(when $A\approx A_*$) for some $a$ such that $a<r_*$,
which is certainly the case here.
Presumably it works in general if there are no singularities in the
region $a<{\rm Re}(\varphi)<\infty$, except that it cannot give unlimited
accuracy since the expansion \asy\ converges only in the Poincar\'e sense.
\midinsert
\centerline{
\psfig{figure=truncfig5.ps,width=4.3in}}
\vskip 0in
\centerline{\vbox{{\fam\bffam\tenbf Fig.5.} Results for the exponent $\nu$ against $n$,
in an expansion method that utilises the fact that $\vartheta_*\approx\pi/4$.
The exact answer is shown as a horizontal line. Note the much finer vertical
scale compared to fig.4.
}}
\endinsert
In the second method we assume that $\sigma_*$ has been determined to the
accuracy required of the eigenvalues.
Linearising about the fixed point position of the
nearest singularity as $\vartheta=\vartheta_*+\varepsilon \psi\,\e{\lambda t}$
and $r=r_*+\varepsilon s\,\e{\lambda t}$, one obtains the asymptotic
behaviour of $\beta_{2k}$:
$$\beta_{2k}\sim -4k\left\{s\cos(2k\vartheta_*)/r_*^{2k+1}
+\psi\sin(2k\vartheta_*)/r_*^{2k}\right\}\quad .$$
Thus to leading order in $1/n$,
$$(n-1)\beta_{2n+2}\alpha_{2n-2}(\sigma_*)-(n+1)\beta_{2n-2}
\alpha_{2n+2}(\sigma_*)\sim -8 (n^2-1)\psi \sin(4\vartheta_*)/r_*^{4n}\ .$$
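(The cancellation built into this combination can be made explicit. Inserting
the asymptotic forms $\alpha_{2k}(\sigma_*)\sim2\cos(2k\vartheta_*)/r_*^{2k}$
and the above expression for $\beta_{2k}$, both products carry the same
prefactor $-8(n^2-1)/r_*^{4n}$; the terms proportional to $s$ then drop out
identically, while the $\psi$ terms combine via
$$\sin\bigl(2(n+1)\vartheta_*\bigr)\cos\bigl(2(n-1)\vartheta_*\bigr)
-\sin\bigl(2(n-1)\vartheta_*\bigr)\cos\bigl(2(n+1)\vartheta_*\bigr)
=\sin(4\vartheta_*)\quad,$$
leaving the right hand side quoted above.)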
Since $\vartheta_*\approx\pi/4$ we can expect that it is a good approximation
to choose $\lambda$ such that the left hand side vanishes. Doing so, we
find a spectacular improvement in convergence, cf. fig.\the\figno\nfig\nushift{ },
which nicely provides further
confirmation for our theory. By looking at running averages of four points,
and assuming no systematic shift from neglecting the right hand side of the
above, we extract $\nu=.689457(8)$. Similarly we find $\omega=.5955(5)$.
\acknowledgements
It is a pleasure to thank the following people for their interest and
discussions:
Richard Ball, Michel Bauer, Simon Catterall, Poul Damgaard, Patrick Dorey,
Dan Freedman, Peter Haagensen, Tim Hollowood, Yuri Kubyshin,
Jose Latorre and Ulli Wolff.
\listrefs
\end
\subsection{High Precision Cosmology Through Cosmological Parallax}
Our motion with respect to the Cosmic Microwave Background (CMB)
results in positional changes with time for extragalactic sources that depend
on the transverse co-moving distance. Our space motion thus provides
a baseline 80 times larger per year than that provided by the Earth's
annual motion. Since our CMB velocity is both an absolute reference and
known to better than 1\%, precision astrometry of quasars can provide a new
measurement of the Hubble Constant that is independent of the traditional
methods. It thus offers a means to directly address the current tension
between empirical bootstrapped measurements and the value inferred from
the consensus standard model (e.g., Freedman 2017).
The cosmological parallactic distance is related to the transverse co-moving
distance ($D_M$). Since this secular measure involves a transverse velocity,
time dilation gives it a different redshift dependence than an angular size
measure such as BAO. The three standard measures of cosmological distance are
all related through $D_M$, but with differing
dependencies on the cosmological parameters and the Dark Energy equation of state
(e.g., Weinberg 1971; Hogg 2000; Peebles 2000; Huterer \& Turner 2001).
The luminosity and angular size distances are well known, but the measurement
of cosmological parallax is less so. Apparent angular positional shifts ($\theta$)
due to our motion through space can be used to compute the cosmological parallax
and the transverse co-moving distance:
$D_P = 1/\theta$, with $\theta$ the angular shift per unit baseline. Being purely
geometrical, this measurement avoids most of the systematics associated with measuring
cosmological geometry. The measurement of cosmological parallax would thus
provide a new measure of the cosmological parameters and the Dark Energy
equation of state that is independent of the methods that have been used
to date [SN Ia and Baryon Acoustic Oscillations (BAO), e.g., Alam et al. 2020].
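To make the scale of this signal concrete, here is a minimal numerical sketch (not from the text; it assumes fiducial values $H_0 = 70$ km/s/Mpc and $\Omega_m = 0.3$ in flat $\Lambda$CDM, and the CMB dipole speed of 369.8 km/s):

```python
import math

# Assumed fiducial values (illustrative, not adopted by the text)
H0 = 70.0          # km/s/Mpc
OM = 0.3           # flat LambdaCDM: Omega_Lambda = 1 - OM
V_CMB = 369.8      # km/s, solar velocity w.r.t. the CMB
C = 299792.458     # km/s
KM_PER_AU = 1.495979e8
SEC_PER_YR = 3.1557e7

def E(z):
    return math.sqrt(OM * (1 + z) ** 3 + (1 - OM))

def D_M_pc(z, n=2000):
    """Transverse co-moving distance (flat universe), trapezoid rule, in pc."""
    h = z / n
    s = 0.5 * (1.0 + 1.0 / E(z)) + sum(1.0 / E(i * h) for i in range(1, n))
    return (C / H0) * h * s * 1e6   # Mpc -> pc

def secular_parallax_arcsec(z, years=10.0):
    """Angular shift from our CMB motion: baseline in AU over distance in pc."""
    baseline_au = V_CMB * years * SEC_PER_YR / KM_PER_AU   # ~78 AU per year
    return baseline_au / D_M_pc(z)

theta = secular_parallax_arcsec(1.0)
```

With these assumptions the ten-year shift at $z = 1$ comes out at a few $\times10^{-7}$ arcsec, i.e.\ the sub-microarcsecond scale, and the $\sim$78 AU/yr baseline is indeed roughly 80 times the Earth's orbital baseline; gravitational-lens magnification of $\sim5\times$ raises the observable differential shift accordingly.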
\subsection{Limitations of the GAIA Astrometric Satellite}
At cosmologically interesting redshifts ($z \sim 1$), the secular parallax for
the consensus cosmology is approximately $10^{-6}$ arc seconds over ten years
of our motion. The relatively bright flux limit of the GAIA satellite limits
precision astrometry to the nearest galactic nuclei and quasars. Furthermore, the presence
of systematic errors at the few $10^{-6}$ arcsec level limits any parallax measurement via
ensemble averaging of fainter quasar populations (GAIA Mission Science Performance).
Various space-based astrometric
missions have been proposed that may ultimately result in a measurement of cosmological
parallax using fainter quasars (see Ding \& Croft 2009 for an assessment) but none of these have
been funded. Similarly, a global array of radio telescopes, such as ngVLA, may ultimately
achieve the required astrometric precision using a
sample of compact radio loud quasars (e.g., Paine et al. 2020). The ngVLA was
endorsed by the Astro 2020 Decadal Survey and received a 2nd priority ranking.
If funded, it could potentially begin science operations in 2035 and be applied to the
measurement of cosmological parallax.
\subsection{Precision Cosmology via Astrometry of Gravitationally Lensed Galaxies and Quasars}
An alternative to space-based astrometry and global interferometry is provided by the next
generation of extremely large telescopes (ELTs) equipped with adaptive optics. These
telescopes will provide unprecedented astrometric precision. US participation in two
of the ELTs currently under development, the Giant Magellan and Thirty Meter Telescopes,
was ranked the highest priority of the Astro 2020 Decadal Review. Most of their funding is
already in place such that their technical development is underway. Their fields of view will be
narrow, precluding an all-sky astrometric survey. However, their narrow-field performance
still offers distinct advantages for the measurement of cosmological parallax.
In particular, the gravitational lensing of quasars by foreground galaxies
magnifies the differential positional shifts between the foreground lens and
the background source $\sim 5\times$ as our line-of-sight changes with time. Particular
attention is being paid to the long term astrometric stability of the ELTs. Simulations to
date imply astrometric precisions of $3 \times 10^{-6}$ arcsec (e.g., Cameron et al. 2009)
suggesting that these ELTs will provide a measurement of cosmological parallax and the
transverse co-moving distance for the first time.
\subsection{Synergies with the Rubin and Roman Surveys and the Path Forward}
The Rubin and Roman surveys are predicted to discover several thousand lensed quasars
and compact galaxies (Oguri \& Marshall 2010). The angular magnification in a typical lensed
system is about $5\times$, bringing the signal up to a level measurable with the ELTs. Recent simulations
(Pierce \& McGough, in preparation) have demonstrated that the astrometry of a single system
with the ELTs will provide a several sigma detection of the cosmological parallax signal. Those
simulations show that ELT astrometry of only about 300 systems around the sky would provide
constraints on the cosmological parameters and the Dark Energy equation of
state that are comparable to those currently from BAO and SNIa. Thus, the measurement of
cosmological parallax over the next few decades appears both feasible and inevitable.
The ELTs promise to provide truly groundbreaking capabilities for a number of research
areas. As a result, the available time on the ELTs will be heavily oversubscribed.
We propose two strategies for a precision ELT measurement of the cosmological parallax and
a corresponding constraint on the transverse co-moving distance:
$\bullet$ {\bf Rely on Public ELT time:} Lensed quasars are relatively bright and our simulations
imply that precision astrometry could be acquired for about 10 systems/night of ELT time.
Thus precision astrometry for a sample of 300 systems would require approximately 30 nights of
telescope time for each epoch. The current estimate of the operational cost for the ELTs time is
\$1.5M/night. This may limit the measurement of cosmological parallax to a minimal sample measured at
two
epochs.
$\bullet$ {\bf Construct Dedicated ELTs:} The sample of lensed quasars and compact galaxies predicted
to be found with the Rubin and Roman telescopic surveys could reach 10,000 systems. Obtaining precision astrometry for the full sample is likely out of the question.
Thus, to fully leverage the sample of lensed systems for the measurement of cosmological
parallax would require some portion of the time on dedicated ELTs located in both the northern and the southern hemispheres.
While expensive, this strategy would also result in constraints on the Dark Energy equation of state
that are as much as 10x higher precision than those currently available from BAO or SNIa.
A lower precision measurement could be accomplished over a shorter baseline,
perhaps in only a few years, with the signal growing
with time. We also note that significant developmental savings in
costs and time could be accomplished by copying the designs of either the Giant Magellan or
the Thirty Meter Telescopes, their adaptive optics systems and limiting their suite of
instrumentation.
The precision astrometry we are proposing for even the largest samples would require about
10\% of the time available on an ELT, meaning that the remainder of the time could be dedicated
to other precision cosmological measurements. The narrow fields of the ELTs likely preclude
their use in the next generation large-scale spectroscopic surveys that are being
proposed elsewhere in this document but they would naturally be applicable to direct measures
of cosmic deceleration.
\section{Introduction}
Breakthroughs in physics and astrophysics are often driven by technological advances, with the recent detection of gravitational waves being one such example. This white paper focuses upon how improved astrometric and spectroscopic measurements from a new generation of precise, accurate, and stable astronomical instrumentation can address two of the fundamental mysteries of our time -- dark energy and dark matter -- and probe the nature of spacetime.
Instrumentation is now on the cusp of enabling new cosmological measurements based on redshifts (cosmic redshift drift) and extremely precise time-series measurements of accelerations, astrophysical source positions (astrometry), and angles (cosmic parallax). These allow tests of the fundamental framework of the universe (the Friedmann equations of general relativity and whether cosmic expansion is physically accelerating) and its contents (dark energy evolution and dark matter behavior),
while also anchoring
the cosmic distance scale ($H_0$).
The unexpected accelerated expansion of the Universe must arise from physics beyond the Standard Model: a dark energy or vacuum energy of a new field, or a breakdown in General Relativity. To date, this acceleration has been
inferred from the expansion history measured by distances through cosmological probes including Type~Ia supernovae (SN), baryon acoustic oscillations (BAO), and the cosmic microwave background
\citep[CMB; see][and references therein]{scolnic2018,planck2020,alam2021}.
Direct measurement of acceleration of the cosmic expansion would test the
Friedmann-Lema{\^{\i}}tre-Robertson-Walker framework itself, and provide a new probe of cosmic expansion and dark energy. This has been a goal for over 60 years \citep{mcvittie1962,sandage1962}, and is finally within reach given technological developments that enable accurate measurement of the very small change in an object's redshift with observer time (the second derivative of position, i.e.\ the acceleration). Measurements of this redshift drift can reveal the physical nature of cosmic acceleration and provide parameter estimation with precision competitive with, and highly complementary to, standard methods -- giving a Stage IV experiment the power of a Stage V one.
This same technology can be applied to extremely precise time-series measurements of velocities to determine accelerations of sources within our Galaxy or nearby ones, creating a direct map of the gravitational field of the galaxy. The dark matter mass distribution, clustering, and any nonstandard interactions can be revealed through such maps.
With current generation spectrographs like ESPRESSO \citep{Pepe2021} and NEID \citep{NEID_optical} expected to achieve radial velocity (RV) precision of $\sim$ 10 cm/s, one can directly measure the \emph{changes} in the RVs over decade baselines to obtain a line-of-sight acceleration. These instruments thus far are on less than 10-m telescopes, and therefore cannot access the entire volume of our Galaxy and are limited to observations of relatively bright stars within a few kiloparsecs of the Sun. Future instruments on the Extremely Large Telescopes (ELTs) will be able to carry out direct acceleration measurements \emph{across} the Galaxy, and beyond. A key feature of such direct acceleration measurements is that the relative precision improves with time, such that decade-scale precision measurements of dark matter in the Galaxy are feasible if the instruments are designed to yield stable RV measurements on this timescale. This technique has thus far largely been used to detect and characterize exoplanets, but it is equally viable for understanding the nature of dark matter.
High precision positional and angle measurements (astrometry),
leveraging large-aperture telescopes and extended time baselines, have the potential to enable direct measurements of secular parallax (the apparent shift of an object's sky position due to the observer's motion and cosmic expansion). Such geometric distances beyond our Local Group of galaxies would provide a new, more robust anchor for the cosmic distance scale.
Quantum-assisted optical interferometers are one path for dramatic improvement of these astrometric measurements.
\section{Cosmological Redshift Drift }
Einstein's Equivalence Principle teaches us
that acceleration {\it is\/} gravitation and defines the curvature of spacetime.
Cosmic acceleration -- the change in the expansion rate of the Universe -- is thus a fundamental tool for understanding the Universe and a signpost of a new realm of physics that directly addresses one of the key goals of DOE HEP and NSF PHY, ``exploring the basic nature of space and time''.
Cosmic acceleration is observable as a change in the measured redshifts of objects, known as cosmic redshift drift. In 1962, McVittie laid out the relation of redshift drift to spacetime acceleration, and Sandage proposed the use of the greatest facilities of the time to observe it \citep{mcvittie1962,sandage1962}.
Redshift probes the spacetime as
\begin{equation}
1+z=\frac{(g_{\mu\nu}k^\mu u^\nu)_{\rm em}}{(g_{\mu\nu}k^\mu u^\nu)_{\rm obs}}\ \ .
\end{equation}
The cosmic redshift drift $dz/dt_{\rm obs}$ thus directly reveals
the evolution of the metric $g_{\mu\nu}$ (e.g.\ through Hubble expansion),
any interactions of the photon four-momentum $k^{\mu}$, and any
inhomogeneous accelerations -- evolution of the peculiar velocities $u^\nu$.
In the standard FLRW cosmology,
\begin{equation}
\frac{dz}{dt_{\rm obs}}=(1+z)\,H_0-H(z)\,,
\end{equation}
giving a redshift drift of $\mathcal{O}(10^{-10}(\Delta t/{\rm yr}))$.
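As a worked number (a sketch assuming fiducial $H_0 = 70$ km/s/Mpc and $\Omega_m = 0.3$; these are illustrative, not values adopted here), the FLRW drift and its equivalent velocity shift can be evaluated directly:

```python
import math

# Assumed fiducial cosmology (illustrative only)
H0_KM_S_MPC = 70.0
OMEGA_M = 0.3                 # flat LambdaCDM
C_CM_S = 2.99792458e10
SEC_PER_YR = 3.1557e7
KM_PER_MPC = 3.0857e19

H0_PER_YR = H0_KM_S_MPC / KM_PER_MPC * SEC_PER_YR   # ~7.2e-11 per year

def E(z):
    return math.sqrt(OMEGA_M * (1 + z) ** 3 + (1 - OMEGA_M))

def dz_dt_per_yr(z):
    """FLRW redshift drift dz/dt_obs = (1+z) H0 - H(z), per observer year."""
    return H0_PER_YR * ((1 + z) - E(z))

def dv_cm_s_per_yr(z):
    """Equivalent apparent velocity drift, c * (dz/dt) / (1+z), in cm/s/yr."""
    return C_CM_S * dz_dt_per_yr(z) / (1 + z)
```

At $z\approx0.3$ this gives a drift of order $10^{-11}$ per year, i.e.\ a fraction of a cm/s per year, which is why cm/s-level stability over multi-year baselines is the experimental target; the drift is positive at low redshift (acceleration) and changes sign near $z\approx2$, marking the matter-dominated deceleration era.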
Measurements of cosmic acceleration would not only directly confirm and characterize the Friedmann-Lema{\^{\i}}tre-Robertson-Walker model, but increase the dark energy probative power (figure of merit) by a factor of 3 beyond
Stage 4 experiments.
The key new theoretical elements include the redshift optimization analysis indicating measurements at redshift $z\lesssim0.5$ provide the greatest leverage on testing spacetime properties (FLRW) and dark energy (with the further benefit of higher observing signal to noise), and the extraordinary degree of complementarity with CMB measurements at high redshift \citep[see Figure~\ref{fig:kim}, from][]{kim2015}. Meanwhile, early time observations of cosmic acceleration probe the expansion during the decelerating, matter-dominated era. The largest cosmic accelerations are also expected at $z>3$ during this deceleration phase.
\begin{figure}
\centering
\includegraphics{kim2015_fig3.jpg}
\caption{Figure 3 from \cite{kim2015}, which shows the constraints obtained on dark energy equation of state parameters ($w_0$ is the current value; $w_a$ parameterizes evolution in the equation of state) from an experiment that yields five measurements of redshift drift at a given redshift with an accuracy of 1\%. Ellipses correspond to 68\% confidence intervals. Note that low redshift, $z\approx0.3$, is optimal, and has high complementarity with CMB measurements.}
\label{fig:kim}
\end{figure}
The key new experimental elements include controlling systematics by employing differential measurements of wavelengths in an emission line doublet (e.g.\ well characterized [OII]) and interferometric spectroscopy with ultrahigh stability, e.g.\ as enabled by spatial heterodyning and HEP-inspired “crossfading” (optimized weighting). Externally dispersed interferometric (EDI) spectroscopy with crossfading has already demonstrated a factor 1000 gain in stability and reduction of key systematics; the ongoing LLNL LDRD award has further advanced this. Other ideas include laser frequency comb calibration coupled with ensembles of astrophysical calibrators \citep{cosmicaccelerometer},
and observations at radio wavelengths \citep[e.g.,][]{2020EPJC...80..304L}.
For example,
radio observations of the H I 21 cm absorption line measured the redshifts of 10 systems at multiple epochs over the course of 13.5 years \citep{2012ApJ...761L..26D}, with a reported uncertainty of $O(10^{-8})$, still several orders of magnitude
larger than the expected signal.
Required measurement accuracy for redshift drift is better than a part in $10^{10}$ over a one-year baseline (one year being $\sim10^{-10}$ of the inverse Hubble constant). This corresponds to wavelength shifts equivalent to 1 cm/s/yr, similar to the goal of exoplanet radial velocity experiments,
so there is broad interest in the astrophysics community in enabling this technology\footnote{The LLNL LDRD highlighted the wide applicability of the improvements: in cosmology, exoplanets, fusion research (plasma motions), compact spectroscopy for homeland security applications, and Raman spectroscopy for biomedical imaging.}. One can improve on this in two straightforward ways: longer time baselines (e.g.\ a 5 or 10 year experiment) and many sources or redshift features to reduce the statistical uncertainty below the systematic level. Another necessity is large numbers of photons -- this is helped by the optimum leverage being at low redshifts and from bright (emission line) sources, and the upcoming generation of ELTs.
Further ideas include a dedicated $\sim$10 meter class telescope or arrays of smaller telescopes.
In just the first year of the Lawrence Livermore National Laboratory (LLNL) Laboratory Directed Research and Development (LDRD) grant to David Erskine, success of the EDI technique already includes:
\begin{itemize}
\item Demonstrated 500--1000$\times$ reduction in wavelength shift systematics
\item Demonstrated integration with the Keck Planet Finder spectrograph to test wavelength stabilization plus 2$\times$ resolution boost
\item Demonstrated single delay crossfading with simpler optical design and cancellation of thermal drifts and air convection
\item Demonstrated stabilization of irregular wavelength-dependent drifts
\end{itemize}
Most importantly, all elements seem to be falling into place with the requisite technology now feasible -- and of great interest to other science fields. Experiment and theory have come together to enable one of the most fundamental tests of spacetime and cosmology, finally achievable after 60 years of waiting, in this next decade.
\section{Measuring dark matter sub-structure in the Milky Way}
Measurements of the accelerations of stars give us the most direct window into the mass distributions of galaxies, both the stars and dark matter (the smooth component, as well as dark matter sub-structure). Traditionally, inferences about the nature of dark matter have been drawn from \emph{estimates} of these accelerations by modeling the positions and velocities of stars, as compared to the distributions of competing models of dark matter. For example, self-interacting dark matter (SIDM) is expected to produce a flatter density profile relative to that produced in cold dark matter models (CDM), as dark matter particles scatter elastically with each other at a rate that is quantified by the self-scattering cross-section \citep{Tulin_Yu2018}. SIDM cosmological simulations also tend to produce more disk-like potentials in Milky-Way type galaxies relative to CDM \citep{Vargya2021}, and a greater diversity in their acceleration and rotation curve profiles \citep{Sameie2020}. The so-called fuzzy dark matter model which is composed of ultra-light bosons is expected to produce a distinctly different distribution of dark matter on small scales relative to cold dark matter \citep{Hu2000}, i.e., on scales of the de Broglie wavelength (of order a kiloparsec), where it behaves like a wave \citep{Mocz2019,Lancaster2020}. These small-scale features of competing dark matter models can in principle be constrained from direct acceleration measurements of stars within the Milky Way.
Kinematic estimates of the acceleration usually rely on assumptions of equilibrium or symmetry that are unlikely to be valid for the Milky Way -- and therefore may yield inaccurate descriptions of dark matter in the Milky Way. Direct measurement of the Galactic acceleration allows us to capture the complexity of the time-dependent Galactic mass distribution (the dark matter and the stars) via extremely precise, time-series observations. Using the Poisson equation, $\nabla^2 \Phi = - \nabla \cdot \vec a = 4 \pi G \rho $, acceleration measurements can be related very straightforwardly to the Galactic potential, $\Phi$, and the mass density, $\rho$, without assumption. The accelerations of stars that live within the gravitational potential of the MW are small (of order $\sim$ 10 cm/s over a decade for stars within $\sim$1 kpc of the Sun) but advances in technology have led to extreme precision spectrographs that can achieve $\sim$ 10 cm/s \citep{Pepe2010,WrightRobertson} and measure the Galactic acceleration directly \citep{Silverwood2019,Chakrabarti2020}. The current generation of spectrographs like NEID and ESPRESSO have achieved RV precision $\sim$ 10 cm/s, and should enable a measurement of dark matter sub-structure in the Milky Way down to $\sim 10^{9}-10^{10}~M_{\odot}$, as well as the smooth component of the potential, in less than a decade with currently ongoing extreme precision radial velocity surveys. However, these instruments are on less than 10-m telescopes, and do not access the entire volume of the Milky Way, and are practically limited in the scale of dark matter sub-structure that they can probe. With the advent of the ELTs -- for example spectrographs like G-CLEF on the GMT \citep{GCLEF} and MODHIS on the TMT \citep{Mawet2019}, we can expect to probe the dark matter sub-halo mass function down to $\sim 10^{6}~M_{\odot}$ with measurements across the Galaxy.
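The magnitude of the sought signal follows from a circular-orbit toy estimate (illustrative round numbers $v_c = 230$ km/s and $R_0 = 8.2$ kpc; a sketch, not the analysis of the cited papers):

```python
# Toy estimate of the Galactic line-of-sight acceleration signal.
V_C = 230e3            # circular speed, m/s (assumed round number)
R0 = 8.2 * 3.0857e19   # solar galactocentric radius, m (assumed)
SEC_PER_YR = 3.1557e7

a_gal = V_C**2 / R0                              # centripetal acceleration, m/s^2
dv_decade_cm = a_gal * 10 * SEC_PER_YR * 100     # RV change over 10 yr, in cm/s
```

This gives an acceleration of order $10^{-10}$ m/s$^2$ and a velocity change of several cm/s per decade, matching the $\sim$10 cm/s scale quoted above and showing why decade-stable cm/s-level spectroscopy is the requirement.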
Perhaps the most mature precision technique is pulsar timing. Pulsars are extremely precise Galactic clocks that can be used as accelerometers to measure the Galactic potential.
Recent analysis of compiled pulsar timing observations of the period drift rates of binary pulsars produced the first direct measurement of Galactic accelerations \citep{Chakrabarti2021}, using a sample of 14 precisely timed binary pulsars within a $\sim$ kiloparsec of the Sun. Measurements of these accelerations enabled a determination of the mid-plane density (also known as the Oort limit), the local dark matter density, and the shape of the Galactic potential as traced by the pulsars.
A key signature of fuzzy dark matter that reveals its wave-like nature on scales of the de Broglie wavelength may be detectable by future pulsar timing observations \citep{Porayko2018}. Expected contributions from future pulsar timing facilities such as the next generation Very Large Array (ng-VLA) and the Deep Synoptic Array (DSA-2000), which will benefit from improvements in sensitivity of about an order of magnitude, should enable direct acceleration measurements across the Galaxy and a measurement of dark matter sub-structure, potentially down to the $\sim 10^{6}~M_{\odot}$ scale; these facilities are discussed in the CF3 Facilities White Paper ``Snowmass 2021 Cosmic Frontier White Paper: Observational Facilities To Study Dark Matter''.
Direct acceleration measurements are also now imminently possible by measuring the small shift (about 0.1 seconds) in the mid-point of the eclipse times of eclipsing binaries observed by \textit{Kepler} about a decade ago, a shift induced by the Galactic potential \citep{ChakrabartiET}. These precision measurements are enabled today with \textit{HST}, and in the future with \textit{JWST} and \textit{Roman}.
Planetary astrometric data can also be used to constrain dark matter in the solar system \citep{Pitjev:2013sfa}, study general relativity \citep{DeMarchi:2019lei}, and set new limits on ultralight dark sectors \citep{Tsai:2021irw,Poddar:2020exe}. These studies are crucial for dark matter direct detection \citep{Banerjee:2019epw,Banerjee:2019xuy,Tsai:2021lly,Alonso:2022oot} and other areas of fundamental physics.
\section{High Precision Cosmology Through Cosmological Parallax}
Our motion with respect to the Cosmic Microwave Background (CMB)
results in positional changes with time for extragalactic sources that depend
on the transverse co-moving distance. Thus our space motion provides
a baseline 80 times larger per year than that provided by the Earth's
annual motion. Since our CMB velocity is both an absolute reference and
known to better than 1\%, precision astrometry of quasars can provide a new
measurement of the Hubble Constant that is independent of the traditional
methods. Thus, it offers a means to directly address the current tension
between empirical bootstrapped measurements and the value inferred from
the consensus standard model (e.g., Freedman 2017).
The cosmological parallactic distance is related to the transverse co-moving
distance ($D_M$). Since this secular measure involves a transverse velocity,
it has a different dependence on redshift than does an angular size measure such as
BAO, due to time dilation. The three standard measures of cosmological
distance are related through the transverse co-moving distance ($D_M$) but with differing
dependencies on the cosmological parameters and the Dark Energy equation of state
(e.g., Weinberg 1971; Hogg 2000; Peebles 2000; Huterer \& Turner 2001).
The luminosity and angular size distances are well known but the measurement of cosmological
parallax is less so. Apparent angular positional shifts ($\theta$) due to our motion
through space can be used to compute the cosmological parallax and the
transverse co-moving distance:
$D_P = 1/\theta$. Being purely geometrical, the parallax distance minimizes many of the systematics associated with other methods for measuring cosmological geometry. The measurement of cosmological parallax would thus
provide a new, high precision measure of the cosmological parameters and the Dark Energy
equation of state that is independent of the methods that have been used
to date [SN Ia and Baryon Acoustic Oscillations (BAO), e.g., Alam et al. 2020].
\subsection{Limitations of the GAIA Astrometric Satellite}
At cosmologically interesting redshifts ($z \sim 1$), the secular parallax for
the consensus cosmology is approximately $10^{-6}$ arc seconds over ten years
of our motion. The relatively bright flux limit of the GAIA satellite limits
precision astrometry to the nearest galactic nuclei and quasars. Furthermore, the presence
of systematic errors at the few $10^{-6}$ arcsec level limits any parallax measurement via
ensemble averaging of fainter quasar populations (GAIA Mission Science Performance).
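The magnitude of this signal follows from a back-of-the-envelope estimate (a sketch of ours, not taken from the references; it assumes the CMB dipole speed and a rough Planck-cosmology comoving distance to $z=1$), which lands within the order of magnitude of the secular parallax quoted above:

```python
# Sketch: secular ("cosmological") parallax induced by our motion relative
# to the CMB.  Inputs are the CMB dipole speed (~370 km/s, known to better
# than 1%) and an approximate Planck-cosmology comoving distance to z = 1;
# both are round numbers, not fitted values.
V_CMB = 370e3            # m/s
YEAR = 3.156e7           # s
AU = 1.496e11            # m
D_Z1_PC = 3.4e9          # approx. comoving distance to z ~ 1, in parsecs

baseline_au = V_CMB * 10 * YEAR / AU    # baseline swept out in 10 years, AU
theta_arcsec = baseline_au / D_Z1_PC    # parallax rule: theta["] = b[AU] / d[pc]
```

A decade of CMB-frame motion sweeps out a baseline of several hundred AU, giving an apparent shift of a few times $10^{-7}$ arcsec at $z\sim1$; the exact value depends on which distance measure enters the comparison.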
Various space-based astrometric
missions have been proposed that may ultimately result in a measurement of cosmological
parallax using fainter quasars (see Ding \& Croft 2009 for an assessment) but none of these have
been funded. Similarly, a global array of radio telescopes, such as ngVLA, may ultimately
achieve the required astrometric precision using a
sample of compact radio loud quasars (e.g., Paine et al. 2020). The ngVLA was
endorsed by the Astro 2020 Decadal Survey and received a 2nd priority ranking.
If funded, it could potentially begin science operations in 2035 and be applied to the
measurement of cosmological parallax.
\subsection{Precision Cosmology via Astrometry of Gravitationally Lensed Galaxies and Quasars}
An alternative to space-based astrometry and global interferometry is provided by the next
generation of extremely large telescopes (ELTs) equipped with adaptive optics. These
telescopes will provide an unprecedented astrometric precision. US participation in two
of the ELTs currently under development, the Giant Magellan and Thirty Meter Telescopes,
was ranked the highest priority of the Astro 2020 Decadal Review. Most of their funding is
already in place such that their technical development is underway. Their fields of view will be
narrow, precluding an all-sky astrometric survey. However, their narrow-field performance
still offers distinct advantages for the measurement of cosmological parallax.
In particular, the gravitational lensing of quasars by foreground galaxies
magnifies the differential positional shifts between the foreground lens and
the background source $\sim 5\times$ as our line-of-sight changes with time. Particular
attention is being paid to the long term astrometric stability of the ELTs. Simulations to
date imply astrometric precisions of $3 \times 10^{-6}$ arcsec (e.g., Cameron et al. 2009)
suggesting that these ELTs will provide a measurement of cosmological parallax and the
transverse co-moving distance for the first time.
\subsection{Synergies with the Rubin and Roman Surveys and the Path Forward}
The Rubin and Roman surveys are predicted to discover several thousand lensed quasars
and compact galaxies (Oguri \& Marshall 2010). The angular magnification in a typical lensed
system is about $5\times$, bringing the signal up to a level measurable with the ELTs. Recent simulations
(Pierce \& McGough, in preparation) have demonstrated that the astrometry of a single system
with the ELTs will provide a several sigma detection of the cosmological parallax signal. Those
simulations show that ELT astrometry of only about 300 systems around the sky would provide
constraints on the cosmological parameters and the Dark Energy equation of
state that are comparable to those currently from BAO and SNIa. Thus, the measurement of
cosmological parallax over the next few decades appears both feasible and inevitable.
The ELTs promise to provide truly ground breaking capabilities for a number of research
areas. As a result, the available time on the ELTs will be highly competed.
We propose two strategies for a precision ELT measurement of the cosmological parallax and
a corresponding constraint on the transverse co-moving distance:
$\bullet$ {\bf Rely on Public ELT time:} Lensed quasars are relatively bright and simulations
imply that precision astrometry could be acquired for about 10 systems/night of ELT time.
Thus precision astrometry for a sample of 300 systems would require approximately 30 nights of
telescope time for each epoch. The current estimate of the operational cost of ELT time is
\$1.5M/night. This high cost and the extreme competition for public ELT time may limit the measurement of cosmological parallax to a minimal sample measured at two epochs. However, the sample of targets predicted to be discovered with the Rubin and Roman telescopes is sufficiently large to enable higher precision measurements if more ELT time were available.
$\bullet$ {\bf Dedicated ELT Experiment:} The sample of lensed quasars and compact galaxies predicted
to be found with the Rubin and Roman telescopic surveys could reach several thousand systems. Obtaining precision astrometry for the full sample with public ELT time is likely out of the question.
Thus, to fully leverage the sample of lensed systems for the highest precision measurement of cosmological
parallax would require a significant, dedicated ELT allocation in both the northern and the southern hemispheres.
While more expensive, this strategy would also result in constraints on the Dark Energy equation of state
that are several times higher precision than those currently available from BAO or SNIa.
A lower precision measurement could be accomplished over a shorter temporal baseline,
perhaps in only a few years, with a large sample while a corresponding signal could be achieved with a smaller sample over an extended temporal baseline. A dedicated ELT experiment obviously provides the maximum signal. The precision astrometry we are proposing for even the largest samples would require about 10\% of the time available on an ELT.
\section{High precision astrometry with quantum-assisted optical interferometers}
High precision astrometry at the microarcsecond level could open science avenues for imaging black hole accretion disks, improving the local distance ladder, detailing the influence of dark matter subhalos on microlensing, and revealing the impact of dark matter on Galactic stellar velocity maps. This could be enabled by new ideas cross-cutting optical interferometry and quantum information science.
Observations using interferometers provide sensitivity to features of images on angular scales much smaller than any single telescope. Traditional (Michelson stellar) optical interferometers are essentially classical, interfering single photons with themselves \citep{Pedretti2009, Martinod2018, tenBrummelaar2005}, and the single-photon technique is highly developed and approaching technical limits. Qualitatively new avenues for optical interferometry can be opened up, however, once we consider using multiple-photon states; these generally require a quantum description, especially in conjunction with non-classical quantum technologies such as single-photon sources, entangled pair sources, and quantum memories. We will focus here on a particular two-photon state technique with application to precision astrometry.
It has been recently proposed that stations in optical interferometers would not require a phase-stable optical link if instead sources of quantum-mechanically entangled pairs could be provided to them, potentially enabling hitherto prohibitively long baselines \citep{Gottesman2012}. If these entangled states could then be interfered locally at each station with an astronomical photon that has impinged on both stations, the single photon counts at the two stations would be correlated in a way that is sensitive to the phase difference in the two paths of the photon, thus reproducing the action of an interferometer.
Several variations of this idea have been proposed. For one of them, which is perhaps a longer-term prospect for practical implementation, high-intensity wide-band sources of entangled photons and quantum memories would be employed to measure correlations between the stations as explained above \citep{harvard1}. The approach can be generalized from the entanglement of photon pairs to multipartite entanglement across multiple stations, with quantum protocols to process information in noisy environments for the evaluation of experimental observables. In another approach, which could potentially be implemented in the shorter term, two photons from different sky sources are interfered at two separate stations, requiring only a slow classical information link between them \citep{stankus2020}. The latter scheme can be contrasted with Hanbury Brown \& Twiss intensity interferometry \citep{hbt} and could allow robust high-precision measurements of the relative astrometry of the two sources. A calculation based on photon statistics suggests that angular precision on the order of $10~\mu$as could be achieved in a single night's observation of two bright stars for a 200~m baseline \citep{stankus2020}. We note that this estimate serves only to demonstrate the potential of the technique and is a useful goal for benchmarking in the forthcoming first measurements. Increased sensitivity to fainter objects like galaxies can be achieved for the schemes with bright entangled photon sources and quantum memories \citep{harvard2}, employing technologies that are under development for quantum networks. Though it looks quite futuristic now, the field of quantum information science is undergoing exponential expansion driven by industry and, within a decade, may offer capabilities matching the requirements.
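The scaling behind such photon-statistics estimates can be illustrated with a toy counting argument (ours, not the calculation of the cited work; it ignores visibility losses, detector efficiency, and all systematics): the fringe angular scale is $\lambda/B$, and averaging $N$ detected photon pairs narrows the fringe localization by $\sqrt{N}$.

```python
import math

# Toy estimate (ours, not the published calculation): the fringe angular
# scale lambda/B of an optical interferometer, and the number of detected
# photon pairs needed to split that fringe to a target precision, assuming
# sigma ~ (lambda/B)/sqrt(N) and ignoring visibility loss and systematics.
RAD_TO_UAS = 180.0 / math.pi * 3600.0 * 1e6   # radians -> microarcseconds

wavelength = 800e-9    # m
baseline = 200.0       # m
fringe_uas = wavelength / baseline * RAD_TO_UAS

target_uas = 10.0
n_pairs = (fringe_uas / target_uas) ** 2
```

On these idealized assumptions, of order $10^{4}$ detected pairs would suffice to reach $10~\mu$as at a 200~m baseline, which makes a single night of observing bright stars plausible from a pure statistics standpoint.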
Formally, as the baseline is increased to thousands of kilometers, the projected astrometric resolution could be very small, at the sub-microarcsec level. Of course, the ultimately achievable resolution for this technique remains to be seen, as there are important systematics that need to be considered, such as atmospheric turbulence for ground-based implementations. There is no comprehensive analysis of those effects yet, but we note that, as a two-photon technique, it may benefit from cancellation of uncertainties if the two photons are close enough and propagate through the same atmosphere.
\subsection{Impact on Dark Energy and Dark Matter}
Below we will touch on just a few of the many scientific opportunities afforded by considerable improvements in astrometric precision that are directly relevant to dark energy and dark matter studies.
\textbf{Testing theories of gravity by direct imaging of black hole accretion discs}: The power of interferometry has recently been demonstrated by the direct imaging of the black hole event horizon in M87 by the Event Horizon Telescope \citep{2019ApJ...875L...2E}. This telescope used the Earth-sized array of telescopes operating in radio bands at 1.1~mm to achieve a resolution of 25 microarcseconds. Since the telescopes were already spread around the Earth as much as possible, it is only possible to increase the resolution by using telescopes in space or observing at a smaller wavelength. The quantum-improved techniques advocated here will allow, in principle, for arbitrary baselines, and so by repeating this observation in optical wavelengths it would be possible to increase the resolution by three orders of magnitude (ratio of wavelengths between 1 mm radio and 1 micron optical), bringing about a game changing improvement in resolution. This would open completely new avenues in the study of theories of modified gravity that may modify the black hole topology \citep{Moffat2021} and could potentially have large impact on our understanding of dark energy.
\textbf{Precision parallax and cosmic distance ladder}:
Significant improvements in astrometric precision would allow for direct parallax measurements of low redshift galaxies hosting Type Ia supernovae. This would enable skipping over several rungs of the local distance ladder, with their potential for systematic issues, and would tie local supernovae more directly to cosmological supernova distance indicators. Direct parallax measurements are systematically
very robust, but are necessarily limited by the achievable astrometric precision. The most sensitive astrometric data to date,
with a precision of a few tens of microarcseconds, are provided by the recent Gaia space mission \citep{Katz2019, Lindegren2021}.
\textbf{Mapping microlensing events}: Amongst the candidates for DM are compact star-sized objects, such as black holes, or extended virialized subhalos comprised of yet undiscovered dark matter particles. To probe these DM candidates, a more rigorous and direct method of observing their expected gravitational microlensing effects on stars is needed. The typical photometric measurement of microlensing events both obfuscates details of the lens’s mass and spatial distribution and is less straightforward than an astrometric approach \citep{Erickcek2011, Wyrzykowski2016}. Improvements in astrometric precision would allow for the more direct astrometric approach to mapping microlensing effects and would therefore be beneficial in assessing the viability of certain DM candidates \citep{Grant2021}.
\textbf{Peculiar motions and dark matter}: DM’s effects on the dynamics of our Galaxy are of great interest for understanding its properties and local distribution \citep{Majewski2007, Gardner2021}. The ability to fully measure and reconstruct 3D velocities of a large swath of the stars in the Galaxy would unlock important, thus far inaccessible data that could illuminate many of the unknown characteristics of DM in our galaxy \citep{Steinmetz2020, Simon2018,Katz2019}. Improvements in astrometric measurements are needed to measure the peculiar motion of more distant stars in our Galaxy and subsequently extract their transverse velocity. The improved 3D velocity data afforded by more precise astrometric measurements would pave the way for an inferred measurement of the dark matter halo’s gravitational potential. Moreover, it would allow us to survey historical merging events in the Milky Way halo and open a unique window into DM’s interaction with itself and with ordinary matter \citep{Chu2019}.
\subsection{Instrument requirements}
An important consideration from the instrumentation viewpoint is that the photons must be close enough in time and in frequency to efficiently interfere; or, formulating it differently, to be indistinguishable within $ \Delta t \cdot \Delta E \sim \hbar $. Converting energy to wavelength, the above is satisfied for $\Delta t \cdot \Delta \lambda = 10~\mathrm{ps} \cdot 0.2~\mathrm{nm}$ at 800~nm wavelength, setting useful target goals for the temporal and spectral resolutions \citep{Nomerotski2020_1}.
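As a quick cross-check of this pairing of resolutions (a sketch of ours using the standard coherence-time relation $\tau_c \approx \lambda^2/(c\,\Delta\lambda)$, which reproduces the quoted numbers to within factors of order unity):

```python
# Quick consistency check (standard coherence-time relation, ours rather
# than a result quoted from the cited reference):
#   tau_c ~ lambda^2 / (c * delta_lambda)
C = 2.998e8            # speed of light, m/s

wavelength = 800e-9    # m
dlambda = 0.2e-9       # m
tau_c = wavelength**2 / (C * dlambda)   # s; comes out near 10 ps
```

A 0.2~nm spectral bin at 800~nm indeed corresponds to a coherence time of roughly 10~ps, matching the target temporal resolution.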
Another important parameter for the imaging system is the photon detection efficiency, which needs to be as high as possible, since the two-photon coincidences have a quadratic dependence on it.
An efficient scheme of spectroscopic binning can be implemented by employing a traditional diffraction grating spectrometer where incoming photons are passed through a slit, dispersed, and then focused onto a linear array of single-photon detectors \citep{Dey2019, Vogt1994, Zhang2020}. However, improvement of timing resolution appears to be the most straightforward way to achieve the targeted performance. Fast technologies, such as superconducting nanowire single photon detectors (SNSPD) and single photon avalanche devices (SPAD), can be considered for this application. The superconducting nanowire detectors have excellent photon detection efficiency, in excess of 90\% \citep{Zhu2020, Divochiy2008}, with demonstrated 3~ps timing resolution for single devices~\citep{Korzh2020}.
The SPAD sensors are based on silicon diodes with engineered junction breakdown, producing fast pulses of sufficiently large amplitude for single-photon detection. These devices also have excellent timing resolution, which can be as good as 10~ps for single-channel devices, and, most importantly, good potential for scalability, with multi-channel imagers already reported \citep{Gasparini2017, Morimoto2020}. Benchmarking of these promising technologies for a spectrograph with the required spectral and timing resolutions is currently in progress \citep{nomerotski2021}.
\begin{center}
Table 1: Precision Frontiers
\begin{tabular}{ |p{3cm}||p{4.5cm}|p{3.5cm}|p{4cm}| }
\hline
Precision Frontier & Science & Key Technologies & Technology Status \\
\hline
Extreme Precision & Redshift Drift (Dark Energy) & EDI Spectroscopy & Deployed Prototype \\
Radial Velocity & Dark Matter Substructure & Actively Stabilized Spectrographs & Designed, untested \\
 & & Dedicated large-aperture facility & Single aperture technology mature; prototypes exist for telescope arrays \\
\hline
Astrometry & Cosmological Parallax & ELT-class telescopes & N/A \\
& Distance Ladder & Quantum-assisted & In development \\
& Dark Matter Substructure &optical interferometers & \\
\hline
\end{tabular}
\end{center}
\section{Technology Status and the Path Forward}
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.6]{SL2019.png}
\caption{Improvements in RV precision for various upcoming instruments (adapted from Silverwood \& Easther 2019). Cosmological redshift drift requires $\sim 1$ cm s$^{-1}$ precision (red line) with stability of years to decades.}
\label{f:RVprecision}
\end{center}
\end{figure}
A unifying theme with the above science cases is that they are all now technologically feasible. Table 1 collates the key science cases associated with high-precision spectroscopic velocity measurements and high-precision astrometric measurements, and summarizes the status of key technologies associated with each.
For radial velocities, the next generation of instruments must achieve $<1$ cm s$^{-1}$ velocity precision with stability on timescales of years to decades, and must be deployed at facilities with 10m to ELT-class apertures (depending on the redshift probed). Of currently planned spectrographs, G-CLEF for the GMT and MODHIS for the TMT represent the state-of-the-art and are designed to yield 10 cm s$^{-1}$ precision \citep{GCLEF,Mawet2019}.
There are two promising techniques that have been suggested for achieving the required precision and stability. The earlier generation of instruments has already demonstrated RV precision; the remaining technical challenge is now demonstrating stability over the decade timescales that are necessary to measure dark matter and dark energy. One technique for achieving the requisite stability is the crossfading method for externally dispersed interferometric spectroscopy \citep{2020SPIE11451E..2DE,2021JATIS...7b5006E}. This approach has been demonstrated on-sky to yield a $10^3$ gain in stability \citep{2021JATIS...7b5006E,2020SPIE11451E..2DE}, and, if used with doublet lines for redshift drift, yields a differential measurement that mitigates experimental systematics.
The second technique is the use of active stabilization of spectrographs for extreme precision radial velocities. For a standard high-resolution spectrograph, any changes in the optomechanics due to environment or other factors induce a corresponding drift in where light is dispersed onto the detector. Even if the changes are at the sub-pixel level, such shifts can induce systematic biases in the recovered velocities due to sub-pixel detector physics. The most recent generation of high-precision spectrographs such as NEID \citep{NEID_optical} include thermal stabilization techniques to mitigate these factors; however, this mitigation is still passive. An alternative that has been proposed \citep{cosmicaccelerometer} is to incorporate LIGO technology, using a laser cavity to continuously measure the optical dimensions of the spectrograph and feed these into an active control loop with thermal heaters directly coupled to the optical bench of the spectrograph. The components of this technology are demonstrated, but no prototype of an actively stabilized spectrograph exists at this time.
The above technologies are also applicable to radial velocity measurements for galactic sub-structure; however, for this science the RV precision requirements are less severe. The G-CLEF and MODHIS instruments on the GMT and TMT, respectively, which are designed for a velocity precision of $10$ cm s$^{-1}$, enable this science, at least to constrain the sub-halo mass function on scales greater than $10^{6}~M_{\odot}$. To constrain the low-mass end of the sub-halo mass function would require $\sim$ cm/s RV precision.
These spectrographs must then be deployed on large-aperture telescopes. Deployment of the EDI technology for low-redshift doublet measurements requires 10m-class telescopes \citep{2020SPIE11451E..2DE}, while the active spectrograph design coupled to 20m-class telescopes has been suggested for high-redshift Lyman-forest redshift drift measurements.\footnote{See \citet{Liske2008} for more information on Lyman-forest redshift drift measurements, which were first suggested by \citet{loeb1998}.} The dark matter sub-structure science also requires $>$ 10m-class telescopes to enable sufficient SNR to allow for acceleration measurements across the Galaxy. While one option is the use of current and upcoming telescopes, the ELTs in particular will be highly oversubscribed when they come online, with a number of competing science priorities. A dedicated facility for high-precision astrophysics has multiple advantages. These include the ability to optimize the design to robustly control systematics (including those which may arise from switches between various instruments at a general use facility), a guaranteed large amount of observing time for these investigations, and the ability to optimize the experimental setup for calibrations and optimal precision. As an alternative to a traditional telescope design, \cite{eikenberry2019b} have suggested the construction of an array of small, fiber-coupled telescopes that feed into a single spectrograph via photonic lanterns. The authors argue that such an approach can be a factor of ten lower in cost than a traditional design, and is compatible with EDI technology. At present, this team is constructing a small demonstrator array for deployment at Mt.~Laguna Observatory by early 2023, which will validate field deployment of photonic lantern technology and the array design for spectroscopy.
For astrometry, one of the key sciences (cosmological parallaxes) will be feasible with planned instrumentation on upcoming ELTs with no additional facilities or technical development. However, a dedicated ELT experiment may be required to maximize the cosmological parallax signal over the shortest temporal baseline. Another promising avenue for astrometry relies upon the development of quantum-assisted optical interferometers to enable $\mu$as-scale astrometric precision, potentially even for ground-based installations. As noted in section 5, this technology is at an R \& D stage at present and will require dedicated resources for continued development and deployment of technology demonstrators before it can be deployed to address the core science.
Given this frontier of high-precision cosmological and Galactic radial-velocity and astrometric measurements, the path towards enabling deployment of these technologies requires a combination of instrumentation R \& D -- with the maturity level of the technologies varying from the design phase to demonstrated prototypes -- and securing access to large-aperture facilities to obtain sufficient sensitivity for the proposed measurements.
\bibliographystyle{aasjournal}
\section{Introduction}
Half a century has passed since Skyrme proposed \cite{Skyrme:1962vh}
that Skyrmions characterized by the topological charge
$\pi_3(S^3)\simeq\mathbb{Z}$ describe nucleons in the pion effective
field theory or the chiral Lagrangian \cite{Adkins:1983ya}, where the
Skyrme term, i.e.~quartic in derivatives, is needed to stabilize
Skyrmions against shrinkage.
Although nucleons are now known to be bound states of quarks, the idea
of the Skyrme model is still attractive.
In fact, the Skyrme model is still valid as a low-energy description
of QCD, has only a small number of parameters and is, for instance,
used also in holographic QCD \cite{Sakai:2004cn,Hata:2007mb}.
Meanwhile in condensed matter physics, considerable efforts have been
made recently to realize stable 3-dimensional Skyrmions in
two-component Bose-Einstein condensates (BECs)
\cite{Ruostekoski:2001fc,3D-skyrmions,Kawakami:2012zw,Nitta:2012hy}
(see Ref.~\cite{Kasamatsu:2005} for a review of
two-component BECs).
In Ref.~\cite{Nitta:2012hy}, the creation of a Skyrmion is proposed to
be a consequence of the annihilation of a brane and an anti-brane
\cite{Takeuchi:2012ee}.
At strong coupling, these systems reduce to the SU(2) principal chiral
model, but the existence of Skyrmions is elusive due to the lack of
the Skyrme term (or an even higher-order derivative term) \footnote{It
has also been proposed that a stable 3-dimensional Skyrmion can
exist as a ground state in the SU(2)-symmetric case, by introducing
``artificial'' non-Abelian gauge fields \cite{Kawakami:2012zw}.}.
One interesting feature in these systems is that a potential term,
breaking the SU(2) symmetry is present, which deforms the (would-be)
Skyrmion to the shape of a torus \cite{Ruostekoski:2001fc}.
Consequently, the Skyrmion can be interpreted
\cite{Ruostekoski:2001fc,Nitta:2012hy,Metlitski:2003gj}
as a vorton \cite{Davis:1988jq,Vilenkin:2000,Radu:2008pp,Garaud:2013iba},
that is, a vortex ring in the first component with the second
component flowing inside said ring.
In this paper, we consider a Skyrme-like model with a potential term
in the form $V = m^2|\phi_1|^2|\phi_2|^2$ which was introduced in our
previous papers \cite{Gudnason:2014hsa,Gudnason:2014gla}
and is motivated by two-component BECs
\cite{Ruostekoski:2001fc,3D-skyrmions,Nitta:2012hy},
where we use a notation of two complex scalar fields $\phi_1(x)$ and
$\phi_2(x)$ with the constraint $|\phi_1|^2 + |\phi_2|^2=1$ along the
lines of two-component BECs.
For the higher-derivative terms needed to stabilize Skyrmions,
we consider either the conventional fourth-order derivative term,
i.e.~the Skyrme term or a sixth-order derivative term, which is
the baryon charge density squared
(see,
e.g.~Refs.~\cite{Adam:2010fg,Gudnason:2013qba,Gudnason:2014gla}); for
a short-hand notation we will call the first case the 2+4 model and the
second case the 2+6 model.
We construct stable Skyrmions which were elusive in two-component BECs
in the absence of the Skyrme term or other higher-order derivative
terms, and find that they take the shape of a torus, as in two-component
BECs.
We find that the most general solutions are characterized by two
integers $P$ and $Q$, representing the winding numbers of
the scalar fields $\phi_1$ and $\phi_2$ along the toroidal and
poloidal cycles of the torus, respectively, and show that the baryon
number or the Skyrmion number of $\pi_3(S^3)\simeq\mathbb{Z}$ is
$B=PQ$ (which is also known as the linking number).
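For concreteness (a standard parametrization; the overall sign of $B$ depends on the orientation convention), the two constrained complex fields assemble into an SU(2)-valued field whose topological charge is
\begin{equation}
U=\begin{pmatrix}\phi_1 & -\bar{\phi}_2\\ \phi_2 & \bar{\phi}_1\end{pmatrix},
\qquad
B=\frac{1}{24\pi^2}\int d^3x\;\epsilon^{ijk}\,
{\rm Tr}\!\left[(U^\dagger\partial_i U)(U^\dagger\partial_j U)(U^\dagger\partial_k U)\right],
\end{equation}
and for a toroidal configuration with $\phi_1\sim e^{iP\varphi}$ winding along the toroidal cycle and $\phi_2\sim e^{iQ\vartheta}$ winding along the poloidal cycle, this integral evaluates to $B=PQ$.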
We explicitly construct stable Skyrmion solutions with $P=1,2,3,4,5$
and $Q=1$, yielding the baryon numbers $B=1,2,3,4,5$. We also
construct the $P=6$, $Q=1$ solution and find that it is metastable,
i.e.~is energetically prone to decay into two $B=3$ objects.
This turns out to be the case for both the 2+4 and the 2+6 model.
The energy and baryon charge distributions of the configuration of
$P=1$ are spherically symmetric in the 2+4 model, whereas in the
2+6 model it is a deformed ball (with a hint of a torus-like shape).
The configurations with $P>1$ are all of toroidal shapes (for both
models) when the mass is bigger than a certain critical mass.
This is in contrast to the conventional Skyrmions (i.e.~without our
BEC-motivated potential) for which the configuration of $B=1$ is
spherically symmetric, that of $B=2$ is toroidal, and those of $B>2$
have energy distributions with discrete point symmetries.
We compare our $B=2$ solutions in the 2+4 and 2+6 models to those of
the conventional model (i.e.~without the BEC-motivated potential), and
find that the energy distribution of the solution in the 2+6 model is
a surface of a torus while the energy distributions of the solutions
in the 2+4 model and the conventional model are solid tori,
i.e.~filled tori.
Although the classification of our solutions is given by the integers
$P$ and $Q$, we find that configurations with $Q>1$ are unstable, that
is, a configuration with $(P,Q)$ decays into $Q$ copies of the $(P,1)$
configuration.
We also note that our configurations can be identified as global
analogues of vortons \cite{Davis:1988jq,Vilenkin:2000,Radu:2008pp},
that is, twisted closed global vortex strings as in two-component
BECs \footnote{Strictly speaking, there is a supercurrent or a
superflow due to the trapped field along the ring of the vorton.
This can be achieved by rotating the phase of the trapped field
linearly in time as $\phi_2 \sim e^{i z + i \alpha t}$ with $z$
being the coordinate along the string.
In the case of BECs, such a time dependence is automatic in the
presence of the phase gradient along the string, because of the
first derivative in time in the non-relativistic Lagrangian.}.
While vortices in this model are global vortices, so that straight
vortices have divergent energy per unit length, a closed string has
finite energy because the vorticity cancels out at large distances.
A vortex in the field $\phi_1$ traps the field $\phi_2$ in its core
and has the U(1) phase modulus of $\phi_2$.
The integers $Q$ and $P$ are identified with the winding numbers of the
vortex of the $\phi_1$ field and of the $\phi_2$ field along the ring
inside the vortex core, respectively.
The identification of the Skyrmions with global vortex rings also
explains why configurations with higher $Q>1$ are unstable.
This is because $Q$ is the winding number of the vortex in the field
$\phi_1$, and a global vortex with higher winding is unstable to decay
as two global vortices repel each other.
Finally we discover a first-order phase transition between the
configuration (local minimum) where the Skyrmions have a (discrete)
point symmetry and the toroidal configuration (another local minimum)
at some critical mass, $m_{\rm critical}$.
For concreteness we carry out this investigation at $B=3$ where the
Skyrmion has tetrahedral symmetry for $m<m_{\rm critical}$ and has
axial symmetry (i.e.~it is a torus) for $m>m_{\rm critical}$. For
$m<m_{\rm critical}$ the toroidal state is metastable and for
$m>m_{\rm critical}$ the tetrahedral state is metastable. For
sufficiently large $m\sim 2m_{\rm critical}$, the tetrahedral solution
becomes unstable and thus for large $m$ only the torus exists.
This paper is organized as follows.
In Sec.~\ref{sec:model}, we present our model and explain the
symmetries and topological structures of the model.
In Sec.~\ref{sec:wall-vortex}, we construct a domain wall and a global
vortex which serve as constituents of the torus.
Finally, in Sec.~\ref{sec:toroidal-wall}, we construct toroidal
Skyrmions which are the strings wrapped up on a circle and we further
study their stability.
The phase transition between the Skyrmions with point symmetry and
with axial symmetry is studied in Sec.~\ref{sec:transition}.
Sec.~\ref{sec:summary} is devoted to a summary and discussion.
In Appendix \ref{app:stringsplitting},
we show that solutions with $P=1,2$ and
$Q=2$ are unstable to decay into
two configurations of $P=1,2$ and $Q=1$.
In Appendix \ref{app:comparison}, we compare our $B=2$ solutions in
the 2+4 and 2+6 models and that in the conventional models
(i.e.~without the BEC-motivated potential).
\section{A Skyrme-like model with BEC-motivated
potential \label{sec:model}}
We consider the SU(2) principal chiral model with the addition of the
Skyrme term and a sixth-order derivative term in $d=3+1$ dimensions.
In terms of the SU(2)-valued field $U(x)\in$ SU(2), the Lagrangian
which we are considering is given by
\beq
\mathcal{L} = \frac{f_\pi^2}{16}
\tr (\p_{\mu}U^{\dagger} \p^{\mu} U)
+ \mathcal{L}_4
+ \mathcal{L}_6
- V(U),
\eeq
where we use the mostly-negative metric and the higher-derivative
terms are given by
\begin{align}
\mathcal{L}_4 &= \frac{\kappa}{32 e^2}
\tr (\big[U^\dag \p_{\mu} U, U^\dag \p_{\nu} U\big]^2),\\
\mathcal{L}_6 &= \frac{c_6}{36 e^4 f_\pi^2}
\left(\epsilon^{\mu\nu\rho\sigma}\tr\big[U^\dag\p_\nu U U^\dag \p_\rho
U U^\dag \p_\sigma U\big]\right)^2.
\end{align}
The symmetry of the Lagrangian for $V=0$ is
$\tilde G =$ SU(2)$_{\rm L} \times $SU(2)$_{\rm R}$ acting on $U$ as
$U \to U'= g_{\rm L} U g_{\rm R}^\dag$.
The requirement of a finite-energy configuration, however,
spontaneously breaks this symmetry down to $\tilde H \simeq$
SU(2)$_{\rm L+R}$, which in turn acts as $U \to U'= g U g^\dag$ so
that the target space is
$\tilde G/\tilde H \simeq$ SU(2)$_{\rm L-R}$.
The conventional potential term, i.e.~the pion mass term, is
$V = m_{\pi}^2\tr (2{\bf 1}_2 - U - U^\dag)$,
which breaks the symmetry $\tilde G$ to SU(2)$_{\rm L+R}$
\emph{explicitly}.
In this paper, it will prove convenient to use the following notation
where we express the field $U$ in terms of two complex scalar fields,
$\phi^{\rm T} = (\phi_1(x),\phi_2(x))$, as
\beq
U =
\begin{pmatrix}
\phi_1 & -\phi_2^*\\
\phi_2 & \phi_1^*
\end{pmatrix},
\eeq
subject to the constraint
\beq
\det U = |\phi_1|^2 + |\phi_2|^2 = 1.
\eeq
We further rescale the lengths to be in units of $2/(e f_\pi)$ and
energy to be in units of $f_\pi/(2e)$, for which we can write the
static Lagrangian density as
\begin{align}
-\mathcal{L}
&= \1{2} \p_i\phi^\dag \p_i\phi
+ \frac{\kappa}{4} \left[(\p_i \phi^\dag \p_i\phi)^2
-\1{4}(\p_i\phi^\dag \p_j\phi + \p_j\phi^\dag \p_i\phi)^2\right]
+ \frac{c_6}{4} \left(\epsilon^{i j k}
\phi^\dag\p_i\phi\p_j\phi^\dag\p_k\phi\right)^2 \non
&\phantom{=\ }
+ V(\phi,\phi^*).
\end{align}
The full symmetry $\tilde G$ is not manifest in terms of $\phi$,
where only SU(2)$_{\rm L}$ is manifest but SU(2)$_{\rm R}$ is not.
The U(1) subgroup generated by $\sigma_3$ in SU(2)$_{\rm R}$, however,
is manifest and acts on $\phi$ as $\phi \to e^{i\alpha} \phi$,
constituting a U(2) group with SU(2)$_{\rm L}$.
The target space (the vacuum manifold with $m=0$) $M\simeq$ SU(2)
$\simeq S^3$ has a nontrivial homotopy group
\beq
\pi_3(M) = \mathbb{Z},
\eeq
which admits Skyrmions as usual.
The baryon number (the Skyrme charge) $B \in \pi_3(S^3)$ is defined
as
\beq
B &=& -\1{24\pi^2} \int d^3x \; \epsilon^{ijk}
\tr \left( U^\dag\p_i U U^\dag\p_j U U^\dag\p_k U\right) \non
&=& \1{24\pi^2} \int d^3x \; \epsilon^{ijk}
\tr \left( U^\dag\p_i U\p_j U^\dag\p_k U\right) \non
&=& \1{4\pi^2} \int d^3x \; \epsilon^{ijk}
\phi^\dag \p_i\phi \p_j\phi^\dag \p_k \phi .
\eeq
Instead of the conventional potential term, we consider here a
potential term motivated by two-component Bose-Einstein condensates
(BECs), given by
\beq
V(\phi,\phi^*)
= \frac{m^2}{8} \left[1 - (\phi^\dagger \sigma_3 \phi)^2 \right]
= \frac{1}{2} m^2 |\phi_1|^2 |\phi_2|^2;
\label{eq:potential}
\eeq
see the Appendix of Ref.~\cite{Gudnason:2014hsa} for a relation to
BECs.
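The rewriting of the potential on the constraint sphere can be verified
numerically; the following Python sketch (our illustration, with an
arbitrary illustrative mass value, not part of the paper) checks the
identity on random points of $S^3$:

```python
import numpy as np

# Quick numerical check of the rewriting of the potential: on the
# constraint sphere |phi1|^2 + |phi2|^2 = 1,
#   (m^2/8) [1 - (phi^dag sigma_3 phi)^2] = (m^2/2) |phi1|^2 |phi2|^2.
rng = np.random.default_rng(0)
z = rng.normal(size=(1000, 4))
z /= np.linalg.norm(z, axis=1, keepdims=True)   # random points on S^3
phi1 = z[:, 0] + 1j * z[:, 1]
phi2 = z[:, 2] + 1j * z[:, 3]

m = 4.0                                         # illustrative mass value
n3 = np.abs(phi1) ** 2 - np.abs(phi2) ** 2      # phi^dag sigma_3 phi
lhs = m**2 / 8 * (1 - n3**2)
rhs = m**2 / 2 * np.abs(phi1) ** 2 * np.abs(phi2) ** 2
assert np.allclose(lhs, rhs)
```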
With this potential, the full symmetry $\tilde G$ is explicitly broken
down to
\beq
G = \mathrm{U}(1) \times \mathrm{O}(2) \simeq
\mathrm{U}(1)_0 \times [\mathrm{U}(1)_3 \rtimes (\mathbb{Z}_2)_{1,2}].
\eeq
Here, each group is defined as
\beq
\mathrm{U}(1)_0:&& \quad \phi \to e^{i \alpha} \phi, \\
\mathrm{U}(1)_3:&& \quad \phi \to e^{i \beta \sigma_3} \phi, \\
(\mathbb{Z}_2)_{1,2}:&& \quad \phi \to e^{i (\pi/2) \sigma_{1,2}} \phi,
\eeq
where $(\mathbb{Z}_2)_{1,2}$ acts nontrivially on U(1)$_3$, so that they
combine into a semi-direct product, denoted by $\rtimes$.
The vacua of the potential in Eq.~\eqref{eq:potential} are
\begin{align}
\begin{split}
\odot\ : \quad \phi^{\rm T} = (e^{i\alpha},0), \\
\otimes\ : \quad \phi^{\rm T} = (0,e^{i\beta}),
\end{split}
\label{eq:vacua}
\end{align}
and the unbroken symmetry $H$ is
\beq
H_{\odot} = \mathrm{U}(1)_{0-3}: && \quad
\phi \to e^{i \alpha} e^{-i \alpha \sigma_3} \phi, \non
H_{\otimes} = \mathrm{U}(1)_{0+3}: && \quad
\phi \to e^{i \alpha} e^{+i \alpha \sigma_3} \phi,
\eeq
for the $\odot$ and the $\otimes$ vacuum of Eq.~(\ref{eq:vacua}),
respectively.
Therefore, the vacuum manifold (or the moduli space of vacua) is given
by
\beq
\mathcal{M} \simeq G/H =
\frac{\mathrm{U}(1)_0 \times [\mathrm{U}(1)_3 \rtimes (\mathbb{Z}_2)_{1,2}]}{\mathrm{U}(1)_{0 \pm 3}}
\simeq \mathrm{SO}(2)_{0\mp 3} \rtimes (\mathbb{Z}_2)_{1,2} = \mathrm{O}(2).
\eeq
The nontrivial homotopy groups of the vacuum manifold are
\beq
\pi_0(\mathcal{M}) = \mathbb{Z}_2, \quad
\pi_1(\mathcal{M}) = \mathbb{Z},
\eeq
admitting domain walls and vortices, respectively.
By means of the Hopf map $\vec{n} = \phi^\dagger \vec{\sigma} \phi$,
the principal chiral SU(2) model can be mapped to
the O(3) nonlinear sigma model with $\vec{n}^2 =1$ or equivalently
the ${\mathbb C}P^1$ model.
The potential term in Eq.~(\ref{eq:potential}) is
mapped to $V=\frac{m^2}{8} (1-n_3^2)$,
which is referred to as the Ising-type potential in ferromagnets
\cite{Kobayashi:2014xua}.
The ${\mathbb C}P^1$ model with the same potential is
often called the massive ${\mathbb C}P^1$ model
\cite{Abraham:1992vb,Arai:2002xa,Nitta:2012xq,Nitta:2012kj}.
This map can be obtained by
coupling a U(1) gauge field to $\phi$ with
common U(1) charges
and subsequently taking the strong gauge coupling limit
$e \to \infty$.
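The Hopf map can also be checked numerically: for any doublet on the
constraint sphere, the image $\vec n$ is a unit three-vector, so the
model indeed lands on the O(3) sigma model. A small Python sketch (ours,
not from the paper):

```python
import numpy as np

# Consistency sketch of the Hopf map n = phi^dag sigma phi: for any phi
# with |phi1|^2 + |phi2|^2 = 1 the image n is a unit three-vector.
rng = np.random.default_rng(0)
z = rng.normal(size=(1000, 4))
z /= np.linalg.norm(z, axis=1, keepdims=True)   # random points on S^3
phi1 = z[:, 0] + 1j * z[:, 1]
phi2 = z[:, 2] + 1j * z[:, 3]

n1 = 2 * (np.conj(phi1) * phi2).real            # phi^dag sigma_1 phi
n2 = 2 * (np.conj(phi1) * phi2).imag            # phi^dag sigma_2 phi
n3 = np.abs(phi1) ** 2 - np.abs(phi2) ** 2      # phi^dag sigma_3 phi
assert np.allclose(n1**2 + n2**2 + n3**2, 1)
```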
\section{Domain walls and vortices}\label{sec:wall-vortex}
In this section, we will review the constituents which will be used in
the next section in modified or compactified forms.
\subsection{Domain walls}
In $d=1+1$ dimensions, an (anti-)kink solution interpolating between
the two vacua in Eq.~(\ref{eq:vacua}) is given by
\beq
\phi^{\rm T} = \1{\sqrt{1 + e^{\pm 2m(x-X)}}}
(e^{i \alpha} , e^{\pm m(x-X) + i \beta }),
\label{eq:kink_sol}
\eeq
with $X\in\mathbb{R}$ being the translational modulus of the kink.
Here $\alpha$ and $\beta$ are not moduli of the kink but moduli of the
vacua in Eq.~(\ref{eq:vacua}).
Note that this solution is (statically) exact in the form given above,
even in the presence of the Skyrme or sixth-order derivative term
(this can easily be understood as the Skyrme (sixth-order derivative)
term is nonzero only when a solution nontrivially depends on two
(three) spatial coordinates).
Once waves on top of this static solution are considered, the
higher-order derivative terms must be taken into account; see
e.g.~Ref.~\cite{Kudryavtsev:1997nw}.
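Two properties of the kink profile are easy to verify numerically: it
stays on the constraint sphere, and its gradient energy density equals
its potential energy density pointwise (a BPS-like saturation, which is
consistent with the static exactness noted above). The following sketch
is our illustration, with arbitrary parameter values:

```python
import numpy as np

# Check the kink solution (with alpha = beta = 0 for simplicity):
# the constraint holds, and (1/2)|phi'|^2 = (m^2/2)|phi1|^2|phi2|^2.
m, X = 2.0, 0.0
x = np.linspace(-5.0, 5.0, 2001)

norm = np.sqrt(1 + np.exp(2 * m * (x - X)))
phi1 = 1 / norm
phi2 = np.exp(m * (x - X)) / norm

assert np.allclose(phi1**2 + phi2**2, 1)        # constraint on S^3

grad = 0.5 * (np.gradient(phi1, x) ** 2 + np.gradient(phi2, x) ** 2)
pot = 0.5 * m**2 * phi1**2 * phi2**2
assert np.allclose(grad, pot, atol=1e-3)        # BPS-like saturation
```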
In the static case, the kink can trivially be extended
to a domain line in $d=2+1$ dimensions and to a domain wall in $d=3+1$
dimensions, with a one- and two-dimensional world volume,
respectively.
By the Hopf map, the solution \eqref{eq:kink_sol} is mapped to a kink
in the massive $\mathbb{C}P^1$ model
\cite{Abraham:1992vb,Arai:2002xa,Nitta:2012wi}.
In that case, the phase difference $\beta - \alpha$ becomes a modulus
of the kink.
In the $(3+1)$-dimensional case, we can think of our toroidal objects
in Sec.~\ref{sec:toroidal-wall} as a domain wall wrapped up on a torus
with its $S^1$ moduli twisted in both world-volume directions.
It will, however, prove convenient to take a different point of view,
as we shall see, namely to consider first a vortex string which is
then wrapped up on a circle. In the next subsection we therefore
review the (global) vortex.
\subsection{Vortices}
In $d=2+1$ dimensions the model allows for global vortices.
The vortices of $\phi_1$ trap or localize $\phi_2$ in their cores and
they carry a U(1) modulus being the phase of $\phi_2$.
We will now review the global vortex in the nonlinear sigma model
with the potential \eqref{eq:potential}, see \cite{Gudnason:2014hsa}.
The vortex can be constructed using the following Ansatz
\beq
\phi^{\rm T} = \left(\sin f(r) e^{i\varphi + i \alpha},
\cos f(r) e^{i \beta}\right),
\eeq
where $r\in[0,\infty),\varphi\in[0,2\pi)$ are polar coordinates in a
plane.
The constant, $\alpha$, can be absorbed by
a redefinition of the coordinate $\varphi$,
while the constant $\beta$ is a U(1) modulus.
This simplifies the Lagrangian density to \cite{Gudnason:2014hsa}
\beq
-\mathcal{L} =
\frac{1}{2}f_r^2
+\frac{1}{2r^2}\sin^2 f
+\frac{\kappa}{2r^2}\sin^2(f) f_r^2
+\frac{1}{8} m^2 \sin^2(2f),
\eeq
for which the equation of motion reads \cite{Gudnason:2014hsa}
\beq
f_{rr} + \frac{1}{r} f_r - \frac{1}{2r^2}\sin 2f
+\frac{\kappa}{r^2}\sin^2 f\left(f_{rr} - \frac{1}{r} f_r\right)
+\frac{\kappa}{2r^2}\sin(2f) f_r^2
-\frac{1}{4} m^2 \sin 4f = 0.
\eeq
The boundary conditions for the vortex system are given by
\beq
f(0) = 0, \qquad
f(\infty) = \frac{\pi}{2}.
\eeq
We show numerical solutions in Fig.~\ref{fig:vortex} for $m=1,4$ and
$\kappa=0,1$.
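For $\kappa=0$ the profile equation is an ordinary boundary-value
problem, which can be solved, for instance, by Newton iteration on a
finite-difference grid. The following Python sketch is our illustration
of such a solver (not the code used for the figures), with the boundary
at a finite radius $R$ standing in for infinity:

```python
import numpy as np

# Newton iteration for the kappa = 0 profile equation
#   f'' + f'/r - sin(2f)/(2 r^2) - (m^2/4) sin(4f) = 0,
#   f(0) = 0,  f(R) = pi/2.
m, R, N = 1.0, 25.0, 400
h = R / (N + 1)
r = h * np.arange(1, N + 1)            # interior grid points
f = np.pi / 2 * np.tanh(m * r)         # initial guess with correct b.c.

def residual(f):
    fm = np.concatenate(([0.0], f[:-1]))          # boundary f(0) = 0
    fp = np.concatenate((f[1:], [np.pi / 2]))     # boundary f(R) = pi/2
    return ((fp - 2 * f + fm) / h**2 + (fp - fm) / (2 * h * r)
            - np.sin(2 * f) / (2 * r**2) - m**2 / 4 * np.sin(4 * f))

for _ in range(30):
    F = residual(f)
    J = np.zeros((N, N))               # tridiagonal Jacobian, dense for brevity
    i = np.arange(N)
    J[i, i] = -2 / h**2 - np.cos(2 * f) / r**2 - m**2 * np.cos(4 * f)
    J[i[:-1], i[:-1] + 1] = 1 / h**2 + 1 / (2 * h * r[:-1])
    J[i[1:], i[1:] - 1] = 1 / h**2 - 1 / (2 * h * r[1:])
    f = f - np.linalg.solve(J, F)

assert np.max(np.abs(residual(f))) < 1e-8
assert np.all(np.diff(f) > -1e-8)      # monotonic profile from 0 to pi/2
```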
\begin{figure}[!htb]
\mbox{
\includegraphics[width=0.45\linewidth]{bec_vortex_m1_condensates}\quad
\includegraphics[width=0.45\linewidth]{bec_vortex_m4_condensates}}
\mbox{
\includegraphics[width=0.45\linewidth]{bec_vortex_m1_energy}\quad
\includegraphics[width=0.45\linewidth]{bec_vortex_m4_energy}}
\caption{Vortex profiles and energy densities for solutions without
the Skyrme term $\kappa=0$ (blue curve) and with the Skyrme term
$\kappa=1$ (dotted red curve) for $m=1$ (left panels) and $m=4$
(right panels).}
\label{fig:vortex}
\end{figure}
By the Hopf map, they can (topologically) be mapped to lumps.
In $d=3+1$ dimensions, these vortices are extended to vortex strings
or cosmic strings. They are global analogues of Witten's
superconducting strings \cite{Witten:1984eb}.
We may call them superflowing cosmic strings.
Once extended to $(3+1)$-dimensional spacetime, the strings bear a
U(1) modulus, which we can parametrize as
\beq
\phi^{\rm T} = \left(\sin f(r) e^{i\varphi},
\cos f(r) e^{i\zeta(z)}\right).
\eeq
In the next section we will compactify these strings on a circle which
requires a nontrivial twist of the modulus $\zeta$.
\section{Toroidal Skyrmions in $3+1$ dimensions
\label{sec:toroidal-wall}}
In this section we will consider a closed vortex string, i.e.~the
vortex string wound up on a circle and thus forming a torus-like
object.
Such a closed vortex string is unstable unless its U(1) modulus is
twisted along the string (viz.~it is topologically trivial otherwise).
In the final configuration, the U(1) modulus is twisted $P$ times
along the toroidal ($\alpha$) cycle of the torus and the global string
winds $Q$ times ``along'' the poloidal ($\beta$) cycle of the torus;
see Fig.~\ref{fig:cycles}.
\begin{figure}[!tbh]
\begin{center}
\includegraphics[width=0.5\linewidth]{torus-cycles}
\end{center}
\caption{The two cycles of the torus.
The toroidal and poloidal cycles are
denoted by $\alpha$ and $\beta$, respectively.
The $\odot$ and $\otimes$ denote the vacua
in Eq.~(\ref{eq:vacua}), respectively.
The U(1) modulus is twisted $P$ and $Q$ times
along the cycles $\alpha$ and $\beta$,
respectively. \label{fig:cycles}}
\end{figure}
The torus-shaped solution requires us to study the full partial
differential equation (PDE) numerically, for which we will use the
relaxation method on a cubic lattice. Because of the topological
nature of the objects we study, it is sufficient to employ Neumann
conditions on the boundary of the lattice, whereas the choice of
initial condition is crucial.
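The relaxation strategy can be sketched as a gradient flow of the
gradient-plus-potential energy, with Neumann boundaries implemented by
edge replication and the constraint restored by renormalization after
every step. The following toy Python sketch (our illustration with
illustrative parameters and without the higher-derivative terms, not the
authors' code) shows the structure of such an iteration:

```python
import numpy as np

# Toy gradient-flow relaxation for the doublet phi on a small cubic
# lattice with Neumann boundaries, renormalizing |phi| = 1 each step.
rng = np.random.default_rng(1)
n, h, eps, m = 16, 0.5, 0.05, 1.0

phi = rng.normal(size=(2, n, n, n)) + 1j * rng.normal(size=(2, n, n, n))
phi /= np.sqrt((np.abs(phi) ** 2).sum(axis=0))

def lap(u):
    p = np.pad(u, [(0, 0), (1, 1), (1, 1), (1, 1)], mode='edge')  # Neumann
    return (p[:, 2:, 1:-1, 1:-1] + p[:, :-2, 1:-1, 1:-1]
            + p[:, 1:-1, 2:, 1:-1] + p[:, 1:-1, :-2, 1:-1]
            + p[:, 1:-1, 1:-1, 2:] + p[:, 1:-1, 1:-1, :-2] - 6 * u) / h**2

def energy(p):
    e = 0.5 * sum(np.sum(np.abs(np.diff(p, axis=ax)) ** 2) / h**2
                  for ax in (1, 2, 3))
    v = 0.5 * m**2 * np.sum(np.abs(p[0]) ** 2 * np.abs(p[1]) ** 2)
    return (e + v) * h**3

E0 = energy(phi)
for _ in range(200):
    dV = 0.5 * m**2 * phi * np.abs(phi[::-1]) ** 2  # dV/dphi_a^*, a != b
    phi = phi + eps * (0.5 * lap(phi) - dV)         # descent step
    phi /= np.sqrt((np.abs(phi) ** 2).sum(axis=0))  # restore constraint
assert energy(phi) < E0                             # energy relaxes
```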
For the initial configuration we will use the following Ansatz
\begin{align}
\phi^{\rm T} = \left(\sin\left[
\cos^{-1}\{\sin f(r) \sin\theta\} \right] e^{i Q \tan^{-1}
(\tan f(r)\cos\theta)}, \cos\left[
\cos^{-1}\{\sin f(r) \sin\theta\} \right] e^{i P \varphi}
\right), \label{eq:ansatz}
\end{align}
where $r\in[0,\infty)$, $\theta\in[0,\pi]$, $\varphi\in[0,2\pi)$ and
$f(r)$ is an appropriately chosen monotonically decreasing function
satisfying the boundary conditions
\begin{align}
f(r \to 0) \to \pi, \quad
f(r \to \infty) \to 0.
\end{align}
The baryon number (Skyrme charge) of $\pi_3(S^3)\simeq\mathbb{Z}$ for
the configuration given in Eq.~\eqref{eq:ansatz} is
\begin{align}
\begin{split}
B &= \frac{1}{4\pi^2} \int d^3x\: \epsilon^{i j k}
\phi^\dag \p_i \phi \p_j \phi^\dag \p_k \phi \\
&= -\frac{1}{2 \pi^2} \int_0^{\infty} dr\: \int_0^\pi d\theta\: \int_0^{2 \pi} d\varphi\; \sin\theta\, P Q f^\prime(r) \sin^2 f(r) \\
&= -\frac{P Q}{\pi} \int_0^\infty dr\: \p_r \left[f(r) - \sin f(r) \cos f(r) \right] \\
&= P Q.
\end{split}
\end{align}
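The final radial integral is a total derivative, so $B=PQ$ is fixed by
the boundary values of $f$ alone, independently of the profile shape.
This can be checked numerically with any monotonic profile satisfying
the boundary conditions (the exponential below is purely illustrative):

```python
import numpy as np

# Check B = -(2 P Q / pi) \int_0^inf dr f'(r) sin^2 f(r) = P Q
# for an arbitrary monotonic profile with f(0) = pi, f(inf) = 0.
P, Q = 3, 2
r = np.linspace(0.0, 30.0, 30001)
f = np.pi * np.exp(-0.7 * r)                    # illustrative profile
integrand = -2 * P * Q / np.pi * np.gradient(f, r) * np.sin(f) ** 2
B = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))  # trapezoid
assert abs(B - P * Q) < 1e-3                    # B = PQ = 6 here
```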
Although we seemingly have two quantum numbers to dial in the
configuration, it will prove convenient to think about the winding
number $Q$ as that of the global vortex. This may suggest that $Q>1$
will be unstable, as global vortices repel with a force $\sim 1/d$,
where $d$ is the separation between two strings.
We confirm this expectation by numerically solving the equations and
find for a wide range of parameters that for $Q>1$, the
relaxation method always splits up the object into $Q$ individual
strings; each with a $P$-wound U(1) phase. For details, see Appendix
\ref{app:stringsplitting}.
We can therefore study the numerical solutions with baryon number
$B=P$, for which the Ansatz \eqref{eq:ansatz} reduces to
\beq
\phi^{\rm T} = \left(\cos f(r) + i\sin f(r)\cos\theta,
\sin f(r) \sin\theta\, e^{i P \varphi} \right).
\label{eq:torus_reduced}
\eeq
This is exactly the axially symmetric generalization of the hedgehog
Ansatz and this is just what we need (note that for Skyrmions without
our BEC-motivated potential, this Ansatz is only appropriate for
$B=1,2$ while for $B>2$ the axial symmetry no longer yields the
minimum-energy configuration).
We will study two cases in turn; in the first we turn on only the
fourth-order derivative term, i.e.~$\kappa=1$ and $c_6=0$ while in the
second case we switch off the fourth-order but use the sixth-order
derivative term, i.e.~$\kappa=0$ and $c_6=1$. We will call them the
2+4 model and the 2+6 model, respectively.
In Figs.~\ref{fig:t4}, \ref{fig:t4_baryonslice} and
\ref{fig:t4_energyslice} we show solutions for the case of the 2+4 model
($\kappa=1$ and $c_6=0$) with mass $m=4$. In Fig.~\ref{fig:t4} are
shown the three-dimensional isosurfaces at half the maximum value of the
baryon charge density. The color scheme used is chosen such that the
U(1) phase, $\mathop{\rm arg}\phi_2$, is mapped to the hue while the
lightness is given by the absolute value of the imaginary part of the
vortex condensate: $|\Im(\phi_1)|$.
In Figs.~\ref{fig:t4_baryonslice} and \ref{fig:t4_energyslice} are
shown the baryon charge density and energy density, at two different
cross sections cutting through the origin of the torus, respectively.
In this case, they are practically identical, which means that the
energy density is located where the baryon charge is.
\begin{figure}[!htp]
\begin{center}
\captionsetup[subfloat]{labelformat=empty}
\mbox{
\subfloat[$(P,Q)=(1,1)$]{\includegraphics[width=0.32\linewidth]{T4B11n}}
\subfloat[$(P,Q)=(2,1)$]{\includegraphics[width=0.32\linewidth]{T4B21n}}
\subfloat[$(P,Q)=(3,1)$]{\includegraphics[width=0.32\linewidth]{T4B31n}}}
\mbox{
\subfloat[$(P,Q)=(4,1)$]{\includegraphics[width=0.32\linewidth]{T4B41n}}
\subfloat[$(P,Q)=(5,1)$]{\includegraphics[width=0.32\linewidth]{T4B51n}}
\subfloat[$(P,Q)=(6,1)$]{\includegraphics[width=0.32\linewidth]{T4B61n}}}
\mbox{
\includegraphics[width=0.5\linewidth]{colorbar}}
\caption{
Isosurfaces showing the solutions for the 2+4 model, i.e.~for
$\kappa=1$ and $c_6=0$, at constant baryon charge density equal to half
its maximum value.
The color represents the phase of the scalar field $\phi_2$ and the
lightness is given by $|\Im(\phi_1)|$.
The calculations are done on an $81^3$ cubic lattice with the
relaxation method.
}
\label{fig:t4}
\end{center}
\end{figure}
\begin{figure}[!htp]
\begin{center}
\captionsetup[subfloat]{labelformat=empty}
\mbox{
\subfloat[$(P,Q)=(1,1)$]{\includegraphics[width=0.49\linewidth]{T4B11_baryonslice}}
\subfloat[$(P,Q)=(2,1)$]{\includegraphics[width=0.49\linewidth]{T4B21_baryonslice}}}
\mbox{
\subfloat[$(P,Q)=(3,1)$]{\includegraphics[width=0.49\linewidth]{T4B31_baryonslice}}
\subfloat[$(P,Q)=(4,1)$]{\includegraphics[width=0.49\linewidth]{T4B41_baryonslice}}}
\mbox{
\subfloat[$(P,Q)=(5,1)$]{\includegraphics[width=0.49\linewidth]{T4B51_baryonslice}}
\subfloat[$(P,Q)=(6,1)$]{\includegraphics[width=0.49\linewidth]{T4B61_baryonslice}}}
\caption{
Baryon charge density for solutions in the 2+4 model, i.e.~with
$\kappa=1$ and $c_6=0$, at $xz$ slices (for $y=0$) and $xy$ slices (for
$z=0$). $yz$ slices are omitted as they are identical to the $xz$
slices by rotational symmetry of the torus. The calculations are done
on an $81^3$ cubic lattice with the relaxation method.
}
\label{fig:t4_baryonslice}
\end{center}
\end{figure}
\begin{figure}[!htp]
\begin{center}
\captionsetup[subfloat]{labelformat=empty}
\mbox{
\subfloat[$(P,Q)=(1,1)$]{\includegraphics[width=0.49\linewidth]{T4B11_energyslice}}
\subfloat[$(P,Q)=(2,1)$]{\includegraphics[width=0.49\linewidth]{T4B21_energyslice}}}
\mbox{
\subfloat[$(P,Q)=(3,1)$]{\includegraphics[width=0.49\linewidth]{T4B31_energyslice}}
\subfloat[$(P,Q)=(4,1)$]{\includegraphics[width=0.49\linewidth]{T4B41_energyslice}}}
\mbox{
\subfloat[$(P,Q)=(5,1)$]{\includegraphics[width=0.49\linewidth]{T4B51_energyslice}}
\subfloat[$(P,Q)=(6,1)$]{\includegraphics[width=0.49\linewidth]{T4B61_energyslice}}}
\caption{
Energy density for solutions in the 2+4 model, i.e.~with $\kappa=1$
and $c_6=0$, at $xz$ slices (for $y=0$) and $xy$ slices (for
$z=0$). $yz$ slices are omitted as they are identical to the $xz$
slices by rotational symmetry of the torus. The calculations are done
on an $81^3$ cubic lattice with the relaxation method.
}
\label{fig:t4_energyslice}
\end{center}
\end{figure}
As a check on our numerical precision, we calculate the baryon charge
density and integrate it numerically, see Table \ref{tab:EB_T4}.
As already explained, our Skyrmionic tori are only stable for $Q=1$,
but to study whether they are stable for higher $P>1$, we need to
compare the energy of the configurations. In Table \ref{tab:EB_T4}, we
calculate the energy per $B=P$ and find that the energy drops for the
first four tori, viz.~$P=1,2,3,4$, but then it starts to increase
slightly. The increase is so small that the $P=5$ solution is still
energetically stable (also taking into account the numerical accuracy)
while $P=6$ is only metastable \footnote{The question of stability may
also depend on the coefficients of the higher-derivative terms and
the mass.}. That is, the energy of the $P=6$
solution is larger than two times that of the $P=3$ solution and hence
it is bound to decay. Here we have not studied the potential barrier
for the decay and thus cannot calculate its lifetime.
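The stability statements can be cross-checked directly from the central
values of Table \ref{tab:EB_T4}: a charge-$P$ configuration is
energetically stable if its total energy lies below that of every
partition into smaller charges. A short Python sketch of this
bookkeeping (ours, using the tabulated central values only):

```python
# Energetic (meta)stability check from the table of E/B values.
E_per_B = {1: 93.3151, 2: 85.2782, 3: 84.0152,
           4: 83.6919, 5: 84.1664, 6: 84.7335}
E = {P: P * e for P, e in E_per_B.items()}      # total energies

def partitions(n, max_part=None):
    # Proper partitions of n into parts of size <= max_part.
    if max_part is None:
        max_part = n - 1
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def stable(P):
    return all(E[P] < sum(E[q] for q in p) for p in partitions(P))

assert stable(5)            # P = 5 lies below every decay channel
assert not stable(6)        # P = 6 lies above its cheapest channel
# The cheapest decay channel of P = 6 is indeed two B = 3 objects:
assert min(partitions(6), key=lambda p: sum(E[q] for q in p)) == (3, 3)
```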
\begin{table}[!htp]
\begin{center}
\caption{Numerically integrated baryon charge and energy (mass) for
the solutions in the 2+4 model. Stability is observed for the first
five solutions whilst $P=6$ is only energetically metastable. }
\label{tab:EB_T4}
\begin{tabular}{c||cc}
$B$ & $B^{\rm numerical}$ & $E^{\rm numerical}/B$ \\
\hline\hline
$1$ & $0.9995$ & $93.3151\pm 0.0297$ \\
$2$ & $1.9994$ & $85.2782\pm 0.0223$ \\
$3$ & $2.9985$ & $84.0152\pm 0.0200$ \\
$4$ & $3.9981$ & $83.6919\pm 0.0516$ \\
$5$ & $4.9959$ & $84.1664\pm 0.0312$ \\
$6$ & $5.9921$ & $84.7335\pm 0.0204$
\end{tabular}
\end{center}
\end{table}
Next we will turn to the case of the 2+6 model, i.e.~with only
sixth-order derivative terms ($\kappa=0$ and $c_6=1$) and again
with a mass of $m=4$. Numerical solutions are shown in
Figs.~\ref{fig:t6}, \ref{fig:t6_baryonslice} and
\ref{fig:t6_energyslice}. As in the previous case, we show the
3-dimensional isosurfaces of the baryon charge density at half the
maximum value in Fig.~\ref{fig:t6}.
In Figs.~\ref{fig:t6_baryonslice} and \ref{fig:t6_energyslice} are
shown the baryon charge density and energy density, respectively, at
two different cross sections cutting the torus through the origin.
Notice that the energy densities for these solutions are somewhat more
complex than their respective baryon charge densities. This is one
difference between the 2+6 model and the 2+4 model. The second
difference is that in this case, the torus shape is vaguely visible
already for $P=1$, whereas for the previous case $P=1$ has (unbroken)
spherical symmetry.
Let us also comment on the circular shape of the torus for the
$(P,Q)=(6,1)$ solution along the toroidal direction in
Fig.~\ref{fig:t6}; this flattening out of the circle is not aligned
with the lattice, but is at almost 45 degrees to the lattice
axis. Since the small $P$ solutions do possess almost perfect circular
symmetry, we believe that this is not a lattice effect, but instead
signals metastability of the string: for high enough $B=P$ the string
wants to collapse and break up. The same effect can also be observed
in the $(P,Q)=(6,1)$ solution in Fig.~\ref{fig:t6_energyslice} on the
$xy$ slice where the energy density displays four distinct wave tops
around the toroidal cycle.
\begin{figure}[!htp]
\begin{center}
\captionsetup[subfloat]{labelformat=empty}
\mbox{
\subfloat[$(P,Q)=(1,1)$]{\includegraphics[width=0.32\linewidth]{T6B11n}}
\subfloat[$(P,Q)=(2,1)$]{\includegraphics[width=0.32\linewidth]{T6B21n}}
\subfloat[$(P,Q)=(3,1)$]{\includegraphics[width=0.32\linewidth]{T6B31n}}}
\mbox{
\subfloat[$(P,Q)=(4,1)$]{\includegraphics[width=0.32\linewidth]{T6B41n}}
\subfloat[$(P,Q)=(5,1)$]{\includegraphics[width=0.32\linewidth]{T6B51n}}
\subfloat[$(P,Q)=(6,1)$]{\includegraphics[width=0.32\linewidth]{T6B61n}}}
\mbox{
\includegraphics[width=0.5\linewidth]{colorbar}}
\caption{
Isosurfaces showing the solutions for the 2+6 model, i.e.~for
$\kappa=0$ and $c_6=1$, at constant baryon charge density equal to half
its maximum value.
The color represents the phase of the scalar field $\phi_2$ and the
lightness is given by $|\Im(\phi_1)|$.
The calculations are done on an $81^3$ cubic lattice
with the relaxation method.
}
\label{fig:t6}
\end{center}
\end{figure}
\begin{figure}[!htp]
\begin{center}
\captionsetup[subfloat]{labelformat=empty}
\mbox{
\subfloat[$(P,Q)=(1,1)$]{\includegraphics[width=0.49\linewidth]{T6B11_baryonslice}}
\subfloat[$(P,Q)=(2,1)$]{\includegraphics[width=0.49\linewidth]{T6B21_baryonslice}}}
\mbox{
\subfloat[$(P,Q)=(3,1)$]{\includegraphics[width=0.49\linewidth]{T6B31_baryonslice}}
\subfloat[$(P,Q)=(4,1)$]{\includegraphics[width=0.49\linewidth]{T6B41_baryonslice}}}
\mbox{
\subfloat[$(P,Q)=(5,1)$]{\includegraphics[width=0.49\linewidth]{T6B51_baryonslice}}
\subfloat[$(P,Q)=(6,1)$]{\includegraphics[width=0.49\linewidth]{T6B61_baryonslice}}}
\caption{
Baryon charge density for solutions in the 2+6 model, i.e.~with
$\kappa=0$ and $c_6=1$, at $xz$ slices (for $y=0$) and $xy$ slices (for
$z=0$). $yz$ slices are omitted as they are identical to the $xz$
slices by rotational symmetry of the torus. The calculations are done
on an $81^3$ cubic lattice with the relaxation method.
}
\label{fig:t6_baryonslice}
\end{center}
\end{figure}
\begin{figure}[!htp]
\begin{center}
\captionsetup[subfloat]{labelformat=empty}
\mbox{
\subfloat[$(P,Q)=(1,1)$]{\includegraphics[width=0.49\linewidth]{T6B11_energyslice}}
\subfloat[$(P,Q)=(2,1)$]{\includegraphics[width=0.49\linewidth]{T6B21_energyslice}}}
\mbox{
\subfloat[$(P,Q)=(3,1)$]{\includegraphics[width=0.49\linewidth]{T6B31_energyslice}}
\subfloat[$(P,Q)=(4,1)$]{\includegraphics[width=0.49\linewidth]{T6B41_energyslice}}}
\mbox{
\subfloat[$(P,Q)=(5,1)$]{\includegraphics[width=0.49\linewidth]{T6B51_energyslice}}
\subfloat[$(P,Q)=(6,1)$]{\includegraphics[width=0.49\linewidth]{T6B61_energyslice}}}
\caption{
Energy density for solutions in the 2+6 model, i.e.~with $\kappa=0$
and $c_6=1$, at $xz$ slices (for $y=0$) and $xy$ slices (for
$z=0$). $yz$ slices are omitted as they are identical to the $xz$
slices by rotational symmetry of the torus. The calculations are done
on an $81^3$ cubic lattice with the relaxation method.
}
\label{fig:t6_energyslice}
\end{center}
\end{figure}
We again check the numerical precision by numerically evaluating the
total baryon charge, see Table \ref{tab:EB_T6}. As for the stability
of the higher $P>1$ solutions, we numerically evaluate the energy
(mass) of the solutions and again find that the energy per baryon
number decreases as $P$ is increased, but this time only for the first
three, $P=1,2,3$, before it starts to increase slightly. The
first five solutions are all energetically \emph{stable} while $P=6$
is only metastable.
\begin{table}[!htp]
\begin{center}
\caption{Numerically integrated baryon charge and energy (mass) for
the solutions in the 2+6 model. Stability is observed for the first
five solutions whilst $P=6$ is only energetically metastable. }
\label{tab:EB_T6}
\begin{tabular}{c||cr@{$\,\pm\,$}l}
$B$ & $B^{\rm numerical}$ & \multicolumn{2}{c}{$E^{\rm numerical}/B$} \\
\hline\hline
$1$ & $0.9999$ & $100.8613$ & $0.0410$ \\
$2$ & $1.9998$ & $89.7184$ & $0.0532$ \\
$3$ & $2.9995$ & $87.3095$ & $0.1871$ \\
$4$ & $3.9981$ & $87.5179$ & $0.0721$ \\
$5$ & $4.9970$ & $87.5560$ & $0.0901$ \\
$6$ & $5.9939$ & $88.1414$ & $0.1145$
\end{tabular}
\end{center}
\end{table}
\section{Transition to Toroidal Skyrmions
\label{sec:transition}}
In this section we study the transition from the normal Skyrmion of
higher charge (i.e.~with $m=0$) to the toroidal Skyrmion (i.e.~with
$m$ sufficiently large). For concreteness, we study the transition in
the normal Skyrme model ($\kappa=1$ and $c_6=0$) and for $B=3$ where the
transition is very visible (as opposed to for instance $B=1$ and
$B=2$).
When the potential is turned off, the $B=3$ Skyrmion in the normal
Skyrme model is of tetrahedral shape \cite{Battye:1997qq}.
Turning on the potential \eqref{eq:potential}, vorton-like Skyrmions
become the lowest-energy state for a sufficiently large mass
parameter, $m$.
In order to find the critical mass necessary for obtaining tori or
global strings in the Skyrme model, we vary the mass parameter and
repeat the numerical calculation.
We are using the relaxation method to find numerical solutions. One
weakness of this method is that it only finds the nearest \emph{local}
minimal-energy solution, as opposed to the \emph{global} one.
For this reason we make two series of numerical calculations: one
starting from the tetrahedral solution, whose initial guess is
\cite{Houghton:1997kg}
\beq
\mathbf{n} = \left\{
\frac{R + \bar{R}}{1+R\bar{R}}\sin f,
\frac{i(\bar{R}-R)}{1+R\bar{R}}\sin f,
\frac{1 - R\bar{R}}{1+R\bar{R}}\sin f,
\cos f\right\},
\eeq
where $R$ is the rational map; for $B=3$ the tetrahedral map
is \cite{Houghton:1997kg}
\beq
R = \frac{z^3 - \sqrt{3}iz}{\sqrt{3}i z^2 - 1}, \qquad
z = \tan\left(\frac{\theta}{2}\right)e^{i\phi},
\label{eq:tetrahedral_ansatz}
\eeq
where $\theta,\phi$ are angles on the 2-sphere.
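A quick consistency check (ours, not from the paper) is that the
rational-map four-vector built from the tetrahedral map has unit norm
for any profile value $f$, as required for an SU(2)-valued initial
guess:

```python
import numpy as np

# Unit-norm check of the rational-map 4-vector for the B = 3
# tetrahedral map, sampled at random points on the 2-sphere.
rng = np.random.default_rng(2)
theta = rng.uniform(0.1, np.pi - 0.1, 500)
varphi = rng.uniform(0.0, 2 * np.pi, 500)
f = rng.uniform(0.0, np.pi, 500)

z = np.tan(theta / 2) * np.exp(1j * varphi)
R = (z**3 - np.sqrt(3) * 1j * z) / (np.sqrt(3) * 1j * z**2 - 1)
d = 1 + np.abs(R) ** 2
n = np.array([(R + np.conj(R)).real * np.sin(f) / d,
              (1j * (np.conj(R) - R)).real * np.sin(f) / d,
              (1 - np.abs(R) ** 2) * np.sin(f) / d,
              np.cos(f)])
assert np.allclose((n**2).sum(axis=0), 1.0)
```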
The other series of numerical solutions use the initial guess provided
by the torus Ansatz of Eq.~\eqref{eq:torus_reduced}.
Figs.~\ref{fig:T4B3_TEIC} and \ref{fig:T4B3_TOIC} show the two series
of numerical solutions starting from the tetrahedral and toroidal
initial guess, respectively. It is observed that for $m\gtrsim 3$ both
series converge to a flat torus. The difference in the colors is due
to a permutation of the fields $n_3$ and $n_2$. The two flat tori for
$m=4$ are physically the same and are not shown in
Figs.~\ref{fig:T4B3_TEIC} and \ref{fig:T4B3_TOIC}, but can be seen in
Fig.~\ref{fig:t4}.
\begin{figure}[!htp]
\begin{center}
\captionsetup[subfloat]{labelformat=empty}
\mbox{
\subfloat[$m=0$, $B^{\rm numerical}=2.9977$]{\includegraphics[width=0.33\linewidth]{T4B31_TEIC_m0}}
\subfloat[$m=1/2$, $B^{\rm numerical}=2.9983$]{\includegraphics[width=0.33\linewidth]{T4B31_TEIC_m05}}
\subfloat[$m=1$, $B^{\rm numerical}=2.9986$]{\includegraphics[width=0.33\linewidth]{T4B31_TEIC_m1}}}
\mbox{
\subfloat[$m=3/2$, $B^{\rm numerical}=2.9993$]{\includegraphics[width=0.33\linewidth]{T4B31_TEIC_m15}}
\subfloat[$m=2$, $B^{\rm numerical}=2.9986$]{\includegraphics[width=0.33\linewidth]{T4B31_TEIC_m2}}
\subfloat[$m=3$, $B^{\rm numerical}=2.9982$]{\includegraphics[width=0.33\linewidth]{T4B31_TEIC_m3}}}
\caption{Isosurfaces showing the solutions for the 2+4 model,
$B=3$ and various values of the mass parameter, $m$, and the
tetrahedral Ansatz \eqref{eq:tetrahedral_ansatz} as initial guess for
the relaxation.
The color represents the phase of the scalar field $\phi_2$ and the
lightness is given by $|\Im(\phi_1)|$.
The calculations are done on an $81^3$ cubic lattice
with the relaxation method. }
\label{fig:T4B3_TEIC}
\end{center}
\end{figure}
\begin{figure}[!htp]
\begin{center}
\captionsetup[subfloat]{labelformat=empty}
\mbox{
\subfloat[$m=0$, $B^{\rm numerical}=2.9980$]{\includegraphics[width=0.33\linewidth]{T4B31_TOIC_m0}}
\subfloat[$m=1/2$, $B^{\rm numerical}=2.9983$]{\includegraphics[width=0.33\linewidth]{T4B31_TOIC_m05}}
\subfloat[$m=1$, $B^{\rm numerical}=2.9987$]{\includegraphics[width=0.33\linewidth]{T4B31_TOIC_m1}}}
\mbox{
\subfloat[$m=3/2$, $B^{\rm numerical}=2.9992$]{\includegraphics[width=0.33\linewidth]{T4B31_TOIC_m15}}
\subfloat[$m=2$, $B^{\rm numerical}=2.9989$]{\includegraphics[width=0.33\linewidth]{T4B31_TOIC_m2}}
\subfloat[$m=3$, $B^{\rm numerical}=2.9994$]{\includegraphics[width=0.33\linewidth]{T4B31_TOIC_m3}}}
\caption{Isosurfaces showing the solutions for the 2+4 model,
$B=3$ and various values of the mass parameter, $m$, and the
toroidal Ansatz \eqref{eq:torus_reduced} as initial guess for the
relaxation.
The color represents the phase of the scalar field $\phi_2$ and the
lightness is given by $|\Im(\phi_1)|$.
The calculations are done on an $81^3$ cubic lattice
with the relaxation method. }
\label{fig:T4B3_TOIC}
\end{center}
\end{figure}
In order to determine which state is the lowest-energy state, we also
compare the energies for the two series of numerical solutions.
In Fig.~\ref{fig:energycomparison} we show the energies of the two
series of numerical solutions: the blue solid line shows the solution
whose initial guess is the tetrahedral Ansatz and the red dashed line
has the torus Ansatz as initial guess. We see that for $m\lesssim 1.5$
the tetrahedral solution is the lowest-energy state.
Our calculation indicates that a first-order phase transition takes
place around $m=m_{\rm critical}\sim 1.5$, where the energy of the
tetrahedral state rises above that of the toroidal one.
For $m\gtrsim 3$ both Ans\"atze relax to a flat torus.
Thus the tetrahedral state either becomes unstable or gradually merges
with the toroidal one.
The instability sets in for $m$ between 3 and 4.
In order to check that the phase transition around $m\sim 1.5$ really
takes place, we have run the solutions with an exceptionally long
relaxation time and found solutions with a very high accuracy (the
equations of motion are satisfied at every spatial position better
than $10^{-4}$ and about $10^{-5}$ on average).
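The same stopping criterion can be illustrated schematically. The sketch below is not the Skyrme relaxation code; it applies the analogous idea, i.e.~relaxing until the pointwise residual of the discrete field equation falls below a fixed tolerance, to a simple Laplace problem:

```python
import numpy as np

def relax_until(u, tol_max=1e-4, max_steps=20000):
    """Jacobi-relax the 2D Laplace equation with fixed boundary values,
    stopping once the pointwise residual of the discrete equation of
    motion is below tol_max everywhere (cf. the 1e-4 / 1e-5 accuracy
    criterion quoted in the text)."""
    for step in range(max_steps):
        update = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                         + u[1:-1, :-2] + u[1:-1, 2:])
        residual = np.abs(update - u[1:-1, 1:-1])
        u[1:-1, 1:-1] = update
        if residual.max() < tol_max:
            return step, residual.max(), residual.mean()
    raise RuntimeError("relaxation did not converge")

u = np.zeros((41, 41))
u[0, :] = 1.0  # fixed boundary value on one edge
steps, res_max, res_mean = relax_until(u)
print(res_max < 1e-4 and res_mean <= res_max)  # True
```

As in the text, the mean residual over the grid is typically well below the maximum used as the stopping condition.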
\begin{figure}[!htp]
\begin{center}
\includegraphics[width=0.8\linewidth]{energycomparison}
\caption{Energies of numerical solutions whose initial guesses are
the tetrahedral Ansatz (blue line) and the torus Ansatz (red line), as
functions of the mass $m$. For small $m\leq 1$, the tetrahedral Ansatz
gives tetrahedral solutions. At $m\gtrsim 1.5$ the tetrahedral Ansatz
gives rise to solutions that are heavier than those obtained with the
toroidal Ansatz, suggesting a first-order phase transition. For large
$m\gtrsim 3$, both series give tori. }
\label{fig:energycomparison}
\end{center}
\end{figure}
The fact that the two different solutions have almost exactly the same
energy near the critical mass, $m_{\rm critical}\sim 1.5$, is
indicative of a genuine crossing of the two states.
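The location of the crossing can be estimated by linearly interpolating the difference between the two energy branches. The numbers in the sketch below are purely illustrative placeholders and are not the data behind Fig.~\ref{fig:energycomparison}:

```python
import numpy as np

# Illustrative energy samples for the two branches (placeholder values,
# not the actual numerical data of the paper).
m_vals = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])
E_tet = np.array([1.00, 1.02, 1.05, 1.10, 1.17, 1.26])
E_tor = np.array([1.03, 1.04, 1.06, 1.095, 1.13, 1.17])

def crossing(m, E1, E2):
    """Locate the level crossing of two energy branches by linear
    interpolation of their difference; returns the estimated critical mass."""
    d = E1 - E2
    i = np.where(np.sign(d[:-1]) != np.sign(d[1:]))[0][0]
    return m[i] - d[i] * (m[i + 1] - m[i]) / (d[i + 1] - d[i])

m_c = crossing(m_vals, E_tet, E_tor)
print(1.0 < m_c < 2.0)  # True
```

With real data one would additionally propagate the numerical accuracy of the energies into an uncertainty on $m_{\rm critical}$.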
In order to see that the numerical solutions for $m$ between 1 and 2
are actually tetrahedral in nature as opposed to bent tori, we show
the solutions with colored isosurfaces at half the maximal value as
well as at a quarter of the maximal value of the baryon charge density
in Fig.~\ref{fig:T4B3_TEIC_aura}. It is observed that a cloud of
baryon charge density connects the string at antipodal points.
For sufficiently large $m\sim 2m_{\rm critical}$, the tetrahedral
solution becomes unstable and thus for large $m$ only the torus
exists.
\begin{figure}[!htp]
\begin{center}
\captionsetup[subfloat]{labelformat=empty}
\mbox{
\subfloat[$m=0$, $B^{\rm numerical}=2.9977$]{\includegraphics[width=0.33\linewidth]{T4B31_TEIC_m0_aura}}
\subfloat[$m=1/2$, $B^{\rm numerical}=2.9983$]{\includegraphics[width=0.33\linewidth]{T4B31_TEIC_m05_aura}}
\subfloat[$m=1$, $B^{\rm numerical}=2.9986$]{\includegraphics[width=0.33\linewidth]{T4B31_TEIC_m1_aura}}}
\mbox{
\subfloat[$m=3/2$, $B^{\rm numerical}=2.9993$]{\includegraphics[width=0.33\linewidth]{T4B31_TEIC_m15_aura}}
\subfloat[$m=2$, $B^{\rm numerical}=2.9986$]{\includegraphics[width=0.33\linewidth]{T4B31_TEIC_m2_aura}}
\subfloat[$m=3$, $B^{\rm numerical}=2.9982$]{\includegraphics[width=0.33\linewidth]{T4B31_TEIC_m3_aura}}}
\caption{Isosurfaces showing the solutions for the 2+4 model, for
$B=3$ and various values of the mass parameter, $m$, and the
tetrahedral Ansatz \eqref{eq:tetrahedral_ansatz} as initial guess for
the relaxation. The colored isosurface and the magenta shadow show the
isosurface at half and a quarter of the maximal value of the baryon
charge density, respectively.
The color represents the phase of the scalar field $\phi_2$ and the
lightness is given by $|\Im(\phi_1)|$.
The calculations are done on an $81^3$ cubic lattice
with the relaxation method. }
\label{fig:T4B3_TEIC_aura}
\end{center}
\end{figure}
\section{Summary and Discussion \label{sec:summary} }
We have studied Skyrmion solutions in the BEC Skyrme model,
which is a Skyrme model with the potential
term motivated by two-component BECs.
We have constructed stable Skyrmion solutions
for $P=1,2,3,4,5$ and $Q=1$,
yielding the baryon numbers $B=1,2,3,4,5$, as well as a metastable
solution for $P=6$ and $Q=1$ ($B=6$). We suspect that solutions with
higher baryon charge will all be metastable.
The energy and baryon charge distributions of
the configurations with $P>1$ are all of toroidal shape.
They are vortex rings of the field $\phi_1$,
with the field $\phi_2$ trapped in their cores,
where the phase of the field $\phi_2$ winds $P$ times
along the ring.
We have found that configurations with charge $(P,Q)$ decay into $Q$ rings
of charge $(P,1)$. This string splitting can be understood as a
consequence of the repulsion between global vortex strings.
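The repulsion invoked here can be made quantitative at leading order: well-separated global vortices interact with an energy per unit length proportional to the product of their winding numbers times a logarithm of the separation. The sketch below assumes this standard leading-order form, with the overall normalization (which depends on conventions) set to unity:

```python
import numpy as np

def vortex_interaction(n1, n2, d, L=100.0):
    """Leading-order interaction energy per unit length of two
    well-separated global vortices with winding numbers n1 and n2 at
    separation d: E_int ~ n1*n2*ln(L/d), with L an infrared cutoff.
    The overall normalization is conventional and set to unity here.
    Positive values mean repulsion."""
    return n1 * n2 * np.log(L / d)

# Same-sign windings repel, increasingly so at shorter separation,
# while opposite-sign windings attract:
print(vortex_interaction(1, 1, 5.0) > vortex_interaction(1, 1, 10.0) > 0
      > vortex_interaction(1, -1, 10.0))  # True
```

This is why a $(P,Q)$ configuration, which stacks $Q$ like-winding strings, lowers its energy by separating them.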
Finally we have discovered a first-order phase transition between
Skyrmions with a discrete point symmetry and axial (toroidal)
symmetry.
In two-component BECs,
one can introduce a Rabi oscillation term
$\gamma(\phi_1(x)^* \phi_2(x)+{\rm c.c.})$,
known as a Josephson term in superconductors,
in the Lagrangian.
Introduction of this term deforms the Skyrmions inside a domain wall
\cite{Nitta:2012wi,Nitta:2012xq,Gudnason:2014nba,Garaud:2012}.
What deformation this term introduces for toroidal Skyrmions in the
BEC Skyrme model remains as a future problem.
On the other hand, if we introduce the potential term
$V \sim \phi_1 + \phi_1^*$ \cite{Gudnason:2014hsa},
our configurations will become $P$ sine-Gordon kinks on a vortex ring,
which is a $(3+1)$-dimensional analogue of
Ref.~\cite{Kobayashi:2013ju},
in which sine-Gordon kinks on a domain wall ring
were constructed in $2+1$ dimensions.
Two-component BECs are known
to admit a stable composite soliton, a so-called D-brane soliton:
a domain wall on which
vortices end from both sides \cite{Kasamatsu:2010aq},
originally found in the massive $\mathbb{C}P^1$ model
\cite{Gauntlett:2000de,Isozumi:2004vg}.
The (BEC) Skyrme model discussed in this paper
has the same potential term and
is expected to admit such a D-brane soliton.
A configuration made of
a domain wall and an anti-domain wall
stretched by lump-strings in the massive $\mathbb{C}P^1$ model
was considered in Ref.~\cite{Nitta:2012kk},
in which it was discussed that
such a configuration is unstable to decay,
resulting in the creation of Hopfions.
Therefore, the same mechanism should work
also in the (BEC) Skyrme model discussed in this paper,
creating Skyrmions from brane annihilation,
as was discussed for two-component BECs \cite{Nitta:2012hy}.
The Bogomol'nyi-Prasad-Sommerfield (BPS) Skyrme model, proposed
recently \cite{Adam:2010fg}, consists only of the sixth-order
derivative term together with appropriate potentials. This model admits
exact solutions with compact support. By choosing the potential of the
BEC Skyrme model in this paper, we may be able to construct exact
solutions of Skyrmions with toroidal shape.
The Skyrmions with the charge $(P,Q)$ are related through the Hopf
map to $(P,Q)$ Hopfions
\cite{Kobayashi:2013bqa,Kobayashi:2013xoa}
in the Ising Faddeev-Skyrme (FS) model \cite{Nitta:2012kk},
that is, the FS model
\cite{Faddeev:1975,Faddeev:1996zj} with an Ising-type
potential term admitting two discrete vacua.
The domain wall in the BEC Skyrme model
is mapped to a domain wall
with a U(1) modulus interpolating between these two vacua
\cite{Abraham:1992vb,Kudryavtsev:1997nw,Arai:2002xa},
and a global vortex is mapped to a lump or baby Skyrmion
\cite{Piette:1994ug,Weidig:1998ii}.
This model also admits
a twisted domain-wall tube with the U(1) modulus twisted along the
cycle of the tube \cite{Kobayashi:2013ju} as a baby-Skyrmion string.
The original FS model without this potential term is known to admit
Hopfions, i.e.~solitons with Hopf charge
$\pi_3(S^2) \simeq \mathbb{Z}$
\cite{deVega:1977rk,Gladikowski:1996mb,Faddeev:1996zj,Battye:1998pe,Hietarinta:2000ci,Sutcliffe:2007ui,Radu:2008pp},
and, in particular, Hopfions with Hopf charge 7 or
higher were found to have knot structures
\cite{Battye:1998pe,Hietarinta:2000ci,Sutcliffe:2007ui}.
The $(P,Q)$ Hopfions in the Ising FS model are
not knots but toroidal domain walls
characterized by two integers $(P,Q)$,
where the U(1) modulus of the domain wall is twisted $P$ and $Q$ times
along the toroidal and poloidal cycles of the torus, respectively.
In this case, some configurations with $Q>1$ were found to be stable
\cite{Kobayashi:2013xoa},
unlike our case of Skyrmions, for which
all configurations with $Q>1$ are unstable.
This difference arises because, unlike global vortices, lumps do not repel each other.
If we consider compactifying space to $\mathbb{R}^2\times S^1$, we have
another solution in addition to the one studied here, in which the
vortex string extends along the $S^1$ direction and carries $P$ twists
of its U(1) modulus. The corresponding solution for the case of the
Hopfion was discussed in Ref.~\cite{Kobayashi:2013aza}.
Skyrmions in the conventional model on $S^2\times S^1$ were
discussed in Ref.~\cite{Canfora:2014aia}.
\section*{Acknowledgments}
We thank Michikazu Kobayashi for discussions in the early stage
of this work.
The work of M.~N.~is supported in part by Grant-in-Aid for Scientific Research
No.~25400268
and by the ``Topological Quantum Phenomena''
Grant-in-Aid for Scientific Research
on Innovative Areas (No.~25103720)
from the Ministry of Education, Culture, Sports, Science and Technology
(MEXT) of Japan.
S.~B.~G.~thanks Keio University for hospitality during which this
project took shape.
The authors thank the referee for valuable comments.
\begin{appendix}
\section{String splitting for $Q>1$\label{app:stringsplitting}}
In this section we show that, under relaxation, the $(P,Q)=(P,2)$ torus
splits into two separate $(P,Q)=(P,1)$ objects for $P=1,2$. For
concreteness we carry out the calculations in the 2+6 model
($\kappa=0$ and $c_6=1$).
In Figs.~\ref{fig:stringsplitting12} and \ref{fig:stringsplitting22} we
show the $(1,2)\to 2\times(1,1)$ and $(2,2)\to 2\times(2,1)$ string
splittings as functions of relaxation time $\tau$, respectively.
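The charge bookkeeping of these splittings is simple: a $(P,Q)$ configuration carries $B=PQ$, consistent with the baryon numbers quoted in the figure captions below, so a decay into $Q$ copies of $(P,1)$ conserves the baryon number. A trivial check:

```python
def baryon_number(P, Q):
    """Baryon charge of a (P, Q) vortex-ring configuration, B = P*Q
    (consistent with the captions: (1,2) -> B=2, (2,2) -> B=4)."""
    return P * Q

def splitting_conserves_B(P, Q):
    """Check that the decay (P, Q) -> Q copies of (P, 1) conserves B."""
    return baryon_number(P, Q) == Q * baryon_number(P, 1)

print(all(splitting_conserves_B(P, Q) for P in (1, 2) for Q in (2, 3)))  # True
```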
\begin{figure}[!tph]
\begin{center}
\mbox{
\includegraphics[width=0.24\linewidth]{T6B12s0}
\includegraphics[width=0.24\linewidth]{T6B12s1}
\includegraphics[width=0.24\linewidth]{T6B12s2}
\includegraphics[width=0.24\linewidth]{T6B12s4}}
\mbox{
\includegraphics[width=0.24\linewidth]{T6B12s6}
\includegraphics[width=0.24\linewidth]{T6B12s7}
\includegraphics[width=0.24\linewidth]{T6B12s8}
\includegraphics[width=0.24\linewidth]{T6B12s9}}
\mbox{
\includegraphics[width=0.24\linewidth]{T6B12s10}
\includegraphics[width=0.24\linewidth]{T6B12s12}
\includegraphics[width=0.24\linewidth]{T6B12s14}
\includegraphics[width=0.24\linewidth]{T6B12s20}}
\caption{Isosurfaces showing an initial configuration with
$(P,Q)=(1,2)$ ($B=2$) in the 2+6 model ($\kappa=0$, $c_6=1$ and $m=4$)
which after some finite relaxation time splits the Skyrmion into two
separate Skyrmions of charge one, i.e.~$(P,Q)=(1,1)$.
The isosurfaces show constant baryon charge density equal to half
its maximum value.
The color represents the phase of the scalar field $\phi_2$ and the
lightness is given by $|\Im(\phi_1)|$.
The calculation is carried out on an $81^3$ cubic lattice with the
relaxation method. }
\label{fig:stringsplitting12}
\end{center}
\end{figure}
\begin{figure}[!tph]
\begin{center}
\mbox{
\includegraphics[width=0.24\linewidth]{T6B22s0}
\includegraphics[width=0.24\linewidth]{T6B22s1}
\includegraphics[width=0.24\linewidth]{T6B22s22}
\includegraphics[width=0.24\linewidth]{T6B22s44}}
\mbox{
\includegraphics[width=0.24\linewidth]{T6B22s65}
\includegraphics[width=0.24\linewidth]{T6B22s87}
\includegraphics[width=0.24\linewidth]{T6B22s109}
\includegraphics[width=0.24\linewidth]{T6B22s131}}
\mbox{
\includegraphics[width=0.24\linewidth]{T6B22s133}
\includegraphics[width=0.24\linewidth]{T6B22s153}
\includegraphics[width=0.24\linewidth]{T6B22s175}
\includegraphics[width=0.24\linewidth]{T6B22s218}}
\caption{Isosurfaces showing an initial configuration with
$(P,Q)=(2,2)$ ($B=4$) in the 2+6 model ($\kappa=0$, $c_6=1$ and $m=4$)
which after some finite relaxation time splits the Skyrmion into two
separate Skyrmions of charge two, i.e.~$(P,Q)=(2,1)$.
The isosurfaces show constant baryon charge density equal to half
its maximum value.
The color represents the phase of the scalar field $\phi_2$ and the
lightness is given by $|\Im(\phi_1)|$.
The calculation is carried out on an $81^3$ cubic lattice with the
relaxation method. }
\label{fig:stringsplitting22}
\end{center}
\end{figure}
\section{Comparison of torus and Skyrmion\label{app:comparison}}
In this section we compare the case of $(P,Q)=(2,1)$, i.e.~baryon
number $2$, at $m=4$, where the Skyrmion is a torus, with the case of
$m=0$, which is just the normal $B=2$ Skyrmion and also takes the form
of a torus. We make the comparison for both the 2+4 model
and the 2+6 model.
In Figs.~\ref{fig:24comparison} and \ref{fig:26comparison} are shown
the comparison for the 2+4 and 2+6 models, respectively. For the 2+4
model, the main difference is the size (and in turn the total mass) of
the two solutions. For the 2+6 model, differences are evident both in
the baryon charge density slices (middle row) and the energy density
slices (bottom row). For the BEC Skyrmion in the 2+6 model, the torus
is more hollow than its potential-less counterpart.
\begin{figure}[!thp]
\begin{center}
\captionsetup[subfloat]{labelformat=empty}
\mbox{
\includegraphics[width=0.4\linewidth]{T4B21n}
\includegraphics[width=0.4\linewidth]{T4B2}}
\mbox{
\includegraphics[width=0.49\linewidth]{T4B21_baryonslice}
\includegraphics[width=0.49\linewidth]{T4B2_baryonslice}}
\mbox{
\subfloat[$m=4$, $B^{\rm numerical}=1.9994$]{\includegraphics[width=0.49\linewidth]{T4B21_energyslice}}
\subfloat[$m=0$, $B^{\rm numerical}=1.9715$]{\includegraphics[width=0.49\linewidth]{T4B2_energyslice}}}
\caption{Comparison between the BEC Skyrmion in the 2+4 model ($m=4$)
on the left and the normal Skyrmion ($m=0$) on the right. From top
to bottom is shown the isosurface of the baryon density at half
maximum, the baryon density at $xz$ and $xy$ slices through the
origin of the torus, and finally similar energy density slices. }
\label{fig:24comparison}
\end{center}
\end{figure}
\begin{figure}[!thp]
\begin{center}
\captionsetup[subfloat]{labelformat=empty}
\mbox{
\includegraphics[width=0.4\linewidth]{T6B21n}
\includegraphics[width=0.4\linewidth]{T6B2}}
\mbox{
\includegraphics[width=0.49\linewidth]{T6B21_baryonslice}
\includegraphics[width=0.49\linewidth]{T6B2_baryonslice}}
\mbox{
\subfloat[$m=4$, $B^{\rm numerical}=1.9998$]{\includegraphics[width=0.49\linewidth]{T6B21_energyslice}}
\subfloat[$m=0$, $B^{\rm numerical}=1.9834$]{\includegraphics[width=0.49\linewidth]{T6B2_energyslice}}}
\caption{Comparison between the BEC Skyrmion in the 2+6 model ($m=4$)
on the left and the ``normal'' Skyrmion in the 2+6 model ($m=0$) on
the right. From top to bottom is shown the isosurface of the baryon
density at half maximum, the baryon density at $xz$ and $xy$ slices
through the origin of the torus, and finally similar energy density
slices. }
\label{fig:26comparison}
\end{center}
\end{figure}
\end{appendix}
\newcommand{\J}[4]{{\sl #1} {\bf #2} (#3) #4}
\newcommand{\andJ}[3]{{\bf #1} (#2) #3}
\newcommand{\AP}{Ann.\ Phys.\ (N.Y.)}
\newcommand{\MPL}{Mod.\ Phys.\ Lett.}
\newcommand{\NP}{Nucl.\ Phys.}
\newcommand{\PL}{Phys.\ Lett.}
\newcommand{\PR}{ Phys.\ Rev.}
\newcommand{\PRL}{Phys.\ Rev.\ Lett.}
\newcommand{\PTP}{Prog.\ Theor.\ Phys.}
\newcommand{\hep}[1]{{\tt hep-th/{#1}}}
\section{Introduction}
Over the last decade, measuring the number density of galaxy clusters as a function of observable and redshift
has proven to be a potent way to determine not only the density and clustering of matter in the Universe, but
also to shed light on the yet unknown source of the late time accelerated expansion of the Universe
\citep{Koester07, vikhlinin08, mantz10, rozo10, benson13, Mantz15, bocquet15, planck16cluster_cosmo,
dehaan16, bocquet18}. To this end, ever larger samples of galaxy clusters have been selected in X-rays
\citep[][]{vikhlinin98,boehringer01,romer01,clerc14,klein19}, at millimeter wavelengths \citep{Hasselfield13,
bleem15, planck16_sze}, and in the optical \citep{Koester07, rykoff16}. Extracting accurate cosmological
constraints from these samples depends critically on the ability to determine the mapping between the
observable in which the samples have been selected, and the halo mass over the relevant range of redshifts.
This aspect is commonly referred to as \textit{mass calibration}.
Two main methods have been developed for this purpose.
The first is weak lensing (hereafter WL), the coherent distortion of the shapes of galaxies behind galaxy clusters
by the cluster gravitational potential, which has
proven to be the method of choice to calibrate masses \citep[e.g.,][]{bardeau07, okabe10, hoekstra12,
applegateetal14, israel14, Melchior15, okabesmith16, melchior17, schrabback18a, dietrich19}. Alternatively,
the dynamics of the cluster galaxies themselves has been used within recent cluster surveys to calibrate the
cluster halo masses \citep{sifon13,bocquet15,capasso19, zhang17}.
For individual clusters, these methods characteristically provide a low signal to noise mass constraint with low bias
that, importantly, can be reliably characterized using numerical structure formation simulations. For example,
the scaling between the mass observed through WL (hereafter the WL mass) and the halo
mass can be calibrated to robustly characterize the biases and scatter \citep[e.g.,][]{becker11}. With modern
hydrodynamical simulations it is now possible to include baryon physics in this calibration \citep{lee18}.
Similarly, the biases and scatter in dynamical mass estimators can be characterized using numerical
simulations \citep[e.g.,][]{evrard08,mamon13} in a manner that includes the impact of the (red) galaxy sample
selection \citep{saro13}.
A third method, hydrostatic mass estimation from X-ray observations, has played an important role in the development of
our understanding of galaxy clusters, but through simulation studies and comparison with WL
masses, these hydrostatic masses have been shown to be biased at the $\sim20$\% level or more \citep[see,
e.g.,][]{nagai07,rasia12,vonderlinden14,hoekstra15,shi15,planck16cluster_cosmo,planck_cosmo_legacy18}, although the
scale of the
bias remains a topic of ongoing research \citep{smith16,gupta17}. This hydrostatic mass bias together with
the availability of shear catalogs from deep, multiband surveys and the increasingly large wide field
spectroscopic datasets, have created a situation where the X-ray hydrostatic masses no longer offer clear
benefits within the context of large scale cluster cosmological studies.
The low signal to noise of individual cluster WL mass measurements is compensated to some degree by
the larger number of galaxy clusters that can be studied. This stems from the fact that, in addition to the
cluster observables of redshift and position, cluster weak lensing mass calibration requires the same data as
cosmic shear experiments. The advent of deep, large area photometric imaging surveys with a well controlled
point spread function correction for accurate shape measurements and high quality photometric redshifts now
enables the WL study of large samples of galaxy clusters \citep{Melchior15, murata18, miyatake18,
mcclintock19,stern19}.
It is in this context that we investigate the impact of WL mass calibration on the cluster cosmology results from the
X-ray selected sample that will be extracted from the all sky X-ray survey undertaken with the forthcoming
eROSITA\footnote{\url{http://www.mpe.mpg.de/eROSITA}} telescope \citep{Predehl10,merloni12} on board
the Russian "Spectrum-Roentgen-Gamma" satellite. Previous analyses adopting a Fisher matrix approach have explored
the constraining power of the eROSITA cluster sample on non Gaussianities \citep{pillepich12} and the dark
energy equation of state parameter \citep{pillepich18}, further underscoring the promise of
cluster number counts as a cosmological probe \citep[e.g.,][]{haiman01}.
In this work, we create a mock cluster catalog with
characteristics of the expected eROSITA catalog, and we use a prototype of the eROSITA cluster cosmology
analysis code to perform the number counts experiment. We consider the improvement in constraining
power when the eROSITA X-ray cluster catalog is calibrated with realistic WL shear profiles from the
ongoing Dark Energy Survey\footnote{\url{https://www.darkenergysurvey.org}} \citep[DES,][]{DES16}
and Hyper-Suprime-Cam Survey\footnote{\url{https://www.naoj.org/Projects/HSC/}}
\citep[HSC,][]{HSC},
and the forthcoming Euclid\footnote{\url{http://sci.esa.int/euclid/42266-summary/}} \citep{laureijs11}
and Large Synoptic Survey Telescope\footnote{\url{https://www.lsst.org/}} \citep[LSST,][]{Ivezic08} surveys.
We explore parameter sensitivities and probe for limiting degeneracies in the analysis.
Finally, we explore the synergies of combining the eROSITA cluster counts cosmological constraints with
those from existing CMB temperature anisotropy measurements \citep{planck16_cosmo} and with those from
the future DESI BAO measurements \citep{levi13}.
The paper is organized as follows: in Section~\ref{sec:setup} we discuss how we create the mock data. In
Section~\ref{sec:method} we discuss the modeling used to determine the cosmological parameters, and we
present and validate a prototype of the eROSITA cosmological analysis pipeline. In Section~\ref{sec:results},
we present the results of the impact of WL mass calibration on our knowledge of the cosmological
parameters and the observable mass relation. Various aspects of these results together with the parameter
sensitivities and important degeneracies are then discussed in
Section~\ref{sec:discussion}. We conclude this work by summarizing the main results in
Section~\ref{sec:conclusions}.
\section{Experimental setup}\label{sec:setup}
To constrain the impact of direct mass calibration through WL tangential shear measurements on
eROSITA cluster cosmology, we create an eROSITA mock cluster catalog. The actual eROSITA cluster candidate
catalog will be extracted from the eROSITA X-ray sky survey using specially designed
detection and characterization tools \citep{brunner18}.
Each candidate source will be assigned a detection significance, an extent
significance, an X-ray count rate and uncertainty, and other more physical parameters such as the flux within
various observing bands \citep{merloni12}. For a subset of this sample, precise X-ray temperatures and
rough X-ray redshifts will also be available \citep{borm14, hofmann17}.
This X-ray cluster candidate catalog will then be studied in the optical to identify one or more optical counterparts
(assigning a probability to each) and to estimate a photometric redshift.
A special purpose Multi-Component-Matched-Filter (MCMF) optical followup tool \citep{klein18}
has been designed for eROSITA cluster analysis
and has been tested on available X-ray and SZE catalogs. It has been
shown in RASS+DES analyses that one can reliably obtain both cluster and group redshifts over the
relevant ranges of redshift \citep{klein19},
and thus for the analysis undertaken here
we assume redshifts are available for all the eROSITA clusters.
The MCMF tool also allows one to quantify the
probability of chance superposition between X-ray cluster candidates and optical counterparts, using the
statistics of optical systems along random lines of sight together with estimates of the initial contamination in
the X-ray cluster candidate catalog.
Synthetic sky simulations by \citet{clerc18} have shown that the initial X-ray cluster candidate list selected on
both detection and extent significance will be contaminated at the 10\% level, consistent with experience
in X-ray selection from archival ROSAT PSPC data that have a similar angular resolution to eROSITA \citep{vikhlinin98}.
After processing with MCMF the resulting eROSITA X-ray cluster catalog is expected to have contamination at
the sub-percent level. Therefore, we do not include contamination in the mock catalogs produced for
this study.
For the WL mass calibration we will be using shear and photometric redshift catalogs from wide field, deep
extragalactic surveys, including DES and HSC in the near term and Euclid and LSST on the longer term.
The label ``Euclid'' refers to the nominal requirements for Euclid \citep{laureijs11}, although
these requirements will realistically be met when combining Euclid with LSST, where the LSST data
would be used for the photometric redshifts.
We also explore the impact of LSST WL alone, where we
adopt the requirements described in \citet{lsst_desc12} and \citet{lsst_desc18}.
CMB lensing also holds promise as a mass calibration method and is
expected to be especially helpful at the highest redshift end of our
cluster sample, but we do not model its impact in the current analysis.
\begin{figure*}
\includegraphics[width=\columnwidth]{hist_mass_z_high}
\includegraphics[width=\columnwidth]{hist_mass_z_low}
\vskip-0.15in
\caption{Distribution in halo mass $M_\text{500c}$ and cluster redshift $z$ of the mock, X-ray selected
cluster catalogs used in this analysis. \textit{Left:} Above redshift $\sim 0.7$, the 13k cluster baseline
sample is selected using the fiducial count
rate cut $\eta=2.5\times10^{-2}$~cts~s$^{-1}$ that corresponds approximately to
40 photons at the median exposure
time and a signal to noise $\xi_\mathrm{det}>7$.
Below that redshift the observable cut is pushed upward to mimic a mass exclusion at $M_\text{500c}\sim2\times 10^{14}
\text{M}_{\sun}$.
Due to intrinsic and observational scatter between halo mass and
the observable count rate, the cuts in observable used to create these samples appear
smoothed in halo mass-redshift space.
\textit{Right:} The 43k sample that includes groups is selected similarly but the count rate cut is adjusted to mimic a mass
exclusion at
$M_\text{500c}\sim5\times 10^{13} \text{M}_{\sun}$.}
\label{fig:hist_mz}
\end{figure*}
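As a back-of-the-envelope consistency check of the numbers quoted in the caption of Fig.~\ref{fig:hist_mz}: a count rate cut of $2.5\times10^{-2}$~cts~s$^{-1}$ corresponding to roughly 40 photons implies a median exposure time of about 1.6~ks (the exposure value is inferred here, not stated in the text):

```python
# Count-rate cut and approximate photon count at that cut, from the caption
eta_cut = 2.5e-2     # cts / s
n_photons = 40.0     # photons detected at the cut, at the median exposure
t_median = n_photons / eta_cut   # implied median exposure time [s]
print(t_median)  # 1600.0
```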
Our strategy in the analysis that follows is to adopt direct, cosmology independent cluster observables,
including the cluster (1) X-ray
detection significance or count rate, (2) photometric redshift, (3) WL tangential shear profile and (4)
shear source redshift distributions for use in the cosmological analysis of the cluster sample. A
benefit of using the count rate rather than
the physical flux is that uncertainties in effective area and the temperature dependence of the conversion from
count rate to physical flux do not contribute to cosmological uncertainties.
Empirically mapping these
observables to mass as a function of redshift and testing consistency of observed and theoretical cluster
distributions as a function of cosmological parameters is described in Section~\ref{sec:method}.
Below, in Section~\ref{sec:x-ray_mock}, we describe how the mock cluster catalog is generated and how the X-ray and
optical cluster properties are assigned. In Section~\ref{sec:wl_signal} we describe how we model the shear
profiles that are produced for an appropriate subset of the mock eROSITA cluster sample. We discuss briefly
our choice of fiducial cosmology and input X-ray scaling relations in Section~\ref{sec:fiducial}.
\subsection{Creating the mock cluster catalog} \label{sec:x-ray_mock}
To create the X-ray catalog, we perform the following calculations.
\begin{enumerate}
\item For our choice of input cosmology (see Table~\ref{tab:input_parmas} and Section~\ref{sec:fiducial}), we
compute the number of expected clusters as a function of halo mass $M_{500 \text{c}}$ and redshift $z$
using the halo mass function \citep{tinker08}. We then draw a Poisson realization of the number of
expected clusters, obtaining a mass selected cluster sample with $ M_{500\text{c}} > 1.3\times 10^{13}
\text{M}_{\sun} $ and $0.05<z<1.8$. For this calculation we assume a survey solid angle of $
\text{Area}_\text{DE} = 0.361 \times 4 \upi$, corresponding to regions of the western
galactic hemisphere with a galactic hydrogen column $N_{\text{H}}<10^{21}$ cm$^{-2}$ \citep{kalberla05}.
This corresponds approximately to a galactic latitude cut of $|b|>20$ deg.
We adopt the cluster true redshift as the photometric redshift, because the MCMF optical followup tool has
been demonstrated to achieve photometric redshift uncertainties with the DES dataset with an accuracy of $
\sigma_z/(z+1)\loa 0.01$ \citep{klein18,klein19} out to redshifts $z\sim1.1$. Photometric redshift uncertainties at
this level are small enough to play no role in the cosmological analysis of the eROSITA cluster counts.
\item We use the scaling between X-ray luminosity $L_{[0.5-2]\text{keV}}$ ($L_X$ hereafter) in the rest frame
$0.5-2$ keV band and halo mass
\begin{equation} \label{eq:l_mz}
\frac{L_X}{L_0}= e^{\ln A_\text{L}}\left(\frac{M_{500\text{c}}}{M_0}\right)^{B_\text{L}} \left(\frac{E(z)}{E_0}\right)^2
\left(\frac{1+z}{1+z_0}\right)^{\gamma_\text{L}} e^{\Delta_\text{L}},
\end{equation}
that was extracted from a large sample of SPT selected clusters
with pointed XMM-{\it Newton} observations \citep{bulbul19}. In this relation
$E(z)=H(z)/H_0$ encodes the expansion history of the universe and is used to calculate the impact of changes in the
critical density of the Universe ($\rho_\mathrm{crit}\propto E^2(z)$), $\ln A_\text{L}$, $B_\text{L}$ and $
\gamma_\text{L}$ are the amplitude, the mass trend and the non-self-similar redshift trend parameters
of the luminosity--mass scaling relation, and $\Delta_\text{L}\sim \mathcal{N}(0,
\sigma^2_\text{L})$ is a random number drawn from a Gaussian with standard deviation $\sigma_\text{L}$,
which models the log-normal intrinsic scatter of the relation.
The \citet{bulbul19} X-ray scaling relations are
derived from the Sunyaev-Zel'dovich effect (SZE) selected cluster sample from the SPT-SZ 2500 deg$^2$
survey \citep{carlstrom11,bleem15} that have available XMM-{\it Newton} observations. This is a sample of
59 clusters with $0.2\le z\le 1.5$ and masses $ M_{500\text{c}} > 3\times 10^{14} \text{M}_{\sun}$. These halo
masses have been
calibrated separately in a cosmological analysis \citep{dehaan16} and exhibit a characteristic
uncertainty of $\sim$20\% (statistical) and $\sim15$\% (systematic).
The scaling relation parameter uncertainties from \citet{bulbul19} include both statistical and systematic
uncertainties.
We also utilize the temperature mass relation
\begin{equation}\label{eq:t_mz}
\frac{T}{T_0}= e^{\ln A_\text{T}}\left(\frac{M_{500\text{c}}}{M_0}\right)^{B_\text{T}} \left(\frac{E(z)}
{E_0}\right)^{\frac{2}{3}} \left(\frac{1+z}{1+z_0}\right)^{\gamma_\text{T}} e^{\Delta_\text{T}},
\end{equation}
from the same analysis \citep{bulbul19}, where the parameters $(\ln A_\text{T}, B_\text{T}, \gamma_\text{T})$
have the same meaning as in the luminosity
scaling relation, with $\Delta_\text{T}\sim \mathcal{N}(0, \sigma^2_\text{T})$ for the scatter $\sigma_\text{T}$.
The only difference is the scaling with the critical density, derived from self-similar collapse theory.
Following these relations, we attribute to each cluster an X-ray luminosity $L_\text{X}$ and a temperature $T$,
randomly applying the respective intrinsic log normal scatter and assuming that the two scatters are
uncorrelated.
\item Given the cluster rest frame 0.5-2 keV luminosity $L_\text{X}$ and its redshift $z$, we compute the rest
frame 0.5-2 keV flux
\begin{equation} \label{eq:flux}
f_\text{X} = \frac{L_X}{4 \upi d_\mathrm{L}^2(z)},
\end{equation}
where $d_\mathrm{L}(z)$ is the luminosity distance.
\item For each cluster we calculate the X-ray spectrum assuming an APEC plasma emission model \citep{apec}
with temperature $T$ and metallicity $Z = 0.3$ Z$_{\sun}$\footnote{For simplicity, we do not apply any scatter
to the metallicity, and assume it is constant as a function of redshift, as recent measurements of the
metallicity of SPT selected clusters suggest \citep{McDonald16}. We assume the solar abundances model of
\citet{andersgrevesse89}}. This spectrum is normalized to the cluster rest frame 0.5-2 keV flux.
\item We compute the eROSITA count rate $\eta$ for each cluster by shifting the spectrum to the observed
frame and by averaging it with the eROSITA Ancillary Response Function (hereafter ARF) in the observed
frame 0.5-2 keV band\footnote{Of the seven eROSITA cameras, two have a 100 nm Al and 200 nm Pl filter,
while the remaining five have a 200 nm Al and 200 nm Pl filter \citep{Predehl10,merloni12}. Consequently,
the total ARF is the sum of two (100 nm Al + 200 nm Pl)-ARFs and five (200 nm Al + 200 nm Pl)-ARFs.}. For
simplicity, we do not follow the variation in neutral hydrogen column across the eROSITA-DE field.
In fact, we ignore the impact of Galactic absorption altogether in our count rate calculation, which for the median neutral
hydrogen column density in our footprint, $N_\text{H}=3\times10^{20}$~cm$^{-2}$, would lead on average to 5\%
lower rates.
\item To model the measurement uncertainty on the rate, we draw a Poisson realization of the expected
rate $\hat \eta = \eta \pm \sqrt{\eta / t_\text{exp}}$, where $t_\text{exp}=1600$~s is the expected median
exposure time of the 4 year eROSITA survey \citep{pillepich12}. With this we account for the Poisson noise in
the rate measurement. The count rate uncertainty for each cluster will be included in the real eROSITA
cluster catalogs.
\item Finally, we select our baseline cluster sample using the count rate $\eta>2.5\times10^{-2}$~ct~s$^{-1}$
(corresponding for our median exposure to $\hat n_\gamma > 40$).
For reference, given the background expectations, survey PSF and clusters modeled as $\beta$ models with
core radii that are 20\% of the virial radius $r_{500}$, this selection threshold corresponds approximately
to a cut in detection significance of $\xi_\text{det}>7$,
irrespective of the cluster redshift. Simple mock observations (see discussion in Appendix~\ref{app:selection}) indicate
that at this threshold and above the extent likelihood for the eROSITA sample is
$\xi_\text{ext}>2.5$, enabling an initial eROSITA cluster
candidate list after X-ray selection (but prior to optical followup) that is contaminated at the $\sim$10\% level.
At low redshift ($z<0.7$), we raise the detection threshold above the nominal level in such a way
as to exclude most clusters with masses $M_\text{500c}\lessapprox 2\times 10^{14} \text{M}_{\sun}$ at each redshift.
We create a second sample to examine the impact of
lower mass clusters and groups (see Section~\ref{sec:low_mass}) by adjusting the
low redshift count rate cut so that systems with masses
$M_\text{500c}\lessapprox 5\times 10^{13} \text{M}_{\sun}$ are excluded at each redshift.
We discuss the X-ray selection in more detail in
Appendix~\ref{app:selection}.
The reasons for excluding lower mass systems are discussed below (cf.
Section~\ref{sec:low_mass}).
\end{enumerate}
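As an illustrative sketch of the chain above (not the production pipeline), the following condenses the steps from halo mass and redshift to luminosity, flux and noisy count rate. The parameter values are taken from Table~\ref{tab:input_parmas}; the luminosity-relation pivots $M_0$, $z_0$ and $E_0$ are assumptions borrowed from the rate relation of Section~\ref{sec:scalingrelation}, since the \citet{bulbul19} pivots are not quoted here, and a flat universe is assumed for the distance integral:

```python
import math
import random

# Input cosmology from Table 1; the luminosity-relation pivots below are
# placeholders (the actual Bulbul et al. pivot values are not quoted here).
H0, OMEGA_M, C_KMS = 73.02, 0.306, 299792.458
LNA_L, B_L, GAMMA_L, SIGMA_L = 1.52, 1.95, -0.20, 0.237
M0, Z0, E0 = 2.0e14, 0.5, 1.314          # assumed pivots

def E(z):
    """Dimensionless Hubble rate E(z) = H(z)/H0 for a flat LCDM universe."""
    return math.sqrt(OMEGA_M * (1.0 + z) ** 3 + 1.0 - OMEGA_M)

def lum_distance_mpc(z, n=2048):
    """d_L = (1+z) (c/H0) * int_0^z dz'/E(z'), midpoint rule, flat universe."""
    dz = z / n
    chi = C_KMS / H0 * sum(1.0 / E((i + 0.5) * dz) for i in range(n)) * dz
    return (1.0 + z) * chi

def luminosity(m500c, z, rng):
    """L_X/L_0 from the luminosity--mass relation with log-normal scatter."""
    mean = (math.exp(LNA_L) * (m500c / M0) ** B_L
            * (E(z) / E0) ** 2 * ((1.0 + z) / (1.0 + Z0)) ** GAMMA_L)
    return mean * math.exp(rng.gauss(0.0, SIGMA_L))

def flux(lx, z):
    """Rest frame 0.5-2 keV flux, f_X = L_X / (4 pi d_L^2)."""
    return lx / (4.0 * math.pi * lum_distance_mpc(z) ** 2)

def observed_rate(eta, t_exp=1600.0, rng=random):
    """Noisy rate: Gaussian approximation with sigma = sqrt(eta / t_exp)."""
    return eta + rng.gauss(0.0, math.sqrt(eta / t_exp))
```

At the pivot redshift $z_0=0.5$ this simple integration reproduces the quoted $d_{\mathrm{L},0}=2710$~Mpc to better than one per cent.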
The procedure described above provides us with a baseline cosmology catalog of $\sim 13$k clusters.
Their distribution in halo
mass\footnote{We use this binning in mass just to visualize our sample, the number counts analysis will be
performed on a fixed grid of observed rate $\hat \eta$ and redshift, as specified in
Section~\ref{sec:number_counts}. The corresponding mass grid depends on the cosmological and the scaling
relation parameters, and is thus recomputed every time the likelihood function is called on a specific set of
parameters.} and redshift is shown in the left panel of Fig.~\ref{fig:hist_mz}. They span a redshift range $z\in
(0.05, 1.6)$. The total number of clusters and their redshift range are mainly impacted by the choice of the
input cosmology, the observed luminosity mass relation, and the choice of cut in eROSITA count rate for selection. The
sample
has a median redshift $\bar z=0.51$ and median halo mass of $\bar M_{500\text{c}} =2.5\times 10^{14}
\text{M}_{\sun}$. This sample extends to high redshift with 3\% of the sample, corresponding to 420 clusters, at $z>1$.
The sample of 43k objects with the count rate cut that only excludes lower mass systems
with $M_\text{500c}\le5\times 10^{13} \text{M}_{\sun}$ is shown
in Fig.~\ref{fig:hist_mz} (right). The bulk of the additional low mass systems in this sample appear at redshifts $z\le0.7$.
As with the overall number of clusters, the median mass and redshift depend on the observable cut used to
exclude low mass objects, with these being $\bar z=0.30$, and $\bar M_{500\text{c}} =1.4\times 10^{14}
\text{M}_{\sun}$.
We discuss the implications of lowering the mass limit in Section~\ref{sec:low_mass}.
The number of objects in this $\xi_\mathrm{det}>7$ group dominated sample is in good agreement with the
numbers presented in previous discussions of the eROSITA cluster sample \citep{merloni12,pillepich12,pillepich18}.
Importantly, there are significantly more eROSITA clusters that can be detected if one reduces the detection
threshold below $\sim7\sigma$. But at that level there will be little extent information for
each X-ray source, and so the candidate sample
will be highly contaminated by AGN. Interestingly, \citet{klein18} have demonstrated that for the
RASS faint source catalog where the survey PSF was so poor that little extent information is available,
it is possible to filter out the non-cluster sources to produce low contamination cluster catalogs. The price for this
filtering is that one introduces incompleteness for those systems that contain few galaxies
\citep[i.e., low mass clusters and groups at each redshift; see][]{klein19}.
\subsection{Forecasting the WL signal}\label{sec:wl_signal}
We adopt the cosmology independent tangential reduced shear profile $\hat g_\text{t}(\theta_i)$
in radial bins $\theta_i$ around the cluster as the observable for cluster WL mass calibration.
A crucial complementary observable is the redshift distribution of the source galaxies $N(z_\text{s},
z_\text{cl})$ behind the galaxy cluster, where $z_\text{s}$ is the source redshift, and $z_\text{cl}$ the cluster
redshift. Assuming that the galaxy cluster mass profile is consistent with a Navarro-Frenk-White model \citep[][hereafter NFW]{NFW}, these two observables can be combined into a measurement of the halo mass.
Although, in theory, WL mass calibration provides a direct mass measurement, in practice we refer to the mass
resulting from an NFW fit to the shear profile as the WL mass $M_\text{WL}$. Following \citet{becker11}, the
WL mass is related to the halo mass by
\begin{equation}\label{eq:mwl}
M_\text{WL} = b_\text{WL} M_\text{200c} e^{\Delta_\text{WL}},
\end{equation}
with $\Delta_\text{WL}\sim\mathcal{N}(0, \sigma_\text{WL}^2)$, where $\sigma_\text{WL}$ is the intrinsic log-
normal scatter between WL mass and halo mass, induced by the morphological deviation of observed galaxy cluster
mass profiles from the NFW profile, and $b_\text{WL}$ is the WL mass bias describing the characteristic bias
in the WL mass compared to the halo mass. This bias encodes
several theoretical and observational systematics, as discussed below in Section~\ref{sec:WL_syst}.
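Drawing a WL mass from equation~(\ref{eq:mwl}) can be sketched as follows, using the $b_\text{WL}$ and $\sigma_\text{WL}$ values of Table~\ref{tab:input_parmas}; over many draws the mean and standard deviation of $\ln(M_\text{WL}/M_\text{200c})$ recover $\ln b_\text{WL}$ and $\sigma_\text{WL}$:

```python
import math
import random

B_WL, SIGMA_WL = 0.94, 0.24   # WL mass bias and intrinsic scatter (Table 1)

def wl_mass(m200c, rng):
    """Draw M_WL = b_WL * M_200c * exp(Delta_WL), Delta_WL ~ N(0, sigma_WL^2)."""
    return B_WL * m200c * math.exp(rng.gauss(0.0, SIGMA_WL))
```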
Given that DES, HSC, Euclid and LSST will not overlap completely with the German eROSITA sky, only a fraction
$f_\text{WL}$ of the galaxy clusters of our X-ray mock catalog will have WL information available. Comparing
the survey footprints, we estimate $f_\text{WL}=0.3$ for DES, $f_\text{WL}=0.05$ for HSC, $f_\text{WL}=0.5$ for
Euclid,
and $f_\text{WL}=0.62$ for LSST. For the LSST case we also assume that the northern celestial hemisphere
portion of the German eROSITA sky with $0^\circ<\delta<30^\circ$ will be observed.
For this northern extension of LSST, we adopt $f_\text{WL}=0.2$
and treat it as if it has the equivalent of DES depth.
Therefore, we assign a WL mass only to a corresponding fraction of the eROSITA clusters in our mock catalogs,
by drawing from equation~(\ref{eq:mwl}).
Besides the WL mass and the cluster redshift, the background source distribution of the survey $N(z_\text{s})
$ in redshift and the background source density $n_\epsilon$ are necessary to predict the WL signal. For
DES, we project $n_\epsilon=10 \, \text{arcmin}^{-2}$ and utilize the redshift distribution presented in
\citet{stern19}, whose median redshift is $z_\text{s,m}=0.74$. These parameters are derived from the Science Verification
Data and their extrapolation to Y5
data will depend on the details of the future calibration (Gruen, priv. comm.).
For HSC we assume $n_\epsilon=21 \, \text{arcmin}^{-2}$, and for the redshift distribution of HSC sources
we adopt the parametrization by \citet{smail94} with a median redshift $z_\text{s,m}=1.1$.
For Euclid, we use
$n_\epsilon=30 \, \text{arcmin}^{-2}$ \citep{laureijs11}. For the source redshift distribution we assume the
parametric form proposed by \citet{smail94} and utilized by \citet{Giannantonio14}, adopting a median redshift
of $z_\text{s,m}=0.9$ \citep{laureijs11}.
For LSST we assume $n_\epsilon=40 \, \text{arcmin}^{-2}$ and parametrise the source redshift distribution as
$p(z_\text{s})=1/(2 z_0) (z_\text{s}/z_0)^2 \exp(-z_\text{s}/z_0)$ with median redshift $z_\text{s,m} = 2.67 z_0 = 0.8$\footnote{These specifications are taken from \url{https://www.lsst.org/sites/default/files/docs/sciencebook/SB_3.pdf}, Section 3.7.2.}.
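As a quick sanity check of this parametrization (a sketch only, not part of the analysis): $p(z_\text{s})$ is a Gamma density of shape 3 in $z_\text{s}/z_0$, and numerically inverting its CDF recovers the quoted median of $2.67\,z_0$:

```python
import math

def lsst_pz(z, z0):
    """p(z_s) = (1/(2 z0)) (z_s/z0)^2 exp(-z_s/z0); a Gamma(shape=3) density."""
    return (z / z0) ** 2 * math.exp(-z / z0) / (2.0 * z0)

def pz_median(z0, zmax_factor=30.0, n=300000):
    """Numerically invert the CDF of lsst_pz to find its median."""
    zmax = zmax_factor * z0
    dz = zmax / n
    cdf = 0.0
    for i in range(n):
        z_mid = (i + 0.5) * dz
        cdf += lsst_pz(z_mid, z0) * dz
        if cdf >= 0.5:
            return z_mid
    raise ValueError("median not reached; increase zmax_factor")
```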
\begin{figure*}
\includegraphics[width=0.49\textwidth]{WL_two_profiles}
\includegraphics[width=0.49\textwidth]{WL_s2n}
\vskip-0.1in
\caption{\textit{Left:} Example of a shear profile in DES (orange), Euclid (blue) and LSST (green) data quality for
a cluster.
We show both the measured shear profile (dots with error bars) and the prediction (line). For all
data quality cases, the measurement uncertainty is larger than the actual signal.
\textit{Right:} Distribution of WL signal to noise for DES+HSC (orange), Euclid (blue) and LSST (green), computed for
each single cluster from the measured shear profile and the covariance matrix. While Euclid and LSST provide
both more objects and higher signal to noise, objects with a clear WL mass measurement (e.g., S/N$>5$) are rare
for all datasets.}
\label{fig:wl}
\end{figure*}
The actual redshift distribution behind a galaxy cluster is assumed to be the survey redshift distribution with the cut
$N(z_\text{s}<z_\text{cl}+0.1) = 0$, where $z_\text{cl}$ is the cluster redshift. This cut is helpful in reducing the
contamination of the background source galaxies by cluster galaxies (that are not distorted by the cluster potential).
This cut also leads to a reduction
of the source density $n_\epsilon(z_\text{s}>z_\text{cl}+0.1)$ used to infer the observational noise on the
cluster shear signal.
Given a redshift distribution, the mean reduced shear signal can be estimated, following \citet{seitzschneider97},
as
\begin{equation}\label{eq:gt}
g_\text{t} (\theta_i) = \frac{\gamma (\theta_i) }{1-\kappa (\theta_i)} \left(1+\kappa (\theta_i) \frac{\langle \beta^2
\rangle}{\langle \beta \rangle ^2} \right),
\end{equation}
where $\gamma (\theta_i)$ and $\kappa (\theta_i)$ are the shear and the convergence of an NFW mass
profile, $\theta_i$ the angular bins corresponding to radii between $0.25$ and $5.0$~Mpc at the cluster redshift
in our fiducial cosmology.
This has the effect that low redshift clusters have larger angular bins than high redshift clusters, in order to probe similar physical scales.
Also note that the inner radius, which we probe ($0.25$~Mpc), is smaller than in some previous studies
\citep[$0.75$~Mpc in][]{applegateetal14, stern19, dietrich19}.
While this will require a more precise treatment of systematic effects such as cluster member contamination,
miscentering and the impact of intra-cluster light on the shape and redshift measurements,
theoretical predictions for the resulting WL mass bias and WL mass scatter associated with these smaller inner radii have
already been presented \citep{lee18}. Furthermore, \citet{gruen18} investigated the impact of intra-cluster light on the
photometric redshift measurement of background galaxies.
We therefore assume that ongoing and future studies will demonstrate the possibility of exploiting shear information at
smaller cluster radii, thereby increasing the amount of extracted mass information.
Following \citet{bartelmann96}, the shear and the convergence can be computed analytically for any halo, given
the mass, the concentration, and the source galaxy redshift distribution $N(z_\text{s},z_\text{cl})$.
Throughout this
work, the concentration of any cluster will be derived from its halo mass, following the relation presented by
\citet{duffy08}. The scatter in concentration at fixed halo mass is a contributor to the bias $b_\text{WL}$ and scatter $
\sigma_\text{WL}$ in the WL mass to halo mass relation (equation~\ref{eq:mwl}).
The lensing efficiency $\beta=d_\mathrm{A}(z_\text{cl},z_\text{s}) / d_\mathrm{A}
(z_\text{s})$ is the ratio between the angular diameter distance $d_\mathrm{A}(z_\text{cl},z_\text{s})$ from
the cluster to the source, and the angular diameter distance $d_\mathrm{A}(z_\text{s})$ from the observer to
the source. In equation~(\ref{eq:gt}) the symbol $\langle \cdot \rangle$ denotes averaging over the source redshift
distribution
$N(z_\text{s},z_\text{cl})$.
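A sketch of the lensing efficiency and of the averages entering equation~(\ref{eq:gt}), assuming a flat universe so that $d_\mathrm{A}(z_1,z_2)\propto\chi(z_2)-\chi(z_1)$; the source distribution passed to \texttt{beta\_moments} is a placeholder, not one of the survey distributions above:

```python
import math

H0, OMEGA_M, C_KMS = 73.02, 0.306, 299792.458

def E(z):
    """Dimensionless Hubble rate for a flat LCDM universe."""
    return math.sqrt(OMEGA_M * (1.0 + z) ** 3 + 1.0 - OMEGA_M)

def chi(z, n=512):
    """Comoving distance c/H0 * int_0^z dz'/E(z') in Mpc, midpoint rule."""
    dz = z / n
    return C_KMS / H0 * sum(1.0 / E((i + 0.5) * dz) for i in range(n)) * dz

def beta(z_cl, z_s):
    """Lensing efficiency d_A(z_cl,z_s)/d_A(z_s) = (chi_s - chi_cl)/chi_s."""
    if z_s <= z_cl:
        return 0.0
    chi_s = chi(z_s)
    return (chi_s - chi(z_cl)) / chi_s

def beta_moments(z_cl, pz, zmax=6.0, n=400):
    """<beta> and <beta^2> over N(z_s, z_cl), i.e. pz cut at z_cl + 0.1."""
    zcut = z_cl + 0.1
    dz = (zmax - zcut) / n
    w = b1 = b2 = 0.0
    for i in range(n):
        z = zcut + (i + 0.5) * dz
        p, b = pz(z), beta(z_cl, z)
        w += p
        b1 += p * b
        b2 += p * b * b
    return b1 / w, b2 / w
```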
The covariance of the measurement
uncertainty on the reduced shear is
\begin{equation}
\mtrx{C}_{i,j} = \text{Cov} [g_\text{t} (\theta_i) , g_\text{t} (\theta_j) ] = \frac{\sigma^2_\epsilon}{\Omega_i
n_\epsilon(z_\text{cl})}\delta_{i,j} + (\mtrx{C}_\text{uLSS})_{i,j}
\end{equation}
where $\delta_{i,j}=1$ if $i=j$ and $\delta_{i,j}=0$ otherwise. The first term accounts for the shape noise in each radial bin,
estimated by scaling the intrinsic shape noise of the source galaxies $
\sigma_\epsilon=0.27$ by the number of source galaxies in each radial bin, taking into
account the reduction of source galaxy density $n_\epsilon(z_\text{cl})=n_\epsilon(z_\text{s}>z_\text{cl}+0.1)$ and the
angular area of the $i$-th radial bin $\Omega_i$. We also add a contribution
coming from uncorrelated large scale structure $(\mtrx{C}_\text{uLSS})_{i,j}$ \citep{hoekstra03}. We draw the
measured reduced shear profile $\hat g_\text{t}$ from the Gaussian multivariate distribution with mean
$g_\text{t}$ and covariance $\mtrx{C}$.
For each cluster with WL information, we thus save the source redshift distribution $N(z_\text{s},z_\text{cl})$, the
measured reduced shear profile $\hat g_\text{t}$, and the covariance $C$. We show an example for a
measured reduced shear profile, both in DES, in Euclid and in LSST data quality in the left panel of
Fig.~\ref{fig:wl}.
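The shape-noise part of this covariance and the Gaussian draw of the measured profile can be sketched as follows (the uncorrelated large scale structure term is omitted for brevity, so the covariance is diagonal and the multivariate draw reduces to independent Gaussians):

```python
import math
import random

SIGMA_EPS = 0.27  # intrinsic shape noise per ellipticity component

def shape_noise_var(omega_bins, n_eps):
    """Diagonal shape-noise variance per radial bin: sigma_eps^2/(Omega_i n_eps).
    omega_bins: bin areas in arcmin^2; n_eps: source density in arcmin^-2."""
    return [SIGMA_EPS ** 2 / (om * n_eps) for om in omega_bins]

def draw_shear_profile(g_t, var, rng):
    """Gaussian realization of the measured reduced shear profile.
    Valid for a diagonal covariance; the uLSS term is neglected here."""
    return [g + rng.gauss(0.0, math.sqrt(v)) for g, v in zip(g_t, var)]
```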
The WL signal around individual galaxy clusters derived from wide and deep photometric surveys is typically low
signal to noise. In the right panel of Fig.~\ref{fig:wl}, we explore the distribution of WL signal to noise for the
subsamples with DES+HSC WL data, Euclid WL data and LSST WL data. To this end we define the signal to noise as $
\text{S}/\text{N}=\sqrt {0.5\,\hat g_\text{t}^T\mtrx{C}^{-1}g_\text{t}}$. While the Euclid and LSST data provide a
higher signal to noise on average, it rarely exceeds $\text{S}/\text{N}>5$. Thus, we confirm that WL mass
calibration provides a low signal to noise, direct mass measurement for a large subset of our cluster catalog.
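For a diagonal covariance the signal to noise defined above reduces to a weighted sum; this sketch clips negative arguments, which can occur for very noisy measured profiles:

```python
import math

def signal_to_noise(g_hat, g_model, var):
    """S/N = sqrt(0.5 * ghat^T C^-1 g) for a diagonal covariance C = diag(var).
    Negative arguments (possible for noise-dominated profiles) are clipped to 0."""
    q = 0.5 * sum(gh * g / v for gh, g, v in zip(g_hat, g_model, var))
    return math.sqrt(max(q, 0.0))
```

Doubling every variance, e.g. by halving the source density, lowers the signal to noise by $\sqrt{2}$, as expected.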
\begin{table}
\centering
\caption{Input parameters for our analysis. The exact definition of the parameters listed below is given in
Section~\ref{sec:sampling}, Section~\ref{sec:scalingrelation} and Section~\ref{sec:wl_signal} for the
cosmological parameters, the scaling relation parameters and the WL calibration parameters, respectively.
\newline \textit{Comments: a)}~This value is determined to match $\sigma_8=0.768$ by \citet{dehaan16}.
~\textit{b)}~We utilize here the value corresponding to the minimal model of a Cosmological Constant causing
the accelerated expansion. \textit{c)}~This is the minimal value allowed by flavor neutrino oscillations, as
reviewed by \citet{tanabashi18}. }
\label{tab:input_parmas}
\begin{tabular}{lcc}
\hline
\multicolumn{3}{l}{Cosmological Parameters}\\
\hline
$H_\text{0}$ & 73.02 & \citet{riess16}\\
$\omega_\text{b}$ & 0.02202 & \citet{cooke14}\\
$\Omega_\text{M}$ & 0.306 & \citet{dehaan16}\\
$A_\text{S}$ & $1.5792\times10^{-9}$ & a) \\
$n_\text{S}$ & 0.9655 & \citet{planck16_cosmo} \\
$w$ & -1.00 & b) \\
$\sum m_\nu$ & 0.06 eV & c) \\
$\Omega_\text{K}$ & 0. & \\
\hline
\multicolumn{3}{l}{Luminosity--Mass--Redshift Relation}\\
\hline
$\ln A_\text{L}$ & 1.52 & \citet{bulbul19} \\
$B_\text{L}$ & 1.95 & \\
$\gamma_\text{L}$ & -0.20 & \\
$\sigma_{L}$ & 0.237 & \\
\hline
\multicolumn{3}{l}{Temperature--Mass--Redshift Relation}\\
\hline
$\ln A_\text{T}$ & 1.83 & \citet{bulbul19} \\
$B_\text{T}$ & 0.849 & \\
$\gamma_\text{T}$ & -0.28 & \\
$\sigma_{T}$ & 0.177 & \\
\hline
\multicolumn{3}{l}{WL Mass Bias and Scatter}\\
\hline
$b_\text{WL}$ & 0.94 & \citet{dietrich19} \& \\
$\sigma_\text{WL}$ & 0.24 & \citet{lee18}\\
\hline
\end{tabular}
\end{table}
\subsection{Fiducial cosmology and scaling relations}
\label{sec:fiducial}
Several steps in the above outlined creation of the mock data are cosmology sensitive. Therefore, the choice of
input cosmology will impact the catalog properties. As an input cosmology, we choose the best fitting
$\Omega_\text{M}$ and $\sigma_8$ results
from the most recent SPT galaxy cluster cosmology analysis \citep{dehaan16}.
We also assumed that dark energy can be described by a cosmological constant, i.e. that the dark energy equation of state
parameter $w=-1$. Furthermore, we adopt the minimal neutrino mass allowed by flavor neutrino oscillation measurements, $
\sum m_\nu=0.06$ eV \citep{tanabashi18}.
The parameter values are listed
in Table~\ref{tab:input_parmas}.
It is worth noting here that these input values for $\Omega_\text{M}$ and $\sigma_8$ are somewhat different (at less than 2$
\sigma$ significance)
from the best fit values derived from the Planck CMB anisotropy measurements \citep{planck16_cosmo}. This choice is
intentional, as the masses of SPT clusters derived from a mass function fit with Planck CMB priors have been
shown to be systematically high by studies of their WL signal \citep{dietrich19, stern19}, their
dynamical mass \citep{capasso19} and their baryon content \citep{chiu18}. Furthermore, the input X-ray
scaling relations by \citet{bulbul19}, adopted to determine the X-ray properties of our catalog entries, assume
an SZE signature--mass--redshift scaling relation consistent with the best fit results from the SPT galaxy
cluster cosmology analysis. In summary, the input values for our analysis are chosen from the latest results of the
SPT galaxy cluster sample, guaranteeing consistency between the assumed cosmology and the input X-ray scaling
relations that we use to construct the mock eROSITA sample.
Given that SPT covers a mass range of $M_{500\text{c}}\goa 3 \times 10^{14} M_{\sun}
$, and a redshift range of $z\in (0.20, 1.7)$, adopting SPT results within the eROSITA context implies only a
modest extrapolation in mass and redshift.
On the other hand, the minimal neutrino mass is slightly inconsistent with recent results from joint fits to number counts of
SPT selected clusters and Planck CMB measurements \citep{dehaan16, bocquet18},
which detect the neutrino mass at the $2$--$3\sigma$ level.
This detection is likely sourced by the slight inconsistency in the $(\Omega_\text{M}, \sigma_8$) plane discussed above.
For the sake of this work, we adopt the minimal neutrino mass to predict the improvement in the upper limits that would be obtained if cluster number counts and CMB measurements were in perfect agreement.
\section{Cosmology analysis method}
\label{sec:method}
In this section we describe the method we have developed for the cosmological analysis of an eROSITA cluster
sample in the presence of WL mass calibration information. This method builds upon a method developed
and used for the analysis of the SPT SZE selected cluster sample \citep{bocquet15,dietrich19,stern19,bocquet18}. We
start with a description of the minimal scaling relation to describe the mapping of the selection observable to
halo mass as a function of redshift (Section~\ref{sec:scalingrelation}), present the likelihoods in
Section~\ref{sec:likelihoods} and discuss the likelihood sampling tool and our adopted priors in
Sections~\ref{sec:sampling} and \ref{sec:priors}.
\subsection{Cluster selection scaling relation}\label{sec:scalingrelation}
The cosmological analysis of a galaxy cluster sample requires a model for the relation between the halo mass and
the observable. In this work, we take an approach which is conceptually similar to the modeling of the scaling
relation used for the SPT galaxy cluster sample first presented and applied to derive cosmological constraints
by \citet{vanderlinde10} (for further applications, see for instance \citet{benson13, bocquet15, dehaan16,bocquet18}).
We empirically calibrate a scaling relation between the selection observable, i.e. the eROSITA count rate $\eta$,
and the halo mass and redshift. As motivated in Appendix~\ref{app:scaling_relations}, we adopt the following
scaling of the count rate with mass and redshift:
\begin{equation}\label{eq:eta_mz}
\begin{split}
\frac{\eta}{\eta_0} =&e^{\ln{A_\text{X}}}\left(\frac{M_{500\text{c}}}{M_0}\right)^{B(z)} \left(\frac{E(z)}{E_0}\right)^{2}
\left(\frac{d_\text{L}(z)}{d_{\text{L},0}}\right)^{-2}\\
& \left(\frac{1+z}{1+z_0}\right)^{\gamma_\text{X}}
e^{\Delta_\text{X}},
\end{split}
\end{equation}
where the amplitude is $A_\text{X}$, the redshift dependent mass slope is given by
\begin{equation}
B(z) = B_\text{X} + B_\text{X}^\prime \ln \left(\frac{1+z}{1+z_0}\right),
\label{eq:masstrend}
\end{equation}
the redshift trend describing departures from self-similar evolution is $\gamma_\text{X}$,
and the deviation of a particular cluster from the mean scaling relation is described as
$\Delta_\text{X}\sim\mathcal{N}(0, \sigma_\text{X}^2)$, with scatter $\sigma_\text{X}$
(i.e., log-normal scatter in observable at fixed halo mass). As pivot points we choose $M_0=2\times10^{14}\text{ M}_{\sun}$,
$z_0=0.5$, $E_0=1.314$, $d_\mathrm{L,0}=2710$ Mpc, and $\eta_0=0.06$ cts s$^{-1}$.
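Equations~(\ref{eq:eta_mz}) and (\ref{eq:masstrend}) can be written compactly as follows; by construction the function returns $\eta_0$ at the pivot point when $\ln A_\text{X}=0$:

```python
import math

# Pivot points quoted in the text
M0, Z0, E0, DL0, ETA0 = 2.0e14, 0.5, 1.314, 2710.0, 0.06

def mean_rate(m500c, z, e_z, d_l, ln_a, b_x, bprime_x, gamma_x):
    """Mean eROSITA count rate from the rate--mass--redshift relation,
    with the redshift-dependent mass slope B(z) = B_X + B'_X ln((1+z)/(1+z0))."""
    b = b_x + bprime_x * math.log((1.0 + z) / (1.0 + Z0))
    return ETA0 * (math.exp(ln_a) * (m500c / M0) ** b
                   * (e_z / E0) ** 2 * (d_l / DL0) ** -2
                   * ((1.0 + z) / (1.0 + Z0)) ** gamma_x)
```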
Empirical calibration of the scaling relation has some major advantages compared to trying to measure accurate
physical cluster quantities such as the flux. In doing the latter, one might suffer biases \citep[e.g.
the effect of substructures in the context of eROSITA found by][]{hofmann17} or additional sources of scatter from lack of
knowledge about the cluster physical state.
Furthermore, any such biases might themselves have trends with mass or redshift. An alternative approach, which has been
adopted with success within SPT, is to use mass calibration to empirically determine
the values of the scaling relation parameters. In this approach, an unbiased solution is found assuming the correct likelihood
is
adopted (see Section~\ref{sec:likelihoods}) and that the form of the observable mass scaling relation that is adopted has
sufficient flexibility to describe the cluster population. One can examine this using goodness of fit tests
\citep[see][]{bocquet15,dehaan16}.
There is now considerable evidence in the literature that empirical calibration leads to a more robust cosmological
experiment.
In summary, our model for the rate mass scaling assumes that the rate is a power law in mass and redshift with
log-normal intrinsic scatter that is independent of mass and redshift.
Our model allows the mass slope to vary with redshift, which is required given the redshift dependence of the eROSITA
counts to physical flux conversion (see discussion in Appendix~\ref{app:scaling_relations}).
Natural extensions of this model to, e.g., follow mass or redshift dependent
scatter are possible, but for the analysis presented here we adopt a scaling relation with the following five free
parameters: $(\ln A_\text{X}, B_\text{X}, \gamma_\text{X}, \sigma_\text{X}, B_\text{X}^\prime)$.
\subsection{Likelihood functions}
\label{sec:likelihoods}
The likelihood functions we employ to analyze our mock eROSITA and WL data are hierarchical Bayesian models, introduced in this form
by \citet{bocquet15}. The functions account self-consistently for (1) the Eddington and Malmquist bias, (2) the cosmological
dependencies of both the direct mass measurements and of the cluster number counts, and (3) systematic uncertainties in
the halo mass of objects observed with a particular rate and redshift. Given that we
utilize a realistic mock catalog, these likelihoods constitute a prototype of the eROSITA cosmological analysis
pipeline. Using this scheme, we design three likelihoods: (1) mass calibration with perfect masses, (2) mass
calibration with WL observables and (3) number counts. In the following, to ensure a concise notation, we will
refer to the halo mass $M_\text{500c}$ as $M$, and specify when we mean a mass defined w.r.t. any other
overdensity.
\subsubsection{Mass calibration with perfect masses}
The likelihood that a cluster of measured rate $\hat \eta$ and redshift $z$ has a given mass $M$ is given by
\begin{equation} \label{eq:p_m_eta_z}
P(M|\hat\eta, z) \propto \int \text{d}\eta \,P(\hat\eta|\eta,z)\,P(\eta| M, z) \frac{\text{d}N}{\text{d}M}(M,z) ,
\end{equation}
where
\begin{enumerate}
\item $P(\hat\eta|\eta,z)$ is the probability density function (hereafter pdf) encoding the measurement error on the rate,
\item $P(\eta| M, z)$ is the pdf describing the scaling relation between rate
and halo mass at a given redshift. We model it as a log-normal distribution with central value given by
equation~(\ref{eq:eta_mz}) with scatter $\sigma_\text{X}$,
\item $\frac{\text{d}N}{\text{d}M}(M,z)$ is the derivative of the number of clusters with respect to mass at that redshift, which is the product of the halo mass function $\frac{\text{d}n}{\text{d}M}(M,z)$ by \citet{tinker08}, the comoving volume element $\frac{\text{d}V}{\text{d}z}(z)$ and the survey solid angle $\Omega_\text{DE}$.
\end{enumerate}
These quantities, with the exception of the rate measurement uncertainty kernel,
depend on scaling relation parameters,
mass function parameters and cosmological parameters. Also note that equation~(\ref{eq:p_m_eta_z}) needs to be
properly normalized to be a pdf in halo mass $M$.
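The structure of equation~(\ref{eq:p_m_eta_z}) can be illustrated with toy kernels: a Gaussian rate-measurement error, a log-normal rate--mass relation, and an externally supplied (here toy) mass function. Note how a steeply falling $\frac{\text{d}N}{\text{d}M}$ pulls the posterior towards lower masses, which is the Eddington bias the likelihood accounts for:

```python
import math

def lognormal_pdf(eta, eta_mean, sigma):
    """P(eta | M, z): log-normal about the scaling-relation mean."""
    x = (math.log(eta) - math.log(eta_mean)) / sigma
    return math.exp(-0.5 * x * x) / (eta * sigma * math.sqrt(2.0 * math.pi))

def posterior_mass(eta_hat, sig_meas, masses, rate_of_m, dn_dm, sigma_x, n_eta=256):
    """Normalized P(M | eta_hat, z) on a mass grid, marginalizing the true rate.
    rate_of_m and dn_dm are toy stand-ins for the scaling relation and dN/dM."""
    lo = max(1e-6, eta_hat - 6.0 * sig_meas)
    hi = eta_hat + 6.0 * sig_meas
    de = (hi - lo) / n_eta
    post = []
    for m, dn in zip(masses, dn_dm):
        s = 0.0
        for i in range(n_eta):
            eta = lo + (i + 0.5) * de
            meas = math.exp(-0.5 * ((eta_hat - eta) / sig_meas) ** 2)  # rate error kernel
            s += meas * lognormal_pdf(eta, rate_of_m(m), sigma_x) * de
        post.append(s * dn)
    norm = sum(post)
    return [p / norm for p in post]
```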
The total log-likelihood for mass calibration with perfect masses is then given by the sum of the natural logarithms
of the likelihoods of the single clusters
\begin{equation}\label{eq:pfct_mss_lkl}
\ln\mathcal{L_\text{pfct}} = \sum_{j} \ln P(M^{(j)}|\hat\eta^{(j)}, z^{(j)}),
\end{equation}
where $j$ runs over all clusters whose halo mass is known. Note that the perfect mass is only accessible in the
case of a mock catalogue. This likelihood is thus not applicable to real data. Nevertheless, it is a function
of the scaling relation and the cosmological parameters and can be used to extract the true
underlying scaling relation from a mock dataset.
\subsubsection{WL mass calibration}
The likelihood that a cluster with measured rate $\hat \eta$ and redshift $z$ has an observed tangential shear
profile $\hat g_\text{t}(\theta_i)$ can be computed as
\begin{equation}\label{eq:P_gt_eta_z}
P(\hat g_\text{t}|\hat\eta, z) = \int \text{d} M_\text{WL} \, P(\hat g_\text{t}(\theta_i) |M_\text{WL}, z_\text{cl})
P(M_\text{WL}|\hat\eta, z),
\end{equation}
where
\begin{enumerate}
\item the probability of a cluster with measured rate $\hat \eta$ and redshift $z$ to have a WL mass
$M_\text{WL}$ is
\begin{equation}\label{eq:P_mwl_eta_z}
\begin{split}
P(M_\text{WL}|\hat\eta, z) \propto & \int \text{d} M \int \text{d} \eta \, P(\hat\eta|\eta,z) P(M_\text{WL}, \eta| M, z) \\
& \frac{\text{d}N}{\text{d}M}(M,z),
\end{split}
\end{equation}
with $P(M_\text{WL}, \eta| M, z)$ being the joint pdf describing the scaling relations for the rate and the WL mass,
given in equations~(\ref{eq:eta_mz} and \ref{eq:mwl}), respectively,
\item the probability of a cluster of WL mass $M_\text{WL}$ having an observed reduced shear profile $\hat
g_{\text{t},i} = \hat g_\text{t}(\theta_i)$ is given by a Gaussian likelihood
\begin{equation}
\ln P(\hat g_\text{t} | M_\text{WL}, z) = -\frac{1}{2}\ln \big(2 \upi \det \mtrx{C} \big) - \frac{1}{2} \Delta \hat g_\text{t}
^T \mtrx{C}^{-1} \Delta \hat g_\text{t},
\end{equation}
with $\Delta \hat g_\text{t} = \hat g_\text{t} - g_\text{t}$, where $g_\text{t}$ is the tangential shear profile computed
following equation~(\ref{eq:gt}) for a cluster of mass $M_\text{WL}$ and the redshift distribution
$N(z_\text{s},z_\text{cl}=z)$.
\end{enumerate}
The total log-likelihood for mass calibration with WL then reads
\begin{equation}
\ln\mathcal{L_\text{WL mssclbr}} = \sum_{j} \ln P(\hat g_\text{t}^{(j)}|\hat\eta^{(j)}, z^{(j)}),
\end{equation}
where $j$ runs over all clusters with WL information.
\subsubsection{Number counts}\label{sec:number_counts}
We also model the observed number of clusters $\hat N$ in bins of measured rate $\hat \eta$ and redshift $z$.
We predict this number by computing the expected number of clusters in each bin, given the scaling relation,
halo mass function
and cosmological parameters
\begin{equation}\label{eq:N_eta_z}
\begin{split}
N(\hat \eta, z) = & P(\text{det}| \hat\eta,z)\\
&\int \text{d} M \int \text{d} \eta \, P(\hat\eta|\eta,z) P(\eta| M, z) \frac{\text{d}N}{\text{d}M}(M,z),
\end{split}
\end{equation}
where $P(\text{det}| \hat\eta,z)$ is a binary function parameterizing whether the bin falls within the selection criteria or
not. Assuming a pure rate selection might be a simplification compared to the actual cluster selection function
of the forthcoming eROSITA survey \citep[for a study of this selection function, cf.][]{clerc18}. In summary, the
expected number of clusters in observable space can be computed using the cosmology dependent
halo mass function, volume--redshift relation and observable--mass relation.
The number counts likelihood for the entire sample is the sum of the Poisson log-likelihoods in the individual
bins
\begin{equation}
\ln\mathcal{L_\text{nmbr cts}} = \sum_\text{bins} \hat N \ln N - N.
\end{equation}
As above, this likelihood is a function of the scaling relation, halo mass function and the cosmological parameters.
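The Poisson log-likelihood above, with the parameter-independent $\ln\hat N!$ term dropped, is a one-liner; it is maximized when the predicted counts match the observed counts bin by bin:

```python
import math

def poisson_lnlike(n_obs, n_pred):
    """Sum over bins of N_obs * ln(N_pred) - N_pred; the ln(N_obs!) constant
    is dropped because it does not depend on the model parameters."""
    return sum(nh * math.log(n) - n for nh, n in zip(n_obs, n_pred))
```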
\subsubsection{Validation}\label{sec:validation}
To validate these likelihoods, we create a mock that is ten times larger than the eROSITA mock (by considering
the unphysical survey footprint $\text{Area}_\text{test} = 10\, \text{Area}_\text{DE}$). This leads to a reduction of
the statistical uncertainties that enables us to better constrain systematic biases. We analyze this mock with
the number counts and the Euclid WL mass calibration likelihood. We find that all parameters are
consistent with the input values within less then two sigma.
Scaling this up to the
normal sized mock, we conclude that our code is unbiased at or below $\sim{2\over3}$ sigma. We present
for inspection a plot showing the results of the validation run as
Fig.~\ref{fig:validation} at the end of the paper. The plot shows the marginal contours of the
posterior distributions for the parameters with the input values
marked.
Given that our mock catalog is a random realization of the stochastic processes modeled by the above described
likelihoods, and that these likelihoods retrieve the input values even for a ten times larger mock, we take the
liberty of shifting the best-fit parameter values of the posterior samples presented in the following sections. These
shifts are of the order of one sigma. Centering all posteriors on the same value allows us to highlight the
improvement in constraining power, visible in the shrinking of the contours.
\subsection{Comments on sampling and model choice}
\label{sec:sampling}
Various combinations of the above described likelihood functions are sampled using \texttt{pymultinest}
\citep{pymultinest}, a python wrapper of the nested sampling code \texttt{multinest} \citep{multinest}. Nested
sampling was originally developed to compute the evidence, or marginal likelihood, but has the added
advantage of providing a converged posterior sample in the process \citep{nestedsampling}.
The parameters we sample depend on the specific application. In all cases considered, we sample the parameters
of the X-ray selection scaling relation: $(\ln A_\text{X}, B_\text{X}, \gamma_\text{X}, \sigma_\text{X},
B_\text{X}^\prime)$. When the WL mass calibration likelihood is sampled in Section~\ref{sec:cosmo_constr},
also the parameters governing the WL mass scaling relation are sampled: $(b_\text{WL}, \sigma_\text{WL})$.
We explore two different flat cosmological models: (1) $\nu$-$\Lambda$CDM, and (2) $\nu$-$w$CDM.
For both, we consider the following parameters: $H_\text{0}$, the current
expansion rate of the Universe in units of km s$^{-1}$ Mpc$^{-1}$; $\omega_b$, the current day co-moving
density of baryons w.r.t. the critical density of the Universe; $\Omega_M$, the current day density of matter
w.r.t. the critical density; $A_\text{S}$, the amplitude of primordial curvature fluctuations; $n_\text{S}$, the
spectral index of primordial curvature fluctuations; and the sum of neutrino masses $\sum m_\nu$ in eV.
The cosmological model where only these parameters are allowed to vary is called $\nu$-$\Lambda$CDM, because
we allow for massive neutrinos of yet unknown mass, and assume that the
agent of the late time accelerated expansion is a cosmological constant $\Lambda$.
As a more complex model $\nu$-$w$CDM, we also consider the case that the late time acceleration is not caused by the
cosmological constant, but by an as yet unknown form of energy, usually referred to as dark energy. The
properties of dark energy are described here by a single equation of state parameter $w$.
For better comparison with other Large Scale Structure experiments, in both models we also compute
$\sigma_8$, the root mean square of linear matter fluctuations in a spherical region of 8~$h^{-1}$Mpc radius,
as a derived quantity at each step of the chain, and present the posterior distribution of this quantity rather
than of the primordial power spectrum fluctuation amplitude $A_\text{S}$.
\subsection{Choice of priors}
\label{sec:priors}
\begin{table}
\centering
\caption{Priors used in our analysis. $\mathcal{U}(a, b)$ is a uniform flat prior in the interval $(a,b)$, $\ln
\mathcal{U}(a, b)$ a uniform flat prior in log space, $\mathcal{N}(\mu, \sigma^2)$ refers to a Gaussian
distribution with mean $\mu$ and variance $\sigma^2$, $\mathcal{N}_{>a}(\mu, \sigma^2)$ to a Gaussian
distribution truncated for values smaller than $a$.\newline \textit{Comment: a)} Numerical stability when
computing the equations~(\ref{eq:p_m_eta_z}, \ref{eq:P_gt_eta_z}, \ref{eq:P_mwl_eta_z} and
\ref{eq:N_eta_z}), requires the scatter to be larger than the sampling size of the numerical integrals. }
\label{tab:priors}
\begin{tabular}{lll}
\hline
\multicolumn{3}{l}{Cosmology for Number counts w/o CMB}\\
\hline
$H_\text{0}$ & $\mathcal{U}(40, 120)$ & cf. Section~\ref{sec:cosmo_priors}\\
$\omega_\text{b}$ & $\mathcal{U}(0.020, 0.024)$ & \\
$\Omega_\text{M}$ & $\mathcal{U}(0.1, 0.5)$ & \\
$A_\text{S}$ & $\ln \mathcal{U}(0.6\times10^{-9}, 2.5\times10^{-9})$ & \\
$n_\text{S}$ & $\mathcal{U}(0.94, 1.0)$ & \\
$\sum m_\nu\,[\text{eV}]$ & $\mathcal{U}(0., 1.)$ & \\
$w$ & $\mathcal{U}(-1.6, -0.6)$ & \\
\hline
\multicolumn{3}{l}{Cosmology for Number counts w/ CMB}\\
\hline
& cf. Section~\ref{sec:cosmo_priors} & \\
\hline
\multicolumn{3}{l}{X-ray Selection Scaling Relation}\\
\hline
$\ln A_\text{X}$ & $\mathcal{N}(-0.33, 0.23^2)$ & cf. Appendix~\ref{app:scaling_relations} \\
$B_\text{X}$ & $\mathcal{N}(2.00, 0.17^2)$ & \\
$\gamma_\text{X}$ & $\mathcal{N}(0.45, 0.42^2)$ & \\
$\sigma_\text{X}$ & $\mathcal{N}_{>0.1}(0.28, 0.11^2)$ & a) \\
$B_\text{X}^\prime$ & $\mathcal{N}(0.36, 0.78^2)$ & \\
\hline
\multicolumn{3}{l}{DES/HSC WL}\\
\hline
$b_\text{WL}$ & $\mathcal{N}(0.94, 0.051^2)$ & cf. Section~\ref{sec:WL_syst}\\
$\sigma_\text{WL}$ & $\mathcal{N}_{>0.1}(0.24, 0.02^2)$ & a) \\
\hline
\multicolumn{3}{l}{Euclid WL}\\
\hline
$b_\text{WL}$ & $\mathcal{N}(0.94, 0.013^2)$ & cf. Section~\ref{sec:WL_syst}\\
$\sigma_\text{WL}$ & $\mathcal{N}_{>0.1}(0.24, 0.008^2)$ & a) \\
\hline
\multicolumn{3}{l}{LSST WL}\\
\hline
$b_\text{WL}$ & $\mathcal{N}(0.94, 0.015^2)$ & cf. Section~\ref{sec:WL_syst}\\
$\sigma_\text{WL}$ & $\mathcal{N}_{>0.1}(0.24, 0.008^2)$ & a) \\
\hline
\end{tabular}
\end{table}
In general, any Bayesian analysis, and more specifically \texttt{pymultinest}, requires the specification of priors for
all parameters one intends to sample. In the following, we present our choice of priors. If the parameter is
not mentioned below, it has a uniform prior in a range that is larger than the typical posterior uncertainties of
that parameter. The prior choices are summarized in Table~\ref{tab:priors}.
\subsubsection{Current priors on scaling relation}\label{sec:prior_SR}
As mentioned above, and discussed in detail in Appendix~\ref{app:scaling_relations}, the eROSITA count rate
scaling relation is described by five parameters: $(\ln A_\text{X}, B_\text{X}, \gamma_\text{X}, \sigma_\text{X},
B_\text{X}^\prime)$. We put Gaussian priors on these parameters. The mean values are obtained in
Section~\ref{sec:fid_SR_params} by determining the maximum likelihood points of the mass calibration
likelihood when using perfect masses. The corresponding uncertainties in the priors are taken to match the
uncertainties on the respective parameters presented in Table 5 of \citet{bulbul19} for the core included
0.5-2.0~keV luminosity-mass-redshift relation when fit with the scaling relation of Form II. These parameter
uncertainties were extracted using a sample of 59 SPT selected galaxy clusters observed with XMM-{\it
Newton} together with the SPT SZE-based halo masses calculated using the calibration from \citet[][see
Table 3 results column 2]{dehaan16}.
When we extract cosmological constraints {\bf only} with these priors (i.e., without any WL information),
we consider that a ``baseline'' result, representing the currently achievable knowledge of the parameters of the
eROSITA rate--mass relation.
\subsubsection{Priors on WL calibration}\label{sec:WL_syst}
The priors on the parameters of the WL mass--halo mass relation reflect the understanding of both the
observational and theoretical systematics of the WL mass calibration. In this work, we consider the following
sources of systematic uncertainty:
\begin{enumerate}
\item the accuracy of the shape measurement in the optical survey parameterized as the uncertainty on the
multiplicative shear bias $\delta m$,
\item the systematic mis-estimation of the lensing efficiency $\langle \beta \rangle$ due to the bias in the
photometric redshift estimation $b_{\hat z}$,
\item the uncertainty in the estimation of the contamination by cluster members $f_\text{cl}$ which results from the
statistical uncertainty of the photometric redshifts $\sigma_{\hat z}$ and the background galaxy selection,
\item the statistical uncertainties $\delta b_\text{WL, sim}$ and $\delta \sigma_\text{WL, sim}$ with which the
theoretical bias and scatter of the WL mass, respectively, can be constrained with large structure formation
simulations.
\end{enumerate}
The first three effects do not directly induce a bias in the mass estimation, but affect the NFW fitting procedure. To
estimate their impact on the WL mass estimate, we consider a shear profile for a WL mass of $3\times10^{14}$
M$_{\sun}$ at $z=0.4$, add the systematic shifts, and fit for the mass again. The difference between input and
output masses is then taken as the WL mass systematic uncertainty induced by these effects. This technique
provides an overall estimate of the systematic uncertainty level, while ignoring potential dependences on
cluster redshift and mass.
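The refitting technique can be illustrated with a deliberately oversimplified profile: below, a singular-isothermal-sphere-like model $g_\text{t}\propto M^{2/3}/\theta$ stands in for the full NFW fit (all numbers and the profile shape are illustrative, not those used in this work), so that a multiplicative shear bias $1+\delta m$ propagates analytically to a mass bias of $(1+\delta m)^{3/2}$:

```python
import numpy as np

theta = np.linspace(1.0, 10.0, 20)       # radial bins [arcmin], illustrative

def g_model(M, theta):
    """Toy SIS-like tangential shear profile: g_t ~ (M / M_piv)^(2/3) / theta."""
    return (M / 3e14) ** (2.0 / 3.0) / theta

def fit_mass(g_obs, theta):
    """Least-squares fit of the single profile amplitude, then invert M ~ A^(3/2)."""
    A = np.sum(g_obs / theta) / np.sum(1.0 / theta ** 2)
    return 3e14 * A ** 1.5

M_in = 3e14
delta_m = 0.013                           # DES-like multiplicative shear bias
g_biased = (1.0 + delta_m) * g_model(M_in, theta)   # apply the systematic shift
M_out = fit_mass(g_biased, theta)
print(M_out / M_in)                       # = (1 + delta_m)^1.5, i.e. ~2% mass bias
```

A 1.3\% multiplicative shear bias thus maps to a $\sim$2\% mass bias in this toy; the actual NFW refit, including the lensing efficiency and contamination shifts, yields the larger total quoted below.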
For DES, we assume $\delta m=0.013$ \citep{zuntz18}. The bias on the photometric redshift estimation of the
source galaxies is $b_{\hat z}=0.02$ \citep{cooke14} which, considering the source redshift distribution of
DES (cf. Section~\ref{sec:wl_signal}), leads to an uncertainty on the lensing efficiency $\delta \langle \beta
\rangle = 0.02$. For the uncertainty on the contamination, we project $\delta f_\text{cl} = 0.01$ based on
\citet{dietrich19}. Taken together, these uncertainties propagate to a WL mass uncertainty of $\delta
b_\text{WL, obs, DES}=0.045$.
The current uncertainty on the theoretical WL mass bias is $\delta b_\text{WL,
sim, today}=0.05$ \citep{dietrich19}, when considering the effects of halo triaxiality, morphological variety,
uncertainties in the mass-concentration relation and mis-centering. Due to larger available simulations \citep{lee18}, a
better measurement of the mis-centering distribution and an improvement in the understanding of the
mass--concentration relation, for DES we project a reduction of this uncertainty by a factor of 2, yielding $\delta
b_\text{WL, sim, DES}=0.025$. The same scaling is applied to the uncertainty on the scatter, yielding $\delta
\sigma_\text{WL, DES} = 0.02$.
Given the level of observational uncertainty, this projection can also be read
as a necessity to improve the understanding of the theoretical biases. The estimates above provide a
total uncertainty of the bias of the WL mass
\begin{equation}
\begin{split}
\delta b_\text{WL, DES} &= \sqrt{ \delta b_\text{WL, sim, DES}^2 + \delta b_\text{WL, obs, DES}^2}\\
&=0.051,
\end{split}
\end{equation}
and an uncertainty on the scatter of the WL mass $\delta \sigma_\text{WL, DES} = 0.02$. This amounts to a $5.1\%$
mass uncertainty from systematic effects, which is a conservative assumption, given that
\citet{mcclintock19} already achieved such a level of systematics control for DES cluster mass calibration. For the sake of
simplicity, we assume that the final level of systematics in HSC is the same as in DES. This assumption will be inadequate
for the actual analysis of the data; we postpone the discussion of the differences between the analysis methods to the
respective future works.
The specifications for Euclid are given in \citet{laureijs11}. The requirement for the shape measurement is $
\delta m = 0.001$. For the bias on the photometric redshift estimation, the requirement is $b_{\hat z} = 0.002$, which
translates into $\delta \langle \beta \rangle = 0.0014$. For the projection of the uncertainty on the
contamination, we assume that in the case of DES it receives equal contributions from (1) the number of clusters used
to characterize it and (2) the photometric redshift uncertainty. Thus, for Euclid we estimate
\begin{equation}
\begin{split}
\delta f_\text{cl, Eu}^2 &= \frac{\delta f_\text{cl, DES}^2}{2} \frac{N_\text{DES}}{N_\text{Eu}} + \frac{\delta f_\text{cl,
DES}^2}{2} \left(\frac{ \sigma_{\hat z\text{, Eu}}}{\sigma_{\hat z\text{, DES}}}\right)^2 \\
&= 0.0065^2,
\end{split}
\end{equation}
where $N_\text{DES}\approx 3.8$k, and $N_\text{Eu}\approx 6.4$k, are the number of clusters with DES and
Euclid shear information in our catalog (cf. Section~\ref{sec:wl_signal}), $\sigma_{\hat z\text{, Eu}}=0.06$ is
the photometric redshift uncertainty for Euclid \citep{laureijs11}, and $\sigma_{\hat z\text{, DES}}=0.1$ is the
photometric redshift uncertainty for DES \citep{sanchez14}.
Taking all of the above-mentioned values together, we find $\delta b_\text{WL, obs, Eu}=0.0085$ for Euclid. To match
this improvement in data quality, we project an improvement in the understanding of the theoretical biases by
a factor of 5, providing $\delta b_\text{WL, sim, Eu}=0.01$, and $\delta \sigma_\text{WL, Eu} = 0.008$.
Thus, the total uncertainty on the WL mass bias for Euclid is
\begin{equation}
\delta b_\text{WL, Eu} = 0.013.
\end{equation}
The specifications for LSST systematics are summarized in \citet{lsst_desc18}. The requirement for the shape
measurement is $\delta m = 0.003$, while the requirement for the bias on the photometric redshift estimation is
$b_{\hat z} = 0.001$, leading to $\delta \langle \beta \rangle = 0.0007$. Using $N_\text{LSST}\approx 11$k, and $\sigma_{\hat z\text{, LSST}}=0.02$, we find
an uncertainty on the cluster member contamination of $\delta f_\text{cl, LSST}=0.0044$. Combining all the above-mentioned
values, we get $\delta b_\text{WL, obs, LSST}=0.011$. We project the same understanding of theoretical
systematics for LSST as for Euclid. Thus, the total uncertainty on the WL mass bias for LSST is
\begin{equation}
\delta b_\text{WL, LSST} = 0.015.
\end{equation}
These values are adopted throughout this work as priors for the WL mass scaling relation parameters, as
summarized in Table~\ref{tab:priors}. We note that the effort required to theoretically constrain
the WL bias and scatter parameters with this accuracy is considerable.
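As a quick arithmetic check, the total WL mass bias uncertainties quoted above for the three surveys follow from combining the observational and simulation-based components in quadrature:

```python
import numpy as np

def total_bias_uncertainty(db_obs, db_sim):
    """Combine observational and theoretical WL mass bias uncertainties in quadrature."""
    return np.hypot(db_obs, db_sim)

# (survey, delta b_WL,obs, delta b_WL,sim) as quoted in the text
for survey, db_obs, db_sim in [("DES",    0.045,  0.025),
                               ("Euclid", 0.0085, 0.010),
                               ("LSST",   0.011,  0.010)]:
    print(survey, round(float(total_bias_uncertainty(db_obs, db_sim)), 3))
# DES 0.051
# Euclid 0.013
# LSST 0.015
```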
\begin{figure*}
\includegraphics[width=\textwidth]{nmbrcnts_cosmo_w_relev}
\vskip-0.1in
\caption{Predicted constraints on the scaling relation and cosmological parameters in $w$CDM. In red the
constraints from the number counts alone (eROSITA+Baseline), in orange the constraints from number counts
and DES+HSC WL calibration (eROSITA+DES+HSC), in green number counts and Euclid WL calibration
(eROSITA+Euclid), and in blue number counts and LSTT WL calibration
(eROSITA+LSST). The median values, all statistically consistent with the input values, are shifted to the input
values to better highlight the increase in constraining power.}
\label{fig:cosmo_constr}
\end{figure*}
\subsubsection{Cosmological priors}\label{sec:cosmo_priors}
When sampling the number counts likelihood, we assume flat priors on all cosmological parameters except for
$A_\text{S}$, for which we use a flat prior in log-space, as is good practice for strictly positive amplitudes.
Similarly, we use priors on $\Omega_\text{M}$, $H_\text{0}$ and $w$ that are larger than the typical
uncertainties on these parameters. For $\sum m_\nu$ we only explore the regime up to 1 eV, as current
cosmological measurements, such as \citet{planck16_cosmo} give upper limits on the summed neutrino mass
around and below that value.
For $\omega_b$ and $n_\text{S}$ we use tight flat priors around the measured values of these parameters by the
CMB experiments \citep{planck16_cosmo} and Big Bang Nucleosynthesis constraints derived from deuterium
abundances \citep{cooke14}. We confirm that cluster number counts are not sensitive to these parameters
within these tight ranges \citep{bocquet18}. It is thus not necessary to use informative priors on these parameters, as
previous
studies have done \citep[see for instance][]{bocquet15, dehaan16}.
In Section~\ref{sec:syn_CMB} we will consider the synergies between eROSITA number counts and WL mass
calibration, and CMB temperature and polarization anisotropy measurements, which to date provide us with a
significant amount of information about the cosmological parameters. In the models of interest, where either
$w$ or $\sum m_\nu$ are free parameters, the CMB constraints from the Planck mission
\citep{planck16_cosmo} display large degeneracies between the parameters we choose to sample.\footnote{These
degeneracies are partially due to our choice of sampling parameters. The CMB does not directly constrain
$H_\text{0}$, which is a present-day quantity. Consequently, $\Omega_\text{M}$ is also only weakly constrained.
The same holds for $w$, which predominantly impacts the late-time expansion rate. In contrast, co-moving
densities like $\omega_\text{b}$, or primordial quantities like $A_\text{S}$ and $n_\text{S}$, are
narrowed down with high precision.} For this reason, we cannot approximate the CMB posterior as a
Gaussian distribution. To capture the non-Gaussian feature, we calibrate a nearest-neighbor kernel density
estimator (KDE) on the publicly available\footnote{\url{https://pla.esac.esa.int/pla/\#cosmology}, where we
utilized the \texttt{TTTEE\_lowTEB} samples.} posterior sample. We utilize Gaussian kernels and, for each
model, we tune the bandwidth through cross calibration to provide maximum likelihood of the KDE on a test
subsample. As discussed in Section~\ref{sec:fiducial}, our choice of input cosmology is slightly inconsistent
with the CMB constraints. As we are only interested in the reduction of the uncertainties when combining
CMB and eROSITA, we shift the CMB posteriors so that they are consistent with our input values to within one sigma.
The resulting estimator reproduces the parameter uncertainties and the degeneracies
accurately.
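The bandwidth tuning described above can be sketched with a numpy-only Gaussian-kernel KDE, where a toy two-dimensional sample stands in for the Planck posterior chain and the bandwidth is chosen to maximize the held-out log-likelihood (all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy 2D "posterior sample" standing in for the Planck chain (illustrative).
chain = rng.multivariate_normal([0.3, 0.8], [[0.02, 0.015], [0.015, 0.02]], size=4000)
train, test = chain[:3000], chain[3000:]

def kde_logpdf(x, sample, h):
    """Log-density of a Gaussian-kernel KDE with common bandwidth h per dimension."""
    d = x[:, None, :] - sample[None, :, :]        # (n_x, n_sample, dim)
    log_k = -0.5 * np.sum((d / h) ** 2, axis=-1)  # kernel exponents
    log_norm = np.log(len(sample)) + sample.shape[1] * np.log(h * np.sqrt(2.0 * np.pi))
    m = log_k.max(axis=1)                         # log-sum-exp for numerical stability
    return m + np.log(np.exp(log_k - m[:, None]).sum(axis=1)) - log_norm

# Cross-calibration: choose the bandwidth maximizing the likelihood of the
# KDE evaluated on a held-out test subsample.
bandwidths = np.logspace(-2, 0, 15)
scores = [kde_logpdf(test, train, h).sum() for h in bandwidths]
h_best = bandwidths[int(np.argmax(scores))]
print(h_best)   # an intermediate bandwidth, neither over- nor under-smoothed
```

Too small a bandwidth over-fits the training points (poor held-out likelihood), too large a bandwidth washes out the degeneracy directions; the cross-calibrated optimum sits in between.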
\section{Results}\label{sec:results}
In the following subsections we first calculate how accurately the observable--mass scaling relation parameters
must be constrained to enable the best possible cosmological constraints from the sample
(Section~\ref{sec:optimal}). Thereafter we explore the impact of the WL mass calibration on the cosmological
constraints that can be extracted from an analysis of the eROSITA galaxy cluster counts
(Section~\ref{sec:eROSITA+WL}). In Section~\ref{sec:eROSITA+CMB} we explore synergies of the
eROSITA dataset with the CMB and in Section~\ref{sec:eROSITA+BAO} we examine the impact of combining
the eROSITA dataset with BAO measurements from DESI. In Section~\ref{sec:combined} we examine
the constraints derived when combining with both these external data sets, and the final subsection focuses on
the impact of an eROSITA sample where the minimum mass is allowed to fall from our baseline value of
$M_{500\text{c}}\goa2\times 10^{14} \text{M}_{\sun}$ to $M_{500\text{c}}\goa5\times 10^{13} \text{M}_{\sun}$,
corresponding to a sample that is $\sim3.5$ times larger.
\subsection{Optimal mass calibration}\label{sec:min_mssclbr}
\label{sec:optimal}
The number counts likelihood depends both on the scaling relation parameters and, through the
mass function, the cosmological
volume and their changes with redshift, also on the cosmological parameters. Furthermore, there are significant
degeneracies between the mass scale of the cluster sample (i.e., the parameters of the observable mass relation) and the
cosmological parameters, as demonstrated already in the earliest studies \citep{haiman01}. A full self-calibration of the
number counts (i.e., including no direct mass measurement information) that allows full cosmological and scaling relation
freedom, results in only very weak cosmological constraints \citep[e.g.,][]{majumdar03, majumdar04}.
Thus, before forecasting the cosmological constraints from the eROSITA sample, we estimate how accurate
the mass calibration needs
to be so that the information contained in the number counts primarily reduces the uncertainties on the
cosmological parameters rather than on the observable--mass scaling relation parameters.
To estimate this required level of mass calibration, which we refer to as ``optimal mass calibration'', we quantify
how much the number counts constrain the scaling relation parameters when the cosmological parameters are fixed to
fiducial values. In such a case, all the information contained in the number counts likelihood informs our
posterior on the scaling relation parameters. If this level of information, or more, were provided by direct mass calibration,
then the number counts information would predominantly constrain the cosmology. In this sense, the optimal mass
calibration then provides a threshold or goal for the amount and precision of external mass calibration we should strive for
in our direct mass calibration through, e.g., weak lensing.
We find that in fact the number counts alone do not contain enough information to meaningfully constrain all five scaling
relation parameters even in the presence of full cosmological information. Our scaling relation parametrization includes two
additional parameters beyond those explored in \citet{majumdar03}, the scatter $\sigma_X$ and the redshift evolution of the
mass trend $B_\text{X}^\prime$. Thus, as a next test, we examine
the constraints from number counts with fixed cosmology while assuming priors only on $B_\text{X}^\prime$.
Interestingly, in this case we find that the constraints lead to an upper limit on the scatter of the scaling relation
$\sigma_\text{X}<0.44$ (at 95\%), which is weaker than our current knowledge of that parameter, inferred from the
scatter in the X-ray luminosity--mass relation from \citet[][see discussions in Section~\ref{sec:priors} and
Appendix~\ref{app:scaling_relations}]{bulbul19}.
We therefore adopt this external prior
on the scatter parameter and allow full freedom for all other parameters (including $B_\text{X}^\prime$).
Results in this case are more interesting, providing constraints that we adopt as our estimate of optimal mass calibration.
The uncertainties are $\delta \ln A_\text{X} = 0.042$, $\delta B_\text{X} = 0.024$, $\delta \gamma_\text{X}
= 0.053$, and $\delta B_\text{X}^\prime = 0.116$.
We take this to mean that an optimal cosmological exploitation of the eROSITA cluster number counts will require that we
know the parameters of the observable mass relation to at least these levels of precision.
We will discuss in the following how this can be accomplished.
\subsection{Forecasts: eROSITA+WL}\label{sec:cosmo_constr}
\label{sec:eROSITA+WL}
\begin{table*}
\centering
\caption{Forecast parameter constraints for eROSITA number counts with current, best available calibration
(eROSITA+Baseline), with DES+HSC WL calibration (eROSITA+DES+HSC), with Euclid WL calibration
(eROSITA+Euclid), and with LSST WL calibration
(eROSITA+LSST) are presented in two different models, $\nu$-$w$CDM and $\nu$-$\Lambda$CDM, within four different
scenarios. From top to bottom these are: eROSITA+WL alone, in combination with Planck CMB constraints
(Pl15), in combination with DESI BAO and Alcock-Paczynski test constraints, and in combination with both
these external data sets. Also shown are the scaling
relation parameter uncertainties for an optimal mass calibration. In addition to the five cosmological parameters whose
constraints are presented, each model includes the parameters $n_\mathrm{S}$ and $\omega_\mathrm{b}$, marginalized
over with weak priors (see Table~\ref{tab:priors}).
The units of the columns ``$\sum m_\nu$'' and ``$H_\text{0}$'' are eV and km s$^{-1}$ Mpc$^{-1}$, respectively.
\textit{Comments: a)} This parameter is not constrained within the prior ranges. When
reporting upper limits ``<'', we refer to the 95th percentile, while lower limits ``>'' refer to the 5th percentile.
When a parameter is kept fixed in that model, we use ``--''.}
\label{tab:baseline_constraints}
\begin{tabular}{llcccccccccc}
\hline
& & $\Omega_\text{M}$ & $\sigma_8$ & $w$ & $\sum m_\nu$ & $H_\text{0}$ & $\ln A_\text{X}$ &
$B_\text{X}$ & $\gamma_\text{X}$ & $\sigma_\text{X}$ & $B_\text{X}^\prime$\\
\hline
\hline
\multicolumn{3}{l}{optimal mass calibration\hfil} & & & & & 0.042 & 0.024 & 0.053 & & 0.116 \\
\hline
\hline
\multispan{12}{\hfil eROSITA + WL calibration\hfil}\\
\hline
$\nu$-$w$CDM & priors & & & & & & 0.23 & 0.17 & 0.42 & 0.11 & 0.78 \\
& eROSITA+Baseline & 0.032 & 0.052 & 0.101 & $^\text{a)}$ & 10.72 & 0.165 & 0.073 & 0.209 & 0.083 & 0.128 \\
& eROSITA+DES+HSC & 0.023 & 0.017 & 0.085 & $^\text{a)}$ & 6.449 & 0.099 & 0.053 & 0.121 & 0.062 & 0.111\\
& eROSITA+Euclid & 0.016 & 0.012 & 0.074 & $^\text{a)}$ & 5.210 & 0.059 & 0.037 & 0.090 & 0.034 & 0.107 \\
& eROSITA+LSST & 0.014 & 0.010 & 0.071 & $^\text{a)}$ & 4.918 & 0.058 & 0.031 & 0.089 & 0.030 & 0.107 \\
\hline
$\nu$-$\Lambda$CDM & priors & & & -- & & & 0.23 & 0.17 & 0.42 & 0.11 & 0.78 \\
& eROSITA+Baseline & 0.026 & 0.033 & -- & $^\text{a)}$ & 10.18 & 0.157 & 0.069 & 0.192 & 0.078 & 0.110 \\
& eROSITA+DES+HSC & 0.016 & 0.014 & -- & $^\text{a)}$ & 5.664 & 0.091 & 0.049 & 0.103 & 0.059 & 0.104\\
& eROSITA+Euclid & 0.011 & 0.007 & -- & $^\text{a)}$ & 4.691 & 0.040 & 0.035 & 0.065 & 0.033 & 0.104 \\
& eROSITA+LSST & 0.009 & 0.007 & -- & $^\text{a)}$ & 4.691 & 0.039 & 0.032 & 0.058 & 0.029 & 0.104 \\
\hline
\hline
\multispan{12}{\hfil eROSITA + WL calibration + Pl15 (TTTEE\_lowTEB)\hfil}\\
\hline
$\nu$-$w$CDM & priors (incl. CMB) & <0.393 & 0.063 & 0.242 & <0.667 & >62.25 & 0.23 & 0.17 & 0.42 &
0.11 & 0.78 \\
& eROSITA+Baseline & 0.019 & 0.032 & 0.087 & <0.590 & 2.857 & 0.165 & 0.026 & 0.132 & 0.083 & 0.121 \\
& eROSITA+DES+HSC & 0.018 & 0.019 & 0.085 & <0.554 & 2.206 & 0.099 & 0.024 & 0.118 & 0.062 & 0.107\\
& eROSITA+Euclid & 0.014 & 0.010 & 0.074 & <0.392 & 1.789 & 0.059 & 0.020 & 0.090 & 0.034 & 0.107 \\
& eROSITA+LSST & 0.013 & 0.009 & 0.069 & <0.383 & 1.662 & 0.058 & 0.018 & 0.080 & 0.030 & 0.103 \\
\hline
$\nu$-$\Lambda$CDM & priors (incl. CMB) & 0.024 & 0.035 & -- & <0.514 & 1.723 & 0.23 & 0.17 & 0.42 &
0.11 & 0.78 \\
& eROSITA+Baseline & 0.016 & 0.018 & -- & <0.425 & 1.192 & 0.122 & 0.025 & 0.101 & 0.077 & 0.110 \\
& eROSITA+DES+HSC & 0.013 & 0.015 & -- & <0.401 & 1.067 & 0.086 & 0.023 & 0.098 & 0.060 & 0.104\\
& eROSITA+Euclid & 0.011 & 0.007 & -- & <0.291 & 0.978 & 0.039 & 0.020 & 0.065 & 0.033 & 0.103 \\
& eROSITA+LSST & 0.009 & 0.007 & -- & <0.285 & 0.767 & 0.038 & 0.020 & 0.054 & 0.030 & 0.103 \\
\hline
\hline
\multispan{12}{\hfil eROSITA + WL calibration + DESI (BAO)\hfil}\\
\hline
$\nu$-$w$CDM & priors (incl. BAO) & 0.007 & $^\text{a)}$ & 0.086 & $^\text{a)}$ & $^\text{a)}$ & 0.23 &
0.17 & 0.42 & 0.11 & 0.78 \\
& eROSITA+Baseline & 0.007 & 0.030 & 0.063 & $^\text{a)}$ & 1.987 & 0.164 & 0.043 & 0.139 & 0.083 & 0.128 \\
& eROSITA+DES+HSC & 0.006 & 0.010 & 0.051 & $^\text{a)}$ & 1.597 & 0.086 & 0.037 & 0.110 & 0.056 & 0.101\\
& eROSITA+Euclid & 0.006 & 0.005 & 0.047 & $^\text{a)}$ & 1.463 & 0.040 & 0.030 & 0.086 & 0.032 & 0.096 \\
& eROSITA+LSST & 0.006 & 0.005 & 0.043 & $^\text{a)}$ & 1.403 & 0.040 & 0.026 & 0.076 & 0.029 & 0.095 \\
\hline
$\nu$-$\Lambda$CDM & priors (incl. BAO) & 0.006 & $^\text{a)}$ & -- & $^\text{a)}$ & $^\text{a)}$ & 0.23 &
0.17 & 0.42 & 0.11 & 0.78 \\
& eROSITA+Baseline & 0.006 & 0.015 & -- & $^\text{a)}$ & 0.943 & 0.094 & 0.041 & 0.109 & 0.078 & 0.110 \\
& eROSITA+DES+HSC & 0.006 & 0.010 & -- & $^\text{a)}$ & 0.925 & 0.074 & 0.040 & 0.077 & 0.055 & 0.104\\
& eROSITA+Euclid & 0.006 & 0.005 & -- & $^\text{a)}$ & 0.910 & 0.040 & 0.029 & 0.054 & 0.032 & 0.089 \\
& eROSITA+LSST & 0.006 & 0.005 & -- & $^\text{a)}$ & 0.910 & 0.035 & 0.025 & 0.053 & 0.027 & 0.089 \\
\hline
\hline
\multispan{12}{\hfil eROSITA + WL calibration + DESI + Pl15\hfil}\\
\hline
$\nu$-$w$CDM & priors (incl. CMB+BAO) & 0.007 & 0.027 & 0.049 & <0.284 & 1.118 & 0.23 & 0.17 &
0.42 & 0.11 & 0.78 \\
& eROSITA+Baseline & 0.006 & 0.026 & 0.049 & <0.281 & 1.103 & 0.161 & 0.023 & 0.079 & 0.083 & 0.128 \\
& eROSITA+DES+HSC & 0.006 & 0.011 & 0.048 & <0.245 & 1.050 & 0.085 & 0.023 & 0.071 & 0.061 & 0.104\\
& eROSITA+Euclid & 0.005 & 0.006 & 0.047 & <0.241 & 1.023 & 0.039 & 0.017 & 0.064 & 0.032 & 0.095 \\
& eROSITA+LSST & 0.005 & 0.006 & 0.039 & <0.223 & 0.870 & 0.038 & 0.017 & 0.064 & 0.029 & 0.089 \\
\hline
$\nu$-$\Lambda$CDM & priors (incl. CMB+BAO) & 0.004 & 0.020 & -- & <0.256 & 0.255 & 0.23 & 0.17 &
0.42 & 0.11 & 0.78 \\
& eROSITA+Baseline & 0.004 & 0.016 & -- & <0.254 & 0.253 & 0.093 & 0.024 & 0.067 & 0.074 & 0.110 \\
& eROSITA+DES+HSC & 0.004 & 0.009 & -- & <0.218 & 0.251 & 0.072 & 0.021 & 0.062 & 0.051 & 0.095\\
& eROSITA+Euclid & 0.003 & 0.004 & -- & <0.211 & 0.148 & 0.035 & 0.020 & 0.050 & 0.033 & 0.071 \\
& eROSITA+LSST & 0.002 & 0.003 & -- & <0.185 & 0.145 & 0.033 & 0.017 & 0.050 & 0.033 & 0.069 \\
\hline
\hline
\end{tabular}
\end{table*}
\subsubsection{$\nu$-$w$CDM constraints}
As a first cosmological model we investigate $\nu$-$w$CDM, a flat cold dark matter cosmology with massive
neutrinos and with dark energy with a constant but free equation of state parameter $w$. In this Section, we present the
constraints on the cosmological parameters for four different cases: number counts alone combined with baseline
priors on the X-ray observable mass scaling relation that we derive from the latest analysis within SPT \citep{bulbul19}
(eROSITA+Baseline), number counts with DES+HSC WL mass calibration (eROSITA+DES+HSC), number counts
with Euclid WL mass calibration (eROSITA+Euclid), and number counts
with LSST WL mass calibration (eROSITA+LSST). The respective marginal contour plot is shown in
Fig.~\ref{fig:cosmo_constr}, and the corresponding uncertainties are listed in Table~\ref{tab:baseline_constraints}.
Considering the current knowledge of the X-ray scaling relation, we find that eROSITA number counts constrain
$\Omega_\text{M}$ to $\pm0.032$, $\sigma_8$ to $\pm 0.052$, $w$ to $\pm 0.101$, and $H_\text{0}$ to $\pm
10.72$ km s$^{-1}$ Mpc$^{-1}$, while marginalizing over the summed neutrino mass $\sum m_\nu<1$ eV without
constraining it. We
also find no constraints on $\omega_\text{b}$ and $n_\text{S}$ within the prior ranges that we assumed.
The addition of mass information consistently reduces the uncertainties on the cosmological parameters: the
knowledge on $\Omega_\text{M}$ is improved by factors of $1.4$, $2.0$ and $2.3$ when adding DES+HSC,
Euclid, and LSST WL information, respectively; for $\sigma_8$ the improvements are $3.1$, $4.3$ and $5.2$, whereas
for the dark energy equation of state parameter they are $1.2$, $1.4$ and $1.4$, respectively. In summary, weak
lensing calibration provides the strongest improvement of the determination of $\sigma_8$, followed by $
\Omega_M$. The improvements on the dark energy equation of state parameter $w$ are clearly weaker.
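The improvement factors quoted in this paragraph are ratios of the marginalized uncertainties listed in Table~\ref{tab:baseline_constraints}; as a quick check of the arithmetic (values copied from the $\nu$-$w$CDM, eROSITA+WL block of the Table):

```python
# Marginalized 1-sigma uncertainties from the nu-wCDM eROSITA+WL block.
baseline = {"Omega_M": 0.032, "sigma_8": 0.052, "w": 0.101}
surveys = {"DES+HSC": {"Omega_M": 0.023, "sigma_8": 0.017, "w": 0.085},
           "Euclid":  {"Omega_M": 0.016, "sigma_8": 0.012, "w": 0.074},
           "LSST":    {"Omega_M": 0.014, "sigma_8": 0.010, "w": 0.071}}

for name, unc in surveys.items():
    factors = {p: round(baseline[p] / unc[p], 1) for p in baseline}
    print(name, factors)
# DES+HSC {'Omega_M': 1.4, 'sigma_8': 3.1, 'w': 1.2}
# Euclid {'Omega_M': 2.0, 'sigma_8': 4.3, 'w': 1.4}
# LSST {'Omega_M': 2.3, 'sigma_8': 5.2, 'w': 1.4}
```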
\subsubsection{$\nu$-$\Lambda$CDM constraints}
We also investigate a model in which the dark energy equation of state parameter is fixed to the cosmological
constant value $w=-1$: $\nu$-$\Lambda$CDM. The corresponding uncertainties are shown in Table~\ref{tab:baseline_constraints}. In this model,
we find that the constraints on $\Omega_\text{M}$ and $\sigma_8$ are $0.019$ and $0.032$, respectively,
which is tighter than in the $\nu$-$w$CDM model. However, the constraint on $H_\text{0}$ is comparable in
the two models.
We also find that the addition of WL mass information improves the constraints on $\Omega_\text{M}$ by factors
of $1.6$, $2.4$ and $2.9$ for DES+HSC, Euclid and LSST, respectively. The determination of $\sigma_8$ improves by
factors
$2.4$, $4.7$ and $4.7$. It is especially worth highlighting how eROSITA with Euclid or LSST WL information will be
able to determine $\sigma_8$ at the sub-percent level. Nevertheless, even in this simpler model we find that
eROSITA number counts do not constrain the summed neutrino mass in the sub-eV regime.
\begin{figure}
\includegraphics[width=\columnwidth]{distance_amplitude_deg}
\vskip-0.10in
\caption{Two dimensional marginalized posterior sample of the amplitude of the scaling relation $A_\text{X}$
and the luminosity distance to the median redshift of our sample $D_\text{L}(0.51)$ in Mpc, as derived from
the cosmological parameters in the posterior sample in the $w$CDM model. In red, orange, green and blue we
present the constraints from the number counts alone (eROSITA+Baseline), from number counts and DES+HSC
WL calibration (eROSITA+DES+HSC), Euclid WL calibration (eROSITA+Euclid), and LSST WL calibration (eROSITA+LSST),
respectively. When no direct mass information is
present, as in the case of number counts only, the two quantities are not degenerate with each other. As
mass information is added, the underlying parameter degeneracy between the amplitude of the X-ray
observable mass relation and the cosmological distance information emerges.}
\label{fig:dist_deg}
\end{figure}
\subsubsection{Limiting parameter degeneracy}
\label{sec:degeneracy}
We have studied the causes of the weaker improvement in $w$ when calibrating with Euclid or LSST WL, and we
have discovered an interesting degeneracy due to the $w$ sensitivity of the distance. Remember that our WL
calibration dataset consists of observations of the shear profiles and the redshift distributions of the
background galaxies. To turn these into masses, one needs the cosmology sensitive angular diameter
distances (see discussion below equation~\ref{eq:gt}). Moreover, our selection observable is the eROSITA
count rate (similar to X-ray flux) that is related to the underlying X-ray luminosity through the luminosity
distance (see equation~\ref{eq:eta_mz}). This leads to a degeneracy between $w$, governing the redshift
evolution of distances, and the amplitude and redshift trend of the selection observable--mass relation.
The degeneracy between $w$ and ($\ln A_\text{X}$, $\gamma_\text{X}$) can be easily understood by considering
the parametric form of the rate mass scaling relation in equation~(\ref{eq:eta_mz}). Ignore for a moment the
distance dependence of the mass. Then, for a given redshift $z$ and rate $\eta$, a shift in $w$ leads to a
shift in the luminosity distance $D_\text{L}(z)$ and, to a lesser degree, to a shift in the dimensionless expansion
rate $E(z)$. Such a shift can be compensated by a shift in $\ln A_\text{X}$ and $\gamma_\text{X}$, resulting
in the same mass, and consequently the same number of clusters, leaving the observables unchanged. The distance
dependence of the shear to mass mapping and the power law dependence of the rate on mass lead to a
somewhat different dependence, and so the parameter degeneracy is not catastrophic.
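The compensation described above can be illustrated with a short numerical sketch. This is not part of our
analysis pipeline: the cosmological values are assumed for the illustration, and the rate is taken to scale
simply as flux, $\eta\propto L_\text{X}/D_\text{L}^2$, at fixed luminosity, so that
$\Delta\ln A_\text{X}\simeq 2\,\Delta\ln D_\text{L}$ leaves the predicted rate of a cluster of fixed mass unchanged:

```python
# Illustrative sketch (not the paper's pipeline): a shift in w moves the
# luminosity distance, and a compensating shift in ln A_X keeps the
# predicted count rate of a cluster of fixed mass unchanged, since
# rate ~ flux ~ L_X / D_L^2 implies delta ln A_X ~ 2 * delta ln D_L.
import numpy as np

def lum_dist(z, omega_m=0.3, w=-1.0, n=2000):
    """Dimensionless luminosity distance D_L * H0 / c in flat wCDM."""
    zz = np.linspace(0.0, z, n)
    E = np.sqrt(omega_m * (1 + zz) ** 3
                + (1 - omega_m) * (1 + zz) ** (3 * (1 + w)))
    f = 1.0 / E
    comoving = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(zz))  # trapezoid rule
    return (1 + z) * comoving

z_med = 0.51                       # median redshift of the cluster sample
d_fid = lum_dist(z_med, w=-1.0)    # fiducial equation of state
d_shift = lum_dist(z_med, w=-1.2)  # shifted equation of state

# Amplitude shift that leaves the rate at fixed mass and redshift unchanged
delta_lnA = 2.0 * np.log(d_shift / d_fid)
print(f"D_L ratio: {d_shift / d_fid:.4f}, compensating delta ln A_X: {delta_lnA:.4f}")
```

For these assumed values, shifting $w$ from $-1$ to $-1.2$ changes $D_\text{L}(0.51)$ at the few per-cent
level, which can be absorbed by a comparable shift in $\ln A_\text{X}$, mirroring the degeneracy of
Fig.~\ref{fig:dist_deg}.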
This effect is demonstrated in Fig.~\ref{fig:dist_deg}, where the joint posterior of the luminosity distance to the median
cluster redshift $D_\text{L}(0.51)$ and of the amplitude of the scaling relation $\ln A_\text{X}$ is shown. In the
case of no direct mass information, when we fit the number counts with priors on the scaling relation
parameters, the median distance and the amplitude are uncorrelated. As one adds more mass information, i.e., in the
+DES+HSC WL, +Euclid WL and +LSST WL cases, the underlying correlation between the median distance and the
amplitude
becomes apparent. This degeneracy limits the improvement of the $w$ constraint from the number counts by means
of mass calibration. Given that it affects the halo masses directly, and not only the WL signal, we expect these
degeneracies to also be present in other mass calibration methods, although to a different extent, given the
different scaling of the selection observables with mass.
As a side note, these degeneracies highlight the importance of fitting for mass calibration and number counts
simultaneously and self consistently. A mass calibration done at fixed cosmology would miss these
correlations and lead to underestimated uncertainties on the scaling relation parameters. More worrisome, modeling
mass calibration by simply adopting priors on the observable mass scaling relation parameters would
miss the underlying physical degeneracies altogether \citep[e.g.,][]{sartoris16,pillepich18}.
The degeneracies between the distance redshift relation and the scaling relation parameters in the mass
calibration explain why the impact of WL mass calibration is weaker in the $\nu$-$w$CDM model than in
the $\nu$-$\Lambda$CDM model: in the latter, $w$ is kept fixed, and the redshift evolution of distances and
critical densities is controlled predominantly by a single variable, $\Omega_\text{M}$. With one degenerate
degree of freedom fewer, WL mass calibration can put tighter constraints on $\ln A_\text{X}$ and $
\gamma_\text{X}$ in the $\nu$-$\Lambda$CDM than in the $\nu$-$w$CDM model.
\subsection{Synergies with Planck CMB}\label{sec:syn_CMB}
\label{sec:eROSITA+CMB}
It is customary in observational cosmology to combine the statistical power of different experiments to further
constrain the cosmological parameters. An important part of these improvements is due to the fact that each
experiment has distinctive parameter degeneracies that can be broken in combination with constraints from
another experiment. This is especially true for CMB temperature and polarization anisotropy measurements,
which constrain the cosmological parameters in the early Universe, but display important degeneracies on
late time parameters such as $\Omega_\text{M}$, $\sigma_8$ and $w$ \citep[for a recent study applicable to
current CMB measurements, see][]{Howlett12}. We will discuss in the following the synergies between the
Planck cosmological constraints from temperature and polarization anisotropy \citep{planck16_cosmo} and
those from the eROSITA cluster counts analysis.
\begin{figure}
\vskip-0.2in
\includegraphics[width=\columnwidth]{breakingCMBdegen}
\vskip-0.1in
\caption{Marginalized posterior sample of $\sigma_8$ and $w$ in the $w$CDM model. In purple the constraints
from Planck CMB alone (Pl15), in red the constraints from the number counts and Planck
(eROSITA+Baseline+Pl15), in orange the constraints from the addition of DES+HSC WL calibration
(eROSITA+DES+HSC+Pl15), in green for the addition of Euclid WL calibration (eROSITA+Euclid+Pl15), and in blue
for the addition of LSST WL calibration (eROSITA+LSST+Pl15).
Cluster information breaks the inherent CMB degeneracies and allows one to constrain the late time parameters
to high precision.}
\label{fig:cmbdegen}
\end{figure}
\subsubsection{$\nu$-$w$CDM constraints}
In the $\nu$-$w$CDM model, the CMB suffers from the so-called \textit{geometrical degeneracy}
\citep{efstathiou99},
that arises because the CMB anisotropy primarily constrains the ratio of the sound horizon at recombination to the
angular diameter distance to that epoch. As a consequence, for example, the
current day expansion rate $H_\text{0}$ is degenerate with the equation of state parameter
$w$. This uncertainty in the expansion history of the Universe leads to large uncertainties on late time
properties such as $\Omega_\text{M}$ and $\sigma_8$. Addition of a late time probe that constrains these
quantities allows one to break the degeneracies and put tighter constraints on $w$. This can be nicely seen
for the case of eROSITA in Fig.~\ref{fig:cmbdegen}, where the purple CMB degeneracy between $\sigma_8$
and $w$ is broken by the addition of cluster information. The corresponding uncertainties are shown in
Table~\ref{tab:baseline_constraints}.
While in this model the CMB alone is not able to determine $\Omega_\text{M}$, the addition of eROSITA number
counts allows a constraint of $\pm0.019$.
Inclusion of WL mass information further reduces the uncertainty to $0.018$, $0.014$ and $0.013$ for DES+HSC, Euclid and
LSST,
respectively. The uncertainty in $\sigma_8$ is reduced from $0.065$ when considering only the CMB, to
$0.032$ with number counts, $0.019$ with number counts and DES+HSC WL, $0.010$ with number counts
and Euclid WL, and $0.009$ with LSST WL. Notably, the determination of the equation of state parameter $w$ is
improved from $0.242$ from CMB data alone to $0.087$ when adding number counts. Even more remarkable is the fact
that WL calibrated eROSITA constraints on $w$ are only marginally improved by the addition of CMB information.
\begin{figure}
\vskip-0.2in
\includegraphics[width=\columnwidth]{CMB_degene_mnu}
\vskip-0.1in
\caption{Marginalized posterior sample of $H_\text{0}$, $\Omega_\text{M}$, $\sigma_8$ and $\sum m_\nu$ in the $\nu$-$
\Lambda$CDM model. In red the constraints from Planck CMB alone (Pl15) and the constraints from
eROSITA number counts and DES WL calibration without CMB priors in blue (eROSITA+DES), and with CMB
priors in purple (eROSITA+DES+Pl15). By measuring $\sigma_8$ and $\Omega_\text{M}$ independently of
the sum of neutrino masses, WL calibrated cluster number counts break the degeneracy among these
parameters in the CMB posteriors.}
\label{fig:cmbdegen_mnu}
\end{figure}
\subsubsection{Constraints on sum of the neutrino masses}\label{sec:sum_mnu}
We showed earlier that cluster number counts, even when they are WL calibrated, provide little information about
the sum of the neutrino masses in the regime $<1$ eV. On the other hand, the CMB posteriors on $
\sigma_8$ and $\Omega_\text{M}$ are strongly degenerate with the neutrino mass, as can be seen in
Fig.~\ref{fig:cmbdegen_mnu}. Contrary to the CMB, the number counts of galaxy clusters are only weakly
affected by the sum of the neutrino masses. Recent studies have shown that the halo mass function is a
function of the power spectrum of baryons and dark matter only \citep{costanzi13,castorina14}. Effectively,
this means that number counts can be used to constrain the density $\Omega_\text{coll}$ and fluctuation amplitude $
\sigma_{8, \text{coll}}$ of baryons and dark matter independently of the neutrino mass.
If one considers matter to consist of cold dark matter, baryons and neutrinos, as is customarily done, then $\Omega_\text{M}
=\Omega_\text{coll}+\Omega_\nu$ and $\sigma_8^2 = \sigma_{8, \text{coll}}^2 +\sigma_{8, \nu}^2$, where $
\Omega_\nu$ is the density parameter of neutrinos and $\sigma_{8, \nu}^2$ is the amplitude of their clustering on
8$h^{-1}$~Mpc scales. The counts-derived constraints on $\Omega_\text{coll}$ and $\sigma_{8, \text{coll}}$ then lead to
only very weak
degeneracies between the sum of the neutrino masses and $\Omega_\mathrm{M}$ and $\sigma_8$, respectively, because
neutrinos constitute a tiny fraction of the total matter density and the total matter fluctuations on 8$h^{-1}$~Mpc scales.
In Fig.~\ref{fig:cmbdegen_mnu} we can see how these very different parameter degeneracies in the CMB and cluster counts
manifest themselves.
Combining these weaker degeneracies arising from eROSITA+DES WL with the more
pronounced degeneracies in the CMB posteriors allows us to break the latter and to better constrain the sum
of the neutrino masses.
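The smallness of this neutrino contribution can be checked with back-of-the-envelope numbers. The values of
$\sum m_\nu$, $h$ and $\Omega_\text{M}$ below are illustrative assumptions, not results from our chains, and
the conversion uses the standard relation $\Omega_\nu h^2=\sum m_\nu/93.14\,\mathrm{eV}$:

```python
# Back-of-the-envelope check with assumed values (not from our chains):
# the neutrino share of the matter density for a sub-eV summed mass,
# using the standard relation Omega_nu * h^2 = sum(m_nu) / 93.14 eV.
sum_mnu = 0.3    # summed neutrino mass in eV (illustrative)
h = 0.70         # dimensionless Hubble parameter (illustrative)
omega_m = 0.30   # total matter density parameter (illustrative)

omega_nu = sum_mnu / 93.14 / h ** 2
nu_fraction = omega_nu / omega_m
print(f"Omega_nu = {omega_nu:.4f}, fraction of Omega_M = {nu_fraction:.3f}")
```

Even for $\sum m_\nu=0.3$ eV, neutrinos contribute only about two per cent of $\Omega_\text{M}$, which is
why the counts inherit only weak degeneracies with the neutrino mass.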
Consistently, we find that in the $\nu$-$\Lambda$CDM model, the addition of CMB priors only marginally improves
the constraints eROSITA will put on $\sigma_8$ and $\Omega_\text{M}$. However, while the CMB alone puts
an upper limit of $\sum m_\nu<0.514$ eV (at 95\%) we determine that the combination of Planck CMB and
eROSITA number counts will constrain the neutrino masses to $<0.425$ eV, which will improve to
$<0.401$ eV, $<0.291$ eV and $<0.285$ eV with the addition of WL information from DES+HSC, Euclid and LSST,
respectively.
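For concreteness, the convention behind limits quoted as ``$<X$~eV (at 95\%)'' is simply the 95th percentile
of the marginalized posterior sample; the sample below is synthetic and purely illustrative:

```python
# Sketch of how the quoted neutrino-mass upper limits are defined:
# the 95th percentile of the marginalized posterior sample of sum(m_nu).
# The sample here is synthetic, standing in for an actual MCMC chain.
import numpy as np

rng = np.random.default_rng(1)
# Stand-in posterior sample of sum(m_nu) in eV, truncated at zero
# as required by the physical prior sum(m_nu) >= 0.
mnu_samples = np.abs(rng.normal(0.0, 0.25, size=20_000))

upper_95 = np.percentile(mnu_samples, 95.0)
print(f"95% upper limit: sum(m_nu) < {upper_95:.3f} eV")
```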
\begin{figure*}
\includegraphics[width=\columnwidth]{sigma8VSOm_plbao}
\includegraphics[width=\columnwidth]{sigma8VSw_plbao}
\vskip-0.15in
\caption{Two-dimensional marginal contours of the posteriors in ($\Omega_\text{M}$,$\sigma_8$) (left panel) and
($w$,$\sigma_8$) (right panel), showing the incremental improvement of constraining power when first
adding WL information and second combining with external cosmological data sets (``Pl15'' stands for the
CMB fluctuation measurements by the Planck satellite, while ``DESI'' refers only to the BAO constraints).
These posteriors are derived while simultaneously marginalizing over the summed neutrino mass.}
\label{fig:plbao}
\end{figure*}
\subsection{Synergies with DESI BAO measurements}
\label{sec:eROSITA+BAO}
From the discussion in Section~\ref{sec:degeneracy}, it is apparent that the flux based X-ray selection and the
distance dependent WL mass information lead to an inherent degeneracy between distances to the clusters
and scaling relation parameters that ultimately limits the constraint on $w$.
It would be desirable to utilize CMB independent constraints on the distance
redshift relation, to allow for more stringent consistency checks between cluster derived constraints and CMB
constraints. Some previous cosmological studies of X-ray clusters have used the distance information
gleaned from the assumption of constant intracluster medium (ICM) mass fraction with redshift
\citep{Mantz15,Schellenberger17}. While these results are encouraging, a challenge with this method is that
it only provides accurate distance information if in fact the ICM mass fraction is constant at all redshifts. It has
been established for decades now that the ICM mass fraction varies with cluster mass \citep[e.g.,][]{mohr99},
but direct studies of how the ICM mass fraction varies over the redshift range of the eROSITA survey (i.e.,
extending beyond $z=1$) have only recently been undertaken \citep{lin12,chiu16,chiu18,bulbul19}. The
evolution is consistent with constant ICM mass fraction, but the uncertainties are still large. Further study is
clearly needed. Another interesting eROSITA-internal prospect for better constraining the distance redshift relation
is to utilize the clustering of clusters to
determine the BAO scale \citep[for a recent application, see][and references therein]{marulli18}.
As an alternative, we consider constraints from other low redshift experiments, more precisely the measurement of
the Baryonic Acoustic Oscillations (hereafter BAO) in future spectroscopic galaxy surveys. In this work, we
consider the forecast for the constraints provided by the Dark Energy Spectroscopic
Instrument\footnote{\url{https://www.desi.lbl.gov}} \citep[DESI;][]{levi13} as the relative error on the transverse
BAO measurement $d_\mathrm{A}/r_\text{S}$ and the radial BAO measurement $H(z)\, r_\text{S}$ as
functions of redshift, where $d_\mathrm{A}$ is the angular diameter distance, $H(z)$ the expansion rate, and
$r_\text{S}$ is the sound horizon.
The values adopted in this work are reported in Table V of \citet{Font-Ribera14}. Furthermore, we follow the
authors' indications and assume that, in each redshift bin, the measurement errors on the two quantities are
correlated with correlation coefficient $\rho=0.4$. Using this information we perform an importance sampling
of the posterior samples presented above and summarize the resulting uncertainties in
Table~\ref{tab:baseline_constraints}.
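A minimal sketch of this importance-sampling step, with hypothetical numbers standing in for the tabulated
DESI errors of \citet{Font-Ribera14}, is:

```python
# Minimal sketch (hypothetical numbers): each DESI redshift bin provides a
# transverse (d_A / r_s) and a radial (H * r_s) BAO measurement whose errors
# are correlated with coefficient rho = 0.4. Posterior samples are
# re-weighted by the corresponding bivariate Gaussian likelihood.
import numpy as np

rho = 0.4
sig_t, sig_r = 0.010, 0.015          # fractional errors (illustrative)
mean = np.array([1.0, 1.0])          # predicted-to-measured ratios at truth
cov = np.array([[sig_t ** 2,           rho * sig_t * sig_r],
                [rho * sig_t * sig_r,  sig_r ** 2]])
cov_inv = np.linalg.inv(cov)

rng = np.random.default_rng(0)
# Stand-in posterior sample of the two BAO observables at one redshift
samples = rng.normal(1.0, 0.02, size=(1000, 2))

d = samples - mean
log_w = -0.5 * np.einsum("ni,ij,nj->n", d, cov_inv, d)
weights = np.exp(log_w - log_w.max())
weights /= weights.sum()             # normalized importance weights
```

In the actual analysis this re-weighting is applied per redshift bin, with the fractional errors of Table V of
\citet{Font-Ribera14}.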
When considering the uncertainties on the different parameters obtained by sampling these observables, we find
that the BAO measurement dominates the uncertainty on $\Omega_\text{M}$. The addition of number counts,
or number counts and WL information does not lead to major improvements on this parameter either in $
\nu$-$w$CDM or in $\nu$-$\Lambda$CDM. However, the uncertainty on the dark energy equation of state parameter $w$ is
reduced from $0.086$ in the BAO only case, to $0.065$ when adding just number counts, $0.054$ and
$0.047$ when adding DES+HSC and Euclid WL information, respectively. Remarkably, eROSITA counts with BAO
priors on the expansion history outperform eROSITA counts with CMB priors when it comes to constraining the
parameters $\Omega_\text{M}$, $\sigma_8$ and $w$, while simultaneously marginalizing over the summed
neutrino mass. The latter is unconstrained by eROSITA+BAO, even when considering WL mass
information. Furthermore, eROSITA+BAO allows us to measure the Hubble constant $H_\text{0}$ to varying
degrees of precision, depending on the quality of the WL data. While these constraints never go below the
present precision from other methods \citep[see for instance][]{riess16}, they will provide a valuable
systematics cross-check \citep[for an example of systematics in SNe Ia that impact local $H_0$ measurements, see, e.g.,][]
{rigault13,rigault15,rigault18}.
\subsection{Combining all datasets}
\label{sec:combined}
It is current practice in cosmology to first test the consistency of constraints from different data sets as a check on
systematics and then to combine them to provide the most precise
cosmological parameter constraints possible. In
the case of a forecast work like this, agreement is guaranteed by the choice of input cosmology for the mock
creation, while statistical independence can be assumed for eROSITA with WL data, DESI and the CMB
measurement from Planck.
We provide the results of this combination at the bottom of Table~\ref{tab:baseline_constraints}. In $\nu$-$w$CDM,
the combination of Planck CMB measurements and DESI BAO measurements alone allows us to
determine $\Omega_\text{M}$ and $w$ to $0.007$ and $0.049$, respectively, while simultaneously putting an
upper limit of $<0.284$ eV on the summed neutrino mass. Addition of eROSITA+Euclid WL only
marginally improves these constraints to $0.005$ and $0.047$ for $\Omega_\text{M}$ and $w$, respectively,
and leads to the 95\% confidence upper limit $\sum m_\nu<0.241$ eV.
In this configuration, however, the added value of eROSITA
number counts and WL mass calibration lies in the ability to constrain $\sigma_8$: while CMB and BAO put a
constraint of $0.027$, the addition of eROSITA improves this to $0.026$, $0.011$ and $0.006$ when considering
the baseline mass information, DES+HSC WL, and Euclid or LSST WL, respectively. In summary, using BAO and
CMB priors together increases the constraining power of eROSITA cluster cosmology considerably, as can be seen from
the shrinking of the two-dimensional marginal contours in ($\Omega_\text{M}$,$\sigma_8$) and ($w$,$\sigma_8$)
space, shown in the left and right panels of Fig.~\ref{fig:plbao}, respectively.
\subsection{Inclusion of low mass clusters and groups}
\label{sec:low_mass}
In this work, we have taken the conservative approach of excluding all systems with a halo mass $\loa 2 \times 10^{14}
\text{M}_{\sun}$ by means of increasing the eROSITA cluster count rate threshold at low redshift (cf.
Section~\ref{sec:x-ray_mock} and Appendix~\ref{app:selection}). There are several good reasons to do so, all
of them related, in one way or another, to an increase in systematic uncertainty when going to lower mass
systems that are not as well studied. However, to enable comparison to previous work, and as a
motivation to further investigate and
control the systematic uncertainties in low mass clusters and groups, we also examine the impact of WL
mass calibration on the constraining power for a cluster sample where the count rate threshold is reduced at low redshift so
that only clusters with masses $M_\text{500c} \lessapprox 5\times
10^{13} \text{M}_{\sun}$ are excluded.
\begin{table*}
\centering
\caption{Parameter uncertainties, for number counts (eROSITA+Baseline), number counts and DES+HSC WL
calibration (eROSITA+DES+HSC), number counts and Euclid WL calibration (eROSITA+Euclid), and number counts and
LSST WL calibration (eROSITA+LSST) in the
$\nu$-$w$CDM model when including low mass clusters.
The units of the columns ``$\sum m_\nu$'' and ``$H_\text{0}$'' are eV and km s$^{-1}$ Mpc$^{-1}$, respectively.
\textit{Comments: a)} This parameter is not constrained
within the prior ranges. When reporting upper limits ``<'', we refer to the 95th percentile, while lower limits ``>''
refer to the 5th percentile. When a parameter is kept fixed in that model, we use ``--''.}
\label{tab:lowmass_constraints}
\begin{tabular}{llcccccccccc}
\hline
& & $\Omega_\text{M}$ & $\sigma_8$ & $w$ & $\sum m_\nu$ & $H_\text{0}$& $\ln A_\text{X}$ &
$B_\text{X}$ & $\gamma_\text{X}$ & $\sigma_\text{X}$ & $B_\text{X}^\prime$\\
\hline
\hline
\multicolumn{3}{l}{optimal mass calibration\hfil} & & & & & 0.028 & 0.021 & 0.050 & & 0.116 \\
\hline
\hline
\multispan{12}{\hfil eROSITA + WL\hfil}\\
\hline
$\nu$-$w$CDM & priors & & & & & & 0.23 & 0.17 & 0.42 & 0.11 & 0.78 \\
& eROSITA+Baseline & 0.025 & 0.038 & 0.079 & $^\text{a)}$ & 8.081 & 0.113 & 0.071 & 0.202 & 0.078 & 0.086 \\
& eROSITA+DES+HSC & 0.012 & 0.012 & 0.069 & $^\text{a)}$ & 4.572 & 0.081 & 0.028 & 0.097 & 0.052 & 0.072\\
& eROSITA+Euclid & 0.009 & 0.007 & 0.056 & $^\text{a)}$ & 3.762 & 0.042 & 0.019 & 0.073 & 0.027 & 0.058 \\
& eROSITA+LSST & 0.007 & 0.006 & 0.050 & $^\text{a)}$ & 2.707 & 0.042 & 0.016 & 0.068 & 0.023 & 0.051 \\
\hline
\hline
\multispan{12}{\hfil eROSITA + WL + Pl15 (TTTEE\_lowTEB)\hfil}\\
\hline
$\nu$-$w$CDM & priors (incl. CMB) & <0.393 & 0.063 & 0.242 & <0.667 & >62.25 & 0.23 & 0.17 & 0.42 &
0.11 & 0.78 \\
& eROSITA+Baseline & 0.017 & 0.028 & 0.078 & <0.580 & 2.745 & 0.131 & 0.026 & 0.128 & 0.083 & 0.087 \\
& eROSITA+DES+HSC & 0.010 & 0.012 & 0.069 & <0.542 & 1.587 & 0.092 & 0.017 & 0.102 & 0.052 & 0.065\\
& eROSITA+Euclid & 0.007 & 0.006 & 0.060 & <0.381 & 1.401 & 0.046 & 0.013 & 0.076 & 0.021 & 0.054 \\
& eROSITA+LSST & 0.006 & 0.005 & 0.051 & <0.365 & 1.317 & 0.045 & 0.012 & 0.065 & 0.021 & 0.050 \\
\hline
\hline
\multispan{12}{\hfil eROSITA + WL + DESI (BAO)\hfil}\\
\hline
$\nu$-$w$CDM & priors (incl. BAO) & 0.007 & $^\text{a)}$ & 0.086 & $^\text{a)}$ & $^\text{a)}$ & 0.23 &
0.17 & 0.42 & 0.11 & 0.78 \\
& eROSITA+Baseline & 0.006 & 0.016 & 0.051 & $^\text{a)}$ & 1.703 & 0.136 & 0.036 & 0.090 & 0.068 & 0.070 \\
& eROSITA+DES+HSC & 0.006 & 0.009 & 0.048 & $^\text{a)}$ & 1.425 & 0.080 & 0.025 & 0.084 & 0.050 & 0.059 \\
& eROSITA+Euclid & 0.005 & 0.005 & 0.038 & $^\text{a)}$ & 1.379 & 0.036 & 0.016 & 0.063 & 0.021 & 0.050 \\
& eROSITA+LSST & 0.004 & 0.005 & 0.038 & $^\text{a)}$ & 1.303 & 0.036 & 0.014 & 0.061 & 0.021 & 0.049 \\
\hline
\hline
\multispan{12}{\hfil eROSITA + WL + DESI + Pl15\hfil}\\
\hline
$\nu$-$w$CDM & priors (incl. CMB+BAO) & 0.007 & 0.027 & 0.049 & <0.284 & 1.118 & 0.23 & 0.17 &
0.42 & 0.11 & 0.78 \\
& eROSITA+Baseline & 0.005 & 0.015 & 0.046 & <0.279 & 1.114 & 0.134 & 0.022 & 0.079 & 0.067 & 0.067 \\
& eROSITA+DES+HSC & 0.005 & 0.010 & 0.044 & <0.242 & 1.040 & 0.078 & 0.014 & 0.067 & 0.049 & 0.056\\
& eROSITA+Euclid & 0.005 & 0.005 & 0.037 & <0.237 & 1.015 & 0.039 & 0.012 & 0.058 & 0.021 & 0.049 \\
& eROSITA+LSST & 0.004 & 0.005 & 0.034 & <0.224 & 0.790 & 0.039 & 0.010 & 0.053 & 0.021 & 0.047 \\
\hline
\hline
\end{tabular}
\end{table*}
\subsubsection{Systematics of low mass clusters and groups}
There are several important systematic concerns. For instance, \citet{bocquet16_mfct} find in a study using
hydrodynamical structure formation simulations that for masses below $10^{14} \text{M}_{\sun}$, baryonic
feedback effects reduce the halo mass function by up to 10\% compared to halo mass functions extracted from
dark matter only simulations.
The magnitude of this effect depends on the feedback model, and therefore needs to be treated as a systematic
uncertainty in the cosmological modeling. The magnitude of this uncertainty awaits further study.
Baryonic feedback effects also impact the mass profiles of clusters. \citet{lee18} show how active galactic nuclei
feedback induces a deficit of mass in the cluster center when compared to gravity only simulations. The
partial evacuation of baryons is strong enough to also modify the matter profile. \citet{lee18} demonstrate how
this effect impacts the WL bias $b_\text{WL}$ and the WL scatter $\sigma_\text{WL}$, making them mass
dependent. Such effects will need to be taken into account, especially when considering lower mass systems.
Similarly, the thermodynamic structure of low mass systems, generally called groups, is more complex than for
massive galaxy clusters, showing a larger impact of non gravitational physics \citep{eckmiller11,
Bharadwaj14, barnes17}. \citet{lovisari15} showed that the mass slope of the luminosity mass relation is
significantly steeper for groups than for clusters. \citet{Schellenberger17} demonstrate how such a break in
the power law might bias the cosmological results derived from an X-ray selected cluster sample. We have thus chosen
the conservative approach of excluding these systems from our primary eROSITA forecasts, thereby
reducing the sensitivity of the forecast cosmological
parameter constraints to these important complications at low masses.
\subsubsection{Improvement of the constraints}
Nevertheless, the controlled environment of mock data analysis allows us to investigate how much constraining power
could ideally be gained by lowering the mass limit if all the above described systematics were well
understood and controlled. To this end, we select a low mass sample by imposing an observable selection with redshift that
enforces $M_{500\text{c}}\goa5\times 10^{13} \text{M}_{\sun}$, assuming that the scaling relation and the mass
function used for the fiducial sample are still valid also at this lower mass scale. This increases the sample size to
43k clusters, with a median redshift $\bar z=0.31$ and a median halo mass of $\bar M_{500\text{c}} =
1.4\times 10^{14} \text{M}_{\sun}$. The resulting constraints on the parameters of the $\nu$-$w$CDM model
are shown in Table~\ref{tab:lowmass_constraints}. The constraints both on the cosmological parameters, as well as on
the scaling relation parameters show a strong improvement compared to those from the higher mass sample.
For eROSITA number counts we determine that the uncertainty on $\Omega_\text{M}$, $\sigma_8$ and $w$
will be reduced by factors of $1.3$, $1.4$ and $1.3$, respectively. When calibrating masses with DES+HSC, we
find improvements by factors of $1.9$, $1.4$ and $1.2$; when considering Euclid, the inclusion of low
mass systems will reduce the uncertainties by factors of $1.8$, $1.7$ and $1.3$;
while using LSST leads to reductions by factors of $2.0$, $1.7$ and $1.4$.
In absolute terms, eROSITA including
low mass systems, calibrated with Euclid will provide constraints on $\Omega_\text{M}$, $\sigma_8$
and $w$ of $0.009$, $0.007$ and $0.056$, respectively. We emphasize that these tight constraints can
only be obtained if the aforementioned systematic effects are adequately controlled.
\section{Discussion}\label{sec:discussion}
The results presented above on the constraining power of the eROSITA cluster sample demonstrate its value as a
cosmological probe. They also underline the crucial impact of WL mass calibration on the constraining power
of cluster number counts. Moreover, they give some clear indications of how this impact manifests itself in
detail.
In the following subsections we discuss first how the constraints on the scaling relation parameters are affected by
the addition of better WL data, by the choice of the model and by the choice of cosmological priors, resulting
in an assessment of the conditions under which we can attain an optimal mass calibration. We then
determine the sensitivity of our observable to the different input parameters. Finally, we compare our
prediction to the constraints from current and future experiments.
\subsection{Impact of WL on scaling relation parameters}
\label{sec:WLimpact}
In the previous section we discussed in detail the impact of WL mass calibration on the eROSITA cosmological
parameter constraints. Naturally, adding WL information will also improve the constraints on the scaling
relation parameters. The resulting uncertainties are reported in Table~\ref{tab:baseline_constraints}. In the
following, we will focus on two interesting aspects of these results: first, we assess under which
circumstances eROSITA will be optimally calibrated; second, we comment on the constraints on the scatter in
observable at fixed mass.
\subsubsection{Which mass calibration is optimal?}
In Section~\ref{sec:optimal}, we introduced the concept of the \textit{optimal} mass calibration. Comparing the
bounds on the parameter uncertainties derived there to the forecasts for DES+HSC, we find that, independent of
the presence of external cosmological priors and in both models we consider, DES+HSC WL will not provide an
optimal calibration of the eROSITA observable mass relation. Only the calibration of the mass slope
$B_\text{X}$ when considering CMB and BAO data is an exception to this. This is not to say that
the inclusion of DES+HSC WL information does not improve the cosmological constraints; as shown above, it does.
Rather, some part of the information contained in the number counts is used to constrain the scaling relation
parameters instead of the cosmological parameters.
The optimal nature of the Euclid or LSST mass calibration is more subtle. When the dark energy equation of state
parameter is kept fixed in the $\nu$-$\Lambda$CDM model, Euclid provides an optimal mass
calibration on the amplitude of the scaling relation,
both with and without external cosmological priors from CMB or BAO observations. However, in the
$\nu$-$w$CDM model without external priors, Euclid or LSST WL does not constrain the scaling relation
parameters optimally. The amplitude is calibrated optimally after the inclusion of BAO data. On the other
hand, including CMB priors makes an optimal calibration of the mass trend possible. In the presence of dark
energy with free but constant equation of state, the redshift slope is never calibrated optimally.
Nevertheless, as demonstrated in the previous section, even in the limit of sub-optimal mass calibration, the
eROSITA dataset provides cosmological information complementary to these other cosmological
experiments.
Furthermore, the calibration of the redshift trend could be improved by complementary direct mass calibration
methods. At high redshift, the most promising options would be pointed observations of high-z clusters
\citep{schrabback18a, schrabback18b} and CMB-WL calibration \citep{baxter15, planck16cluster_cosmo, baxter18}.
\subsubsection{Scatter in the count rate to mass relation}
One may imagine that the inclusion of low scatter mass proxies in the number counts and mass calibration
analysis may tighten the constraints on the scatter and thereby reduce the uncertainties on the cosmological
parameters. Our present work does not seem to support this hypothesis. First, we show that even an
arguably weak constraint on the scatter can be considered an optimal calibration (cf.
Section~\ref{sec:min_mssclbr}). In other words, even at fixed cosmology, the number counts are unable to
constrain the scatter. Consequently, our ability to constrain the cosmology using the number counts is not
expected to depend strongly on the knowledge of the scatter. This can also be seen by the fact that there is
little correlation between the scatter and any other parameter of interest in the $\nu$-$w$CDM posterior
sample, as shown in Fig.~\ref{fig:cosmo_constr} and Fig.~\ref{fig:covs}. We conclude that constraining the
scatter to high precision, although of astrophysical interest, is not required to perform an optimal cosmological
analysis.
Furthermore, our results indicate that DES+HSC, Euclid and LSST WL mass calibration will be able to determine the
scatter to $0.062$, $0.034$, and $0.030$, respectively (see Table~\ref{tab:baseline_constraints}). This may seem surprising,
because WL mass calibration has large observational uncertainties and a large intrinsic scatter when
compared to typical low scatter mass proxies such as the ICM mass or temperature. However, in our analysis the final
constraining power stems from the large number of clusters with WL information and the
relatively small prior uncertainty on the intrinsic WL scatter $\sigma_\text{WL}$. In summary, given that the
knowledge of the scatter does not impact the constraints on the cosmological parameters, and that WL mass
calibration is able to constrain the scatter directly, it is not clear that a dedicated scatter calibration through
the inclusion of low scatter mass proxies like the ICM mass will significantly impact eROSITA cluster
cosmology constraints. Further study would be required to confirm this.
\subsection{Parameter sensitivities}\label{sec:params_sens}
\begin{figure*}
\includegraphics[width=\textwidth]{cosmo_sensitivity_nobs}
\vskip-0.1in
\caption{Sensitivity of the number counts likelihood, expressed as the change in log likelihood, to various parameters as a
function of redshift. From left to right, each panel represents a higher count rate bin.
The total number of clusters for the
fiducial parameter values is shown as a dashed line. The parameters are varied from the fiducial
values as noted at the top of the figure.
The grey area shows the redshift range where we exclude low mass clusters by raising the selection
threshold. Notably, we find that the number counts likelihood is most sensitive to the parameters $\Omega_\text{M}$ and
$A_\text{S}$ with comparable sensitivity to $w$ and $A_\text{X}$. }
\label{fig:cosmo_sens_nobs}
\end{figure*}
\begin{figure*}
\vskip-0.25in
\includegraphics[width=\textwidth]{cosmo_sensitivity}
\vskip-0.1in
\caption{Sensitivity of different mass observables to the parameters considered in this work. On the x-axis, we
plot the redshift and each column represents a different count rate. The parameters are varied around the
input values. The grey area shows the observable range which is excluded by the approximate mass cut.
From the top, the first row shows the fractional change in mass. The second and third rows show the
difference in tangential shear for a single cluster, weighted by the observational WL uncertainty for a single cluster at that
redshift, for DES and
Euclid, respectively. We also see that both for the halo masses and for the shear signal, $\Omega_\text{M}$
and $w$ lead to changes comparable to those induced by the amplitude $\ln A_\text{X}$ and the redshift slope $\gamma_\text{X}$.
We conclude that these parameters must be degenerate with each other.}
\label{fig:cosmo_sens}
\end{figure*}
\begin{figure*}
\includegraphics[width=\textwidth]{covs}
\vskip-0.10in
\caption{Absolute values of the correlation matrices of the posterior samples in the $w$CDM model, for number
counts (eROSITA+Baseline), number counts with DES+HSC WL information (eROSITA+DES+HSC) and number counts
with Euclid WL information (eROSITA+Euclid). Noticeably, we find that the initial correlations between
the pairs $(\Omega_\text{M}, \sigma_8)$ and $(B_\text{X}, \gamma_\text{X})$ (in eROSITA+Baseline) are gradually
broken by the addition of better mass information (eROSITA+DES+HSC and eROSITA+Euclid). However, the better
the mass information, the more clearly the inherent correlations between $w$, $\Omega_\text{M}$, $A_\text{X}$
and $\gamma_\text{X}$ emerge. They indicate the degeneracies among these parameters stemming from the cosmology
dependence of the rate mass mapping, as discussed in Section~\ref{sec:params_sens}. }
\label{fig:covs}
\end{figure*}
To investigate in more detail how our observables -- i.e.\ the number counts of clusters as a function of rate and
redshift together with the WL mass calibration information -- depend on the model parameters, we perform the
following experiment: we vary the model parameters one by one and examine how the number counts, the
masses and the WL signals change. The results of this test are shown in Figs.~\ref{fig:cosmo_sens_nobs}
and~\ref{fig:cosmo_sens}. At three different fixed rates (increasing from left to right in the three columns),
we investigate the sensitivity as a function of redshift
with respect to the input parameter of the likelihood of the number counts (Fig.~\ref{fig:cosmo_sens_nobs}), as well as the
masses, and the WL signals (Fig.~\ref{fig:cosmo_sens}). We grey out the part of rate--redshift space that
is rejected due to our mass cut.
\subsubsection{Number counts}
Fig.~\ref{fig:cosmo_sens_nobs} shows the sensitivity of the number counts with respect to shifts in the input
parameters. We decide here to plot the difference in log likelihood between the fiducial number counts
$N_\text{fid}$ and the number counts $\tilde N$ if one parameter is varied. The difference in log likelihood in
each bin reads
\begin{equation}
\delta \ln L = N_\text{fid} \ln \left(\frac{N_\text{fid}}{\tilde N} \right) - N_\text{fid} + \tilde N,
\end{equation}
which is simply the difference of the Poisson log likelihoods in that bin. We find that the number counts are
most sensitive to the parameters $\Omega_\text{M}$ and $A_\text{S}$. The sensitivity to the parameters
$w$, $A_\text{X}$, $B_\text{X}$, and $\gamma_\text{X}$ is much lower. This is reflected also in our results
for the parameter uncertainties (Tables~\ref{tab:baseline_constraints} and \ref{tab:lowmass_constraints}).
The number counts do put tighter
constraints on $\Omega_M$ and $\sigma_8$ than on $w$, consistent with results from the
first forecast studies for large scale cluster surveys \citep{haiman01,holder01}.
For comparison we also plot the total number of objects $N_\text{fid}$ (dashed line), on a scale proportional to the
difference in log likelihood. We can readily see that the difference in log likelihood is not simply proportional to
the number of objects: the rarer, higher redshift, and consequently, at fixed rate, higher mass objects
contribute more log likelihood per cluster than the lower redshift, lower mass systems. This trend is especially
true for the constraints on $\Omega_M$ and $\sigma_8$ ($A_\mathrm{S}$), as noted in
previous studies of cluster number counts \citep{haiman01, majumdar04}. The sensitivity to 10\% shifts in $w$ and
$A_\mathrm{X}$ is comparable. The more similar the shapes of the sensitivity curves for two parameters, the stronger the
parameter degeneracy one could expect between those parameters.
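The per-bin quantity $\delta \ln L$ defined above is straightforward to evaluate numerically. The sketch below assumes illustrative bin counts rather than values from the mock catalog.

```python
import numpy as np

def delta_lnL(n_fid, n_shifted):
    """Per-bin Poisson log-likelihood difference between the fiducial
    counts N_fid and the counts N~ obtained after shifting a parameter:
    N_fid * ln(N_fid / N~) - N_fid + N~ (zero when the two agree)."""
    n_fid = np.asarray(n_fid, dtype=float)
    n_shifted = np.asarray(n_shifted, dtype=float)
    return n_fid * np.log(n_fid / n_shifted) - n_fid + n_shifted

# Illustrative counts in three redshift bins at fixed count rate.
print(delta_lnL([120.0, 45.0, 8.0], [110.0, 40.0, 6.0]))
```

Note that the expression is non-negative and vanishes only for $\tilde N = N_\text{fid}$, so summing it over bins directly quantifies the information a parameter shift carries.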
\subsubsection{Masses and WL observables}
The first row of Fig.~\ref{fig:cosmo_sens} shows how much the masses are impacted by changes in input
parameters. To this end, we plot the ratio between the input mass and the mass determined at the shifted
parameters. In the range of interest for our study, the white area, we find that all parameters (except for
$A_\text{S}$, of course, which we do not include in this figure) have a
comparably large impact on the masses. Most remarkably, shifts in both $
\Omega_\text{M}$ and $w$ change the masses associated with a given rate and redshift. This is because the
rate mass relation has a strong distance dependence and also some critical density dependence. Both $\Omega_\text{M}$
and $w$ alter the redshift dependence of distances and critical densities.
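Schematically, writing $\eta$ for the count rate, these two dependencies can be sketched as (normalization, absorption and instrument response factors omitted)
\begin{equation}
\eta \propto \frac{L_\text{X}(M_{500\text{c}}, z)}{4\pi\, d_\text{L}^{2}(z)},
\qquad
M_{500\text{c}} = \frac{4\pi}{3}\, 500\, \rho_\text{crit}(z)\, R_{500\text{c}}^{3},
\end{equation}
where both the luminosity distance $d_\text{L}(z)$ and the critical density $\rho_\text{crit}(z)$ depend on $\Omega_\text{M}$ and $w$ through the expansion history.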
More precisely, the shift to more positive $w$ leads to a shift to higher masses, which mirrors the effect of changing the
amplitude of the scaling relation $\ln A_\text{X}$ and the redshift slope $\gamma_\text{X}$. Similarly, the
redshift dependent mass shift induced by $\Omega_\text{M}$ could be compensated by a shift in the redshift
slope $\gamma_\text{X}$ and $\ln A_\text{X}$. We therefore conclude that within the context of the masses corresponding to
a fixed eROSITA count rate, the parameters
$w$ and $\Omega_\text{M}$ are degenerate with a combination of $\ln A_\text{X}$ and $
\gamma_\text{X}$. This degeneracy impacts the predicted halo masses. The mass slope parameter
$B_\text{X}$, however, seems to impact the masses in a distinctively different way, leading to no obvious
parameter degeneracy. The same can be said for its redshift trend $B_\text{X}^\prime$.
In our main experiment, we do not consider perfect halo masses, but the WL signal.
Therefore, we also explore the
sensitivity of the WL signal for a single cluster to the input parameters. For the sake of simplicity, we do not consider the
entire
profile, but just assume one large radial bin spanning the fixed metric range corresponding to 0.25 -- 5.0 Mpc
in our fiducial cosmology.
Given the constant metric size of the area considered, the WL measurement uncertainty for a single cluster due to
shape noise can be computed by considering the background source density as a function of cluster redshift
$n_\epsilon(z_\text{cl})=n_\epsilon(z_\text{s}>z_\text{cl}+0.1)$.
In addition, the mapping from halo mass to tangential shear is non-linear and cosmology dependent.
Consequently, the shear signal associated with a given rate and redshift is expected to have strong
dependencies on cosmological parameters and, through the mass, also on the scaling relation parameters.
We visualize these effects in the second and third rows of Fig.~\ref{fig:cosmo_sens} by plotting the difference
between the WL signal for a single cluster in the fiducial model and the shifted model, divided by the expected
magnitude of the
shape noise for a single cluster. Indeed, one can readily see how the sensitivity per cluster of DES WL (second row) is
generally lower, but
also decreases more quickly with redshift than the sensitivity of Euclid WL (third row). This
is due to the larger Euclid source galaxy sample and its extension to higher redshift as compared to DES.
The trends we discuss above for the difference in halo mass also apply to the sensitivity of the WL
signal as a function of redshift.
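The per-cluster shape-noise weighting used in the second and third rows can be sketched as follows. The ellipticity dispersion $\sigma_\epsilon$ and the source density and area values are illustrative assumptions, not the survey-specific values adopted in the paper.

```python
import numpy as np

def shear_shape_noise(n_eps_arcmin2, area_arcmin2, sigma_eps=0.26):
    """Shape-noise uncertainty on the mean tangential shear in a single
    radial bin: sigma_eps / sqrt(N_src), with N_src = n_eps * area.
    sigma_eps = 0.26 per ellipticity component is an assumed typical
    value."""
    n_src = n_eps_arcmin2 * area_arcmin2
    return sigma_eps / np.sqrt(n_src)

# Illustrative: 10 sources / arcmin^2 behind the cluster, over the solid
# angle subtended by the 0.25 -- 5.0 Mpc annulus (here taken as 300 arcmin^2).
print(shear_shape_noise(10.0, 300.0))
```

Because $n_\epsilon(z_\text{cl})$ counts only sources with $z_\text{s} > z_\text{cl} + 0.1$, it drops with cluster redshift, which is why the per-cluster sensitivity degrades faster with redshift for DES than for Euclid.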
We find the same degeneracies in the covariance matrices of our posterior samples in the $\nu$-$w$CDM model
for the three cases of eROSITA+baseline, eROSITA+DES+HSC, and eROSITA+Euclid, shown in
Fig.~\ref{fig:covs}. In the case of number counts alone, we find a strong correlation between the pairs $
(\Omega_\text{M}, \sigma_8)$ and $(B_\text{X}, \gamma_\text{X})$. The latter degeneracy is strongly reduced by the
addition of WL mass information, and is not present in the case of Euclid WL calibration. This is due to
the fact that WL is quite sensitive to $B_\text{X}$. This is in line with improvements of both the
$(\Omega_\text{M}, \sigma_8)$ and $B_\text{X}$ constraints when adding WL mass information. However,
when $w$ is free to vary, the degeneracies between $w$, $\ln A_\text{X}$ and $\gamma_\text{X}$ lead to
stronger correlations between these parameters for better mass information. They are most pronounced in
the case of number counts with Euclid WL mass calibration.
\subsection{Comparison to previous work}
Finally, we compare our results to the constraints of recent and future experiments, with the intention of exploring
how competitive eROSITA will be.
\subsubsection{Current probes}
The most up to date number counts analysis of an X-ray selected sample with WL mass calibration has been
presented by \citet[called Weighing the Giants, hereafter WtG]{Mantz15}. It consists of 224 clusters, 51 of
which have a WL mass measurement, and 91 of which have ICM mass measurements. The analysis method
is similar to the one described in this paper, with the exception that we did not consider cosmological
constraints from the measurement of the ICM mass fraction. In the $w$CDM model (i.e. fixing the neutrino mass), when
considering only X-ray and WL data, the uncertainties on $\Omega_\text{M}$, $\sigma_8$ and $w$ are 0.036, 0.031 and 0.15,
respectively. The direct comparison to our work is made difficult by the addition of the distance sensitive gas
fraction measurements, which by themselves constrain $\delta \Omega_\text{M}=0.04$ and $\delta w = 0.26$
\citep{Mantz14}. This measurement clearly dominates the error budget on $\Omega_\text{M}$ and provides
valuable distance information. Nevertheless, eROSITA cluster cosmology is evenly matched with WtG when
considering just the number counts. It will outperform the constraining power of WtG when calibrated with
DES+HSC WL information. In the case of LSST WL calibration, we project that the uncertainties on $
\Omega_\text{M}$, $\sigma_8$ and $w$ are smaller by factors $2.6$, $3.1$ and $2.1$, respectively. These
projections ignore distance information from the eROSITA clusters and AGN, which would further improve the constraints.
Another recent cluster cosmology study has been presented by \citet{dehaan16}. Therein, the cosmological
constraints from 377 Sunyaev-Zeldovich selected clusters detected by the South Pole Telescope (hereafter
SPT) at redshifts $z>0.25$ are determined. From the number counts alone, the dark energy equation of state parameter
is constrained to a precision of $\delta w = 0.31$, which is a factor $3.1$ worse than our prediction for the
number counts from eROSITA alone. Furthermore, \citet{dehaan16} find $\delta \Omega_\text{M} = 0.042$ and $\delta
\sigma_8 = 0.039$, while keeping the summed neutrino mass fixed at its minimal value.
By comparison, in the baseline configuration eROSITA will improve the constraints on $\Omega_\text{M}$ and $\sigma_8$ by
factors of $1.5$ and $1.2$, while additionally marginalizing over the summed neutrino mass.
Also note that the priors used for the \citet{dehaan16} analysis encode the
mass uncertainty over which \citet{bulbul19} marginalized when deriving the uncertainties on the X-ray
scaling relation parameters we employ as our eROSITA+Baseline.
When the SPT number counts are combined with the CMB constraints from Planck, \citet{dehaan16} report
constraints on $\sigma_8$ and $w$ of $0.045$ and $0.17$ respectively. We find that eROSITA number
counts alone, in combination with Planck, will do better by a factor $2.8$ on $\sigma_8$ and a factor $2.0$ on
the equation of state parameter $w$, while additionally marginalizing over the summed neutrino mass.
These numbers improve even more, if we consider the WL mass
calibration by DES+HSC, Euclid and LSST.
Comparing our forecasts on the improvement of the upper limit on the summed neutrino mass to previous results from the
combination of Planck CMB measurements with either SPT cluster number counts or WtG is complicated by several factors.
First, we consider the full mission results for Planck \citep{planck16_cosmo},
while SPT \citep[][]{dehaan16} used the half mission data \citep{planck13cosmo} in addition to BAO data,
and WtG \citep[][]{Mantz15} additionally added ground based CMB measurements and supernova data.
SPT reports the measurement $\sum m_\nu = 0.14 \pm 0.08$ eV, which is impacted to some degree
by the statistically insignificant shift between their constraint and the CMB constraints in the $(\Omega_\text{M}, \sigma_8)$
plane.
Comparison to this result is complicated by our choice to use the minimal neutrino mass as input value.
On the other hand, WtG reports $\sum m_\nu \le 0.22$ at 95\% confidence, which is comparable with our result from
eROSITA number counts, DES+HSC WL, Planck CMB and DESI BAO.
The latest cosmological constraints from measurements of the Large Scale Structure (LSS) of the Universe were
presented by \citet{desY1_3x2pt} for the first year of observations (Y1), where the joint constraints from
the cosmic shear and photometric galaxy angular auto- and cross-correlation functions are derived. In the $
\nu$-$w$CDM model, the uncertainties on $\Omega_\text{M}$, $\sigma_8$ and $w$ are 0.036, 0.028 and
0.21, respectively. This is better than the constraints from eROSITA number counts alone, except for the dark
energy equation of state parameter, which will be constrained better by eROSITA. However, utilizing DES+HSC to
calibrate the cluster masses, we forecast that eROSITA will outperform the DES-Y1 analysis. In
combination with Planck CMB data, DES-Y1 puts a 95\% upper limit of 0.62 eV on the sum of the neutrino
masses, whereas we forecast an upper limit of 0.424 (0.401) eV when combining eROSITA number counts (and
DES+HSC WL calibration) with Planck data. Considering that our DES WL analysis assumes year 5 data, it will be
interesting to see whether the DES Y5 LSS measurements or eROSITA with DES WL calibration will provide
the tighter cosmological constraints.
As can be seen from Table~\ref{tab:baseline_constraints}, eROSITA will clearly outperform Planck CMB measurements
on several cosmological parameters.
In the $\nu$-$\Lambda$CDM model, eROSITA with WL mass information will outperform
Planck on the parameters $\Omega_\text{M}$ and $\sigma_8$, and in the
$\nu$-$w$CDM eROSITA with WL case will also outperform Planck on the equation of state
parameter $w$. However, for constraints on the sum of the neutrino
mass, Planck alone offers much more than eROSITA alone.
Given, however, that eROSITA and Planck extract their constraints at low redshift and high redshift,
respectively, the true benefit of these two experiments lies in assessing the mutual consistency and thereby
probing whether our evolutionary model of the Universe is correct. If this is the case, their joint constraints will
tightly constrain the cosmological model, and provide improved constraints on the sum of neutrino masses.
\subsubsection{Previous forecasts for eROSITA}
This work elaborates further on the forecast of the eROSITA cosmological constraints first presented in
\citet{merloni12}, and subsequently discussed in more detail in P18. The direct
comparison to the latter is complicated by several diverging assumptions, including that we only consider the
German half of the sky. Perhaps the most significant difference is their approach of using
Fisher matrix estimation and modeling mass calibration as simply being independent priors on the
various scaling relation parameters, whereas we have developed a working prototype for
the eROSITA cosmology pipeline and used it to analyze a mock sample with shear profiles in a self-consistent manner.
Other differences include their use of different input scaling relations from older work at lower redshift
and different fiducial cosmological parameters. P18 includes constraints from the angular clustering
of eROSITA clusters, although these constraints
are subdominant in comparison to counts except for parameters associated with non-Gaussianity in the initial density
fluctuations \citep[see][]{pillepich12}. In our analysis, we marginalize
over the sum of the neutrino mass as well as relatively weak priors on $\omega_\mathrm{b}$ and $n_\mathrm{S}$.
Following what P18 call the \textit{pessimistic} case with an approximate
limiting mass of $5\times 10^{13} M_{\sun} h^{-1}$, they predict 89k clusters which, accounting for our use of only
half of the sky, is in good agreement with our forecast of 43k clusters when including clusters down to masses of $5\times 10^{13} M_{\sun}$.
Under the assumption of a 0.1\% amplitude prior, a 14\% mass slope prior and a 42\% redshift slope prior, they
forecast constraints of $0.017$, $0.014$ and $0.059$ on $\sigma_8$, $\Omega_\text{M}$ and $w$,
respectively. P18 also consider an \textit{optimistic} case, in which clusters down to masses of $1\times
10^{13} M_{\sun} h^{-1}$ are used under the assumption of 4 times better priors on the scaling relation
parameters. For this case, the constraints on $\sigma_8$, $\Omega_\text{M}$ and $w$ are $0.011$, $0.008$
and $0.037$, respectively.
A quantitative comparison to our work is complicated by the fact that we find a constraint on the amplitude of the
scaling relation (through direct modeling of the WL calibration from Euclid or LSST) that is worse than their
\textit{pessimistic} case, but our constraint on the mass and redshift trends is better than their \textit{optimistic}
case. Consistently, we predict tighter constraints on $\sigma_8$ and $\Omega_\text{M}$, which are sensitive
to the mass and redshift trends of the scaling relation, while we predict lower precision on $w$, which we
demonstrate to be degenerate with the amplitude of the scaling relation through the amplitude distance
degeneracy. Important here is the realization that the observed shear profiles map into cluster mass
constraints in a distance dependent fashion \citep[this is true for all direct mass constraints;][]{majumdar03}.
It is not straightforward to capture this crucial subtlety by simply
adopting priors on observable mass scaling
relation parameters.
\subsubsection{Euclid cosmological forecasts}
The Euclid survey will not only provide shear catalogs to calibrate the masses of clusters, but will also allow the
direct detection of galaxy clusters via their red galaxies \citep{sartoris16}, and the measurement of the auto-
and cross-correlation of red galaxies and cosmic shear \citep{Giannantonio14}. For the optically selected
Euclid cluster sample, \citet{sartoris16} forecast $2\times 10^6$ galaxy clusters with limiting mass of $7 \times
10^{13} M_{\sun}$ up to redshift $z=2$, yielding constraints on $\Omega_\text{M}$, $\sigma_8$, and $w$ of
$0.0019$ ($0.0011$), $0.0032$ ($0.0014$), and $0.037$ ($0.034$), respectively, when assuming no
knowledge of the scaling relation parameters (perfect knowledge of the scaling relation parameters). Under
these assumptions, the number counts and the angular clustering of Euclid selected clusters would outperform
eROSITA cluster cosmology. Nevertheless, cross comparisons between the X-ray based eROSITA selection
and the optically based Euclid cluster selection will provide chances to validate the resulting cluster
samples.
\citet{Giannantonio14} forecast that the auto- and cross-correlations between red galaxies and cosmic shear in the
Euclid survey will provide constraints on $\Omega_\text{M}$, $\sigma_8$, and $w$ of $0.005$, $0.033$ and
$0.050$, respectively. Such a precision on $\sigma_8$ would be achieved by the baseline eROSITA+Euclid
analysis, too. However, to achieve similar precisions in $\Omega_\text{M}$ and $w$, it would be necessary to
consider eROSITA detected clusters down to masses of $ 5\times 10^{13} \text{M}_{\sun}$.
\section{Conclusions}\label{sec:conclusions}
In this work, we study the impact of WL mass calibration on the cosmological constraints from an eROSITA cluster
cosmology analysis. To this end, we create a mock eROSITA catalog of galaxy clusters. We assign luminosities and ICM
temperatures to each cluster using the latest measurements of the X-ray scaling relations over the relevant redshift range
\citep{bulbul19}.
Considering the eROSITA ARF, we then compute the eROSITA count rate for all
clusters in this sample. We apply a selection on the eROSITA count rate, corresponding to a $\sim6\sigma$ detection
limit given current background estimates, to define a sample for a cosmological forecast. This detection limit ensures both
high
likelihood of existence and angular extent, and -- through raising the detection threshold at low redshift -- also excludes low
mass objects at low redshift.
We assume all cluster redshifts are measured photometrically using red sequence galaxies \citep[see discussion in, e.g.][]
{klein18,klein19}.
We forecast that in the 14,892~deg$^2$ of the low Galactic extinction sky
accessible to the eROSITA-DE collaboration, when raising the detection threshold at low redshift to exclude
clusters with $M_{500\text{c}} \lessapprox 2\times 10^{14} \text{M}_{\sun}$,
eROSITA will detect 13k clusters. This baseline cosmology sample has a median mass of
$\bar M_{500\text{c}} = 2.5\times 10^{14} \text{M}_{\sun}$ and a median redshift of $\bar z = 0.51$.
For the case where we adjust the low redshift detection threshold to exclude clusters with
$M_{500\text{c}} \lessapprox 5\times 10^{13} \text{M}_{\sun}$, we predict 43k clusters. This sample has a median
mass $\bar M_{500\text{c}} = 1.4\times 10^{14} \text{M}_{\sun}$, and a median redshift $\bar z = 0.31$.
Both samples extend to high redshift with $\sim 400$ clusters at $z>1$.
We then analyze these mock samples using a prototype of the eROSITA cluster cosmology
code that is an extension of the code initially
developed for SPT cluster cosmology analyses \citep{bocquet15,dehaan16,bocquet18}. This code employs a
Bayesian framework for simultaneously
evaluating the likelihoods of cosmological and scaling relation parameters given the distribution of
clusters in observable and redshift together
with any direct mass measurement information. The scaling relation between
the selection observable (eROSITA count rate) and the mass and redshift is parametrized as a power law with log-normal
intrinsic scatter. Final
parameter constraints are marginalized over the uncertainties (systematic and statistical) in the parameters of the
mass--observable scaling relation.
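The rate--mass--redshift parametrization described above can be sketched as below. The pivot mass and redshift and the parameter values are illustrative assumptions, not the calibrated relation used in the analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_rate(mass, z, lnA, B, gamma, sigma, m_piv=2e14, z_piv=0.45):
    """Draw a count rate from a power law in mass and redshift with
    log-normal intrinsic scatter sigma (in ln rate). Pivots are
    illustrative."""
    ln_rate = (lnA
               + B * np.log(mass / m_piv)
               + gamma * np.log((1.0 + z) / (1.0 + z_piv))
               + rng.normal(0.0, sigma))
    return np.exp(ln_rate)

# Illustrative draw for a 3e14 Msun cluster at z = 0.5.
print(draw_rate(3e14, 0.5, lnA=0.0, B=1.6, gamma=0.5, sigma=0.25))
```

With $\sigma = 0$ the relation is deterministic and equals $e^{\ln A}$ at the pivots, which makes the amplitude, mass slope, and redshift slope directly interpretable.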
We first estimate the optimal level of mass calibration necessary for the number counts of eROSITA clusters
to mainly inform the constraints on the cosmological parameters. This requires a calibration of the amplitude
of the mass observable relation at 4.2\%, the mass trend of the scaling relation at 2.4\%, and the redshift
trend at 5.3\%. These numbers are derived using current knowledge of the scatter
around the mass luminosity relation. Furthermore, we determine that the mass trend of the rate mass relation
has to be allowed to vary with redshift to enable the recovery of unbiased cosmological results.
We then examine cosmological constraints in three different cluster mass calibration contexts: (1) using ``baseline''
constraints
existing today that are taken from the recent SPT analysis of the X-ray luminosity and temperature mass relations
\citep{bulbul19},
(2) using WL information from the DES+HSC survey and (3) using WL information from the future Euclid and LSST surveys.
For the subset of the two catalogs that overlap the DES, HSC, Euclid or LSST survey footprints, we produce tangential shear
profiles with
appropriate characteristics for these surveys.
We also estimate the level of systematic uncertainty in the WL masses that
would result from the data quality of these surveys and from theoretical uncertainties in the impact of mis-centering and
mis-fitting
the shear profiles. We adopt mass uncertainties of 5.1\%,
1.3\% and 1.5\% for DES+HSC, Euclid, and LSST, respectively. These levels of systematic mass uncertainty will require that
our understanding of the theoretical mass bias from simulations be improved by factors of 2 and 5 for DES+HSC
and Euclid/LSST, respectively, in comparison to current work \citep{dietrich19}. We note that achieving these improvements
will
require a significant investment of effort.
Throughout this
work, we allow the summed neutrino mass to vary. All results are thus marginalized over
the summed neutrino mass. In the $\nu$-$w$CDM model, we forecast that eROSITA number counts will
constrain the density of matter in the Universe $\Omega_\text{M}$ to 0.032, the amplitude of matter fluctuations $\sigma_8$ to 0.052,
and the equation of state parameter of the dark energy $w$ to 0.101.
Calibrating the masses of eROSITA clusters with DES+HSC (Euclid; LSST) WL will reduce these uncertainties to 0.023
(0.016; 0.014), 0.017 (0.012; 0.010), and 0.085 (0.074; 0.071), respectively.
We also find that eROSITA clusters alone will not provide
appreciable constraints on the sum of the neutrino masses.
eROSITA number counts will be able to break several degeneracies in current CMB constraints, especially on late
time parameters such as $\Omega_\text{M}$, $\sigma_8$ and $w$. In combination with Planck constraints
from the measurement of the angular auto- and cross-correlation functions of CMB temperature and
polarization anisotropies, we determine that eROSITA will constrain these parameters to 0.019, 0.032 and
0.087 when adopting ``baseline'' priors on the scaling relation parameters. These uncertainties shrink to 0.018
(0.014; 0.013), 0.019 (0.010; 0.009) and 0.085 (0.074; 0.069) when calibrating the masses with DES+HSC (Euclid; LSST)
WL
information.
When considering the $\nu$-$\Lambda$CDM model, the upper limit on the neutrino mass of 0.514~eV from CMB
alone can be improved to 0.425~eV when utilizing number counts with the ``baseline'' priors, to 0.404~eV
when also considering DES WL calibration, to 0.291~eV when calibrating with Euclid WL,
and to 0.285~eV when calibrating with LSST WL.
We find that the constraining power of eROSITA cluster cosmology, even when calibrated with high quality shear
profiles, is limited by a degeneracy between the scaling relation parameters and the cosmological distance to
the clusters. This degeneracy arises, because the luminosity distance is necessary to transform observed
count rates into luminosities, whose absolute and redshift dependent scaling with mass needs to be fitted
simultaneously with the cosmological parameters that alter the redshift distance relation. This leads to the
assessment that even the Euclid or LSST WL mass calibration will, by itself, not reach what we have defined as
optimal levels in the $\nu$-$w$CDM model.
However, we demonstrate that, with the inclusion of BAO measurements that constrain the redshift distance
relation, the Euclid or LSST WL dataset can be used to calibrate cluster masses at an optimal level.
Considering DESI-like BAO measurements, we project that eROSITA with Euclid WL mass
calibration will constrain $\sigma_8$ to 0.005 and $w$ to 0.047, while the uncertainty on $\Omega_\text{M}$
will be dominated by the BAO measurement.
Furthermore, we investigate the impact of lowering the mass limit to $M_\text{500c} \goa 5\times 10^{13}
\text{M}_{\sun}$. Given the larger number of low mass clusters or groups, the eROSITA counts with
Euclid WL can optimistically be used to determine $\Omega_\text{M}$ to 0.009, $\sigma_8$ to 0.007, and $w$ to
0.056, if these low mass systems are simple extrapolations of the high mass systems. The expected additional
complexity of these low mass systems would have to be modeled, and this additional modeling would likely
weaken the cosmological constraints.
In summary, WL mass calibration from DES+HSC, Euclid, and LSST will significantly improve cosmological constraints from
eROSITA cluster number counts, enabling a precise and independent cross-check of constraints from other measurements.
The constraining power on $w$ suffers from an inherent degeneracy between the distance redshift relation and
the scaling relation between the X-ray observable, mass and redshift. This degeneracy can be lifted by
inclusion of other cosmological measurements, such as BAO or CMB measurements. In turn eROSITA
cluster cosmology can break degeneracies in these other observations, underscoring the synergies between
different cosmological experiments.
\section*{Acknowledgements}
We thank Hermann Brunner for help in accessing the eROSITA ARF, Thomas Reiprich, Tim Schrabback,
Andrea Merloni, Peter Predehl and Cristiano Porciani for the useful comments, and Matteo Costanzi, Steffen Hagstotz, David
Rapetti, Tommaso
Giannantonio, and Daniel Gruen for helpful conversations.
We acknowledge financial support from the MPG Faculty Fellowship program, the DFG Cluster of Excellence
``Origin and Structure of the Universe'', the DFG Transregio program TR33 ``The Dark Universe'', and the
Ludwig-Maximilians-Universit\"at Munich.
AS is supported by the ERC-StG ``ClustersXCosmo'' grant agreement 71676, and by the FARE-MIUR grant
``ClustersXEuclid'' R165SBKTMA.
Numerical computations in this work relied on the \texttt{python} packages \texttt{numpy} \citep{numpy} and
\texttt{scipy} \citep{scipy}. The plots were produced using the package \texttt{matplotlib} \citep{matplotlib}. The
marginal contour plots were created using \texttt{pyGTC} \citep{pygtc}.
\bibliographystyle{mnras}
\section{Introduction}
\begin{figure}[t]
\includegraphics[width=7. cm]{figure1.eps}
\caption{\label{fig:particle} Graphic description of the $A$ and
$B$ particles (left) and snapshot of the simulated system
(right). The centers of the small spheres locate the bonding sites
on the surface of the hard--core particle.}
\end{figure}
Irreversible polymerization is a mechanism of self--organization
of molecules which proceeds via the formation of covalent bonds
between pairs of mutually--reactive groups.\cite{FloryBOOK,
ColbyBOOK, burchard} If monomers with functionality (number $f$
of reactive groups on a monomer) greater than two are present,
branched molecules grow by reactions and convert the system from
a fluid of monomers into a well connected cross--linked network,
giving rise to a chemical gelation process. At the gel point, a
persistent network spanning the sample first appears; the system
is then prevented from flowing, yet not arrested on a mesoscopic
length scale. The development of a network structure results, for
example, from step polymerization, chain addition polymerization
and cross--linking of polymer chains.\cite{YoungLovell,
Thermosets} The same phenomenon is also observed in colloids and
other soft materials when the thermodynamics and the molecular
architecture favor the formation of a limited number of strong
interactions (i.e., with attraction strength much larger than the
thermal energy) between different particles. Chemical gelation
has been extensively studied in the past, starting from the
pioneering work of Flory and Stockmayer\cite{FloryBOOK,
StockmayerJPS1952} who developed the first mean--field
description of gelation, providing expressions for the cluster
size distribution as a function of the extent of reaction and the
{\it critical} behavior of the connectivity properties close to
gelation. More appropriate descriptions based on geometric
percolation concepts, developed in the late seventies, focused on
the non--mean-field character of the transition, which reveals
itself near the gel point, extending to percolation the ideas developed
in the study of the properties of systems close to a
second--order critical point. Several important numerical
studies,\cite{StaufferJCSFT1976, Manneville1981, HerrmannPRL1982,
Pandey, ClercAP1983, BansilMACRO1984, LeungJCP1984, GuptaJCP1991,
LairezJPF1991, Gimel, VernonPRE2001, DelGadoPRE2002}
---most of them based on simulations on lattice
---have focused on the critical behavior close to the
percolation point, providing evidence of the percolative nature
of the transition and accurate estimates of the percolation
critical exponents. As in critical phenomena, a crossover from
mean--field to percolation behavior is expected close to the gel
transition.\cite{ginzburg} However, how the microscopic properties of
the system control the location of the crossover (i.e., how wide
the region is where the mean--field description applies) and how
accurate the mean--field description is far from the percolation
point are not completely understood. Another important open
question regards the connectivity properties of chemical gels
well beyond percolation.\cite{rubinstein} Even in the mean--field
approximation, several possibilities for the post--gel solutions
have been proposed, based on different assumptions on the
reactivity of sites located on the infinite
cluster.\cite{VanDongenJSP1997, rubinstein} Different propositions
predict different cluster--size distributions above the gel point
and a different evolution with time for the extent of reaction.
Here we introduce a model inspired by stepwise polymerization of
bifunctional diglycidyl--ether of \mbox{bisphenol--A} ($B$
particles in the following) with pentafunctional
diethylenetriamine ($A$ particles).\cite{CorezziPRL2005} To
incorporate excluded volume and shape effects, each type of
molecule is represented as a hard homogeneous ellipsoid of
appropriate length, whose surface is decorated in a predefined
geometry by $f$ identical reactive sites per particle (see
Figure~\ref{fig:particle}). In this respect, the model is also
representative of colloidal particles functionalized with a
limited number of patchy attractive sites,\cite{Glotz-Solomon}
where the selectivity of the interaction is often achieved
building on biological specificity.\cite{hiddessen, DNA, DNAsoft}
The off--lattice evolution of the system is studied via
event--driven molecular dynamics simulations, using a novel code
which specifically extends to ellipsoidal particles the algorithm
previously designed for patchy spheres.\cite{pwm} Differently
from previous studies, we do not focus on the critical properties
close to the gel--point but study in detail the development of the
irreversible gelation process and the properties of the cluster
size distribution in the pre-- and post--gelation regime.
We find that the dynamic evolution of the system produces an
irreversible (chemical) gelation process whose connectivity
properties can be described, in a very large window of the extent
of reaction, with the Flory--Stockmayer (FS)
predictions.\cite{FloryBOOK, ColbyBOOK, StockmayerJPS1952} This
offers the possibility to address, in a well-controlled
model, the kinetics of the aggregation and to evaluate the extent
of reaction at which the breakdown of the Flory post--gel
solution takes place.
\section{Method}
We study a 5:2 binary mixture composed of $N_A=480$ ellipsoids of
type $A$ and $N_{B}=1200$ ellipsoids of type $B$, for a total of
$N=1680$ particles. $A$ particles are modeled as hard ellipsoids
of revolution with axes $a=b=2\sigma$ and $c=10\sigma$ and mass
$m$; $B$ particles have axes $a=b=4\sigma$ and $c=20\sigma$, mass
$3.4 m$. Simulations are performed at a fixed packing fraction
$\phi=0.3$. Five (two) sites are rigidly anchored on the surface
of the $A$ ($B$) particles, as described in
Fig.~\ref{fig:particle}. Sites on $A$ particles can only react
with sites on $B$ particles. Every time, during the dynamic
evolution, the distance between two mutually--reactive sites
becomes smaller than a predefined distance $\delta=0.2 \sigma$, a
new bond is formed between the particles. To model irreversible
gelation, once a bond is formed, it is made irreversible by
switching on an infinite barrier at distance $r^{ij}_{AB}=\delta$
between the sites $i$ and $j$ involved, which prevents both the
formation of new bonds at the same sites and the breaking of the
existing one. Hence, the newly formed bond cannot break any
longer, and the maximum distance between the two reacted sites is
constrained to remain smaller than $\delta$. Similarly, the two
reacted sites cannot form further bonds with available unreacted
sites. The composition of the system and the particle
functionality are such that the reactive sites of type $A$ and
$B$ are initially present in equal number,
$f_{A}N_{A}=f_{B}N_{B}$, which in principle allows the formation
of a fully bonded state in which all the sites have reacted. This
offers a way to properly define the extent of reaction as the
ratio $p$ between the number of bonds present in a configuration
and the maximum number of possible bonds $f_{A}N_{A}$.
Between bond--formation events, the system propagates according to
Newtonian dynamics at temperature $T=1.0$. As in standard
event--driven codes, the configuration of the system is propagated
from one collisional event to the next one. Note that temperature
only controls the time scale of exploration of space, by
modulating the particles' average velocity. An average over 40
independent starting configurations is performed to improve
statistics.
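The stoichiometric bookkeeping described above is simple enough to sketch in a few lines of code (a minimal illustration of the definitions, not the event--driven simulation code itself; the function name is ours):

```python
# Composition of the simulated 5:2 binary mixture
N_A, f_A = 480, 5    # pentafunctional A ellipsoids
N_B, f_B = 1200, 2   # bifunctional B ellipsoids

# Reactive sites of type A and B are initially present in equal number,
# so a fully bonded state is possible in principle.
assert f_A * N_A == f_B * N_B == 2400

def extent_of_reaction(n_bonds):
    """Extent of reaction p: bonds present over the maximum f_A * N_A."""
    return n_bonds / (f_A * N_A)

print(extent_of_reaction(1200))   # half of the possible bonds formed -> 0.5
```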
\begin{figure}
\includegraphics[width=10 cm]{figure2.eps}
\vspace{-0.8 cm} \caption{\label{fig:conv} Time dependence of the
fraction of bonds $p$. Symbols: simulation results (averaged over
40 independent realizations). For the chosen stoichiometry, $p$
coincides with the reacted fraction of $A$ reactive sites, i.e.
the $A$ conversion, or equivalently with the reacted fraction of
$B$ sites, i.e. the $B$ conversion. $p=1$ would indicate that all
possible bonding sites have reacted. Time is measured in
arbitrary units. Line: $p(t)=kt/(1+kt)$, with the fit--parameter
$k$ fixing the time scale. This functional form is expected when
any pair of reactive groups in the system is allowed to react,
but loops do not occur in finite size
clusters.\cite{VanDongenJSP1997}}
\end{figure}
\section{Results}
In the starting configurations no bonds are present by
construction. As a function of time, the fraction $p$ of formed
bonds ---a measure of the state of advancement of the reaction---
increases monotonically, until most of the particles are
connected in one single cluster (Figure~\ref{fig:conv}). As a
result, $p$ saturates around $0.86$, despite the fact that an
equal number of reactive sites of type $A$ and $B$ is initially
present in the system.
Flory and Stockmayer\cite{FloryBOOK, StockmayerJPS1952} laid out
the basic relations between extent of reaction and resulting
structure in step polymerizations, on the assumptions that all
functional groups of a given type are equally reactive, all
groups react independently of one another, and that ring
formation does not occur in molecular species of finite size.
Only when $p$ exceeds a critical value $p_{c}$ can infinitely large
molecules grow.\cite{FloryBOOK} In this respect the FS theory
describes the gelation transition as the random percolation of
permanent bonds on a loopless lattice.\cite{Stauffer1992} The
present model satisfies the conditions of equal and independent
reactivity of all reactive sites. The absence of closed bonding
loops in finite size clusters is not a priori implemented; as we
will show in the following, however, such a condition ---favored
by the poor flexibility of the bonded particles and their
elongated shape, the absence of an underlying lattice and the
asymmetric location of the reactive sites--- is valid in a
surprisingly wide region of $p$ values.
The FS theory predicts the $p$ dependence of the cluster size
distribution in the very general case of a mixture of monomers
bearing mutually reactive groups.\cite{StockmayerJPS1952} In the
present case, the number $n_{lm}$ of clusters containing $l$
bifunctional particles and $m$ pentafunctional ones can be
written as
\begin{eqnarray}
n_{lm}=N_{B} N_{A} p^{l+m-1}(1-p)^{3m+2}
w_{lm}\label{eq:nlm}\\
w_{lm}=\frac{(4m)!} {(l-m+1)!(4m-l+1)!m!}\nonumber
\end{eqnarray}
and the number of clusters of size $s$ is obtained by summing
over all contributions such that $l+m=s$, i.e., $n_{s}=\sum
_{lm,l+m=s}n_{lm}$. As shown in Figure~\ref{fig:distr}a, on
increasing $p$ the $n_{s}$ distribution becomes broader and
broader and develops a power--law tail. The theory predicts a
gelation transition at
$p_c=1/\sqrt{(f_A-1)(f_B-1)}=0.5$.\cite{FloryBOOK,
StockmayerJPS1952} Even close to $p=0.5$, the FS prediction
---which conforms to the prediction of random percolation on a
Bethe (loopless) lattice where $n_{s}\sim s^{-2.5}$ at the
percolation threshold--- is consistent with the numerical data.
On further increasing $p$ (Figure~\ref{fig:distr}b), the
distribution of finite size clusters progressively shrinks, and
only small clusters survive. Data show that Eq.~\ref{eq:nlm}, with
no fitting parameters, predicts rather well the numerical
distribution at any extent of polymerization, both below and
above the point where the system is expected to percolate,
including details such as the local minimum at $s=2$.
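The FS prediction can be evaluated directly from Eq.~(\ref{eq:nlm}) as printed, together with the mean--field gel point quoted above. This is a sketch: the prefactor and normalization follow the printed equation, and the function names are ours.

```python
from math import factorial, sqrt

N_A, f_A = 480, 5
N_B, f_B = 1200, 2

def w(l, m):
    """Combinatorial factor w_lm of Eq.(nlm); nonzero only for m-1 <= l <= 4m+1."""
    if l - m + 1 < 0 or 4*m - l + 1 < 0:
        return 0.0
    return factorial(4*m) / (factorial(l - m + 1) * factorial(4*m - l + 1)
                             * factorial(m))

def n_lm(l, m, p):
    """Clusters with l bifunctional and m pentafunctional particles, Eq.(nlm)."""
    return N_B * N_A * p**(l + m - 1) * (1 - p)**(3*m + 2) * w(l, m)

def n_s(s, p):
    """Clusters of size s: sum of n_lm over all l + m = s."""
    return sum(n_lm(l, s - l, p) for l in range(s + 1))

# Mean-field gel point for the 5:2 mixture
p_c = 1 / sqrt((f_A - 1) * (f_B - 1))
print(p_c)   # 0.5
```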
\begin{figure}
\includegraphics[width=7.5 cm]{figure3.eps}
\caption{\label{fig:distr} Distribution of finite size clusters
$n_{s}$ for different fraction of bonds $p$ (a) below and (b)
above percolation. Points are simulation data and lines are the
corresponding theoretical curves according to FS. The
dashed line represents the power law $n_{s}\sim s^{-2.5}$.}
\end{figure}
To compare with the mean--field prediction of gelation at
$p_{c}=0.5$, we examine the connectivity properties of the
aggregates for each studied value of $p$, searching for the
presence of clusters which are infinite under periodic boundary
conditions. We find that configurations at $p=0.497\pm 0.008$
have not yet developed a percolating structure while
configurations at $p=0.513\pm 0.007$ have. Hence, we locate the
gel point at $p_{c}=0.505\pm 0.007$, in close agreement with the
theoretical mean--field expectations. Beyond this point, the
material which belongs to the infinite (percolating) network
$N_{\infty}$ constitutes the \emph{gel}, while the soluble
material formed by the finite clusters which remain interspersed
within the giant network constitutes the \emph{sol}.
Figure~\ref{fig:GELsol}a shows that the fraction of gel
$P_{\infty}=N_{\infty}/N$ and even its partition between
particles of type $A$ ($P_{A,\infty}=N_{A,\infty}/N$) and $B$
($P_{B,\infty}=N_{B,\infty}/N$) calculated according to the FS
theory,\cite{MillerMACRO1976} properly represent the simulation
results throughout the polymerization process. Indeed, the
proportion of $B$ particles to $A$ particles in gel and in sol is
a function of $p$ (see inset). The relative amount of $B$
particles in the sol ($N_{B,sol}/N_{A,sol}$) increases as a
consequence of the preferential transfer of the $A$ particles
(having more reactive sites) to the gel, so that the
fraction $p_{sol}$ of sites $B$ in the sol that have reacted
(extent of reaction in the sol) differs from the total fraction
$p$ of sites $B$ reacted (extent of reaction in the system). The
constitution of the sol (Figure~\ref{fig:distr}(b)) turns out to be
the same as that of a smaller system made of $N_{A,sol}$
particles of type $A$ and $N_{B,sol}$ particles of type $B$
reacted up to the extent $p_{sol}$.\cite{FloryBOOK, MillerPES1979}
\begin{figure}
\includegraphics[width=7.5 cm]{figure4.eps}
\caption{\label{fig:GELsol} (a) Gel fraction $P_{\infty}$ and its
partition between particles of type $A$ ($P_{A,\infty}$) and $B$
($P_{B,\infty}$) vs the fraction of bonds $p$ (i.e. the extent of
reaction in the system). The inset shows the proportion of $B$
particles to $A$ particles in gel ($N_{B,\infty}/N_{A,\infty}$
--- left axis) and in sol ($N_{B,sol}/N_{A,sol}$
--- right axis) vs $p$. (b) Number and weight average cluster
size ($x_{n}$ and $x_{w}$) prior to gelation and for the sol
after gelation vs the fraction of bonds $p$. (c) Relation between
the number of finite size clusters (molecules in the sol)
${n}_{sol}$ and the fraction of bonds $p$. The inset shows the
number of loops ${n}_{loop}$ vs $p$. In all panels, symbols are
simulation results and solid lines FS predictions.}
\end{figure}
The evolution of the cluster size distribution can be quantified
by the number ($x_n$) and weight average ($x_w$) cluster sizes of
the sol, defined as $x_{n}=\sum_{s}sn_{s}/\sum_{s}n_{s}$ and
$x_{w}=\sum_{s}s^{2}n_{s}/\sum_{s}sn_{s}$. The numerical results
and the FS theoretical predictions are shown in
Figure~\ref{fig:GELsol}b. Both averages increase before gelation;
then, they regress in the sol existing beyond the gel point,
since large clusters are preferentially incorporated into the gel
network. While $x_{n}$ increases only slightly up to the gel
point, never exceeding 3.5, $x_{w}$ increases sharply in the
proximity of $p_{c}$ and decreases just as sharply beyond this
point, consistent with the fact that $x_w$ is singular at
percolation, being dominated by large clusters. Again, simulation
data agree very well with FS predictions. Discrepancies between
theory and simulation ---which reveal the mean--field character of
the FS theory--- only concern the range of $p$ very near $p_{c}$,
suggesting that for this model the crossover from mean--field to
percolation is very close to the gel point --- i.e., the Ginzburg
zone\cite{ginzburg} near the gel point, where non--mean field
effects are important, is very limited. A finite--size study very
close to the critical point would be required to accurately
locate the percolation point and the critical exponents, a
calculation beyond the scope of the present work.
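The two averages are straightforward to compute from any cluster size distribution; a minimal sketch (the function name and toy distribution are ours):

```python
def averages(n_s):
    """Number- and weight-average cluster sizes from a size distribution
    n_s = {size: number of clusters of that size}."""
    clusters  = sum(n_s.values())                   # sum_s n_s
    particles = sum(s * n for s, n in n_s.items())  # sum_s s n_s
    x_n = particles / clusters
    x_w = sum(s**2 * n for s, n in n_s.items()) / particles
    return x_n, x_w

# Toy distribution: four monomers and three dimers
x_n, x_w = averages({1: 4, 2: 3})
print(x_n, x_w)   # 10/7 (about 1.43) and 16/10 = 1.6
```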
From a physical point of view, the change from mean--field to
percolation universality class is rooted in the presence of
bonding loops in the clusters of finite size, which precludes the
possibility of predicting the cluster size distribution. The
realistic estimate of the percolation threshold and the agreement
between theory and simulation (Fig.~\ref{fig:distr}) suggest that
the present model strongly disfavors the formation of loops in
finite clusters, at least for cluster sizes probed in this
finite--size system. As a test, we evaluate the total number of
finite (sol) clusters ${n}_{sol}=\sum_{s}n_{s}$ as a function of
the extent of reaction. If finite clusters do not contain closed
loops, ${n}_{sol}$ equals the number of particles in the sol
minus the number of bonds, since each added bond decreases the
number of clusters by one. This applies equally to the system
preceding gelation, or to the sol existing beyond the gel point.
Thus, at $p<p_{c}$ (pre--gelation) the relation between
${n}_{sol}$ and $p$ is linear, i.e. ${n}_{sol}=N - 2N_{B}p$. At
$p>p_{c}$ (post--gelation), ${n}_{sol}$ can be calculated as
${n}_{sol}= N_{sol} - 2N_{B,sol}p_{sol}$, where $N_{sol}$ is the
number of particles in the sol fraction ($N_{B,sol}$ of which
bear reactive sites of type $B$), and $p_{sol}\neq p$ is the
reacted fraction of sites B in the sol. Hence, the relation
between ${n}_{sol}$ and $p$ crosses to a nonlinear behavior, so
that the number of clusters becomes one when $p=1$. As shown in
Figure~\ref{fig:GELsol}c, the number of finite clusters found in
the simulation data conforms to the theoretical expectation for
all $p$ values, both below and above the gel point. Hence, as a
first approximation, loops are only present in the infinite
(percolating) cluster and do not significantly alter the
distribution of the finite size clusters, both below and above
percolation. The difference between ${n}_{sol}$ found in
simulation and the value predicted by the FS theory counts the
number of loops in the sol, ${n}_{loop}$. Such a quantity is
shown in the inset of Figure~\ref{fig:GELsol}c. The maximum value
of ${n}_{loop}$, achieved for $p \sim p_{c}$, corresponds to
$0.2\%$ of the total number of bonds. This demonstrates that
intramolecular bonds within finite clusters can be neglected,
consistent with the Flory hypothesis for the post--gelation
regime\cite{rubinstein}. Figure~\ref{fig:GELsol}c also shows that
the linear relation between ${n}_{sol}$ and $p$ remains valid
after the gel point (up to $p \approx 0.6$). This finding is in
full agreement with recent experimental
studies\cite{CorezziPRL2005, CorezziJPCM2005, VolponiMACRO2007}
on the polymerization of bifunctional diglycidyl--ether of
\mbox{bisphenol--A} with pentafunctional diethylenetriamine, also
suggesting that the number of cyclic connections in the infinite
cluster is negligible well above $p_{c}$.
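The bond counting used above (each bond that joins two previously disconnected clusters lowers the cluster count by one, while a bond within an existing cluster closes a loop instead) can be illustrated with a toy union-find structure. This is a sketch with hypothetical names, not the analysis code used for the simulation data.

```python
class Forest:
    """Toy union-find: tracks how bonds change the number of clusters and
    counts bonds that close intramolecular loops."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.n_clusters = n
        self.n_loops = 0

    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]  # path halving
            i = self.parent[i]
        return i

    def add_bond(self, i, j):
        ri, rj = self.find(i), self.find(j)
        if ri == rj:
            self.n_loops += 1          # intramolecular bond: cluster count unchanged
        else:
            self.parent[ri] = rj       # merge clusters: count drops by one
            self.n_clusters -= 1

forest = Forest(6)
for i, j in [(0, 1), (1, 2), (3, 4)]:  # three tree-like bonds
    forest.add_bond(i, j)
forest.add_bond(0, 2)                  # closes a loop
print(forest.n_clusters, forest.n_loops)  # 3 and 1: clusters = N - (bonds - loops)
```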
As a further confirmation of the absence of closed loops we
compare the time evolution of $p$ with the prediction of the
mean--field kinetic modeling of polymerization, based on the
solution of the Smoluchowski coagulation
equation.\cite{ZiffJSP1980, GalinaAPS1998} For loopless
aggregation, $p(t)$ is predicted to follow
\begin{equation}
p(t)=\frac{kt}{1+kt},
\end{equation}
where the fit--parameter $k$, which has the meaning of a bond
kinetic constant, fixes the time scale of the aggregation process.
The time evolution of $p$ is found to perfectly agree with the
theoretical predictions\cite{VanDongenJSP1997} (see
Figure~\ref{fig:conv}) up to $p \approx 0.6$, i.e. beyond $p_c$.
While the prediction would suggest that $p(t \rightarrow
\infty)=1$ (dashed line in Figure~\ref{fig:conv}), the simulation
shows that the formation of a percolating structure prevents the
possibility of completing the chemical reaction, leaving a finite
number of unreacted sites frozen in the structure. As shown above
(Figure~\ref{fig:distr}), even in this frozen state the cluster
size distribution is provided by Flory's post--gel
hypothesis. Such a feature is not captured by the mean--field
Smoluchowski equation in which spatial information in the kernels
is neglected.
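The closed form $p(t)=kt/(1+kt)$ solves the second--order rate law $dp/dt=k(1-p)^{2}$ for a stoichiometric mixture of reactive sites, which can be checked numerically. The sketch below is ours: the RK4 integrator and step size are arbitrary choices, not taken from the paper.

```python
def p_exact(t, k=1.0):
    """Closed-form conversion for loopless mean-field kinetics."""
    return k * t / (1 + k * t)

def p_numeric(t_max, k=1.0, dt=1e-3):
    """RK4 integration of dp/dt = k (1 - p)^2, the rate law solved by
    p(t) = kt / (1 + kt)."""
    rate = lambda p: k * (1 - p)**2
    p = 0.0
    for _ in range(round(t_max / dt)):
        k1 = rate(p)
        k2 = rate(p + 0.5 * dt * k1)
        k3 = rate(p + 0.5 * dt * k2)
        k4 = rate(p + dt * k3)
        p += dt * (k1 + 2*k2 + 2*k3 + k4) / 6
    return p

print(p_exact(4.0), p_numeric(4.0))   # both close to 0.8
```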
\section{Conclusions}
A binary mixture of patchy hard ellipsoids undergoing chemical
gelation displays a very large interval of the extent of reaction
in which parameter--free mean--field predictions are extremely
accurate. The connectivity properties of the model are properly
described --- without any fitting parameter --- both below and
above percolation by the mean--field loopless classical FS
theory.\cite{FloryBOOK, VanDongenJSP1997} The mean--field cluster
size distribution for the sol component is found to be valid for
all values of the extent of reaction, both below and above the
gel point, suggesting that for the present model Flory's
hypothesis for the post--gelation regime properly describes the
irreversible aggregation phenomenon, despite the explicit
consideration of the excluded volume.
The absence of loops in finite size clusters, which is not
assumed by the model, results from the specific geometry of the
bonding pattern and from the presence of the excluded volume
interactions, disfavoring the formation of ordered bonding
domains. Hence, the geometry of the particles and the location of
the reactive sites on them may play a significant role in the
stabilization of the mean--field universality class with respect
to the percolation universality class,\cite{StaufferAPS1982}
locating the crossover between the two classes\cite{ginzburg}
very close to the gel point. The present study shows that
irreversibly aggregating asymmetric hard--core patchy particles,
even if excluded volume effects are properly taken into account,
may provide a close realization of the FS predictions in a wide
range of $p$ values. The model thus offers a starting point ---for
which theoretical predictions are available--- for further
investigations of the gelation process and for a more precise
control over the structure and connectivity of the gel state. In
particular, full and detailed structural information is available
along with the dynamics of the system, which is potentially
useful to investigate the relation between structural
heterogeneity and heterogeneous dynamics,\cite{VolponiMACRO2007}
and to shed light on the microscopic aspects of the dynamic
crossover from short\cite{CorezziPRL2006} to long relaxation
times,\cite{CorezziNATURE2002} during irreversible polymerization.
While the structural properties are all well--described by the FS
theory, the evolution of the extent of reaction, modeled via the
coagulation Smoluchowski equation, is properly described by the
theory only in the pre--gelation region. After gelation, kinetic
constraints due to the absence of mobility of the reactive sites
anchored to the percolating cluster or to smaller clusters
trapped inside the percolating matrix prevent the completion of
the reaction and the extent of reaction freezes (to $p\approx
0.86$ in the present case) before reaching one (as Eq.~2 would
predict). A proper modeling of the long--time behavior will
require the insertion of spatial information inside the kernels
entering the Smoluchowski equation. The freezing of the extent of
reaction at long times correspondingly freezes the cluster size
distribution to that predicted by Flory for the reached $p$ value.
In the present model, the entire polymerization process proceeds
via a sequence of FS cluster size distributions, determined by
$p(t)$. Recently, it has been shown that the FS theory also
properly describes equilibrium clustering in patchy particle systems
when $p$ is a function of temperature and
density.\cite{BianchiJPCB2007} It is thus tempting to speculate
that for loopless models, irreversible evolution can be put in
correspondence with a sequence of \textit{equilibrium} states
which could be sampled in the same system for finite values of
the ratio between temperature and bonding depth. If this is
indeed the case, chemical gelation could be formally described as
a deep quench limit of physical gelation. This correspondence
would facilitate the transfer of knowledge from recent studies of
equilibrium gels\cite{BianchiPRL2007, ZaccarelliREVIEW2007} to
chemical ones. Concepts developed for irreversible aggregation of
colloidal particles, like diffusion-- and reaction--limited
cluster--cluster aggregation, could be connected to chemical
gelation. Work in this direction is ongoing.
We acknowledge support from MIUR-PRIN. We thank P. Tartaglia for interesting discussions.
\section*{ACKNOWLEDGMENTS}
This work was partly supported by the Major State 973 Program of China No.~2013CB834400, Natural Science Foundation of China under Grants No.~11335002, No.~11375015, and No.~11621131001, the Overseas Distinguished Professor Project from Ministry of Education of China No.
MS2010BJDX001, the Research Fund for the Doctoral Program of Higher Education of China under Grant No.~20110001110087, and the DFG (Germany) cluster of excellence \textquotedblleft Origin and Structure of the Universe\textquotedblright\ (www.universe-cluster.de).
H.L. would like to thank the RIKEN iTHES project and iTHEMS program.
\bibliographystyle{apsrev4-1}
\section{Introduction}
In string theories with spacetime supersymmetry, two-dimensional conformal field theories (CFTs) describing the superstring world-sheets are also symmetric under two-dimensional superconformal transformations including two-dimensional supersymmetry. Physics in higher-dimensional ambient spacetime is described by two-dimensional superconformal field theories. The goal of celestial holography is to describe physics in four-dimensional asymptotically flat spacetime as a hologram on two-dimensional celestial sphere, in the framework of celestial conformal field theory (CCFT) \cite{Strominger:2017zoo,Raclariu:2021zjz,Pasterski:2021rjz}. In this context, there is an obvious question: is there any relation between four-dimensional supersymmetry and some type of supersymmetry in CCFT, hereafter called celestial supersymmetry?
The symmetries of CCFT reflect the BMS symmetries of asymptotically flat spacetime. In
Ref.\cite{Fotopoulos:2020bqj}, we discussed (one type of) supersymmetric extensions of BMS algebra.
We found that spacetime supersymmetry is disconnected from two-dimensional supersymmetry.
The reason was that the spacetime supersymmetry algebra closes on supertranslations, which are genuinely nonholomorphic on the celestial sphere, while the two-dimensional superconformal algebras have factorized holomorphic and antiholomorphic parts. More recently, however, the role of (super)translations came under scrutiny because the CCFT correlators obtained by taking Mellin transforms of scattering amplitudes are overconstrained by momentum conservation \cite{Mizera:2022sln}.
One possible resolution of this problem relies on introducing background fields that violate translational invariance \cite{Fan:2022vbz,Fan:2022kpp,Casali:2022fro,PipolodeGioia:2022exe,Gonzo:2022tjm,Melton:2022fsf,Costello:2022wso,Banerjee:2023rni}. With translational symmetry broken in this way, our question can be rephrased as: Are there some background field configurations that admit celestial supersymmetry? In this work, we give an affirmative answer to this question.
In Ref.\cite{Stieberger:2022zyk}, we considered the Yang-Mills theory coupled to a complex dilaton field and introduced a pointlike source creating a dilaton shockwave. This background breaks four-dimensional translational symmetry in a controllable way and supplies external momentum to the gauge system. Multi-gluon celestial MHV amplitudes evaluated in such a background have simple, factorized structure.
They factorize into the holomorphic current correlator times the correlator of a Liouville theory with an infinite central charge.\footnote{The factorizations between the current sector and the other part that decouples from the current part was suggested earlier in Ref.\cite{Nande:2017dba}.}
In the current sector, the correlators are holomorphic. They contain all the information about the spins of celestial primaries and the gauge group structures. In the Liouville sector, we encountered the correlation functions of ``light'' operators, evaluated in the limit of the Liouville coupling $b\rightarrow 0$ (infinite central charge).
The factorization of celestial amplitudes into the current and Liouville sectors allows for addressing the question posed at the very beginning in a simpler way. We consider supersymmetric Yang-Mills theory and show that with the suitably chosen dilaton background fields, the current sector exhibits (1,0) celestial supersymmetry. For comparison, in heterotic superstring theory, the left-moving current sector is not supersymmetric, while the right-moving part has a similar world-sheet supersymmetry.
Other aspects of fermionic fields and supersymmetries in CCFT have been discussed in Refs.\cite{Iacobacci:2020por,Narayanan:2020amh,Pasterski:2020pdk,Jiang:2021xzy,Brandhuber:2021nez,Hu:2021lrx,Ferro:2021dub,Himwich:2021dau,Pano:2021ewd,Jiang:2021ovh,Ahn:2021erj,Ahn:2022oor,Bu:2021avc,Ahn:2022vfw,Hu:2022bpa,Banerjee:2022lnz}.
\section{Review of (1,0) superconformal symmetry in two dimensions}
In this section, we give a brief review of $(1,0)$ superconformal symmetry in two dimensions, by following Refs.\cite{Bershadsky:1985dq,Friedan:1984rv}. See also Ref.\cite{Polchinski:1998rr}.
There are two superconformal algebras, Neveu-Schwarz (NS) algebra and Ramond algebra, corresponding to two different periodicities of the fermionic operators. All correlators discussed in the present work
are single-valued, with trivial monodromy when one fermionic operator circulates around another. For that reason, only the NS algebra is directly relevant to our work, and we will not discuss the Ramond algebra.
The $(1,0)$ superconformal algebra is generated by the super stress-energy tensor
\begin{align}
\mathbf{T}(Z) = T_F(z) +\theta\, T_B(z) \, ,
\end{align}
where $Z=(z,\theta)$ is the superspace coordinate, $T_B$ is the usual stress-energy tensor, and $T_F(z)$ is its superpartner with conformal weights $\Delta=h=\frac{3}{2}, \bar{h} = 0$. The OPEs among them are
\begin{align}
T_B(z_1)\, T_B(z_2) &= \frac{c}{2\, z_{12}^4} +\frac{2}{z_{12}^2}T_B(z_2) +\frac{1}{z_{12}}\partial_z T_B(z_2) \, , \label{eq:TT}\\
T_B(z_1) \, T_F(z_2) & = \frac{3}{2\, z_{12}^2}T_F(z_2) +\frac{1}{z_{12}} \partial_z T_F(z_2) \, , \label{eq:TTF}\\
T_F(z_1) \, T_F(z_2) &= \frac{2c}{3 \, z_{12}^3} + \frac{2}{z_{12}} T_B(z_2) \, . \label{eq:TFTF}
\end{align}
For $T_B(z)$ and $T_F(z)$ the Laurent expansions are
\begin{align}
T_B(z) &= \sum_{m=-\infty}^{\infty} \frac{L_m}{z^{m+2}} \, , \\
T_F(z) &= \sum_{r\in \mathbb{Z}+ \frac{1}{2}} \frac{G_r}{z^{r+\frac{3}{2}}} \, .
\end{align}
Note that in the NS case, the sum is over half-integer $r$.
One finds the following NS algebra from the OPEs Eqs.(\ref{eq:TT})-(\ref{eq:TFTF})
\begin{align}
[L_m, L_n] &= (m-n)\, L_{m+n} +\frac{c}{12}(m^3-m)\delta_{m,-n} \, , \\
\{G_r, G_s\} &= 2 \, L_{r+s} +\frac{c}{12}(4r^2-1) \delta_{r,-s} \, , \\
[L_m, G_r] &= \frac{m-2r}{2} G_{m+r} \, .
\end{align}
The global superconformal group OSP(2|1) is generated by $L_{-1}, \, L_0, \, L_1,\, G_{-1/2}, \, G_{1/2}$. \par
By using the superfield formalism, a NS primary superfield with holomorphic conformal weight $\Delta = h$ can be represented as
\begin{align}
\Phi_{\Delta}(z,\theta) = \phi_{\Delta}(z) +\theta \, \psi_{\Delta+1/2}(z) \, .
\end{align}
The transformation properties of the NS primary superfield under $T_B$ and $T_F$ are determined by the following OPEs
\begin{align}
T_B(z_1) \, \phi(z_2) &= \frac{\Delta}{z_{12}^2} \phi(z_2) +\frac{1}{z_{12}}\partial_z \phi(z_2) \, , \\
T_F(z_1) \, \phi(z_2) &= \frac{1}{z_{12}}\psi(z_2) \, ,\\
T_F(z_1) \, \psi(z_2) &= \frac{2\Delta}{z_{12}^2} \phi(z_2) +\frac{1}{z_{12}}\partial_z \phi(z_2) \, .
\end{align}
By using the mode expansion of $T_B$ and $T_F$, one can obtain the following commutators:
\begin{align}
[L_n, \Phi(z,\theta)] &= \Big(z^{n+1}\partial_z +(n+1) z^n ( \Delta+\frac{1}{2}\theta\partial_\theta)\Big) \Phi(z,\theta) \, , \label{eq:LmPhi} \\
[G_{n+\frac{1}{2}} ,\Phi(z,\theta) ] &= \Big( z^{n+1}(\partial_\theta -\theta\partial_z) -(n+1)\, z^n 2 \, \Delta \, \theta \Big) \, \Phi(z,\theta) \, .\label{eq:GmPhi}
\end{align}
For the global generators $L_{-1}, \, L_0, \, L_1,\, G_{-1/2}, \, G_{1/2}$, Eqs.(\ref{eq:LmPhi})-(\ref{eq:GmPhi}) give us the global superconformal Ward identities of the supercorrelators.
In particular, the Ward identities generated by $G_{-1/2}$ and $G_{1/2}$ will be used in the following sections.
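As a consistency check, the global OSP(2|1) relations can be verified explicitly by representing the generators through the differential operators of Eqs.(\ref{eq:LmPhi})-(\ref{eq:GmPhi}) acting on the components $(\phi,\psi)$; an overall minus sign is included so that the operators compose homomorphically rather than as an anti-representation. The \texttt{sympy} sketch below is ours: the test weight $\Delta$ and the function names are arbitrary choices.

```python
import sympy as sp

z = sp.symbols('z')
half = sp.Rational(1, 2)
D = sp.Rational(7, 3)        # generic test weight Delta (arbitrary choice)
f = sp.Function('f')(z)      # phi component, weight Delta
g = sp.Function('g')(z)      # psi component, weight Delta + 1/2
c0 = (f, g)

def L(n):
    # L_n of Eq.(LmPhi) on (phi, psi); the overall minus sign makes the
    # operators compose homomorphically instead of as an anti-representation
    def act(c):
        phi, psi = c
        return (-(z**(n + 1)*sp.diff(phi, z) + (n + 1)*z**n*D*phi),
                -(z**(n + 1)*sp.diff(psi, z) + (n + 1)*z**n*(D + half)*psi))
    return act

def G(r):
    # G_r of Eq.(GmPhi) on components, same sign convention; r = n + 1/2
    n = int(r - half)
    def act(c):
        phi, psi = c
        return (-z**(n + 1)*psi,
                z**(n + 1)*sp.diff(phi, z) + 2*D*(n + 1)*z**n*phi)
    return act

minus = lambda a, b: tuple(x - y for x, y in zip(a, b))
is_zero = lambda c: all(sp.simplify(x) == 0 for x in c)

# [L_1, L_{-1}] = 2 L_0
comm = minus(L(1)(L(-1)(c0)), L(-1)(L(1)(c0)))
assert is_zero(minus(comm, tuple(2*x for x in L(0)(c0))))

# {G_{-1/2}, G_{1/2}} = 2 L_0   (the central term vanishes for r = -s = 1/2)
anti = tuple(x + y for x, y in zip(G(-half)(G(half)(c0)), G(half)(G(-half)(c0))))
assert is_zero(minus(anti, tuple(2*x for x in L(0)(c0))))

# [L_1, G_{-1/2}] = G_{1/2}
comm = minus(L(1)(G(-half)(c0)), G(-half)(L(1)(c0)))
assert is_zero(minus(comm, G(half)(c0)))
print("global OSP(2|1) relations verified on (phi, psi)")
```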
\section{4D theory: $\mathcal{N}=1$ SYM coupled to a massive chiral multiplet}
We consider the following $\mathcal{N}=1$ supersymmetric theory with SYM coupled to a massive chiral dilaton supermultiplet $(\phi,\chi)$. This is a supersymmetric completion of the usual dilaton-YM Lagrangian, see Ref.\cite{Dixon:2004za}:
\begin{align}
\mathcal{L} &= \int d^4\theta \, \Phi^\dagger \Phi +\int d^2\theta \left[ \frac{m_H}{2} \Phi^2 + \frac{1}{4}(1-\frac{4}{\Lambda} \Phi) \, \text{tr} \, W^\alpha W_{\alpha}\right] \nonumber\\
&\qquad +\int d^2\bar{\theta} \left[ \frac{m_H}{2} \Phi^{\dagger2} + \frac{1}{4}(1-\frac{4}{\Lambda} \Phi^{\dagger}) \, \text{tr} \, \overline{W}_{\dot{\alpha}} \overline{W}^{\dot{\alpha}}\right] \label{eq:Lag1}\\
&= F^{\dagger}F -\partial_{\mu}\phi^{\dagger}\partial^{\mu}\phi-i\bar{\chi} \slashed{\partial}\chi+\frac{1}{2} \text{tr} D^2-\frac{1}{4} \text{tr} G_{\mu\nu}G^{\mu\nu}-i\bar{\lambda}\slashed{D}\lambda \nonumber\\
&\quad +\Bigg\{ m_H\Big(F\phi -\frac{1}{2} \chi^2 \Big) -\frac{1}{\Lambda} \Big[ -F\, \text{tr}\lambda \lambda -2\sqrt{2} i\chi^{\alpha} \text{tr} \Big( \lambda_\alpha D-(\sigma_{\mu\nu})_{\alpha}^{~\beta}\lambda_\beta G^{\mu\nu}_{SD}\Big) \nonumber\\
& \qquad\qquad\qquad \qquad \quad\quad ~~~~~~~~~+ \phi\, \text{tr}\Big( -2i\bar{\lambda}\slashed{D}\lambda-G_{SD \, \mu\nu}G^{\mu\nu}_{SD}\Big) \Big] + \, \text{h.}\, \text{c.} \Bigg\} \, , \label{eq:Lag2}
\end{align}
where $\Lambda$ is a parameter with mass dimension 1. In the standard model, it is related to the VEV of the Higgs field \cite{Dixon:2004za}.
We will focus on the interaction terms that are linear in the coefficient $\frac{1}{\Lambda}$. These include the usual dilaton-YM couplings and the following terms that come from the $F^\dagger F$ term:
\begin{equation}
\mathcal{L}_F = -\Big|m_H \phi -\frac{1}{\Lambda}\lambda\lambda\Big|^2 = -m_H^2 \phi^{\dagger}\phi +\frac{1}{\Lambda} m_H (\phi^{\dagger}\lambda\lambda +\phi \bar{\lambda}\bar{\lambda}) +\mathcal{O}\Big(\frac{1}{\Lambda^2}\Big) \, . \label{eq:phi2lambdas}
\end{equation}
We will keep $m_H$ explicitly. Later, we will consider the massless limit ($m_H\rightarrow 0$) of the massive case.
We are interested in the amplitudes with a number of gauge particles and one external dilaton, particularly in the amplitudes related by supersymmetry to the amplitudes with gluons in the MHV helicity configuration. We will be considering {\em partial} amplitudes associated with one particular group factor. There are \underline{two sets of such amplitudes}.
\underline{Set 1} contains the MHV gluons amplitudes with one dilaton,
\begin{align}
A_2(1^{-1},2^{-1},\phi) &= -\frac{1}{\Lambda} \langle 12\rangle^2 \, ,\\
A_n(1^{-1},2^{-1},3^{+1},\dots n^{+1},\phi) &= \frac{1}{\Lambda} \frac{\langle 12\rangle^4}{\langle 12\rangle \langle 23\rangle \dots \langle n1\rangle} \, , \label{eq:MHVg}
\end{align}
and all other amplitudes that are related to Eq.(\ref{eq:MHVg}) by 4D SUSY Ward identities
\cite{Parke:1985pn,Taylor:2017sph}.
Explicit expressions for the amplitudes involving gauginos are written in Ref.\cite{Badger:2004ty}.\footnote {Supersymmetry transformations of the on-shell fields are written in the appendix of Ref.\cite{Dixon:2004za}.}
All these amplitudes contain the same numbers of helicity ${+}1/2$ and ${-}1/2$ gauginos.
\underline{Set 2} contains the amplitudes originating from the gaugino chirality (R-symmetry) violating $F$-terms in Eq.(\ref{eq:phi2lambdas}). They are
\begin{align}
A_2(1^{-\frac{1}{2}},2^{-\frac{1}{2}}, \phi^*) &= \frac{1}{\Lambda} m_H \, \langle 12\rangle \, , \\
A_3(1^{-\frac{1}{2}},2^{-\frac{1}{2}},3^{+1},\phi^*) &= -\frac{1}{\Lambda} m_H\, \frac{ \langle 12\rangle^2}{\langle 23\rangle \langle 31\rangle} \, , \\
A_n(1^{-\frac{1}{2}},2^{-\frac{1}{2}},3^{+1}, \dots n^{+1}, \phi^*) &= -\frac{1}{\Lambda} m_H \frac{\langle 12 \rangle^3}{\langle 12\rangle \langle 23\rangle \dots \langle n1\rangle} \, . \label{eq:MHV2gim}
\end{align}
The $n$-point amplitude written above can be derived recursively by using BCFW recursion relations \cite{Britto:2005fq}. All these amplitudes have a net surplus of two negative-helicity gauginos.
\section{Celestial amplitudes with a universal Liouville sector}
In this section, we identify the sources of the dilaton for the amplitudes from set 1 and set 2 separately, such that in the limit $m_H\rightarrow 0$, the resultant celestial MHV amplitudes become related to celestial Liouville theory.
For \underline{Set 1}, we choose
\begin{align}
{\cal J}_1(Q,m_H)_{\phi} = \frac{1}{\Lambda} \, , \label{eq:phisource}
\end{align}
where $\Lambda$ is the mass scale parameter introduced in Eq.(\ref{eq:Lag1}), which determines the strength of the dilaton coupling to the gauge sector.\footnote{In the position space, ${\cal J}(x)_{\phi}\sim\delta^{(4)}(x)$, which corresponds to a pointlike dilaton source.} Note that the source has the correct mass dimension ${-}$1. To compute the celestial amplitudes from set 1, {\em c.f.} Eq.(\ref{eq:MHVg}), we take Mellin transforms with respect to the energies:
\begin{align}
\text{Celes}_1(\Delta_i|J_i)_{m_H} \supset &\int \prod_{i=1}^n d\omega_i \omega_i^{\Delta_i-1} A_n(1^{-1},2^{-1},3^{+1},\dots n^{+1},\phi) \frac{1}{Q^2+m_H^2}{\cal J}_1(Q,m_H)_{\phi} \nonumber\\
&= \frac{1}{\Lambda^2}\int \prod_{i=1}^n d\omega_i \omega_i^{\Delta_i-1} \, \frac{\langle 12\rangle^4}{\langle 12\rangle \langle 23\rangle \dots \langle n1\rangle} \, \frac{1}{Q^2+m_H^2} \, , \label{eq:Celes1massive}
\end{align}
where $\text{Celes}_1(\Delta_i|J_i)_{m_H} $ denote celestial amplitudes from set 1, with conformal dimensions $\Delta_i$ and helicities $J_i$.
Note that in the massless dilaton limit, Eq.(\ref{eq:Celes1massive}) goes back to the amplitudes studied in Ref.\cite{Stieberger:2022zyk}:
\begin{align}
\text{Celes}_1(\Delta_i|J_i) = \lim_{m_H\rightarrow 0} \text{Celes}_1(\Delta_i|J_i)_{m_H} \supset \frac{1}{\Lambda^2}\int \prod_{i=1}^n d\omega_i \omega_i^{\Delta_i-1} \, \frac{\langle 12\rangle^4}{\langle 12\rangle \langle 23\rangle \dots \langle n1\rangle} \, \frac{1}{Q^2} \, . \label{eq:Celes1massless}
\end{align}
For \underline{Set 2}, we choose
\begin{align}
{\cal J}_2(Q,m_H)_{\phi^*} = -\frac{2}{m_H} \, . \label{eq:phi*source}
\end{align}
Then the corresponding celestial amplitudes are
\begin{align}
\text{Celes}_2(\Delta_i|J_i)_{m_H} \supset & \int \prod_{i=1}^n d\omega_i\, \omega_i^{\Delta_i-1} A_n(1^{-\frac{1}{2}},2^{-\frac{1}{2}},3^{+1}, \dots n^{+1}, \phi^*)\frac{1}{Q^2+m_H^2}{\cal J}_2(Q,m_H)_{\phi^*} \nonumber\\
&= \frac{1}{\Lambda} \int \prod_{i=1}^n d\omega_i\, \omega_i^{\Delta_i-1} \, m_H \frac{2\, \langle 12 \rangle^3}{\langle 12\rangle \langle 23\rangle \dots \langle n1\rangle}\, \frac{1}{Q^2+m_H^2} \, \frac{1}{m_H} \nonumber\\
&= \frac{1}{\Lambda} \int \prod_{i=1}^n d\omega_i\, \omega_i^{\Delta_i-1} \, \frac{2 \, \langle 12 \rangle^3}{\langle 12\rangle \langle 23\rangle \dots \langle n1\rangle}\, \frac{1}{Q^2+m_H^2} \, . \label{eq:Celes2massive}
\end{align}
In the limit of $m_H \rightarrow 0$,
\begin{align}
\text{Celes}_2(\Delta_i|J_i) = \lim_{m_H\rightarrow 0} \text{Celes}_2(\Delta_i|J_i)_{m_H} \supset \frac{1}{\Lambda} \int \prod_{i=1}^n d\omega_i\, \omega_i^{\Delta_i-1} \, \frac{2 \, \langle 12 \rangle^3}{\langle 12\rangle \langle 23\rangle \dots \langle n1\rangle}\, \frac{1}{Q^2} \, . \label{eq:Celes2massless}
\end{align}
As we will see below, upon the following choice of the conformal dimensions:
\begin{equation}
\left\{
\begin{array}{ll}
& -1 \, \text{helicity gluon: \, } \Delta = i\lambda \, , \\
& -\frac{1}{2} \, \text{helicity gaugino: \, } \Delta = \frac{1}{2} +i\lambda \, , \\
& +\frac{1}{2} \, \text{helicity gaugino: \, } \Delta = \frac{1}{2} +i\lambda \, , \\
&+1 \, \text{helicity gluon: \, } \Delta = 1+i\lambda \, ,
\end{array}
\right. \label{eq:Deltas}
\end{equation}
two sets of celestial amplitudes $\text{Celes}_1$ and $\text{Celes}_2$, {\em c.f.} Eqs.(\ref{eq:Celes1massless}) and (\ref{eq:Celes2massless}), contain identical Mellin transforms, which means that they have a universal Liouville sector.
The information about the spins of the celestial primaries and the gauge group structures is contained entirely in the current sector.\par
The correspondence between celestial primaries associated to the gauge supermultiplet and the CCFT operators can be summarized as
\begin{align}
O^{a}_{\Delta,J}(z,\bar{z}) = \, O_J^a(z)\, \Gamma(\Delta-J) \, e^{(\Delta-J)b\phi(z,\bar{z})} \, \label{eq:celetoLiouville}
\end{align}
with the implied limit of $b\rightarrow0$. In Eq.(\ref{eq:celetoLiouville}), the choice of the conformal dimensions is given by Eq.(\ref{eq:Deltas}), and the current sector operators $O_J^a$ are
\begin{align}
O_{-1}^a(z) &= \frac{1}{\sqrt{\Lambda}}\,\widehat{j}^a(z), \\ O_{-\frac{1}{2}}^a(z) &= \psi^{-a}(z) , \\
O_{+\frac{1}{2}}^a(z) &=\frac{1}{\sqrt\Lambda}\psi^{+a}(z), \\ O_{+1}^a(z) &= j^a(z) \, .
\end{align}
The role of the $\frac{1}{\sqrt\Lambda}$ normalization factors will become clear in the next section.
Note that the operators $O_J^a$ are purely holomorphic with conformal weights $h=J$. With the choice given by Eq.(\ref{eq:Deltas}), celestial primaries with negative helicities have the same Liouville part,
\begin{align} \Gamma(\Delta-J) \, e^{(\Delta-J)b\phi(z,\bar{z})}=
\Gamma(1+i\lambda)e^{(1+i\lambda)b\phi(z,\bar{z})} \, ,
\end{align}
while for positive helicities
\begin{align} \Gamma(\Delta-J) \, e^{(\Delta-J)b\phi(z,\bar{z})}=
\Gamma(i\lambda) e^{ i\lambda b \phi(z,\bar{z})} \, .
\end{align}
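As a weight bookkeeping check (using the standard Liouville dimension $\Delta_\alpha=\alpha(Q-\alpha)$, $Q=b+b^{-1}$, for exponential primaries, with the identification $\alpha=\tfrac{1}{2}(\Delta-J)b$ assumed here), the Liouville factor becomes a scalar of weights
\begin{align}
h_{L}=\bar h_{L} \xrightarrow{\; b\to 0\;} \frac{\Delta-J}{2} \, ,
\end{align}
so that the product (\ref{eq:celetoLiouville}) carries
\begin{align}
(h,\bar h) = \Big( J+\frac{\Delta-J}{2}\, ,\; \frac{\Delta-J}{2}\Big) = \Big( \frac{\Delta+J}{2}\, ,\; \frac{\Delta-J}{2}\Big) \, ,
\end{align}
the standard weights of a celestial primary with dimension $\Delta$ and spin $J$.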
\subsection{Three points}
We begin by writing explicit expressions for three-particle amplitudes. Set 1 contains
\cite{Badger:2004ty}:
\begin{align}
A_3(1^{-1},2^{-1},3^{+1},\phi) &= \frac{1}{\Lambda}\frac{\langle 12\rangle^3}{\langle 23 \rangle \langle 31\rangle} = \frac{1}{\Lambda}\frac{z_{12}^3}{z_{23}z_{31}} \frac{\omega_1\omega_2}{\omega_3} \, , \\
A_3(1^{-1}, 2^{-\frac{1}{2}}, 3^{+\frac{1}{2}} , \phi) &=\frac{1}{\Lambda} \frac{\langle 12\rangle^3 \langle 13 \rangle}{\langle 12\rangle \langle 23\rangle\langle31\rangle} = \label{h22} \frac{1}{\Lambda}\frac{z_{12}^2}{z_{32}} \frac{\omega_1\omega_2^{1/2}}{\omega_3^{1/2}} \, , \\
A_3(1^{-\frac{1}{2}}, 2^{-1}, 3^{+\frac{1}{2}}, \phi)&= \frac{1}{\Lambda}\frac{\langle 21\rangle^3 \langle 23\rangle}{\langle 12\rangle \langle 23\rangle \langle 31 \rangle} = \frac{1}{\Lambda}\frac{z_{12}^2}{z_{13}} \frac{\omega_1^{1/2}\omega_2}{\omega_3^{1/2}} \, . \label{h33}
\end{align}
In set 2,
\begin{align}
A_3(1^{-\frac{1}{2}},2^{-\frac{1}{2}}, 3^{+1},\phi^*) & = -\frac{m_H}{\Lambda}\frac{\langle 12\rangle^2}{\langle 23\rangle \langle 31\rangle} =- \frac{m_H}{\Lambda}\frac{ z_{12}^2}{z_{23}z_{31}} \frac{\omega_1^{1/2}\omega_2^{1/2}}{\omega_3} \, .\label{h44}
\end{align}
By using Eqs.(\ref{eq:Celes1massless}), (\ref{eq:Celes2massless}), and (\ref{eq:Deltas}), we find the following $m_H\to 0$ limits of three-point celestial amplitudes:
\begin{align}
\text{Celes}_1(&i\lambda_1,i\lambda_2, 1+i\lambda_3|J_1=-1, \, J_2 = -1, \, J_3 = +1) \nonumber\\
=& ~\frac{1}{\Lambda^2}\frac{z_{12}^3}{z_{23}z_{31}} \int \prod_{i=1}^3 d\omega_i \omega_1^{i\lambda_1-1}\omega_2^{i\lambda_2-1}\omega_3^{i\lambda_3} \frac{\omega_1\omega_2}{\omega_3} \frac{1}{Q^2} ~=~ \frac{1}{\Lambda}
\langle \widehat{j}_1\, \widehat{j}_2 \, j_3\rangle\, L_3(z_i,\bz_i) \label{eq:C3point1}
\end{align}
and similar expressions following from Eqs.(\ref{h22}) and (\ref{h33}). Here, the universal Liouville factor \cite{Stieberger:2022zyk}
\be L_3(z_i,\bz_i)=
{2\pi\over\Lambda} \delta\left(\sum_{i=1}^3 \lambda_i\right) \Gamma(-i\lambda_1)\Gamma(-i\lambda_2)\Gamma(1-i\lambda_3) (z_{12}\bar{z}_{12})^{i\lambda_3-1}(z_{23}\bar{z}_{23})^{i\lambda_1} (z_{13}\bar{z}_{13})^{i\lambda_2} \, . \label{eq:C3point11}
\ee
Furthermore, from Eq.(\ref{h44}), we obtain
\begin{align}
\text{Celes}_2 &\left(\frac{1}{2}+i\lambda_1, \, \frac{1}{2}+i\lambda_2, \, 1+i\lambda_3| J_1=-\frac{1}{2}, \, J_2 = -\frac{1}{2}, \, J_3 = +1 \right) \nonumber\\
=& \frac{1}{\Lambda}\frac{2\, z_{12}^2}{z_{23}z_{31}} \int \prod_{i=1}^3 d\omega_i \omega_1^{i\lambda_1-1/2}\omega_2^{i\lambda_2-1/2} \omega_3^{i\lambda_3} \, \frac{\omega_1^{1/2}\omega_2^{1/2}}{\omega_3} \frac{1}{Q^2} = \langle \psi_1^- \, \psi_2^- \, j_3\rangle\,
L_3(z_i,\bz_i) \, .\label{eq:C3point4}
\end{align}
All correlators (\ref{eq:C3point1})-(\ref{eq:C3point4}) have the same three-point Liouville part. The respective current correlators are
\begin{align}
&\langle \widehat{j}_1 \, \widehat{j}_2\, j_3\rangle = \frac{z_{12}^3}{z_{23}z_{31}} \, , \label{eq:3ptmatter1}\\
&\langle \widehat{j}_1 \, \psi_2^- \, \psi_3^+\rangle = \frac{z_{12}^2}{z_{32}} \, ,\label{eq:3ptmatter2}\\
&\langle \psi_1^- \, \widehat{j}_2 \, \psi_3^+\rangle = \frac{z_{12}^2}{z_{13}} \, , \label{eq:3ptmatter3}\\
&\langle\psi_1^-\, \psi_2^- \, j_3\rangle = \frac{2 \, z_{12}^2}{z_{23}z_{31}} \, . \label{eq:3ptmatter4}
\end{align}
The current operators are purely holomorphic $(\bh=0)$, with
the chiral weights equal to the helicities of the corresponding particles:
\begin{equation}
h(\widehat{j})=-1 \, , \quad h(\psi^-)= -\frac{1}{2}, \quad h(\psi^+)= +\frac{1}{2}, \quad h(j)= +1 \, . \label{eq:conwh}
\end{equation}
\subsection{Four points}
We can perform similar computations of the four-point correlators. The relevant amplitudes are \cite{Badger:2004ty}:
\begin{align}
A_4(1^{-1},2^{-1},3^{+1},4^{+1},\phi) &= \frac{1}{\Lambda}\frac{\langle 12\rangle^4}{\langle12\rangle\langle 23\rangle\langle 34\rangle \langle 41\rangle} =\frac{1}{\Lambda} \frac{z_{12}^3}{z_{23}z_{34}z_{41}}\frac{\omega_1\omega_2}{\omega_3\omega_4} \, , \label{eq:psp4pt1}\\
A_4(1^{-1},2^{-\frac{1}{2}},3^{+\frac{1}{2}},4^{+1},\phi) &= \frac{1}{\Lambda}\frac{\langle 12\rangle^2 \langle 13\rangle}{\langle 23\rangle\langle 34\rangle\langle 41\rangle} = \frac{1}{\Lambda}\frac{z_{12}^2 z_{13}}{z_{23}z_{34}z_{41}}\frac{\omega_1\omega_2^{1/2}}{\omega_3^{1/2}\omega_4} \, , \label{eq:psp4pt2}\\
A_4(1^{-1},2^{-\frac{1}{2}},3^{+1},4^{+\frac{1}{2}},\phi) &= -\frac{1}{\Lambda}\frac{\langle 12\rangle^2}{\langle 23\rangle \langle 34\rangle} = - \frac{1}{\Lambda}\frac{z_{12}^2}{z_{23}z_{34}}\frac{\omega_1\omega_2^{1/2}}{\omega_3\omega_4^{1/2}} \, , \label{eq:psp4pt3}\\
A_4(1^{-\frac{1}{2}},2^{-1},3^{+\frac{1}{2}},4^{+1},\phi) &=\frac{1}{\Lambda} \frac{\langle 12\rangle^2}{\langle 34\rangle\langle 14\rangle} = \frac{1}{\Lambda}\frac{z_{12}^2}{z_{34}z_{14}} \frac{\omega_1^{1/2}\omega_2}{\omega_3^{1/2}\omega_4} \, ,\label{eq:psp4pt4}\\
A_4(1^{-\frac{1}{2}},2^{-1},3^{+1},4^{+\frac{1}{2}},\phi) &= \frac{1}{\Lambda}\frac{\langle 12\rangle^2 \langle 24\rangle}{\langle 23 \rangle \langle 34\rangle\langle14\rangle}= \frac{1}{\Lambda}\frac{z_{12}^2z_{24}}{z_{23}z_{34}z_{14}} \frac{\omega_1^{1/2}\omega_2}{\omega_3\omega_4^{1/2}} \, , \label{eq:psp4pt5}\\
A_4(1^{-\frac{1}{2}},2^{-\frac{1}{2}},3^{+\frac{1}{2}}, 4^{+\frac{1}{2}},\phi) &= \frac{1}{\Lambda}\frac{\langle 12\rangle^2}{\langle 23\rangle\langle 14\rangle} = \frac{1}{\Lambda} \frac{z_{12}^2}{z_{23}z_{14}} \frac{\omega_1^{1/2}\omega_2^{1/2}}{\omega_3^{1/2}\omega_4^{1/2}}\, ,\label{eq:psp4pt6}
\end{align}
from set 1 and
\begin{align}
A_4(1^{-\frac{1}{2}},2^{-\frac{1}{2}}, 3^{+1}, 4^{+1},\phi^*) = -\frac{m_H}{\Lambda}\frac{ \langle 12\rangle^3}{\langle 12\rangle \langle 23\rangle \langle 34\rangle \langle 41\rangle} = -\frac{m_H}{\Lambda} \frac{ z_{12}^2}{z_{23}z_{34}z_{41}} \frac{\omega_1^{1/2}\omega_2^{1/2}}{\omega_3\omega_4} \, \label{eq:psp4pt7}
\end{align}
from set 2.
By using Eqs.(\ref{eq:Celes1massless}), (\ref{eq:Celes2massless}), and (\ref{eq:Deltas}), we find
\begin{align}
\text{Celes}_1(&i\lambda_1,i\lambda_2, 1+i\lambda_3, 1+i\lambda_4|J_1=-1, \, J_2 = -1, \, J_3 = +1, \, J_4 =+1) \nonumber\\
=&\frac{1}{\Lambda^2}\frac{z_{12}^3}{z_{23}z_{34}z_{41}} \int \prod_{i=1}^4 d\omega_i \omega_1^{i\lambda_1}\omega_2^{i\lambda_2}\omega_3^{i\lambda_3-1}\omega_4^{i\lambda_4-1}\frac{1}{Q^2} = \frac{1}{\Lambda}\langle \widehat{j}_1 \, \widehat{j}_2\, j_3 \, j_4\rangle L_4(z_i,\bar{z}_i),
\end{align}
and similar expressions following from Eqs.(\ref{eq:psp4pt2}-\ref{eq:psp4pt6}). The current parts of these correlators are different, but they all contain the universal Liouville factor
\begin{align}
L_4(z_i,\bar{z}_i) = \frac{2}{\Lambda} \delta\left(\sum_{i=1}^4 \lambda_i\right) \Gamma(1+i\lambda_1)\Gamma(1+i\lambda_2)\Gamma(i\lambda_3)\Gamma(i\lambda_4) I_4 (z_i,\bz_i)\, ,
\end{align}
where the function $I_4(z_i,\bz_i)$ is written explicitly in Ref.\cite{Stieberger:2022zyk}. Furthermore,
\begin{align}
\text{Celes}_2&\left( \frac{1}{2} +i\lambda_1, \, \frac{1}{2}+i\lambda_2, \, 1+i\lambda_3, \, 1+i\lambda_4 \, | \, J_1 = -\frac{1}{2}, \, J_2 = -\frac{1}{2}, \, J_3 = +1, \, J_4 = +1\right) \nonumber\\
=&~~\langle \psi_1^- \, \psi_2^- \, j_3 \, j_4\rangle L_4(z_i,\bar{z}_i) \, .
\end{align}
In this way, we obtain
\begin{align}
&\langle \widehat{j}_1 \, \widehat{j}_2\, j_3 \, j_4\rangle = \frac{z_{12}^3}{z_{23}z_{34} z_{41}} \, , \label{eq:4ptmatter1}\\
&\langle \widehat{j}_1 \, \psi_2^- \, \psi_3^+\, j_4\rangle = \frac{z_{12}^2 z_{13}}{z_{23}z_{34}z_{41}} \, ,\label{eq:4ptmatter2}\\
&\langle \widehat{j}_1 \, \psi_2^- \, j_3 \, \psi_4^+\rangle = -\frac{z_{12}^2}{z_{23}z_{34}} \, , \label{eq:4ptmatter3}\\
&\langle \psi_1^- \, \widehat{j}_2 \, \psi_3^+ \, j_4\rangle = \frac{z_{12}^2}{z_{34}z_{14}} \, , \label{eq:4ptmatter4}\\
&\langle \psi_1^- \, \widehat{j}_2 \, j_3 \, \psi_4^+\rangle = \frac{z_{12}^2 z_{24}}{z_{23}z_{34} z_{14}} \, , \label{eq:4ptmatter5}\\
&\langle \psi_1^-\psi_2^-\psi_3^+\psi_4^+\rangle = \frac{z_{12}^2}{z_{23}z_{14}} \, ,\label{eq:4ptmatter6}\\
&\langle \psi_1^- \, \psi_2^-\, j_3\, j_4\rangle =\frac{2\, z_{12}^2}{z_{23}z_{34}z_{41}} \, .\label{eq:4ptmatter7}
\end{align}
\subsection{Celestial OPEs from the celestial current algebra and Liouville CFT}
The OPEs of the CCFT operators
(\ref{eq:celetoLiouville}) can be computed by using the OPEs of current operators and the well-known OPEs of (light) Liouville operators \cite{ZZlecture}. We want to compare them with the OPEs extracted from the collinear limits of celestial amplitudes \cite{Fotopoulos:2020bqj,Fan:2019emx,Pate:2019lpp}.
\paragraph{gluon-gluon:}
The current-current OPE is
\be
j^a(z_1) j^b(z_2) \sim \frac{f^{abc}}{z_{12}} j^c(z_2) \, .\ee
The Liouville OPE is
\begin{align} \Gamma(\Delta_1-1)&e^{(\Delta_1-1)b\phi(z_1,\bar{z}_1)} \Gamma(\Delta_2-1)e^{(\Delta_2-1)b\phi(z_2,\bar{z}_2)} \nonumber\\ &
= B(\Delta_1-1,\Delta_2-1) \Gamma(\Delta_1+\Delta_2-2) e^{(\Delta_1+\Delta_2-2)b\phi(z_2,\bar{z}_2)} \, . \label{eq:LiouvilleOPE1}
\end{align}
As a result,
\begin{align}
O^a_{\Delta_1, +1}(z_1,\bar{z}_1) O^b_{\Delta_2, +1}(z_2,\bar{z}_2) \sim B(\Delta_1-1,\Delta_2-1) \frac{f^{abc}}{z_{12}} O^c_{\Delta_1+\Delta_2-1,+1}(z_2,\bar{z}_2) \, ,
\end{align}
in agreement with Refs.\cite{Fan:2019emx,Pate:2019lpp}. \par
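The $b\to 0$ consistency of the Liouville OPE (\ref{eq:LiouvilleOPE1}) follows from the elementary identity $B(x,y)=\Gamma(x)\Gamma(y)/\Gamma(x+y)$: the light exponentials simply multiply, while the Gamma prefactors recombine as
\begin{align}
\Gamma(\Delta_1-1)\,\Gamma(\Delta_2-1) = B(\Delta_1-1,\Delta_2-1)\,\Gamma(\Delta_1+\Delta_2-2) \, .
\end{align}
The same bookkeeping fixes the dimension of the fused operator: with $J=+1$, the exponent $(\Delta_1-1)+(\Delta_2-1)=\Delta-J$ requires $\Delta=\Delta_1+\Delta_2-1$, matching the gluon-gluon OPE above.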
The current-current OPE with opposite helicities is
\be
j^a(z_1) \widehat{j}^b(z_2) \sim \frac{f^{abc}}{z_{12}} \widehat{j}^c(z_2) \, .\ee
The Liouville OPE is
\begin{align} \Gamma(\Delta_1-1)&e^{(\Delta_1-1)b\phi(z_1,\bar{z}_1)} \Gamma(\Delta_2+1)e^{(\Delta_2+1)b\phi(z_2,\bar{z}_2)} \nonumber\\ &
= B(\Delta_1-1,\Delta_2+1) \Gamma(\Delta_1+\Delta_2) e^{(\Delta_1+\Delta_2)b\phi(z_2,\bar{z}_2)} \, . \label{eq:LiouvilleOPE1a}
\end{align}
As a result,
\begin{align}
O^a_{\Delta_1, +1}(z_1,\bar{z}_1) O^b_{\Delta_2, -1}(z_2,\bar{z}_2) \sim B(\Delta_1-1,\Delta_2+1) \frac{f^{abc}}{z_{12}} O^c_{\Delta_1+\Delta_2-1,-1}(z_2,\bar{z}_2) \, ,
\end{align}
in agreement with Refs.\cite{Fan:2019emx,Pate:2019lpp}.
\paragraph{gluon-gluino:}
The current-supercurrent OPE is
\be
j^a(z_1) \psi^{+b}(z_2) \sim \frac{f^{abc}}{z_{12}} \psi^{+c}(z_2) \, .\ee
The Liouville OPE is \begin{align}
\Gamma(\Delta_1-1)&e^{(\Delta_1-1)b\phi(z_1,\bar{z}_1)} \Gamma\Big(\Delta_2-\frac{1}{2}\Big) e^{(\Delta_2-1/2)b\phi(z_2,\bar{z}_2)}\nonumber\\
=& ~B\Big(\Delta_1-1, \Delta_2-\frac{1}{2}\Big) \Gamma\Big(\Delta_1+\Delta_2-\frac{3}{2}\Big) e^{(\Delta_1+\Delta_2-3/2)b\phi(z_2,\bar{z}_2)} \, . \label{eq:LiouvilleOPE2}
\end{align}
As a result,
\begin{align}
O^a_{\Delta_1,+1}(z_1,\bar{z}_1) O^b_{\Delta_2, +\frac{1}{2}}(z_2,\bar{z}_2) \sim B\Big(\Delta_1-1,\Delta_2-\frac{1}{2}\Big) \frac{f^{abc}}{z_{12}} O^c_{\Delta_1+\Delta_2-1, +\frac{1}{2}}(z_2,\bar{z}_2) \, ,
\end{align}
in agreement with Ref.\cite{Fotopoulos:2020bqj}.\par
The current-supercurrent OPE with opposite helicities is
\be
j^a(z_1) \psi^{-b}(z_2) \sim \frac{f^{abc}}{z_{12}} \psi^{-c}(z_2) \, .\ee
The Liouville OPE is \begin{align}
\Gamma(\Delta_1-1)&e^{(\Delta_1-1)b\phi(z_1,\bar{z}_1)} \Gamma\Big(\Delta_2+\frac{1}{2}\Big) e^{(\Delta_2+1/2)b\phi(z_2,\bar{z}_2)}\nonumber\\
=& ~B\Big(\Delta_1-1, \Delta_2+\frac{1}{2}\Big) \Gamma\Big(\Delta_1+\Delta_2-\frac{1}{2}\Big) e^{(\Delta_1+\Delta_2-1/2)b\phi(z_2,\bar{z}_2)} \, .
\end{align}
As a result,
\begin{align}
O^a_{\Delta_1,+1}(z_1,\bar{z}_1) O^b_{\Delta_2, -\frac{1}{2}}(z_2,\bar{z}_2) \sim B\Big(\Delta_1-1,\Delta_2+\frac{1}{2}\Big) \frac{f^{abc}}{z_{12}} O^c_{\Delta_1+\Delta_2-1, -\frac{1}{2}}(z_2,\bar{z}_2) \, ,
\end{align}
in agreement with Ref.\cite{Fotopoulos:2020bqj}.\par
\paragraph{gluino-gluino:}
The supercurrent-supercurrent OPE is
\be
\psi^{-a}(z_1) \psi^{+b}(z_2) \sim~ \frac{f^{abc}}{z_{12}} \, \widehat{j}^c(z_2) .\ee
The Liouville OPE is \begin{align}\Gamma\Big(\Delta_1+\frac{1}{2}\Big) \, &e^{(\Delta_1 +1/2)b\phi(z_1,\bar{z}_1)} \Gamma\Big(\Delta_2-\frac{1}{2}\Big) \, e^{(\Delta_2 -1/2)b\phi(z_2,\bar{z}_2)} \nonumber\\
=& ~B\Big(\Delta_1+\frac{1}{2}, \Delta_2 -\frac{1}{2} \Big) \Gamma(\Delta_1+\Delta_2) e^{(\Delta_1+\Delta_2)b\phi(z_2,\bar{z}_2)} \, .
\end{align}
As a result,
\begin{align}
{O}^{a}_{\Delta_1,-\frac{1}{2}}(z_1,\bar{z}_1){O}^{b}_{\Delta_2,+\frac{1}{2}}(z_2,\bar{z}_2) \sim
B\left( \Delta_1+\frac{1}{2},\Delta_2-\frac{1}{2}\right)
\frac{f^{abc}}{z_{12}} {O}^c_{\Delta_1+\Delta_2-1,-1}(z_2,\bar{z}_2) \, , \label{eq:gigiOPE}\end{align}
in agreement with Ref.\cite{Fotopoulos:2020bqj}.
\section{Current correlators in (1,0) superspace}
In this section, we assemble the current and supercurrent operators into the multiplets of (1,0) supersymmetry. This is most succinctly done in (1,0) superspace parametrized by $(z, \theta)$,
where the global supersymmetry generators act on the primary superfields in the following way:
\begin{align}G_{-1/2}\Phi_\Delta(z,\theta)&= (\partial_\theta-\theta \partial_z)\,\Phi_\Delta(z,\theta)\, ,\\
G_{+1/2}
\Phi_\Delta(z,\theta)&= [z(\partial_{\theta}-\theta\partial_z) -2 \Delta \, \theta]\,\Phi_\Delta(z,\theta) \, .
\end{align}
We introduce the following supercurrent superfields:
\begin{align}
\mathbf{\widehat J}^a(z,\theta) &= \widehat{j}^a(z)+\theta \, \psi^{-a}(z) \, ,\label{eq:bfO-}\\
\mathbf{J}^a(z,\theta) &= -\psi^{+a}(z) +\theta \, j^a(z) \, , \label{eq:bfO+}
\end{align}
with dimensions $\Delta_{\mathbf{\widehat J}}=-1$ and $\Delta_{\mathbf{J}}=1/2$, respectively.
For the moment, we skip the gauge group indices and focus on the partial correlators associated with a single gauge group factor.
\subsection{Three points}
All nonvanishing three-point current correlators are written in
Eqs.(\ref{eq:3ptmatter1})-(\ref{eq:3ptmatter4}). We can assemble them into
\begin{align}
\langle\mathbf{\widehat J}_1\mathbf{\widehat J}_2\mathbf{J}_3
\rangle
=& ~\theta_3\langle \widehat{j}_1 \, \widehat{j}_2 \, j_3\rangle-\theta_2 \langle \widehat{j}_1 \, \psi_2^- \, \psi_3^+\rangle-\theta_1 \langle \psi_1^- \, \widehat{j}_2 \, \psi_3^+\rangle-\theta_1\theta_2\theta_3\langle \psi_1^-\, \psi_2^- \, j_3\rangle \nonumber\\
=&~ \theta_3 \frac{z_{12}^3}{z_{23}z_{31}} -\theta_2 \frac{z_{12}^2}{z_{32}} -\theta_1 \frac{z_{12}^2}{z_{13}} -\theta_1\theta_2\theta_3 \frac{2 \, z_{12}^2}{z_{23}z_{31}} \, . \label{eq:3ptO-O-O+}
\end{align}
It is easy to check that they satisfy the Ward identities associated with
$G_{-1/2}$ and $G_{+1/2}$:
\begin{align} \sum_{i=1}^3 &(\partial_{\theta_i}-\theta_i\partial_{z_i})\,
\langle\mathbf{\widehat J}_1\mathbf{\widehat J}_2\mathbf{J}_3
\rangle
= 0 \, ,\\
\sum_{i=1}^3 &[z_i(\partial_{\theta_i}-\theta_i\partial_{z_i}) -2 \Delta_i \theta_i]\,
\langle\mathbf{\widehat J}_1\mathbf{\widehat J}_2\mathbf{J}_3
\rangle = 0\, .
\end{align}
The correlator (\ref{eq:3ptO-O-O+}) also has a compact expression in terms of superspace coordinates $Z=(z,\theta)$. We use the conventions in Refs.\cite{DiVecchia:1984nyg,Fuchs:1986ew}, where the intervals in superspace are denoted as
\begin{align}
\theta_{ij} = \theta_i-\theta_j \, ,\\
Z_{ij} = z_{ij}-\theta_i\theta_j \, .
\end{align}
A generic three-point super-correlator can be written as
\begin{align}
\langle \Phi_1(Z_1) \Phi_2(Z_2)\Phi_3(Z_3)\rangle = \prod_{i<j}^3 \frac{1}{Z_{ij}^{\Delta_{ij}}} (c_{123}+c'_{123}\theta_{123}) \, , \label{eq:3ptsucogeneral}
\end{align}
where
\begin{align}
\Delta_{ij} &= \Delta_i+\Delta_j-\Delta_k \, , \quad k\neq i,j \, ,\\
\theta_{ijk} &= \frac{1}{\sqrt{Z_{ij}Z_{jk}Z_{ki}}} (\theta_i Z_{jk} +\theta_j Z_{ki} +\theta_k Z_{ij} +\theta_i\theta_j\theta_k) \, ,
\end{align}
and $c_{123}$, $c'_{123}$ are independent three-point coefficients. \par
The correlator (\ref{eq:3ptO-O-O+}) can be written as
\begin{align} \langle\mathbf{\widehat J}_1\mathbf{\widehat J}_2\mathbf{J}_3
\rangle&= \frac{Z_{12}^2}{Z_{23} Z_{31}} (\theta_1 Z_{23} +\theta_2 Z_{31} +\theta_3 Z_{12} +\theta_1\theta_2\theta_3) \nonumber\\
&= \frac{Z_{12}^{5/2}}{Z_{23}^{1/2}Z_{31}^{1/2}} \theta_{123} \, , \label{eq:3ptsucorr}
\end{align}
which takes the same form as Eq.(\ref{eq:3ptsucogeneral}), with
\begin{align}
c_{123}=0, \quad c'_{123} = 1 \, .
\end{align}
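As a cross-check, setting $\theta_1=\theta_2=0$ in Eq.(\ref{eq:3ptsucorr}) reduces $Z_{ij}\to z_{ij}$, only the $\theta_3$ term of $\theta_{123}$ survives, and one recovers the bosonic correlator (\ref{eq:3ptmatter1}):
\begin{align}
\langle\mathbf{\widehat J}_1\mathbf{\widehat J}_2\mathbf{J}_3\rangle\Big|_{\theta_1=\theta_2=0} = \frac{z_{12}^2}{z_{23}z_{31}}\,\theta_3\, z_{12} = \theta_3\, \langle \widehat{j}_1 \, \widehat{j}_2 \, j_3\rangle \, .
\end{align}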
\subsection{Four points and OPEs}
All non-vanishing four-point current correlators are written in Eqs.(\ref{eq:4ptmatter1})-(\ref{eq:4ptmatter7}). Other correlators vanish for various reasons.
For example, $\langle \widehat{j}_1 \, \widehat{j}_2 \, \psi_3^+ \, \psi_4^+\rangle $, which receives contributions from the background due to the source ${\cal J}_1(Q,m_H)_\phi=1/\Lambda$, is of order $m_H$ and vanishes in the $m_H=0$ limit. The nonvanishing correlators can be assembled into
\begin{align}
\langle\mathbf{\widehat J}_1\mathbf{\widehat J}_2\mathbf{J}_3\mathbf{J}_4
\rangle
=&~\theta_3\theta_4 \frac{z_{12}^3}{z_{23}z_{34}z_{41}} \, - \, \theta_2\theta_4 \frac{z_{12}^2 z_{13}}{z_{23}z_{34}z_{41}} -\theta_2\theta_3 \frac{z_{12}^2}{z_{23}z_{34}}-\theta_1\theta_4 \frac{z_{12}^2}{z_{34}z_{14}} \nonumber\\
&+\theta_1\theta_3 \frac{z_{12}^2 z_{24}}{z_{23}z_{34}z_{14}} -\theta_1\theta_2 \frac{z_{12}^2}{z_{23}z_{14}} -\theta_1\theta_2\theta_3\theta_4 \frac{2\, z_{12}^2}{z_{23}z_{34}z_{41}} \, . \label{eq:4ptO-O-O+O+}
\end{align}
Once again, one can check that the correlator (\ref{eq:4ptO-O-O+O+}) satisfies two-dimensional supersymmetric Ward identities:
\begin{align}
\sum_{i=1}^4 &(\partial_{\theta_i}-\theta_i\partial_{z_i})\,
\langle\mathbf{\widehat J}_1\mathbf{\widehat J}_2\mathbf{J}_3\mathbf{J}_4
\rangle = 0 \, ,\\
\sum_{i=1}^4 & [ z_i(\partial_{\theta_i}-\theta_i \partial_{z_i}) -2 \Delta_i \theta_i]\,
\langle\mathbf{\widehat J}_1\mathbf{\widehat J}_2\mathbf{J}_3\mathbf{J}_4
\rangle
= 0 \, .
\end{align}
In the previous section, we wrote the OPEs of the current operators. In superspace, they read
\begin{align}
\mathbf{ J}^{a}(Z_1) \mathbf{J}^b(Z_2) \sim \frac{f^{abc}\, \theta_{12}}{Z_{12}}\, \mathbf{ J}^c(Z_2) \, ,\label{eq:superJJOPE} \\
\mathbf{ J}^{a}(Z_1) \mathbf{\widehat J}^b(Z_2) \sim \frac{f^{abc}\, \theta_{12}}{Z_{12}} \, \mathbf{\widehat J}^c(Z_2) \, . \label{eq:superJJhatOPE}
\end{align}
Note that Eq.(\ref{eq:superJJOPE}) is the standard OPE of the super WZW model with level $k=0$ \cite{DiVecchia:1984nyg,Fuchs:1986ew}.
\section{Relation between spacetime and celestial supersymmetries}
In order to exhibit celestial supersymmetry, it was necessary to combine the two sets of amplitudes, sets 1 and 2, described in section 4. The relative weights were determined by the source terms, chosen in a rather contrived way. In the massless dilaton limit, the amplitudes of set 2 vanish, while the amplitudes of set 1 are related by standard (spacetime) supersymmetric Ward identities. In this section, we show that when the dilaton is massive, both sets are related by spacetime supersymmetry, and the choice of ${\cal J}_2\sim m_H^{-1}$ allows for treating both sets on the same footing in the limit of $m_H=0$.
In the first step, we want to find a supersymmetric Ward identity for the amplitude
$A_3(1^{-1/2},2^{-1/2}, 3^{+1}, \phi^*)$ from set 2, with the massive dilaton $\phi^*$ carrying momentum $P$, $P^2=m_H^2$. Following Ref.\cite{Schwinn:2006ca}, we introduce an arbitrary lightlike reference vector $r$ and define the lightlike momentum
\be
k_4^\mu\equiv P^\mu-\frac{m_H^2}{2 (P\cdot r)} r^\mu\, . \ee
Note that
\be 2(P\cdot r)= 2(k_4\cdot r)=\langle 4r\rangle [r4]\ .\ee
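With these conventions ($P^2=m_H^2$ and $r^2=0$), the vector $k_4$ is indeed lightlike:
\begin{align}
k_4^2 = P^2 - 2\,\frac{m_H^2}{2(P\cdot r)}\,(P\cdot r) + \frac{m_H^4}{4(P\cdot r)^2}\, r^2 = m_H^2-m_H^2 = 0 \, ,
\end{align}
so the massive dilaton momentum decomposes into two lightlike vectors, $P=k_4+\frac{m_H^2}{2(P\cdot r)}\,r$.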
The vector $r$ defines the dilatino spin quantization axis and enters into the dilatino wave functions \cite{Schwinn:2006ca}.\footnote{For that reason, some dilatino helicity amplitudes depend on this reference vector.} The desired Ward identity is obtained by commuting the supercharges $Q(\xi,\bar\xi)$, where $\xi$ and $\bar\xi$ are arbitrary transformation parameters, with a string of creation operators \cite{Parke:1985pn,Taylor:2017sph}:
\begin{align}0~=~
\big\langle[Q(\xi,\bar\xi)&\, , \,\lambda^- \, g^-\, g^+ \, \phi^*]\big\rangle ~=~[\xi 1]\langle g^- \, g^- \, g^+ \, \phi^* \rangle + \langle \xi 2\rangle \langle \lambda^- \, \lambda^-\, g^+\, \phi^*\rangle \nonumber\\& -[\xi 3] \langle \lambda^- \, g^- \, \lambda^+\, \phi^*\rangle -\langle \xi 4\rangle \langle \lambda^-\, g^- \, g^+ \chi^+\rangle
- m_H \frac{\langle\xi r\rangle}{\langle4r\rangle} \langle \lambda^-\, g^-\, g^+\, \chi^-\rangle \, . \label{eq:susya3phi*}
\end{align}
After setting $\bar\xi=0,\xi=4$, we obtain
\be
\langle 42 \rangle \langle \lambda^- \, \lambda^-\, g^+\, \phi^*\rangle = m_H \langle \lambda^-\, g^-\, g^+\, \chi^-\rangle \, , \label{eq:susya3phi*step1}
\ee
which relates $A_3(1^{-1/2},2^{-1/2}, 3^{+1}, \phi^*)$ to an amplitude that can be connected to set 1. To that end, we use the supersymmetric Ward identity
\begin{align}
0=\langle [Q(\xi,\bar\xi)&\, , \, g^-\, g^- \, g^+ \, \chi^-]\rangle=\langle \xi1\rangle \langle \lambda^- \, g^- \, g^+ \, \chi^-\rangle +\langle \xi 2\rangle \langle g^- \, \lambda^- \, g^+ \, \chi^-\rangle\\ & -[\xi3] \langle g^-\, g^- \, \lambda^+ \, \chi^-\rangle + \langle \xi 4\rangle \langle g^-\, g^- \, g^+ \, \phi\rangle + m_H \frac{\langle \xi r\rangle}{\langle 4r\rangle} \langle g^-\, g^-\, g^+ \, \phi^*\rangle \, .
\end{align}
After setting $\bar\xi=0$, $\xi=r=2$, we relate the dilatino amplitude to the MHV amplitude:
\be\langle 21\rangle \langle \lambda^- \, g^- \, g^+ \, \chi^-\rangle = \langle 42\rangle \langle g^-\, g^- \, g^+ \, \phi\rangle \, .
\ee
By combining this with Eq.(\ref{eq:susya3phi*step1}), we obtain
\be A_3(1^{-1/2},2^{-1/2}, 3^{+1}, \phi^*)=-\frac{m_H}{\langle 12\rangle}A_3(1^{-1},2^{-1}, 3^{+1},\phi)=-\frac{m_H}{\Lambda} \frac{\langle 12 \rangle^2}{\langle 23\rangle \langle 31\rangle},\ee
in agreement with Eq.(\ref{eq:MHV2gim}). We conclude that sets 1 and 2 are related by spacetime supersymmetry and that celestial supersymmetry appears upon choosing appropriate (pointlike) dilaton sources.
\section{Conclusions} In this work,
we discussed supersymmetric Yang-Mills theory coupled to the dilaton field, in the framework of celestial holography.
Previously in Ref.\cite{Stieberger:2022zyk}, we showed that
in the presence of a dilaton background field produced by a pointlike source, celestial gluon amplitudes become well-defined correlators of the products of holomorphic current operators with integer dimensions times the exponential ``light'' operators associated with Liouville theory in the limit of infinite central charge.
In the present work, we discussed the amplitudes involving gauginos and dilatinos, related to MHV amplitudes by supersymmetric Ward identities. We constructed the CCFT operators associated with gauginos
in a similar, factorized form, with the helicity and gauge degrees of freedom contained in the holomorphic supercurrent factors. We showed that in a theory with massive dilaton, the currents and supercurrents form supermultiplets of (1,0) supersymmetry, provided that the R-symmetry breaking amplitudes contribute in the massless limit. This requires a rather intricate choice of weight factors associated with the dilaton sources.
The emerging picture is quite similar to heterotic superstring theory. The holomorphic sector of supersymmetric celestial Liouville theory consists of two-dimensional (1,0) supercurrents carrying all information about the spin and gauge degrees of freedom of the gauge supermultiplet. The role of the Liouville operators is to supplement their integer dimensions to continuous values, thus carrying over the information about energies, i.e.\ the scattering data, from spacetime to celestial CFT. There is no supersymmetry in the Liouville sector.
In this work, the dilaton sources were chosen in a contrived way, not only to transform celestial amplitudes into Liouville correlators, but also to ensure celestial supersymmetry. The most important question is whether there is some underlying principle that singles out pointlike sources (akin to ``dilaton background charges''), more precisely their specific combinations, that lead to a holographic description of four-dimensional spacetime in terms of super WZW and Liouville theories. In general, the role of background fields in celestial holography needs further study.
\section*{Acknowledgements}
We would like to thank Tim Adamo, Wei Bu, Davide Gaiotto, Yangrui Hu, and Sabrina Pasterski for useful conversations.
TRT is supported by the National Science Foundation
under Grants Number PHY--1913328 and PHY--2209903.
Any opinions, findings, and conclusions or recommendations
expressed in this material are those of the authors and do not necessarily
reflect the views of the National Science Foundation.
BZ is supported by the Celestial Holography Initiative at the Perimeter Institute for Theoretical Physics. Research at the Perimeter Institute is supported by the Government of Canada through the Department of Innovation, Science and Industry Canada and by the Province of Ontario through the Ministry of Colleges and Universities.
\section{Introduction}
\subsection{Localised and partially localised patterns}
Localised patterns are observed in a wide variety of systems, including
experimental systems such as the Belousov-Zhabotinsky reaction~\cite{VanagEpstein04}, nonlinear optics~\cite{TaranenkoStaliunasWeiss97,StaliunasSanchezMorcillo98}, vertically shaken granular media~\cite{UmbanhowarMeloSwinney96,TsimringAranson97}, and Bose-Einstein condensates~\cite{StreckerPartridgeTruscottHulet02}, and also in idealised systems such as the Swift-Hohenberg equation~\cite{CrossHohenberg93,SakaguchiBrand96,SakaguchiBrand98,TlidiMandelLefever94} or networks of reacting cells~\cite{MooreHorsthemke05}. More recently, objects have been observed that are only \emph{partially} localised: structures in two dimensions, for instance, that are `thin' in one spatial direction and `long' in the other. Such \emph{partially localised patterns} have been observed in Nonlinear Schr\"odinger equations~\cite{DAprile00,BadialeDAprile02,AmbrosettiMalchiodiNi02,AmbrosettiMalchiodiNi03,AmbrosettiMalchiodiNi04}, Gierer-Meinhardt-type systems~\cite{DoelmanVanderPloeg02}, and even in scalar nonlinear elliptic equations~\cite{MalchiodiMontenegro02,MalchiodiMontenegro03,Malchiodi05}. In addition, the membrane that surrounds each living cell, for instance, is such a structure~\cite{Lipowski98,BlomPeletier04,PeletierRoeger08}.
In this paper we study an example of \emph{energy-driven} partial localisation, arising in the study of mixtures of \emph{diblock copolymers} with \emph{homopolymers}. Such mixtures feature two opposing forces: a repelling force between different monomer types favours separation into homogeneous phases, while covalent bonds between some of the repelling monomers impose an upper limit on the separation length. As a result a wide variety of patterns are observed (both in physical and in numerical experiments), ranging from spheres~\cite{KoizumiHasegawaHashimoto94,OhtaNonomura97,UneyamaDoi05,ZhangJinMa05}, cylinders~\cite{KinningWineyThomas88}, dumbbells~\cite{OhtaIto95}, helices~\cite{HashimotoMitsumuraYamaguchiTakenakaMoritaKawakatsuDoi01}, `labyrinths' and
`sponges'~\cite{LoewenhauptSteurerHellmannGallot94,Ito98,OhtaIto95}, `ball-of-thread'~\cite{LoewenhauptSteurerHellmannGallot94}, layered structures~\cite{KinningWineyThomas88,KoizumiHasegawaHashimoto94,OhtaIto95,ZhangJinMa05}, and many more.
Our focus is on \emph{layered patterns}, consisting of two or more parallel layers of roughly uniform thickness. In each layer the composition is dominated by one of the polymer types, and in the separation into layers one can recognise a phase separation phenomenon triggered by the repelling forces between polymer types. In addition to their interest as particular patterns in copolymer-homopolymer blends, such layered structures are examples of energy-driven partial localisation.
The main goal of this article is to understand the (in)stability of such layered structures in this simple model of copolymer-homopolymer blends.
\subsection{Diblock copolymers and blends}
Diblock copolymers are linear polymer molecules that consist of two parts (blocks), called the U-part and the V-part in this paper, with corresponding volume fractions given by the functions~$u$ and~$v$. Each part contains monomers of a single type only, U or V. As described above, the interaction between the two types of monomers is the net result of two opposing influences. On the one hand the U- and V-parts repel each other, leading to a tendency of the U- and V-phases to separate; on the other hand the U- and V-parts are chemically bonded together in a single diblock copolymer molecule, forcing both parts to remain close to each other. As a result of these two types of interaction, the separation between the U- and V-phases is restricted to length scales of the order of the molecule size.
We consider systems that contain, in addition to the diblock copolymers, some species of \emph{homopolymer}, that we call the 0-phase. A homopolymer is made up of a single type of monomers, here named 0. The system therefore contains three phases, and because of an assumption of incompressibility we can use the functions $u$ and $v$ to describe the distributions of the three phases.
In~\cite{ChoksiRen05} the following energy is derived:
\[
\mathcal{F}(u, v) = \left\{ \begin{array}{ll} \displaystyle
c_0 \int_{S_L} |\nabla (u + v)| + c_u \int_{S_L} |\nabla u| + c_v \int_{S_L} |\nabla v|
\hspace{0.3cm}+ \|u - v\|_{H^{-1}(S_L)}^2
& \mbox{ if $(u, v) \in \mathcal{K}$,}\\
\infty &\mbox{ otherwise,} \end{array} \right.
\]
where the coefficients $c_i$ are nonnegative (and not all equal to zero), $S_L$ is a periodic strip $\vz{T}_L \times {\vz R}$ (where $\vz{T}_L$ is the one-dimensional torus of length $L$), and the set of admissible functions is given by
\[
\mathcal{K} := \left\{ (u, v) \in \left(\text{BV}\left(S_L\right)\right)^2 :
u(x), v(x) \in \{0, 1\} \text{ a.e., } uv = 0 \text{ a.e., and } \int_{S_L} u = \int_{S_L} v \; \right\}.
\]
Since unconstrained minimisation will lead to the trivial structure $u\equiv v\equiv 0$, the natural problem to look at here is minimisation under constrained mass, i.e. with the constraint $\int_{S_L} u = \int_{S_L} v = M$ for some $M>0$.
Under the extra restriction $u+v\equiv 1$---no 0-phase---the functional $\mathcal F$ is a well-known sharp-interface model for diblock copolymer melts~\cite{RenWei00,ChoksiRen03}. The sharp-interface character of this model, known in the physics literature as the strong-segregation limit, is recognisable in the fact that the variables $u$ and $v$ are characteristic functions, implying that at each point in space only one phase is present. The underlying diffuse-interface model is well studied~\cite{NishiuraOhnishi95,FifeKowalczyk99,EscherMayer01,Muratov02,RenWei02,ChoksiRen03,RenWei03b,RenWei05,RenWei06a,RenWei06b,Tzoukmanis06,RoegerTonegawa07} because of the interesting pattern formation phenomena it exhibits.
The first three terms of $\mathcal F$ can be recognised as the sharp-interface manifestation of the repelling forces between the U-, V-, and 0-monomers. The last term, the $H^{-1}$-norm, is a remainder of the chemical bond between the U- and V-parts and penalises large-scale separation of the U- and V-phases.
The functional $\mathcal{F}$ resembles the energy functional used to model triblock copolymers, i.e. block copolymers consisting of three chemically bonded parts, \cite{RenWei03c, RenWei03d}. The interface penalisation part is present in that functional as well, and the long range interaction term includes interaction between the third phase (the phase corresponding to the third part of the triblock copolymers) and the other two phases in addition to the interaction between the first two phases present in the functional $\mathcal{F}$ above.
For a more extensive review of the modelling of diblock copolymers and diblock copolymer-homopolymer blends and its study in mathematics, we refer to \cite[Chapter 2]{vanGennip08}.
\subsection{From one-dimensional to two-dimensional structures}\label{sec:1D2D}
A layered structure with perfectly straight layers can be described by functions $u$ and $v$ of one spatial variable. In a companion paper~\cite{vanGennipPeletier07a} (see also~\cite{ChoksiRen05}) we study this one-dimensional case and give a full characterisation of global minimisers.
One of the results in that paper is that, for generic parameter values, every constrained-mass global minimiser on ${\vz R}$ is a \emph{concatenation of equal-width monolayers}. A monolayer is shown in Figure~\ref{fig:2d-multilayers}: a structure, described by a pair of functions $(u,v)$, in which the supports of $u$ and $v$ are adjacent intervals of equal length---or, in the higher-dimensional context, adjacent layers of equal width (see Figures~\ref{fig:monolayer1D} and~\ref{fig:monolayer2D}).
For small constrained mass, the global minimiser in one dimension is a monolayer. For slightly larger constrained mass, the global minimiser switches to a \emph{bilayer}, a pair of monolayers joined back to back (Figures~\ref{fig:bilayer1D} and~\ref{fig:bilayer2D}). As the constrained mass further increases the global minimiser switches to structures of increasing numbers of monolayers (see~\cite{vanGennipPeletier07a}).
In the present paper we are interested in the stability properties under $\mathcal{F}$ of a particular subset of two-dimensional mono- and bilayer structures:
\begin{itemize}
\item For both mono- and bilayers we assume that the layer thickness is such that the energy-to-mass ratio $\mathcal F/\int u$ is minimal among all such layers;
\item For monolayers we assume that $c_u=c_v$, i.e. that the interface penalisation is the same for U-0 and V-0 interfaces.
\end{itemize}
Both restrictions arise from our interest in thin, partially localised structures in ${\vz R}^2$, as is explained in detail in Appendices~\ref{sec:energy-per-unit-mass} and~\ref{sec:cu=cv}.
The optimal widths $\aleph$ and $\beth$ for which the energy-to-mass ratio is minimal for the mono- and bilayer respectively are indicated in Figure~\ref{fig:2d-multilayers} and defined in (\ref{eq:monod}) and (\ref{eq:bid}).
\begin{figure}[ht]
\subfloat[Monolayer1D][A one-dimensional monolayer]
{
\psfrag{y}{$x_2$}
\psfrag{u}{U}
\psfrag{v}{V}
\psfrag{0}{$0$}
\psfrag{1}{$1$}
\includegraphics[width=0.35\textwidth]{uv_monolayer_strip5a}\\
\label{fig:monolayer1D}
}
\hspace{0.15\textwidth}
\subfloat[Bilayer1D][A one-dimensional bilayer]
{
\psfrag{y}{$x_2$}
\psfrag{u}{U}
\psfrag{v}{V}
\psfrag{0}{$0$}
\psfrag{1}{$1$}
\includegraphics[width=0.35\textwidth]{vuv_bilayer_strip5a}\\
\label{fig:bilayer1D}
}
\hspace{0.1\textwidth}
\subfloat[Monolayer2D][A straight monolayer on the periodic strip $S_L$]
{
\psfrag{x}{$x_1$}
\psfrag{y}{$x_2$}
\psfrag{u}{U}
\psfrag{v}{\hspace{0.05cm}V}
\psfrag{L}{$L$}
\psfrag{0}{$0$}
\psfrag{1}{$1$}
\psfrag{d}{$\delta_m$}
\includegraphics[width=0.35\textwidth]{uv_monolayer_strip5b}\\
\label{fig:monolayer2D}
}
\hspace{0.25\textwidth}
\subfloat[Bilayer2D][A straight VUV bilayer on the periodic strip $S_L$]
{
\psfrag{x}{$x_1$}
\psfrag{y}{$x_2$}
\psfrag{u}{U}
\psfrag{v}{\hspace{0.05cm}V}
\psfrag{L}{$L$}
\psfrag{0}{$0$}
\psfrag{1}{$1$}
\psfrag{d}{$\delta_b$}
\includegraphics[width=0.35\textwidth]{vuv_bilayer_strip5b}\\
\label{fig:bilayer2D}
}
\caption{Mono- and bilayers on a strip as trivial extensions of one-dimensional structures. We assume the 0-phase to fill up the rest of the domain where there is no U- or V-phase. We do not indicate this in our pictures.}
\label{fig:2d-multilayers}
\end{figure}
\subsection{Stability of mono- and bilayers in two dimensions}
The aim of this paper is to investigate the stability of these mono- and bilayers in two dimensions. Since the functions $u$ and $v$ are forced to be characteristic functions of sets, the only admissible perturbations are changes in the supports of these functions. In this paper we only consider \emph{local} stability with respect to perturbations of the position of the interfaces; other perturbations, such as those that change the topology of the structure, are disregarded (see the discussion in Section~\ref{sec:concdisc}).
Specifically, we consider perturbations of the interfaces that are periodic with period $L$ along the length of the layer, and therefore we assume a domain that is periodic in one direction ($x_1$) and unbounded in the other (see Figure~\ref{fig:2d-multilayers}). Because of this periodicity each perturbation of an interface is given by a periodic function $p:\vz{T}_L \to {\vz R}^3$ (for the monolayer) or $p:\vz{T}_L \to {\vz R}^4$ (for the bilayer), where each component is the lateral displacement of one of the interfaces. By expanding the perturbations in Fourier modes, and using the usual vanishing of cross terms of different frequency, the positivity of the second derivative of the energy reduces to the positivity
on each Fourier mode.%
\footnote{First and second derivatives of similar functionals have been calculated by Muratov and Choksi \& Sternberg~\cite{Muratov02,ChoksiSternberg06}. Our calculations differ in the number of phases (three instead of two) and in the early adoption of a Fourier framework.}
Fourier modes have a natural scale invariance: the $k^\mathrm{th}$ Fourier mode on the interval of length~$L$ is equivalent to the $1^\mathrm{st}$ Fourier mode on an interval of length $L/k$. This allows us to establish the stability with respect to the first Fourier mode as a function of $L$, rescale for the stability properties of the $k^\mathrm{th}$ mode, and aggregate the results.
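The rescale-and-aggregate step can be sketched numerically. In the sketch below, \texttt{f1} is a hypothetical stand-in for the explicit mode-$1$ stability function of~\pref{def:f1} (which is not reproduced in this section), so the code only illustrates the scale invariance $k \mapsto f_1(\ell/k)$ and the supremum over modes, not the actual stability boundary of the paper.

```python
def aggregate_modes(f1, ell, k_max=100):
    """Stability threshold over all Fourier modes: f(ell) = sup_k f1(ell / k).

    The k-th Fourier mode on an interval of length ell is equivalent to the
    first mode on an interval of length ell / k, so rescaling f1 suffices.
    (The supremum is truncated at k_max for the numerical sketch.)
    """
    return max(f1(ell / k) for k in range(1, k_max + 1))

# Hypothetical placeholder for the explicit mode-1 function of the paper;
# any bounded function of the scaled length works for this illustration.
def f1_demo(x):
    return x / (1.0 + x) ** 2

# A layer is stable against mode k iff the material parameter combination
# exceeds f1(ell / k); stability against all modes requires the supremum.
print(aggregate_modes(f1_demo, ell=3.0))  # attained here at k = 3, not k = 1
```

Note that for this placeholder the supremum is attained at $k=3$ rather than $k=1$, which mirrors the observation below that curves belonging to different Fourier orders may cross.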
Using this approach we show in Section~\ref{sec:stability} that the monolayer of optimal width $\aleph$ is linearly stable with respect to mode-$1$ perturbations iff
\[
\frac{c_u}{2c_u+c_0} \geq f_1(L/\aleph),
\]
where $f_1$ is an explicit function given in~\pref{def:f1}.
By combining all Fourier modes we find
\begin{theorem}
\label{th:stab-monolayer}
Assume $c_u=c_v$. The monolayer of optimal width is linearly stable iff
\begin{equation}
\label{cond:stab-monolayer}
\frac{c_u}{2c_u+c_0} \geq f(L/\aleph)
\end{equation}
where
\[
f(\ell) := \sup_{k\geq1} f_1(\ell/k).
\]
\end{theorem}
\noindent
The graphs of the functions
$x\mapsto f_1(x/k)$ for different values of $k$ are shown in Figure~\ref{fig:negeigmonolay}.
\begin{figure}[ht]
\hspace{0.025\textwidth}
\subfloat[Monolayer][The monolayer; plotted are the curves $L/\aleph\mapsto f_1(L/(k\aleph))$ for $k=1\dots 20$]
{
\psfrag{a}{$L/\aleph$}
\psfrag{b}{$f_1(L/(k\aleph))$}
\includegraphics[width=0.45\textwidth]{stabiliteit_monolaag_20orders_cd}\\
\label{fig:negeigmonolay}
}
\hspace{0.05\textwidth}
\subfloat[Bilayer][The bilayer; plotted are the curves $L/\beth\mapsto g_1(L/(k\beth))$ for $k=1\dots 20$]
{
\psfrag{a}{$L/\beth$}
\psfrag{b}{$g_1(L/(k\beth))$}
\includegraphics[width=0.45\textwidth]{stabiliteit_bilaag_20orders_cd}\\
\label{fig:negeigbilay}
}
\caption{The graphs of the functions
$x\mapsto f_1(x/k)$ and $x\mapsto g_1(x/k)$ ($k=1, \ldots, 20$) portray the curves in parameter space that separate the parts where the first twenty Fourier modes of the second variation for the monolayer (Figure~\ref{fig:negeigmonolay}) and bilayer (Figure~\ref{fig:negeigbilay}) are positive and negative. If $c_u/(2c_u+c_0) < f_1(L/(k\aleph))$ the $k$th Fourier mode is negative for the monolayer, if the reverse inequality holds the mode is positive. Similarly for the bilayer the $k$th Fourier mode is negative if $(c_u+c_v)/(c_0+c_u+2c_v) < g_1(L/(k\beth))$. The leftmost curve in each figure corresponds to the first order Fourier mode, the order increases towards the right. Note that the positivity of the parameters $c_u$ and $c_0$ implies that $c_u/(2c_u+c_0) \leq \frac12$ as indicated in Figure~\ref{fig:negeigmonolay} by the dashed line.}\label{fig:negeig}
\end{figure}
\medskip
For a bilayer of optimal width $\beth$ a similar result holds:
\begin{theorem}
\label{th:stab-bilayer}
The VUV bilayer of optimal width is linearly stable iff
\begin{equation}
\label{cond:stab-bilayer}
\frac{c_u+c_v}{c_0+c_u+2c_v} \geq g(L/\beth)
\end{equation}
where
\[
g(\ell) := \sup_{k\geq1} g_1(\ell/k)
\]
and $g_1$ is given by~\pref{def:g_1}.
\end{theorem}
\noindent
In Figure~\ref{fig:negeigbilay}
the graphs of the functions $x\mapsto g_1(x/k)$ are shown for different values of $k$.
From Figures~\ref{fig:negeigmonolay} and~\ref{fig:negeigbilay} one might think that curves belonging to higher orders remain below curves of lower orders. The blow-ups in Figure~\ref{fig:negeigzoom}, however, show that this is not the case. Nevertheless, only the first Fourier mode is of importance for determining the stability of the monolayer: the left-hand side of~\pref{cond:stab-monolayer} cannot reach values larger than $1/2$, and Figures~\ref{fig:negeigmonolay} and~\ref{fig:negeigmonolayzoom} show that the non-monotonicity for the monolayer plays a role only for values above $1/2$.
\begin{figure}[ht]
\hspace{0.025\textwidth}
\subfloat[Monolayerzoom][Blow-up of Figure~\ref{fig:negeigmonolay}]
{
\psfrag{a}{$L/\aleph$}
\psfrag{b}{$\mu$}
\includegraphics[width=0.45\textwidth]{stabiliteit_monolaag_20_zoom_cd}\\
\label{fig:negeigmonolayzoom}
}
\hspace{0.05\textwidth}
\subfloat[Bilayerzoom][Blow-up of Figure~\ref{fig:negeigbilay} ]
{
\psfrag{a}{$L/\beth$}
\psfrag{b}{$\zeta$}
\includegraphics[width=0.45\textwidth]{stabiliteit_bilaag_20_zoom_cd}\\
\label{fig:negeigbilayzoom}
}
\caption{A blow-up of the graphs in Figure~\ref{fig:negeig}. Curves corresponding to different Fourier modes clearly cross. Here $\mu=\frac{c_u+c_v}{2 (c_0+c_u+c_v)}$ and $\zeta=\frac{c_u+c_v}{c_0+c_u+2c_v}$, see also (\ref{eq:upslamb}) and (\ref{eq:yeahgoaheadanddefinemreluv}).}\label{fig:negeigzoom}
\end{figure}
\bigskip
Figure~\ref{fig:signs-intro} summarises the stability properties of both the mono- and the bilayer. In Figure~\ref{fig:signs-intro-mono} the vertical axis is restricted to the interval $[0,1/2]$ to reflect the value set of the left-hand side of~\pref{cond:stab-monolayer}.
This implies that monolayers can only be stable if $L$ is sufficiently small, and even then only for a subset of the coefficients $c_0$, $c_u$, and $c_v$; for sufficiently large $L$ the monolayer is unstable for all choices of interface penalisation.
For the bilayer the situation is different: here the condition~\pref{cond:stab-bilayer} allows for both stability and instability at all values of $L$. The function $g$ is bounded from above (away from 1), implying that a threshold $\alpha$ exists such that
\[
\frac{c_u+c_v}{c_0+c_u+2c_v} \geq \alpha
\qquad \Longrightarrow\qquad
\text{Bilayer is stable for all $L$}.
\]
From Figure~\ref{fig:signs-intro-bi} we estimate that $\alpha \approx 0.65$.
\begin{figure}[ht]
\hspace{0.1\textwidth}
\subfloat[Monolayer]
{
\psfrag{a}{$\nu$}
\psfrag{b}{$\mu$}
\psfrag{c}{\scriptsize $+$}
\psfrag{d}{\color{white}$+/-$}
\includegraphics[width=0.35\textwidth]{monolaag_contour_100modes_cd}\label{fig:signs-intro-mono}\\
}
\hspace{0.1\textwidth}
\subfloat[Bilayer]
{
\psfrag{a}{$\upsilon$}
\psfrag{b}{$\zeta$}
\psfrag{c}{$+$}
\psfrag{d}{\color{white} $+/-$}
\includegraphics[width=0.35\textwidth]{bilaag_contour_100modes_1000PlotPoints_cd}
\label{fig:signs-intro-bi}\\
}
\caption{The sign of the second derivative operator for the mono- and bilayer of optimal width. $+/-$ indicates indeterminate sign, due to the negativity of one or more eigenvalues. Along the horizontal axes are plotted $\nu = e^{-{2 \pi \aleph}/L}$ and $\upsilon = e^{-2 \pi \beth/L}$. The vertical axes show $\mu=c_u/(2c_u+c_0)$ and $\zeta=(c_u+c_v)/(c_0+c_u+2c_v)$. These figures are based on a calculation involving Fourier modes up to and including order 100.}\label{fig:signs-intro}
\end{figure}
\subsection{Directions of instability}
For the functional $\mathcal{F}$ one may imagine a number of different evolution problems, such as gradient flows based on the $L^2$, $H^{-1}$, or Wasserstein metrics. Under such an evolution the straight mono- and bilayer structures are stationary. If they are unstable, the evolution will amplify small deviations and move away from the straight configurations. While the perturbations are still small, the main contribution of the evolution will be in the directions of the eigenvectors of the second variation\footnote{For each Fourier mode the bilinear form that is the second variation can be identified with a bilinear form on ${\vz R}^3$ (monolayer) or ${\vz R}^4$ (bilayer) whose eigenvalues and eigenvectors can be studied. Details can be found in Sections~\ref{subsec:bilayerstability}--\ref{subsec:monolayerstability}.} belonging to the (most) negative eigenvalues.
For the monolayer there is, for each Fourier mode, one eigenvalue that can become negative (for the first Fourier mode: $E_3$ in Lemma~\ref{lem:M1negeig}; other modes follow by rescaling as above) and there are two which are always positive. Each component of the corresponding eigenvectors is associated with the deformation of one of the interfaces in the layer.
A cartoon of the (possibly) unstable deformation direction is given in Figure~\ref{fig:monounstab2}, the two stable directions are shown in Figures~\ref{fig:monostab1} and~\ref{fig:monostab3}.
\begin{figure}[ht]
\hspace{0.1\textwidth}
\subfloat[Monolayerunstable2][The (possibly) unstable deformation direction]
{
\psfrag{u}{U}
\psfrag{v}{V}
\includegraphics[width=0.2\textwidth]{monolaag_instabiel_2_cd}\\
\label{fig:monounstab2}
}
\hspace{0.1\textwidth}
\subfloat[Monolayerstable1][One of the stable directions]
{
\psfrag{u}{U}
\psfrag{v}{V}
\includegraphics[width=0.2\textwidth]{monolaag_stabiel_1_cd}\\
\label{fig:monostab1}
}
\hspace{0.1\textwidth}
\subfloat[Monolayerstable3][The other stable direction]
{
\psfrag{u}{U}
\psfrag{v}{V}
\includegraphics[width=0.2\textwidth]{monolaag_stabiel_3_cd}\\
\label{fig:monostab3}
}
\hspace{0.1\textwidth}
\caption{One (possibly) unstable and two stable first order Fourier modes of deformation for the monolayer; see Section~\ref{sec:stability}, in particular Remark~\ref{rem:stableunstablemodesmonolayer}.}
\end{figure}
For the bilayer two eigenvalues are always positive, while the other two may become negative. For the first Fourier mode the dependence of the sign of the latter two on the parameters $L/\beth$ and $\zeta$ is shown in Figures~\ref{fig:signeigenvaluecorrespondingtoG2} and~\ref{fig:signeigenvaluecorrespondingtoG1}. In the second figure we recognise the first-order curve ($k=1$) from Figure~\ref{fig:negeigbilay}; the corresponding curve for the first figure lies everywhere below it, which is why its influence is not visible in Figure~\ref{fig:negeigbilay}.
\begin{figure}[ht]
\hspace{0.1\textwidth}
\subfloat[SigneigenvaluecorrespondingtoG2][The sign in parameter space of the eigenvalue corresponding to the eigenvalue $G_+$ of the reduced matrix $\tilde B_1$ in the proof of Lemma~\ref{lem:B1negeig}]
{
\psfrag{x}{$L/\beth$}
\psfrag{y}{$\zeta$}
\psfrag{a}{$\color{white}-$}
\psfrag{b}{$+$}
\includegraphics[width=0.35\textwidth]{signeigenvaluecorrespondingtoG+_cd}\\
\label{fig:signeigenvaluecorrespondingtoG2}
}
\hspace{0.1\textwidth}
\subfloat[SigneigenvaluecorrespondingtoG1][The sign in parameter space of the eigenvalue corresponding to the eigenvalue $G_-$ of the reduced matrix $\tilde B_1$ in the proof of Lemma~\ref{lem:B1negeig}]
{
\psfrag{x}{$L/\beth$}
\psfrag{y}{$\zeta$}
\psfrag{a}{$\color{white}-$}
\psfrag{b}{$+$}
\includegraphics[width=0.35\textwidth]{signeigenvaluecorrespondingtoG-_cd}\\
\label{fig:signeigenvaluecorrespondingtoG1}
}
\caption{The black patches in parameter space indicate where two of the eigenvalues of the first Fourier order second variation operator for the bilayer become negative.}
\end{figure}
The (possibly) unstable deformation directions are shown in Figures~\ref{fig:biunstab1} and~\ref{fig:biunstab3}, corresponding to the eigenvalues in Figures~\ref{fig:signeigenvaluecorrespondingtoG2} and~\ref{fig:signeigenvaluecorrespondingtoG1}, the stable ones in Figures~\ref{fig:bistab2} and~\ref{fig:bistab4}.
\begin{figure}[h]
\hspace{0.2\textwidth}
\subfloat[Bilayerunstable1][One of the (possibly) unstable deformation directions]
{
\psfrag{u}{U}
\psfrag{v}{V}
\includegraphics[width=0.2\textwidth]{bilaag_instabiel_1_cd}\\
\label{fig:biunstab1}
}
\hspace{0.2\textwidth}
\subfloat[Bilayerunstable3][The other (possibly) unstable deformation direction]
{
\psfrag{u}{U}
\psfrag{v}{V}
\includegraphics[width=0.2\textwidth]{bilaag_instabiel_3_cd}\\
\label{fig:biunstab3}
}
\\
\vspace{0.5cm}\hspace{0.2\textwidth}
\subfloat[Bilayerstable2][One of the stable directions]
{
\psfrag{u}{U}
\psfrag{v}{V}
\includegraphics[width=0.2\textwidth]{bilaag_stabiel_2_cd}\\
\label{fig:bistab2}
}
\hspace{0.2\textwidth}
\subfloat[Bilayerstable4][The other stable direction]
{
\psfrag{u}{U}
\psfrag{v}{V}
\includegraphics[width=0.2\textwidth]{bilaag_stabiel_4_cd}\\
\label{fig:bistab4}
}
\caption{Two (possibly) unstable and two stable first-order Fourier modes of deformation for the bilayer. The (possibly) unstable deformation in Figure~\ref{fig:biunstab1} corresponds to the eigenvalue in Figure~\ref{fig:signeigenvaluecorrespondingtoG2} and the deformation in Figure~\ref{fig:biunstab3} to the eigenvalue in Figure~\ref{fig:signeigenvaluecorrespondingtoG1}. For details see the discussion in Section~\ref{sec:stability}, in particular Remark~\ref{rem:stableunstablemodesbilayer}.}
\end{figure}
These results all show that depending on the parameters in the model the monolayer and bilayer structures can be unstable. This mirrors closely the results in~\cite{Muratov02}, \cite{RenWei03b}, and \cite{RenWei05}, where it is shown that in the pure diblock case `wriggled' lamellar structures bifurcate off the straight lamellar pattern if the spacing between the lamellae becomes too large. In Section~\ref{sec:wriggledlamellar} we discuss the relation with these results in more detail.
\subsection{Structure of this paper}
We start in Section~\ref{sec:defcon} by defining the functional under consideration and clarifying some of the notation that is used throughout the paper. In Section~\ref{sec:perstrip} we prove via a calculation of the first variation of $\mathcal{F}$ that the monolayer and bilayer are both stationary points of $\mathcal{F}$ with respect to mass-preserving perturbations of the interfaces. We then proceed to compute the second variation for both these structures. Since this calculation for the monolayer is similar to that for the bilayer, we only give the details in the latter case, and even there we defer most of the computational details to Appendix~\ref{app:proofbilayersecondvar}. Section~\ref{sec:stability} is dedicated to computing the sign of the second variations for the monolayer and bilayer in order to determine the parameter regions of stability and instability. Much of the work in the proofs is again of a calculational nature, some of which we have also moved to the back of the paper in Appendix~\ref{app:details}. Finally, Section~\ref{sec:greens} gives a Green's function of $-\Delta$ on the periodic strip. This Green's function is heavily used in this paper and, since the authors could not trace a previous appearance of it in the literature, a section on its validity closes the paper.
\section{Definitions and conventions}\label{sec:defcon}
\subsection{Problem setting}\label{subsec:problemsetting}
The domain of definition is the strip $S_L := \vz{T}_L \times \vz{R}$, where $\vz{T}_L$ is the one-dimensional torus of length $L$, i.e. the interval $[0, L]$ with the endpoints identified. For functions on $S_L$ the $H^{-1}$-norm is defined by convolution:
\begin{definition}\label{def:H-1norm}
For $f\in L^{\infty}(S_L)$ with compact support satisfying $\int_{S_L} f = 0$
we define
\[
\|f\|_{H^{-1}(S_L)}^2 := \int_0^L \int_{\vz R} f(x_1, x_2) G*f(x_1, x_2)\, dx_2dx_1,
\]
where $G$ is the Green's function of the operator $-\Delta$ on $S_L$, i.e. it satisfies $-\Delta G = \delta$ in the sense of distributions ($\delta$ is the Dirac delta distribution).
\end{definition}
Note that $\phi_f := G*f$ satisfies $-\Delta \phi_f = f$ on $S_L$. Also note that while the Green's function is only unique up to addition of an affine function of $x_2$, this non-uniqueness is irrelevant for the definition above.
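Definition~\ref{def:H-1norm} can be approximated numerically by truncating the strip to a fully periodic box and solving $-\Delta \phi_f = f$ spectrally. The following Python sketch is purely illustrative and not part of the analysis; the box size, grid, and test function are arbitrary choices, and full periodicity in $x_2$ stands in for compact support on the unbounded strip.

```python
import numpy as np

def h_minus1_norm_sq(f, Lx, Ly):
    """Approximate ||f||_{H^-1}^2 = int f * phi_f, where -Delta phi_f = f,
    for a mean-zero f sampled on a fully periodic Lx-by-Ly grid."""
    nx, ny = f.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=Lx / nx)  # angular frequencies in x1
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=Ly / ny)  # angular frequencies in x2
    k2 = kx[:, None] ** 2 + ky[None, :] ** 2
    fhat = np.fft.fft2(f)
    phihat = np.zeros_like(fhat)
    # Invert -Delta mode by mode; the zero mode is dropped since f is mean-zero.
    np.divide(fhat, k2, out=phihat, where=k2 > 0)
    phi = np.real(np.fft.ifft2(phihat))
    cell = (Lx / nx) * (Ly / ny)
    return cell * np.sum(f * phi)

# Check against the exact value for a single Fourier mode:
# f = cos(2 pi x1/Lx) cos(2 pi x2/Ly) gives
# ||f||^2 = (Lx Ly / 4) / ((2 pi/Lx)^2 + (2 pi/Ly)^2).
Lx, Ly, n = 2.0, 8.0, 128
x1 = np.linspace(0, Lx, n, endpoint=False)
x2 = np.linspace(0, Ly, n, endpoint=False)
f = np.cos(2 * np.pi * x1[:, None] / Lx) * np.cos(2 * np.pi * x2[None, :] / Ly)
exact = (Lx * Ly / 4) / ((2 * np.pi / Lx) ** 2 + (2 * np.pi / Ly) ** 2)
print(h_minus1_norm_sq(f, Lx, Ly), exact)
```

For a single Fourier mode the spectral inversion is exact up to rounding, which makes this a convenient sanity check of the convolution definition.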
We repeat the definition of $\mathcal{F}$ and $\mathcal{K}$ for convenience.
\begin{definition}\label{def:functional}
Let $c_0$, $c_u$, and $c_v$ be real numbers.
Define
\[
\mathcal{F}(u, v) = \left\{ \begin{array}{ll}
\displaystyle c_0 \int_{S_L} |\nabla (u + v)| + c_u \int_{S_L} |\nabla u|
+ c_v \int_{S_L} |\nabla v| \hspace{0.3cm} + \|u - v\|_{H^{-1}(S_L)}^2
& \mbox{ if $(u, v) \in \mathcal{K}$,}\vspace{0.25cm}\\
\infty &\mbox{ otherwise,} \end{array} \right.
\]
where the admissible set is given by
\[
\mathcal{K} := \left\{ (u, v) \in \left(\text{BV}\left(S_L\right)\right)^2 :
u(x), v(x) \in \{0, 1\}, \ uv = 0 \text{ a.e., and } \int_{S_L}
u = \int_{S_L} v\right\}.
\]
\end{definition}
We will require that all $c_i$ are non-negative and at least one of them is positive.
Another, equivalent, form of the functional will be useful, in which the
penalisation of the three types of interface U-0, V-0, and U-V, is given explicitly by surface tension
coefficients $d_{kl}$:
\begin{lemma}
\label{lemma:d_ij}
Let the \emph{surface tension coefficients} be given by
\begin{align*}
d_{u0} &:= c_u+c_0,\\
d_{v0} &:= c_v+c_0,\\
d_{uv} &:= c_u+c_v.
\end{align*}
Non-negativity of the $c_i$ is equivalent to the conditions
\footnote{The indices $j, k, l$ take values in $\{u, v, 0\}$ and the $d_{kl}$
are taken symmetric in their indices, i.e. $d_{vu} = d_{uv}$ etc.}
\begin{equation}\label{eq:ddemands}
0 \leq d_{kl} \leq d_{kj} + d_{jl} \qquad\text{for each } k\not=l\not=j\not=k.
\end{equation}
Then
\[
\mathcal{F}(u, v) = \left\{ \begin{array}{ll}
d_{u0}\mathcal H^{N-1}(S_{u0}) + d_{v0}\mathcal H^{N-1}(S_{v0}) + d_{uv}\mathcal H^{N-1}(S_{uv})
+ \|u - v\|_{H^{-1}(S_L)}^2
& \mbox{ if $(u, v) \in \mathcal{K}$,}\\ \infty &\mbox{ otherwise.} \end{array} \right.
\]
where $S_{kl}$ is the interface between the phases $k$ and $l$:
\begin{align*}
&S_{u0} = \partial^* \supp u \setminus \partial^* \supp v,\\
&S_{v0} = \partial^* \supp v \setminus \partial^*\supp u,\\
&S_{uv} = \partial^* \supp u \cap \partial^* \supp v,
\end{align*}
and $\partial^*$ is the essential boundary of a set.
\end{lemma}
\noindent
The essential boundary of a set consists of all points in the set that have a density other than $0
$ or $1$ in the set; see e.g.~\cite[Chapter 3.5]{AmbrosioFuscoPallara00}.
\begin{proof}[Proof of Lemma~\ref{lemma:d_ij}]
The main step in recognising the equivalence of both forms of $\mathcal{F}$ is noticing
that, for characteristic functions of a set, such as $u$, $v$ and $u+v$, the equality
\[
\int_\Omega |\nabla u| = \mathcal H^{N-1}(\partial^* \supp u \cap \Omega)
\]
holds (see \cite[Theorem 4.4]{Giusti84}, \cite[Theorems 3.59, 3.61]{AmbrosioFuscoPallara00}).
\end{proof}
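The linear relation between the coefficients $c_i$ and the surface tension coefficients $d_{kl}$ is invertible, with each $c_i$ equal to half the ``triangle defect'' of the corresponding inequality in~\pref{eq:ddemands}. A small sketch, assuming nothing beyond the definitions in Lemma~\ref{lemma:d_ij}, makes this equivalence explicit.

```python
def c_to_d(c0, cu, cv):
    """Surface tension coefficients from the interface coefficients c_i."""
    return {"u0": cu + c0, "v0": cv + c0, "uv": cu + cv}

def d_to_c(d):
    """Invert the linear relation: each c_i is half a 'triangle defect'."""
    c0 = (d["u0"] + d["v0"] - d["uv"]) / 2
    cu = (d["u0"] + d["uv"] - d["v0"]) / 2
    cv = (d["v0"] + d["uv"] - d["u0"]) / 2
    return c0, cu, cv

def triangle_inequalities_hold(d):
    """Conditions (eq:ddemands): 0 <= d_kl <= d_kj + d_jl for all triples."""
    return all(0 <= d[a] <= d[b] + d[c]
               for a, b, c in [("uv", "u0", "v0"),
                               ("u0", "uv", "v0"),
                               ("v0", "uv", "u0")])

# Non-negative c_i always yield admissible d_kl, and the map inverts exactly.
d = c_to_d(0.5, 1.0, 0.0)
print(d_to_c(d), triangle_inequalities_hold(d))
```

In particular, violating a triangle inequality (say $d_{uv} > d_{u0} + d_{v0}$) corresponds to a negative $c_0$, matching the instability of the U-V interface discussed in the remark below.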
Note the different interpretations of the coefficients $c_i$ and the surface tension coefficients $d_{kl}$. The latter have a direct physical interpretation (and can be related to material parameters, see \cite{ChoksiRen05}): they determine the mutual repulsion between the different constituents of the diblock copolymer-homopolymer blend. For example, the value of $d_{uv}$ (as compared to the values of $d_{u0}$, $d_{v0}$ and $1$, the coefficient in front of the $H^{-1}$-norm) determines the energy penalty associated with close proximity of U- and V-polymers. In particular, if one of these surface tension coefficients is zero, the corresponding polymers do not repel each other and many interfaces between their respective phases in the model can be expected. On the other hand the coefficients $c_i$, when taken separately, do not convey complete information about the penalisation of the boundary of a phase. If for instance $c_u=0$, but $c_v\neq0$, the part of the U-phase interface that borders on the V-phase still receives a penalty, because $d_{uv}=c_v$. For this reason the use of surface tension coefficients makes more sense from a physical point of view. For the mathematics it is often easier to use the formulation in terms of $c_i$.
If we consider the functional $\mathcal{F}$ on three-dimensional `physical space' we can also see from dimensional considerations that the name ``surface tension coefficients'' for the $d_{kl}$ is justified. Since $\mathcal{H}^2(S_{kl})$ measures surface area, if $\mathcal{F}$ has the dimension of energy then the coefficients $d_{kl}$ have the dimension of energy per unit area, which is also the dimension of surface tension.
\begin{remark}
The conditions (\ref{eq:ddemands}) can be understood in several ways. If, for
instance, $d_{uv}>d_{u0}+d_{v0}$, then the U-V type interface, which is penalised
with a weight of $d_{uv}$, is unstable, for the energy can be reduced by
slightly separating the U and V regions and creating a thin zone of 0
in between. A different way of seeing the necessity of~\pref{eq:ddemands} is by
remarking that the equivalent requirement of non-negativity of the $c_i$ is
necessary for $\mathcal{F}$ to be lower semicontinuous in e.g. the $L^1$ topology (see e.g. \cite[Theorem 1.9]{Giusti84}). Our assumption that at least one $c_i$ is positive is equivalent to requiring at least two $d_{kl}$ to be positive.
\end{remark}
\subsection{Fourier transformation}\label{subsec:fourtfm}
To fix the notation, we explicitly define the Fourier series we use. For future reference we also state some results we will need.
\begin{definition}\label{def:fourser}
Let $f \in L^2\left(\vz{T}_L\right)$, then we will denote by $\hat f \in L^2(\vz{Z}; {\vz C})$, the \emph{Fourier transform of~$f$}:
\[
\hat f(k) := \frac{1}{\sqrt{L}} \int_0^L f(x) e^{-2 \pi i x k / L} \, dx,
\]
and by $a_j$ and $b_j$, $j \in \vz{N}$, the Fourier coefficients of $f$ with respect to the normalised basis of cosines and sines:
\begin{align*}
a_0 &:= \frac1{\sqrt{L}} \int_0^L f(x)\, dx,\\
a_j &:= \sqrt{\frac2L} \int_0^L f(x) \cos\left(\frac{2 \pi x j}{L}\right) \, dx,\\
b_j &:= \sqrt{\frac2L} \int_0^L f(x) \sin\left(\frac{2 \pi x j}{L}\right) \, dx,
\end{align*}
\end{definition}
For easy reference we give here the relations between $\hat f(j)$ and $a_j, b_j$: $\hat f(0) = a_0$ and, for $j \geq 1$, $\hat f(j) = \frac{1}{\sqrt{2}} (a_j - i b_j), \hat f(-j) = \frac{1}{\sqrt{2}} (a_j + i b_j), a_j = \frac{1}{\sqrt{2}} \left( \hat f(j) + \hat f(-j) \right)$ and $b_j = \frac{i}{\sqrt{2}} \left(\hat f(j) - \hat f(-j)\right)$.
Furthermore we have
\begin{align*}
f(x) &= \frac{a_0}{\sqrt{L}} + \sqrt{\frac2L} \sum_{j=1}^{\infty} a_j \cos\left(2 \pi x j / L \right) + \sqrt{\frac2L} \sum_{j=1}^{\infty} b_j \sin\left(2 \pi x j /L\right),\\
f(x) &= \frac{1}{\sqrt{L}} \sum_{q \in \vz{Z}} \hat f(q) e^{2 \pi i x q / L},
\end{align*}
where the convergence is in the $L^2$ topology.
Finally, Parseval's theorem takes the form
\begin{align}
\int_0^L f(x) g(x) \, dx &=
\hat f(0) \hat g(0) + 2 \text{Re}\, \sum_{q=1}^{\infty} \hat f(q) \overline{\hat g(q)}\nonumber\\
&=
a_{f,0} a_{g,0} + \sum_{j=1}^{\infty} \bigl[a_{f,j} a_{g,j} + b_{f,j} b_{g,j}\bigr],\label{eq:Parseval}
\end{align}
and as a consequence we have for $p_1, p_2, p_3 \in L^2(\vz{T}_L)$
\begin{equation}\label{eq:parsconv}
\int_{\vz{T}_L} \int_{\vz{T}_L} p_1(x) p_2(x - y) p_3(y) \, dx dy = L^{1/2} \sum_{q \in \vz{Z}} \hat p_1(q) \overline{\hat p_2(q)} \overline{\hat p_3(q)}.
\end{equation}
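The identities~\pref{eq:Parseval} and~\pref{eq:parsconv} can be spot-checked numerically for trigonometric polynomials. The following sketch (Python with NumPy; the period $L$ and the test functions are arbitrary choices, not part of the analysis) approximates the integrals by sums on a uniform periodic grid, which is spectrally accurate for smooth periodic integrands:

```python
import numpy as np

L = 3.0
N = 4096                       # uniform periodic grid; the Riemann sum is
x = np.arange(N) * L / N       # spectrally accurate for trig polynomials
dx = L / N

def fhat(f, q):
    # \hat f(q) = L^{-1/2} \int_0^L f(x) e^{-2 pi i x q / L} dx
    return np.sum(f * np.exp(-2j * np.pi * x * q / L)) * dx / np.sqrt(L)

f = np.cos(2 * np.pi * x / L) + 0.5 * np.sin(4 * np.pi * x / L)
g = 3 * np.cos(2 * np.pi * x / L) + np.sin(4 * np.pi * x / L)

# Parseval, complex form of (eq:Parseval); exact value is 3*L/2 + L/4
lhs_parseval = np.sum(f * g) * dx
rhs_parseval = np.real(fhat(f, 0) * fhat(g, 0)
                       + 2 * sum(fhat(f, q) * np.conj(fhat(g, q))
                                 for q in range(1, 6)))

# convolution identity (eq:parsconv) with p1 = p2 = p3 = cos(2 pi x / L);
# both sides evaluate to L^2 / 4
p = np.cos(2 * np.pi * x / L)
lhs_conv = sum(np.sum(p * np.roll(p, i) * p[i]) * dx * dx
               for i in range(N))
rhs_conv = np.sqrt(L) * np.real(sum(fhat(p, q) * np.conj(fhat(p, q))**2
                                    for q in range(-5, 6)))
```

Here `np.roll(p, i)` realises the shifted sample $p_2(x-y_i)$ on the periodic grid.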
\section{Geometrical derivatives of the energy}
\label{sec:perstrip}
In the following two sections we will take a look at the stability of two-dimensional periodic monolayer and bilayer configurations. First we need to determine under which conditions these structures are stationary points of the functional $\mathcal{F}$. For the bilayer this will be done in Section~\ref{subsec:bilaystat}, after which we compute the second variation for a bilayer in Section~\ref{subsec:secvarbilay}. We will give analogous results for the monolayer in Section~\ref{subsec:varmonolay}. In Section~\ref{sec:stability} we will use these results to derive the explicit stability criteria of Theorems~\ref{th:stab-monolayer} and~\ref{th:stab-bilayer}.
Of the two possible bilayer structures---UVU and VUV---we only discuss the VUV structure. The results for the UVU structure follow from exchanging the roles of $u$ and $v$.
\subsection{Bilayer: admissible perturbations and stationarity}\label{subsec:bilaystat}
The VUV bilayer of optimal width is a structure given by functions $(u_0,v_0)\in\mathcal{K}$ with
\begin{equation}
\label{def:bilayer-optimal}
u_0 := \chi^{}_{\vz{T}_L\times[-\beth,\beth]}
\qquad\text{and}\qquad
v_0 := \chi^{}_{\vz{T}_L\times([-2\beth,-\beth]\cup[\beth,2\beth])},
\end{equation}
where~\cite{vanGennipPeletier07a}
\begin{equation}\label{eq:bid}
\beth := \sqrt[3]{\frac34 (c_0+c_u+2c_v)},
\end{equation}
and $\chi_A$ is the characteristic function of the set $A$. The set of admissible boundary perturbations of this structure is only restricted by regularity and the equal-mass constraint:
\begin{definition}
The set of admissible perturbations is characterised by
\begin{equation}
\label{def:Pb}
\mathcal{P}_b := \left\{p \in \left(W^{1,2}(\vz{T}_L)\right)^4:
2\int_{\vz{T}_L}(p_1+p_3) = \int_{\vz{T}_L} (p_2+p_4)\right\}.
\end{equation}
For $p\in \mathcal{P}_b$ and $\e>0$ we define a perturbed structure $(u_\e,v_\e)$,
\begin{align*}
u_\e(x_1, x_2) &= \left\{ \begin{array}{ll} 1 & \text{if } x_2 \in \bigl(-\beth-\e p_3(x_1), \beth + \e p_1(x_1)\bigr),\\ 0 & \text{otherwise,} \end{array} \right.\\
v_\e(x_1, x_2) &= \left\{ \begin{array}{ll} 1 & \text{if } x_2 \in \bigl(-2\beth -\e p_4(x_1), -\beth - \e p_3(x_1)\bigr) \cup \bigl(\beth+\e p_1(x_1), 2\beth+\e p_2(x_1)\bigr),\\ 0 & \text{otherwise.} \end{array} \right.
\end{align*}
We also introduce the subset of perturbations that conserve mass:
\begin{equation}
\label{ass:mass_conservation}
\mathcal{P}_b^M := \left\{p \in \mathcal{P}_b:
\int_{\vz{T}_L}(p_1+p_3) = \int_{\vz{T}_L}(p_2+p_4) = 0 \right\}
\end{equation}
\end{definition}
\noindent
Note that since $W^{1,2}(\vz{T}_L)$ is imbedded in $L^\infty(\vz{T}_L)$ by the Sobolev imbedding theorem, the pair $(u_\e,v_\e)$ belongs to $\mathcal{K}$ for sufficiently small $\e$.
A picture of a bilayer of optimal width with perturbations $p$ is shown in Figure~\ref{fig:perturbationsbilayer}.
\begin{figure}[ht]
\centering
{
\psfrag{1}{$\beth + \epsilon p_1(x_1)$}
\psfrag{2}{$2\beth + \epsilon p_2(x_1)$}
\psfrag{3}{$-\beth - \epsilon p_3(x_1)$}
\psfrag{4}{$-2\beth - \epsilon p_4(x_1)$}
\psfrag{x}{$x_1$}
\psfrag{y}{$x_2$}
\psfrag{u}{U}
\psfrag{v}{V}
\includegraphics[width=0.35\textwidth]{perturbations_bilayer2_cd}
}
\caption{The bilayer of optimal width with perturbations}
\label{fig:perturbationsbilayer}
\end{figure}
\begin{remark}\label{rem:differentmassconstraints}
We should stress the difference between the two mass constraints~\pref{def:Pb} and~\pref{ass:mass_conservation}. The constraint~\pref{def:Pb} is equivalent to the condition that $u_\e$ and $v_\e$ have the same mass.
This property is a basic element of the model of block copolymers, via the set of admissible functions $\mathcal{K}$.
The additional condition~\pref{ass:mass_conservation} expresses the requirement that $\int u_\e$ and $\int v_\e$ both equal the mass $\int u_0$ of the unperturbed bilayer; perturbations without this property are meaningful in a situation where the joint mass of $u_\e$ and $v_\e$ may change. The bilayer of optimal width is a stationary point of the functional $\mathcal{F}$ under mass-preserving changes (see Lemma~\ref{lemma:bilayer_is_stationary} below); but as can be inferred from equation~\pref{eq:bilayer_deriv_HMO}, the functional is \emph{not} stationary under perturbations that do change the mass.
\end{remark}
\begin{definition}\label{def:stationarity}
We say that the bilayer of optimal width is stationary with respect to the admissible perturbations $\mathcal{P}_b$ (or $\mathcal{P}_b^M$) if, for all $p \in \mathcal{P}_b$ (or all $p \in \mathcal{P}_b^M$),
\[
\left.\frac{d}{d \epsilon} \mathcal{F}(u_{\epsilon}, v_{\epsilon})\right|_{\epsilon=0} = 0.
\]
\end{definition}
\begin{lemma}
\label{lemma:bilayer_is_stationary}
The VUV bilayer of optimal width is stationary with respect to all $p\in \mathcal{P}_b^M$. \end{lemma}
\begin{proof}
Choksi and Sternberg calculate the first and second variations of a related functional~\cite{ChoksiSternberg06}, and their method can be adapted without much difficulty to the functional $\mathcal{F}$. Here we give a self-contained proof.
Since the interfaces of the bilayer are straight, the derivative of the interfacial terms with respect to the perturbation is zero for all $p\in \mathcal{P}_b$:
\begin{align}
&\hspace{.4cm}\left. \frac{d}{d \epsilon} \left(c_0 \int_{S_L} |\nabla (u_\epsilon + v_\epsilon)| + c_u \int_{S_L} |\nabla u_\epsilon| + c_v \int_{S_L} |\nabla v_\epsilon| \right)\right|_{\epsilon = 0} \notag \\
&=\left. \frac{d}{d \epsilon} \left[d_{uv} \int_0^L \left( \sqrt{1 + \epsilon^2 {p_1'}^2} + \sqrt{1 + \epsilon^2 {p_3'}^2} \right) \, dx
+ d_{v0} \int_0^L \left( \sqrt{1 + \epsilon^2 {p_2'}^2} + \sqrt{1 + \epsilon^2 {p_4'}^2} \right) \, dx\right]\right|_{\e=0}\notag\\
&= 0.
\label{eq:inteps}
\end{align}
For the derivative of the $H^{-1}$-norm, let $\eta \in C({\vz R})$ and compute
\begin{align}
\frac{d}{d \epsilon} \int_{S_L} \eta(x_2) u_{\epsilon}(x) \, dx\Bigr|_{\e=0}
&= \int_0^L \frac{d}{d \epsilon} \int_{-\beth - \epsilon p_3(x_1)}^{\beth + \epsilon p_1(x_1)} \eta(x_2) \, dx_2 \Bigr|_{\e=0} dx_1 \nonumber\\
&= \int_0^L \Bigl( p_1(x_1)\, \eta(\beth + \epsilon p_1(x_1)) + p_3(x_1)\, \eta(-\beth - \epsilon p_3(x_1)) \Bigr) \, dx_1\Bigr|_{\e=0} \notag\\
&= \eta(\beth)\int_0^L p_1 + \eta(-\beth)\int_0^L p_3.
\label{eq:weakudir}
\end{align}
Similarly,
\begin{align}
\frac{d}{d \epsilon} \int_{S_L} \eta(x_2) v_{\epsilon}(x) \, dx\Bigr|_{\e=0}
&= - \eta(\beth)\int_0^L p_1 + \eta(2\beth)\int_0^L p_2
- \eta(-\beth)\int_0^L p_3 + \eta(-2\beth)\int_0^L p_4.
\label{eq:weakvdir}
\end{align}
Let $G$ be the Green's function of $-\Delta$ on $S_L$ from Theorem~\ref{thm:gf}, then
\begin{align*}
\left. \frac{d}{d \epsilon} \|u_{\epsilon} - v_{\epsilon}\|_{H^{-1}(S_L)}^2\right\vert_{\epsilon=0}
&= \left. \frac{d}{d \epsilon} \int_{S_L}
|\nabla G\ast (u_\e-v_\e) |^2\, dx\,\right|_{\e=0}\\
&= 2\int_{S_L} \nabla G\ast (u_0 - v_0)
\left[ \frac{d}{d \epsilon} \nabla G \ast (u_\e - v_\e)\right]_{\epsilon=0}
dx\\
&= 2\left. \frac{d}{d \epsilon}\int_{S_L} \nabla G\ast (u_0 - v_0) \cdot
\nabla G \ast (u_\e - v_\e)\, dx \right|_{\epsilon=0}
\\
&= 2\left. \frac{d}{d \epsilon}\int_{S_L} \bigl[G\ast (u_0 - v_0)\bigr]
(u_\e - v_\e)\, dx \right|_{\epsilon=0}
\end{align*}
Setting $\eta(x_2) := [G\ast (u_0-v_0)](x_1,x_2)$ (which is independent of $x_1$, because $u_0-v_0$ is independent of $x_1$) we calculate by the Fourier series~\pref{eq:greenfour} (or by remarking that this is a one-dimensional situation) that
\[
\eta(x_2) = -\frac12 \int_{\vz{R}} |x_2 - y| (u_0-v_0)(0, y) \,dy,
\]
from which it follows that $\eta\in C({\vz R})$, $\eta(\beth) = \eta(-\beth)$ and $\eta(\pm2\beth)=0$. We thus obtain from~(\ref{eq:weakudir}) and (\ref{eq:weakvdir}) that
\begin{equation}
\label{eq:bilayer_deriv_HMO}
\left. \frac{d}{d \epsilon} \|u_{\epsilon} - v_{\epsilon}\|_{H^{-1}(S_L)}^2
\right|_{\e=0}
= 4\eta(\beth)\int_0^L (p_1+p_3)
\stackrel{\pref{ass:mass_conservation}}= 0.
\end{equation}
\end{proof}
\begin{remark}\label{rem:whyoptimalwidth}
Note that in Lemma~\ref{lemma:bilayer_is_stationary} we nowhere use the specific definition of $\beth$ in (\ref{eq:bid}). As we explain in Appendix~\ref{sec:energy-per-unit-mass} the optimal width is relevant when considering energy per unit mass. If we define the mass functional as
\[
\mathcal{M}(u,v) := \int_{S_L} u
\]
for $(u,v)\in \mathcal{K}$, then the following calculations show that the bilayer of optimal width is a stationary point of $\mathcal{F}/\mathcal{M}$ with respect to all perturbations in $\mathcal{P}_b$ (so not only the mass-preserving ones) in the sense of Definition~\ref{def:stationarity} (with the functional $\mathcal{F}$ replaced by $\mathcal{F}/\mathcal{M}$). To this end, let $p\in \mathcal{P}_b$.
We first compute that $\eta(\beth) = \frac12 \beth^2$, where $\eta$ is such as chosen at the end of the proof of Lemma~\ref{lemma:bilayer_is_stationary}. Then with the help of (\ref{eq:bilayer_deriv_HMO}) and the computations for the one-dimensional case in \cite{vanGennipPeletier07a} we find that
\begin{align*}
\left. \frac{d}{d\e} \mathcal{F}(u_\e, v_\e)\right|_{\e=0} &= 2 \beth^2 \int_0^L (p_1+p_3),\\
\left. \frac{d}{d\e} \mathcal{M}(u_\e, v_\e)\right|_{\e=0} &= \int_0^L (p_1+p_3),\\
\mathcal{F}(u_0,v_0) &= 2 L (c_0+c_u+2c_v) + \frac43 L \beth^3,\\
\mathcal{M}(u_0,v_0) &= 2 L \beth.
\end{align*}
Now we conclude
\begin{align*}
\left.\frac{d}{d\e} \mathcal{F}/\mathcal{M}(u_\e, v_\e)\right|_{\e=0} &= \mathcal{M}(u_0,v_0)^{-2} \Biggl( \mathcal{M}(u_0,v_0)\left. \frac{d}{d\e} \mathcal{F}(u_\e, v_\e)\right|_{\e=0}+\\&\hspace{2.8cm} - \mathcal{F}(u_0,v_0)\left. \frac{d}{d\e} \mathcal{M}(u_\e, v_\e)\right|_{\e=0} \Biggr)\\
&= \frac12 L^{-1} \beth^{-2} \int_0^L (p_1+p_3) \Biggl( \frac43 \beth^3 - (c_0+c_u+2c_v) \Biggr)\\
&\hspace{-0.145cm}\stackrel{\pref{eq:bid}}= 0.
\end{align*}
We see that the optimal width condition \pref{eq:bid} is not necessary for stationarity under $\mathcal{F}/\mathcal{M}$ with respect to the mass preserving perturbations in $\mathcal{P}_b^M$, but it is for stationarity with respect to perturbations in $\mathcal{P}_b\setminus\mathcal{P}_b^M$.
\end{remark}
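The algebra in this remark can be spot-checked numerically. The sketch below (Python with NumPy) assumes nothing beyond the four displayed formulas and the definition~\pref{eq:bid} of $\beth$; the scalar \texttt{P} stands for the common value of $\int_0^L(p_1+p_3)$, and the parameter ranges are arbitrary test choices:

```python
import numpy as np

rng = np.random.default_rng(0)
residuals = []
for _ in range(200):
    c0, cu, cv = rng.uniform(0.1, 2.0, size=3)
    L = rng.uniform(1.0, 10.0)
    P = rng.uniform(-1.0, 1.0)                       # int_0^L (p1 + p3)
    beth = (0.75 * (c0 + cu + 2 * cv)) ** (1 / 3)    # optimal width (eq:bid)
    dF = 2 * beth**2 * P                             # d/d(eps) F at eps = 0
    dM = P                                           # d/d(eps) M at eps = 0
    F0 = 2 * L * (c0 + cu + 2 * cv) + (4 / 3) * L * beth**3
    M0 = 2 * L * beth
    residuals.append((M0 * dF - F0 * dM) / M0**2)    # quotient rule

max_residual = max(abs(r) for r in residuals)

# for a width other than the optimal one the same expression does not vanish
beth_bad = 1.1 * beth
F0_bad = 2 * L * (c0 + cu + 2 * cv) + (4 / 3) * L * beth_bad**3
M0_bad = 2 * L * beth_bad
residual_bad = (M0_bad * (2 * beth_bad**2) - F0_bad) / M0_bad**2  # with P = 1
```

The residual vanishes (to rounding) precisely because $\frac43\beth^3 = c_0+c_u+2c_v$; the last lines confirm that a non-optimal width leaves a nonzero derivative, as stated.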
\subsection{Second variation for a bilayer}\label{subsec:secvarbilay}
We express the components $p_i$ of a given perturbation $p\in \mathcal{P}_b$ as a Fourier series (see Section~\ref{subsec:fourtfm}):
\begin{equation}
\label{eq:Fourier-pi}
p_i(x) = \frac{a_{i,0}}{\sqrt{L}} + \sqrt{\frac2L} \sum_{j=1}^{\infty} a_{i,j} \cos\left( \frac{2 \pi x j}{L} \right) + \sqrt{\frac2L} \sum_{j=1}^{\infty} b_{i,j} \sin\left( \frac{2 \pi x j}{L} \right).
\end{equation}
The equal-mass condition in~\pref{def:Pb} translates into
\begin{equation}\label{eq:masscondition}
2 \left( a_{1,0} + a_{3,0}\right) = a_{2,0} + a_{4,0}.
\end{equation}
We also write
\[
\mathfrak{a}_j := \left(a_{1,j}, a_{2,j}, a_{3,j}, a_{4,j}\right) \qquad \text{and} \qquad \mathfrak{b}_j := \left(b_{1,j}, b_{2,j}, b_{3,j}, b_{4,j}\right).
\]
\begin{theorem}\label{thm:bilayersecondvar}
Using the notation introduced above, the second variation of $\mathcal F$ at the VUV bilayer of optimal width~\pref{def:bilayer-optimal} in the direction $p\in\mathcal{P}_b$ is given by
\[
\left.\frac{d^2}{d \epsilon^2} \mathcal{F}(u_\e,v_\e) \right\vert_{\epsilon=0} = B_0\left(\mathfrak{a}_0, \beth\right) + \sum_{j=1}^{\infty} B_j\left(\mathfrak{a}_j, \mathfrak{b}_j, d_{uv}, d_{v0}, L\right),
\]
where
\begin{align*}
&B_0\left(\mathfrak{a}_0, \beth\right) :=\\ &4\beth \left\{- a_{1,0}^2 - a_{3,0}^2 + a_{1,0} a_{2,0} + a_{3,0} a_{4,0} - 4 a_{1,0} a_{3,0} + 3 a_{2,0} a_{3,0} + 3 a_{1,0} a_{4,0} - 2 a_{2,0} a_{4,0}\right\},
\end{align*}
and, for $j \in \vz{N}_{>0}$,
\begin{align*}
&B_j\left(\mathfrak{a}_j, \mathfrak{b}_j, d_{uv}, d_{v0}, L\right) :=\\ &\frac{4 \pi^2 j^2}{L^2} \left[ d_{uv} \left\{ a_{1,j}^2 + a_{3,j} ^2 + b_{1,j} ^2 + b_{3,j}^2 \right\}
+ d_{v0} \left\{ a_{2,j} ^2 + a_{4,j} ^2 + b_{2,j} ^2 + b_{4,j} ^2 \right\} \right] \Biggr.\\
&+ \left.\frac{L}{ \pi j}\right. \left. \biggl[2 \left(1 - \frac{2 \pi \beth j}{L} \right) \left\{ a_{1,j} ^2 + a_{3,j}^2 + b_{1,j}^2 + b_{3,j}^2 \right\}\biggr.\right.\\
&\quad \quad \quad \quad \quad+ \left.\left. \frac12 \left\{ a_{2,j}^2 + a_{4,j}^2 + b_{2,j}^2 + b_{4,j}^2 \right\}\right.\right.\\
&\quad \quad \quad \quad \quad-2 \left. \left. \left\{ a_{1,j} a_{2,j} + a_{3,j} a_{4,j} + b_{1,j} b_{2,j} + b_{3,j} b_{4,j} \right\} e^{-2 \pi \beth j / L} \right.\right.\\
&\quad \quad \quad \quad \quad+4 \left. \left. \left\{ a_{1,j} a_{3,j} + b_{1,j} b_{3,j} \right\} e^{-4 \pi \beth j / L}\right.\right.\\
&\quad \quad \quad \quad \quad-2 \left. \left. \left\{ a_{1,j} a_{4,j} + a_{2,j} a_{3,j} + b_{1,j} b_{4,j} + b_{2,j} b_{3,j} \right\} e^{-6 \pi \beth j / L}\right.\right.\\
&\quad \quad \quad \quad \quad+ \Biggl. \biggl. \left\{ a_{2,j} a_{4,j} + b_{2,j} b_{4,j}\right\} e^{-8 \pi \beth j / L}\biggr].
\end{align*}
\end{theorem}
The proof is given in Appendix~\ref{app:proofbilayersecondvar}.
\subsection{Variations for a monolayer}\label{subsec:varmonolay}
Analogous results also hold for monolayers as defined below; we state them in this subsection. Since the proofs are completely analogous to those for bilayers, we do not write them out here.
The monolayer of optimal width is a structure given by functions $(u_0,v_0)\in\mathcal{K}$ with
\begin{equation}
\label{def:monolayer-optimal}
u_0 := \chi^{}_{\vz{T}_L\times [0,\aleph]}
\qquad\text{and}\qquad
v_0 := \chi^{}_{\vz{T}_L\times [\aleph,2\aleph]},
\end{equation}
where~\cite{vanGennipPeletier07a}
\begin{equation}\label{eq:monod}
\aleph := \sqrt[3]{\frac32 (c_0+c_u+c_v)}.
\end{equation}
The set of admissible boundary perturbations of this structure is again restricted by regularity and the equal-mass constraint:
\begin{definition}
The set of admissible perturbations is characterised by
\[
\mathcal{P}_m := \left\{p \in \left(W^{1,2}(\vz{T}_L)\right)^3:
\int_{\vz{T}_L}(p_2-p_1) = \int_{\vz{T}_L} (p_3-p_2)\right\}.
\]
For $p\in \mathcal{P}_m$ and $\e>0$ we define a perturbed structure $(u_\e,v_\e)$,
\begin{align*}
u_\e(x_1, x_2) &= \left\{
\begin{array}{ll}
1 & \text{if } x_2 \in \bigl(\e p_1(x_1), \aleph + \e p_2(x_1)\bigr),\\
0 & \text{otherwise,} \end{array} \right.\\
v_\e(x_1, x_2) &= \left\{
\begin{array}{ll}
1 & \text{if } x_2 \in \bigl(\aleph + \e p_2(x_1), 2\aleph + \e p_3(x_1)\bigr),\\ 0 & \text{otherwise.} \end{array} \right.
\end{align*}
We also define the subset of mass preserving perturbations:
\begin{equation}\label{assum:monolayermassconservation}
\mathcal{P}_m^M := \left\{p \in \mathcal{P}_m : \int_{\vz{T}_L}(p_2-p_1) = \int_{\vz{T}_L}(p_3-p_2) = 0\right\}.
\end{equation}
\end{definition}
A picture of a monolayer of optimal width with perturbations $p$ is shown in Figure~\ref{fig:perturbationsmonolayer}.
\begin{figure}[ht]
\centering
{
\psfrag{0}{$\epsilon p_1(x_1)$}
\psfrag{1}{$\aleph + \epsilon p_2(x_1)$}
\psfrag{2}{$2\aleph + \epsilon p_3(x_1)$}
\psfrag{x}{$x_1$}
\psfrag{y}{$x_2$}
\psfrag{u}{U}
\psfrag{v}{V}
\includegraphics[width=0.35\textwidth]{perturbations_monolayer2_cd}\\
}
\caption{The monolayer of optimal width with perturbations}
\label{fig:perturbationsmonolayer}
\end{figure}
Stationarity for the monolayer of optimal width is defined analogously to stationarity for the bilayer, see Definition~\ref{def:stationarity}.
\begin{lemma}\label{lem:monolayerstationary}
The monolayer of optimal width is stationary with respect to all $p\in \mathcal{P}_m^M$.
\end{lemma}
\begin{proof}
Analogously to the proof of Lemma~\ref{lemma:bilayer_is_stationary} we find that the first variation of the interfacial terms with respect to all $p \in \mathcal{P}_m$ is zero.
For any $\eta \in C({\vz R})$ we compute
\begin{align*}
\frac{d}{d \epsilon} \int_{S_L} \eta(x_2) u_{\epsilon}(x) \, dx\Bigr|_{\e=0} &= \eta(\aleph) \int_0^L p_2 - \eta(0) \int_0^L p_1,\\
\frac{d}{d \epsilon} \int_{S_L} \eta(x_2) v_{\epsilon}(x) \, dx\Bigr|_{\e=0} &= \eta(2\aleph) \int_0^L p_3 - \eta(\aleph) \int_0^L p_2.
\end{align*}
With $G$ the Green's function of $-\Delta$ on $S_L$ from Theorem~\ref{thm:gf} we make the choice $\eta(x_2) = G\ast(u_0-v_0)(x_1, x_2)$, which is independent of $x_1$. Using $\eta(\aleph) = 0$ and $\eta(0) = -\eta(2 \aleph) > 0$, we compute, as in the above mentioned proof,
\begin{equation}\label{eq:stationaritymonolayer}
\left.\frac{d}{d\epsilon} \|u_{\epsilon}-v_{\epsilon}\|_{H^{-1}(S_L)}^2\right|_{\e=0} = 2 \eta(0) \int_0^L(p_3-p_1) \stackrel{\pref{assum:monolayermassconservation}}= 0.
\end{equation}
\end{proof}
Note that by equation~(\ref{eq:stationaritymonolayer}) the monolayer of optimal width is not stationary with respect to perturbations that are allowed to change the total mass, i.e.\ with respect to $p \in \mathcal{P}_m\setminus \mathcal{P}_m^M$.
\begin{remark}
In complete analogy to Remark~\ref{rem:whyoptimalwidth} we see that the optimal width condition (\ref{eq:monod}) plays no role in the stationarity of the monolayer under the functional $\mathcal{F}$, but it plays an important role when considering the stationarity of the monolayer under $\mathcal{F}/\mathcal{M}$, the energy per unit mass functional. Appendix~\ref{sec:energy-per-unit-mass} also argues the relevance of optimal width when considering energy per unit mass. Using~\cite{vanGennipPeletier07a},
\[
\mathcal{F}(u_0,v_0) = 2 L (c_0+c_u+c_v) + \frac23 L \aleph^3,
\]
a computation analogous to that in Remark~\ref{rem:whyoptimalwidth} shows that the optimal width condition \pref{eq:monod} is not necessary for stationarity under $\mathcal{F}/\mathcal{M}$ with respect to the mass preserving perturbations in $\mathcal{P}_m^M$, but it is for stationarity with respect to perturbations in $\mathcal{P}_m\setminus\mathcal{P}_m^M$.\end{remark}
Similar to~\pref{eq:Fourier-pi} we express a $p\in \mathcal P_m$ in terms of its Fourier modes $a_{i,j}$ and $b_{i,j}$ and introduce the notation
\[
\mathfrak{a}_j := \left(a_{1,j}, a_{2,j}, a_{3,j}\right) \qquad \text{and} \qquad \mathfrak{b}_j := \left(b_{1,j}, b_{2,j}, b_{3,j}\right).
\]
\begin{theorem}\label{thm:monolayersecond}
Using the notation given above, the second variation of $\mathcal{F}$ at $(u_0, v_0)$ in the direction of $p\in\mathcal{P}_m$ is given by
\[
\left. \frac{d^2}{d \epsilon^2} \mathcal{F}(u_{\epsilon}, v_{\epsilon}) \right\vert_{\epsilon=0} = M_0\left(\mathfrak{a}_0, \aleph\right)
+ \sum_{j=1}^{\infty} M_j\left(\mathfrak{a}_j, \mathfrak{b}_j, d_{u0}, d_{uv}, d_{v0}, L\right),
\]
where
\[
M_0\left(\mathfrak{a}_0, \aleph\right) := \aleph \left(a_{1,0} - a_{3,0}\right)^2,
\]
and, for $j \in \vz{N}_{>0}$,
\begin{align*}
&M_j\left(\mathfrak{a}_j, \mathfrak{b}_j, d_{u0}, d_{uv}, d_{v0}, L\right) :=\\
&\frac{4 \pi^2 j^2}{L^2} \left[ d_{u0} \left\{ a_{1,j}^2 + b_{1,j}^2 \right\} + d_{uv} \left\{ a_{2,j}^2 + b_{2,j}^2 \right\} + d_{v0} \left\{ a_{3,j}^2 + b_{3,j}^2 \right\} \right] \Biggr.\\
&+ \left.\frac{L}{\pi j}\right. \left. \biggl[2 \left(1 - \frac{2 \pi \aleph j}{L} \right) \left\{ a_{2,j}^2 + b_{2,j}^2 \right\}\biggr.\right.\\
&\quad \quad \quad \quad \quad+ \left.\left. \frac12 \left\{ a_{1,j}^2 + a_{3,j}^2 + b_{1,j}^2 + b_{3,j}^2 \right\}\right.\right.\\
&\quad \quad \quad \quad \quad-2 \left. \left. \left\{ a_{1,j} a_{2,j} + a_{2,j} a_{3,j} + b_{1,j} b_{2,j} + b_{2,j} b_{3,j} \right\} e^{-2 \pi \aleph j / L} \right.\right.\\
&\quad \quad \quad \quad \quad+ \Biggl. \biggl. \left\{ a_{1,j} a_{3,j} + b_{1,j} b_{3,j} \right\} e^{-4 \pi \aleph j / L} \biggr].
\end{align*}
\end{theorem}
\begin{proof}
Analogous to the proof of Theorem~\ref{thm:bilayersecondvar}.
\end{proof}
\section{Stability}\label{sec:stability}
In this section we study stability of monolayers and bilayers with respect to the admissible perturbations. The bilayer will be treated in Section~\ref{subsec:bilayerstability}, the monolayer in Section~\ref{subsec:monolayerstability}.
\subsection{Preliminary definitions and results}
In this paper we only consider \emph{linear} stability---whenever we use the words \emph{stable} or \emph{unstable}, this refers to the sign of the second derivative:
\begin{definition}\label{def:stability}
Using the notation of Section~\ref{sec:perstrip}, the VUV bilayer (monolayer) of optimal width $(u_0,v_0)$ is called \emph{stable} iff
\[
\left.\frac{d^2}{d\epsilon^2}\mathcal{F}(u_{\epsilon}, v_{\epsilon})\right|_{\epsilon=0} \geq 0,
\]
for every $p\in \mathcal{P}_b^M$ ($\mathcal{P}_m^M$), and unstable otherwise.
\end{definition}
The following property simplifies the study of stability of the bilayers and monolayers.
\begin{lemma}\label{lem:oneforall}
Using the notation from Theorem~\ref{thm:bilayersecondvar} we have, for any $x, y\in\vz{R}^4$ and for $j \geq 1$,
\begin{align*}
B_j\left(x, y, d_{uv}, d_{v0}, L\right) &= B_1\left(x, y, d_{uv}, d_{v0}, L/j\right),\\
B_j\left(x, 0, d_{uv}, d_{v0}, L\right) &= B_j\left(0, x, d_{uv}, d_{v0}, L\right),\\
B_j\left(x, y, d_{uv}, d_{v0}, L\right) &= B_j\left(x, 0, d_{uv}, d_{v0}, L\right) + B_j\left(0, y, d_{uv}, d_{v0}, L\right).
\end{align*}
Similarly, in the notation from Theorem~\ref{thm:monolayersecond} we have, for any $x, y\in\vz{R}^3$ and for $j \geq 1$,
\begin{align*}
M_j\left(x, y, d_{u0}, d_{uv}, d_{v0}, L\right) &= M_1\left(x, y, d_{u0}, d_{uv}, d_{v0}, L/j\right),\\
M_j\left(x, 0, d_{u0}, d_{uv}, d_{v0}, L\right) &= M_j\left(0, x, d_{u0}, d_{uv}, d_{v0}, L\right),\\
M_j\left(x, y, d_{u0}, d_{uv}, d_{v0}, L\right) &= M_j\left(x, 0, d_{u0}, d_{uv}, d_{v0}, L\right) + M_j\left(0, y, d_{u0}, d_{uv}, d_{v0}, L\right).
\end{align*}
\end{lemma}
\begin{proof}
These properties follow from the definitions of $B_j$ in Theorem~\ref{thm:bilayersecondvar} and $M_j$ in Theorem~\ref{thm:monolayersecond}.
\end{proof}
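The three properties for $B_j$ can be confirmed numerically. The sketch below (Python with NumPy) transcribes $B_j$ from Theorem~\ref{thm:bilayersecondvar}; note that $\beth$, which the theorem's notation suppresses, enters through the exponentials and is therefore passed explicitly, and all parameter values are arbitrary test choices:

```python
import numpy as np

def B_j(a, b, duv, dv0, L, j, beth):
    # B_j from Theorem thm:bilayersecondvar (beth passed explicitly)
    a1, a2, a3, a4 = a
    b1, b2, b3, b4 = b
    e = np.exp(-2 * np.pi * beth * j / L)
    s13 = a1**2 + a3**2 + b1**2 + b3**2      # modes of p1 and p3
    s24 = a2**2 + a4**2 + b2**2 + b4**2      # modes of p2 and p4
    return (4 * np.pi**2 * j**2 / L**2) * (duv * s13 + dv0 * s24) \
        + (L / (np.pi * j)) * (
            2 * (1 - 2 * np.pi * beth * j / L) * s13
            + 0.5 * s24
            - 2 * (a1 * a2 + a3 * a4 + b1 * b2 + b3 * b4) * e
            + 4 * (a1 * a3 + b1 * b3) * e**2
            - 2 * (a1 * a4 + a2 * a3 + b1 * b4 + b2 * b3) * e**3
            + (a2 * a4 + b2 * b4) * e**4)

rng = np.random.default_rng(2)
x, y = rng.normal(size=4), rng.normal(size=4)
duv, dv0, L, beth, j = 0.8, 1.3, 5.0, 0.9, 7
zero = np.zeros(4)

scaling   = B_j(x, y, duv, dv0, L, j, beth) - B_j(x, y, duv, dv0, L / j, 1, beth)
symmetry  = B_j(x, zero, duv, dv0, L, j, beth) - B_j(zero, x, duv, dv0, L, j, beth)
splitting = B_j(x, y, duv, dv0, L, j, beth) \
    - B_j(x, zero, duv, dv0, L, j, beth) - B_j(zero, y, duv, dv0, L, j, beth)
```

The scaling property holds because $j$ and $L$ enter only through the combinations $j/L$ and $L/j$; the other two properties reflect the fact that the $\mathfrak{a}$- and $\mathfrak{b}$-modes enter $B_j$ symmetrically and without cross terms.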
\subsection{Stability of the bilayer}\label{subsec:bilayerstability}
Throughout this subsection we will use the notation as introduced in Section~\ref{subsec:secvarbilay}. Lemma~\ref{lem:oneforall} provides us with a simpler characterisation of stability:
\begin{corol}\label{cor:bilayerstability}
The VUV bilayer is stable iff
\begin{enumerate}
\item $B_0(\mathfrak{a}_0, \beth)\geq 0$ for all $\mathfrak{a}_0\in {\vz R}^4$ satisfying \pref{eq:masscondition}, and
\item $B_1(x, 0, d_{uv}, d_{v0}, L/j)
\geq 0$ for all $x\in {\vz R}^4$ and all $j\geq 1$.
\end{enumerate}
\end{corol}
We therefore study $B_0$ and $B_1$ as quadratic forms on ${\vz R}^4$ subject to~\pref{eq:masscondition} and investigate their sign. Note that $B_0$ and $B_1$ can be identified with symmetric $4\times 4$ matrices, and we will make this identification throughout. Among other things, this means we can speak of eigenvalues of $B_0$ and $B_1$, and relate the sign of the quadratic forms to the signs of their eigenvalues.
\begin{lemma}
\label{lemma:B0_positive}
$B_0(\mathfrak{a}_0, \beth)\geq 0$ for all $\beth>0$ and for all $\mathfrak{a}_0\in {\vz R}^4$ satisfying~\pref{eq:masscondition}.
\end{lemma}
\begin{proof}
The lemma follows from writing $B_0$ as
\[
\frac{1}{4\beth}B_0(\mathfrak{a}_0, \beth) =
-\frac12\left(2a_{1,0}-a_{2,0} + 2a_{3,0} - a_{4,0}\right)^2
+ \frac12\left(a_{1,0} - a_{2,0} - a_{3,0} + a_{4,0}\right)^2
+ \frac12\left(a_{1,0} + a_{3,0}\right)^2.
\]
Under the mass condition~\pref{eq:masscondition} the first square on the right-hand side vanishes, so that $B_0$ reduces to a nonnegative sum of squares.
\end{proof}
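Both the sum-of-squares identity and the nonnegativity of $B_0$ on the constraint set can be spot-checked numerically (a Python/NumPy sketch; the value of $\beth$ is an arbitrary test choice, and the mass condition~\pref{eq:masscondition} is enforced by solving for $a_{4,0}$):

```python
import numpy as np

def B0(a, beth):
    # B0 from Theorem thm:bilayersecondvar
    a1, a2, a3, a4 = a
    return 4 * beth * (-a1**2 - a3**2 + a1 * a2 + a3 * a4 - 4 * a1 * a3
                       + 3 * a2 * a3 + 3 * a1 * a4 - 2 * a2 * a4)

def B0_squares(a, beth):
    # the decomposition used in the proof of Lemma lemma:B0_positive
    a1, a2, a3, a4 = a
    return 4 * beth * (-0.5 * (2 * a1 - a2 + 2 * a3 - a4)**2
                       + 0.5 * (a1 - a2 - a3 + a4)**2
                       + 0.5 * (a1 + a3)**2)

rng = np.random.default_rng(3)
beth = 1.3
identity_err = 0.0
constrained_min = np.inf
for _ in range(1000):
    a = rng.normal(size=4)
    identity_err = max(identity_err, abs(B0(a, beth) - B0_squares(a, beth)))
    a[3] = 2 * (a[0] + a[2]) - a[1]   # mass condition 2(a1 + a3) = a2 + a4
    constrained_min = min(constrained_min, B0(a, beth))
```

On the constraint set the first square in the decomposition vanishes identically, which is why the constrained minimum stays at zero (up to rounding).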
\begin{lemma}\label{lem:B1negeig}
Two of the four eigenvalues of $B_1$ are nonnegative for all $d_{uv}$, $d_{v0}$, and $L$; the other two do not have a definite sign. Denote the smallest eigenvalue by $\lambda_1^b(d_{uv}, d_{v0}, L)$. Define
\begin{equation}
\label{eq:upslamb}
\upsilon := e^{-2 \pi \beth/L}, \hspace{2cm}
\zeta := \frac{d_{uv}}{d_{uv}+d_{v0}} = \frac{c_u + c_v}{c_0 + c_u + 2 c_v}.
\end{equation}
There exists a function $\zeta_1 \in C([0, 1])$ (see~\pref{eq:brel1}) such that
\begin{align*}
&\lambda_1^b(d_{uv}, d_{v0}, L) \geq 0 \Longleftrightarrow \zeta \geq \zeta_1(\upsilon).
\end{align*}
\end{lemma}
\begin{proof}
Note that $\upsilon \in (0, 1)$ and, by conditions (\ref{eq:ddemands}), \mbox{$\zeta \in \left[\frac12 - \frac{c_u + c_0}{2(c_0 + c_u + 2 c_v)}, \frac12 + \frac{c_u + c_0}{2(c_0 + c_u + 2 c_v)}\right] \subset [0,1]$}.
Let $x\in\vz{R}^4$. We now write
\[
B_1\left(x, 0, d_{uv}, d_{v0}, L\right) = \frac{2L}{\pi} \tilde B_1\left(x, \zeta, \upsilon\right),
\]
where
\begin{align}
\tilde B_1\left(x, \zeta, \upsilon\right) &:= -\frac13 \log^3 \upsilon\, \Bigl( \zeta \left(x_1^2 + x_3^2 \right) + (1 - \zeta) \left( x_2^2 + x_4^2 \right) \Bigr)\label{eq:tildeB1}\\
&\hspace{0.5cm} + (1 + \log \upsilon) \left(x_1^2 + x_3^2\right) + \frac14 \left( x_2^2 + x_4^2 \right)\nonumber\\
&\hspace{0.5cm} - \left( x_1 x_2 + x_3 x_4 \right) \upsilon + 2 x_1 x_3 \upsilon^2 - \left(x_1 x_4 + x_2 x_3\right) \upsilon^3 + \frac12 x_2 x_4 \upsilon^4.\nonumber
\end{align}
Note that when $x_1=x_3=0$,
\[
\tilde B_1\left(x, \zeta, \upsilon\right)
= -\frac13 (1-\zeta) \log^3 \upsilon \left( x_2^2 + x_4^2 \right)
+ \frac14 \left( x_2^2 + x_4^2 \right)
+ \frac12 x_2 x_4 \upsilon^4
\geq 0
\]
(recall that $\log \upsilon < 0$, $\zeta \leq 1$ and $2|x_2 x_4| \leq x_2^2 + x_4^2$),
so that by the max-min characterisation of the third eigenvalue $\lambda_3^b$, for fixed $\zeta, \upsilon$, we have
\[
\lambda^b_3 \;= \;\max_{\dim L = 2}\; \min_{\substack{x\in {\vz R}^4/L\\|x|=1}}\; \tilde B_1(x,\zeta,\upsilon)
\;\geq \;\min_{\substack{x_1=x_3=0\\|x|=1}} \;\tilde B_1(x,\zeta,\upsilon) \geq0,
\]
implying that the largest two eigenvalues are always non-negative.
We now turn to the question of existence of admissible $x$ for which $\tilde B_1$ is negative, and we simplify the problem by minimising $\tilde B_1$ with respect to $x_2$ and $x_4$ for fixed $x_1$ and $x_3$. The stationarity conditions $\frac{\partial}{\partial x_2} \tilde B_1\left(x, \zeta, \upsilon\right) = 0$ and $\frac{\partial}{\partial x_4} \tilde B_1\left(x, \zeta, \upsilon\right) = 0$ form a linear system whose solution is
\[
\left(\begin{array}{c} x_2^{\text{opt}} \\ x_4^{\text{opt}} \end{array}\right) = \frac1{\det A(\zeta, \upsilon)} A(\zeta, \upsilon) \left(\begin{array}{cc} \upsilon & \upsilon^3 \\ \upsilon^3 & \upsilon \end{array}\right) \left(\begin{array}{c} x_1 \\ x_3 \end{array}\right),
\]
where
\[
A(\zeta, \upsilon) := \left(\begin{array}{cc} \frac12 - \frac23 (1 - \zeta) \log^3 \upsilon & -\frac12 \upsilon^4 \\ -\frac12 \upsilon^4 & \frac12 - \frac23 (1 - \zeta) \log^3 \upsilon \end{array}\right).
\]
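The formula for $(x_2^{\text{opt}}, x_4^{\text{opt}})$ can be verified by a finite-difference check that the partial derivatives of $\tilde B_1$ in $x_2$ and $x_4$ indeed vanish there (a numerical sketch in Python with NumPy; the values of $\zeta$, $\upsilon$, $x_1$, $x_3$ are arbitrary test choices):

```python
import numpy as np

def tilde_B1(x, zeta, ups):
    # tilde B_1 from (eq:tildeB1)
    x1, x2, x3, x4 = x
    lg = np.log(ups)
    return (-(lg**3) / 3 * (zeta * (x1**2 + x3**2) + (1 - zeta) * (x2**2 + x4**2))
            + (1 + lg) * (x1**2 + x3**2) + 0.25 * (x2**2 + x4**2)
            - (x1 * x2 + x3 * x4) * ups + 2 * x1 * x3 * ups**2
            - (x1 * x4 + x2 * x3) * ups**3 + 0.5 * x2 * x4 * ups**4)

def x24_opt(x1, x3, zeta, ups):
    # the claimed minimiser in x2, x4 for fixed x1, x3
    d = 0.5 - (2 / 3) * (1 - zeta) * np.log(ups)**3
    A = np.array([[d, -0.5 * ups**4], [-0.5 * ups**4, d]])
    rhs = np.array([[ups, ups**3], [ups**3, ups]]) @ np.array([x1, x3])
    return (A @ rhs) / np.linalg.det(A)

zeta, ups, x1, x3 = 0.4, 0.6, 0.7, -1.1
x2, x4 = x24_opt(x1, x3, zeta, ups)
h = 1e-6
grad_x2 = (tilde_B1((x1, x2 + h, x3, x4), zeta, ups)
           - tilde_B1((x1, x2 - h, x3, x4), zeta, ups)) / (2 * h)
grad_x4 = (tilde_B1((x1, x2, x3, x4 + h), zeta, ups)
           - tilde_B1((x1, x2, x3, x4 - h), zeta, ups)) / (2 * h)
```

Since $\tilde B_1$ is quadratic in $x_2$ and $x_4$, the central differences reproduce the exact partial derivatives up to rounding.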
Inserting these results into $\tilde B_1$ gives
\[
\tilde B_1\left(x_1, x_2^{\text{opt}}, x_3, x_4^{\text{opt}}, \zeta, \upsilon\right) = \left(x_1, x_3\right) \overset{\maltese}{B}(\zeta, \upsilon) \left(x_1, x_3\right)^T,
\]
where the matrix entries of $\overset{\maltese}{B}$ are given by
\begin{align*}
\overset{\maltese}{B}_{11}(\zeta, \upsilon) = \overset{\maltese}{B}_{22}(\zeta, \upsilon) &= \log \upsilon - \frac13 \zeta \log^3 \upsilon\\
&\hspace{0.47cm} - \frac{\Bigl(3 (-1 + \upsilon^2) - 4 (-1 + \zeta) \log^3 \upsilon\Bigr) \Bigl(3 (-1 + \upsilon^6) - 4 (-1 + \zeta) \log^3 \upsilon\Bigr)}{9 \Bigl(-1 + \upsilon^8\Bigr) + 8 \Bigl(-1 + \zeta\Bigr) \Bigl(-3 - 2 (-1 + \zeta) \log^3 \upsilon\Bigr) \log^3 \upsilon},\\
\overset{\maltese}{B}_{12}(\zeta, \upsilon) = \overset{\maltese}{B}_{21}(\zeta, \upsilon) &= -\frac{\Bigl( 3 \upsilon (-1 + \upsilon^2) - 4 \upsilon (-1 + \zeta) \log^3 \upsilon\Bigr)^2}{9 \Bigl(-1 + \upsilon^8\Bigr) + 8 \Bigl(-1 + \zeta\Bigr) \Bigl(-3 -2 (-1 + \zeta) \log^3 \upsilon\Bigr) \log^3 \upsilon}.
\end{align*}
The eigenvalues of $\overset{\maltese}{B}$ are
\begin{align*}
G_-(\zeta, \upsilon) &:= 1 - \upsilon^2 + \log \upsilon - \frac13 \zeta \log^3 \upsilon + \frac{3 \upsilon^2 (-1 + \upsilon^2)^2}{3 (-1 + \upsilon^4) - 4 (-1 + \zeta) \log^3 \upsilon}\\
&= \Bigl(3 (-1 + \upsilon^4) - 4 (-1 + \zeta) \log^3 \upsilon\Bigr)^{-1} h_-(\zeta, \upsilon),\\
G_+(\zeta, \upsilon) &:= 1 + \upsilon^2 + \log \upsilon - \frac13 \zeta \log^3 \upsilon - \frac{3 \upsilon^2 (1 + \upsilon^2)^2}{3 (1 + \upsilon^4) + 4 (-1 + \zeta) \log^3 \upsilon}\\
&= \Bigl(3 (1 + \upsilon^4) + 4 (-1 + \zeta) \log^3 \upsilon\Bigr)^{-1} h_+(\zeta, \upsilon),
\end{align*}
with
\begin{align*}
h_-(\zeta, \upsilon) &:= \left( \frac43 \log^6 \upsilon \right) \zeta^2 + \left( -\frac43 \log^6 \upsilon - 4 \log^4 \upsilon + (-3 + 4 \upsilon^2 - \upsilon^4) \log^3 \upsilon \right) \zeta\\
&\hspace{.5cm} - 3 (1 - \upsilon^2)^2 + 3 (-1 + \upsilon^4) \log \upsilon + 4 (1 - \upsilon^2) \log^3 \upsilon + 4 \log^4 \upsilon,\\
h_+(\zeta, \upsilon) &:= -\left( \frac43 \log^6 \upsilon\right) \zeta^2 + \left(\frac43 \log^6 \upsilon + 4 \log^4 \upsilon + (3 + 4 \upsilon^2 - \upsilon^4) \log^3 \upsilon \right) \zeta\\
&\hspace{.5cm} + 3 (1 - \upsilon^4) + 3 (1 + \upsilon^4) \log \upsilon - 4 (1 + \upsilon^2) \log^3 \upsilon - 4 \log^4 \upsilon .
\end{align*}
Note that $G_- < G_+$, since for $\upsilon \in (0, 1), \zeta \in [0, 1]$,
\[
3(1 + \upsilon^4) + 4 (-1 + \zeta) \log^3 \upsilon > 0, \qquad 3(-1 + \upsilon^4) - 4 (-1 + \zeta) \log^3 \upsilon < 0,
\]
and thus
\[
G_+(\zeta, \upsilon) - G_-(\zeta, \upsilon) = \frac{-2 \upsilon^2 \Bigl(3 (-1+\upsilon^2)-4(-1+\zeta) \log^3\upsilon\Bigr)^2}{\Bigl(3(1 + \upsilon^4) + 4 (-1 + \zeta) \log^3 \upsilon\Bigr)\Bigl(3(-1 + \upsilon^4) - 4 (-1 + \zeta) \log^3 \upsilon\Bigr)} > 0.
\]
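These sign claims are easy to confirm numerically. The following sketch (the function names are ours, chosen for illustration) checks the two denominator signs and the ordering $G_- < G_+$ on a grid in $(\upsilon, \zeta)$:

```python
import math

def G_minus(z, u):
    # G_-(zeta, upsilon) as defined above; valid for 0 < u < 1, 0 <= z <= 1
    l = math.log(u)
    return (1 - u**2 + l - z * l**3 / 3
            + 3 * u**2 * (-1 + u**2)**2
            / (3 * (-1 + u**4) - 4 * (-1 + z) * l**3))

def G_plus(z, u):
    # G_+(zeta, upsilon)
    l = math.log(u)
    return (1 + u**2 + l - z * l**3 / 3
            - 3 * u**2 * (1 + u**2)**2
            / (3 * (1 + u**4) + 4 * (-1 + z) * l**3))

# The first denominator is positive, the second negative, and G_- < G_+
# on the whole parameter range (0, 1) x [0, 1].
for i in range(1, 20):
    for j in range(21):
        u, z = i / 20, j / 20
        l3 = math.log(u)**3
        assert 3 * (1 + u**4) + 4 * (-1 + z) * l3 > 0
        assert 3 * (-1 + u**4) - 4 * (-1 + z) * l3 < 0
        assert G_minus(z, u) < G_plus(z, u)
```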
We have now the following equivalences:
\begin{align*}
\forall x\in\vz{R}^4, B_1\left(x, 0, d_{uv}, d_{v0}, L\right) \geq 0 &\Longleftrightarrow \forall x\in\vz{R}^4, \tilde B_1\left(x, \zeta, \upsilon\right) \geq 0\\
&\Longleftrightarrow \overset{\maltese}{B} \left(\zeta, \upsilon\right) \geq 0\\
&\Longleftrightarrow G_-\left(\zeta, \upsilon\right) \geq 0 .
\end{align*}
We prove the following characterisation of the sign of $G_-$:
\begin{equation}
G_-(\zeta, \upsilon) \geq 0 \Longleftrightarrow \zeta \geq \zeta_1(\upsilon), \label{eq:F1neg}
\end{equation}
where
\begin{align}
\zeta_1(\upsilon) &= (8 \log^3 \upsilon)^{-1} \Biggl(9 - 12 \upsilon^2 + 3 \upsilon^4 + (4 \log \upsilon) (3 + \log^2 \upsilon)\Bigr.\nonumber\\
&\hspace{2.3cm}\left.+ \Bigl\{ 225 - 504 \upsilon^2 + 342 \upsilon^4 - 72 \upsilon^6 + 9 \upsilon^8 + (360 - 288 \upsilon^2 - 72 \upsilon^4) \log \upsilon \right.\nonumber\\
&\hspace{2.8cm}\Bigl. + 144 \log^2 \upsilon + (-120 + 96 \upsilon^2 + 24 \upsilon^4) \log^3 \upsilon - 96 \log^4 \upsilon + 16 \log^6 \upsilon \Bigr\}^{\frac12} \Biggr).\label{eq:brel1}
\end{align}
The details of this calculation can be found in Appendix~\ref{app:details}. This concludes the proof.
\end{proof}
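The characterisation (\ref{eq:F1neg}) can be sanity-checked numerically: $\zeta_1$ is precisely the zero crossing of $G_-$ in the $\zeta$ direction. A minimal sketch (function names ours):

```python
import math

def zeta1(u):
    # zeta_1(upsilon) from (eq:brel1); valid for 0 < u < 1
    l = math.log(u)
    rad = (225 - 504 * u**2 + 342 * u**4 - 72 * u**6 + 9 * u**8
           + (360 - 288 * u**2 - 72 * u**4) * l + 144 * l**2
           + (-120 + 96 * u**2 + 24 * u**4) * l**3 - 96 * l**4 + 16 * l**6)
    return (9 - 12 * u**2 + 3 * u**4 + 4 * l * (3 + l**2)
            + math.sqrt(rad)) / (8 * l**3)

def G_minus(z, u):
    # G_-(zeta, upsilon) from the proof above
    l = math.log(u)
    return (1 - u**2 + l - z * l**3 / 3
            + 3 * u**2 * (-1 + u**2)**2
            / (3 * (-1 + u**4) - 4 * (-1 + z) * l**3))

# G_- vanishes at zeta = zeta_1(u) and changes sign there, as in (eq:F1neg).
for k in range(1, 20):
    u = k / 20
    z1 = zeta1(u)
    assert abs(G_minus(z1, u)) < 1e-9
    assert G_minus(z1 + 0.05, u) > 0 > G_minus(z1 - 0.05, u)
```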
The function $g_1$ mentioned in Theorem~\ref{th:stab-bilayer} in the introduction is related to $\zeta_1$, given in (\ref{eq:brel1}), by
\begin{equation}
\label{def:g_1}
g_1(\ell) := \zeta_1\bigl(e^{-2\pi/\ell}\bigr).
\end{equation}
\begin{figure}[ht]
\hspace{0.1\textwidth}
\subfloat[G1][The sign of $G_-$]
{
\psfrag{a}{$\upsilon$}
\psfrag{b}{$\zeta$}
\psfrag{c}{$G_- > 0$}
\psfrag{d}{\color{white} $G_- < 0$}
\includegraphics[width=0.35\textwidth]{bilayer_G1_500plotpoints_cd}\\
}
\hspace{0.1\textwidth}
\subfloat[G2][The sign of $G_+$]
{
\psfrag{a}{$\upsilon$}
\psfrag{b}{$\zeta$}
\psfrag{c}{$G_+ > 0$}
\psfrag{d}{\color{white} $G_+ < 0$}
\includegraphics[width=0.35\textwidth]{G2_cd}\\
}
\caption{The sign in parameter space of the eigenvalues $G_-<G_+$. The boundary between the two regions in the left-hand figure is given by $\zeta=\zeta_1(\upsilon)$.}
\label{fig:bilayerev}
\end{figure}
\begin{remark}
The four eigenvalues of $\tilde B_1$ from the proof of Lemma~\ref{lem:B1negeig} are
\begin{align*}
&\frac1{72} \Biggl( 45 - 36 \upsilon^2 - 9 \upsilon^4 + 36 \log\upsilon - 12 \log^3\upsilon \\
&\hspace{0.6cm}\pm \biggl\{ \Bigl(-45 + 36 \upsilon^2 + 9 \upsilon^4 - 36 \log\upsilon + 12 \log^3\upsilon\Bigr)^2\\
&\hspace{1.3cm}- 144 \Bigl(9 - 18 \upsilon^2 + 9 \upsilon^4 + 9 \log\upsilon - 9 \upsilon^4 \log\upsilon - 12 \log^3\upsilon + 12 \upsilon^2 \log^3\upsilon + 9 \zeta \log^3 \upsilon\\
&\hspace{2.5cm}- 12 \upsilon^2 \zeta \log^3\upsilon + 3 \upsilon^4 \zeta \log^3\upsilon - 12 \log^4\upsilon + 12 \zeta \log^4\upsilon + 4 \zeta \log^6\upsilon - 4 \zeta^2 \log^6\upsilon\Bigr)\biggr\}^{\frac12}\Biggr),
\end{align*}
and
\begin{align*}
&\frac1{72} \Biggl( 45 + 36 \upsilon^2 + 9 \upsilon^4 + 36 \log\upsilon - 12 \log^3\upsilon \\
&\hspace{0.6cm}\pm \biggl\{ \Bigl(45 + 36 \upsilon^2 + 9 \upsilon^4 + 36 \log\upsilon - 12 \log^3\upsilon\Bigr)^2\\
&\hspace{1.3cm}+ 144 \Bigl(- 9 + 9 \upsilon^4 - 9 \log\upsilon - 9 \upsilon^4 \log\upsilon + 12 \log^3\upsilon + 12 \upsilon^2 \log^3\upsilon - 9 \zeta \log^3\upsilon \\
&\hspace{2.5cm}- 12 \upsilon^2 \zeta \log^3\upsilon + 3 \upsilon^4 \zeta \log^3 \upsilon + 12 \log^4\upsilon - 12 \zeta \log^4\upsilon - 4 \zeta \log^6\upsilon + 4 \zeta^2 \log^6\upsilon\Bigr)\biggr\}^{\frac12}\Biggr).
\end{align*}
Plotting the areas where these eigenvalues are negative shows that the eigenvalues with the plus sign chosen for $\pm$ are positive everywhere for $\upsilon \in (0, 1)$ and $\zeta \in [0, 1]$. The plots for the other two eigenvalues correspond with those in Figure~\ref{fig:bilayerev}.
\end{remark}
\medskip
Collecting Lemmas~\ref{lemma:B0_positive} and~\ref{lem:B1negeig} we can summarise the stability properties with the use of Corollary~\ref{cor:bilayerstability} as follows:
\begin{theorem}\label{thm:stabilitybilayer}
Let $\zeta$, $\upsilon$, and $\zeta_1$ be as in Lemma~\ref{lem:B1negeig}. Define the functions $\underline{\zeta}_j$, $j\geq 1$, and $\tilde \zeta$ by
\[
\underline{\zeta}_j(\upsilon) :=\zeta_1(\upsilon^j), \quad \tilde \zeta(\upsilon) := \sup_{j \geq 1} \underline{\zeta}_j(\upsilon).
\]
Then the VUV bilayer of optimal width~\pref{def:bilayer-optimal} is stable with respect to all (mass-conserving) perturbations in $\mathcal{P}_b^M$ if and only if
\[
\zeta \geq \tilde \zeta(\upsilon).
\]
\end{theorem}
This is Theorem~\ref{th:stab-bilayer} from the introduction. Its implications are illustrated in Figure~\ref{fig:signs-intro-bi}.
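For plotting and for concrete parameter values, the supremum defining $\tilde\zeta$ can be approximated by truncating at a finite Fourier order, as is done for the figures in the discussion below. A sketch (the function names and the truncation order are ours):

```python
import math

def zeta1(u):
    # zeta_1(upsilon) from (eq:brel1)
    l = math.log(u)
    rad = (225 - 504 * u**2 + 342 * u**4 - 72 * u**6 + 9 * u**8
           + (360 - 288 * u**2 - 72 * u**4) * l + 144 * l**2
           + (-120 + 96 * u**2 + 24 * u**4) * l**3 - 96 * l**4 + 16 * l**6)
    return (9 - 12 * u**2 + 3 * u**4 + 4 * l * (3 + l**2)
            + math.sqrt(rad)) / (8 * l**3)

def zeta_tilde(u, jmax=100):
    # Truncation of sup_{j >= 1} zeta_1(u^j); since u^j -> 0 and
    # zeta_1 -> 0 near 0, a finite jmax suffices in practice.
    return max(zeta1(u**j) for j in range(1, jmax + 1))

# The truncated supremum dominates the j = 1 term and stays below 1.
for k in range(1, 20):
    u = k / 20
    assert zeta1(u) <= zeta_tilde(u) < 1
```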
\begin{remark}\label{rem:whichperturbationsbilayer}
Note that the statement in Theorem~\ref{thm:stabilitybilayer} about the positivity of the second variation also holds true if we allow the perturbations to come from the larger set of perturbations $\mathcal{P}_b$, instead of $\mathcal{P}_b^M$. However, as stated in Remark~\ref{rem:differentmassconstraints}, the bilayer of optimal width is not stationary under perturbations that do not preserve mass.
\end{remark}
We next show that $\tilde \zeta$ is bounded from above away from $1$. Therefore there is a threshold $\alpha$ (as mentioned in the introduction) such that the bilayer is stable if $\zeta \geq \alpha$.
\begin{lemma}\label{lem:brel1boundedawayfrom1}
Let $\tilde \zeta$ be as in Theorem~\ref{thm:stabilitybilayer}, then there exists $\alpha\in(0,1)$ such that for all $\upsilon \in (0, 1)$,
\[
\tilde \zeta(\upsilon) < \alpha < 1.
\]
\end{lemma}
\begin{proof}
First note that, by the definition of $\tilde \zeta$, it suffices to show that there exists a $c \in (0,1)$ such that for all $\upsilon \in (0, 1)$,
\[
\zeta_1(\upsilon) < c < 1.
\]
Since $\zeta_1$ is continuous on the interval $(0, 1)$ and goes to zero for $\upsilon \downarrow 0$ and to $\frac52 - \frac12\sqrt{\frac{69}5}$ for $\upsilon \uparrow 1$ (see Remark~\ref{rem:limitsbrel}), this is equivalent to
\[
(8 \log^3\upsilon) (\zeta_1(\upsilon) - 1) > 0.
\]
By (\ref{eq:lotsofinfoinhere}) we know that
\begin{align*}
0 &< \Biggl(\Bigl(9 - 12 \upsilon^2 + 3 \upsilon^4 + (4 \log \upsilon) (3 + \log^2 \upsilon)\Bigr) - 8 \log^3 \upsilon\Biggr)^2\\
&< 225 - 504 \upsilon^2 + 342 \upsilon^4 - 72 \upsilon^6 + 9 \upsilon^8 + (360 - 288 \upsilon^2 - 72 \upsilon^4) \log \upsilon \\
&\hspace{0.5cm}+ 144 \log^2 \upsilon + (-120 + 96 \upsilon^2 + 24 \upsilon^4) \log^3 \upsilon - 96 \log^4 \upsilon + 16 \log^6 \upsilon.
\end{align*}
Taking square roots completes the proof.
\end{proof}
\begin{remark}\label{rem:stableunstablemodesbilayer}
To find out the stable and unstable first-order Fourier modes of deformation for the bilayer, we compute the eigenvectors belonging to the positive and (potentially) negative eigenvalues of $\tilde B_1$ from (\ref{eq:tildeB1}). For the stable directions we find
\begin{align*}
\mathfrak{a}_1^{s_1}(\zeta, \upsilon) &:= \left(\frac1{12 \upsilon( 1 + \upsilon^2)} \left(f_1(\zeta, \upsilon) - \sqrt{f_4(\zeta, \upsilon)}\right), 1, \frac2{\upsilon} \frac{f_2(\zeta, \upsilon) - \sqrt{f_4(\zeta, \upsilon)}}{f_3(\zeta, \upsilon) + \sqrt{f_4(\zeta, \upsilon)}}, 1 \right),\\
\mathfrak{a}_1^{s_2}(\zeta, \upsilon) &:= \left(\frac1{12 \upsilon(-1 + \upsilon^2)} \left(g_1(\zeta, \upsilon) - \sqrt{g_4(\zeta, \upsilon)}\right), -1, \frac2{\upsilon} \frac{g_2(\zeta, \upsilon) + \sqrt{g_4(\zeta, \upsilon)}}{g_3(\zeta, \upsilon) - \sqrt{g_4(\zeta, \upsilon)}}, 1 \right),
\end{align*}
where
\begin{align*}
f_1(\zeta, \upsilon) &:= -9 - 12 \upsilon^2 + 3 \upsilon^4 - 12 \log\upsilon + (-4 + 8 \zeta) \log^3\upsilon\\
f_2(\zeta, \upsilon) &:= -9 - 3 \upsilon^2 (6 + \upsilon^2) - 12 \log\upsilon + (-4+8\zeta) \log^3\upsilon\\
f_3(\zeta, \upsilon) &:= 15+3\upsilon^2 (4+\upsilon^2) - 12 \log\upsilon + (-4+8 \zeta) \log^3\upsilon\\
f_4(\zeta, \upsilon) &:= 9 \left( 9 + 40 \upsilon^2 + 42 \upsilon^4 + 8 \upsilon^6 + \upsilon^8\right)\\ &\hspace{0.4cm} + 8 \log\upsilon \Bigl( -3 + (-1+2 \zeta) \log^2\upsilon\Bigr) \Bigl(3 (-3 - 4\upsilon^2 + \upsilon^4) - 6 \log\upsilon + (-2+4 \zeta) \log^3\upsilon\Bigr),\\
g_1(\zeta, \upsilon) &:= -9 + 12 \upsilon^2 - 3 \upsilon^4 - 12 \log\upsilon + (-4 + 8 \zeta) \log^3\upsilon\\
g_2(\zeta, \upsilon) &:= 9 - 3 \upsilon^2 (2 + \upsilon^2) + 12 \log\upsilon + (4-8\zeta) \log^3\upsilon\\
g_3(\zeta, \upsilon) &:= -15+3\upsilon^2 (4+\upsilon^2) + 12 \log\upsilon + (4-8 \zeta) \log^3\upsilon\\
g_4(\zeta, \upsilon) &:= 9 (-1+\upsilon^2)^2 (1+\upsilon^2) (9+\upsilon^2)\\&\hspace{0.4cm} + 8 \log\upsilon \Bigl(-3 + (-1+2\zeta) \log^2\upsilon\Bigr) \Bigl(-3 (3-4\upsilon^2+\upsilon^4) - 6\log\upsilon + (-2+4\zeta)\log^3\upsilon\Bigr).
\end{align*}
The directions belonging to the eigenvalues that can become negative, corresponding to the eigenvalues $G_+$ and $G_-$ of the reduced matrix $\overset{\maltese}{B}$ in the proof of Lemma~\ref{lem:B1negeig}, are
\begin{align*}
\mathfrak{a}_1^{u_1}(\zeta, \upsilon) &:= \left(\frac1{12 \upsilon( 1 + \upsilon^2)} \left(f_1(\zeta, \upsilon) + \sqrt{f_4(\zeta, \upsilon)}\right), 1, \frac2{\upsilon} \frac{f_2(\zeta, \upsilon) + \sqrt{f_4(\zeta, \upsilon)}}{f_3(\zeta, \upsilon) - \sqrt{f_4(\zeta, \upsilon)}}, 1 \right),\\
\mathfrak{a}_1^{u_2}(\zeta, \upsilon) &:= \left(\frac1{12 \upsilon(-1 + \upsilon^2)} \left(g_1(\zeta, \upsilon) + \sqrt{g_4(\zeta, \upsilon)}\right), -1, \frac2{\upsilon} \frac{g_2(\zeta, \upsilon) + \sqrt{g_4(\zeta, \upsilon)}}{g_3(\zeta, \upsilon) + \sqrt{g_4(\zeta, \upsilon)}}, 1 \right),
\end{align*}
respectively.
The direction of the perturbation $\mathfrak{a}_1^{u_1}$ is depicted in Figure~\ref{fig:biunstab1}. Here we have chosen the values $d_{uv}=0.7, d_{v0}=0.3, L=5$ and $\epsilon=0.25$. Similarly we get Figures~\ref{fig:biunstab3}, \ref{fig:bistab2}, and~\ref{fig:bistab4} using perturbations $\mathfrak{a}_1^{u_2}$, $\mathfrak{a}_1^{s_1}$, and $\mathfrak{a}_1^{s_2}$ respectively.
\end{remark}
\subsection{Stability of the monolayer}
\label{subsec:monolayerstability}
We now redo the arguments for the monolayer of optimal width~\pref{def:monolayer-optimal}.
Throughout this subsection we use the notation of Section~\ref{subsec:varmonolay}.
We can simplify $M_1$ a bit by writing
\[
\nu := e^{-{2 \pi \aleph}/L}, \qquad
\varrho := \frac{d_{u0}}{d_{u0} + d_{uv} + d_{v0}}
=\frac{c_u + c_0}{2 (c_0 + c_u + c_v)}, \qquad
\varsigma := \frac{d_{v0}}{d_{u0} + d_{uv} + d_{v0}}
=\frac{c_v + c_0}{2 (c_0 + c_u + c_v)}.
\]
Note that the definition of $\nu$ differs slightly from the one used for the bilayer~\pref{eq:upslamb}.
Then, for all $x\in\vz{R}^3$,
\[
M_1\left(x, 0, d_{u0}, d_{uv}, d_{v0}, L\right)
= \frac L{\pi}
\tilde M_1\left(x, \varrho, \varsigma, \nu\right),
\]
where
\begin{align*}
\tilde M_1\left(x, \varrho, \varsigma, \nu\right) &:= -\frac23 \log^3\nu \left(\varrho x_1^2 + (1 - \varrho - \varsigma) x_2^2 + \varsigma x_3^2\right)\\
&\hspace{0.5cm} + 2 (1 + \log \nu) x_2^2 + \frac12 \left(x_1^2 + x_3^2 \right)\\
&\hspace{0.5cm} - 2\left(x_1 x_2 + x_2 x_3 \right) \nu + x_1 x_3 \nu^2.
\end{align*}
We can now write
\[
\tilde M_1\left(x, \varrho, \varsigma, \nu\right)
= x^T \hat M (\varrho, \varsigma, \nu) \,x,
\]
with
\[
\hat M (\varrho, \varsigma, \nu) := \left(\begin{array}{ccc} -\frac23 \varrho \log^3\nu + \frac12 & -\nu & \frac12 \nu^2\\ -\nu & -\frac23 (1 - \varrho - \varsigma) \log^3 \nu + 2 (1 + \log \nu) & -\nu\\ \frac12 \nu^2 & -\nu & -\frac23 \varsigma \log^3 \nu + \frac12 \end{array} \right).
\]
This matrix is well defined for all $\varrho, \varsigma \in \vz{R}$, $\nu > 0$, but note that the nonnegativity of the parameters $c_0$, $c_u$, and $c_v$---or equivalently conditions (\ref{eq:ddemands})---translates into
\begin{equation}\label{eq:conditionsonchikappa}
0 \leq \varrho \leq \frac12, \hspace{2cm} 0 \leq \varsigma \leq \frac12, \hspace{2cm} \varrho + \varsigma \geq \frac12,
\end{equation}
and furthermore $\nu \in (0, 1)$ by definition.
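As a quick numerical check, $\hat M$ indeed reproduces the quadratic form $\tilde M_1$; a sketch (function names ours):

```python
import math, random

def M_hat(rho, sigma, nu):
    # The matrix \hat M(varrho, varsigma, nu) defined above
    l = math.log(nu)
    return [[-2/3 * rho * l**3 + 0.5, -nu, 0.5 * nu**2],
            [-nu, -2/3 * (1 - rho - sigma) * l**3 + 2 * (1 + l), -nu],
            [0.5 * nu**2, -nu, -2/3 * sigma * l**3 + 0.5]]

def M1_tilde(x, rho, sigma, nu):
    # The quadratic form x^T \hat M x written out term by term
    l = math.log(nu)
    return (-2/3 * l**3 * (rho * x[0]**2 + (1 - rho - sigma) * x[1]**2
                           + sigma * x[2]**2)
            + 2 * (1 + l) * x[1]**2 + 0.5 * (x[0]**2 + x[2]**2)
            - 2 * (x[0] * x[1] + x[1] * x[2]) * nu + x[0] * x[2] * nu**2)

def quad_form(A, x):
    return sum(A[i][j] * x[i] * x[j] for i in range(3) for j in range(3))

random.seed(0)
for _ in range(100):
    x = [random.uniform(-1, 1) for _ in range(3)]
    assert abs(quad_form(M_hat(0.3, 0.35, 0.5), x)
               - M1_tilde(x, 0.3, 0.35, 0.5)) < 1e-12
```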
\begin{remark}
As explained in the introduction and Appendix~\ref{sec:cu=cv}, we assume throughout the paper that for the monolayer the interfaces U-0 and V-0 are penalised equally strongly, i.e. $d_{u0} = d_{v0}$ or equivalently $c_u=c_v$. Under this assumption $\varrho=\varsigma$, and the inequalities in (\ref{eq:conditionsonchikappa}) imply that $\varrho$ and $\varsigma$ take values in $[\tfrac14,\tfrac12]$.
\end{remark}
\begin{lemma}\label{lem:M1negeig}
Let $c_u = c_v$. Two of the three eigenvalues of $\hat M (\varsigma, \varsigma, \nu)$ are nonnegative for all $\nu \in (0, 1)$ and $\varsigma \in [\frac14, \frac12]$. The third eigenvalue is given by
\[
E_3(\varsigma, \nu) := \frac1{12} \left( e_1(\varsigma, \nu) - \sqrt{e_2(\varsigma, \nu)}\right),
\]
where $\nu \in (0, 1), \varsigma \in [\frac14, \frac12]$ and $e_1$ and $e_2$ are given in (\ref{eq:e1}) and (\ref{eq:e2}).
The sign of $E_3$ is characterised by
\begin{equation}\label{eq:signofE2conditions}
E_3(\varsigma, \nu) \geq 0 \Longleftrightarrow \varsigma \leq \varsigma_2(\nu)\end{equation}
with $\varsigma_2$ as given in (\ref{eq:mrelv2}).
\end{lemma}
\begin{proof}
Since we are interested in the case where $c_u = c_v$ we will take $\varrho = \varsigma$ from here on, which turns the conditions (\ref{eq:conditionsonchikappa}) into $\frac14 \leq \varsigma \leq \frac12$.
For the three eigenvalues of $\hat M(\varsigma, \varsigma, \nu)$ we compute
\begin{align*}
E_1(\varsigma, \nu) &:= \frac16 (3 - 3 \nu^2 - 4 \varsigma \log^3\nu),\\
E_{2, 3}(\varsigma, \nu) &:= \frac1{12} \left( e_1(\varsigma, \nu) \pm \sqrt{e_2(\varsigma, \nu)}\right),
\end{align*}
where
\begin{align}
e_1(\varsigma, \nu) &:= 15 + 3 \nu^2 + (12 - 4 \log^2 \nu + 4 \varsigma \log^2 \nu) \log \nu, \label{eq:e1}\\
e_2(\varsigma, \nu) &:= 81 + 234 \nu^2 + 9 \nu^4 + 216 \log \nu - 72 \nu^2 \log \nu + 144 \log^2 \nu\nonumber\\
&\hspace{0.48cm} - 72 \log^3 \nu + 24 \nu^2 \log^3 \nu - 96 \log^4 \nu + 16 \log^6 \nu\nonumber\\
&\hspace{0.48cm} + \left(216 \log^3 \nu - 72 \nu^2 \log^3 \nu + 288 \log^4 \nu - 96 \log^6 \nu \right) \varsigma\nonumber\\
&\hspace{0.48cm} + \left(144 \log^6 \nu\right) \varsigma^2\label{eq:e2}
\end{align}
and we choose the plus sign for $E_2$ and the minus sign for $E_3$.
First note that $\nu \in(0,1)$ and $\varsigma\geq0$ imply that $E_1$ is always positive.
$E_{2,3}$ are real, since they are the eigenvalues of a symmetric matrix and thus $e_2(\varsigma, \nu) \geq 0$ for all $\varsigma \in \vz{R}$ and for all $\nu \in (0, 1)$.
Since for all $x>0$ and $\varsigma\leq 1/2$ we have $(1-\varsigma)x^3-3x\geq (1/2)x^3-3x\geq -2\sqrt{2}$,
\[
e_1(\varsigma, \nu) = 15 + 3 \nu^2 + 4 \bigl[(1-\varsigma)|\log \nu|^3 - 3|\log \nu|\bigr] \geq 15 - 8 \sqrt{2} > 0.
\]
\]
Combining this result with $e_2(\varsigma, \nu) \geq 0$, we conclude that $E_2(\varsigma, \nu) > 0$ for all admissible $\varsigma, \nu$. Thus,
the only eigenvalue that might be negative (in all or part of parameter space) is $E_3$.
To prove the statements in (\ref{eq:signofE2conditions}) we compute
\begin{align*}
\frac1{16} \Bigl(e_1^2(\varsigma, \nu) -e_2(\varsigma, \nu)\Bigr) &= 9 (1-\nu^2) + 9 (1+\nu^2) \log\nu - 3(1+\nu^2)\log^3\nu\\
&\hspace{0.4cm} + \Bigl(6 (-1+\nu^2) \log^3\nu + 4 (-3+\log^2\nu)\log^4\nu \Bigr)\varsigma\\
&\hspace{0.4cm} -\left(8 \log^6\nu\right) \varsigma^2.
\end{align*}
This expression is negative on $(0, 1)$ if and only if $\varsigma \in \left[\frac14, \varsigma_1(\nu)\right) \cup \left(\varsigma_2(\nu), \frac12\right]$ and zero if and only if $\varsigma = \varsigma_1(\nu)$ or $\varsigma = \varsigma_2(\nu)$, where
\begin{equation}\label{eq:mrelv2}
\varsigma_{1,2}(\nu) := \frac1{16 \log^6\nu} \left(f(\nu) \pm \sqrt{g(\nu)}\right),
\end{equation}
with
\begin{align*}
f(\nu) &:= \Bigl(6 (-1+\nu^2) + 4 (-3+\log^2\nu) \log\nu\Bigr)\log^3\nu;\\
g(\nu) &:= 96 \log^6\nu \Bigl( 3 (1-\nu^2) + 3 (1+\nu^2) \log\nu - (1+\nu^2) \log^3\nu \Bigr)\\
&\hspace{0.4cm}+ \Bigl( 6 (-1+\nu^2) \log^3\nu + 4 (\log^2\nu-3)\log^4\nu \Bigr)^2.
\end{align*}
The minus sign is chosen in $\varsigma_1$ while in $\varsigma_2$ we choose the plus sign. Plots of $\varsigma_1$ and $\varsigma_2$ are shown in Figure~\ref{fig:mrelv12}.
It remains to prove that $\varsigma_1(\nu) < 1/4$ for all $\nu\in(0, 1)$. We will actually prove the stronger statement $\varsigma_1(\nu) < 0$, which follows from
\begin{alignat*}4
&&&g(\nu) > 0 & \qquad &\text{for $0<\nu<1$},\\
&\Longleftarrow&\qquad &f(\nu)^2 - g(\nu) < 0 & \qquad &\text{for $0<\nu<1$},\\
&\Longleftrightarrow&\qquad
&3(1-\nu^2)+3(1+\nu^2)\log \nu - (1+\nu^2)\log^3 \nu > 0 &&\text{for $0<\nu<1$},\\
&\Longleftrightarrow&\qquad
&3\frac{1-\nu^2}{1+\nu^2}+3\log \nu - \log^3 \nu > 0 &&\text{for $0<\nu<1$},\\
&\hspace{-0.315cm}\stackrel{w = -\log \nu}\Longleftrightarrow&\qquad
&3\tanh w-3w + w^3 > 0 &&\text{for $w>0$}.
\end{alignat*}
To prove that this last inequality holds, we define $h(w) := 3\tanh w-3w + w^3$ and use $\tanh'w=1-\tanh^2w$ to compute $h'''(w) = 6 \tanh^2 w \, (4 - 3 \tanh^2 w) > 0$ for $w > 0$. Since $h(0) = h'(0) = h''(0) = 0$, integrating three times gives $h(w)>0$ for all $w>0$.
\end{proof}
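The formulas for $E_1$ and $E_{2,3}$ can be cross-checked against a direct diagonalisation of $\hat M(\varsigma, \varsigma, \nu)$, which splits into the antisymmetric direction $(-1, 0, 1)$ and a $2\times2$ block on the symmetric subspace $\operatorname{span}\{(1,0,1), (0,1,0)\}$. A sketch (function names ours):

```python
import math

def M_hat(sigma, nu):
    # \hat M(varsigma, varsigma, nu) in the symmetric case c_u = c_v
    l = math.log(nu)
    return [[-2/3 * sigma * l**3 + 0.5, -nu, 0.5 * nu**2],
            [-nu, -2/3 * (1 - 2*sigma) * l**3 + 2 * (1 + l), -nu],
            [0.5 * nu**2, -nu, -2/3 * sigma * l**3 + 0.5]]

def E123(sigma, nu):
    # E_1 and E_{2,3} as computed in the proof, using (eq:e1) and (eq:e2)
    l = math.log(nu)
    e1 = 15 + 3 * nu**2 + (12 - 4 * l**2 + 4 * sigma * l**2) * l
    e2 = (81 + 234 * nu**2 + 9 * nu**4 + 216 * l - 72 * nu**2 * l + 144 * l**2
          - 72 * l**3 + 24 * nu**2 * l**3 - 96 * l**4 + 16 * l**6
          + (216 * l**3 - 72 * nu**2 * l**3 + 288 * l**4 - 96 * l**6) * sigma
          + 144 * l**6 * sigma**2)
    E1 = (3 - 3 * nu**2 - 4 * sigma * l**3) / 6
    return E1, (e1 + math.sqrt(e2)) / 12, (e1 - math.sqrt(e2)) / 12

def eig_direct(sigma, nu):
    # (-1, 0, 1) is an eigenvector; the rest is a 2x2 block on
    # span{(1, 0, 1), (0, 1, 0)}, diagonalised by hand.
    M = M_hat(sigma, nu)
    a11, a12, a22 = M[0][0] + M[0][2], math.sqrt(2) * M[0][1], M[1][1]
    mean, disc = (a11 + a22) / 2, math.hypot((a11 - a22) / 2, a12)
    return M[0][0] - M[0][2], mean + disc, mean - disc

for x, y in zip(E123(0.3, 0.5), eig_direct(0.3, 0.5)):
    assert abs(x - y) < 1e-10
```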
\begin{figure}[ht]
\hspace{0.1\textwidth}
\subfloat[mrelv1zoom][Plot of $\varsigma_1$ away from $\nu=1$]
{
\psfrag{a}{$\nu$}
\psfrag{b}{$\varsigma_1$}
\includegraphics[width=0.35\textwidth]{mrelv1_zoom_cd}\\
}
\hspace{0.1\textwidth}
\subfloat[mrelv2][Plot of $\varsigma_2$, with the admissible range $\varsigma_2 \in \lbrack\frac14, \frac12 \rbrack$ indicated]
{
\psfrag{a}{$\nu$}
\psfrag{b}{$\varsigma_2$}
\includegraphics[width=0.35\textwidth]{monolaag_chi2_cd}\label{fig:mrelv2}\\
}
\caption{}\label{fig:mrelv12}
\end{figure}
\begin{remark}
For the excluded endpoints $0$ and $1$ we find
\begin{align*}
\underset{\nu \downarrow 0}{\lim}\varsigma_1 &= 0, \qquad \underset{\nu \uparrow 1}{\lim}\varsigma_1 = -\infty,\\
\underset{\nu \downarrow 0}{\lim}\varsigma_2 &= \frac12, \qquad \underset{\nu \uparrow 1}{\lim}\varsigma_2 = \frac15.
\end{align*}
The limits for $\nu \uparrow 1$ were found by calculating the first terms in the Taylor expansion of $\varsigma_{1,2}$.
\end{remark}
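The root property of $\varsigma_2$ and the negativity of $\varsigma_1$ can be verified numerically; a sketch (function names ours):

```python
import math

def sigma12(nu):
    # varsigma_{1,2}(nu) from (eq:mrelv2): minus sign first, plus sign second
    l = math.log(nu)
    f = (6 * (-1 + nu**2) + 4 * (-3 + l**2) * l) * l**3
    g = (96 * l**6 * (3 * (1 - nu**2) + 3 * (1 + nu**2) * l
                      - (1 + nu**2) * l**3)
         + (6 * (-1 + nu**2) * l**3 + 4 * (l**2 - 3) * l**4)**2)
    return (f - math.sqrt(g)) / (16 * l**6), (f + math.sqrt(g)) / (16 * l**6)

def E3(sigma, nu):
    # E_3(varsigma, nu) from Lemma lem:M1negeig
    l = math.log(nu)
    e1 = 15 + 3 * nu**2 + (12 - 4 * l**2 + 4 * sigma * l**2) * l
    e2 = (81 + 234 * nu**2 + 9 * nu**4 + 216 * l - 72 * nu**2 * l + 144 * l**2
          - 72 * l**3 + 24 * nu**2 * l**3 - 96 * l**4 + 16 * l**6
          + (216 * l**3 - 72 * nu**2 * l**3 + 288 * l**4 - 96 * l**6) * sigma
          + 144 * l**6 * sigma**2)
    return (e1 - math.sqrt(e2)) / 12

# varsigma_1 is negative (hence below the admissible range [1/4, 1/2]),
# and varsigma_2 is a zero of E_3, as the sign characterisation requires.
for k in range(1, 20):
    nu = k / 20
    s1, s2 = sigma12(nu)
    assert s1 < 0 < s2
    assert abs(E3(s2, nu)) < 1e-8
```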
Figure~\ref{fig:E2mrelv} shows the parts of parameter space where $E_3$ is positive and negative, both on the admissible domain $\left(\frac14, \frac12\right)$ for $\varsigma$ as well as extended to $(0, 1)$.
\begin{figure}[ht]
\hspace{0.1\textwidth}
\subfloat[E3][Sign of $E_3$]
{
\psfrag{a}{$\nu$}
\psfrag{b}{$\varsigma$}
\psfrag{c}{\scriptsize $E_3>0$}
\psfrag{d}{\color{white}$E_3<0$}
\includegraphics[width=0.35\textwidth]{E2contour02505_cd}\label{fig:E2mrelva}\\
}
\hspace{0.1\textwidth}
\subfloat[E3extended][Sign of $E_3$ extended to $(0, 1)$ for $\varsigma$]
{
\psfrag{a}{$\nu$}
\psfrag{b}{$\varsigma$}
\psfrag{c}{$E_3>0$}
\psfrag{d}{\color{white}$E_3<0$}
\includegraphics[width=0.35\textwidth]{E2contourb_cd}
\label{fig:E2contour_cd}\\
}
\caption{}\label{fig:E2mrelv}
\end{figure}
\begin{remark}
Expanding $E_3$ around $\nu=1$ gives
\[
E_3(\varsigma, \nu) = \frac4{45} (1 - 5 \varsigma) (1 - \nu)^5 + \mathcal{O}\left(\left(1 - \nu\right)^6\right),
\]
for $\nu \uparrow 1$. Since $1-5\varsigma \leq -\frac14$ for $\varsigma\in\left[\frac14, \frac12\right]$ we can conclude that for $\nu$ close to $1$ (or equivalently large $L$) the monolayer is unstable for all admissible values of the interfacial (surface tension) coefficients $d_{ij}$ (or $c_i$). This corresponds to what is shown in Figure~\ref{fig:E2mrelva}.
Taking into account the assumption $d_{v0}=d_{u0}$, the condition $1-5\varsigma < 0$ for negativity of $E_3$ is equivalent to $d_{uv} < \frac32 (d_{u0} + d_{v0})$. In \cite[Theorem~8]{vanGennipPeletier07a} we show that for a circular two-dimensional monolayer the term in the energy per unit mass $\mathcal{F}/M$ that is quadratic in the curvature is given by
\[
m \left( -\frac12 (d_{u0}+d_{v0}) + \frac4{15} m^3 \right) \kappa^2,
\]
where $m$ is the thickness of the layers and $\kappa$ is the curvature. Taking $m=\aleph$ we find that this term becomes negative exactly when $d_{uv} < \frac32 (d_{u0} + d_{v0})$, showing that the (large) circular monolayer loses stability at the same point as the flat monolayer on large domains. Note that conditions~(\ref{eq:ddemands}) imply $d_{uv} < \frac32 (d_{u0} + d_{v0})$.
\end{remark}
In order to compare the results for the monolayer to those for the bilayer, we introduce the relative U-V interface penalisation
\begin{equation}\label{eq:yeahgoaheadanddefinemreluv}
\mu := 1-\varrho-\varsigma = \frac{c_u + c_v}{2 (c_0 + c_u + c_v)},
\end{equation}
analogous to $\zeta$ for the bilayer in Lemma~\ref{lem:B1negeig}. Note that conditions (\ref{eq:ddemands}) give $\mu \in [0, \frac12]$.
In terms of the surface tension coefficients,
\[
\mu = \frac{d_{uv}}{d_{u0} + d_{uv} + d_{v0}},
\]
$\mu$ is interpreted as the relative penalisation of the U-V interface.
\begin{theorem}\label{thm:stabilitymonolayer}
Let $d_{u0} = d_{v0}$ and let $\varsigma_2$ be as in Lemma~\ref{lem:M1negeig}. Define the functions $\underline{\varsigma}_j$ and $\tilde \varsigma$ by
\[
\underline{\varsigma}_j(\nu) := \varsigma_2(\nu^j), \quad \tilde \varsigma := \underset{j \geq 1}{\inf}\, \underline{\varsigma}_j, \quad \tilde \mu := 1 - 2 \tilde \varsigma.
\]
The monolayer of optimal width~\pref{def:monolayer-optimal} is stable with respect to perturbations in $\mathcal{P}_m^M$ if and only if $\mu \geq \tilde \mu(\nu)$.
\end{theorem}
\begin{proof}
First we work with $\varsigma$ as in Lemma~\ref{lem:M1negeig} and afterwards we translate the results into conditions on $\mu$. By Definition~\ref{def:stability} and Theorem~\ref{thm:monolayersecond}, in order to prove stability we have to show that
\[
M_0\left(\mathfrak{a}_0, \aleph\right) + \sum_{j=1}^{\infty} M_j\left(\mathfrak{a}_j, \mathfrak{b}_j, d_{u0}, d_{uv}, d_{v0}, L\right) \geq 0,
\]
for all admissible perturbations.
By definition we have $M_0\left(\mathfrak{a}_0, \aleph\right) = \aleph \left(a_{1,0} - a_{3,0}\right)^2 \geq 0$. By Lemma~\ref{lem:M1negeig} we know that if $(\nu, \varsigma)$ is such that $\varsigma \in \left[\frac14, \varsigma_2(\nu)\right]$, then $M_1\left(\mathfrak{a}_1, 0, d_{u0}, d_{uv}, d_{v0}, L\right) \geq 0$ for all $\mathfrak{a}_1$. By Lemma~\ref{lem:oneforall} we have for $j \geq 1$,
\[
\forall \mathfrak{a}_j, \mathfrak{b}_j, \, M_j\left(\mathfrak{a}_j, \mathfrak{b}_j, d_{u0}, d_{uv}, d_{v0}, L\right) \geq 0 \Longleftrightarrow \forall \mathfrak{a}_1, \, M _1\left(\mathfrak{a}_1, 0, d_{u0}, d_{uv}, d_{v0}, L/j\right) \geq 0,
\]
thus we see that, if $\varsigma \in \left[\frac14, \tilde \varsigma(\nu)\right]$ is satisfied, then for all $j \geq 1$, for all $\mathfrak{a}_j$, and for all $\mathfrak{b}_j$, $M_j\left(\mathfrak{a}_j, \mathfrak{b}_j, d_{u0}, d_{uv}, d_{v0}, L\right) \geq 0$.
Now note that $\varsigma = \frac{1-\mu}2$ and thus
\[
\varsigma \in \left[\frac14, \tilde \varsigma(\nu)\right] \Longleftrightarrow \mu \in \left[1-2 \tilde \varsigma(\nu), \frac12\right],
\]
which proves the statement of the theorem.
\end{proof}
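As with the bilayer, $\tilde\varsigma$ (and hence $\tilde\mu$) can be approximated by truncating the infimum at a finite Fourier order; a sketch (the function names and the truncation order are ours):

```python
import math

def sigma2(nu):
    # varsigma_2(nu) from (eq:mrelv2), i.e. the root with the plus sign
    l = math.log(nu)
    f = (6 * (-1 + nu**2) + 4 * (-3 + l**2) * l) * l**3
    g = (96 * l**6 * (3 * (1 - nu**2) + 3 * (1 + nu**2) * l
                      - (1 + nu**2) * l**3)
         + (6 * (-1 + nu**2) * l**3 + 4 * (l**2 - 3) * l**4)**2)
    return (f + math.sqrt(g)) / (16 * l**6)

def mu_tilde(nu, jmax=100):
    # tilde mu = 1 - 2 inf_{j >= 1} varsigma_2(nu^j), truncated at jmax
    return 1 - 2 * min(sigma2(nu**j) for j in range(1, jmax + 1))

# Small nu (small L/aleph): threshold below 1/2, so stability is possible;
# nu close to 1 (large L/aleph): threshold above the admissible bound 1/2.
assert mu_tilde(0.1) < 0.5 < mu_tilde(0.9)
```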
To make the connection to Theorem~\ref{th:stab-monolayer} in the introduction, the function $f_1$ is defined by
\begin{equation}
\label{def:f1}
f_1(\ell) := 1-2\varsigma_2\left(e^{-2\pi/\ell}\right),
\end{equation}
where $\varsigma_2$ is given in~\pref{eq:mrelv2}.
Figure~\ref{fig:signs-intro-mono} illustrates the stability properties of the monolayer of optimal width from Theorem~\ref{thm:stabilitymonolayer}.
\begin{remark}\label{rem:whichperturbationsmonolayer}
In Theorem~\ref{thm:stabilitymonolayer} we only consider perturbations in $\mathcal{P}_m^M$, i.e. perturbations that keep the total mass fixed. The statement about the positivity of the second variation still holds if we consider the larger set of perturbations $\mathcal{P}_m$, however, for these perturbations the monolayer of optimal width is not a stationary point, as was noted after Lemma~\ref{lem:monolayerstationary}.
\end{remark}
\begin{remark}\label{rem:stableunstablemodesmonolayer}
To find the stable and unstable first-order Fourier modes of deformation we compute the eigenvectors of $\hat M(\varsigma, \varsigma, \nu)$ belonging to the positive eigenvalues and to the eigenvalue that is negative for some parameter choices. For the positive, stable directions we find
\begin{align*}
\mathfrak{a}_1^{s_1}(\varsigma, \nu) &:= \left(-1, 0, 1 \right),\\
\mathfrak{a}_1^{s_2}(\varsigma, \nu) &:= \left(1, \frac1{12 \nu} \left(h_1(\varsigma, \nu) - \sqrt{h_2(\varsigma, \nu)}\right), 1\right),
\end{align*}
where
\begin{align*}
h_1(\varsigma, \nu) &:= -9 + 3 \nu^2 - 12 \log\nu + (4-12 \varsigma) \log^3\nu,\\
h_2(\varsigma, \nu) &:= 9 \left( 9 + 26 \nu^2 + \nu^4 \right)\\
&\hspace{0.5cm}+ 8 \log \nu \left( 3+ (-1+3 \varsigma) \log^2\nu\right) \left( 9 -3 \nu^2 + 6 \log\nu + (-2+6 \varsigma) \log^3\nu\right).
\end{align*}
The direction belonging to the eigenvalue that can become negative, corresponding to the eigenvalue $E_3$ in Lemma~\ref{lem:M1negeig}, is
\begin{align*}
\mathfrak{a}_1^{u}(\varsigma, \nu) &:= \left(1, \frac1{12 \nu} \left(h_1(\varsigma, \nu) + \sqrt{h_2(\varsigma, \nu)}\right), 1\right).
\end{align*}
Figure~\ref{fig:monostab1} shows the monolayer with a perturbation corresponding to $\mathfrak{a}_1^{s_1}$. Here we have chosen the values $d_{u0}=1$, $d_{uv}=0.7$, $d_{v0}=0.3$, $L=5$ and $\epsilon=0.25$. Similarly we get Figure~\ref{fig:monostab3} using
$\mathfrak{a}_1^{s_2}$, and Figure~\ref{fig:monounstab2} using $\mathfrak{a}_1^{u}$.
\end{remark}
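A quick numerical check that the two symmetric modes $(1, (h_1 \mp \sqrt{h_2})/(12\nu), 1)$ are indeed eigenvectors of $\hat M(\varsigma, \varsigma, \nu)$; a sketch (function names ours):

```python
import math

def M_hat(sigma, nu):
    # \hat M(varsigma, varsigma, nu) in the symmetric case c_u = c_v
    l = math.log(nu)
    return [[-2/3 * sigma * l**3 + 0.5, -nu, 0.5 * nu**2],
            [-nu, -2/3 * (1 - 2*sigma) * l**3 + 2 * (1 + l), -nu],
            [0.5 * nu**2, -nu, -2/3 * sigma * l**3 + 0.5]]

def modes(sigma, nu):
    # The two symmetric candidate modes (1, (h_1 -/+ sqrt(h_2))/(12 nu), 1)
    l = math.log(nu)
    h1 = -9 + 3 * nu**2 - 12 * l + (4 - 12 * sigma) * l**3
    h2 = (9 * (9 + 26 * nu**2 + nu**4)
          + 8 * l * (3 + (-1 + 3 * sigma) * l**2)
              * (9 - 3 * nu**2 + 6 * l + (-2 + 6 * sigma) * l**3))
    r = math.sqrt(h2)
    return [1, (h1 - r) / (12 * nu), 1], [1, (h1 + r) / (12 * nu), 1]

def is_eigvec(A, x, tol=1e-10):
    # x is an eigenvector of A iff A x is parallel to x
    Ax = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
    lam = Ax[0] / x[0]
    return all(abs(Ax[i] - lam * x[i]) < tol for i in range(3))

A = M_hat(0.3, 0.5)
for x in modes(0.3, 0.5):
    assert is_eigvec(A, x)
```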
\subsection{Discussion and comparison}
In Sections~\ref{subsec:bilayerstability} and~\ref{subsec:monolayerstability} we found conditions for the stability of monolayers and bilayers with respect to some admissible perturbations. The main results are visualised in Figures~\ref{fig:tildemreluv} and~\ref{fig:tildemreluvsmallscale} for the monolayer and Figure~\ref{fig:tildebrelsmallscale} for the bilayer.
\begin{figure}[h]
\hspace{0.1\textwidth}
\subfloat[minmreluv][Plot of $\tilde \mu$ as a function of $L/\aleph=2\pi (\log\nu^{-1})^{-1}$]
{
\psfrag{a}{$L/\aleph$}
\psfrag{b}{$\tilde \mu$}
\includegraphics[width=0.35\textwidth]{minmreluv_cd}\\
}
\hspace{0.1\textwidth}
\subfloat[minmreluvrestricted][Plot of $\tilde \mu$ restricted to the admissible range $\tilde \mu \in \lbrack 0, \frac12 \rbrack$]
{
\psfrag{a}{$L/\aleph$}
\psfrag{b}{$\tilde \mu$}
\includegraphics[width=0.35\textwidth]{minmreluv_restricted_cd}
\label{fig:tildemreluvrestricted}
}
\caption{For the plots of $\tilde \mu$ from Theorem~\ref{thm:stabilitymonolayer} we have approximated $\tilde \varsigma$ by $\min_{1\leq j\leq 100} \underline{\varsigma}_j$.} \label{fig:tildemreluv}
\end{figure}
\begin{figure}[h]
\centering
{
\psfrag{a}{$L/\aleph$}
\psfrag{b}{$\tilde \mu$}
\includegraphics[width=0.45\textwidth]{minmreluv_oscil_cd}
}
\caption{Plot of $\tilde \mu$ from Theorem~\ref{thm:stabilitymonolayer} as a function of $L/\aleph=2\pi (\log\nu^{-1})^{-1}$, showing the small-scale oscillations where different Fourier orders become the dominant contributors.} \label{fig:tildemreluvsmallscale}
\end{figure}
\begin{figure}[h]
\hspace{0.1\textwidth}
\subfloat[tildebrelzoom][Plot of $\tilde \zeta$ as a function of $L/\beth=2\pi (\log\upsilon^{-1})^{-1}$]
{
\psfrag{a}{$L/\beth$}
\psfrag{b}{$\tilde \zeta$}
\includegraphics[height=30mm]{tildebrel_cd}
\label{fig:tildebrel}
}
\hspace{0.1\textwidth}
\subfloat[tildebreloscil][Plot of $\tilde \zeta$ showing the small scale oscillations where different Fourier orders become the dominant contributors]
{
\psfrag{a}{$L/\beth$}
\psfrag{b}{$\tilde \zeta$}
\includegraphics[width=0.35\textwidth]{tildebrel_oscil_cd}
}
\caption{For the plots of $\tilde \zeta$ from Theorem~\ref{thm:stabilitybilayer} we have approximated $\tilde \zeta$ by $\max_{1\leq j\leq 100}\underline{\zeta}_j$.} \label{fig:tildebrelsmallscale}
\end{figure}
The monolayer is stable with respect to perturbations of the interface if $\mu \geq \tilde \mu$ (Theorem~\ref{thm:stabilitymonolayer}), and the bilayer is stable with respect to mass-preserving perturbations of the interface if $\zeta \geq \tilde \zeta$ (Theorem~\ref{thm:stabilitybilayer}).
$\tilde \mu$ and $\tilde \zeta$ display very similar overall behaviour. They both rapidly increase for small values of $L/\aleph$ or $L/\beth$ until they settle down around a value for $\mu$ or $\zeta$ close to $0.6$. Around this value both $\tilde \mu$ and $\tilde \zeta$ oscillate as with increasing $L$ different Fourier modes become dominant. The similarity is broken, however, by the restriction of $\mu$ to $\left[0, \frac12\right]$. Because of this the monolayer is unstable for all values of $L/\aleph$ greater than about $6$ (see Figure~\ref{fig:tildemreluvrestricted}), while the bilayer can be stable for all values of $L/\beth$ (see Figure~\ref{fig:tildebrel}).
Note that a higher relative penalisation of the U-V interfaces, i.e. higher values of $\mu$ and $\zeta$, improves stability. For the bilayer a sufficiently high value of $\zeta$ even guarantees stability in the sense discussed here (Lemma~\ref{lem:brel1boundedawayfrom1}). This reinforces the notion that $d_{uv}$ plays a special role in the diblock copolymer-homopolymer problem, which was also encountered in \cite[\S 2.2]{vanGennipPeletier07a}.
\section{Green's function on a periodic two-dimensional strip}\label{sec:greens}
When computing the first and second variation of $\mathcal{F}$ for monolayers and bilayers in Section~\ref{sec:perstrip} we required an explicit formula for the Green's function of $-\Delta$ on $S_L$. We present this Green's function here and prove that it satisfies the necessary conditions. For a heuristic derivation we refer to \cite[\S 6.3.1]{vanGennip08}.
\begin{theorem}\label{thm:gf}
Define $G: S_L\setminus\{(0, 0)\} \to \vz{R}$ in $L_{\text{loc}}^2(S_L)$ as follows:
\begin{equation}\label{eq:greensfunction}
G(x_1, x_2) := \frac{-1}{4 \pi} \log \left( 2 \cosh \left( \frac{2 \pi x_2}{L} \right) - 2 \cos \left( \frac{2 \pi x_1}{L} \right)\right).
\end{equation}
Then the equation $-\Delta G(x_1, x_2) = \delta(x_1, x_2)$ is satisfied with periodic boundary conditions $G(0, x_2) = G(L, x_2)$ and $\frac{\partial}{\partial x_1} G(0, x_2) = \frac{\partial}{\partial x_1} G(L, x_2)$. Writing the Fourier expansion of $G$ in $x_1$ gives
\begin{equation}\label{eq:greenfour}
G(x_1, x_2) = - \frac{1}{2 L} |x_2| + \frac{1}{2 \pi} \sum_{q=1}^{\infty} \frac{1}{q} e^{-2 \pi |x_2| q / L} \cos\left( \frac{2 \pi x_1 q}{L} \right).
\end{equation}
\end{theorem}
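Before the proof, a quick numerical comparison of the closed form (\ref{eq:greensfunction}) with the truncated Fourier series (\ref{eq:greenfour}); a sketch (the function names and the truncation order are ours):

```python
import math

def G_closed(x1, x2, L):
    # Closed form (eq:greensfunction)
    return -math.log(2 * math.cosh(2 * math.pi * x2 / L)
                     - 2 * math.cos(2 * math.pi * x1 / L)) / (4 * math.pi)

def G_fourier(x1, x2, L, qmax=200):
    # Truncated series (eq:greenfour); the terms decay like
    # exp(-2 pi |x2| q / L), so qmax = 200 is ample for |x2| of order 1.
    s = -abs(x2) / (2 * L)
    for q in range(1, qmax + 1):
        s += (math.exp(-2 * math.pi * abs(x2) * q / L)
              * math.cos(2 * math.pi * x1 * q / L) / (2 * math.pi * q))
    return s

L = 5.0
for x1, x2 in [(0.3, 0.7), (1.9, -1.2), (4.1, 2.5)]:
    assert abs(G_closed(x1, x2, L) - G_fourier(x1, x2, L)) < 1e-10
```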
\begin{proof}
We first prove that $G$, as given in equation~(\ref{eq:greensfunction}), satisfies the equation \mbox{$-\Delta G(x_1, x_2) = \delta(x_1, x_2)$} in the sense of distributions, i.e. we show that for all $\phi \in C_c^{\infty}(S_L)$,
\[
\int_{S_L} G(x_1, x_2) (-\Delta \phi)(x_1, x_2) \, dx_1 \, dx_2 = \phi(0, 0).
\]
Note that the constant term $- \frac{1}{4 \pi} \log 2$ implicitly present in (\ref{eq:greensfunction}) as the factor $2$ in the logarithm is of no importance here, so we will leave it out of subsequent calculations.\footnote{The reason for adding it in (\ref{eq:greensfunction}) in the first place is to get a Fourier series without a term independent of $x_1$ and $x_2$.}
We write
\begin{align*}
&\hspace{0.4cm}\int_{S_L} G (-\Delta \phi) \, d\mathcal{L}^2\\
&= \lim_{\epsilon \to 0} \int_{S_L \setminus B(0,\epsilon)} G (-\Delta \phi) \, d\mathcal{L}^2\\
&= \lim_{\epsilon \to 0} \Biggl( -\int_{\partial B(0,\epsilon)} G \nabla \phi \cdot \nu \, d\mathcal{H}^1 - \int_{S_L \setminus B(0,\epsilon)} \Delta G \phi \, d\mathcal{L}^2 + \int_{\partial B(0,\epsilon)} \nabla G \cdot \nu \phi \, d\mathcal{H}^1 \Biggr),
\end{align*}
where $B(0, \epsilon)$ is the closed ball of radius $\epsilon$ and with the origin as center. $\nu$ is the unit outward normal to $S_L \setminus B(0,\epsilon)$, which means $\nu$ points into $B(0, \epsilon)$. Denote the three terms by $I_{\epsilon}, J_{\epsilon}$ and $K_{\epsilon}$ respectively. The integral $I_\e$ vanishes:
\begin{align*}
\lim_{\e\to0} |I_\e|
&\leq \lim_{\e\to0}
\|\nabla \phi\|_{L^\infty} \int_{\partial B(0,\epsilon)} |G|\, d\mathcal{H}^1\\
&=\lim_{\e\to0}
\|\nabla \phi\|_{L^\infty} \,2\pi\e\,
\left|\log\biggl( \frac{2 \pi^2}{L^2} \Bigl(\epsilon^2 + \mathcal{O}(\epsilon^4)\Bigr) \biggr)\right| = 0.
\end{align*}
For $J_\e$ we calculate
\[
\nabla G(x_1, x_2) = -\frac{1}{2L} \left[ \cosh\left(\frac{2 \pi x_2}{L}\right) - \cos\left(\frac{2 \pi x_1}{L}\right) \right]^{-1} \left( \begin{array}{c} \sin \left(\frac{2 \pi x_1}{L}\right)\\ \sinh\left(\frac{2 \pi x_2}{L}\right)\end{array} \right).
\]
For notational convenience we will write $C(x_1, x_2) := \cosh\left(\frac{2 \pi x_2}{L}\right) - \cos\left(\frac{2 \pi x_1}{L}\right)$. Then we can compute that at $(x_1, x_2) \not=(0,0)$
\begin{align*}
\frac{\partial^2}{\partial x_1^2} G(x_1, x_2) &= \frac{\pi}{L^2} \left( C(x_1, x_2)^{-2} \sin^2\left(\frac{2 \pi x_1}{L}\right) - C(x_1, x_2)^{-1} \cos\left(\frac{2 \pi x_1}{L}\right) \right),\\
\frac{\partial^2}{\partial x_2^2} G(x_1, x_2) &= \frac{\pi}{L^2} \left( C(x_1, x_2)^{-2} \sinh^2\left(\frac{2 \pi x_2}{L}\right) - C(x_1, x_2)^{-1} \cosh\left(\frac{2 \pi x_2}{L}\right) \right),
\end{align*}
which gives $\Delta G(x_1, x_2) = 0$, from which it follows that $J_{\epsilon} = 0$ for all $\epsilon > 0$.
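This pointwise computation can be cross-checked symbolically. A minimal sketch (using sympy; not part of the proof), with the closed form $G = -(4\pi)^{-1}\log\bigl(2(\cosh(2\pi x_2/L)-\cos(2\pi x_1/L))\bigr)$ consistent with the gradient computed above:

```python
import sympy as sp

# Symbolic cross-check (not part of the proof): the closed form of G
# should be harmonic away from the singularity at the origin.
x1, x2, L = sp.symbols('x1 x2 L', positive=True)
G = -sp.log(2 * (sp.cosh(2 * sp.pi * x2 / L)
                 - sp.cos(2 * sp.pi * x1 / L))) / (4 * sp.pi)
laplacian = sp.diff(G, x1, 2) + sp.diff(G, x2, 2)

# Evaluate at an arbitrary point away from (0, 0); the result should vanish.
val = laplacian.subs({x1: sp.Rational(3, 10), x2: sp.Rational(7, 10), L: 2}).evalf()
```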
To determine $\lim_{\e\to0}K_\e$ we approximate $G$ by $G_{{\vz R}^2}(x_1,x_2) = -(4\pi)^{-1}\log(x_1^2+x_2^2)$, the Green's function of $-\Delta$ on ${\vz R}^2$. Estimating the difference on $\partial B(0,\e)$ by
\begin{align*}
\left|\nabla G(x_1,x_2) - \nabla G_{{\vz R}^2}(x_1,x_2)\right|
&= \left| -\frac1{2L} \,
\frac{\frac{2\pi x_1}L \vec e_1 + \frac{2\pi x_2}L \vec e_2
+ \mathcal {O}\bigl((x_1^2+x_2^2)^{3/2}\bigr)}
{\frac{2\pi^2}{L^2}(x_1^2+x_2^2) + \mathcal {O}\bigl((x_1^2+x_2^2)^2\bigr)}
+ \frac1{2\pi} \,
\frac{x_1\vec e_1 + x_2 \vec e_2}{x_1^2+x_2^2}
\right|\\
&= \mathcal{O}\bigl((x_1^2+x_2^2)^{1/2}\bigr) \qquad\text{as }x_1^2+x_2^2 = \e^2 \to 0,
\end{align*}
we calculate
\begin{align*}
\lim_{\e\to0} K_\e
&= \lim_{\e\to0} \int_{\partial B(0,\epsilon)}
\nabla (G-G_{{\vz R}^2}) \cdot \nu \phi \, d\mathcal{H}^1 + \lim_{\e\to0} \int_{\partial B(0,\epsilon)}
\nabla G_{{\vz R}^2} \cdot \nu \phi \, d\mathcal{H}^1 \\
&= 0 + \lim_{\e\to0} \frac1{2\pi} \int_{\partial B(0,\epsilon)} \frac{x_1\vec e_1 + x_2 \vec e_2}{x_1^2+x_2^2}\cdot \frac{x_1\vec e_1 + x_2 \vec e_2}{(x_1^2+x_2^2)^{1/2}}\,\phi(x_1, x_2) \, d\mathcal{H}^1(x_1,x_2) = \phi(0,0).
\end{align*}
Taking these results together shows that $\lim_{\epsilon \downarrow 0}( I_{\epsilon} + J_{\epsilon} + K_{\epsilon}) = \phi(0, 0)$ and thus $-\Delta G = \delta$ holds in the sense of distributions.
\medskip
To prove that the Fourier series in (\ref{eq:greenfour}) corresponds to the Green's function (\ref{eq:greensfunction}), let $G$ be given by~(\ref{eq:greensfunction}) and $\tilde G$ by~(\ref{eq:greenfour}). Note that for every $x_2\not=0$ the series converges absolutely:
\[
\sum_{q=1}^{\infty} \left| \frac1{q}e^{-\frac{2 \pi |x_2| q}L} \cos\left( \frac{2 \pi x_1 q}{L} \right)\right|
\leq \sum_{q=1}^{\infty} \left(e^{-\frac{2 \pi |x_2|}L}\right)^q
= \frac{e^{-\frac{2 \pi |x_2|}L}}{1-e^{-\frac{2 \pi |x_2|}L}}.
\]
If $x_2 \neq 0$ we then calculate
\begin{align}
\sum_{q=1}^{\infty} \frac1q e^{-\frac{2 \pi |x_2| q}L} \cos\left(\frac{2 \pi q x_1}L\right) &= \text{Re}\, \sum_{q=1}^{\infty} \frac1q e^{\frac{2 \pi q}L (-|x_2| + i x_1)}\nonumber\\
&= -\text{Re}\log\left(1 - e^{\frac{2 \pi}L (-|x_2| + i x_1)}\right)\nonumber\\
&= -\log\big|1 - e^{\frac{2 \pi}L (-|x_2| + i x_1)}\big|\nonumber\\
&= -\frac12 \log\left( 1 - 2 e^{-\frac{2 \pi |x_2|}L} \cos\left(\frac{2 \pi x_1}L\right) + e^{\frac{-4 \pi |x_2|}L} \right)\nonumber\\
&= -\frac12 \log\left(2 e^{-\frac{2 \pi |x_2|}L} \left( \frac12 e^{\frac{2 \pi |x_2|}L} - \cos\left(\frac{2 \pi x_1}L\right) + \frac12 e^{-\frac{2 \pi |x_2|}L} \right)\right)\nonumber\\
&= \frac{\pi}L |x_2| - \frac12 \log\left(2 \cosh\left(\frac{2 \pi |x_2|}L\right) - 2 \cos\left(\frac{2 \pi x_1}L\right)\right),\label{eq:greenfunctionandseries}
\end{align}
so that
the partial sums $\sum_{q=1}^{\ell} \frac1q e^{-\frac{2 \pi |x_2| q}L} \cos\left(\frac{2 \pi q x_1}L\right)$ converge pointwise to $2 \pi G(x_1,x_2) + \frac{\pi}L |x_2|$ for almost all $(x_1,x_2)\in S_L$. Since the partial sums are all bounded by the $L_{\text{loc}}^2$-function on the right hand side of (\ref{eq:greenfunctionandseries}) the Dominated Convergence Theorem yields $\tilde G \in L^2(S_L)$. Together with $G = \tilde G$ a.e. on $S_L$ this shows that $G = \tilde G$ in $L_{\text{loc}}^2(S_L)$.
\end{proof}
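The identity (\ref{eq:greenfunctionandseries}) can also be verified numerically. A minimal sketch, at an arbitrarily chosen point (the values of $L$, $x_1$, $x_2$ are illustrative):

```python
import math

# Compare the closed form on the right of (eq:greenfunctionandseries)
# with a truncated version of the series on the left; the terms decay
# like exp(-2*pi*|x2|*q/L), so 200 terms are far more than enough here.
L, x1, x2 = 1.0, 0.3, 0.25

closed_form = (math.pi / L) * abs(x2) - 0.5 * math.log(
    2 * math.cosh(2 * math.pi * abs(x2) / L)
    - 2 * math.cos(2 * math.pi * x1 / L))

series = sum(
    math.exp(-2 * math.pi * abs(x2) * q / L) * math.cos(2 * math.pi * q * x1 / L) / q
    for q in range(1, 200))
```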
\begin{corol}\label{cor:Greenproperty}
Let $G$ be as in (\ref{eq:greenfour}) and let $x_2 \in \vz{R}\setminus\{0\}$. Then
\[
\int_0^L G(x_1, x_2) \, dx_1 = -\frac12 |x_2|.
\]
\end{corol}
\begin{proof}
Integrate the representation (\ref{eq:greenfour}) term by term: for all $q \geq 1$,
\[
\int_0^L \cos\left( \frac{2 \pi q x_1}{L} \right) \, dx_1 = 0,
\]
so only the first term $-\frac{1}{2L}|x_2|$ survives, and it integrates to $-\frac12 |x_2|$.
\end{proof}
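As an illustrative numerical check (not part of the proof, with arbitrarily chosen $L$ and $x_2$), the integral of the closed form of $G$ over one period can be approximated with the trapezoidal rule, which converges rapidly here because the integrand is smooth and $L$-periodic in $x_1$ for $x_2 \neq 0$:

```python
import math

# Equal-weight quadrature over one period of the smooth periodic integrand;
# the exact value of the integral is -|x2|/2.
L, x2 = 2.0, 0.4
n = 512
integral = (L / n) * sum(
    -math.log(2 * (math.cosh(2 * math.pi * x2 / L)
                   - math.cos(2 * math.pi * k / n))) / (4 * math.pi)
    for k in range(n))
```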
Also note that $G(-x_1, x_2) = G(x_1, x_2)$ and $G(x_1, -x_2) = G(x_1, x_2)$.
\section{Discussion and conclusions}\label{sec:concdisc}
\subsection{Comparing mono- and bilayers} In this paper we showed that bilayers can be both stable and unstable, depending on the parameters: when the U-V interface penalty is strong enough relative to the penalties of the other interfaces, the bilayer is stable. On the other hand, monolayers are unstable as soon as the strip is wide enough to accommodate the unstable wavelengths, regardless of the values of the interface penalisation.
The bilayer can be thought of as two juxtaposed monolayers, and therefore the question presents itself how the unstable mode of the monolayer is prevented in the bilayer context. The correct answer seems to be that the unstable mode is actually not prevented at all; it continues to exist in the context of the bilayer, as can be witnessed in Figures~\ref{fig:monounstab2} and (especially)~\ref{fig:biunstab3}.
The reason why this unstable mode does not make every bilayer unstable lies in the admissible values of the coefficients, which are different in the two cases. For the VUV bilayer, for instance, the value of the U-0 interface penalty $d_{u0}$ is irrelevant; therefore, by choosing $d_{u0} := d_{v0}+d_{uv}$, every choice of $d_{uv}$ and $d_{v0}$ becomes admissible, and most importantly, the case of purely U-V penalisation ($\zeta \approx 1$, or $d_{v0}\approx 0$) is therefore allowed. For the monolayer, however, the conditions~\pref{eq:ddemands} imply that the two side interfaces (0-U and V-0) are necessarily penalised at least half as strongly as the central (U-V) interface. Most of the white (stable) region in Figure~\ref{fig:E2contour_cd} therefore is inaccessible, and only the unstable region remains as can be seen in Figure~\ref{fig:E2mrelva} (Figures~\ref{fig:E2mrelv} only show stability of the first Fourier mode, but the situation is similar for the higher modes).
\subsection{Comparison with~\cite{PeletierRoeger08}}
In previous work~\cite{PeletierRoeger08} one of the authors (Peletier) and R{\"o}ger studied a related functional, \begin{equation}\label{eq:lipidbil}
\mathcal{G}_{\epsilon}(u, v) := \left\{ \begin{array}{ll} \displaystyle \epsilon \int_{{\vz R}^2} |\nabla u| + \frac{1}{\epsilon} d_1(u,
v) & \mbox
{ if $(u, v) \in \mathcal{K}_{\epsilon}$,}\vspace{0.25cm}\\ \infty &\mbox{ otherwise.} \end {array} \right.
\end{equation}
Here
$d_1(\cdot,\cdot)$ is the Monge-Kantorovich distance with cost function $c(x,y) = |x-y|$, \cite{Villani08}, and
\[
\mathcal{K}_{\epsilon} := \left\{ (u, v) \in \text{BV}(\vz{R}^2; \{0,
1/\epsilon\}) \times L^1({\vz R}^2; \{0, 1/\epsilon\}) : uv = 0 \text{ a.e., and }\int_{{\vz R}^2} u = \int_{{\vz R}^2} v = M \right\}.
\]
Apart from the choices $c_0 = c_v = 0$ and $c_u = 1$, the main difference between $\mathcal{F}$ and (\ref{eq:lipidbil}) is the different non-local term.
The scaling (constant mass but increasing amplitude $1/\e$) implies that the supports of $u$ and $v$ shrink to zero measure. The main goal in~\cite{PeletierRoeger08} was to investigate the limit $\e\to0$ and characterise the limiting structures and their energy.
The main result, a $\Gamma$-convergence theorem, can be interpreted as stating---in a very weak sense---that the limiting structures are VUV bilayers; in the limit $\e\to0$ these bilayers have a thickness equal to $4\e$ and their curvature is bounded in $L^2$. Most importantly, in connection with the present paper, the limit energy depends on the curvature in a stable way: the energy is minimal for straight bilayers and increases with curvature.
This result compares well with the results of this paper. The functional $\mathcal G_\e$ of~\cite{PeletierRoeger08} penalises only U-V and U-0 interfaces; the V-0 interface is free, or in terms of this paper $\zeta=1$. Both in~\cite{PeletierRoeger08} and in the present paper we therefore find that bilayers of optimal width are stable, although the precise results and their methods of proof are very different.
\subsection{Comparison with `wriggled lamellar' solutions}\label{sec:wriggledlamellar}
In a series of papers~\cite{Muratov02,RenWei03b,RenWei05} Muratov and Ren \& Wei investigate the stability of one-dimensional layered (lamellar) structures for copolymer melts---the case $u+v\equiv 1$. They find that for a critical value of the lamellar spacing the straight lamellar structures become unstable and a stable branch of curved, `wriggled'
lamellar structures bifurcates. Muratov considers unbounded domains and finds that the loss of stability happens at \emph{exactly} the optimal value of the width: for any larger value of the width unstable directions exist with very large wavelength. Ren and Wei consider bounded domains, which provides a natural limit on the wavelength of perturbations, and consequently they find that at the optimal width the straight lamellar structures are stable, and the bifurcation occurs at slightly larger width.
The system studied in this paper is different in that there are three types of interfaces, not one; for comparison purposes one can identify the pure-melt case described above with the case of pure U-V interface penalisation for bilayers ($\zeta=1$). In this case the bilayer of optimal width is stable, and this result mirrors the stability result of Ren and Wei for optimal-width lamellar structures.
\subsection{Generalizations and extensions} One might wonder whether the functional $\mathcal F$ depends in a smooth manner on the perturbations. The calculation of the second derivative of the functional in the melt case done by Choksi and Sternberg~\cite{ChoksiSternberg06} suggests that the second derivative of $\mathcal F$ depends continuously on $W^{1,2}$-regular perturbations of the interfaces. In that case the functional $\mathcal F$ is of class $C^2$, and the linear stability analysis of the current paper automatically implies the equivalent nonlinear stability properties.
One can also wonder whether the class of perturbations that are considered---those described by functions of the variable $x_1\in \mathbb T_L$---is not too restrictive. The class of all perturbations that are small in $L^1$, for instance, also includes many perturbations with small inclusions of one phase in another, which are not covered here. We believe that these will generally be less advantageous, since the results of this chapter show that perturbations with fast oscillations are energetically expensive (because the layers are stable with respect to the admissible perturbations for most values of the surface tension coefficients if $L$ is small). The same conclusion can be reached by a slightly different, heuristic argument as follows. Within the class of uniformly bounded functions the $H^{-1}$-norm is continuous with respect to the $L^1$-topology, as can be seen from
\[
\|f\|_{H^{-1}}^2 = \int f\varphi \leq \|f\|_{L^2} \|\varphi\|_{L^2} \leq C \|f\|_{L^2}^2 \leq C \|f\|_{L^1} \|f\|_{L^{\infty}},
\]
where $\varphi$ solves $-\Delta \varphi = f$. Therefore within that class of functions the $H^{-1}$-norm is also continuous with respect to the area of the inclusion; for small inclusions, with a large circumference-to-area ratio, a possible decrease in the $H^{-1}$-norm is thus dwarfed by the increase in interfacial length associated with such an inclusion.
Note that the problem has not completely been non-dimensionalised; it is possible to rescale the problem by the length scale $L$, resulting in a three-parameter problem (in the rescaled parameters $c_0$, $c_u$, and $c_v$). Instead we keep the length scale explicitly in the problem to illustrate the length-scale dependence of the stability properties.
\subsection{Diffuse interface model}
The functional $\mathcal{F}$ is the sharp interface limit (via $\Gamma$-convergence) of a well-known diffuse-interface functional~\cite{ModicaMortola77,Baldo90}
\[
\mathcal{F}_{\epsilon}(u, v) = \int \Bigl[ \frac{\epsilon}2 |\nabla u|^2 + \frac{\epsilon}2 |\nabla v|^2 + \frac{\epsilon}2 |\nabla (u+v)|^2 + \epsilon^{-1} W(u, v) \Bigr] \, dx + \frac12 \| u - v\|_{H^{-1}}^2.
\]
Here $W$ is a triple-well potential with wells at $(0,0)$, $(1,0)$, and $(0,1)$. The coefficients $d_{uv}$, $d_{u0}$, and $d_{v0}$ in the sharp interface limit depend on the specific form of $W$ via
\[
d_{kl} := 2 \inf\left\{ \int_0^1 \sqrt{W(\gamma(t))} |\gamma'(t)|\,dt: \gamma\in C^1([0,1]; ({\vz R}_+)^n), \gamma(0)=\alpha_k, \gamma(1)=\alpha_l \right\},
\]
where $\alpha_u = (1,0)$, $\alpha_v=(0,1)$, and $\alpha_0=(0,0)$.
By the properties of $\Gamma$-convergence minimisers of $\mathcal{F}_{\epsilon}$ converge to minimisers of $\mathcal{F}$~\cite[Corollary~7.17]{DalMaso93}. Therefore our results indicate that in the regions of their respective instability monolayers and bilayers are not minimisers for $\mathcal{F}_{\epsilon}$ for small $\epsilon$.
\section{Introduction}
Since its discovery in the early days of X-ray astronomy~\citep{margon71},
Cir X-1 has shown a vast range of brightness levels, variability patterns,
and spectral changes in its X-ray emissions. Despite significant advances in recent years
these emissions remain poorly understood. Until about a decade ago it appeared
fairly well established
from photometric variability of the optical counterpart~\citep{stewart91, glass94},
its orbital parameters~\citep{brandt95, tauris99}, as well as its
X-ray spectral and timing patterns ~\citep{tennant87, shirey99} that Cir X-1 is
probably a low-mass X-ray binary containing a neutron star. The presence of a neutron star
is now well established through the direct observation of type I X-ray
bursts~\citep{tennant86} and their recent confirmation~\citep{linares10, papitto10}.
From kinematic parameters and the
assumption that Cir X-1 is associated with the supernova remnant G321.9-0.3,
~\citet{tauris99} deduced a companion mass of about 2 $M_{\odot}$ or less and a
very extreme orbital eccentricity (e $\sim$ 0.9). More recently~\citet{jonker07}
determined that the companion is more massive, most likely an A0 to B5 type
supergiant and revised the orbital eccentricity to a more moderate value
(e $\sim$ 0.45), reviving an original identification by \citet{whelan77}.
The picture definitely changed when \citet{heinz13} revealed a faint
X-ray supernova remnant associated with Cir~X-1. This
allowed the age of the system to be determined at about 4500 years,
which has two important consequences besides making Cir X-1 the
youngest X-ray binary of its class known today. It implies
that the neutron star should have a magnetic field exceeding 10$^{12}$
Gauss~\citep{kaspi10}, which is at odds with the occasional observation
of type I X-ray bursts in this source. Such events are usually observed
in old low-mass X-ray binaries with magnetic fields below 10$^{9}$ Gauss,
where the accretion stream is hardly affected by such a low field.
Such youth also re-affirms the determination that the companion is massive because
a low mass star could not have had time to evolve to fill its Roche-lobe
at periastron. Since the supernova was a core collapse, the progenitor star was also massive~\citep{jonker07};
the companion star should then be a massive main-sequence star
a few tens of Myr old. Once the binary orbit is largely circularized,
Cir X-1 will eventually become a high-mass X-ray binary as we know them today.
The possibility that the supernova was caused by an accretion
induced collapse (AIC,~\citet{bhattacharya91}) is unlikely: neutron stars
formed in electron-capture supernovae are not expected to receive a
significant kick, which is inconsistent with the dynamic orbital
parameters and evolution of the system \citep{clarkson04, tauris13, heinz13}.
In X-rays, Cir X-1 exhibits two main variability patterns,
one happens on short time scales related to its orbital period, another one
spans over many years with X-ray fluxes changing from mCrab
levels to several Crab. One orbit lasts about 16.5 days~\citep{kaluzienski76} and
for most of the orbit the X-ray emission is fairly persistent. Due to
its orbital eccentricity, the neutron star and its accretion disk actually
spend most of the time detached from the companion star. Near zero phase
(periastron) the Roche-lobe of the companion overflows and the neutron
star/disk system attaches to the overflow stream and actively accretes matter.
This results in a significant rise in X-rays at periastron passage, which at times
can radiate up to super-Eddington fluxes~\citep{brandt96, brandt00, schulz02}.
The frequency and
strength of these periastron flares also seem to follow a long-term variability
pattern that spans over 30 years~\citep{parkinson03}.
This long-term lightcurve reflects the flux changes of 30 years measured
with most of the major detectors and observatories available up to 2001. In the
mid-1990s Cir X-1 was as bright as 2 Crab and exhibited relativistic radio jets
~\citep{stewart93, fender98}. {\sl ASCA } observations showed that the accretion disk
is probably viewed fairly edge-on~\citep{brandt96}. One of the biggest revelations
was delivered by the first {\sl Chandra } observations, which showed strong P Cygni lines
indicating a powerful and variable accretion disk wind ~\citep{brandt00, schulz02}.
~\citet{heinz07} discovered a parsec scale X-ray jet (see also \citet{sell10})
manifesting the picture that
at times of high flux Cir X-1 behaves like a true micro-quasar~\citep{mirabel01}.
Since then the X-ray source has dimmed steadily. In 2005 the source was already
down in flux by more than an order of magnitude. The P Cygni lines were gone,
and an emission line spectrum emerged, rich in H- and He-like lines from high-Z elements
such as Si, S, Ar, Ca, and Fe at very high neutral columns. There was a notable
presence of an ionized absorber as well as some resonant absorptions indicating
a rather weak disk wind. Since these observations the X-ray flux continued
to drop to levels below 10 mCrab. In a rare event the X-ray source at periastron
experienced a larger outburst in 2010 during which Cir X-1 exhibited type I X-ray bursts
~\citep{linares10, papitto10}. This period was extensively covered with {\sl RXTE }
and one {\sl Chandra } observation providing much temporal and spectral information
but no definite conclusions with respect to the origins of the X-ray continuum
~\citep{dai12}. However, it confirmed that Cir X-1
is a neutron star X-ray binary.
In this paper we analyze a series of observations between 2008 and 2017 which
were taken at the absolute lowest flux levels in decades with the goal
to characterize the nature of the X-ray source at these levels as well as
to find interfaces to the new evolving picture that Cir X-1 is a very young massive
main sequence X-ray binary.
\section{Chandra Observations}
Cir~X-1 was observed with the high energy transmission grating spectrometer
(HETGS, see \citet{canizares05} for a detailed description)
once in 2008 and a few times in 2017. Table~1 summarizes all observational parameters.
We denominate the 2008 observation as 'V' and the 2017 observations as 'VIIa-c'.
We add one more observation done in 2010 (PI: D'Ai) and label this
one as 'VI'. This is in line with the denominations defined in ~\citet{schulz08}
in sequence to the previous {\sl Chandra } HETGS observations. Observations I and
II are described in~\citet{brandt00} and \citet{schulz02}, observations
III and IV in~\citet{schulz08}.
Observations V and VIIa-c were all performed during periastron passage, observation
VI was done at apastron passage.
\begin{table*}[t]
\input{"table1.tex"}
\end{table*}
\vspace{0.3cm}
\includegraphics[angle=0,width=8.5cm]{figure1.eps}
\figcaption{
The raw-count HETG 1st order spectrum of the co-added observations
V and VIIa-c (top panel), and the unfolded HETG 1st order spectrum
on an expanded scale (bottom three panels).
\label{figure1}}
\vspace{0.3cm}
All observations were reprocessed using CIAO 4.9 using CIAO
CALDBv4.7.7 products. Updates to CIAO and its products since then did not
impact the analysis performed at the time of submission.
The wavelength scale was determined by measuring the zero-order
position to a positional accuracy of about half a detector pixel ensuring a wavelength
scale accuracy of about a quarter resolution element, i.e. 0.005~\AA\ for MEG and
0.003~\AA\ for HEG spectra. For our previous observations we
could not use a direct zero-order source detection because the point spread function of the
source was too piled up. In that case we measured the zero position by
determining the intersection between the readout streak and the grating dispersion arms.
In this analysis we applied both methods and their results agreed well within
the accuracy stated above. Note, in observation III
~\citep{schulz08, iaria08} we did not even have a source readout streak which
introduced a systematic uncertainty that resulted in slight redshifts in the
detected lines. It is thus paramount that we maintain a high confidence in the
location of the zero order position.
For transmission gratings the dispersion scale is linear in
wavelength, therefore we perform all analysis in wavelength space. This ensures
the most accurate scales through multiple binnings.
We used standard wavelength redistribution matrix files (RMF)
but generated ancillary response files (ARFs) using the
provided aspect solutions, bad pixel maps, and CCD window filters.
\footnote{see \url{http://asc.harvard.edu/ciao/threads/}}
For all the observations we
generated spectra and analysis products for the medium energy gratings (MEG) +1 and -1
orders, as well as for the high energy gratings (HEG) +1 and -1 orders. Figure~\ref{figure1}
shows co-added 1st order HETG spectra for all periastron observations
(V and VIIa-c) binned by a factor 4, which represents about one
MEG resolution element per spectral bin. The
lightcurves of the periastron observations are all very similar and flat without any structure and
we therefore do not present any plots. Observation V is very faint with 0.12 c/s, observations
VIIa and VIIb are similar, and
observation VIIc is slightly brighter (see Table~1). The apastron observation (VI) was
taken during a specifically strong outburst and is an order of magnitude
brighter than all the other observations. This observation is shown in Figure~\ref{figure7}.
\vspace{0.3cm}
\includegraphics[angle=0,width=8.5cm]{figure2.eps}
\figcaption{The X-ray spectrum at apastron during an intermediate outburst in 2010.
\label{figure7}}
\vspace{0.3cm}
\section{Spectral Analysis}
For the spectral analysis we focus exclusively on the four periastron observations.
Each individual observation does not have enough statistics to perform a detailed
line analysis; we therefore co-add all of them for plotting and fit them
simultaneously in the analysis. Because these
observations were over an order of magnitude fainter than the ones in
\citet{schulz08}, we used the \emph{Cash} statistic~\citep{cash79} for the fits.
This only reflects a more accurate treatment of low statistics data while
still allowing for a consistent comparison with previous much brighter
observations.
Figure~\ref{figure1} shows faint and bright lines in the spectrum. For the
determination of the continuum we remove these wavelength regions from the
spectral model fit. The observed lines all appear at known line wavelengths
for Mg, Si, S, Ar, Ca and Fe and they also appear quite narrow. We removed
0.05 \AA\ regions around the detected line centroids for the
line centroids (see Sect.~3.2) detected for these elements.
All spectra then have 1011 data bins in the unbinned case. We do not bin the
spectra during the fits as this introduces statistical biases. However, we do
match the grids of the HEG to the MEG spectra to plot the combined spectrum.
\vspace{0.3cm}
\includegraphics[angle=0,width=8.5cm]{figure3.eps}
\figcaption{Line ratios of the H- and He-like resonance lines
from the high flux observations in \cite{schulz08} and the low
flux observations reported in this paper.
\label{figure2}}
\vspace{0.3cm}
\subsection{X-ray Line Analysis}
The co-added spectrum in Figure~\ref{figure1} shows very strong H- and He-like lines
for Mg, Si, and S with line counts between 150 and 320 cts/line. This is large enough
to allow the
determination of critical line parameters such as line shifts, widths, and flux.
For this analysis we use Gaussian line functions. The
H-like lines and the Fe K line are single Gaussians. We do not account
for the spin-orbit split in the H-like lines which produces a separation of
the $\alpha_1$ and the $\alpha_2$ line by 0.0056~\AA\ and use the
average line location as was done in ~\citet{schulz02} and ~\citet{schulz08}. This
allows us to better compare the line measurements.
As a consequence we may slightly overestimate the line width in H-like lines.
The He-like lines are triplets containing a resonance (r), an intercombination (i),
and a forbidden (f) line with fixed line spacings. Here we froze the line
spacings of the i and f lines relative to the r lines.
We then divided the bandpass into five regions containing the Mg, Si, S,
Ca + Ar, and Fe lines, respectively.
\begin{table*}[t]
\input{"table2.tex"}
\end{table*}
\subsubsection{Line Fluxes}
The results of the line fits to the periastron data in Table~1 are
shown in Table~3. The first column lists the K-shell ion, the second column
the theoretical line location as in \citet{schulz08},
the third column is the measured wavelengths,
the fourth column gives line fluxes in units of $10^{-5}$ photons s$^{-1}$ cm$^{-2}$,
the fifth column lists the sigma of line widths in \hbox{km s$^{-1}$}
($3\times10^5 \,(\sigma_{meas}/\lambda_{meas})$). There are distinct differences
in the line fluxes with respect to previous observations when the source was brighter.
Figure~\ref{figure2} shows that the bulk of the H-like line fluxes was
of the order of 20$\%$ of the ones reported in \citet{schulz08}, while the
He-like lines are up to 70$\%$ of the previously reported fluxes. Calcium lines are
the exception as they were generally weak in observations III and IV \citep{schulz08}.
The lines in the Fe K line region follow this picture, the H-like Fe~{\sc xxvi}\ is very weak
and the region is dominated by the Fe~{\sc xxv}\ triplet. However, the Fe K line (Fe~I - X)
is very similar in flux and width to the one we observed in observation III.
The line widths cluster around 550 \hbox{km s$^{-1}$}, which is very similar
to previous detections. The line widths at shorter wavelengths appear
higher, which is likely a consequence of the increasingly lower spectral
resolution at shorter wavelengths. The size of an HETG
resolution element of a dispersed grating order is constant in wavelength
space and thus resolution element sizes increase in velocity towards higher energies.
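This effect can be illustrated with a rough conversion (the MEG resolution element of 0.023~\AA\ FWHM is an assumed approximate value, not a measurement from this paper):

```python
C_KMS = 2.998e5     # speed of light in km/s
MEG_FWHM = 0.023    # approximate MEG resolution element in Angstrom (assumed)

def resolution_velocity(lam):
    """Velocity span of one MEG resolution element at wavelength lam (Angstrom)."""
    return C_KMS * MEG_FWHM / lam

# The same wavelength element covers more velocity at shorter wavelengths:
v_si = resolution_velocity(6.183)   # at the Si XIV line, roughly 1100 km/s
v_mg = resolution_velocity(8.421)   # at the Mg XII line, roughly 800 km/s
```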
\vspace{0.3cm}
\includegraphics[angle=0,width=8.5cm]{figure4.eps}
\figcaption{Measured line centroids with respect to theoretical values
in velocity units. Negative values are blueshifts. The solid line
marks zero velocity with respect to the theoretical line values. The dashed line
marks the average blueshift of -410 \hbox{km s$^{-1}$} seen in the periastron observations
V, VIIa-c. The squares are the data from~\citet{schulz08} for observation
IV for reference.
\label{figure3}}
\vspace{0.3cm}
\subsubsection{Line Centroids}
Figure~\ref{figure3} shows the line centroids with the
expected rest positions in units of \hbox{km s$^{-1}$}. The filled black circles show
the periastron observations from Table~1. The majority of the brightest lines,
those of Mg, Si, and S, clearly show a shift to the blue of about 400 \hbox{km s$^{-1}$}.
Higher Z element lines (Ar, Ca, and Fe) are fainter and uncertainties are larger.
For comparison we plot the results from observation
IV from ~\citet{schulz08} for reference (squares).
For this observation we had to refit the He-like
triplets (Fe, Ca, Ar, S, and Si) because in that
previous analysis the widths and line spacings were free parameters and not
tied according to their triplet properties. In some cases, specifically for
the Si, Ar, and Ca triplets, this produced line centroids that are much more consistent
with the rest of the sample. The line centroids from observation IV are
consistent with the expected rest wavelengths.
\vspace{0.3cm}
\includegraphics[angle=0,width=8.5cm]{figure5.eps}
\figcaption{The Si~XIV L$_{\alpha}$ line for all four observations
plotted separately. The hatched line marks the centroid location of the
co-added line fits. The solid line marks the rest wavelength location.
It shows that all observations agree with a line centroid location
blue-shifted by about 400 \hbox{km s$^{-1}$}.
\label{figure4}}
\vspace{0.3cm}
The Si~XIV L$_{\alpha}$ line is by far the most
significant one in terms of total counts ($>$ 300 cts), and here we can look
at the individual observations. The brightest contribution comes from observation
VIIc ($\sim$ 170 cts), the faintest from observation V ($\sim$ 35 cts), enough
to allow centroid studies. Figure~\ref{figure4} shows the line counts of the
unbinned data. The black line marks the expected rest wavelength at 6.183~\AA,
the hatched line the fitted location of the co-added line data. It clearly shows
that the blue-shift is present in all observation segments and
appears persistent over many years. The lines of observations V and VIIa,b are symmetric
around the fitted centroid (hatched line).
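For scale, the measured blueshift corresponds to a very small wavelength shift; a minimal sketch (the rest wavelength is taken from the text, everything else is illustrative):

```python
C_KMS = 2.998e5      # speed of light in km/s
LAM_REST = 6.183     # Si XIV Ly-alpha rest wavelength in Angstrom (from the text)

def doppler_velocity(lam_meas, lam_rest=LAM_REST):
    """Line-of-sight velocity in km/s; negative values are blueshifts."""
    return C_KMS * (lam_meas - lam_rest) / lam_rest

# A -410 km/s blueshift corresponds to a centroid shift of under 0.01 Angstrom:
delta_lam = -410.0 * LAM_REST / C_KMS
```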
\subsubsection{G and R Ratios}
Line flux ratios are a powerful tool to diagnose detailed properties of
an ionized gas~\citep{bautista00, porquet00}.
A gas at temperatures of $< 1$ MK that is photoionized is expected to emit lines
primarily by recombination cascades and will have a G-line flux ratios of
$G = (f+i)/r > 4$ or close to that value for atomic numbers $Z$ of
12 and higher. From the fluxes in Table~3 we
find G values of 1.53$\pm$0.30, 0.93$\pm$0.60, 1.22$\pm$0.49, 2.46$\pm$1.11, 0.87$\pm$1.07
for Si, S, Ar, Ca, and Fe, respectively. These values appear well below the
expectation for a pure photoionized plasma indicating some form of hybrid
plasma. We rule out EUV photoexcitation as this would depopulate
the $\alpha$ resonance line and produce higher order transitions which do
not appear in Table~3. There are two processes that can lead to enhanced
resonance line emission: one is collisional ionization in very hot plasmas,
the other is resonance scattering.
We can rule out contributions from a collisionally ionized plasma
as here the excitation of K-shell ions with atomic number larger than
14 would also require temperatures in excess of 10 MK and the presence of
a significant bremsstrahlung continuum, which we do not observe.
What we observe is likely photoionized plasma affected by resonance scattering.
In that case the presence of a high optical depth medium can scatter
resonance line fluxes anisotropically into preferential directions depending
on geometry. Modeling of this effect requires extensive knowledge
of the geometry and local plasma properties.
Gas densities can be diagnosed with the
$R = f/i$ ratio for which we expect that at $ R > 1$ densities are below
a critical density for a specific atomic number $Z$. Our sample features
He-like triplets for Si, S, Ar, Ca, and Fe and features critical desities
of $\sim10^{14}, \sim10^{14}, \sim10^{15}, \sim10^{16}$, and $\sim 10^{17}$ cm$^{-3}$.
In all cases we find R values much larger than 1 except for Ca, where we only
detect an upper limit for the f line.
This points to plasma densities lower
than $\sim5 \times 10^{14}$ cm$^{-3}$ assuming an isotropically mixed gas.
The R ratio is also very susceptible to UV radiation, which depopulates the
metastable forbidden-line levels into the intercombination-line levels.
The fact that the f lines in Table~3 generally appear stronger than the corresponding i lines
indicates that effects from UV radiation are not significant.
The lines in the Fe K line region follow this picture: the H-like Fe~{\sc xxvi}\ is very weak
and the region is dominated by the Fe~{\sc xxv}\ triplet. However, the Fe K line (Fe~I - X)
is very similar in flux and width to the one we observed in observation III.
\subsection{Photoionization Modeling}
\begin{table*}[t]
\input{table3.tex}
\end{table*}
Figure~\ref{figure1} features
strong He-like lines with especially strong r-lines. The ionization parameter is defined
as
\begin{equation}
\xi = L_x/(n d^2),
\end{equation}
\noindent
where L$_x$ is the source luminosity, n the plasma density, and d the distance
from the X-ray source. A multiple-plasma environment requires a range
of ionization parameters in the fit, which complicates the modeling process
in terms of fitting time and parameter range.
In that light we try to limit the number of fit components to a
minimum. For the choice of continuum we modeled a few cases including combinations
of powerlaws as reported by \citet{schulz08} and \citet{iaria08} as well as
blackbody spectra.
For the fit itself we use \emph{XSTAR}'s \emph{photemis} function
available in \emph{Xspec}, which allows for a pre-set atomic levels population file
to calculate a photo-ionized spectrum for a single ionization parameter. For the final fit
setup we used two different functions to account for two ionization parameters.
Furthermore in order to be able to fit H- and He-like line morphology, the functions
also needed individual absorption columns, for which we applied \emph{pcfabs} functions.
For the fits we fixed the interstellar column to 1.8$\times10^{22}$ cm$^{-2}$ as reported
by \citet{heinz13}. The fits generally converged to a one-component solution for the
continuum, i.e. either a powerlaw or a blackbody spectrum. The powerlaw solution was
ruled out because it completely overshot the observed continuum above 2 \AA\ for the
Fe K line region, whereas the 1.6 keV blackbody provided the necessary steep decline
above 2 \AA\ (see Fig.~\ref{figure10}). The fit result is summarized in Table~4.
The uncertainties are 90$\%$ confidence limits calculated by \emph{conf$\_$loop} in
\emph{ISIS}. In the fit we also set the turbulent velocities to 600 \hbox{km s$^{-1}$}\ and
in the final stages we fixed some of the abundances we had let float. The redshift was
fixed to -400 \hbox{km s$^{-1}$} (see dashed line in Figure~\ref{figure3}) in one of the
\emph{photemis} components and left free in the second. We need at least
two ionization parameters because we observe Mg and Si lines as
well as Ca and Fe lines, which cannot be modeled with a single ionization
parameter~\citep{kallman04}. The Mg abundance was
slightly reduced to 0.93, which likely is a consequence of the statistically
challenging Mg~{\sc xi}\ bandpass. The Si abundance was set to 1.73, the S abundance to 1.68
for both \emph{photemis} components.
\vspace{0.3cm}
\includegraphics[angle=0,width=8.5cm]{figure6.eps}
\figcaption{
Comparison of the powerlaw fit (red) to the blackbody fit (blue) in the
Fe K line region (the data are shown in black).
\label{figure10}}
\vspace{0.3cm}
The final ionization parameters obtained were $\xi$ = 340 and 1500 erg cm s$^{-1}$,
both associated with
high column densities. The covering fractions were free parameters but converged to 0.99 in
both cases. The plasma with ionization parameter 340 erg cm s$^{-1}$
is responsible for the bulk of the
line fluxes, while the one with ionization parameter 1500 erg cm s$^{-1}$
contributes to some
of the H-like line fluxes but mostly to the Fe region.
\vspace{0.3cm}
\includegraphics[angle=-90,width=8.5cm]{figure7.eps}
\figcaption{Photo-ionization model fit using the \emph{XSTAR:photemis} function.
\label{figure6}}
\vspace{0.3cm}
The normalization of the photoionization model is defined by
\begin{equation}
K = \frac{EM}{4\pi D^2}\times10^{-10},
\end{equation}
\noindent
where EM is the emission measure of the gas in units of cm$^{-3}$ at
the involved ionization parameter, and D is the distance to Cir X-1 in units
of cm. For a distance of 9.4 kpc~\citep{heinz15} this leads to
\begin{equation}
EM = K \times 1.06\times10^{56}.
\end{equation}
\noindent
For the plasma with $\xi_1$ = 340 erg cm s$^{-1}$ we obtain
EM$_1 = 1.3\times10^{58}$ cm$^{-3}$,
for $\xi_2$ = 1500 erg cm s$^{-1}$
we obtain EM$_2 = 1.1\times10^{57}$ cm$^{-3}$.
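As a cross-check, the conversion factor of Equation 3 follows directly from the adopted 9.4 kpc distance. A minimal sketch (the kpc-to-cm constant is the standard value; the implied \emph{photemis} normalizations are derived here for illustration, not quoted from Table~4):

```python
import math

KPC_CM = 3.0857e21                 # cm per kpc
D = 9.4 * KPC_CM                   # distance to Cir X-1 (Heinz et al. 2015)

# K = EM / (4 pi D^2) * 1e-10  =>  EM = K * 4 pi D^2 * 1e10
factor = 4.0 * math.pi * D**2 * 1e10
print(f"EM/K conversion factor: {factor:.3e}")   # ~1.06e56, as in Equation 3

# Normalizations implied by the quoted emission measures:
K1 = 1.3e58 / factor    # xi = 340 erg cm/s component
K2 = 1.1e57 / factor    # xi = 1500 erg cm/s component
print(f"K1 ~ {K1:.0f}, K2 ~ {K2:.1f}")
```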
These emission measures are only slightly smaller than the one estimated
by \citet{schulz08}; the observed source luminosity, however, appears
to be significantly lower.
This low observed X-ray continuum luminosity
is inconsistent with the ionizing luminosity required by
the X-ray line analysis.
One of the difficulties is to maintain the observed
ionization parameters at such a low luminosity over such a large volume.
While to some extent one could offset a larger source distance $d$ with
a lower wind density $n$, this eventually breaks down because the
inferred photo-ionized luminosity appears close to the actual
source luminosity. Such a high ionization efficiency is nearly impossible
to achieve under virtually any circumstances. This points in the direction that
parts of the X-ray source are still obscured, something we encountered during
the analysis of observations I and II~\citep{schulz08}. To simply
assume that some of the flux of our observed continuum is partially blocked
and should be higher is possible but not a good solution,
as a 1.6 keV blackbody is inherently inefficient
at ionizing high-Z atoms such as Fe and Ca at any possible luminosity.
A more promising scenario relates back to our findings in \citet{schulz08}.
There we found that the observed photoionized lines are due to an
accretion disk corona (ADC), and in order to sustain such a corona the
source X-ray luminosity needed to be significantly higher than observed,
which led to the conclusion that parts of the central X-ray source
are obscured. This may still be the case. Figure~\ref{figure9} shows a
recalculation of the content of Figure 5 in \citet{schulz08}, including
lower luminosity predictions and the locations of our photo-ionized
data points in this picture. This indicates that the
source is still at a luminosity
close to 10$^{37}$ \hbox{erg s$^{-1}$ }, which allows it to sustain a static ADC.
This then leaves the possibility that we do observe some highly
absorbed and obscured parts of the ADC in the high Z lines and
the photo-ionized wind of the companion seen in the blue-shifted
lower-Z lines.
The ionization parameters and emission measures then leave us with
a wide range of viable plasma regimes. Two regimes of opposite character
are of interest in the case of Cir X-1. One refers to the ADC emissions, as described in
\citet{schulz08}, which are confined to the domain of the accretion disk.
In that case we can assume that {\sl d}, the distance
to the X-ray source, is of the same order of magnitude as {\sl r}, the
size scale of the photo-emitting region. For distances of up to 10$^{10}$ to
10$^{11}$ cm from the neutron star surface we obtain ADC plasma densities
between about 10$^{12}$ and 10$^{14}$ cm$^{-3}$ in a compact volume
within the accretion disk (see also \citet{jimenez02}).
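The quoted ADC density range follows from inverting Equation 1, $n = L_x/(\xi d^2)$. A minimal sketch, assuming an intrinsic luminosity near the static-ADC level of $10^{37}$ \hbox{erg s$^{-1}$ } (an assumption; the observed continuum is fainter) and the lower ionization parameter:

```python
# Invert the ionization parameter definition: n = L_x / (xi * d^2).
L_X = 1.0e37    # erg/s, assumed static-ADC luminosity (not the observed continuum)
XI = 340.0      # erg cm/s, lower ionization component

for d in (1e10, 1e11):          # distance from the neutron star in cm
    n = L_X / (XI * d**2)       # number density in cm^-3
    print(f"d = {d:.0e} cm  ->  n ~ {n:.1e} cm^-3")
# d = 1e10 cm gives n ~ 3e14 cm^-3 and d = 1e11 cm gives n ~ 3e12 cm^-3,
# bracketing the quoted 1e12 - 1e14 cm^-3 ADC range.
```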
The observed blueshifts point to another regime.
For very low densities, as they exist in weak stellar winds,
i.e. densities $\ll 10^{11}$ cm$^{-3}$, we also get viable solutions
for values of {\sl d} and {\sl r} that are beyond the dimensions
of the accretion disk. Weak, low-density winds are expected in mid-B type stars.
For such low plasma densities the dimensional parameters are
larger than $10^{12}$ cm, consistent with ionized line emission
in a weak stellar wind.
\vspace{0.3cm}
\includegraphics[angle=90,width=8.5cm]{figure8.eps}
\figcaption{Calculation of the emission measure as a function of the ionization
parameter for a static corona, as was done in \citet{schulz08}. The
color lines are for various luminosities, $log L_x$ = 38 (red), 37 (green),
36 (blue), and 35 (light blue) in units of \hbox{erg s$^{-1}$ }. The two black
circles are the results from the photo-ionization modeling.
\label{figure9}}
\vspace{0.3cm}
The \emph{XSTAR:photemis} function applies a model describing a purely photo-ionized
plasma. The fact that this model cannot properly account for the strength of the resonance
lines in some of the He-like triplets confirms our
finding from the G ratios that there is a significant contribution from
resonance scattering in addition to recombination. Future modeling of these effects
should reveal the presence of such a scattering medium.
\section{Discussion}
The X-ray source in the Cir X-1 binary has given us many different
looks in the past in terms of variability, flux, and spectral properties.
Since the first {\sl Chandra } HETG observation in the year 2000, when the
source flux still exceeded 1.5 Crab, the X-ray source has dimmed
by more than three orders of magnitude until it went below one mCrab in
2009 (see Figure~3 in \citet{heinz13}). Its spectral continuum shape similarly
has morphed from two partially covered blackbodies during its high state
\citep{brandt96, schulz02}, to one or two
partially covered powerlaws~\citep{schulz08, iaria08}, and now
to a single blackbody with just interstellar
absorption. Besides exhibiting various
levels of line emission and absorption, there was always the presence of
enormous levels of either partially covered cold or warm intrinsic
continuum absorption. High levels of source-intrinsic absorption are also present in
the observations we report in this paper,
except that in these new observations the absorbers have only
little effect on the continuum and mostly affect the photoionized regions.
This is an aspect we have not seen
before, and it appears to be fundamentally different from the previous
observations.
\subsection{The Origin of the X-ray Continuum in this Very Low State}
The final continuum spectrum in the photo-ionization fit turned out to
be a 1.6 keV blackbody. We had fixed the interstellar column to
1.8$\times10^{22}$ cm$^{-2}$ which was found by ~\citet{heinz13}
in the fit to the supernova remnant, consistent with the large amount of
visible extinction towards Cir X-1 as well as a well established distance
of 9.4 kpc~\citep{heinz15}.
Such a distance is also consistent with Cir X-1 radiating at Eddington peak fluxes when
it was brightest~\citep{jonker04} and when it exhibited P Cygni X-ray lines from a radiation
driven wind~\citep{brandt00}. The blackbody fit is rather
peculiar as it implies a very small emission radius. It is also notable
that there is no or only very little contribution of the accretion disk
to the observed X-ray spectrum. The only clear signature from the disk may come from
the Fe K fluorescence line observed at 1.93 \AA.
The big problem arises when we realize that our observed continuum
is quite insufficient to photo-ionize the plasma at the level observed.
This gives reason to assume that emissions from the accretion disk are
there but are highly absorbed and obscured. This introduces considerable
uncertainty into the observed luminosity and the nature of the X-ray spectral
continuum, as we have to consider that the bulk of the emission is partly blocked and obscured.
Accreting neutron stars with magnetic fields significantly lower than
$10^{10}$ G are generally seen in LMXBs, which are considered to be
older systems in which the original field had enough time to decay to such
low values. With only very few exceptions, in which we do not know the
field through cyclotron lines, accreting neutron stars with high-mass
companions all have high magnetic fields, as they are considered very young.
However, we point out that the question of the
companion's nature is fairly irrelevant here.
\citet{homan10} showed that transient sources
can morph through atoll and Z-stages at various flux phases making these
spectral variability imprints more related to effective Roche-lobe overflow
accretion rather than binary types. In the following we discuss how the
observed continuum relates to the cases of low and high magnetic fields for the
accreting neutron star.
\subsubsection{The Low Field Case}
One of the reasons why Cir X-1 used to be considered a LMXB was its spectral
variation pattern during its brightest flux phases showing the nature of a
Z-source. We may then compare it to a rare class of LMXB pulsars such as
the transient in Terzan 5 (IGR J17480-2446) or the persistent ultracompact
binary pulsar 4U1626-67. With an estimated magnetic field between 10$^9$ and 10$^{10}$ G,
the pulsar in Terzan 5 is much older and
its accretion stream is likely only weakly affected by the magnetic field.
As a consequence its spectral signatures are expected to be more comparable to neutron star
atmospheres or normal LMXB emissions depending on whether the transient is
in a subcritical or critical state (see \citet{degenaar13} and references
therein). 4U 1626-67, with a magnetic field of 4$\times10^{12}$ G~\citep{orlandini98},
is much younger, but since its luminosity is persistently close to critical,
its continuum emission relates more to what we observed in Cir X-1 during
its intermediate flux phases~\citep{schulz08}.
A significantly higher blackbody luminosity would also
increase the emission radius to sizes that are more reminiscent of
neutron stars with low magnetic fields in which the accretion flow is hardly
affected by the magnetic field and we can identify random hot patches or
boundary layers on the neutron star surface as emission regions.
Such blackbody continua are not particularly unusual in LMXBs; prime examples are
4U1626-67 \citep{schulz01}, 4U1822-37 \citep{ji11}, Her X-1 \citep{ji09} or
Aqu X-1 \citep{sakurai14}.
However, in all of these cases the blackbody component
exhibits temperatures of about 0.5 keV. Temperatures as high as observed
here are more prominent in the peaks of type I X-ray bursts, and in fact the
continuum temperature observed here is close to what \citet{linares10} report
for the peak temperatures observed in the 2010 X-ray bursts from Cir X-1.
That is very unusual for LMXBs. The fact that this continuum is heavily
absorbed and some of this absorption is unaccounted for changes little,
as it is the part $< 2$ \AA\ that defines the blackbody in the fit and
its temperature. We note that neither \citet{schulz08} nor
\citet{iaria08} observed a blackbody when the source was an order of
magnitude brighter but report a more suitable powerlaw for the observed
photoionizations as the continuum.
The shape of a 1.6 keV blackbody is not particularly suited for K-shell
ionizations of high-Z atoms as it drops dramatically above 7 keV and
lacks the harder photons.
Should the neutron star have a low field,
as is common in LMXBs, then the observed continuum properties are quite
unusual and hard to explain; it would also make Cir X-1 the youngest
neutron star with a low magnetic field known to date.
\subsubsection{The High Field Case}
However, it is very common that young neutron stars, accreting or
isolated, have high ($>> 10^{10}$ G) magnetic fields \citep{reig11, oezel16}.
In this case we consider the possibility that the observed blackbody
continuum does not contribute much to the photo-ionization process at all
and that the photo-ionizing continuum is entirely obscured.
In a high-field accretion scenario matter is predominantly funneled
by the field onto the poles of the neutron star, and the state of the
accretion column depends on the X-ray luminosity (see \citet{becker07} for
an in-depth review). At a critical luminosity of L$_{crit} = 1.5\times10^{37}$ \hbox{erg s$^{-1}$ }
the accretion flow becomes radiation pressure dominated and X-ray emissions
are defined by a complex mix of physical processes resulting in
something close to a power law with an exponential cut-off, similar to what was
observed when the source was brighter~\citep{schulz08}. However, in the
sub-critical case, where the luminosity is $<< 10^{37}$ \hbox{erg s$^{-1}$ } as observed
in the low state of Cir X-1, the emissions from the bare hot spot can
be explained by such a hard blackbody.
In such a sub-critical environment
we can expect that the X-rays are produced by impact onto or very close to the
neutron star surface.
If this is the accretion hot spot, its radius r$_0$ should fulfil the condition~\citep{lamb73}
\begin{equation}
r_0 < r_{ns} \lbrack{r_{ns}\over r_A}\rbrack^{1/2},
\end{equation}
\noindent
where r$_A$ is the Alfven radius given by~\citet{becker07} as
\begin{equation}
r_A = 2.6\times10^8 B^{4/7} r_{ns}^{10/7} M_{ns}^{1/7} L_x^{-2/7},
\end{equation}
\noindent
where B is the magnetic field in units of 10$^{12}$ G, r$_{ns}$ the
neutron star radius in units of 10 km, M$_{ns}$ the neutron star
mass in units of $M_{\odot}$ , and the X-ray luminosity L$_x$ in units of
10$^{37}$ \hbox{erg s$^{-1}$ }. Figure~\ref{figure8} shows the evaluation of
Equations 4 and 5 for two different neutron star radii, 10 km (black) representing the
lower end of most equations of state, and 15 km (red) representing the
higher end~\citep{oezel16}. Equation 5 has only a
weak dependence on neutron star mass and here we used a canonical value
of 1.4 $M_{\odot}$ . For L$_x$ we applied the measured X-ray luminosity
in Table~2. For a distance of 9.4 kpc~\citep{heinz15} we can determine
limits to the blackbody emission radius depending on the blackbody fits
provided in Table~2. This amounts to
r$_{bb}$ = r$_{min}$ = 0.512 km and r$_{bb}$ = r$_{max}$ = 0.597 km.
At a distance of 9.4 kpc the observed emission radius
in Table~4 is 0.589$\pm$0.151 km, consistent with these values.
A relativistic color correction would do little to the range of these
values. From that we find a lower limit to the magnetic field strength
of 4.0$\times10^{10}$ G for a 15 km radius and 2.8$\times10^{11}$ G
for a 10 km radius. Note that these are only lower limits and the
actual field can still be higher than these values.
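As a rough numerical sketch of this argument (not the actual fit pipeline), one can evaluate the Stefan-Boltzmann luminosity implied by the fitted blackbody and the Alfven radius of Equation 5 for canonical parameters; the radius and temperature below are the Table~4 values:

```python
import math

SIGMA_SB = 5.6704e-5    # Stefan-Boltzmann constant, erg cm^-2 s^-1 K^-4
KEV_TO_K = 1.1605e7     # Kelvin per keV

# (1) Luminosity implied by the fitted 1.6 keV blackbody with
#     r_bb = 0.589 km; this lands well below L_crit = 1.5e37 erg/s,
#     i.e. in the sub-critical accretion regime.
T = 1.6 * KEV_TO_K
r_bb = 0.589e5          # cm
L_bb = 4 * math.pi * r_bb**2 * SIGMA_SB * T**4
print(f"L_bb ~ {L_bb:.1e} erg/s")    # a few 1e35 erg/s

# (2) Alfven radius of Equation 5 (r_A in cm; B in units of 1e12 G,
#     r_ns in 10 km, M_ns in solar masses, L_x in 1e37 erg/s).
def r_alfven(B, r_ns=1.0, M_ns=1.4, L_x=1.0):
    return 2.6e8 * B**(4 / 7) * r_ns**(10 / 7) * M_ns**(1 / 7) * L_x**(-2 / 7)

print(f"r_A(B = 1e12 G) ~ {r_alfven(1.0):.1e} cm")
```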
\vspace{0.3cm}
\includegraphics[angle=0,width=8.5cm]{figure9.eps}
\figcaption{Evaluations of Equations 4 and 5 for a neutron star radius of
10 km (black) and 15 km (red). The limits r$_{min}$ and r$_{max}$ are derived from
the blackbody fit results in Table~2 for a distance of 9.4 kpc.
\label{figure8}}
\vspace{0.3cm}
A survey of about 20 confirmed cyclotron line energies in HMXBs~\citep{caballero12} shows
that all surface magnetic field strengths but one are above 10$^{12}$ G, the exception
being Swift J1626.6-5156~\citep{decesar09} with a field slightly below 10$^{12}$ G.
The result in Figure~\ref{figure8} of course depends on distance, while a cyclotron line
measurement does not~\citep{coburn06}.
However, we deem a distance of 9.4 kpc quite robust. In order to have both
the 10 km and 15 km solutions above 10$^{12}$ G, the distance has to be below 6 kpc,
which is implausible because of the amount of extinction along the line of sight, and because
it would take the X-ray luminosity in observations I and II~\citep{brandt00, schulz02}
too far away from Eddington conditions to produce such a powerful disk wind.
Higher distances do not change the result much unless they become unrealistically
high. In order for the field values to drop below 10$^{10}$ G, the blackbody radius
has to be higher than 1 km and the source would
reside outside the Milky Way. We therefore are confident that the surface magnetic field
in Cir X-1 is somewhere between the values shown in Fig.~\ref{figure8}.
The moderately high ($>> 10^{10}$ G) magnetic field
should still be consistent with a young system, even though it is generally
assumed that neutron stars are born with $> 10^{12}$ G fields (see \citet{kaspi10} and
references therein). The neutron star in Kes 79~\citep{halpern10}, with a
magnetic field of only $\sim 3.1\times10^{10}$ G, is a good example
of a younger neutron star with a moderate
magnetic field (see also \citet{shabaltas12}). Other examples are
PSR J1852+0040 and 1E1207.4-5209~\citep{halpern07, gotthelf07} as well
as PSR J0821-4300 in Puppis A \citep{gotthelf09}, even though these may not be
quite as young as Cir X-1. Thus young neutron stars
with lower magnetic fields of the order of $10^{11}$ G,
sometimes dubbed `anti-magnetars', are no longer
unusual.
The observation of type I X-ray thermonuclear
bursts~\citep{tennant86, linares10} also implies a field of $<< 10^{12}$ G~\citep{fujimoto81}.
However, this criterion is based more on the lack of observed type I bursts
in X-ray pulsars than on a solid theoretical basis, other than the suppression
of convective motions needed for runaway burning in magnetic fields of
about 10$^{12}$ G and higher~\citep{gough66, bildsten95}.
\citet{bildsten98} outlined the conditions for neutron star nuclear burning
for the case of high mass accretion rates (${\dot M}$ $> 10^{-10}$ $M_{\odot}$ yr$^{-1}$). At these
rates unstable nuclear burning is more easily realized. The luminosity of Cir X-1 during
its outburst in 2010 was still below the stable burning criterion, and here the
magnetic field may have been high enough to confine the accreted matter to the ignition
pressure, but still low enough to allow convective motions. We therefore argue that,
in the case that the neutron star magnetic field in Cir X-1 is only moderately high, the
two facts that the neutron star is very young and that we observe type I X-ray thermonuclear bursts
are consistent with what we know today.
\subsection{The Nature of the Blueshifts}
The most prominent difference in the observed line properties, besides the
fact that the He-like resonance lines appear enhanced, is the blueshift
of about 400 \hbox{km s$^{-1}$} seen in most of the observed lines. In \citet{schulz08}
the lines were at rest and identified as ADC emissions from the accretion disk.
In this low state, blueshifted line emissions would need a two orders of magnitude
higher ionizing luminosity to sustain such an ADC in the accretion disk.
The flux of the lines and the amount of blueshift in the lines appear small and
may not have significantly affected the line profiles in observations III and IV,
specifically because the shifts are of the order of a grating spectral resolution
element. However, we note that \citet{iaria08} claim a faint blue component
in their line profile analysis of observation III. This could mean that
this blue-shifted emission is always present but gets overpowered by ADC emissions
when the source gets brighter.
There is no viable explanation for blue-shifted lines within
the accretion disk, suggesting emission regions outside the disk. While
\citet{iaria08} proposed possible jet emissions, the determined ionization
parameters as well as emission measures and the amount of the blueshift
are also consistent with X-ray illumination of a massive companion wind.
A wind velocity of 400 \hbox{km s$^{-1}$} would point
in the direction of a B5Ia supergiant~\citep{prinja98} as the companion, confirming
the identification by ~\citet{jonker07}. There is not much room for later types, as in this
spectral type range terminal wind velocities decline rapidly with spectral type.
We point out that B5 stellar winds themselves cannot produce X-rays through
inner wind shocks as we observe in more massive stars. In this case the very low density
outer terminal-velocity zone of the wind gets illuminated by the X-ray source
in Cir X-1.
That we may observe the illumination of the stellar wind is also
supported by an apastron observation
taken during a more prominent outburst in 2010. Figure~\ref{figure7} shows a dominant
continuum but no photo-ionized lines, which should have been detectable: even with the
brighter continuum the three brightest lines in the periastron spectrum should still be
detectable above a 5$\sigma$ level. Yet the only line detected is the Fe K
fluorescence line, which we commonly associate with the accretion disk.
The absence of unshifted ADC lines, at least with regard to S, Si, and Mg,
from the accretion disk is peculiar, because if it is true that the X-ray
emissions have not changed in luminosity and are more obscured than
what we observed in observations III and IV, then these lines should still
be there unless they are now also suppressed by heavy absorption. In that
case the photoionized emission from the stellar wind is all that is left
to the observer, as well as some weak higher-Z lines from the ADC.
The amount of the shift would also rule out much later types, as terminal wind velocities decline
rapidly with type~\citep{lamers95, prinja98}. Observations V and VIIa-c were taken
between orbital phases 0.00 and 0.07, using the RXTE/ASM ephemeris (MJD = 50082.04) and
an orbital period of 16.54694 days from ~\citet{shirey98} (see also ~\citet{clarkson04}),
for which the neutron star always illuminates
the face of the star into a wind coming towards the observer, hence the blueshift.
However, we point out that 400 \hbox{km s$^{-1}$} is also near the known rotational velocities
of Be stars in X-ray binaries~\citep{reig11}. So if it is not the wind but
the stellar surface that gets ionized, then we should see blue- and redshifts depending
on whether the compact object moves towards the companion or away from it, and none
when the compact object is at zero phase.
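The orbital phases quoted above follow from the ephemeris in the standard way; a minimal sketch (the observation date used in the example is hypothetical, for illustration only):

```python
# RXTE/ASM ephemeris of Shirey et al. 1998: zero point and orbital period.
T0 = 50082.04        # MJD of phase zero
P = 16.54694         # orbital period in days

def orbital_phase(mjd):
    """Orbital phase in [0, 1) relative to the ephemeris zero point."""
    return ((mjd - T0) / P) % 1.0

# Hypothetical observation date, for illustration only:
print(orbital_phase(56000.0))
```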
With all the distractions from disk emissions in the previous observations gone, we believe
this is the first time we might see the companion wind itself.
For this one can estimate the contribution of a spherical B-star wind with a constant
velocity and a specific mass-loss rate to the emission measure by integrating over the
available volume from a B-star radius of 10$^{12}$ cm to infinity. With a spherical mass-loss
rate of 10$^{-8}$ $M_{\odot}$ yr$^{-1}$ and a terminal wind velocity of 500 \hbox{km s$^{-1}$} this results
in emission measures of the order of 10$^{55}$ cm$^{-3}$, which is only
a fraction of what we observe. Increased emission measures require increased
mass-loss rates as well as non-spherical geometries. This indicates that if this is
indeed the companion wind, it has to be enhanced. One indicator that this is the case could be the
existence of significant amounts of resonance scattering in the He-like lines.
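The estimate above can be sketched as follows, assuming a smooth, constant-velocity, fully ionized hydrogen wind, for which $n(r) = \dot{M}/(4\pi r^2 \mu m_{\rm H} v)$ and the emission measure integral has a closed form:

```python
import math

M_SUN = 1.989e33     # g
YEAR = 3.156e7       # s
MU_MH = 1.67e-24     # mean mass per particle, g (pure-hydrogen assumption)

# EM = integral of n^2 dV from R_* to infinity with
# n(r) = Mdot / (4 pi r^2 mu m_H v), which gives
# EM = Mdot^2 / (4 pi (mu m_H)^2 v^2 R_*).
def wind_em(mdot_msun_yr, v_kms, r_star_cm):
    mdot = mdot_msun_yr * M_SUN / YEAR    # mass-loss rate in g/s
    v = v_kms * 1e5                       # wind velocity in cm/s
    return mdot**2 / (4 * math.pi * MU_MH**2 * v**2 * r_star_cm)

em = wind_em(1e-8, 500.0, 1e12)
print(f"EM ~ {em:.1e} cm^-3")     # a few 1e54, i.e. of order 1e55 cm^-3
```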
\subsection{On the Possible High Mass Nature of the X-ray Binary}
The photoionized plasma
requires a substantial absorber in the line of sight in addition to
absorption from the ISM. If the wind is indeed the emission line region,
this absorber has to be able to cover the bulk of the
stellar wind. Normal B5 supergiants are not known to
carry wind-produced stellar disks. The only plausible protagonists we know of today are
magnetic stars and Be stars, the latter mostly of much earlier B types.
In both cases an equatorial, wind-fed decretion
disk can be produced under various critical circumstances.
In the case of Be stars it may be a stellar rotation rate of
more than 75$\%$ of the critical rate (see \citet{rivinius13} for a full review),
while in the case of magnetic stars it is a strong magnetic field configuration confining
the wind (see \citet{gagne05}). Such disks can provide not only the additional line
of sight absorber but also account for the resonance scattering we observe.
However, while this is still speculation, we can in the following consider
what is observationally known for the case of Be stars.
Be-star X-ray binaries (BeXBs) are a sizeable sub-group in the category of high-mass X-ray
binaries (HMXBs). Perhaps the most famous in the class of BeXBs is
GX 301-2. This system consists of an accreting magnetized neutron star in an
eccentric orbit (e$\sim$0.46; \citealt{sato86, koh97}). This is a very similar
system, except that its orbit is three times as long and periastron passages
are not as violent as in Cir X-1. Its optical counterpart is a B1.5Ia supergiant
with a lower mass limit of 39 $M_{\odot}$ ,
which is much larger than what is considered for Cir X-1, and here the Roche-lobe overflow connection
never breaks~\citep{sato86, watanabe03, kaper06}. However, there are many similarities,
such as an inclination larger than 44$^{o}$, which does not produce an eclipse but
just barely misses the face of the star, producing dips in the light curve from the
dense stellar wind. \citet{jonker07} also concluded that, even though the inclination is high
\citep{brandt96}, in the case of Cir X-1 the neutron star never crosses
the star (see also \citet{iaria08}). The X-ray spectrum in observation III~\citep{schulz08}
is very similar to the one observed in GX 301-2, where the region above 5~\AA\ is dominated
by line emissions from the ionized wind and the region below by accretion disk
activity~\citep{sato86, koh97, watanabe03, fuerst11}.
In Be stars the regions above and below the disk are more or less equivalent to those
surrounding normal B-stars~\citep{rivinius13}. Fast rotation may even enhance the wind
towards the polar regions~\citep{puls08}. The fact that the companion in Cir X-1 is
a supergiant does not make much of a difference. Most massive stars begin the main
sequence as fast rotators and usually stay that way~\citep{langer97}; during the
supergiant phase the star may see a reduction in the rotation rate due to the increasing
stellar radius~\citep{puls08}. Generally there is no reason why a fast rotator would
not retain its disk in the supergiant phase, and the case could be made that
Cir X-1 might as well be the youngest known BeXB.
All BeXBs classified so far have orbital periods of 20 days and higher, up to 300 days
(see ~\citet{reig11} for a review), and Cir X-1 with 16.5 days would mark the
shortest period, which seems adequate since it would be the youngest in the sample.
However, the list of known OBe stars mostly shows dwarfs (class V), subgiants (class IV),
or giants (class III), and Cir X-1 would be the only supergiant case (classes I and II)
besides GX 301-2.
Supergiant X-ray binaries (SGXBs) tend to have shorter periods, and here Cir X-1
would have the longest of this class. It is also the case that almost all BeXBs show
X-ray pulsations ranging from a few seconds to a few hundred seconds. To date,
no pulsations have been reported for Cir X-1, and we also did not detect any in the range
where we are sensitive, which is above a few seconds. For a neutron star
that young we should also expect much shorter periods, likely down to a few milliseconds.
The lower magnetic field of 10$^{11}$ G should not have braked its spin as fast as in the
other cases within 4000 yr. The mere fact that we do not observe pulsations is
of some concern but not completely unusual. To date no pulsations have been
found in the SGXB 4U1700-37~\citep{seifina16} and in a few BeXBs~\citep{reig11}.
New model calculations also show that at moderate magnetic field strengths of the
order of 10$^{11}$ G and a high inclination of the system~\citep{brandt95},
pulsations are nearly impossible
to detect if the angle between the neutron star rotation and magnetic field axes
is small (Falkner et al. 2019, submitted
\footnote{https://www.sternwarte.uni-erlangen.de/docs/theses/2018-07$\_$Falkner.pdf}).
We should also mention that \citet{cumming08} proposed for the case of HETEJ1900.1-2455
and possibly other accreting millisecond pulsars
that B-fields can be partially buried suppressing the observation of pulsations.
The fact that we know that the neutron star in Cir X-1 is very young, combined with
the identification of the companion as a B5Ia supergiant, can also put some constraints
on the evolutionary state of the entire binary.
A formation scenario involving an AIC~\citep{bhattacharya91} now seems
highly unlikely.
It also rules out that the
progenitor star was a massive O-star as these only live less than 20 Myr whereas
B3 stars and later live more than 30 Myr~\citep{behrend01}. Since the companion
in Cir X-1 is in its supergiant phase, which the star reaches during its very late
main-sequence state, it must be much older than 20 Myr. More plausible is that
the progenitor was quite similar to the companion, maybe one or two types earlier.
This would likely put the progenitor into the lowest mass range (8--10 $M_{\odot}$) that
can produce a neutron star. Today we know little about neutron star progenitor
masses and if this conclusion is true, it is quite extraordinary.
Last but not least the classification of Cir X-1 as a BeXB might have the
potential to at least partially explain the $\sim$30 yr transient flux
behavior as shown by \citet{parkinson03} by invoking a precession period for the
Be-star-disk system. Such precession scenarios have already been suggested by
\citet{brandt95} in terms of accretion disk precession and by \citet{heinz13}
in terms of spin-orbit coupling effects between the neutron star spin and
the binary orbit. Here we suggest a precession of a companion star disk.
Super-orbital periods in accretion disks are not unusual
in X-ray binaries as the examples of Her X-1, LMC X-4 and SMC X-1 show.
Precessing Be-star disks are much rarer, but not unheard of.
\citet{lau16} recently reported on an apparent precessing helical
outflow from the massive star WR102c. The interpretation is that the outflow
emerged from an earlier, rapidly rotating evolutionary phase of the star, with
the precession attributed to an unseen compact companion. In a sense this situation
is not dissimilar to what we envision here. In the WR102c system the period
of the unseen companion was constrained to between 800 and 1400 days, which is
much larger than the period in Cir X-1, but the precession period of
1.4$\times10^4$ yr is very close to the long term variation cycle in Cir X-1.
This sets a precedent: it is no longer a question whether it can happen,
but rather a matter of the details of how it happens.
\acknowledgments
\bibliographystyle{jwapjbib}
\section*{Acknowledgements}
\medskip
This work is supported by the Spanish grants PID2020-113775GB-I00
(AEI/10.13039/ 501100011033) and PROMETEO/2018/165 (Generalitat
Valenciana). R.B. acknowledges financial support from the Generalitat
Valenciana (grant ACIF/2021/052) and CSIC (JAEICU-20-IFIC-2).
G.C. acknowledges support from ANID FONDECYT-Chile grant No. 3190051.
G.C. and J.C.H. also acknowledge support from grant ANID
FONDECYT-Chile No. 1201673 and ANID – Millennium Science Initiative
Program ICN2019-044. The work of A.T. is supported by the “Generalitat
Valenciana” under grant PROMETEO/2019/087,
as well as by the FEDER/MCIyU-AEI grant FPA2017-84543-P
and the AEI-MICINN grant PID2020-113334GB-I00 (AEI/10.13039/501100011033).
Z.S.W. is supported by the Ministry of Science
and Technology (MoST) of Taiwan with grant numbers
MoST-109-2811-M-007-509 and MoST-110-2811-M-007-542-MY3.
\bibliographystyle{JHEP}
\section{Summary}\label{sect:sum}
The Standard Model (SM) effective field theory (EFT) extended with
sterile neutrinos, also known as the $N_R$SMEFT, provides a framework
to systematically study sterile neutrinos associated with a high
new-physics (NP) scale in ultra-violet complete models beyond the SM.
In the $N_R$SMEFT, high-scale NP effects are encoded in the so-called
Wilson coefficients of non-renormalizable operators at different mass
dimensions. Higher-dimensional operators involving $N_R$ can have
either one, two, or four sterile neutrinos, and may conserve or
violate lepton number, or else both lepton and baryon numbers.
In this work, we have focused on lepton-number-conserving four-fermion
single-$N_R$ operators associated with a charged lepton and two
quarks, which can induce both production and decay of the heavy
neutral leptons (HNLs) simultaneously. For HNLs of
$\mathcal{O}(10)$~GeV mass, such operators with a NP scale above $\sim
1$ TeV can easily make the HNLs become long-lived, leading to
displaced vertices at the LHC. We have therefore proposed a
displaced-vertex search strategy based on a prompt-lepton trigger and
selection of high-quality displaced tracks. By performing Monte-Carlo
simulations with \texttt{MadGraph5} and \texttt{Pythia8}, we have
estimated the sensitivity reaches for ATLAS in the high-luminosity LHC
era with 3 ab$^{-1}$ integrated luminosity, to four single-$N_R$ EFT
operators: $\mathcal{O}_{duNe}$, $\mathcal{O}_{LNQd}$,
$\mathcal{O}_{LdQN}$, and $\mathcal{O}_{QuNL}$.
Multiple combinations of quark and lepton flavors can be studied.
Here, we have considered mainly two combinations: $(u,d)$ and $(c,s)$.
Operators with first (second)-generation quarks only are projected to have
the best (worst) sensitivities, reflecting the corresponding quark content
of the proton. For both quark combinations, we also
studied all possible lepton generations, i.e.\ electron, muon, and tau.
In addition, for simplicity, we did not take into account the effect
of the active-sterile neutrino mixing, which is supposed to be
negligible if the type-I seesaw relation is assumed. For the $(u,d)$
and $(c,s)$ combinations, we find in general for the considered
single-$N_R$ operators, ATLAS can probe $\Lambda$ up to 20 TeV
and above for $m_N\gtrsim 20$ GeV, if we switch on one
operator at a time.
In addition to the EFT scenarios, we also revisited the minimal
scenario of the HNL mixing with the SM neutrinos. In this scenario,
the type-I seesaw relation is not assumed and we have two independent
parameters: mass of the HNL and its mixing parameter with one type of
the active neutrinos: a simple $3+1$ scenario. These results are an
update of those given in Ref.~\cite{Cottin:2018nms}. Besides some
minor changes, the most important difference is that we have now taken
into account both charged and neutral currents in our computation,
leading to more realistic projection results, especially for the case
of mixing with the $\tau$ neutrino.
In summary, we conclude that a displaced-vertex search at ATLAS for
HNLs can probe new physics scales up to about $20$~TeV and, in some
cases above, for HNL mass between about 5 GeV and 50 GeV, depending on
the quark and lepton flavors associated with the single-$N_R$ operator
under consideration.
\section{Numerical results\label{sect:res}}
Based on the computational procedure described in the previous
section, we have estimated the experimental sensitivities (95 $\%$
confidence level (C.L.) exclusion limits under the assumption of zero
background) of searches for long-lived HNLs at the ATLAS detector for
two different theoretical scenarios. The first is the minimal scenario in
which only right-handed neutrinos, $N_R$, are added to the particle
content of the SM and renormalizable interactions are assumed. In
this case, the HNLs interact with the SM particles only through the
mixing with the active neutrinos, $V_{lN}$, with $ l = e, \mu, \tau $.
In the second theoretical scenario we consider $N_R$SMEFT containing
non-renormalizable interactions of $N_R$ with the SM. In this case,
both production and decay of the HNLs can be mediated by the
single-$N_R$ effective operators under consideration.
In all of the plots below, we assume a $3+1$ scenario
where the HNL mixes dominantly with only one active neutrino flavor at a time. We assume only one HNL is kinematically relevant.
\subsection{Minimal scenario}
In the minimal scenario, the relevant parameters are the mass of the
HNL, $m_N$, and the mixing of the HNL with the active neutrinos
$V_{lN}$, which we have treated as independent parameters. The HNLs
are produced from the decays of on-shell $W$-bosons into an HNL in
association with a charged lepton, $pp \rightarrow W \rightarrow l N$, via the
HNL mixing with the active neutrinos. The
decay of the HNLs occurs also via the mixing with the active
neutrinos, through both charged and neutral SM currents,
$ N \rightarrow l (\nu) jj $. For the minimal scenario we use the
\texttt{FeynRules} implementation for HNLs of
Ref.~\cite{Degrande:2016aje}.
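To see why these HNLs are long-lived at the quoted mixings, one can estimate the proper decay length from the standard mixing-suppressed weak-decay scaling, $\Gamma \sim \mathcal{N}\, G_F^2\, m_N^5\, |V_{lN}|^2 / (96\pi^3)$. The sketch below is only a rough cross-check: the effective channel count $\mathcal{N}\sim 10$ is an order-of-magnitude assumption, not the exact sum over open channels used in the simulation.

```python
import math

G_F = 1.1664e-5      # Fermi constant [GeV^-2]
HBARC = 1.9733e-16   # hbar*c [GeV*m]

def ctau_hnl(m_N, V2, n_eff=10.0):
    """Proper decay length [m] of an HNL with mass m_N [GeV] and mixing |V|^2 = V2.
    Uses the muon-decay-like scaling Gamma ~ n_eff * G_F^2 m^5 |V|^2 / (96 pi^3);
    n_eff ~ 10 open channels is a rough assumption, not a precise count."""
    gamma = n_eff * G_F**2 * m_N**5 * V2 / (96.0 * math.pi**3)
    return HBARC / gamma

# For m_N = 30 GeV and |V|^2 = 1e-9, near the projected sensitivity floor:
print(f"c*tau ~ {ctau_hnl(30.0, 1e-9) * 100:.1f} cm")
```

A few centimeters of proper decay length, boosted in the lab frame, falls squarely inside the inner-tracker acceptance used in this analysis.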
Figure~\ref{fig:minimalsensitivity} shows the region, in the plane
$|V_{lN}|^2$ vs.~$m_N$, where a displaced-vertex search at the ATLAS
detector for the center-of-mass energy 14 TeV, and with the selection
criteria discussed in Sec.~\ref{sect:sim}, may have sensitivity to the
minimal scenario. As can be
seen in this figure, the sensitivities in $|V_{eN}|^2$ and
$|V_{\mu N}|^2$ are rather similar and can reach values down to
$|V_{lN}|^2 \sim 10^{-9}$ for $m_N \sim 30$ GeV, with $3$ ab$^{-1}$ of
integrated luminosity. On the other hand, in the case of mixing with
the tau neutrinos, ATLAS can reach values of the mixing parameter down
to $|V_{\tau N}|^2 \sim 5 \times 10^{-9}$ for $ m_N \sim 20$ GeV with
$3$ ab$^{-1}$. Figure~\ref{fig:minimalsensitivity} compares our
limits with the current experimental bounds for this model,
represented by the dark gray area at the top of each plot. These
constraints were obtained at the following experiments:
ATLAS~\cite{ATLAS:2019kpx}, CMS~\cite{CMS:2018iaf},
DELPHI~\cite{Abreu:1996pa}, and LHCb~\cite{LHCb:2016inz,Antusch:2017hhu}.
As we can see, our forecasted limits can reach values of the mixing
$|V_{lN}|^2$ several orders of magnitude smaller than current
experimental bounds.
As mentioned above, the same search strategy for long-lived HNLs was
previously proposed by some of us in Ref.~\cite{Cottin:2018nms}. One
of the differences between our current and previous calculations is
the center-of-mass energy at the LHC, which now is taken as 14~TeV
(previously in Ref.~\cite{Cottin:2018nms} we used 13~TeV). Perhaps
more important is the fact that in the present paper, our numerical
calculations used more statistics, which allowed us to obtain much
smoother contours for our limits, which led to a slight increase in
the ranges shown. Moreover, in the case of mixing with taus,
our current limits are more sensitive than the previous ones
calculated in Ref.~\cite{Cottin:2018nms}. The reason for this
difference is that in Ref.~\cite{Cottin:2018nms}, we only considered
neutral currents in the decay of HNLs that coupled to taus (i.e. we
ignored a tau lepton coming from the displaced vertex), whereas now,
we have included both charged and neutral currents in our
calculations, making our limits more realistic for the case of the
mixing with the tau neutrinos and comparable with the sensitivity
reach projected with other proposed strategies (see for instance
Ref.~\cite{Drewes:2019fou}).
\begin{figure}[t]
\centering
\includegraphics[width=0.49\textwidth]{figs/sensitivity/minimal_e}
\includegraphics[width=0.49\textwidth]{figs/sensitivity/minimal_mu}\\
\includegraphics[width=0.49\textwidth]{figs/sensitivity/minimal_tau}
\caption{Minimal scenario sensitivity reach on $|V_{lN}|^2$ as
a function of $m_N$, for $l= e, \mu, \tau$. The dark region
corresponds to current experimental limits obtained at
several experiments: ATLAS~\cite{ATLAS:2019kpx},
CMS~\cite{CMS:2018iaf}, DELPHI~\cite{Abreu:1996pa}, and
LHCb~\cite{LHCb:2016inz,Antusch:2017hhu}.
} \label{fig:minimalsensitivity}
\end{figure}
\subsection{Four-fermion single-$N_R$ operators}
In the second theoretical scenario, we consider the four-fermion
single-$N_R$ operators in the $N_R$SMEFT. We estimate the
experimental sensitivity of our displaced search to a long-lived HNL
at the ATLAS detector. Here, we take the coefficients of the
operators $c_{{\cal O}}/\Lambda^2$ and the mass of the HNL, $m_N$, as
independent parameters. In this scenario, both the production and the
decay of the HNL can be dominated by the same operator $ {\cal O}$,
unlike the case of effective operators with two
HNLs~\cite{Cottin:2021lzz}, where the pair-$N_R$ operators dominantly
induce the HNL production, but the decay of the HNL still proceeds
only via mixing with the active neutrinos. For the EFT scenario, in
our analysis we have assumed that the contributions to the production
and decay of the HNL from its mixing with active neutrinos $ V_{lN} $
are sub-dominant and negligible compared to the effective operators'
contributions. For mixing angles smaller than $|V_{lN}|^2 \lesssim
10^{-9}$, this assumption is always fulfilled.
The production of the HNLs considered in our analysis, $ p p
\rightarrow l N$, is always accompanied by a prompt charged
lepton -- an electron, muon, or tau -- depending on the flavor
structure of the effective operator considered. The presence of this
charged lepton is important in our analysis as it is used to trigger
the signal, as discussed in Sec.~\ref{sect:sim}. The decay of the
HNLs will occur via the same operator leading to two jets and one
neutral or charged lepton, $ N \rightarrow l (\nu) j j $. The
production cross sections of the HNLs will depend on the type of
quarks that the respective operator includes. In our analysis, we
have only considered effective operators with quarks of the first two
generations.
\begin{figure}[t]
\centering
\includegraphics[width=0.49\textwidth]{figs/sensitivity/ud_e}
\includegraphics[width=0.49\textwidth]{figs/sensitivity/ud_mu}\\
\includegraphics[width=0.49\textwidth]{figs/sensitivity/ud_tau}
\caption{Exclusion limits on the new physics scale $\Lambda$
as a function of $m_N$ in the EFT scenario with operators
including the first-generation quarks only, for an
integrated luminosity of 3 ab$^{-1}$. The two plots at the
top consider operators with charged leptons of the first and
second generation: electrons (left) and muons (right). The
plot at the bottom considers operators with tau leptons
only.} \label{fig:eftsensitivity}
\end{figure}
In Fig.~\ref{fig:eftsensitivity}, we show the experimental sensitivity
of the ATLAS detector to a long-lived HNL in the $\Lambda$ vs.~$m_N$
plane. In our analysis, we have considered the contributions of one
operator at a time, setting the value of the corresponding operator
coefficient $c_{{\cal O}} = 1$, and the rest of the operator
coefficients to zero. In Fig.~\ref{fig:eftsensitivity}, we have
considered only operators with quarks of the first generation. Note
that the numbers in the superscript of \textit{e.g.}
$c_{duNe}^{1112}$ refer to the first-generation quarks ($d$ and
$u$), the lightest $N_R$ and the second-generation charged lepton
(the muon). As can be seen in this figure, for an integrated
luminosity of 3~ab$^{-1}$, ATLAS can reach values of the new physics
scale up to (and above) $\Lambda \sim 20$ TeV for masses
$m_N \gtrsim 50$ GeV in the case of operators with an electron or muon. In
the case of operators with a tau lepton, ATLAS can reach $\Lambda$
$\gtrsim 10$ TeV for $m_N$ of a few tens of GeV. It is worth mentioning
that our limits start at $ m_N \gtrsim 5$ GeV. The reason is the
kinematic cut at $ m_\text{DV}\geq 5$ GeV imposed in the selection
criteria. This cut is necessary to remove the SM background coming
from $B$-mesons, as discussed in Sec.~\ref{sect:sim}.
We also note that the projected exclusion limits are rather similar
for the four types of single-$N_R$ operators, in particular for
$\mathcal{O}_{LNQd}$ and $\mathcal{O}_{QuNL}$.
Figure~\ref{fig:eftsensitivity2} contains our limits in the plane
$\Lambda$ vs.~$m_N$ for the effective operators with quarks of the
second generation only. As expected, the sensitivity regions for
operators with quarks of the second generation only are smaller than
those corresponding to operators with first-generation quarks
(Fig.~\ref{fig:eftsensitivity}). This is due to the predominant
content of quarks $u$ and $d$ in the proton versus the quarks $c$ and
$s$. We find that limits shown in Fig.~\ref{fig:eftsensitivity2} can
reach $ \Lambda \sim 13$ TeV for $m_N \sim 23 $ GeV in the cases of
electrons and muons, and up to $\Lambda \sim 9 $ TeV for
$ m_N \sim 18$ GeV in the case of taus. All numbers assume an
integrated luminosity of 3 ab$^{-1}$. Other possible combinations of
quark flavors for the $N_R$SMEFT include $(u,s)$ and $(c,d)$. The
sensitivity reaches for these operators lie between the two cases
shown in Figs.~\ref{fig:eftsensitivity} and
\ref{fig:eftsensitivity2} (for $(u,d)$ and $(c,s)$). We therefore do
not show results for these cases explicitly. Operators with third-generation
quarks have not been considered in this work, since they would require special
treatment (i.e.\ flavor tagging).
\begin{figure}[t]
\centering
\includegraphics[width=0.49\textwidth]{figs/sensitivity/cs_e}
\includegraphics[width=0.49\textwidth]{figs/sensitivity/cs_mu}\\
\includegraphics[width=0.49\textwidth]{figs/sensitivity/cs_tau}
\caption{The same as Fig.~\ref{fig:eftsensitivity}, but for
operators with second-generation quarks
only.} \label{fig:eftsensitivity2}
\end{figure}
We also note that Figs.~\ref{fig:eftsensitivity} and \ref{fig:eftsensitivity2} have been calculated for Dirac HNLs. As
mentioned above, production cross sections for single-$N_R$ operators
are the same for Dirac and Majorana HNLs, while the half-lives for
Majorana HNLs are smaller by a factor of two. The sensitivity regions
for Majorana HNLs therefore differ slightly from the regions shown in
the figures. We do not repeat the plots for the Majorana case and
instead opt for a short explanation of the differences. First, the
maximal value of the HNL mass, to which this kind of search is
sensitive is determined by the smallest decay length that is
accessible in the experiment. Since the decay width scales as
$m_N^5$, for a Majorana HNL the largest HNL mass accessible is a
factor $(1/2)^{1/5} \simeq 0.87$ smaller than in the Dirac case.
Second, the maximal value of $\Lambda$ reached in our sensitivity
curves is essentially determined by the total cross section (times
luminosity). Since cross sections are the same for Dirac and Majorana
HNLs, this maximal value of $\Lambda$ does not change for Majorana
HNLs. Finally, in the regime where the decay lengths are large (i.e.\
for large values of $\Lambda$ at values of $m_{N}$ smaller than the
one where the maximal value of $\Lambda$ is reached), the event number
depends linearly on the decay width (i.e.\ inversely on the half-life), while both the cross section and the decay width scale as $\Lambda^{-4}$. For Majorana HNLs, in
this part of the parameter space, slightly larger values of $\Lambda$
are accessible than for the Dirac case, i.e. an increase by
roughly a factor $2^{1/8} \simeq 1.09$.
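The numerical factors quoted in this paragraph follow from simple power-law scaling. A minimal check, assuming (as stated above) $\Gamma \propto m_N^5/\Lambda^4$ and, in the long-lifetime limit, an event number proportional to cross section times width, i.e.\ $\propto \Lambda^{-8}$:

```python
# Scaling factors for Majorana vs. Dirac HNLs, assuming Gamma ~ m_N^5 / Lambda^4.

# Doubling the width shifts the largest accessible HNL mass by (1/2)^(1/5):
mass_factor = 0.5 ** (1.0 / 5.0)

# In the long-lifetime regime the event number scales as
# sigma * Gamma ~ Lambda^-8, so doubling Gamma gains a factor 2^(1/8) in Lambda:
lambda_factor = 2.0 ** (1.0 / 8.0)

print(f"mass factor   ~ {mass_factor:.2f}")    # ~0.87
print(f"Lambda factor ~ {lambda_factor:.2f}")  # ~1.09
```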
We will close this discussion with one additional comment. Our
simulated analysis focused on the ATLAS detector and its
reconstruction capabilities to displaced vertices inside the inner
tracker, starting from 4 mm in multi-track
searches~\cite{Aad:2015rba,Aaboud:2017iio}. Relaxing this requirement
to decay distances below 4 mm (both in $d_{0}$, $r_{\text{DV}}$ and $z_{\text{DV}}$)
will allow to extend the reach in parameter space towards larger HNL masses.
Of course, with the loosening of these cuts we may depart from the
zero background case assumption, and a detailed study on the
multi-track search backgrounds would be needed, which goes beyond the
scope of the present work. Nevertheless, past displaced lepton
searches -- whose tracks are fitted to a common vertex -- at
CMS~\cite{CMS:2014hka} could probe transverse decay lengths starting
from $\approx 200$~$\mu$m\footnote{The explicit analysis requirement
in~Ref.~\cite{CMS:2014hka} demands tracks to have a transverse impact
parameter significance with respect to the primary vertex of
$|d_{0}|/\sigma_{d}>12$, where $\sigma_{d}$ is the uncertainty on
$|d_{0}|$. }. In addition, a recent 13 TeV CMS
search~\cite{CMS:2021kdm} demonstrates that lepton tracks with
$|d_{0}|>0.1$ mm are displaced enough to be considered for analysis.
This provides feasibility to experimentally go below the 4 mm
threshold.
We stress that an improvement of the displaced-vertex search towards
smaller decay lengths by such a large factor (up to $40$ for $0.1$ mm)
would allow testing HNL masses larger by a factor of 2 w.r.t.\ the values
in our figures, i.e.\ extending the searches from $m_{N} \simeq 50$ GeV
to roughly $100$ GeV.
We hope that this large potential gain motivates the experimental
collaborations to study the lowering of the transverse cuts in displaced vertex searches to the sub-millimeter range.
\section{Simulation details\label{sect:sim}}
Our signal topology contains a prompt lepton and a displaced vertex
(DV) stemming from the $N_{R}$ decay to leptons and quarks. Our setting
for reconstructing such a signature is the ATLAS detector, specifically
its inner tracker, as it has the capability to reconstruct vertices
displaced from the interaction point (IP) by a few millimeters to tens of
centimeters. Our analysis strategy builds on an earlier
work~\cite{Cottin:2018nms} and is inspired by ATLAS multi-track displaced
searches~\cite{Aad:2015rba,Aaboud:2017iio}.
We consider the collision process $pp\to N l$ with $l=e,\mu,\tau$, at
$\sqrt{s}=14$ TeV at the high-luminosity LHC with an integrated
luminosity of 3 ab$^{-1}$. We generate LHE events with displaced
information at the parton level with \texttt{MadGraph5}, which are
read by \texttt{Pythia8}~\cite{Sjostrand:2014zea} for showering and
hadronization. Our detector simulation is based on a custom made code
within \texttt{Pythia8}, where we first reconstruct isolated prompt
electrons, muons, and taus (with help from \texttt{FastJet}~\cite{Cacciari:2011ma}), taking into account detector acceptance,
resolution, and smearing on their transverse momenta (for details, see
Ref.~\cite{Cottin:2018nms}).
After selecting events with a prompt lepton, the displaced vertex
reconstruction starts by selecting tracks\footnote{A track in our
simulation is a final state charged particle. These come from the
decays of $N_{R}$ and can correspond to an electron, a muon, or a
charged particle coming from the hadronization of quarks or from tau
decays.} with $p_{T}>1$ GeV and a large impact parameter, $d_{0}$,
defined as $d_{0}=r_\text{trk} \times \Delta\phi$. Here $\Delta\phi$
corresponds to the azimuthal angle between the track and the direction
of the long-lived $N_{R}$, and $r_\text{trk}$ corresponds to the
transverse distance of the track from the origin. We require
$|d_{0}|>2$ mm.
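The track-level impact-parameter proxy defined above can be sketched in a few lines. The numbers below are illustrative toy inputs, not simulation output:

```python
import math

D0_CUT = 2e-3  # require |d0| > 2 mm (in meters)

def d0(r_trk, phi_trk, phi_N):
    """Approximate transverse impact parameter d0 = r_trk * delta_phi,
    with delta_phi the azimuthal angle between the track and the flight
    direction of the long-lived N_R (the definition used in the text)."""
    dphi = abs(phi_trk - phi_N)
    dphi = min(dphi, 2.0 * math.pi - dphi)  # wrap into [0, pi]
    return r_trk * dphi

# A track starting 20 mm from the beamline, 15 degrees off the N_R direction:
val = d0(0.020, math.radians(15.0), 0.0)
print(f"d0 = {val * 1000:.1f} mm, passes cut: {val > D0_CUT}")
```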
As we have access in simulation to truth-level Monte Carlo
information, we also identify the truth $N_{R}$ decay positions in the
transverse and longitudinal planes, namely, $r_{\text{DV}}$ and
$z_{\text{DV}}$, respectively. An additional step (with respect to
Ref.~\cite{Cottin:2018nms}) of the vertex reconstruction implemented
in this work is the requirement that $r_\text{trk} - r_\text{DV} < 4$
mm. It is not always the case that the ``starting'' point of the
displaced track matches the displaced vertex position. This is more
evident in the case where we have a tau produced from the $N_{R}$
displaced decay, as taus also have an additional
displacement.\footnote{The proper decay distance of tau leptons is
$c\tau=87.1$ $\mu$m. This will lead, for example, to decay distances
of $\gamma c\tau\sim 5$ mm at 100 GeV.} With this requirement, we
emulate what an experimental displaced-vertex reconstruction would do
when fitting nearby displaced tracks to a common
origin~\cite{Aad:2015rba}. This will lead to an additional reduction
in efficiency when reconstructing displaced vertices containing taus.
Nevertheless, it is a more realistic (and optimistic) approach than
what was done in Ref.~\cite{Cottin:2018nms} to handle heavy neutrino
decays to taus (see Sec.~\ref{sect:res} below).
After selecting optimal displaced tracks, we demand displaced vertices
within the ATLAS inner tracker acceptance, namely, $4$ mm
$<r_{\text{DV}}<300$ mm and $|z_{\text{DV}}|<300$ mm. Further cuts
are applied on the number of high-quality tracks coming from the DV,
$N_\text{trk}$, and its invariant mass, $m_{\text{DV}}$, assuming all
tracks have the pion mass. More concretely, we require
$N_\text{trk}>3$ and $m_{\text{DV}} \geq 5$ GeV. As detailed in
Refs.~\cite{Aad:2015rba,Aaboud:2017iio,Cottin:2018kmq}, these last two
cuts ensure that we are in a region where signal is expected to be
found free of backgrounds including $B$-mesons. Further detector
response to DVs is quantified by applying the 13 TeV ATLAS
parameterized efficiencies~\cite{Aaboud:2017iio} as a function of DV
invariant mass and number of tracks, where we assume these will remain
the same at 14 TeV.
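The last two selection cuts can be emulated compactly: the DV invariant mass is built from the selected track momenta under the pion-mass hypothesis. A toy sketch (the track list is illustrative, not simulation output):

```python
import math

M_PION = 0.1396  # charged-pion mass [GeV]

def dv_mass(tracks):
    """Invariant mass [GeV] of a displaced vertex from its tracks'
    (px, py, pz) momenta, assigning every track the charged-pion mass."""
    E = sum(math.sqrt(px*px + py*py + pz*pz + M_PION**2) for px, py, pz in tracks)
    px, py, pz = (sum(t[i] for t in tracks) for i in range(3))
    return math.sqrt(max(E*E - px*px - py*py - pz*pz, 0.0))

def passes_dv_cuts(tracks, n_trk_min=4, m_dv_min=5.0):
    # N_trk > 3 and m_DV >= 5 GeV, as in the selection described above
    return len(tracks) >= n_trk_min and dv_mass(tracks) >= m_dv_min

tracks = [(4.0, 1.0, 0.5), (-3.0, 2.0, 0.0), (1.0, -4.0, 1.0), (0.5, 0.5, -2.0)]
print(passes_dv_cuts(tracks))
```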
\section{Effective theory with $N_R$}\label{sect:eft}
\subsection{Effective interactions}
In this section, we briefly introduce the $N_R$SMEFT, focusing on the
operators of interest for the current work. If HNLs with masses below
or around the electroweak scale exist in nature, the effects of new
multi-TeV physics at much smaller energies can be systematically
described in terms of an EFT built out of the SM fields and $N_R$. At
renormalizable level, in addition to the SM operators, there are a
Majorana mass term for $N_R$ and a $d=4$ operator describing the
fermion portal:
\begin{equation}
\mathcal{L}_\mathrm{ren} = \mathcal{L}_\mathrm{SM} + \overline{N_R} i \slashed{\partial} N_R
- \left[\frac{1}{2} \overline{N_R^c} M_N N_R + \overline{L} \tilde{H} Y_N N_R + \text{h.c.}\right],
\end{equation}
where $L$ stands for the SM lepton doublets, $H$ is the Higgs doublet
($\tilde{H} = \epsilon H^\ast$, $\epsilon$ is the totally
antisymmetric tensor), and $N_R^c \equiv C \overline{N_R}^T$ with $C$
being the Dirac charge conjugation matrix. The Majorana mass matrix
$M_N$ is a symmetric $n_N \times n_N$ matrix, with $n_N$ denoting the
number of HNL generations, and $Y_N$ is a generic $3 \times n_N$
matrix of Yukawa couplings.
Upon including non-renormalizable interactions $\mathcal{O}_i^{(d)}$
with $d \geq 5$, the full Lagrangian reads
\begin{equation}
\mathcal{L} = \mathcal{L}_\mathrm{ren} + \sum_{d \geq 5} \frac{1}{\Lambda^{d-4}} \sum_i c_i^{(d)} \mathcal{O}_i^{(d)}\,,
\end{equation}
where $c_i^{(d)}$ are the Wilson coefficients, and the second sum goes
over all independent interactions at a given dimension $d$. At $d=5$,
in addition to the renowned Weinberg operator composed of $L$ and
$H$~\cite{Weinberg:1979sa}, one finds two more operators that involve
$N_R$~\cite{delAguila:2008ir,Aparici:2009fh}.
At $d=6$, in addition to the pure SMEFT operators~\cite{Grzadkowski:2010es},
there are five operators involving two fermions (at least one of which
is $N_R$) and bosons, eleven baryon and lepton-number-conserving (LNC)
four-fermion interactions, one lepton-number-violating (LNV) operator,
and two operators that violate both baryon and lepton
number~\cite{Liao:2016qyd}.\footnote{Here, we count the operator
types, \textit{i.e.} we do not take into account the flavor
structure and do not count hermitian conjugates.} In the present
work, we are interested in the effects of the LNC four-fermion
interactions containing one $N_R$ and three SM fermions. We list them
in Table~\ref{tab:singleNops}. The effects of the four-fermion
operators containing a pair of HNLs and a pair of quarks have been
investigated in detail in Ref.~\cite{Cottin:2021lzz}.
\begin{table}[t]
%
\centering
\renewcommand{\arraystretch}{1.2}
%
\begin{tabular}[t]{|c|c|c|c|}
\hline
Name & Structure (+ h.c.) & $n_N = 1$ & $n_N = 3$ \\
\hline
\hline
${\cal O}_{duNe}$ &
$\left(\overline{d_R}\gamma^{\mu}u_R\right)\left(\overline{N_R}\gamma_{\mu}e_R\right)$ &
54 &
162 \\
%
${\cal O}_{LNQd}$ &
$\left(\overline{L}N_R\right)\epsilon\left(\overline{Q}d_R\right)$ &
54 &
162 \\
%
${\cal O}_{LdQN}$ &
$\left(\overline{L}d_R\right) \epsilon \left(\overline{Q}N_R\right)$ &
54 &
162 \\
%
${\cal O}_{QuNL}$ &
$\left(\overline{Q}u_R\right)\left(\overline{N_R}L\right)$ &
54 &
162 \\
\hline
${\cal O}_{LNLe}$ &
$\left(\overline{L}N_R\right) \epsilon \left(\overline{L}e_R\right)$ &
54 &
162\\
\hline
%
\end{tabular}
%
\caption{LNC four-fermion single-$N_R$ operators. For each operator
structure, we provide the number of independent real parameters for
$n_N = 1$ and $n_N= 3$ generations of $N_R$. The operator in the
last row is purely leptonic, and thus, it does not contribute to the
HNL production at the LHC.}
\label{tab:singleNops}
\end{table}
The single-$N_R$ operators including quarks can lead to enhanced HNL
production cross section at the LHC, but they also trigger the decay
of $N_R$ to a lepton and two quarks.
The total decay width of the $N_R$'s depends on the operator. Neglecting the masses of the lepton and light quarks, the partial decay width to charged
leptons plus quarks is given by
\begin{equation}
\Gamma(N_R \to \ell q q') = \frac{c_\mathcal{O}^2\, m_N^5}{512\, \pi^3 f_\mathcal{O} \Lambda^4}\,,
\label{eq:Gamma}
\end{equation}
with $m_N$ being the HNL mass, $c_\mathcal{O}$ the Wilson coefficient
of the operator $\mathcal{O}$, and $f_\mathcal{O}$ the numerical
factor depending on the operator type. For $\mathcal{O}_{duNe}$,
$f_\mathcal{O} = 1$, whereas for $\mathcal{O}_{LNQd}$,
$\mathcal{O}_{LdQN}$, and $\mathcal{O}_{QuNL}$, $f_\mathcal{O} = 4$.
To arrive at the total decay width, one has to add also the final state
with neutrinos for all operators, except $\mathcal{O}_{duNe}$. Since
the partial width to neutrinos follows the same equation as for
charged leptons, this results in total decay widths being twice the partial decay widths given in Eq.~\eqref{eq:Gamma} (again, except for $\mathcal{O}_{duNe}$).
Finally, Eq.~\eqref{eq:Gamma} applies to Dirac neutrinos.
For Majorana neutrinos, one has to add also the charged conjugated channels, leading to another factor of 2 for the widths.
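Equation~\eqref{eq:Gamma} directly fixes the proper decay length of the HNL. A minimal numeric sketch, for a Dirac HNL with $c_\mathcal{O}=1$ (the chosen mass and scale are illustrative benchmark values):

```python
import math

HBARC = 1.9733e-16  # hbar*c [GeV*m]

def ctau(m_N, Lam, c_O=1.0, f_O=1.0, n_channels=1):
    """Proper decay length [m] from Gamma = c_O^2 m_N^5 / (512 pi^3 f_O Lam^4),
    multiplied by n_channels (2 when an equal-width neutrino channel opens,
    as for all operators except O_duNe). m_N and Lam in GeV."""
    gamma = n_channels * c_O**2 * m_N**5 / (512.0 * math.pi**3 * f_O * Lam**4)
    return HBARC / gamma

# O_duNe (f_O = 1, charged-lepton channel only), m_N = 20 GeV, Lambda = 10 TeV:
print(f"c*tau ~ {ctau(20.0, 1.0e4) * 100:.1f} cm")
```

The resulting centimeter-scale decay length is exactly the regime targeted by the inner-tracker displaced-vertex selection.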
\subsection{Ultra-violet completions for four-fermion single-$N_R$ operators}
The single-$N_R$ operators of interest can be generated in UV-complete
models containing heavy scalars or vectors. Here, we do not aim to
provide a complete classification of such ultra-violet (UV)
completions, but rather give a few examples. In what follows, we
consider scalar leptoquarks and an inert $SU(2)_L$ doublet scalar. A
catalog of models with scalar and vector leptoquarks generating
four-fermion operators involving one or two $N_R$'s and quarks can be
found in Ref.~\cite{Bischer:2019ttk}.
The operator $\mathcal{O}_{duNe}$ can arise from a model with a scalar
leptoquark $S_d$ having the gauge quantum numbers of the down quark,
cf.~Table~\ref{tab:UV}.
\begin{table}[t]
%
\centering
\renewcommand{\arraystretch}{1.5}
%
\begin{tabular}{|l|ccc|c|c|}
\hline
Heavy scalar & $SU(3)_C$ & $SU(2)_L$ & $U(1)_Y$ & Operator & Matching relation \\
\hline
\hline
Leptoquark $S_d$ & $\mathbf{3}$ & $\mathbf{1}$ & $-1/3$ & ${\cal O}_{duNe}$ & $\dfrac{c_{duNe}}{\Lambda^2} = \dfrac{g_{dN} g_{ue}}{2 m_{S_d}^2}$ \\[0.3cm]
\hline
Leptoquark $S_Q$ & $\mathbf{3}$ & $\mathbf{2}$ & $\phantom{-}1/6$ & ${\cal O}_{LdQN}$ & $\dfrac{c_{LdQN}}{\Lambda^2}= \dfrac{g_{dL} g_{QN}}{m_{S_Q}^2}$ \\[0.3cm]
\hline
\multirow{2.5}{*}{Inert doublet $\Phi$} & \multirow{2.5}{*}{$\mathbf{1}$} & \multirow{2.5}{*}{$\mathbf{2}$} & \multirow{2.5}{*}{$\phantom{-}1/2$} & ${\cal O}_{LNQd}$ & $\dfrac{c_{LNQd}}{\Lambda^2}= \dfrac{g_{LN} g_{Qd}}{m_{\Phi}^2}$ \\[0.3cm]
& & & & ${\cal O}_{QuNL}$ & $\dfrac{c_{QuNL}}{\Lambda^2}= \dfrac{g_{Qu} g_{LN}}{m_{\Phi}^2}$ \\[0.3cm]
\hline
\end{tabular}
\caption{Heavy scalars with their gauge quantum numbers
and the four-fermion single-$N_R$ operators they can generate.
The last column reports the tree-level matching relations between
the Wilson coefficients and the couplings of the UV model.}
\label{tab:UV}
\end{table}
The interaction Lagrangian of $S_d$ is given by
\begin{equation}
%
-\mathcal{L}_{S_d} = g_{dN} \overline{d_R} N_R^c S_d + g_{ue} \overline{u_R} e_R^c S_d + g_{QL} \overline{Q} \epsilon L^c S_d + \text{h.c.}
\label{eq:LSd}
%
\end{equation}
Upon integrating out $S_d$, the operator $\mathcal{O}_{duNe}$ is
generated with the tree-level matching condition for the Wilson
coefficient $c_{duNe}$ given in the last column of
Table~\ref{tab:UV}.\footnote{For simplicity, here, we assume the
renormalizable couplings to be real and suppress flavor indices.
The factor of two in the denominator originates from a Fierz
identity.} Analogously, a scalar leptoquark $S_Q$ with the quantum
numbers of the $SU(2)_L$ quark doublet can lead to
$\mathcal{O}_{LdQN}$. The Yukawa interactions of $S_Q$ read
\begin{equation}
%
-\mathcal{L}_{S_Q} = g_{QN} \overline{Q} N_R S_Q + g_{dL} \overline{d_R} L^T \epsilon S_Q + \text{h.c.}
\label{eq:LSQ}
%
\end{equation}
We note that the first terms in Eqs.~\eqref{eq:LSd} and~\eqref{eq:LSQ}
also generate the $N_R$ pair operators $\mathcal{O}_{qN} =
(\overline{q} \gamma^\mu q) (\overline{N_R} \gamma_\mu N_R)$, where $q= d_R$ and $q=Q$, respectively, cf.~Ref.~\cite{Cottin:2021lzz}.
The operators $\mathcal{O}_{LNQd}$ and $\mathcal{O}_{QuNL}$, in turn,
can originate from a two Higgs doublet model, after the second, heavy
doublet $\Phi$ has been integrated out. The interactions of interest
in the UV model have the following form:
\begin{equation}
%
-\mathcal{L}_{\Phi} = g_{Qd} \overline{Q} \Phi d_R + g_{Qu} \overline{Q} \tilde{\Phi} u_R + g_{LN} \overline{L} \tilde{\Phi} N_R + \text{h.c.},
%
\end{equation}
where $\tilde{\Phi} = \epsilon \Phi^\ast$. From Table~\ref{tab:UV},
it is clear that the Wilson coefficients of the operators depend on different combinations of independent
couplings in the UV model. Therefore, in this example, the generated
operators are uncorrelated.
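As a numerical illustration of the tree-level matching in Table~\ref{tab:UV} (with purely hypothetical inputs, not taken from the text: unit couplings and a 5~TeV leptoquark mass), the $S_d$ relation $c_{duNe}/\Lambda^2 = g_{dN} g_{ue}/(2 m_{S_d}^2)$ can be evaluated as follows:

```python
# Hypothetical inputs (not from the text): unit couplings, 5 TeV leptoquark.
g_dN, g_ue = 1.0, 1.0        # renormalizable couplings, assumed real
m_Sd = 5.0e3                 # S_d mass in GeV

# Tree-level matching for O_duNe; the factor 2 comes from the Fierz identity.
coeff = g_dN * g_ue / (2.0 * m_Sd**2)   # c_duNe / Lambda^2 in GeV^-2

# With the convention c_duNe = 1, the corresponding EFT scale is
# Lambda = sqrt(2) * m_Sd, i.e. about 7.1 TeV here.
Lambda = coeff ** -0.5
```

Heavier mediators or smaller couplings translate directly into a larger effective scale $\Lambda$.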
We have implemented these renormalizable models in \texttt{FeynRules}
\cite{Christensen:2008py,Alloul:2013bka} for both Dirac and Majorana
$N_R$. Using the generated UFO~\cite{Degrande:2011ua} model files and
\texttt{MadGraph5}~\cite{Alwall:2011uj,Alwall:2014hca}, we have
checked that both cases lead to the same single-$N_R$ production cross
section. We note that for $N_R$ pair production triggered by the
four-fermion operators with two $N_R$'s, the cross section is
different for Dirac and Majorana HNLs, especially for values of $m_N
\gtrsim 100$~GeV at LHC energies (we refer the interested reader
to Sec.~3.1 of Ref.~\cite{Cottin:2021lzz}). The fact that the HNL
nature does not affect the production triggered by the four-fermion
single-$N_R$ operators allows us to implement these operators directly
in \texttt{FeynRules} for Dirac HNLs and use the resulting UFO model
file in \texttt{MadGraph5}. (Recall that \texttt{MadGraph5} cannot
handle Majorana fermions in operators with more than two fermions,
cf.~Sec.~3.1 of Ref.~\cite{Cottin:2021lzz}.)
\section{Introduction}\label{sect:intro}
Interest in long-lived particles (LLPs) has grown considerably in the
last few years~\cite{Alimena:2019zri,Lee:2018pag,Curtin:2018mvb}. Many
models for LLPs have been discussed in the literature, most of which
are motivated by either dark matter or neutrino masses. Heavy neutral
leptons (HNLs) are the prime example for LLPs connected with the
neutrino masses. HNLs are Standard Model (SM) singlet fermions that
couple to SM particles via their mixing with active neutrinos.
The minimal model that can realize this effective setup is the seesaw
mechanism, in which right-handed Majorana neutrinos, $N_R$, are added
to the SM particle content~\cite{Minkowski:1977sc,Yanagida:1979as,
GellMann:1980vs,Mohapatra:1979ia,Schechter:1980gr}. However, many
SM extensions that aim to explain the observed neutrino
data~\cite{deSalas:2020pgw} (see also Refs.
\cite{Capozzi:2021fjo,Esteban:2020cvm}) via electroweak scale
variants of the classical
seesaw~\cite{Mohapatra:1986bd,Bernabeu:1987gr,Akhmedov:1995ip,Akhmedov:1995vm},
do not include only HNLs. For example, right-handed neutrinos
necessarily appear in the left-right (LR) symmetric extension of the SM
as the neutral component of the right-handed lepton
doublet~\cite{Mohapatra:1974hk,Senjanovic:1975rk}. If the additional
non-SM states, such as the $W_R$ and $Z'$ in the LR model, have masses
that are too large to be produced on-shell at the LHC, their effects
on HNL phenomenology are best treated in effective field theory (EFT).
The EFT of the SM, the SMEFT (see Ref.~\cite{Brivio:2017vri} for a
review), is a well-established framework in LHC searches (for global
analyses of collider data in this framework, see
Refs.~\cite{Ellis:2020unq,Ethier:2021bye}). The extension of the
SMEFT to include right-handed neutrinos is called
$N_R$SMEFT.\footnote{In the literature, sometimes also called
$\nu_R$SMEFT.} This EFT was originally discussed in
Refs.~\cite{delAguila:2008ir,Aparici:2009fh} and has attracted significant
interest in the last few years, from both
theoretical~\cite{Bhattacharya:2015vja,Liao:2016qyd,Li:2021tsq,Chala:2020vqp,Chala:2020pbn,Datta:2020ocb,Datta:2021akg}
and
phenomenological~\cite{Bischer:2019ttk,Alcaide:2019pnf,Butterworth:2019iff,Biekotter:2020tbd,Dekens:2020ttz,Han:2020pff,Li:2020lba,Li:2020wxi,DeVries:2020jbs,Cottin:2021lzz}
perspectives. Effective operators in the $N_R$SMEFT are now known up
to dimension $d=9$~\cite{Li:2021tsq}. Phenomenological interest in
this EFT is motivated by the future upgrades of the LHC on one side
and the improvement in the sensitivities of low-energy experiments on
the other.
Effective interactions of $d \leq 6$ are the most interesting from a
phenomenological point of view. There are two $d = 5$ operators
involving $N_R$. Their phenomenology has been studied in detail in
Refs.~\cite{Aparici:2009fh,Caputo:2017pit,Barducci:2020icf}. The
$d=6$ operators containing $N_R$ can be divided into two classes: (i)
operators with two fermions and bosons and (ii) four-fermion
operators. The second class, in turn, can be partitioned into
operators with two $N_R$'s and operators with a single $N_R$.\footnote{There is also a lepton-number-violating operator with four
$N_R$'s, but it requires at least two generations of HNLs. } The LLP
phenomenology of pair operators has recently been studied in
Ref.~\cite{Cottin:2021lzz}. Here, we will concentrate on operators
with a single $N_R$. The phenomenology of single-$N_R$ operators is
decidedly different from that of pair operators. First, pair
operators do not by themselves lead to decays of (the lightest) $N_R$.
Instead, for these operators $N_R$ decays are controlled by the mixing with active neutrinos.
This is different from the single-$N_R$ operators, which will usually
dominate the decay length of the HNLs in those parts of parameter
space where the operators are large enough to dominate $N_R$
production. Thus, the parameter space that can be explored for these
two types of operators is very different, see Sec.~\ref{sect:res}.
Second, pair operators do not produce prompt charged leptons, except
in the parameter region where the decay length of the $N_R$ is so
short that the lepton from a $N_R$ decay is confused with a charged
lepton produced directly from $pp$ collisions at the interaction point (IP). In all lepton-number-conserving single-$N_R$
operators, on the other hand, $N_R$'s are accompanied by a prompt
lepton (either a neutrino or a charged lepton). This affects the
search strategy for the different operators.
Ref.~\cite{DeVries:2020jbs} studied single-$N_R$ operators for various proposed LLP ``far'' detectors, such as
MATHUSLA~\cite{Chou:2016lxi,Curtin:2018mvb,Alpigiani:2020tva},
CODEXb~\cite{Gligorov:2017nwh}, AL3X~\cite{Gligorov:2018vkc},
FASER~\cite{Feng:2017uoz}, and ANUBIS~\cite{Bauer:2019vqk}, as well as ATLAS, for HNLs produced from charm and bottom meson decays and hence with masses below 5~GeV.\footnote{For the expectations for these experiments in the minimal HNL scenario with only active-sterile neutrino mixing, see for example~Refs.~\cite{Helo:2018qej,Dercks:2018wum,Hirsch:2020klk}.}
In our numerical simulation, we therefore concentrate on ATLAS and on heavier HNLs; a short discussion of the expectations for CMS will also be given (see Sec.~\ref{sect:res}).
The rest of this paper is organized as follows. In the next section,
we will discuss briefly $N_R$SMEFT at $d=6$. This section
also entails a short discussion on how the single-$N_R$ operators
could be the low-energy remnant of some leptoquark or two Higgs
doublet models. Sec.~\ref{sect:sim} discusses the details of the
simulation we perform for the ATLAS detector. In Sec.~\ref{sect:res},
we present our numerical results. First, we discuss again briefly the
minimal case, in which HNLs are produced and decay via mixing only.
While this was previously done by some of us in
Ref.~\cite{Cottin:2018nms}, we now also simulate the expectations for
HNLs coupled to $\tau$'s, including both neutral and charged currents,
leading to more realistic estimates for the future ATLAS
sensitivities. We then present our results for the different
single-$N_R$ operators. Cross sections and decay lengths depend on
both the operator type and the generation indices in the SM sector. For the
first generation of SM quarks, sensitivities will reach new physics
scales in excess of 20~TeV at the high-luminosity LHC. We then close
with a short summary of our results.
\section{Introduction}
The determinantal point processes (DPPs) are elegant probabilistic models for subset selection problems where both quality and diversity are considered. Formally, given a set of items $\mathcal{Y}=\{1,\cdots,N\}$, a DPP defines a probability measure $\mathcal{P}$ on $2^{\mathcal{Y}}$, the set of all subsets of $\mathcal{Y}$. For every subset $Y \subseteq \mathcal{Y}$ we have
\begin{equation}
\mathcal{P}_{\mathbf{L}}(Y)\propto\det({\mathbf{L}_Y}),
\end{equation}
where the L-ensemble kernel $\mathbf{L}$ is an $N$ by $N$ positive semi-definite matrix. By writing $\mathbf{L}=\mathbf{B}^T \mathbf{B}$ as a Gram matrix, $\det({\mathbf{L}_Y})$ could be viewed as the squared volume spanned by the column vectors $\mathbf{B}_i$ for $i\in Y$. By defining $\mathbf{B}_i=q_i \bm{\phi}_i$, a popular decomposition of the kernel is given as
\begin{equation}
L_{ij}=q_i \bm{\phi}_i^T \bm{\phi}_j q_j,
\end{equation}
where $q_i\in \mathbb{R}^+$ measures the quality (magnitude) of item $i$ in $\mathcal{Y}$, and $\bm{\phi}_i\in \mathbb{R}^k$, $\Vert \bm{\phi}_i\Vert=1$ can be viewed as the angle vector of diversity features so that $\bm{\phi}_i^T \bm{\phi}_j$ measures the similarity between items $i$ and $j$. It can be shown that the probability of including $i$ and $j$ increases with the quality of $i$ and $j$ and diversity between $i$ and $j$. As a result, a DPP assigns high probability to subsets that are both of good quality and diverse \cite{kulesza2012determinantal}.
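The following toy example (a sketch with made-up numbers, purely for illustration) shows this trade-off explicitly: items 0 and 1 are high-quality but nearly parallel in feature space, while item 2 is orthogonal to item 0 but of lower quality.

```python
import numpy as np

# Toy kernel with the quality-diversity decomposition L_ij = q_i phi_i^T phi_j q_j.
q = np.array([1.0, 1.0, 0.8])                       # qualities q_i (made up)
phi = np.array([[1.0, 0.0],
                [0.99, np.sqrt(1 - 0.99**2)],
                [0.0, 1.0]])                        # unit-norm diversity features
B = (q[:, None] * phi).T                            # columns B_i = q_i * phi_i
L = B.T @ B                                         # PSD L-ensemble kernel

def unnormalized_prob(Y):
    """P(Y) up to the normalizer: the principal minor det(L_Y)."""
    return np.linalg.det(L[np.ix_(Y, Y)])

p_similar = unnormalized_prob([0, 1])   # ~0.02: high quality but redundant
p_diverse = unnormalized_prob([0, 2])   # 0.64: lower quality, but diverse
```

Even though item 2 has lower quality than item 1, the diverse pair $\{0,2\}$ receives a far larger unnormalized probability than the redundant pair $\{0,1\}$.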
\begin{figure}
\centering
\subfigure[~]{\includegraphics[width=5cm]{SpeechDemo.pdf}}
\subfigure[~]{\includegraphics[width=3cm]{TelData.pdf}}
\caption{(a) A 10-sec part of a 2-min speech recording, shown with change-point candidates. Segments of different speakers or noises are plotted in different colors. (b) BwDPP kernel constructed for the whole 2-min recording, with the 112 change-point candidates as BwDPP items. The white denotes non-zero entries while the black indicates zero.}
\label{fig: TelData}
\end{figure}
For DPPs, the \emph{maximum a posteriori} (MAP) problem $\argmax_{Y \subseteq \mathcal{Y}} \det(\mathbf{L}_Y)$, which aims at finding the subset with the highest probability, has attracted much attention due to its broad range of potential applications. Noting that this is an NP-hard problem \cite{ko1995exact}, a number of approximate inference methods have been proposed, including greedy methods for optimizing the submodular function $\log \det(\mathbf{L}_Y)$ \cite{buchbinder2012tight,nemhauser1978analysis}, optimization via continuous relaxation \cite{gillenwater2012near}, and minimum Bayes risk decoding that minimizes an application-specific loss function \cite{kulesza2012determinantal}.
These existing methods need to calculate determinants or conduct eigenvalue decomposition. Both computations scale with the kernel size $N$ and cost around $\mathcal{O}(N^3)$ time, which becomes intolerably high when $N$ grows large, e.g. to thousands. Nevertheless, we find that for a class of DPPs where the kernel is almost block diagonal (Fig. \ref{fig: TelData} (b)), the MAP inference with the whole kernel can be replaced by a series of sub-inferences with its sub-kernels. Since the sub-kernels are smaller, the overall computational cost can be significantly reduced. Such DPPs are often defined over a line, where items are similar only to their neighbours on the line and significantly different from those far away. Since the MAP inference for such DPPs is conducted in a block-wise manner, we refer to them as BwDPPs (block-wise DPPs) in the rest of the paper.
The above observation is mainly motivated by the problem of change-point detection (CPD), which aims at detecting abrupt changes in time-series data \cite{gustafsson2000adaptive}. In CPD, the period of time between two consecutive change-points, often referred to as a segment or a state, has homogeneous properties of interest (e.g. the same speaker in a speech \cite{chen1998speaker} or the same behaviour in human activity data \cite{liu2013change}). After choosing a number of change-point candidates without much difficulty, we can treat these change-point candidates as DPP items and select a subset from them to be our final estimate of the change-points. Each change-point candidate has its own quality of being a change-point. Moreover, the true locations of change-points along the timeline tend to be diverse, since states (e.g. speakers in Fig. \ref{fig: TelData} (a)) would not change rapidly. Therefore, it is preferable to conduct change-point selection that incorporates both quality and diversity. DPP-based subset selection clearly suits this purpose well. Meanwhile, the corresponding kernel becomes almost block diagonal (e.g. Fig. \ref{fig: TelData} (b)), as neighbouring items are less diversified and items far apart are more diversified. In this case, the DPP becomes a BwDPP.
The problem of CPD has been actively studied for decades, and the various CPD methods can be broadly classified into Bayesian and frequentist approaches. In the Bayesian approach, the CPD problem is reduced to estimating the posterior distribution of the change-point locations given the time-series data \cite{green1995reversible}. Other posteriors to be estimated include the 0/1 indicator sequence \cite{lavielle2001application} and the ``run length'' \cite{adams2007bayesian}. Although many improvements have been made, e.g. using advanced Monte Carlo methods, the efficiency of estimating these posteriors is still a big challenge for real-world tasks.
In the frequentist approach, the core idea is hypothesis testing, and the general strategy is to first define a metric (test statistic) over past and present windows of observations. As both windows move forward, change-points are selected when the metric value exceeds a threshold. Some widely-used metrics include the cumulative sum \cite{basseville1993detection}, the generalized likelihood-ratio \cite{gustafsson1996marginalized}, the Bayesian information criterion (BIC) \cite{chen1998speaker}, the Kullback-Leibler divergence \cite{delacourt2000distbic}, and more recently, subspace-based metrics \cite{ide2007change,kawahara2007change}, kernel-based metrics \cite{desobry2005online}, and density-ratio metrics \cite{kanamori2010theoretical,kawahara2012sequential}. While various metrics have been explored, how to choose thresholds and perform change-point selection, which is also a determining factor for detection performance, is relatively less studied. Heuristic rules or procedures are dominant and often perform poorly, e.g. selecting local peaks above a threshold \cite{kawahara2007change}, discarding the lower of two peaks if they are close \cite{liu2013change}, or requiring the metric differences between change-points and their neighbouring valleys to exceed a threshold \cite{delacourt2000distbic}.
In this paper, we propose to apply DPPs to address the difficulty of selecting change-points. Based on existing well-studied metrics, we can create a preliminary set of change-point candidates without much difficulty. Then, we treat these change-point candidates as DPP items and conduct DPP-based subset selection to obtain the final estimate of the change-points, favouring both quality and diversity.
The contribution of this paper is two-fold. First, we introduce a class of DPPs, called BwDPPs, that are characterized by an almost block diagonal kernel matrix and thus allow efficient block-wise MAP inference. Second, BwDPPs are successfully applied to address the difficult problem of selecting change-points, which results in a new BwDPP-based CPD method, named BwDppCpd.
The rest of the paper is organized as follows. After brief preliminaries, we introduce BwDPPs and give our theoretical result on the BwDPP-MAP method. Next, we introduce BwDppCpd and present evaluation results on a number of real-world datasets. Finally, we conclude the paper with a discussion of potential future directions.
\section{Preliminaries \label{Sec: Pre}}
Throughout the paper, we are interested in MAP inference for BwDPPs, a particular class of DPP where the L-ensemble kernel $\mathbf{L}$ is almost block diagonal\footnote{Such matrices could also be defined as a particular class of block tridiagonal matrices, where the off-diagonal sub-matrices $\mathbf{A}_i$ only have a few non-zeros entries at the bottom left.}, namely
\begin{equation}\label{def: abd}
\mathbf{L}\triangleq\left[ \begin{array}{ccccc}
\mathbf{L}_1 & \mathbf{A}_1 & ~ & \cdots & \mathbf{0}\\
\mathbf{A}_1^T & \mathbf{L}_2 & \mathbf{A}_2 & ~ & ~ \\
~ & \ddots & \ddots & \ddots & \vdots \\
~ & ~ & \mathbf{A}_{m-2}^T & \mathbf{L}_{m-1} & \mathbf{A}_{m-1} \\
\mathbf{0} & \cdots & ~ & \mathbf{A}_{m-1}^T & \mathbf{L}_{m}
\end{array}\right],
\end{equation}
where the diagonal sub-matrices $\mathbf{L}_i\in \mathbb{R}^{l_i\times l_i}$ are sub-kernels containing DPP items that are mutually similar, and the off-diagonal sub-matrices $\mathbf{A}_{i} \in \mathbb{R}^{l_i\times l_{i+1}}$ are sparse sub-matrices with non-zero entries only at the bottom left, representing the connections between adjacent sub-kernels. Fig. \ref{fig: Syn Data} (a) gives a good example of such matrices.
Let $\mathcal{Y}$ be the set of all indices of $\mathbf{L}$ and let $\mathcal{Y}_1,\cdots,\mathcal{Y}_m$ be those of $\mathbf{L}_1,\cdots,\mathbf{L}_m$ correspondingly. For any sets of indices $C_i,C_j \subseteq \mathcal{Y}$, we use $\mathbf{L}_{C_i}$ to denote the square sub-matrix indexed by $C_i$ and $\mathbf{L}_{C_i,C_j}$ the $\vert C_i\vert \times \vert C_j\vert$ sub-matrix with rows indexed by $C_i$ and columns by $C_j$. Following standard notation, by $\mathbf{L}=\diag (\mathbf{L}_1,...,\mathbf{L}_m)$ we mean the block diagonal matrix $\mathbf{L}$ consisting of sub-matrices $\mathbf{L}_1,...,\mathbf{L}_m$, and $\mathbf{L}\succeq 0$ means that $\mathbf{L}$ is positive semi-definite.
\section{MAP Inference for BwDPPs \label{Sec: MAP}}
\subsection{Strictly Block Diagonal Kernel}
We first consider the motivating case where the kernel is strictly block diagonal, i.e. all elements in the off-diagonal sub-matrices $\mathbf{A}_i$ are zero. It can be easily seen that the following divide-and-conquer theorem holds.
\begin{theorem}\label{thrm 0 ordr slt}
For the DPP with a block diagonal kernel $\mathbf{L}=\diag(\mathbf{L}_1,\cdots,\mathbf{L}_m)$ over ground set $\mathcal{Y}=\bigcup_{i=1}^m \mathcal{Y}_i$ which is partitioned correspondingly, the MAP solution can be obtained as:
\begin{equation}
\hat{C} = \hat{C}_1 \cup \cdots \cup \hat{C}_m,
\end{equation}
where $\hat{C}=\displaystyle \argmax _ {C \subseteq \mathcal{Y}}\det(\mathbf{L}_C)$, and $\displaystyle\hat{C}_i=\argmax_{C_i \subseteq \mathcal{Y}_i} \det(\mathbf{L}_{C_i})$.
\end{theorem}
Theorem \ref{thrm 0 ordr slt} tells us that the MAP inference with a strictly block diagonal kernel can be decomposed into a series of sub-inferences with its sub-kernels. In this way, the overall computation cost can be largely reduced. Noting that no efficient exact DPP-MAP algorithm is available so far, any approximate DPP-MAP algorithm can be used in a plug-and-play way for the sub-inferences.
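Theorem \ref{thrm 0 ordr slt} is easy to verify numerically on a toy kernel. The sketch below (our illustration, not the paper's code) uses brute-force enumeration, which is exact but feasible only for tiny kernels:

```python
import itertools
import numpy as np

def brute_force_map(L):
    """Exact MAP by enumerating all subsets (feasible only for tiny kernels).
    The empty set, with det = 1, is included as a candidate."""
    best, best_val = (), 1.0
    for r in range(1, L.shape[0] + 1):
        for Y in itertools.combinations(range(L.shape[0]), r):
            v = np.linalg.det(L[np.ix_(Y, Y)])
            if v > best_val:
                best, best_val = Y, v
    return set(best)

rng = np.random.default_rng(0)
# Two random PSD sub-kernels assembled into a strictly block diagonal kernel.
B1 = rng.normal(size=(3, 3)); L1 = B1.T @ B1
B2 = rng.normal(size=(3, 3)); L2 = B2.T @ B2
L = np.block([[L1, np.zeros((3, 3))], [np.zeros((3, 3)), L2]])

# Theorem 1: the global MAP equals the union of the per-block MAPs.
C_global = brute_force_map(L)
C_blocks = brute_force_map(L1) | {i + 3 for i in brute_force_map(L2)}
assert C_global == C_blocks
```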
\subsection{Almost Block Diagonal Kernel}
Now we analyze the MAP inference for a BwDPP with an almost block diagonal kernel as defined in (\ref{def: abd}). Let $C \subseteq \mathcal{Y}$ be the hypothesized subset to be selected from $\mathbf{L}$ and let $C_1 \subseteq \mathcal{Y}_1,\cdots,C_m \subseteq \mathcal{Y}_m$ be those from $\mathbf{L}_1,\cdots,\mathbf{L}_m$ correspondingly, where $C_i=C \cap \mathcal{Y}_i$. Without loss of generality, we assume $\mathbf{L}_{C_i}$ is invertible\footnote{That simply assumes that we only consider the non-trivial subsets selected with a DPP kernel $\mathbf{L}$, i.e. $\det(\mathbf{L}_{C_i})>0$. \label{Assump}} for $i=1,\cdots,m$. By defining $\mathbf{\tilde{L}}_{C_i}$ recursively as $\mathbf{\tilde{L}}_{C_i}\triangleq$
\begin{equation} \label{def: L tilde C}
\left\{
\begin{array}{cl}
\mathbf{L}_{C_i} & i=1,\\
\mathbf{L}_{C_i}-\mathbf{L}_{C_{i-1},C_i}^T\mathbf{\tilde{L}}_{C_{i-1}}^{-1}\mathbf{L}_{C_{i-1},C_i} & i=2,\cdots,m
\end{array}\right.
\end{equation}
one could rewrite the MAP objective function: $\det(\mathbf{L}_C)$
\begin{equation}\label{apr e1}
\begin{split}
&=\det(\mathbf{L}_{C_1})\det(\mathbf{L}_{\cup_{i=2}^m C_i}-\mathbf{L}_{C_1,\cup_{i=2}^m C_i}^T\mathbf{L}_{C_1}^{-1}\mathbf{L}_{C_1,\cup_{i=2}^m C_i})\\
&=\det(\mathbf{\tilde{L}}_{C_1})\det(\begin{bmatrix}
\mathbf{\tilde{L}}_{C_2} & [\mathbf{L}_{C_2,C_3} ~ \mathbf{0}]\\ [\mathbf{L}_{C_2,C_3} ~ \mathbf{0}]^T & \mathbf{L}_{\cup_{i=3}^m C_i}
\end{bmatrix}),
\end{split}
\end{equation}
where $\mathbf{0}$ represents a zero matrix of appropriate size that fills the corresponding area with zeros. The key to the second equality above is that $\mathbf{L}_{C_1,C_i}=\mathbf{0}$ for $i\geq 3$, since $\mathbf{L}$ is an almost block diagonal kernel. Continuing this recursion,
\begin{equation}\label{apr e3}
\textstyle\det(\mathbf{L}_C)=\cdots=\prod_{i=1}^m \det(\mathbf{\tilde{L}}_{C_i}).
\end{equation}
Hence, the MAP objective function is reduced to:
\begin{equation}
\argmax_{C\in \mathcal{Y}}\det(\mathbf{L}_C) = \argmax _{C_1\in \mathcal{Y}_1,\cdots,C_m\in \mathcal{Y}_m}{\textstyle \prod_{i=1}^m \det(\mathbf{\tilde{L}}_{C_i})}.
\end{equation}
As $\mathbf{\tilde{L}}_{C_i}$ depends on $C_1,\cdots,C_i$, we cannot optimize $\det(\mathbf{\tilde{L}}_{C_1}),\cdots,\det(\mathbf{\tilde{L}}_{C_m})$ separately. Alternatively, we provide an approximate method that optimizes over $C_1,\cdots,C_m$ sequentially, named the BwDPP-MAP method, which is in essence a depth-first greedy search. BwDPP-MAP is described in Table \ref{BwDPP-MAP Alg}, where $\argmax_{C_i;C_j=\hat{C}_j, j=1,\cdots,i-1}$ denotes optimizing over $C_i$ with the value of $C_j$ fixed as $\hat{C}_j$ for $j=1,\cdots,i-1$, and the sub-kernel\footnote{Both $\mathbf{L}_{\mathcal{Y}_i}$ and $\mathbf{\tilde{L}}_{\mathcal{Y}_i}$ are called sub-kernels.} $\mathbf{\tilde{L}}_{\mathcal{Y}_i}$ is given similarly to $\mathbf{\tilde{L}}_{C_i}$, namely $\mathbf{\tilde{L}}_{\mathcal{Y}_i}\triangleq$
\begin{equation}\label{apr e4}
\left\{
\begin{array}{cl}
\mathbf{L}_{i} & i=1,\\
\mathbf{L}_{i}-\mathbf{L}_{C_{i-1},\mathcal{Y}_i}^T\mathbf{\tilde{L}}_{C_{i-1}}^{-1}\mathbf{L}_{C_{i-1},\mathcal{Y}_i} & i=2,\cdots,m
\end{array}\right.
\end{equation}
One may notice that $(\mathbf{\tilde{L}}_{\mathcal{Y}_i})_{C_i}$ is equivalent to $\mathbf{\tilde{L}}_{C_i}$.
\begin{table}
\caption{BwDPP-MAP Algorithm}
\label{BwDPP-MAP Alg}
\centering
\begin{tabular}{l}
\toprule[1pt]
{\bf Input:} \hspace{0.5em} $\mathbf{L}$ as defined in (\ref{def: abd}); \\
{\bf Output:} \hspace{0.5em} Subset of items $\hat{C}$.\\
\hline
{\bf For:} $i = 1,\cdots, m$\\
\hspace{1.5em} Compute $\mathbf{\tilde{L}}_{\mathcal{Y}_i}$ via (\ref{apr e4});\\
\hspace{1.5em} Perform sub-inference over $C_i$ via \\
\hspace{1.5em} $ \hat{C}_i=\argmax_{C_i\in \mathcal{Y}_i;C_j=\hat{C}_j, j=1,\cdots,i-1}\det((\mathbf{\tilde{L}}_{\mathcal{Y}_i})_{C_i})$;\\
{\bf Return:} $\hat{C}=\bigcup_{i=1}^m \hat{C}_i$. \\
\bottomrule[1pt]
\end{tabular}
\end{table}
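The loop in Table \ref{BwDPP-MAP Alg} can be sketched as follows. This is a minimal illustration, not the authors' implementation: the kernel is assumed to be handed over already partitioned into diagonal blocks $\mathbf{L}_i$ and coupling blocks $\mathbf{A}_i$, brute-force enumeration stands in for the plug-in sub-inference, and a pseudo-inverse hedges against a singular $\mathbf{\tilde{L}}_{C_{i-1}}$ (the text assumes invertibility):

```python
import itertools
import numpy as np

def sub_infer(K):
    """Plug-in sub-inference: exact MAP by enumeration (any DPP-MAP method works).
    The det of the empty minor is 1, so the empty set is a valid candidate."""
    best, best_val = (), 1.0
    for r in range(1, K.shape[0] + 1):
        for Y in itertools.combinations(range(K.shape[0]), r):
            v = np.linalg.det(K[np.ix_(Y, Y)])
            if v > best_val:
                best, best_val = Y, v
    return list(best)

def bwdpp_map(Ls, As):
    """BwDPP-MAP over diagonal blocks Ls = [L_1..L_m], couplings As = [A_1..A_{m-1}]."""
    C_hat, offset = [], 0
    Ltilde_prev_C = None   # \tilde{L}_{C_{i-1}} restricted to the selected indices
    cross_prev = None      # L_{C_{i-1}, Y_i}, i.e. the selected rows of A_{i-1}
    for i, Li in enumerate(Ls):
        if i == 0:
            Ltilde = Li.copy()
        else:
            # Recursive sub-kernel update: Schur complement w.r.t. the previous
            # selection (pinv used as a hedge against singular sub-matrices).
            Ltilde = Li - cross_prev.T @ np.linalg.pinv(Ltilde_prev_C) @ cross_prev
        Ci = sub_infer(Ltilde)                 # depth-first greedy step
        C_hat.extend(offset + j for j in Ci)
        if i < len(As):
            Ltilde_prev_C = Ltilde[np.ix_(Ci, Ci)]
            cross_prev = As[i][Ci, :]
        offset += Li.shape[0]
    return C_hat
```

With all coupling blocks set to zero, the loop reduces to the exact block-wise inference of Theorem \ref{thrm 0 ordr slt}.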
In conclusion, similar to the MAP inference with a strictly block diagonal kernel, by using BwDPP-MAP, the MAP inference with an almost block diagonal kernel can be decomposed into a series of sub-inferences for the sub-kernels as well. We make four comments on this conclusion.
First, it should be noted that the above BwDPP-MAP method is an approximate optimization method, even if each sub-inference step is conducted exactly. This is because $\mathbf{\tilde{L}}_{C_i}$ depends on $C_1,\cdots,C_i$. We provide an empirical evaluation later, showing that through its block-wise operation, the greedy search in BwDPP-MAP achieves a computation speed-up with only a marginal sacrifice in accuracy.
Second, by the following Lemma \ref{lm apr}, we show that each sub-kernel $\mathbf{\tilde{L}}_{\mathcal{Y}_i}$ is positive semi-definite, so that it is theoretically guaranteed that we can conduct each sub-inference via existing DPP-MAP algorithms, e.g. the greedy DPP-MAP algorithm (Table \ref{Greedy MAP Alg}) \cite{gillenwater2012near}. One may find the proof of Lemma \ref{lm apr} in the appendix.
\begin{lemma}\label{lm apr}
$\mathbf{\tilde{L}}_{\mathcal{Y}_i}\succeq 0$, for $i=1,\cdots,m$.
\end{lemma}
Third, in order to apply BwDPP-MAP, we need to first partition a given DPP kernel into the form of an almost block diagonal matrix as defined in (\ref{def: abd}). The partition is not unique. A trivial partition for an arbitrary DPP kernel is no partition, i.e., regarding the whole matrix as a single block. We leave the study of finding the optimal partition for future work. Here we provide a heuristic rule for partition, called $\gamma$-partition, which performs well in our experiments.
\begin{definition}\label{def: conlvl}
($\gamma$-partition) A $\gamma$-partition is defined by partitioning a DPP kernel $\mathbf{L}$ into the almost block diagonal form defined in (\ref{def: abd}) with the maximum number of blocks (i.e. the largest possible $m$)\footnote{Generally speaking, a partition of a kernel of size $N$ into $m$ sub-kernels will approximately reduce the computational complexity $m^2$ times. A larger $m$ implies a larger computation reduction.}, where for every off-diagonal matrix $\mathbf{A}_i$, the non-zero area lies only at the bottom left and its size does not exceed $\gamma \times \gamma$.
\end{definition}
A heuristic way to obtain a $\gamma$-partition for a kernel $\mathbf{L}$ is to first identify as many non-overlapping dense square sub-matrices along the main diagonal as possible. Next, two adjacent square sub-matrices on the main diagonal are merged if the size of the non-zero area in their corresponding off-diagonal sub-matrix exceeds $\gamma \times \gamma$.
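One concrete (if simplified) implementation of this heuristic is sketched below. The admissibility test follows the definition, while the greedy cut-spacing rule is our own simplification and assumes non-adjacent blocks are uncoupled, as in the BwDPP setting:

```python
import numpy as np

def valid_cut(L, k, gamma):
    """A cut at index k is admissible if every nonzero coupling across it lies
    in the bottom-left gamma x gamma corner of the off-diagonal block."""
    rows, cols = np.nonzero(L[:k, k:])   # cols are offsets relative to column k
    return bool(np.all(rows >= k - gamma) and np.all(cols < gamma))

def gamma_partition(L, gamma):
    """Greedy sketch of the gamma-partition: keep every admissible cut that is
    at least gamma indices past the previous one (the spacing rule prevents a
    nonzero entry from spanning two cuts)."""
    n = L.shape[0]
    cuts, last = [], -n
    for k in range(1, n):
        if k - last >= gamma and valid_cut(L, k, gamma):
            cuts.append(k)
            last = k
    bounds = [0] + cuts + [n]
    return [list(range(a, b)) for a, b in zip(bounds[:-1], bounds[1:])]
```

On a kernel with two dense blocks joined by a single corner entry, the sketch recovers exactly the two blocks for $\gamma=1$.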
It should be noted that a kernel may admit $\gamma$-partitions for one or more values of $\gamma$. By taking the $\gamma$-partition of a kernel with different values of $\gamma$, we can balance computation cost against optimization accuracy. A smaller $\gamma$ implies a smaller $m$ achievable in the $\gamma$-partition, and thus a smaller computation reduction. On the other hand, a smaller $\gamma$ means a smaller degree of interaction between adjacent sub-inferences, and thus better optimization accuracy.
Fourth, an empirical illustration of BwDPP-MAP is given in Fig. \ref{fig: Syn Data}, where the greedy MAP algorithm (Table \ref{Greedy MAP Alg}) \cite{gillenwater2012near} is used for the sub-inferences in BwDPP-MAP. The synthetic kernel size is fixed at $500$. For each realization, the area of non-zero entries in the kernel is first specified by uniformly randomly choosing the size of the sub-kernels from $[10, 30]$ and the size of the non-zero areas in the off-diagonal sub-matrices from $\{0,2,4,6\}$. Next, a vector $\mathbf{B}_i$ is generated for each item $i$ separately, following a standard normal distribution. Finally, for all non-zero entries ($L_{ij}\neq 0$) specified in the previous step, the entry value is given by $L_{ij}=\mathbf{B}_i^T \mathbf{B}_j$. Fig. \ref{fig: Syn Data} (a) provides an example of such a synthetic kernel.
We generate 1000 synthetic kernels as described above. For each synthetic kernel, we take the $\gamma$-partition with $\gamma=0,2,4,6$ and then run BwDPP-MAP. The performance of directly applying the greedy MAP algorithm to the original unpartitioned kernel is used as the baseline. The results in Fig. \ref{fig: Syn Data} (b) show that BwDPP-MAP runs much faster than the baseline. As $\gamma$ increases, the runtime drops while the inference accuracy degrades within a tolerable range.
\subsection{Connection between BwDPP-MAP and its Sub-inference Algorithm}
Any DPP-MAP inference algorithm can be used in a plug-and-play fashion for the sub-inference procedure of BwDPP-MAP. It is natural to ask about the connection between BwDPP-MAP and its corresponding DPP-MAP algorithm. The relation is given by the following result.
\begin{theorem} \label{thrm connection}
Let $f$ be any DPP-MAP algorithm for BwDPP-MAP sub-inference, where $f$ maps a positive semi-definite matrix to a subset of its indices, i.e. $f: \mathbf{L} \in \mathbb{S}_+ \mapsto Y \subseteq \mathcal{Y}$. BwDPP-MAP (Table \ref{BwDPP-MAP Alg}) is equivalent to applying the following steps successively to the almost block diagonal kernel as defined in (\ref{def: abd}):
\begin{equation}
\hat{C}_1 = f(\mathbf{L}_{\mathcal{Y}_1}),
\end{equation}
and for $i=2,...,m$,
\begin{equation} \label{eq conditional}
\hat{C}_i = f(\mathbf{L}_{\cup_{j=1}^i \mathcal{Y}_j} \vert \hat{C}_{1:i-1} \subseteq Y, \bar{\hat{C}}_{1:i-1}\cap Y=\emptyset).
\end{equation}
where $\hat{C}_{1:i-1} = \cup_{j=1}^{i-1} \hat{C}_j$, $\bar{\hat{C}}_{1:i-1}=\cup_{j=1}^{i-1} (\mathcal{Y}_j\setminus\hat{C}_j)$, and the input of $f$ is the conditional kernel\footnote{The conditional distribution (over the set $\mathcal{Y}-A^{in}-A^{out}$) of the DPP defined by $\mathbf{L}$,
\begin{equation}
\mathcal{P}_{\mathbf{L}} (Y=A^{in} \cup B \vert A^{in} \subseteq Y, A^{out}\cap Y=\emptyset),
\end{equation} is also a DPP \cite{kulesza2012determinantal}, and the corresponding kernel, $\left(\mathbf{L}\vert A^{in}\subseteq Y, A^{out}\cap Y=\emptyset\right)$, is called the conditional kernel.}.
\end{theorem}
The proof of Theorem \ref{thrm connection} is in the appendix. Theorem \ref{thrm connection} states that BwDPP-MAP is essentially a series of Bayesian belief updates, where in each update a conditional kernel that contains the information of the previous selection results is fed into $f$. The equivalent form allows us to compare BwDPP-MAP directly with the method of applying $f$ to the entire kernel. The latter does inference on the entire set $\mathcal{Y}$ once, while the former does inference on a sequence of smaller subsets $\mathcal{Y}_1,...,\mathcal{Y}_m$. Concretely, in the $i$-th update, the subset $\mathcal{Y}_i$ is added to form the kernel $\mathbf{L}_{\cup_{j=1}^i \mathcal{Y}_j}$. Then the information of the previous selection results is incorporated into the kernel to generate the conditional kernel. Finally, DPP-MAP inference is performed on the conditional kernel to select $\hat{C}_i$ from $\mathcal{Y}_i$.
\begin{figure}
\centering
\subfigure[~]{\includegraphics[width=3cm]{Data.pdf}}~~~~
\subfigure[~]{\includegraphics[width=3.6cm]{AprRT.pdf}}
\caption{(a) The top-left $100 \times 100$ entries from a $500\times 500$ synthetic kernel. (b) The log-probability ratio $\log(p/p_{\rm{ref}})$ and runtime ratio $t/t_{\rm{ref}}$, obtained from using BwDPP-MAP on the same kernel with different $\gamma$-partition, where $p_{\rm{ref}}$ and $t_{\rm{ref}}$ are the baseline performance of directly applying the greedy MAP algorithm on the original unpartitioned kernel. Results are averaged over $1000$ kernels. The error bar represents $99.7\%$ confidence level.}
\label{fig: Syn Data}
\end{figure}
\begin{table}
\caption{Greedy DPP-MAP Algorithm}
\label{Greedy MAP Alg}
\centering
\begin{tabular}{l}
\toprule[1pt]
{\bf Input:} \hspace{0.5em} $\mathbf{L}$; \hspace{1.5em} {\bf Output:} \hspace{0.5em} $\hat{C}$.\\
\hline
{\bf Initialization:} \hspace{0.5em} Set $\hat{C}\leftarrow\emptyset$, $U\leftarrow\mathcal{Y}$;\\
{\bf While} $U$ is not empty;\\
\hspace{1.5em} $i^*\leftarrow\argmax _{i \in U} L_{ii}$; \hspace{1.5em} $\hat{C}\leftarrow \hat{C}\cup\{i^*\}$;\\
\hspace{1.5em} Compute $\mathbf{L}^*=\left(\left[(\mathbf{L}+\mathbf{I}_{\bar{\hat{C}}})^{-1}\right]_{\bar{\hat{C}}}\right)^{-1}-\mathbf{I}$; \\
\hspace{1.5em} $\mathbf{L}\leftarrow \mathbf{L}^*$; \hspace{1.5em} $U\leftarrow \{i\vert i \notin \hat{C}, \mathbf{L}_{ii}>1\}$;\\
{\bf Return:} $\hat{C}$. \\
\bottomrule[1pt]
\end{tabular}
\end{table}
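To make the procedure in the table concrete, the following is a minimal numpy sketch of the greedy DPP-MAP algorithm. It is an unoptimized illustration (it recomputes full matrix inverses at every step, whereas an efficient implementation would use incremental updates), and it assumes the kernel is given as a PSD numpy array:

```python
import numpy as np

def greedy_dpp_map(L):
    """Greedy MAP inference for a DPP with (PSD) kernel L.

    Repeatedly picks the candidate with the largest diagonal entry of the
    current conditional kernel, conditions the kernel on its inclusion, and
    keeps as candidates only items whose diagonal entry exceeds 1 (i.e.,
    whose inclusion would still increase the probability).
    """
    n = L.shape[0]
    selected = []                 # the selected set (original indices)
    remaining = list(range(n))    # items covered by the current kernel K
    candidates = set(range(n))    # the candidate set U
    K = L.copy()
    while candidates:
        # argmax of the diagonal over the candidate items
        local = max((j for j, i in enumerate(remaining) if i in candidates),
                    key=lambda j: K[j, j])
        selected.append(remaining[local])
        keep = [j for j in range(len(remaining)) if j != local]
        if not keep:              # everything has been selected
            break
        # condition K on inclusion of the chosen item:
        #   K* = ([ (K + I_complement)^{-1} ]_complement)^{-1} - I
        I_bar = np.eye(len(remaining))
        I_bar[local, local] = 0.0
        K = np.linalg.inv(np.linalg.inv(K + I_bar)[np.ix_(keep, keep)]) \
            - np.eye(len(keep))
        remaining = [remaining[j] for j in keep]
        candidates = {remaining[j] for j in range(len(remaining))
                      if K[j, j] > 1.0}
    return selected
```

For a diagonal kernel the items are independent, so the sketch simply keeps every item whose diagonal entry exceeds 1.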
\section{BwDPP-based Change-Point Detection \label{Sec: BwDPP CPD}}
Let $\mathbf{x}_1,\cdots,\mathbf{x}_T$ be the time-series observations, where $\mathbf{x}_t \in \mathbb{R}^D$ represents the $D$-dimensional observation at time $t=1,\cdots,T$, and let $\mathbf{x}_{\tau:t}$ denote the segment of observations in the time interval $[\tau ,t]$. We further use $\mathbf{X}_1$, $\mathbf{X}_2$ to represent different segments of observations at different intervals, when explicitly denoting the beginning and ending times of the intervals is not necessary. The new CPD method will build on existing metrics. A dissimilarity metric is denoted as $d: (\mathbf{X}_1, \mathbf{X}_2) \mapsto \mathbb{R}$, which measures the dissimilarity between two arbitrary segments $\mathbf{X}_1$ and $\mathbf{X}_2$.
\subsection{Quality-Diversity Decomposition of Kernel}
Given a set of items $\mathcal{Y}=\{1,\cdots,N\}$, the DPP kernel $\mathbf{L}$ can be written as a Gram matrix $\mathbf{L}=\mathbf{B}^T\mathbf{B}$, where $\mathbf{B}_i$, the columns of $\mathbf{B}$, are vectors representing items in $\mathcal{Y}$.
A popular decomposition of the kernel is to define $\mathbf{B}_i=q_i \bm{\phi}_i$, where $q_i\in \mathbb{R}^+$ measures the quality (magnitude) of item $i$ in $\mathcal{Y}$, and $\bm{\phi}_i\in \mathbb{R}^k$, $\Vert \bm{\phi}_i\Vert=1$ can be viewed as the angle vector of diversity features so that $\bm{\phi}_i^T \bm{\phi}_j$ measures the similarity between items $i$ and $j$. Therefore, $\mathbf{L}$ is defined as
\begin{equation}
\mathbf{L}=\diag (\mathbf{q}) * \mathbf{S} * \diag(\mathbf{q}),
\end{equation}
where $\mathbf{q}$ is the quality vector consisting of $q_i$, and $\mathbf{S}$ is the similarity matrix consisting of $S_{ij}=\bm{\phi}_i^T \bm{\phi}_j$. The quality-diversity decomposition allows us to construct $\mathbf{q}$ and $\mathbf{S}$ separately to address different concerns, which is utilized below to construct the kernel for CPD.
\subsection{BwDppCpd}
BwDppCpd is a two-step CPD method, described as follows.
\textbf{Step 1:} Based on a dissimilarity metric $d$, a preliminary set of change-point candidates is created. Consider moving a pair of adjacent windows, $\mathbf{x}_{t-w+1:t}$ and $\mathbf{x}_{t+1:t+w}$, along $t=w,\cdots,T-w$, where $w$ is the size of local windows. Then, a large $d$ value for the adjacent windows, i.e. $d(\mathbf{x}_{t-w+1:t},\mathbf{x}_{t+1:t+w})$, suggests that a change-point is likely to occur at time $t$. After we obtain the series of $d$ values, local peaks above the mean of the $d$ values are marked and the corresponding locations, say $t_1,\cdots,t_N$, are selected to form the preliminary set of change-point candidates $\mathcal{Y}=\{1,\cdots,N\}$.
\textbf{Step 2:} Treat the change-point candidates $\mathcal{Y}=\{1,\cdots,N\}$ as BwDPP items, and select a subset from them to be our final estimate of the change-points.
The BwDPP kernel is built via quality-diversity decomposition. We use the dissimilarity metric $d$ once more to measure the quality of a candidate change-point to be a true one. Specifically, we define
\begin{equation}
q_i = d (\mathbf{x}_{t_{i-1}:t_i}, \mathbf{x}_{t_i:t_{i+1}}).
\end{equation}
The higher the value of $q_i$, the sharper the contrast around the change-point candidate $i$, and the better the quality of $i$.
Next, the BwDPP similarity matrix is defined to address the fact that the true locations of change-points along the timeline tend to be diverse, since states would not change rapidly. This is done by assigning a high similarity score to items that are close to each other. Specifically, we define
\begin{equation}
S_{ij}=\exp ({-{(t_i-t_j)^2}/{\sigma^2}}),
\end{equation}
where $\sigma$ is a parameter representing the position diversity level. Finally, after taking $\gamma$-partition of the kernel $\mathbf{L}$ into the almost block diagonal form, BwDPP-MAP is used to select a set of change-points that favours both quality and diversity (Fig. \ref{fig: Hasc Demo} (b)).
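As an illustrative sketch (assuming the candidate times $t_i$ and quality scores $q_i$ are already available as arrays), the kernel L = diag(q) * S * diag(q) with the Gaussian similarity above can be assembled with numpy as follows:

```python
import numpy as np

def cpd_kernel(t, q, sigma):
    """Assemble the BwDppCpd kernel L = diag(q) * S * diag(q).

    t     : candidate change-point times t_i
    q     : quality scores q_i of the candidates
    sigma : position-diversity parameter
    """
    t = np.asarray(t, dtype=float)
    q = np.asarray(q, dtype=float)
    # S_ij = exp(-(t_i - t_j)^2 / sigma^2): candidates close in time
    # are deemed similar, discouraging their joint selection
    S = np.exp(-np.subtract.outer(t, t) ** 2 / sigma ** 2)
    return np.diag(q) @ S @ np.diag(q)
```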
\subsection{Discussion}
There is a rich body of studies on metrics for the CPD problem. The choice of the dissimilarity metric $d(\mathbf{X}_1,\mathbf{X}_2)$ is flexible and can be well-tailored to the characteristics of the data. We present two examples that are used in our experiments.
\begin{itemize}
\item Symmetric Kullback-Leibler Divergence (SymKL):\\
If the two segments $\mathbf{X}_{1}$,$\mathbf{X}_{2}$ to be compared are assumed to follow Gaussian processes, the SymKL metric is given by:
\begin{equation}\label{eq SymKL}
\begin{split}
&{\rm{SymKL}} (\mathbf{X}_{1},\mathbf{X}_{2})=\tr (\bm{\Sigma}_{1} \bm{\Sigma}_{2}^{-1}) + \tr(\bm{\Sigma}_{2} \bm{\Sigma}_{1}^{-1}) -\\ &2D + \tr((\bm{\Sigma}_{1}^{-1}+\bm{\Sigma}_{2}^{-1})(\bm{\mu}_{1}-\bm{\mu}_{2})(\bm{\mu}_{1}-\bm{\mu}_{2})^T),
\end{split}
\end{equation}
where $\bm{\mu}$ and $\bm{\Sigma}$ denote the corresponding sample means and covariances.
\item Generalized Likelihood Ratio (GLR):\\
Generally, the GLR metric is given by the likelihood ratio:
\begin{equation}
{\rm{GLR(\mathbf{X}_1,\mathbf{X}_2)}}=\frac{\mathcal{L}(\mathbf{X}_1\vert \lambda_1)\mathcal{L}(\mathbf{X}_2\vert \lambda_2)}{\mathcal{L}(\mathbf{X}_{1,2}\vert \lambda_{1,2})}.
\end{equation}
The numerator is the likelihood that the two segments follow two different models $\lambda_1$ and $\lambda_2$, respectively, while the denominator is the likelihood that the two segments together (denoted as $\mathbf{X}_{1,2}$) follow a single model $\lambda_{1,2}$. In practice, we plug in the maximum likelihood estimates (MLE) of the parameters $\lambda_1$, $\lambda_2$, and $\lambda_{1,2}$. For example, if we assume that the time-series segment $\mathbf{X}\triangleq \{x_1,\cdots,x_M\}$ follows a homogeneous Poisson process, where $x_i$ is the occurring time of the $i$-th event, $i=1,\cdots,M$, then the log-likelihood of $\mathbf{X}$ is
\begin{equation}
\mathcal{L}(\mathbf{X}\vert \lambda) = (M-1) \log\lambda - (x_M-x_1) \lambda
\end{equation}
where the MLE of $\lambda$ is used, $\lambda= (M-1)/ (x_M-x_1)$.
\end{itemize}
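As a concrete sketch of the first metric, the SymKL value in (\ref{eq SymKL}) can be computed with numpy by plugging in the sample means and covariances of the two segments (segments are assumed to be arrays of shape $n\times D$ with invertible sample covariances):

```python
import numpy as np

def sym_kl(X1, X2):
    """SymKL between two segments under Gaussian assumptions.

    X1, X2 : arrays of shape (n_samples, D); the sample mean and
    covariance of each segment are plugged in for the model parameters.
    """
    mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)
    S1 = np.atleast_2d(np.cov(X1, rowvar=False))
    S2 = np.atleast_2d(np.cov(X2, rowvar=False))
    S1i, S2i = np.linalg.inv(S1), np.linalg.inv(S2)
    D = len(mu1)
    dm = mu1 - mu2
    # tr(S1 S2^-1) + tr(S2 S1^-1) - 2D
    #   + (mu1-mu2)^T (S1^-1 + S2^-1) (mu1-mu2)
    return (np.trace(S1 @ S2i) + np.trace(S2 @ S1i) - 2 * D
            + dm @ (S1i + S2i) @ dm)
```

Note that the last term equals $\tr((\bm{\Sigma}_{1}^{-1}+\bm{\Sigma}_{2}^{-1})(\bm{\mu}_{1}-\bm{\mu}_{2})(\bm{\mu}_{1}-\bm{\mu}_{2})^T)$ by the cyclic property of the trace.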
\begin{figure}
\centering
\subfigure[~]{\includegraphics[width=4.1cm]{HascDemo1.pdf}}
\subfigure[~]{\includegraphics[width=4.1cm]{HascDemo2.pdf}}
\qquad
\caption{A BwDppCpd example from \emph{HASC}. (a) Change-point candidates selected in Step 1 with their $d$ scores (green cross). (b) Final estimate of change-points in Step 2 with their $d$ scores (green cross).}
\label{fig: Hasc Demo}
\end{figure}
\section{Experiments \label{Sec: Exp}}
\begin{figure}
\centering
\subfigure[~]{\includegraphics[width=4.2cm]{WellLogData.pdf}}
\subfigure[~]{\includegraphics[width=4cm]{CoalMining.pdf}}
\qquad
\subfigure[~]{\includegraphics[width=6cm]{DJI.pdf}}
\caption{BwDppCpd results for \emph{Well-Log} (a), \emph{Coal Mine Disaster} (b), and \emph{DJIA} (c). Green lines are detected changes.}
\label{fig: real-world Data}
\end{figure}
The BwDppCpd method is evaluated on five real-world time-series datasets. Firstly, three classic datasets are examined for CPD, namely \emph{Well-Log} data, \emph{Coal Mine Disaster} data, and \emph{Dow Jones Industrial Average Return (DJIA)} data, where we set $\gamma=0$ due to the small data size.
Next, we experiment with human activity detection and speech segmentation, where the data size becomes larger and there is no accurate model to characterize the data, making the CPD task harder. In both experiments, the number of DPP items varies from hundreds to thousands; at this kernel scale, no algorithm other than BwDPP-MAP can perform MAP inference within a reasonable time. We set $\gamma=3$ for human activity detection and $\gamma=0,2$ for speech segmentation to provide a comparison.
As for the dissimilarity metric $d$, Poisson processes and GLR are used for \emph{Coal Mine Disaster}, while Gaussian models and SymKL are used for the other experiments.
\subsection{Well-Log Data}
\emph{Well-Log} contains 4050 measurements of nuclear magnetic response taken during the drilling of a well. It is an example of varying Gaussian mean and the changes reflect the stratification of the earth's crust \cite{adams2007bayesian}. Outliers are removed prior to the experiment. As shown in Fig. \ref{fig: real-world Data} (a), all changes are detected by BwDppCpd.
\subsection{Coal Mine Disaster Data}
\emph{Coal Mine Disaster} \cite{jarrett1979note}, a standard dataset for testing CPD methods, consists of 191 accidents from 1851 to 1962. The occurring rates of accidents are believed to have changed a few times and the task is to detect them. The BwDppCpd detection result, as shown in Fig. \ref{fig: real-world Data} (b), agrees with that in \cite{green1995reversible}.
\subsection{1972-75 Dow Jones Industrial Average Return}
\emph{DJIA} contains daily return rates of Dow Jones Industrial Average from 1972 to 1975.
It is an example of varying Gaussian variance, where the changes are caused by big events with potential macroeconomic effects. Four changes in the data are detected by BwDppCpd, which match well with important events (Fig. \ref{fig: real-world Data} (c)). Compared to \cite{adams2007bayesian}, one more change is detected (the rightmost), which corresponds to the date the 1973--74 stock market crash ended\footnote{http://en.wikipedia.org/wiki/1973-74\_stock\_market\_crash}. This shows that BwDppCpd discovers more information from the data.
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|}
\hline
~ & {\rm{PRC}}$\%$ & {\rm{RCL}}$\%$ & $F_1$ \\
\hline
BwDppCpd & 93.05 & 87.88 & 0.9039 \\
\hline
RuLSIF & 86.36 & 83.84 & 0.8508 \\
\hline
\end{tabular}
\caption{CPD result on human activity detection data \emph{HASC}. \label{tab: Hasc}}
\end{table}
\begin{figure}
\centering
\includegraphics[width=5cm]{Roc.pdf}
\caption{The ROC curves of BwDppCpd and RuLSIF.}
\label{fig: Roc}
\end{figure}
\subsection{Human Activity Detection}
\emph{HASC}\footnote{http://hasc.jp/hc2011/} contains human activity data collected by portable three-axis accelerometers, and the task is to segment the data according to human behaviour changes. Fig. \ref{fig: Hasc Demo} (b) shows an example from \emph{HASC}. The performance of the best algorithm in \cite{liu2013change}, RuLSIF, is used for comparison and the precision (PRC), recall (RCL), and $F_1$ measure \cite{kotti2008speaker} are used for evaluation:
\begin{align}
&{\rm{PRC}}={{\rm{CFC}}}/{{\rm{DET}}}, ~~~~{\rm{RCL}}={{\rm{CFC}}}/{{\rm{GT}}},\\
&{F_1}=2~{\rm{PRC}}~{\rm{RCL}}/({{\rm{PRC}}+{\rm{RCL}}}),
\end{align}
where ${\rm{CFC}}$ is the number of correctly found changes, ${\rm{DET}}$ is the number of detected changes, and ${\rm{GT}}$ is the number of ground-truth changes. The $F_1$ score can be viewed as an overall score that balances PRC and RCL. The CPD result is shown in Table \ref{tab: Hasc}, where the parameters are set to attain the best $F_1$ results for both algorithms.
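For concreteness, the three evaluation measures can be computed from the raw counts as follows (a trivial sketch):

```python
def cpd_scores(cfc, det, gt):
    """Precision, recall and F1 from the number of correctly found
    changes (CFC), detected changes (DET) and ground-truth changes (GT)."""
    prc = cfc / det
    rcl = cfc / gt
    f1 = 2 * prc * rcl / (prc + rcl)
    return prc, rcl, f1
```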
The receiver operating characteristic (ROC) curve is often used to evaluate performance under different precision and recall, where true positive rate (TPR) and false positive rate (FPR) are given by ${\rm{TPR}}={\rm{RCL}}$ and ${\rm{FPR}}=1-{\rm{PRC}}$. For BwDppCpd, different levels of TPR and FPR are obtained by tuning the position diversity parameter $\sigma$ and for RuLSIF by tuning the threshold $\eta$ \cite{liu2013change}.
As shown in Table \ref{tab: Hasc} and Fig. \ref{fig: Roc}, BwDppCpd outperforms RuLSIF on \emph{HASC} when the FPR is low. RuLSIF performs better only when the FPR exceeds $0.3$, a regime that is less useful in practice.
\subsection{Speech Segmentation}
We tested two datasets for speech segmentation. The first dataset, called \emph{Hub4m97}, is a subset (around 5 hours) from 1997 Mandarin Broadcast News Speech (HUB4-NE) released by LDC\footnote{http://catalog.ldc.upenn.edu/LDC98S73}. The second dataset, called \emph{TelRecord}, consists of 216 telephone conversations, each around 2-min long, collected from real-world call centres. Acoustic features of 12-order MFCCs (mel-frequency cepstral coefficients) are extracted as the time-series data.
Speech segmentation is to segment the audio data into acoustically homogeneous segments, e.g. utterances from a single speaker or non-speech portions. The two datasets contain utterances with hesitations and a variety of changing background noises, presenting a great challenge for CPD.
The BwDppCpd method with different $\gamma$ for kernel partition (denoted as Bw-$\gamma$ in Table \ref{tab SegResult}) is tested, and two classic segmentation methods, BIC \cite{chen1998speaker} and DISTBIC \cite{delacourt2000distbic}, are used for comparison. As in \cite{delacourt2000distbic}, a post-processing step based on BIC values is also taken to reduce false alarms for BwDppCpd.
The experiment results in Table \ref{tab SegResult} show that BwDppCpd outperforms BIC and DISTBIC on both datasets. In addition, comparing the results obtained with $\gamma=0$ and $\gamma=2$, using $\gamma=2$ is found to be faster but slightly worse in performance. This agrees with our analysis of BwDPP-MAP, where different $\gamma$-partitions trade off speed and accuracy.
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
~ & BIC & DistBIC & Bw-$0$ & Bw-$2$\\
\hline
\multicolumn{5}{|c|}{\emph{Hub4m97}} \\
\hline
{\rm{PRC}$\%$} & 59.40 & 64.29 & 65.29 & 65.12\\
\hline
{\rm{RCL}$\%$} & 78.24 & 74.98 & 78.49 & 78.39\\
\hline
$F_1$ & 0.6753 & 0.6922 & 0.7128 & 0.7114\\
\hline
\multicolumn{5}{|c|}{\emph{TelRecord}} \\
\hline
{\rm{PRC}$\%$} & 54.05 & 61.39 & 66.54 & 66.47\\
\hline
{\rm{RCL}$\%$} & 79.97 & 81.72 & 85.47 & 84.83\\
\hline
$F_1$ & 0.6451 & 0.7011 & 0.7483 & 0.7454\\
\hline
\end{tabular}
\caption{Segmentation results on \emph{Hub4m97} and \emph{TelRecord}. \label{tab SegResult}}
\end{table}
\section{Conclusion\label{Sec: Con}}
In this paper, we introduced BwDPPs, a class of DPPs where the kernel is almost block diagonal and thus can allow efficient block-wise MAP inference. Moreover, BwDPPs are demonstrated to be useful in change-point detection problem. The BwDPP-based change-point detection method, BwDppCpd, shows superior performance in experiments with several real-world datasets.
The almost block diagonal kernels suit the change-point detection problem well, but BwDPPs may achieve more than that. Theoretically, BwDPP-MAP could be applied to any block tridiagonal matrix without modification. The theoretical issues regarding exact or approximate partitioning of a DPP kernel into the form of an almost block diagonal matrix \cite{acer2013recursive} remain to be studied. Other potential BwDPP applications are also worth further exploration.
\section{Appendix: Proof of Lemma \ref{lm apr}}
\begin{proof}
Define
\begin{equation}
\mathbf{S}^i=\left\{\begin{array}{cl}
\mathbf{L} & i=0\\
\begin{bmatrix}
\mathbf{\tilde{L}}_{\mathcal{Y}_{i+1}} & [\mathbf{L}_{\mathcal{Y}_{i+1},\mathcal{Y}_{i+2}} ~ \mathbf{0}]\\ [\mathbf{L}_{\mathcal{Y}_{i+1},\mathcal{Y}_{i+2}} ~ \mathbf{0}]^T & \mathbf{L}_{\cup_{j=i+2}^m \mathcal{Y}_j}
\end{bmatrix} & i=1,\cdots,m-2 \\
\mathbf{\tilde{L}}_{\mathcal{Y}_{i+1}} & i=m-1
\end{array}\right..
\end{equation}
For $i=1,\cdots,m-1$, $\mathbf{S}^i$ is the Schur complement of $\mathbf{\tilde{L}}_{C_i}$ in $\mathbf{S}^{i-1}_{C_{i}\cup(\cup_{j=i+1}^m \mathcal{Y}_j)}$, the sub-matrix of $\mathbf{S}^{i-1}$. We next prove the lemma using the first principle of mathematical induction. We state the predicate as:
\begin{itemize}
\item $P(i)$: $\mathbf{S}^{i-1}$ and $\mathbf{\tilde{L}}_{\mathcal{Y}_i}$ are positive semi-definite (PSD).
\end{itemize}
$P(1)$ trivially holds as $\mathbf{\tilde{L}}_{\mathcal{Y}_1}=\mathbf{L}_1$ and $\mathbf{S}^{0}=\mathbf{L}$ are PSD.
Assume $P(i)$ holds. Then $\mathbf{S}^{i-1}_{C_i\cup(\cup_{j=i+1}^m \mathcal{Y}_j)}$ is PSD because $\mathbf{S}^{i-1}$ is PSD. Since $\mathbf{\tilde{L}}_{C_i} \succ 0$ (footnote \ref{Assump}) and $\mathbf{S}^{i}$ is the Schur complement of $\mathbf{\tilde{L}}_{C_i}$ in $\mathbf{S}^{i-1}_{C_{i}\cup(\cup_{j=i+1}^m \mathcal{Y}_j)}$, $\mathbf{S}^{i}$ is PSD. Being a sub-matrix of $\mathbf{S}^{i}$, $\mathbf{\tilde{L}}_{\mathcal{Y}_{i+1}}$ is also PSD. Hence, $P(i+1)$ holds.
Therefore, for $i=1,\cdots,m$, $\mathbf{\tilde{L}}_{\mathcal{Y}_i}$ is PSD.
\end{proof}
\section{Appendix: Proof of Theorem \ref{thrm connection}}
For preparation, we first quote a result from \cite{kulesza2012determinantal}: the conditional kernel is given by
\begin{equation}\label{eq con kernel}
\left(\mathbf{L}\vert A^{in}\subseteq Y, A^{out}\cap Y=\emptyset\right) = \left( \left[ (\mathbf{L}_{\bar{A}^{out}}+\mathbf{I}_{\bar{A}^{in}})^{-1}\right]_{\bar{A}^{in}}\right)^{-1} - \mathbf{I}.
\end{equation}
Next, we use the following lemma:
\begin{lemma}\label{lm Lc inv}
$(\mathbf{L}_{\hat{C}_{1:i}}^{-1})_{\hat{C}_i}=\tilde{\mathbf{L}}_{\hat{C}_i}^{-1}$, for $i=1,...,m$, where $\tilde{\mathbf{L}}_{\hat{C}_i}$ is defined by (\ref{def: L tilde C}).
\end{lemma}
\begin{proof}
The proof is given by mathematical induction.
When $i=1$, the result trivially holds:
\begin{equation}
(\mathbf{L}_{\hat{C}_{1}}^{-1})_{\hat{C}_1}=\mathbf{L}_{\hat{C}_1}^{-1}=\tilde{\mathbf{L}}_{\hat{C}_1}^{-1}.
\end{equation}
Assume the result holds for $i-1$, i.e.,
\begin{equation}
(\mathbf{L}_{\hat{C}_{1:i-1}}^{-1})_{\hat{C}_{i-1}}=\tilde{\mathbf{L}}_{\hat{C}_{i-1}}^{-1}.
\end{equation}
Consider the case $i$. One has
\begin{equation}
\begin{split}
&(\mathbf{L}_{\hat{C}_{1:i}}^{-1})_{\hat{C}_i}=(\mathbf{L}_{\hat{C}_i}-\mathbf{L}_{\hat{C}_{1:i-1},\hat{C}_i}^T \mathbf{L}_{\hat{C}_{1:i-1}}^{-1}\mathbf{L}_{\hat{C}_{1:i-1},\hat{C}_i})^{-1}\\
&=(\mathbf{L}_{\hat{C}_i}-\mathbf{L}_{\hat{C}_{i-1},\hat{C}_i}^T (\mathbf{L}_{\hat{C}_{1:i-1}}^{-1})_{\hat{C}_{i-1}}\mathbf{L}_{\hat{C}_{i-1},\hat{C}_i})^{-1}\\
&=(\mathbf{L}_{\hat{C}_i}-\mathbf{L}_{\hat{C}_{i-1},\hat{C}_i}^T \tilde{\mathbf{L}}_{\hat{C}_{i-1}}^{-1}\mathbf{L}_{\hat{C}_{i-1},\hat{C}_i})^{-1}=\tilde{\mathbf{L}}_{\hat{C}_i}^{-1}.
\end{split}
\end{equation}
Therefore the result holds for $i=1,...,m$.
\end{proof}
To prove Theorem \ref{thrm connection}, it suffices to show that
\begin{equation}
\tilde{\mathbf{L}}_{\mathcal{Y}_i}=\left(\mathbf{L}_{\cup_{j=1}^i \mathcal{Y}_j} \vert \hat{C}_{1:i-1} \subseteq Y, \bar{\hat{C}}_{1:i-1}\cap Y=\emptyset\right).
\end{equation}
Using (\ref{eq con kernel}) one has
\begin{equation}
\begin{split}
& \left(\mathbf{L}_{\cup_{j=1}^i \mathcal{Y}_j} \vert \hat{C}_{1:i-1} \subseteq Y, \bar{\hat{C}}_{1:i-1}\cap Y=\emptyset\right)\\
&= \left( \left[ (\mathbf{L}_{\hat{C}_{1:i-1}\cup \mathcal{Y}_i}+\mathbf{I}_{\mathcal{Y}_i})^{-1}\right]_{\mathcal{Y}_i}\right)^{-1} - \mathbf{I}\\
&=\mathbf{L}_{\mathcal{Y}_i}-\mathbf{L}_{\hat{C}_{1:i-1},\mathcal{Y}_i}^T \mathbf{L}_{\hat{C}_{1:i-1}}^{-1}\mathbf{L}_{\hat{C}_{1:i-1},\mathcal{Y}_i}\\
&=\mathbf{L}_{\mathcal{Y}_i}-\mathbf{L}_{\hat{C}_{i-1},\mathcal{Y}_i}^T (\mathbf{L}_{\hat{C}_{1:i-1}}^{-1})_{\hat{C}_{i-1}}\mathbf{L}_{\hat{C}_{i-1},\mathcal{Y}_i}.
\end{split}
\end{equation}
Applying Lemma \ref{lm Lc inv} completes the proof:
\begin{equation}
RHS=\mathbf{L}_{\mathcal{Y}_i}-\mathbf{L}_{\hat{C}_{i-1},\mathcal{Y}_i}^T \tilde{\mathbf{L}}_{\hat{C}_{i-1}}^{-1} \mathbf{L}_{\hat{C}_{i-1},\mathcal{Y}_i}=\tilde{\mathbf{L}}_{\mathcal{Y}_i}.
\end{equation}
\section{Introduction}
In recent years, there has been renewed interest in symbolic regression (SR), the sub-field of machine learning (ML) that concerns searching for ML models in the form of mathematical expressions~\cite{udrescu2020ai,lacava2021contemporary,zhang2021rl,dascoli2022deep}.
These models are appealing because, by their very nature, they stand a chance of being interpretable.
This is increasingly considered important, e.g., to ensure that ML is used in a fair and responsible manner~\cite{adadi2018peeking,2019DARPA,la2020genetic}.
Today, genetic programming (GP)~\cite{koza1992genetic} is one of the best approaches to discover SR models~\cite{lacava2021contemporary}.
GP is a bio-inspired meta-heuristic that works by \emph{evolving} a population of solutions that, differently from traditional genetic algorithms, need to be \emph{executed} to be evaluated, i.e., they are programs.
In the case of SR, the solutions evolved by GP encode functions as symbolic models that are evaluated in terms of their accuracy in fitting a (training) data set~\cite{koza1992genetic}.
However, when maximizing accuracy alone, GP tends to generate solutions that become unnecessarily large in the number of components (arithmetic operations, variables, constants, etc.), a phenomenon known as \emph{bloat}, which harms interpretability~\cite{bloatcontrol}.
To deal with this problem, GP can be set to optimize different objectives at the same time.
Multi-objective GP (MOGP) is typically used with the intention of searching for solutions with different trade-offs between accuracy and interpretability~\cite{kommenda2016evolving}.
At the end of a single run of MOGP, decision makers can choose the model that strikes the right balance between accuracy and interpretability.
Since interpretability is hard or impossible to define (in general terms)~\cite{lipton2018mythos,virgolin2021model}, the common way by which interpretability is pursued in MOGP for SR is by minimization of solution size (or derivations thereof, see e.g., the related work section in~\cite{Virgolin2020PPSN}), i.e., the number of components that constitute the solution.
Minimizing size is typically in conflict with maximizing accuracy in (MO)GP, because (MO)GP typically discovers better solutions by refining the function approximation they represent, i.e., by incorporating additional components~\cite{langdon2021genetic}.
Just as solution size is the objective that is typically adopted, the second version of the non-dominated sorting genetic algorithm (NSGA-II)~\cite{nsgaii} is the most adopted framework to realize MOGP.
Unfortunately, as has been shown by several works before~\cite{DeJong2003,populationcollapse2007,alphadominance,adaptivealphadominance} and is confirmed once more in this paper, NSGA-II can be inefficient when adopted for MOGP with solution size as one of the objectives.
In particular, small solutions are observed to take over the majority of the population in a few generations, while larger and more accurate solutions are hardly discovered.
In this paper, we tackle this problem at its root.
Specifically, we identify that the reason why small solutions over-replicate and hamper the discovery of larger but more accurate solutions is the fact that, besides obviously minimizing size well and thus having high chances of survival, small solutions lack \emph{evolvability}.
Here, by evolvability of a solution we mean the likelihood that variation (e.g., subtree crossover and mutation) produces a relatively accurate offspring when using that solution as a parent.
We call this cause of inefficiency of NSGA-II \emph{evolvability degeneration}.
Consequently, we present a new algorithm, named \emph{evoNSGA-II}, which improves upon standard NSGA-II by restraining the over-replication of solutions whose size is identified to be unhelpful in terms of discovering more accurate solutions.
Thanks to this, we find evoNSGA-II to be far more efficient than NSGA-II as well as other algorithms designed to deal with this issue.
\section{Background \& related work}\label{background}
\subsection{Brief recall on SR and (MO)GP}
In SR, we seek a model (or equivalently, function approximation) $f$ that is accurate in terms of fitting a given data set.
Accuracy is typically measured in terms of minimizing a loss function, such as the mean-squared-error (MSE).
Formally, given a data set $\mathcal{D} = \{ (\mathbf{x}_i, y_i ) \}^n_{i=1}$, where $n$ is the number of observations, $\mathbf{x}_i \in \mathbb{R}^d$ is the vector of $d$ feature values $\mathbf{x}_i = \left( x^{(1)}_i, \dots, x^{(d)}_i \right)^\top$, and $y_{i} \in \mathbb{R}$ the label or target variable, we seek an optimal $f^\star$ such that:
\begin{equation*}
\label{eq:loss}
f^\star := \argmin_{f\in F} \left\{\text{MSE} \left(\mathcal{D}, f\right) \right\}= \argmin_{f \in F}\left\{ \frac{\sum_{i=1}^n \left( y_i - f(\mathbf{x}_i) \right)^2}{n} \right\}.
\end{equation*}
An SR algorithm searches in the space of functions $F$ that is defined in terms of an \emph{encoding} (see next paragraph), and what atomic sub-functions ($+$, $-$, $\times$, $\div$, $\exp$, $\log$, etc.), variables ($x^{(1)}$, $x^{(2)}$, etc.), and constants ($\frac{1}{2}$, $-\pi$, $42$, etc.) appear in what order in that encoding.
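As a minimal sketch, evaluating a candidate model against this loss amounts to the following (the model and data below are illustrative placeholders, not taken from our experiments):

```python
import numpy as np

def mse(f, X, y):
    """Mean squared error of a candidate model f on a data set (X, y)."""
    preds = np.array([f(x) for x in X])
    return float(np.mean((np.asarray(y) - preds) ** 2))
```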
Alongside maximizing accuracy, we wish the model to be interpretable.
Various metrics have been proposed to seek interpretable/simpler models, see e.g., ~\cite{Virgolin2020PPSN, Vladislavleva2009}.
However, reducing model size remains a simple and popular approach (e.g., it was recently used in a large SR benchmark~\cite{lacava2021contemporary}).
GP is a popular and often top-performing method for SR~\cite{lacava2021contemporary}.
In this work, we adopt traditional GP, where solutions are encoded by trees in which each node contains one of the possible sub-functions, variables, and constants~\cite{koza1992genetic,poli2008field}.
To discover multiple solutions with trade-offs between accuracy and interpretability,
GP is set to work in a multi-objective fashion (MOGP), where the concept of Pareto-dominance is used to rank solutions.
Specifically, we say that solution $A$ Pareto-dominates solution $B$ if $A$ is \emph{equal or better} than $B$ in all objectives, and strictly better in at least one objective.
The outcome of MOGP is the best-found \emph{front}, i.e., the set of solutions that are not Pareto-dominated by any other ever found.
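A minimal sketch of this dominance relation and of extracting the non-dominated front, assuming all objectives are to be minimized (e.g., error and size):

```python
def dominates(a, b):
    """True iff solution a Pareto-dominates solution b, where a and b are
    tuples of objective values to be minimized: a is equal or better in
    all objectives and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(solutions):
    """Solutions not Pareto-dominated by any other in the list."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]
```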
NSGA-II is widely considered to be the most popular multi-objective evolutionary algorithm (MOEA).
We conducted a small literature survey to assess whether this is indeed the case for MOGP.
We detail how the survey was conducted in the Appendix.
We found that, in the last five years, NSGA-II was typically adopted as MOGP algorithm in approximately $70\%$ of the works that we surveyed, either as the main algorithm or as a baseline.
We thus believe that our intent of improving NSGA-II for MOGP is amply justified.
\subsection{Prior works on improving NSGA-II for GP}
\label{sec:prior-attempts}
Several works in the literature have identified the problem of small solutions over-replicating and hampering further evolution, which we refer to as \emph{evolvability degeneration}.
A very-closely related concept was discovered almost twenty years ago in~\cite{DeJong2003}, and termed later as \emph{population collapse}~\cite{populationcollapse2007}.
Population collapse refers to the process where the entire population converges to copies of a single solution that has a single component, i.e., the population is unable to evolve any further.
As it will be shown in this paper (in \Cref{section:evolvDegenWhyHappened}), the behavior we observe is less extreme:
even though copies of small solutions do initially occupy most of the population in early generations, NSGA-II remains able to recover, i.e., larger solutions are discovered later on, albeit at a very slow rate.
To prevent population collapse, the use of a diversity preservation mechanism is advised in~\cite{DeJong2003}.
Instead, in~\cite{populationcollapse2007} it is argued that employing mutation is enough.
Here, we find that even if one employs mutation, NSGA-II still suffers from evolvability degeneration.
Other works have also noted, and proposed means to deal with, the problem of small solutions flooding the population.
In~\cite{bleuler2001multiobjective}, it is proposed to use SPEA2~\cite{zitzler2001spea2} for MOGP, to overcome the problem just mentioned as well as bloat.
SPEA2, which we also consider in our experiments, works in a fundamentally different way than NSGA-II.
For example, SPEA2 maintains two separate populations during the search, and measures the performance of a solution based on how many solutions are dominated by that solution.
Recently,~\cite{alphadominance} and~\cite{adaptivealphadominance} explored the idea of using $\alpha$-dominance.
Instead of the original objectives (here, accuracy and size), these algorithms use linear combinations of the original objectives which are weighted by coefficients ($\alpha$) that vary over time, so as to be able to put more pressure on finding solutions of a certain trade-off.
In particular, $\alpha$ is adapted to increase the importance of accuracy over the importance of size.
In the first work~\cite{alphadominance}, fixed schedules are considered to adapt $\alpha$, according to a function of the number of generations that is linear, a cosine, or a sigmoid.
In the second work~\cite{adaptivealphadominance}, $\alpha$ is adapted dynamically based on the state of the population: if more small than accurate (and vice versa, accurate than small) solutions are detected, then $\alpha$ is adapted to give more weight to accuracy (respectively, to size).
For NSGA-II applied to discrete optimization, in~\cite{HisaoOverlapping2005} strategies are explored to remove duplicate solutions from the population.
One such strategy is used for MOGP in~\cite{virgolin2021model}, where NSGA-II is modified so that duplicate solutions are assigned the lowest priority to survive selection.
Together with classic NSGA-II, SPEA2, and the $\alpha$-dominance based algorithms, we also include this algorithm in our comparisons.
To the best of our knowledge, our work differs from the previous ones because it makes an explicit link between the over-replication of small solutions and their lack of evolvability, and proposes an algorithm that uses this information to improve the search.
\section{Evolvability degeneration}
\label{section:evolvDegenWhyHappened}
In this section, we analyze the phenomenon of evolvability degeneration in NSGA-II for MOGP.
First, we describe it by considering a use case.
Then, we show what causes it.
The latter is done by means of an experiment in which we trace how solutions of different sizes contribute to finding offspring solutions that are relatively accurate.
\subsection{Over-replication of small solutions}
\label{sec:over-replication}
We begin by reporting how the size of solutions changes over time when using NSGA-II on an example use case.
The parameter settings for NSGA-II are those in bold font in \Cref{parametersetting}, except for the population size, which is set to 500.
We show the behavior of NSGA-II on the data set Airfoil (see Sec.~B of the Appendix).
We use this data set as a recurring example for no particular reason other than it being first in alphabetic order among the data sets we considered; we observe similar trends also on the other data sets.
\begin{figure}[]
\centering
\includegraphics[width=0.7\linewidth]{NSGAII-forPopulationCollapseAnalysis_exp1standalone.pdf}
\vspace{-2mm}
\caption{Proportion of solutions of different sizes during the evolution for 30 runs of NSGA-II on Airfoil.
Lines indicate means and shaded areas represent standard deviations.
Note the exponential scaling of solution size intervals.}
\label{fig:NSGAII-forPopulationCollapseAnalysis}
\end{figure}
\Cref{fig:NSGAII-forPopulationCollapseAnalysis} shows that, at the initial stages of the evolution, the proportion of small solutions grows to occupy the majority of the population.
Only later do small solutions start to diminish, while slightly larger solutions start to appear and compete.
However, the largest solutions, in this case the ones with more than $20$ nodes, are basically not discovered.
Importantly, the solutions of size one always occupy a rather large portion of the population (above $30\%$).
This abundance of small solutions can be explained by the fact that, reasonably, small solutions of relatively high accuracy and duplicates thereof are produced by GP relatively quickly; in particular, before larger and more accurate solutions are discovered.
Because of how NSGA-II works, solutions that have the best-so-far accuracy for any given size are given high priority to survive the generation, whether they are duplicates or not.
Now, this abundance of small solutions would not necessarily be a problem if small solutions would represent fertile grounds to discover larger, more accurate solutions.
In the next section, we show that this is not the case.
\subsection{Evolvability of small and large solutions}
\label{sec:evolv-small-solutions}
A simple way to understand whether evolution stagnates or proceeds well is to measure evolvability in terms of the frequency by which well-performing offspring solutions are discovered.
Here, we particularly want to measure the frequency with which solutions of different size contribute to offspring solutions with an accuracy that is relatively high.
Since we aim to improve NSGA-II, we would ideally do this within an NSGA-II evolution.
However, as shown in \Cref{fig:NSGAII-forPopulationCollapseAnalysis}, larger solutions are hardly ever discovered, making it impossible for us to estimate their evolvability.
Thus, we design a workflow to collect enough solutions of various sizes.
First, we repeatedly run single-objective GP, 100 times, with different maximal size limitations, for up to a certain number of generations (e.g., 40).
This allows us to collect best-found solutions of various sizes which are relatively accurate, and can be imagined to contribute to a best-found front at a certain stage of an ``ideal'' NSGA-II evolution, where evolvability degeneration does not occur.
Second, we collect these solutions in different buckets, based on their size.
We also record the $90^\textit{th}$ percentile of accuracy ($\emph{acc}_{90}$) out of all solutions collected, irrespective of their size; we use this information later.
Next, for each bucket, we repeatedly (100 times) take a random solution to act as parent, and generate an offspring solution via subtree mutation.
We do the same for subtree crossover, this time considering pairs of buckets and, importantly, generating a \emph{single} offspring instead of two (this is rather common in GP~\cite{poli2008field}).
Specifically, the offspring is generated by cloning the first parent and transplanting a random subtree from the second parent (which from now on will be called \emph{donor} to avoid confusion) to replace a random subtree of the first parent (which from now on will be simply called \emph{parent}).
We perform crossover this way because, for a sufficiently large parent, in expectation the majority of the nodes in the offspring comes from the parent instead of the donor; this may play an important role in terms of evolvability.
Lastly, we measure how frequently parents of different sizes produce relatively accurate offspring, using $\emph{acc}_{90}$ as a threshold.
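The bucketing and frequency-estimation steps of this workflow can be sketched as follows. The function names and the callable arguments are our own placeholders: in the actual workflow, `vary` would be subtree mutation (or subtree crossover with an additional donor drawn from a second bucket) applied to GP trees, and `acc_threshold` would be $\emph{acc}_{90}$.

```python
import random
from collections import defaultdict

def bucket_by_size(solutions, size_of):
    """Group solutions into buckets keyed by their size."""
    buckets = defaultdict(list)
    for s in solutions:
        buckets[size_of(s)].append(s)
    return dict(buckets)

def success_frequencies(buckets, vary, accuracy, acc_threshold,
                        trials=100, rng=random):
    """For each size bucket, estimate how often a randomly drawn parent
    produces an offspring whose accuracy exceeds acc_threshold."""
    freq = {}
    for size, pool in buckets.items():
        successes = 0
        for _ in range(trials):
            parent = rng.choice(pool)
            if accuracy(vary(parent)) > acc_threshold:
                successes += 1
        freq[size] = successes / trials
    return freq
```

For crossover, `vary` would additionally sample a donor from a (possibly different) bucket and transplant a random subtree of the donor into a clone of the parent, as described above.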
We apply the proposed workflow and display the result in \Cref{heatmap-crossovermutation} (all the parameter settings are as per \Cref{sec:over-replication}), which concerns Airfoil and best-found solutions at generation 40.
We remark that we repeated the same approach on the second data set we consider, Boston, as well as with other termination limits (generation 10, 20, and 30), and means of assessing whether an offspring is relatively accurate (e.g., with respect to the accuracy of the parent);
we observed the same general trends as shown in \Cref{heatmap-crossovermutation}.
Note that the heat-map for subtree crossover is not symmetric due to the reasons explained in the previous paragraph.
The frequencies found for subtree crossover indicate that the parent needs to be sufficiently large for variation to be successful with large probability, while the donor can be of any size.
Similarly, also for mutation larger solutions are more evolvable.
\begin{figure}[]
\centering
\includegraphics[width=0.9\linewidth]{crossover_mutation_separate_share_colorbar_NotBold_reviseXYLabel2.pdf}
\caption{Frequency (normalized between min and max, color coded as depicted by the legend on the right) of producing an offspring with a good accuracy (above the $90^\textit{th}$ percentile of those obtained in all runs) based on the size of the parent and donor solutions.
Left: For subtree crossover. Right: For subtree mutation.
Note that a solution of size 2 (to be used as a parent or donor) was never returned by single-objective GP because a more accurate solution of size 1 exists and was systematically discovered.}
\label{heatmap-crossovermutation}
\end{figure}
The result just shown confirms our hypothesis that smaller solutions hamper the search.
Therefore, the fact that in the early stages of an NSGA-II run, the population is flooded by copies of small solutions, is highly undesirable.
We remark that penalizing duplicates altogether, as in fact was done in some earlier approaches (see \Cref{sec:prior-attempts}), is not necessarily the optimal strategy.
In fact, having duplicates of highly-evolvable solutions may be the best option.
This idea is explored in our algorithm, presented in the next section.
\section{Improving NSGA-II based on evolvability}\label{Evolvability-based NSGA-II}
We now present our proposal to improve NSGA-II, i.e., evoNSGA-II.
Since evoNSGA-II mostly follows NSGA-II, we begin by recalling the workings of NSGA-II.
Next, we explain what is new in evoNSGA-II, i.e., the estimation of the evolvability of solutions of different size, and the use of this information to prevent the over-replication of solutions with low evolvability.
\subsection{NSGA-II}
\label{sec:workings-nsga-ii}
\Cref{NSGAIIEVONSGAII} shows the pseudo-code of NSGA-II, as well as that of evoNSGA-II: in fact, the only change we make concerns how the population is updated at the end of a generation.
In every generation of (evo)NSGA-II, firstly an offspring population $\mathcal{O}$ is generated from promising solutions of the current population $\mathcal{P}$.
Promising solutions are typically chosen with tournament selection, and then undergo variation, typically by means of subtree crossover and subtree mutation.
In (evo)NSGA-II, tournament selection compares solutions based on their non-domination \emph{rank} (explained below) and, if the solutions share the same rank, based on their \emph{crowding distance} (explained below too).
Next, $\mathcal{P}$ and $\mathcal{O}$ are merged and undergo non-dominated sorting.
Non-dominated sorting is a process that subdivides all solutions into layers called \emph{fronts}, such that for any two solutions in a same front, those two solutions do not Pareto-dominate each other;
moreover, for each solution in the $i^\textit{th}$ front, there exists at least one solution in the $(i-1)^\textit{th}$ front that Pareto-dominates it.
The rank of a solution represents the front to which that solution belongs, rank $1$ being the best.
The algorithm proceeds by parsing each front and assigning to each solution in that front a \emph{crowding distance}.
The crowding distance is a measure of sparseness (the more a solution is isolated the better) that is computed in the objective space using the L1 norm.
A solution that takes an extreme (minimum or maximum) value of some objective within its front is assigned an infinite (and thus best) crowding distance.
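As a concrete reference, the crowding-distance computation can be sketched as follows, following the standard NSGA-II formulation in which the boundary solutions of each objective receive an infinite distance; `objectives(s)` is a placeholder for a function returning the tuple of objective values of a solution.

```python
def crowding_distances(front, objectives):
    """Compute the crowding distance of each solution in one front.
    objectives(s) returns the tuple of objective values of solution s."""
    n = len(front)
    dist = [0.0] * n
    if n == 0:
        return dist
    for m in range(len(objectives(front[0]))):
        order = sorted(range(n), key=lambda i: objectives(front[i])[m])
        lo = objectives(front[order[0]])[m]
        hi = objectives(front[order[-1]])[m]
        # boundary solutions of each objective get an infinite distance
        dist[order[0]] = dist[order[-1]] = float('inf')
        if hi == lo:
            continue
        for k in range(1, n - 1):
            gap = (objectives(front[order[k + 1]])[m]
                   - objectives(front[order[k - 1]])[m])
            dist[order[k]] += gap / (hi - lo)  # normalized L1 contribution
    return dist
```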
Finally, the population is updated for the next generation, using an NSGA-II-specific form of \emph{truncation} selection.
This is where NSGA-II and evoNSGA-II differ.
In NSGA-II, the new population is formed by selecting the solutions with rank 1, then those with rank 2, and so on, until the selection of all solutions with a certain rank would result in exceeding the population size.
In that case, the crowding distance is used to discern which subset of solutions with that certain rank still to select for the new population.
The remaining solutions are discarded.
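This truncation scheme can be sketched as follows (a simplification with hypothetical names; `fronts` is the output of non-dominated sorting, `crowd_dist` maps each solution to its crowding distance):

```python
def nsga2_truncation(fronts, crowd_dist, pop_size):
    """Fill the new population front by front; the first front that does
    not fit entirely is filtered by descending crowding distance."""
    new_pop = []
    for front in fronts:
        if len(new_pop) + len(front) <= pop_size:
            new_pop.extend(front)
        else:
            remaining = pop_size - len(new_pop)
            by_sparseness = sorted(front, key=lambda s: crowd_dist[s],
                                   reverse=True)
            new_pop.extend(by_sparseness[:remaining])
            break
    return new_pop
```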
\begin{algorithm}
\caption{Workflow of NSGA-II and evoNSGA-II\\ \small \textbf{Note}:
\emph{Truncation} is the only step that is different between the two.}
\begin{algorithmic}[1]
\Require \emph{Pop\_size, stop\_criteria}
\State $\mathcal{P} \leftarrow$ \Call{Initialize\_population}{Pop\_size}
\State \Call{Evaluate}{$\mathcal{P}$}
\State \emph{Fronts}$\leftarrow$\Call{Fast\_non-dominated\_sorting}{$\mathcal{P}$}
\For{\emph{front} in \emph{Fronts}}
\State \Call{Crowding\_distance}{\emph{front}}
\EndFor
\While{$\neg$ \emph{stop\_criteria}}
\State $\mathcal{P}^\prime \leftarrow$ \Call{Tournament}{$\mathcal{P}$}
\State $\mathcal{O}$ $\leftarrow$ \Call{Variation}{$\mathcal{P}^\prime$}
\State \Call{Evaluate}{$\mathcal{O}$}
\State \emph{Fronts} $\leftarrow$ \Call{Fast\_non-dominated\_sorting}{$\mathcal{P} \cup \mathcal{O}$}
\For{\emph{front} in \emph{Fronts}}
\State \Call{Crowding\_distance}{\emph{front}}
\EndFor
\State \emph{$\mathcal{P}$} $\leftarrow$ \boxed{\Call{Truncation}{\emph{Fronts}}}
\EndWhile
\end{algorithmic}
\label{NSGAIIEVONSGAII}
\end{algorithm}
evoNSGA-II additionally uses estimates of evolvability for each size of solution to decide whether a solution should be selected.
Specifically, we generate a table of bounds $\mathcal{B}$ that tells how many solutions of a certain size can be selected in the truncation selection step.
This way we can prevent the over-replication of small, non-evolvable solutions.
We proceed by explaining how $\mathcal{B}$ is built.
\subsection{Construction of $\mathcal{B}$}
We keep track of the evolvability of solutions in terms of their capability of generating accurate offspring of different sizes.
Namely, we build a table $\mathcal{B}$ containing pairs ($s$,$b$), where $s$ is a size and $b$ is a bound on the number of times that solutions of size $s$ can be selected by truncation selection to form the new population of evoNSGA-II.
We want the number $b$ to be proportionate to the (estimated) evolvability of the solutions of size $s$.
\Cref{alg:bounds} shows the construction of $\mathcal{B}$ in detail.
For each offspring, the size of \emph{its parent} $s$ is considered.
Then, a counter (\emph{successes}) that is dedicated to that $s$ is increased if the accuracy of the offspring is larger than the median accuracy computed over $\mathcal{P}$ (we choose the median over the mean because outliers are common in GP for SR).
Note that we do not need to re-compute the accuracy of solutions, as they can simply be cached when solutions are evaluated.
We also keep track of the number of offspring that were generated from parents of size $s$ (\emph{attempts}).
Finally, a simple measure of evolvability is computed for $s$, as the ratio between the number of successes and the number of attempts.
This ratio is in $[0,1]$ and the larger its value, the better it is.
We fill $\mathcal{B}$ with these ratios, for each size.
Recall that we wish to use $\mathcal{B}$ in the truncation selection process, which is applied to $\mathcal{P} \cup \mathcal{O}$.
Importantly, $\mathcal{P} \cup \mathcal{O}$ contains both solutions that were not selected as parents, and offspring solutions:
for those, there may exist a size that is not in $\mathcal{B}$, i.e., for which we have no information on its evolvability.
Therefore, we artificially fill this information for potentially-missing sizes in $\mathcal{B}$ (line 13).
Namely, for each missing size, we take the weighted average of the ratios observed for the closest smaller and closest larger size.
Last but not least, we perform a normalization step on $\mathcal{B}$, transforming the ratios so that their sum amounts to the population size ($|\mathcal{P}|$).
This way, for any size $s$, $\mathcal{B}[s]$ defines how many solutions should be selected at most.
In the next section, we illustrate how $\mathcal{B}$ is used in the truncation selection of evoNSGA-II.
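A Python sketch of this construction follows, under the assumption (not fully specified above) that the weighted average for a missing size interpolates the two closest observed ratios with inverse-distance weights; all names are placeholders.

```python
import statistics

def build_B(parent_sizes, offspring_acc, pop_acc, all_sizes, pop_size):
    """parent_sizes[i] / offspring_acc[i]: size of the parent and accuracy
    of the i-th offspring; pop_acc: accuracies of the current population;
    all_sizes: sizes occurring in the union of P and O."""
    median_acc = statistics.median(pop_acc)
    attempts, successes = {}, {}
    for s, acc in zip(parent_sizes, offspring_acc):
        attempts[s] = attempts.get(s, 0) + 1
        if acc > median_acc:
            successes[s] = successes.get(s, 0) + 1
    B = {s: successes.get(s, 0) / attempts[s] for s in attempts}
    observed = sorted(B)
    for s in set(all_sizes) - set(observed):
        lower = [o for o in observed if o < s]
        upper = [o for o in observed if o > s]
        if lower and upper:  # interpolate between the nearest neighbors
            lo, hi = lower[-1], upper[0]
            w = (s - lo) / (hi - lo)
            B[s] = (1 - w) * B[lo] + w * B[hi]
        else:  # extrapolate flatly at the extremes
            B[s] = B[lower[-1]] if lower else B[upper[0]]
    total = sum(B.values()) or 1.0  # guard against an all-zero table
    return {s: pop_size * v / total for s, v in B.items()}
```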
\begin{algorithm}
\caption{Build\_$\mathcal{B}$}
\begin{algorithmic}[1]
\Require $\mathcal{P}$, $\mathcal{O}$
\State \emph{max\_size} $\gets$ \Call{Max\_size}{$\mathcal{P} \cup \mathcal{O}$}
\State \emph{attempts}[i] $\gets 0 \text{ for } i \in \left\{1, \dots, \textit{max\_size} \right\}$
\State \emph{successes}[i] $\gets 0 \text{ for } i \in \left\{1, \dots, \textit{max\_size}\right\}$
\State \emph{median\_accuracy} $\leftarrow$ \Call{Median\_accuracy}{$\mathcal{P}$}
\For{$o \in \mathcal{O}$}
\State $s \gets \Call{Fetch\_parent\_size}{o}$
\If{$ \Call{Accuracy}{o} > \emph{median\_accuracy}$}
\State $\emph{successes}[s] \gets \emph{successes}[s] + 1$
\EndIf
\State $\emph{attempts}[s] \gets \emph{attempts}[s] + 1$
\EndFor
\State $\mathcal{B}[i] \leftarrow \frac{\textit{successes}[i]}{\textit{attempts}[i]} \text{ for } i \in \{1, \dots, \emph{max\_size} \} : \emph{attempts}[\emph{i}] \neq 0 $
\State $\mathcal{B} \leftarrow \Call{Fill\_missing\_sizes}{\mathcal{B}, \mathcal{P}, \mathcal{O}}$
\State $\mathcal{B} \leftarrow \Call{Normalize}{\mathcal{B}, |\mathcal{P}|}$
\State \Return~$\mathcal{B}$
\end{algorithmic}
\label{alg:bounds}
\end{algorithm}
\subsection{Use of $\mathcal{B}$ during truncation selection}
The way truncation selection works in evoNSGA-II is the same as in NSGA-II (as described before, in \Cref{sec:workings-nsga-ii}), except for the fact that we will now use $\mathcal{B}$ to decide how many solutions of a certain size can be selected.
We build $\mathcal{B}$ after the offspring population $\mathcal{O}$ has been evaluated, so that it is ready to be used for truncation selection.
Like in NSGA-II, our truncation selection parses the solutions progressively, based on their rank.
Different from NSGA-II, we do not immediately copy the solution that is currently in consideration;
first, we consider the size $s$ of that solution, and the respective bound $\mathcal{B}[s]$.
If the number of solutions of size $s$ copied so far is less than or equal to $\mathcal{B}[s]$, then the solution is selected;
otherwise, the solution is skipped, and the next solution is considered.
It can happen that, in \Cref{alg:bounds}, large evolvability values are estimated for sizes for which there is a limited number of solutions in $\mathcal{P} \cup \mathcal{O}$, while low values are estimated for sizes for which there is an abundant number of solutions.
Consequently, to respect the bounds in $\mathcal{B}$, the number of selected solutions may be lower than the population size.
If that happens, we reset the counters for how many solutions of each size have been copied, and start the truncation selection process anew, from rank 1 onwards.
This way, even if the bound for the size $s$ is exceeded, we maintain an approximate proportionality between estimated evolvability of $s$ and the number of selected solutions of size $s$.
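The bounded truncation with restart described above can be sketched as follows (hypothetical names; `ranked` is assumed to contain all candidate solutions already sorted by rank and, within ranks, by crowding distance, and `B` is assumed complete for all sizes as produced by Algorithm 2):

```python
def evo_truncation(ranked, size_of, B, pop_size):
    """Select solutions in rank order, taking a solution only while the
    count for its size is within B[size]; if a full pass leaves the
    population incomplete, reset the per-size counters and start a new
    pass over the not-yet-selected solutions."""
    selected, taken = [], set()
    while len(selected) < pop_size:
        counts = {}
        progressed = False
        for i, sol in enumerate(ranked):
            if i in taken:
                continue
            s = size_of(sol)
            if counts.get(s, 0) <= B.get(s, 0):
                selected.append(sol)
                taken.add(i)
                counts[s] = counts.get(s, 0) + 1
                progressed = True
                if len(selected) == pop_size:
                    return selected
        if not progressed:
            break  # no candidates left; return a smaller population
    return selected
```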
Lastly, we remark that a single generation of evoNSGA-II is basically as fast as NSGA-II, as it entails minimal overhead.
In fact, from the perspective of computational complexity, all operations needed to build and use $\mathcal{B}$ are linear in the population size, and thus subsumed by the complexity of other operations, particularly non-dominated sorting and evaluation of accuracy.
\section{Experimental setup}\label{section:experimental-setup}
We consider ten data sets that are commonly used in recent literature on GP for SR.
The information for these data sets is reported in Appendix B due to space limitations.
For any run, we use a traditional Monte-Carlo split of the data set into training and test set, with respective proportions of 75-25\%.
Moreover, all data sets are standardized (based on the information in the training set) by subtracting the mean and dividing by the standard deviation for each feature separately, as advised in~\cite{dick2020feature}.
For comparison, we consider seven algorithms besides evoNSGA-II: classic NSGA-II~\cite{nsgaii}, SPEA2~\cite{zitzler2001spea2}, $\alpha$-dominance-based NSGA-II~\cite{alphadominance} with $\alpha$ varied with a linear ($\alpha$-dom. lin.), cosine~($\alpha$-dom. cos.), or sigmoid~($\alpha$-dom. sig.) schedule, as well as its adaptive version~\cite{adaptivealphadominance} (Adap.~$\alpha$-dom.), and a simple extension of NSGA-II as mentioned in~\cite{virgolin2021model}, where non-dominated sorting assigns an artificial worst-possible rank to duplicate solutions.
We refer to the latter as NSGA-II with penalization of duplicates, NSGA-II+PD in short.
For each algorithm, we keep track of the best-ever found non-dominated solutions (with respect to the training set) in an external archive, and return that archive at the end of the evolution.
Solutions are evaluated in terms of accuracy (to maximize) and size (to minimize).
To maximize accuracy, we minimize the MSE (\Cref{eq:loss}) augmented by linear scaling~\cite{keijzer2003improving}.
Linear scaling effectively enables to optimize in terms of a form of absolute correlation to the target variable $y$, typically causing a large improvement when GP is applied on real-world SR data sets~\cite{virgolin2019linear}.
Many state-of-the-art GP algorithms use linear scaling during the evolution~\cite{lacava2021contemporary}.
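For reference, the MSE under linear scaling admits a closed form: the optimal slope and intercept for predictions $p$ are $b = \mathrm{cov}(y,p)/\mathrm{var}(p)$ and $a = \bar{y} - b\,\bar{p}$. A minimal sketch (the function name is ours):

```python
def linearly_scaled_mse(y, p):
    """MSE of a + b*p against y, with a and b chosen optimally (linear
    scaling); a constant prediction degenerates to predicting mean(y)."""
    n = len(y)
    my = sum(y) / n
    mp = sum(p) / n
    var_p = sum((v - mp) ** 2 for v in p) / n
    if var_p == 0.0:
        return sum((v - my) ** 2 for v in y) / n
    b = sum((yi - my) * (pi - mp) for yi, pi in zip(y, p)) / n / var_p
    a = my - b * mp
    return sum((yi - (a + b * pi)) ** 2 for yi, pi in zip(y, p)) / n
```

Note that the scaled MSE never exceeds the variance of $y$, since the constant model $a = \bar{y}$, $b = 0$ is always available.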
To evaluate the quality of multi-objective search, we compute the hypervolume (HV) of the archive of best-found non-dominated solutions~\cite{Hypervolume}.
The HV indicates, for a set of solutions, the area in objective space that is Pareto-dominated by that set of solutions, bounded by a reference point.
The reference point represents an artificial solution with (very) poor performance in terms of all considered objectives, and should be chosen to be commensurate to the ranges of the objectives at play.
We set the reference point to be $(1.1,1.1)$ (meaning that the best-possible HV will be $1.1^2=1.21$) and normalize the MSE and size to be within 0 and 1.
Even though the MSE would normally be unbounded from above, performing linear scaling guarantees that the maximal training error corresponds to predicting the mean of $y$;
thus we can achieve the desired normalization by dividing by the variance of $y$.
Regarding size, since very large solutions will likely not be interpretable, we enforce a maximal solution size of $100$ (see \Cref{parametersetting}) by deleting any offspring that exceeds that limit (and cloning the parent in its place); size is then normalized by dividing by $100$.
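With both objectives normalized to $[0,1]$ and minimized, the HV with respect to the reference point $(1.1, 1.1)$ can be computed with a simple sweep over the points sorted by the first objective (a sketch; the function name is ours):

```python
def hypervolume_2d(points, ref=(1.1, 1.1)):
    """Area dominated by a set of 2-D points (both objectives minimized)
    with respect to a reference point; dominated points in the set and
    points beyond the reference point contribute nothing."""
    pts = sorted(p for p in points if p[0] < ref[0] and p[1] < ref[1])
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y < prev_y:  # only non-dominated points extend the area
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv
```

With the ideal point $(0,0)$ this returns the best-possible HV of $1.21$ mentioned above.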
We perform $30$ runs for each experiment, to account for the randomness of train-test splitting and the stochasticity inherent to GP.
We strive to present our results in terms of a typical parameter configuration that appears often in GP literature.
To that end, we actually consider a number of typical configurations; see \Cref{parametersetting}, where some parameters have different possible settings (namely, population size, tournament size, and proportion between crossover and mutation).
Note that starred operators (e.g., $\div^*$) implement protection and ephemeral random constants (ERC)~\cite{poli2008field} are sampled within $\mathcal{U}(-5,+5) \times \max_{i,j} |x^{(j)}_i|$.
For each algorithm, we find the configuration that leads to the \emph{average} performance for that algorithm on the training set, as follows.
First, for each data set, we consider the training HV (averaged across 30 runs) obtained on that data set by the different configurations.
Configurations are sorted based on their HV, and their sort order is taken as a score.
Next, an overall, single score is assigned to each configuration, by averaging that configuration's scores across the data sets.
Finally, we select the configuration whose overall score is closest to the one obtained by averaging the scores of all configurations.
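Our reading of this ranking procedure, as a sketch (the function name is ours; `hv[dataset][config]` holds the mean training HV of a configuration on a data set, and ties in HV are ignored for simplicity):

```python
def average_configuration(hv, higher_is_better=True):
    """Return the configuration whose mean rank across data sets is
    closest to the grand mean rank over all configurations."""
    configs = list(next(iter(hv.values())).keys())
    scores = {c: 0.0 for c in configs}
    for results in hv.values():
        ranked = sorted(configs, key=lambda c: results[c],
                        reverse=higher_is_better)
        for rank, c in enumerate(ranked, start=1):
            scores[c] += rank / len(hv)  # accumulate the mean rank
    overall = sum(scores.values()) / len(configs)
    return min(configs, key=lambda c: abs(scores[c] - overall))
```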
For example, the parameter settings in bold in \Cref{parametersetting} represent the configuration obtained for evoNSGA-II; the configurations for the other algorithms are reported in Appendix C.
\begin{table}
\caption{Parameter settings considered for \emph{evoNSGA-II} and the other algorithms.
Tournament size of 1 corresponds to random parent selection.
SPEA2 does not employ tournament selection.
For parameters with multiple possible settings (i.e., the first three), the settings in bold correspond to those that result in evoNSGA-II achieving the average overall performance in terms of hyper-volume on the training set.
}
\label{parametersetting}
\begin{tabular}{lc}
\toprule
Parameter & Considered settings\\
\midrule
Population size & 250, 500, \textbf{1000}, 2000, 5000\\
Tournament size & 1, \textbf{2}, 7\\
Crossover-mutation proportion & 0.5-0.5, \textbf{0.9-0.1}\\
\hline
Initialization & Ramped half-\&-half (2--6)\\
Maximum solution size & 100\\
Function set & \small $\{+,-,\times,\div^*,\sqrt{}^*,\log^* \}$\\
Terminal set & \small $\{x^{(1)}, \dots, x^{(d)}, \text{ERC} \}$\\
\bottomrule
\end{tabular}
\end{table}
We use the Mann-Whitney-U test~\cite{mann1947test} to assess whether the distribution of HVs obtained by an algorithm is better than that of another, determining significance for $p\text{-value} < 0.05$, with Bonferroni correction~\cite{bonferroni1936teoria}.
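In practice one would rely on an existing implementation (e.g., SciPy's `mannwhitneyu`); purely as an illustration, the U statistic (with midranks for ties) and the Bonferroni-corrected decision can be sketched as:

```python
def mann_whitney_u(a, b):
    """U statistic of sample a versus sample b, using midranks for ties."""
    values = sorted(list(a) + list(b))
    ranks = {}
    i = 0
    while i < len(values):
        j = i
        while j < len(values) and values[j] == values[i]:
            j += 1
        ranks[values[i]] = (i + j + 1) / 2  # average rank of the tie block
        i = j
    rank_sum_a = sum(ranks[v] for v in a)
    return rank_sum_a - len(a) * (len(a) + 1) / 2

def bonferroni_significant(p_values, alpha=0.05):
    """Each comparison must pass the threshold alpha / #comparisons."""
    threshold = alpha / max(len(p_values), 1)
    return [p < threshold for p in p_values]
```

Converting U to a p-value (via the exact distribution or the normal approximation) is omitted here for brevity.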
\begin{table*}
\caption{Mean (standard deviation) of the HV computed on the training set for 30 runs of the considered algorithms.
This table corresponds to the settings in bold in \Cref{parametersetting}.
The symbols $+,-,=$ indicate, for each algorithm other than evoNSGA-II, whether that algorithm's distribution of results is, respectively, significantly better than, significantly worse than, or not significantly different from that of evoNSGA-II.
The last row summarizes this information.
}
\resizebox{\textwidth}{!}{
\begin{tabular}{lcccccccc}
\toprule
Data set & evoNSGA-II & Adap. $\alpha$-dom. & $\alpha$-dom. cos. & $\alpha$-dom. lin. & $\alpha$-dom. sig. & NSGA-II & NSGA-II+PD & SPEA2 \\
\midrule
Airfoil & 0.799(0.021) & 0.595(0.023)- & 0.744(0.016)- & 0.745(0.019)- & 0.736(0.016)- & 0.652(0.035)- & 0.780(0.017)- & 0.624(0.049)- \\
Boston & 1.023(0.012) & 0.895(0.024)- & 0.984(0.010)- & 0.986(0.014)- & 0.980(0.014)- & 0.954(0.015)- & 1.017(0.010)= & 0.929(0.019)- \\
Concrete & 0.939(0.018) & 0.684(0.033)- & 0.884(0.029)- & 0.876(0.025)- & 0.864(0.030)- & 0.792(0.037)- & 0.941(0.018)= & 0.725(0.053)- \\
Dow chemical & 0.972(0.013) & 0.714(0.043)- & 0.914(0.013)- & 0.914(0.018)- & 0.908(0.023)- & 0.779(0.057)- & 0.977(0.007)= & 0.836(0.037)- \\
Energy: cooling & 1.119(0.008) & 1.043(0.013)- & 1.094(0.010)- & 1.094(0.010)- & 1.088(0.014)- & 1.058(0.009)- & 1.097(0.015)- & 1.040(0.013)- \\
Energy: heating & 1.152(0.004) & 1.076(0.017)- & 1.125(0.010)- & 1.125(0.008)- & 1.117(0.012)- & 1.098(0.010)- & 1.131(0.010)- & 1.054(0.014)- \\
Tower & 1.027(0.013) & 0.824(0.046)- & 0.994(0.012)- & 0.985(0.027)- & 0.984(0.024)- & 0.941(0.030)- & 1.029(0.006)= & 0.867(0.047)- \\
Wine: red & 0.513(0.005) & 0.446(0.007)- & 0.491(0.004)- & 0.490(0.008)- & 0.486(0.008)- & 0.461(0.009)- & 0.509(0.004)- & 0.459(0.009)- \\
Wine: white & 0.449(0.005) & 0.378(0.008)- & 0.426(0.007)- & 0.422(0.008)- & 0.416(0.009)- & 0.403(0.007)- & 0.446(0.004)= & 0.387(0.010)- \\
Yacht & 1.177(0.004) & 1.141(0.019)- & 1.174(0.001)- & 1.174(0.001)- & 1.174(0.001)- & 1.163(0.013)- & 1.178(0.001)+ & 1.150(0.011)- \\
\hline
Total $+/-/=$ & --- & 0/10/0 & 0/10/0 & 0/10/0 & 0/10/0 & 0/10/0 & 1/4/5 & 0/10/0 \\
\bottomrule
\end{tabular}
}
\label{resulttraining}
\end{table*}
\begin{table*}
\caption{Results for the test set, formatting similar to that of \Cref{resulttraining}.}
\resizebox{\textwidth}{!}{
\begin{tabular}{lcccccccc}
\toprule
Data set & evoNSGA-II & Adap. $\alpha$-dom. & $\alpha$-dom. cos. & $\alpha$-dom. lin. & $\alpha$-dom. sig. & NSGA-II & NSGA-II+PD & SPEA2 \\
\midrule
Airfoil & 0.813(0.021) & 0.669(0.027)- & 0.781(0.018)- & 0.782(0.017)- & 0.781(0.013)- & 0.708(0.030)- & 0.795(0.019)- & 0.690(0.041)- \\
Boston & 0.969(0.019) & 0.895(0.028)- & 0.966(0.013)= & 0.951(0.073)= & 0.959(0.017)= & 0.949(0.012)- & 0.976(0.019)= & 0.923(0.023)- \\
Concrete & 0.930(0.025) & 0.692(0.040)- & 0.892(0.027)- & 0.883(0.025)- & 0.875(0.025)- & 0.815(0.040)- & 0.939(0.015)= & 0.734(0.056)- \\
Dow chemical & 0.920(0.020) & 0.696(0.050)- & 0.864(0.016)- & 0.862(0.017)- & 0.855(0.025)- & 0.752(0.055)- & 0.927(0.026)= & 0.785(0.036)- \\
Energy: cooling & 1.111(0.009) & 1.033(0.017)- & 1.092(0.009)- & 1.089(0.012)- & 1.082(0.016)- & 1.052(0.009)- & 1.097(0.016)- & 1.028(0.016)- \\
Energy: heating & 1.144(0.007) & 1.087(0.016)- & 1.128(0.010)- & 1.126(0.009)- & 1.124(0.012)- & 1.101(0.009)- & 1.136(0.009)= & 1.067(0.013)- \\
Tower & 1.022(0.020) & 0.812(0.049)- & 0.986(0.035)- & 0.981(0.039)- & 0.985(0.025)- & 0.941(0.037)- & 1.031(0.005)= & 0.859(0.050)- \\
Wine: red & 0.629(0.059) & 0.591(0.009)- & 0.634(0.009)= & 0.633(0.013)= & 0.632(0.011)= & 0.615(0.014)- & 0.647(0.013)= & 0.613(0.011)- \\
Wine: white & 0.359(0.074) & 0.354(0.010)= & 0.390(0.020)= & 0.378(0.050)= & 0.387(0.012)= & 0.379(0.006)= & 0.400(0.025)= & 0.361(0.012)= \\
Yacht & 1.170(0.002) & 1.131(0.024)- & 1.167(0.004)- & 1.167(0.002)- & 1.167(0.003)- & 1.155(0.013)- & 1.172(0.001)+ & 1.137(0.013)- \\
\hline
Total $+/-/=$ & --- & 0/9/1 & 0/7/3 & 0/7/3 & 0/7/3 & 0/9/1 & 1/2/7 & 0/9/1 \\
\bottomrule
\end{tabular}
}
\label{resulttesting}
\end{table*}
\section{Results}\label{section:results}
\subsection{Benchmarking results}
\subsubsection{Results and analysis}
\Cref{resulttraining,resulttesting} show the results obtained when considering the accuracy as measured on the training set and on the test set, respectively, for the parameter settings that result in the average performance.
At training time, evoNSGA-II performs significantly better than any other algorithm in a vast number of cases, sometimes substantially so (e.g., when compared to Adap.~$\alpha$-dom., NSGA-II, and SPEA2 on several data sets).
In fact, evoNSGA-II is found to be significantly better than another algorithm 64 times, worse only once, and not significantly different 5 times.
When it comes to the test set, evoNSGA-II remains vastly superior, although the number of statistical comparisons that are not significantly different rises to 19 (better 50 times, worse once).
This is due to the generalization gap between the training and the test set; improving generalization is not the focus of this paper.
Overall, only NSGA-II+PD is capable of coming close to the performance of evoNSGA-II.
At training time, evoNSGA-II is better than NSGA-II+PD on 4 data sets, worse on 1, and equal on 5.
At test time, the difference between the two shrinks even more, due to the generalization gap.
Nevertheless, except in one case (Yacht), evoNSGA-II is essentially equal or better than NSGA-II+PD.
We proceed by briefly describing what we observe for the other parameter configurations.
More detailed information is reported in the Appendix.
Across the algorithms, using a larger population size and a larger tournament size contributes to improving performance, while it is unclear whether using more or less crossover than mutation is preferable.
Across the configurations, evoNSGA-II remains the best-performing approach, although it is sometimes matched by NSGA-II+PD.
However, we observe that with larger population sizes, the gap between evoNSGA-II and NSGA-II+PD grows in favor of the former.
For example, at training time with the best-possible parameter configurations (for both algorithms, using a population size of 5000), evoNSGA-II is significantly superior to NSGA-II+PD on 6 data sets, equal on 4, and worse on none.
This is likely because larger population sizes allow for better estimations of evolvability.
\subsubsection{Further analysis: convergence of HV}
To provide further evidence that evoNSGA-II is typically superior to the other algorithms,
\Cref{HVCurveDuringIteration} (left) shows the convergence of the (training) HV on Airfoil, again using the parameter configurations that represent average performance.
Due to space limitations, we show the respective plots for other data sets in Appendix E.
For clarity, since the non-adaptive $\alpha$-dominance algorithms perform similarly, we report only the one with linear scheduling ($\alpha$-dom.~lin.) in \Cref{HVCurveDuringIteration}.
As can be seen, the HV obtained by NSGA-II, SPEA2, and Adap.~$\alpha$-dom.~tends to converge to a suboptimal value very soon after the first dozen generations.
The other algorithms, i.e., evoNSGA-II, $\alpha$-dom.~lin., and NSGA-II+PD perform similarly, however evoNSGA-II is slightly superior throughout the whole search.
Furthermore, \Cref{HVCurveDuringIteration} (right) shows the distribution of the solutions in the final archives (for the 30 runs).
The most apparent result is that only evoNSGA-II is capable of reliably discovering accurate solutions of size larger than (approximately) 30.
Interestingly, NSGA-II and NSGA-II+PD can be better than evoNSGA-II in discovering some relatively accurate solutions of size between 10 and 20 (approximately).
This is because the search of NSGA-II and NSGA-II+PD can concentrate more in that area, as they discover larger and even more accurate solutions less frequently than evoNSGA-II.
\begin{figure}[]
\centering
\includegraphics[width=\linewidth]{fronts_combined_V2.pdf}
\caption{Comparison between the algorithms in terms of HV during the evolution (left) and final front (right) for 30 runs on Airfoil at training time.
Left: Lines represent means and shaded areas represent standard deviations.
Right: All solutions in the archives from the 30 runs are shown.
}
\label{HVCurveDuringIteration}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.97\linewidth]{explen-iteration-marginalizedHeatMap_withoutNSGAII_revise_fontV2.pdf}
\caption{Left: Each column of the heat-map shows, for a given generation, the evolvability averaged over crossover and mutation, computed with the workflow of \Cref{sec:evolv-small-solutions} on Airfoil, and normalized across solution size (dashed entries represent absent sizes).
Right: Proportions of solutions of different sizes in evoNSGA-II during 30 evolutions on Airfoil (lines are means, shaded areas are standard deviations).}
\label{MarginalizedHeatMap-explength}
\end{figure}
\subsection{Did it work as expected?}
As last result, we show that evoNSGA-II does not, in fact, exhibit evolvability degeneration.
\Cref{MarginalizedHeatMap-explength} shows, on Airfoil, the evolvability that is estimated using the workflow proposed in \Cref{sec:evolv-small-solutions} (left panel), and also the proportion of solutions of different sizes in the population of evoNSGA-II during the evolution (right panel), again using the parameter configuration that represents average performance.
As can be seen, the proportion of solutions of larger sizes increases over time during the evolution process of evoNSGA-II, which is in agreement with the expected evolvability from our analysis.
This result is in stark contrast with the one displayed in \Cref{fig:NSGAII-forPopulationCollapseAnalysis} (note the different scale in sizes), where NSGA-II could not discover larger and more accurate solutions.
We produced the same plots for the other algorithms and included them in the Appendix D;
there, it can be seen that SPEA2 and Adap.~$\alpha$-dom., like NSGA-II, suffer from evolvability degeneration.
The other algorithms perform better, yet still assign fewer copies to larger solutions than evoNSGA-II does.
\section{Discussion}\label{section:discussion}
In this work, we investigated \emph{evolvability degeneration}, i.e., the phenomenon by which small solutions over-replicate and hamper search progress because they represent unfruitful parents for the discovery of larger and more accurate offspring.
Next, we proposed to extend NSGA-II into evoNSGA-II, which estimates the evolvability of solutions based on their size and accordingly bounds how many solutions of any given size can be selected for the next generation.
Lastly, we found that evoNSGA-II is largely superior to other recent MOGP algorithms, and is indeed capable of allowing solutions of highly-evolvable size to thrive.
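In essence, the per-size bound can be sketched as follows. This is a deliberately simplified, hypothetical sketch: the evolvability per size is assumed to be given, and selection within the budget is by fitness only, whereas evoNSGA-II combines the bound with non-dominated sorting and estimates evolvability during the run.

```python
import math
from collections import defaultdict

def size_capped_selection(population, evolvability, target_size):
    """Select up to target_size solutions, bounding the number of survivors
    of each size in proportion to that size's evolvability estimate.

    population:   list of (size, fitness) pairs, higher fitness is better
    evolvability: dict mapping size -> non-negative evolvability estimate
    """
    sizes = {s for s, _ in population}
    z = sum(evolvability.get(s, 0.0) for s in sizes) or 1.0
    cap = {s: math.ceil(target_size * evolvability.get(s, 0.0) / z) for s in sizes}
    order = sorted(range(len(population)), key=lambda i: -population[i][1])
    taken, chosen = defaultdict(int), []
    for i in order:                       # pass 1: best first, respect per-size caps
        s = population[i][0]
        if len(chosen) < target_size and taken[s] < cap[s]:
            chosen.append(i)
            taken[s] += 1
    for i in order:                       # pass 2: fill any leftover slots by fitness
        if len(chosen) >= target_size:
            break
        if i not in chosen:
            chosen.append(i)
    return [population[i] for i in chosen]
```

With such a cap, a highly fit but low-evolvability small size cannot flood the next generation, which is the behavior the paper aims for.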
The reason for evolvability degeneration can be linked to the fact that the algorithm has insufficient time to discover more accurate solutions (because the probability that variation succeeds is low) compared to the speed at which small solutions duplicate.
This hypothesis is strongly supported by the findings of~\cite{10.1162/evco.1998.6.4.293}, which show that GP tends to fail when the pressure surpasses a certain threshold.
It is thus natural that MOGP algorithms that improve the diversity of the population perform better than classic NSGA-II.
In fact, the algorithms that we used in our comparisons that were built to improve NSGA-II, essentially realize some form of diversity preservation.
However, none of them considers tracking and using evolvability to decide which solutions to keep and which to discard.
In our view, this is the fundamental reason why evoNSGA-II performed best.
Interestingly, NSGA-II+PD, which is perhaps an even simpler approach than evoNSGA-II, performed similarly to evoNSGA-II in many data sets.
Still, evoNSGA-II typically performed equal to or better than NSGA-II+PD, suggesting that one may not always want to discard all duplicate solutions: keeping a number of copies of highly-evolvable solutions seems to be generally more helpful.
Moreover, we observed that the performance gap between evoNSGA-II and NSGA-II+PD tends to increase when the population size is larger.
We believe that this happens because larger population sizes allow for better estimations of evolvability.
There exist a number of limitations in this paper that call for future research.
Firstly, our estimations of evolvability are repeated every generation, using solely the current population.
We attempted to use exponential-moving-averages to incorporate estimations from previous generations but preliminary findings indicated no statistically significant improvement.
However, one could study whether other approaches can lead to an improvement, such as learning an accurate model of evolvability of solution size across multiple data sets and parameter configurations, and using that model as starting point when dealing with a new problem.
A second important limitation is that we considered minimizing solution size, which is a simple but coarse way of pursuing interpretability.
Future work should consider other and better proxies of interpretability (e.g.,~\cite{burlacu2019parsimony,Virgolin2020PPSN}), and assess whether the good performance found here for evoNSGA-II transfers to those settings.
Transferability of the quality of our approach should also be assessed when other variation operators are used, such as geometric semantic-~\cite{moraglio2012geometric,pawlak2014semantic} or linkage-based ones~\cite{virgolin2021improving}, as well as, e.g., gradient descent to optimize coefficients~\cite{dick2020feature}.
A third limitation is that evoNSGA-II makes no attempt to limit bloat.
If bloated solutions have larger evolvability, they will replicate more than others.
For SR this is not necessarily a problem, since it is reasonable to impose a cap on the maximally allowed size, above which solutions would certainly not be interpretable.
However, capping the size might not be desirable for other problems.
There, evoNSGA-II might keep discovering larger and larger solutions, and thus fail to find medium-sized ones.
Thus, bloat-control mechanisms may need to be considered.
Finally, we conclude this work by reflecting on the fact that our results may, in principle, transfer to problems of very different nature than GP for SR.
Indeed, we remark that maximizing accuracy and minimizing solution size is an \emph{imbalanced} multi-objective problem:
on the one hand, minimizing solution size is easily done, since random deletion of components suffices to improve this objective;
on the other hand, maximizing accuracy is almost always challenging, since the right components need to appear in the right order to obtain an accurate model.
There might exist a number of problems where a similar situation happens, i.e., the objective that is easy-to-optimize inhibits the search of solutions with respect to the objective that is hard-to-optimize.
For any given problem, tracking and exploiting information on the evolvability in terms of the hard-to-optimize objective that is associated with the easy-to-optimize objective might be a consistent way to improve multi-objective evolutionary search.
\section{Conclusion}\label{sec:conclusion}
We studied an important cause of inefficiency in the use of the non-dominated sorting genetic algorithm II (NSGA-II) for the discovery of symbolic regression models with trade-offs between accuracy and simplicity.
Namely, we experimentally found that simpler models over-replicate and take over the majority of the population, because they lack \emph{evolvability}, i.e., they represent infertile grounds for larger but more accurate models to be discovered.
We named this phenomenon \emph{evolvability degeneration}, and proposed \emph{evoNSGA-II}, an algorithm that is explicitly built to prevent it.
With comparisons to NSGA-II and six other algorithms, upon ten real-world data sets, and across different parameter configurations, we found evoNSGA-II to be the superior approach.
The working principles of evoNSGA-II are not limited to symbolic regression: studying their transferability to other imbalanced multi-objective problems represents an interesting avenue for future research.
\begin{acks}
This research was funded by the European Commission within the HORIZON Programme (Trust AI Project, Contract No.: 952060). We further thank the Maurits and Anna de Kock Foundation for financing a high-performance computing system.
\end{acks}
\section{Introduction}
The mechanism of the metal-insulator transition (MIT) has long been
one of the central issues in strongly correlated electron systems.\cite{M90,IFT98}
In particular, the MIT in correlated Dirac fermion systems has attracted
much attention recently, a typical example of which is the honeycomb-lattice
Hubbard model at half filling representing graphene.\cite{graphene}
Because the honeycomb lattice is bipartite and free from frustration,
the N\'eel antiferromagnetic (AF) Mott insulator (MI) state is realized
in the strong coupling region. However, unlike in the square-lattice Hubbard model,
where perfect Fermi surface nesting is present, one expects that
the AF order will not appear in the weak coupling region but rather
the massless Dirac semimetallic (SM) state will be maintained until a
critical interaction strength is reached.\cite{ST92,H06,MK09,J09}
The MIT in the honeycomb lattice was studied by Meng \textit{et al.}\cite{MLWAM10}
using the quantum Monte Carlo (QMC) method, whereby they claimed the
presence of a quantum spin liquid (SL) state (or nonmagnetic MI state) in
the intermediate region between the Dirac SM state and the antiferromagnetic
Mott insulator (AFMI) state.
Their study attracted much interest because it suggested the emergence
of the SL state in systems without frustration in their spin degrees
of freedom.
However, subsequent studies based on the large-scale QMC method\cite{SOY12},
the pinning field approach using the QMC method,\cite{AH13} and analysis of the quantum
criticality by finite-size scaling \cite{AH13,THAH14} have consistently
suggested the direct transition from the SM state to the AFMI state, and
therefore we now anticipate that the SL state is absent in this model.
Similar debates have also taken place for the $\pi$-flux Hubbard model,
another Dirac fermion system, whereby the direct transition from the SM state
to the AFMI state is now anticipated.\cite{CS12,IAS14,THAH14}
Quantum cluster methods have also been used to study the MIT in the
honeycomb Hubbard model.\cite{WCTTL10,L11,LI12,YXL11,WRLL12,HL12,SO12,HS13,LW13,CBSKC14,LRTR14}
In particular, cluster dynamical mean-field theory (CDMFT) and
variational cluster approximation (VCA) calculations have shown that
if the 6-site hexagonal ring is used as a solver cluster, the
single-particle band gap opens even in the weak coupling region where
the AF order is absent, thereby suggesting the presence of the SL
state.\cite{YXL11,WRLL12,HL12,SO12}
However, the opening of the band gap at the infinitesimal interaction
strength was questioned,\cite{HL12,SO12,HS13} and
moreover, from comparison with the results of the cluster dynamical
impurity approximation (CDIA) and dynamical cluster approximation (DCA),
the emergence of the nonmagnetic insulator phase predicted by the CDMFT
and VCA was considered to be unrealistic.\cite{HS13,LW13,CBSKC14,LRTR14}
So far, not much is known about the MIT in the $\pi$-flux Hubbard model
studied by quantum cluster methods.
In this paper, motivated by the above development in the field, we will
make a comparative study on the MIT of correlated Dirac fermions in the
honeycomb and $\pi$-flux Hubbard models at half filling by means of the VCA
and CDIA.
We will, in particular, point out that a suitable choice of the cluster geometry is essential in the quantum cluster calculations to suppress the opening of the band gap in the weak-coupling region and that the
inclusion of particle-bath sites is important in discussing the
order of the MIT as well as the transfer of spectral weight in the
single-particle spectral function.
We will thereby show that the direct transition from the Dirac SM state
to the AFMI state occurs with increasing interaction strength in
these models and that the SL phase is absent in their intermediate coupling
region.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.95\columnwidth]{afig1.eps}
\caption{(Color online)
Schematic representations of the (a) honeycomb ($t'/t=0$) and
(b) $\pi$-flux ($t'/t=-1$) lattices.
The dashed line in (a) indicates the bonds with the hopping parameter
$t'$ and the red lines in (b) indicate the bonds with the negative
hopping parameter $t'=-t$. Noninteracting DOSs [(c) and (d)]
and contour plots of the band dispersions $E_{\bm k}$ [(e) and (f)]
are also shown for the honeycomb (left panels) and $\pi$-flux
(right panels) lattices. The green dots in (e) and (f) indicate
the Dirac points in ${\bm k}$-space.
}\label{fig1}
\end{center}
\end{figure}
\section{Models and Methods}
The honeycomb and $\pi$-flux Hubbard models may be defined by the Hamiltonian
\begin{equation}
\mathcal{H}= -\sum_{ i,j,\sigma} t_{ij}c^{\dag}_{i\sigma}c_{j\sigma}
+U \sum_{i}n_{i\uparrow}n_{i\downarrow},
\end{equation}
where $c^{\dag}_{i\sigma}$ is the creation operator of a fermion (which
will be referred to as an electron hereafter) with spin $\sigma$ at site
$i$ and $n_{i\sigma}=c^{\dag}_{i\sigma}c_{i\sigma}$.
$t_{ij}$ is the hopping amplitude: we define $t_{ij}=t$ for the
nearest-neighbor bonds and $t_{i,j}=t'$ for the bonds connecting
hexagons in the honeycomb lattice [see Fig.~\ref{fig1}(a)].
$U$ is the on-site Coulomb repulsion.
We assume the filling of one electron per site (half filling).
Changing the value of $t'$, we can tune the system continuously from
the honeycomb lattice at $t'=0$ to the $\pi$-flux lattice at $t'=-t$
[see Fig.~\ref{fig1}(b)].\cite{HFA06}
At $U=0$, these systems at low energies are described in terms
of the massless Dirac fermions; their densities of states (DOSs)
and band dispersions are shown in Figs.~\ref{fig1}(c)-\ref{fig1}(f).
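As a quick cross-check of these noninteracting band structures, the standard honeycomb tight-binding dispersion $E_{\bm k}=\pm t\,|f({\bm k})|$, with $f({\bm k})=\sum_i e^{i{\bm k}\cdot{\bm\delta}_i}$ running over the three nearest-neighbor vectors, can be evaluated in a few lines (units of the nearest-neighbor distance and of $t$; the $\pi$-flux case follows analogously from its $2\times 2$ Bloch Hamiltonian):

```python
import numpy as np

# Nearest-neighbor vectors of the honeycomb lattice (NN distance = 1)
deltas = np.array([[1.0, 0.0],
                   [-0.5, np.sqrt(3) / 2],
                   [-0.5, -np.sqrt(3) / 2]])

def honeycomb_bands(k, t=1.0):
    """Noninteracting bands E_k = -t|f(k)|, +t|f(k)|, f(k) = sum_i exp(i k.delta_i)."""
    f = np.exp(1j * (deltas @ np.asarray(k))).sum()
    return -t * abs(f), t * abs(f)

# One of the Dirac points, where the two bands touch with zero gap
K = np.array([0.0, 4 * np.pi / (3 * np.sqrt(3))])
```

At $K$ one finds $f = 1 + 2\cos(2\pi/3) = 0$, i.e., the massless Dirac point, while at $\Gamma$ the bands reach $\pm 3t$.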
We apply the VCA,\cite{PAD03,S08,P12} which is a quantum
cluster method based on self-energy functional theory (SFT).\cite{P04_1,P04_2}
In the VCA, we introduce disconnected finite-size clusters
(that are solved exactly) as a reference system.
By restricting the trial self-energy to the self-energy of the reference
system $\Sigma'$, we can obtain the grand potential of the original
system in the thermodynamic limit as
\begin{align}
\Omega=\Omega'+\mathrm{Tr}\: \mathrm{ln} ( G^{-1}_0-\Sigma' )^{-1} - \mathrm{Tr}\: \mathrm{ln}( G' ), \label{gp}
\end{align}
where $\Omega'$, $G'$, and $G_0$ are the grand potential, the
Green's function of the reference system, and the noninteracting Green's
function, respectively.
The short-range correlations within the cluster of the reference system
are taken into account exactly.
The one-body parameters $\bm{t}'$ of the reference system are
optimized according to the variational principle
$\partial \Omega[\Sigma'(\bm{t}')]/\partial \bm{t}' = 0$.
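In practice, the stationary points of $\Omega$ are located numerically, e.g., by evaluating $\Omega$ on a grid of variational parameters and finding sign changes of its finite-difference derivative. The schematic sketch below uses a toy function as a stand-in for $\Omega$, which in an actual VCA calculation requires exact diagonalization of the reference cluster at each parameter value:

```python
import numpy as np

def stationary_points(omega, grid):
    """Locate stationary points of omega(x) on a 1D parameter grid by
    sign changes of a finite-difference derivative."""
    x = np.asarray(grid, dtype=float)
    d = np.gradient(np.array([omega(v) for v in x]), x)
    roots = []
    for i in range(len(x) - 1):
        if d[i] * d[i + 1] < 0:
            # linear interpolation of the zero crossing of the derivative
            roots.append(x[i] - d[i] * (x[i + 1] - x[i]) / (d[i + 1] - d[i]))
    return roots
```

Multiple stationary points of $\Omega$ can coexist (as found below in the CDIA), in which case the physical solution is selected by comparing the corresponding ground-state energies.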
In the VCA, we can treat the spontaneous symmetry breaking by adding
appropriate Weiss fields to the reference system.\cite{DAHAP04}
We have to choose an exactly solvable reference system; here we
apply an exact diagonalization method and solve the quantum many-body
problem in the cluster of the reference system.
We also use the CDIA,\cite{S12} which is an extended version
of the VCA where particle-bath sites are added to the clusters to
take into account the electron-number fluctuations in the correlation
sites. In the CDIA, we optimize the hybridization parameter between the
bath and correlation sites $V$ and the on-site energy of the bath
sites $\varepsilon$ based on SFT.\cite{BKSTP09,S12}
Note that the CDIA is intrinsically equivalent to CDMFT with an
exact-diagonalization solver.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=\columnwidth]{afig2.eps}
\caption{(Color online)
Single-particle gap $\Delta_{\mathrm{sp}}$ in the PM state and
local magnetization $m$ in the AF state calculated by the VCA as
functions of $U/t$.
The results for the honeycomb Hubbard model are shown in (a), (b),
and (c) and those for the $\pi$-flux Hubbard model are shown in
(d), (e), and (f). The geometry of the solver cluster used is
illustrated in each panel.
}\label{fig2}
\end{center}
\end{figure}
\begin{figure}[!t]
\begin{center}
\includegraphics[width=\columnwidth]{afig3.eps}
\caption{(Color online)
Single-particle spectral function $A({\bm k},\omega)$ [(a) and (b)]
and DOS $N(\omega)$ [(c) and (d)] in the PM states of the $\pi$-flux
Hubbard model calculated by the VCA with the 12-site cluster.
The horizontal dashed line in (a) and (b) indicates the Fermi level.
We applied the artificial Lorentzian broadening of the spectra of
$\eta/t=0.15$ in (a) and (b) and $\eta/t=0.05$ in (c) and (d).
The inset in the lower panels is an enlargement of the DOS near the Fermi level,
assuming $\eta/t=0.005$.
}\label{fig3}
\end{center}
\end{figure}
\section{Results and Discussion}
\subsection{Results of VCA}
First, let us consider the single-particle gap $\Delta_{\mathrm{sp}}$
in the paramagnetic (PM) state and the staggered magnetization $m$ in
the AF state, which are calculated for the honeycomb and $\pi$-flux
Hubbard models using the VCA. We introduce the Weiss field associated with
the two-sublattice N\'eel order and evaluate the local magnetization
$m=\langle n_{i\uparrow} -n_{i\downarrow} \rangle$.
The gap $\Delta_{\mathrm{sp}}$ is evaluated in the absence of the Weiss
field as the jump of the chemical potential with respect to the number of
electrons in the system. Note that the band gap always opens
when the AF order appears. We use clusters of 6, 10, and 12
sites as reference systems; the clusters used for the honeycomb and
$\pi$-flux lattices are topologically equivalent but with different
hopping parameters (see Fig.~\ref{fig2}).
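In terms of ground-state energies, the chemical-potential jump is the standard charge gap $\Delta_{\mathrm{sp}}=\mu^+(N_0)-\mu^-(N_0)=E_0(N_0+1)+E_0(N_0-1)-2E_0(N_0)$ at half filling $N_0$. A minimal helper (the energies used in the check are placeholders; in an actual calculation they come from the optimized grand potential at the corresponding fillings):

```python
def single_particle_gap(E0, N0):
    """Charge gap from ground-state energies E0 = {N: E0(N)} at filling N0:
    Delta_sp = mu+(N0) - mu-(N0) = E0(N0+1) + E0(N0-1) - 2*E0(N0)."""
    return E0[N0 + 1] + E0[N0 - 1] - 2.0 * E0[N0]
```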
The results for the honeycomb Hubbard model are shown in Figs.~\ref{fig2}(a)-\ref{fig2}(c). We find that the MIT is sensitive to the choice of the clusters,
i.e., the results obtained using the clusters of 6 and 12 sites are
qualitatively different from those in the case of 10 sites.
The AF order appears at $U_{\mathrm{AF}}/t=3.8$ for the clusters of
6 and 12 sites, the results of which are in good agreement with
results of QMC simulations.\cite{SOY12,AH13}
However, the gap $\Delta_{\mathrm{sp}}$ opens at infinitesimal $U$ values
and the SM phase appears only at $U=0$, thus suggesting the presence of
the PM insulator state at $0<U<U_{\mathrm{AF}}$.
Recent studies, however, have claimed that this gap cannot be regarded as the
true Mott gap,\cite{HS13,LW13} the details of which will be discussed
below.
For the cluster of 10 sites, on the other hand, the SM phase
persists up to a large $U$ value and the transition to the AF phase
occurs directly from the SM phase. Here, the AF order appears at
$U_{\mathrm{AF}}/t= 2.7$ and the gap $\Delta_{\mathrm{sp}}$ opens at
$U_{\mathrm{PM}}/t=3.0$, qualitatively consistent
with the results of recent QMC simulations, where the direct transition
from the Dirac SM phase to the AFMI phase was predicted.\cite{SOY12,AH13}
The results for the $\pi$-flux Hubbard model are shown in Figs.~\ref{fig2}(d)-\ref{fig2}(f). We find that the results obtained using the clusters of 6,
10, and 12 sites are qualitatively the same as each other, i.e., the
SM phase persists up to a large $U$ value. The AF order appears at
$U_{\mathrm{AF}}/t=3.4$, 2.9, and 3.4 and the gap $\Delta_{\mathrm{sp}}$
opens at $U_{\mathrm{PM}}/t=4.5$, 4.9, and 4.8 for the clusters of
6, 10, and 12 sites, respectively.
Therefore, the PM insulator state does not exist between the SM and AFMI
phases, in accordance with the results of recent QMC simulations
that show the direct transition from the SM phase to the AFMI phase.\cite{IAS14,THAH14}
Note that the transition point $U_{\mathrm{AF}}$ of our VCA
calculations is smaller than that of the QMC simulations,
$U_{\mathrm{AF}}/t=5.25$--$5.5$,\cite{IAS14} which may be due to the
anisotropy of the clusters used in our calculations; the agreement
becomes good if we use an isotropic cluster of $4$ sites, which gives
the value $U_{\mathrm{AF}}/t=5.0$.
We also calculate the single-particle spectral function and DOS using
cluster perturbation theory (CPT)\cite{SPP00} for the PM state of
the $\pi$-flux Hubbard model. The results are shown in Fig.~\ref{fig3}.
We immediately find that the Dirac linear band dispersion is clearly visible
near the Fermi level at $U<U_\mathrm{PM}$ [see Fig.~\ref{fig3}(a)],
whereas the band gap opens at $U>U_\mathrm{PM}$ [see Fig.~\ref{fig3}(b)].
The transfer of spectral weight occurring with increasing $U/t$ is seen
in Figs.~\ref{fig3}(c) and \ref{fig3}(d), which is characteristic of the VCA and
will be discussed below in Sect. 3.2 in comparison with the results of the CDIA.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=\columnwidth]{afig4.eps}
\caption{(Color online)
Ground-state energies [(a) and (b)] and single-particle gaps [(c) and (d)]
in the PM state of the honeycomb and $\pi$-flux Hubbard models calculated
by the CDIA, where the 4-site 6-bath cluster shown in the left panels is used.
}\label{fig4}
\end{center}
\end{figure}
\subsection{Results of CDIA}
Next, let us discuss the roles of the bath sites in the MIT using the CDIA.
Following a previous study on the honeycomb Hubbard model,\cite{HS13}
we examine the honeycomb and $\pi$-flux Hubbard models using the
4-site 6-bath cluster. The results are shown in Fig.~\ref{fig4}.
We find that the grand potentials of the honeycomb and $\pi$-flux
Hubbard models both have two stationary points around the transition
point $U_c$.
The SM solution exists at small $U$ and vanishes at $U_{c2}$ with increasing $U$, while the MI solution exists at large $U$ and vanishes at $U_{c1}$ with decreasing $U$.
The two solutions thus coexist in the region $U_{c1}\le U \le U_{c2}$, and
the ground-state energies cross at $U_c$.
We obtain the values $U_{c1}/t=6.6$, $U_{c2}/t=7.7$, and $U_{c}/t=7.5$
for the honeycomb Hubbard model and
$U_{c1}/t=8.6$, $U_{c2}/t=10.3$, and $U_{c}/t=9.8$
for the $\pi$-flux Hubbard model.
The calculated result for $\Delta_{\mathrm{sp}}$ [see Figs.~\ref{fig4}(c) and \ref{fig4}(d)]
shows hysteresis between the SM ($\Delta_{\mathrm{sp}}=0$) and MI
($\Delta_{\mathrm{sp}}>0$) solutions, which indicates that
$\Delta_{\mathrm{sp}}$ jumps discontinuously at $U_c$.
These results clearly indicate that the MIT is of the first-order
(or discontinuous) in the CDIA, which is in contrast to the results of
the VCA where the second-order (or continuous) transition is found
(see Fig.~\ref{fig2}).
The first-order MIT is thus expected in the actual honeycomb and
$\pi$-flux Hubbard models in which the electron-number fluctuation
is present.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=\columnwidth]{afig5.eps}
\caption{(Color online)
As in Fig.~\ref{fig3} but for the results of the CDIA with the
4-site 6-bath cluster.
In (a)-(c), the single-particle spectral functions and DOSs of
both the SM and MI states are shown at $U/t=9$, while in (d),
the DOS of only the SM state is shown.
}\label{fig5}
\end{center}
\end{figure}
To further clarify the roles of bath sites in the MIT, we examine the $U$
dependence of the single-particle spectral function and the DOS calculated
using CPT.
The results for the $\pi$-flux Hubbard model obtained in the CDIA are
shown in Fig.~\ref{fig5}, where the Dirac linear band dispersion is
clearly visible in the vicinity of the Fermi level. Note that
the slope of the dispersion at the Dirac point becomes steeper for
larger values of $U$.
Comparing the DOS curves, we find that the results in the VCA (see Fig.~\ref{fig3})
are indeed significantly different from those in the CDIA (see Fig.~\ref{fig5})
in the following respects.
(i) The spectral weight with a large peak at $\omega/t=2$ in the
$U/t\rightarrow 0$ limit [see Fig.~\ref{fig1}(d)] is partially
transferred to a broad higher-energy region corresponding to the
``upper Hubbard band'' with increasing $U/t$, which is observed in
both the VCA and CDIA.
(ii) With increasing $U$, the remaining spectral weight at
$\omega/t\simeq 2$ shifts to higher energies in the VCA [see Figs.~\ref{fig3}(c) and \ref{fig3}(d)], while in the CDIA, it shifts rapidly to lower energies and
simultaneously loses its weight [see Figs.~\ref{fig5}(c) and \ref{fig5}(d)].
(iii) We thus have a large spectral weight at low energies ($\omega/t\alt 1$)
in the CDIA, which is rather small in the VCA. The spectral weight characteristic
of the massless Dirac SM dispersions can, however, be seen in the vicinity
of the Fermi level in both the VCA and CDIA spectra.
(iv) More quantitatively, a kink appears in the lowest-energy
region of the DOS in the VCA (see the inset of the lower panels of
Fig.~\ref{fig3}), which shifts toward the Fermi level with increasing $U$.
The DOS curve becomes steeper near the Fermi level (or
the ${\bm k}$-linear dispersion becomes flatter at the Dirac point),
renormalizing the Fermi velocity but keeping the electrons massless.
No quasiparticle peak appears. At a critical $U$ value, the kink disappears
and simultaneously the gap begins to open gradually. In the CDIA, similar
but stronger effects can be seen with increasing $U/t$ in the lowest-energy
region of the DOS, until the gap opens discontinuously at $U_c$
[see Figs.~\ref{fig5}(c) and \ref{fig5}(d)]. These low-energy behaviors in the CDIA
are consistent with the results of the single-site DMFT for the honeycomb
Hubbard model\cite{MK09,J09} and are expected to be realistic in the
honeycomb and $\pi$-flux Hubbard models where the electron number fluctuates.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\columnwidth]{afig6.eps}
\caption{(Color online)
Calculated single-particle gap $\Delta_{\mathrm{sp}}$ in the PM state of
the (a) honeycomb and (b) $\pi$-flux Hubbard models. For the honeycomb
lattice, we use the hexagonal 6-site cluster (h6) and 6-site 6-bath cluster
(h6-6b). For the $\pi$-flux lattice, we use the square 4-site cluster
($\pi$4), 4-site 4-bath cluster ($\pi$4-4b), and 4-site 8-bath cluster
($\pi$4-8b).
(c) $\pi$-flux lattice with renormalized hopping parameter $t^*$
violating the original translational symmetry and (d) its noninteracting
single-particle gap as a function of $t^*/t$.
}\label{fig6}
\end{center}
\end{figure}
\subsection{Cluster geometry dependence}
Finally, let us discuss the cluster geometry dependence of the
single-particle gap $\Delta_{\mathrm{sp}}$ in the PM phase.
In Figs.~\ref{fig6}(a) and \ref{fig6}(b), we show the results of the 6-site 6-bath
system for the honeycomb Hubbard model and of the 4-site 4-bath and
4-site 8-bath systems for the $\pi$-flux Hubbard model.
In the honeycomb lattice, we find that even if we add the bath sites,
the gap $\Delta_{\mathrm{sp}}$ opens at any infinitesimal $U$ value when
we use the 6-site hexagonal ring cluster as the reference
system.\cite{HL12,SO12,HS13}
In the $\pi$-flux Hubbard model, we also find that the gap $\Delta_{\mathrm{sp}}$
opens at any infinitesimal $U$ value when we use the 4-site square cluster
as the reference system.
Therefore, even though we use two bath sites per correlation site,
the gap opens at infinitesimal $U$ values, which does not agree with the
argument in Ref.~\citen{HS13} that at least two bath sites per correlation
site are necessary to discuss the MIT in the honeycomb lattice.
Rather, our results agree with the statement in Ref.~\citen{LW13} that the
opening of the gap at infinitesimal $U/t$ values is not caused by the
bath degrees of freedom but by the cluster geometry, which violates the
original translational symmetry of the lattice. We show the latter case
in Fig.~\ref{fig6}(c), where the original translational symmetry of the
$\pi$-flux lattice is violated by the renormalization of the hopping parameter $t^*$
by the interaction only within the cluster, leading to
$t^*\ne t$. Then, as shown in Fig.~\ref{fig6}(d), the noninteracting
band with $t^*$ and $t$ has a finite single-particle gap unless $t^*=t$.
A similar discussion has been given for the honeycomb lattice,\cite{WRLL12,LRTR14}
where the 6-site hexagonal clusters with the renormalized hopping parameter
$t^*$ are connected with the bare hopping parameter $t$.
A ``plaquette insulator'' state is thus realized at $t^*\ne t$ in the
noninteracting limit.
This is the reason why the single-particle gap opens at infinitesimal
$U/t$ values.
However, we here point out that it is always possible to make an appropriate
choice of clusters that maintains the Dirac zero-gap situation even
though it violates the original translational symmetry, examples of
which are shown in Figs.~\ref{fig2}(d)-\ref{fig2}(f) where the gap does not open
at small values of $U$. Thus, the statement in Ref.~\citen{LW13} is
too strict.
Careful choice of the clusters in the quantum cluster methods such as
CDMFT, the VCA, and the CDIA enables one to discuss the MIT of Dirac fermion
systems without spurious opening of the gap.
\section{Summary}
We have made a comparative study on the MIT of Dirac
electrons in the honeycomb and $\pi$-flux Hubbard models
using the VCA and CDIA, where we have calculated the single-particle
gap and staggered magnetization as functions of the interaction
strength $U$.
We have paid particular attention to the choice of the cluster
geometry and the inclusion of the bath sites. We have thus confirmed
that the spurious single-particle gap that opens at infinitesimal
$U$ values is not caused by the bath degrees of freedom but
rather by the cluster geometry.
We have shown that with increasing $U$, the first-order MIT to the nonmagnetic MI phase occurs in the presence of electron-number fluctuation.
However, the AFMI phase always preempts this MIT, at least in the
present models, and therefore the SL phase previously suggested
to emerge between the Dirac SM and AFMI phases is absent in
these models.
Our results imply that, if the AF ordering can be suppressed by,
for example, the effect of spin frustration in the triangular and
related lattices, one may expect that the MI phase without AF
orders will preempt the AFMI phase, resulting in the emergence
of the SL phase in the intermediate coupling region, as was
pointed out recently in Ref.~\citen{RLRT14} for the triangular
$\pi$-flux Hubbard model.
\bigskip
\acknowledgments{
We thank K. Seki for enlightening discussions.
T.~K. acknowledges support from a JSPS Research Fellowship for
Young Scientists.
This work was supported in part by KAKENHI Grant No.~26400349
from JSPS of Japan.
}
\section{Introduction}
Nowadays, accurate prediction of user responses, e.g., clicks or conversions, has become the core part in personalized online systems, such as search engines \cite{dupret2008user}, recommender systems \cite{qu2016product} and computational advertising \cite{he2014practical}.
The goal of user response prediction is to estimate the probability that a user would respond to a specific item or a piece of content provided by the online service.
The estimated probability may guide the subsequent decision making of the service provider, e.g.,
ranking the candidate items according to the predicted click-through rate \cite{qu2016product} or
performing ad bidding according to the estimated conversion rate \cite{zhang2014optimal}.
\begin{figure}[h]
\centering
\includegraphics[width=1\columnwidth]{figures/intro_plt.pdf}
\caption{User behavior (click) statistics from Alibaba e-commerce platform during April to September in 2018.
Left: the distribution of the user sequence lengths;
Right: the number of user behaviors between the add-to-cart event and the final conversion.}
\label{fig:user-stat}
\vspace{-10pt}
\end{figure}
One key aspect of user response prediction is user modeling, which profiles each user through learning from her historical behavior data or other side information.
Generally speaking, the user behavior data have three characteristics.
First, the user behaviors not only reflect the intrinsic and multi-facet user interests \cite{jiang2014fema,koren2008factorization}, but also reveal the temporal dynamics of user tastes \cite{koren2009collaborative}.
Second, as is shown in Figure~\ref{fig:user-stat}, the lengths of behavior sequences vary across users because of their diverse activeness or registration times.
Third, there exist long-term dependencies in one's behavior history, where some behaviors that happened early may account for the final decision making of the user, as illustrated in the right plot of Figure~\ref{fig:user-stat}.
Moreover, the temporal dependency also shows multi-scale sequential patterns, i.e., various temporal behavior dependencies across different users.
With two decades of rapid development of Internet service platforms,
abundant user behavior sequences have accumulated on online platforms. Many works have been proposed for user modeling \cite{rendle2010factorizing,zhou2018deepa}, especially with sequential modeling \cite{hidasi2017recurrent,zhou2018deepb}.
Some of the existing methods for user modeling aggregate the historical user behaviors for the subsequent preference prediction \cite{koren2009matrix,koren2008factorization}.
However, they ignore temporal dynamics of user behaviors \cite{koren2009collaborative}.
Sequential modeling for user response prediction conducts dynamic user profiling via sequential pattern mining.
Some other works \cite{hidasi2017recurrent,zhou2018deepb} take this approach to deal with temporal dynamics.
Nevertheless, these sequential models focus only on short-term sequences, e.g., the several latest behaviors of the user \cite{zhou2018deepb} or the behavior sequence within a recent period of time \cite{hidasi2017recurrent}, while abandoning earlier user behaviors.
Consider the situation of recommending items manually.
A human may first take the user's intrinsic tastes into consideration \cite{zhang2018next} and then consider her multi-facet interests \cite{koren2008factorization,jiang2014fema}, e.g., various preferences over different item categories.
Moreover, it is natural to combine her long-term \cite{ying2018sequential} and recent experience \cite{hidasi2015session} so as to recommend items comprehensively.
In order to tackle these challenges, and to overcome the shortcomings of the related works, we formulate the \textit{lifelong sequential modeling} framework and propose a novel Hierarchical Periodic Memory Network (HPMN) that maintains user-specific behavior memories.
Specifically, we build a personalized memory for each user, which remembers both intrinsic user tastes and multi-facet user interests in a learned yet compressed form.
Then the model maintains hierarchical memories to retain long-term knowledge for user behaviors.
The HPMN model also updates memorization from newly coming user behaviors with different periods at different layers so as to capture multi-scale sequential patterns during her lifetime.
The extensive experiments over three large-scale real-world datasets show significant improvements of our proposed model against several strong baselines, including the state-of-the-art ones.
This paper has three main contributions listed as follows.
\begin{itemize}[leftmargin=5mm]
\item To the best of our knowledge, this is the first work to propose the lifelong sequential modeling framework, which conducts unified, comprehensive and personalized user profiling
for user response prediction over extremely long user behavior sequences.
\item Within the lifelong sequential modeling framework, we propose a memory network with an incremental updating mechanism to learn from both the retained knowledge of user lifelong data and the evolving user behavior sequences.
\item We further design a hierarchical architecture with multiple update periods to effectively mine and utilize the multi-scale sequential patterns in users' lifelong behavior sequences.
\end{itemize}
The rest of our paper is organized as follows.
Section~\ref{sec:related-work} presents a comprehensive survey of user response prediction works.
Section~\ref{sec:method} introduces the motivation and model design of our methodology in detail.
The experimental setups with the corresponding results are illustrated in Section~\ref{sec:exp}.
We finally conclude this paper and discuss the future work in Section~\ref{sec:conclusion}.
\section{Related Works}\label{sec:related-work}
\subsection{User Response Prediction}
User response prediction is to model the interest of the user on the content from the provider and estimate the probability of the corresponding user event \cite{ren2018bid}, e.g., clicks and conversions.
It has become a crucial part of the online services, such as search engines \cite{dupret2008user}, recommender systems \cite{qu2016product,guo2017deepfm} and online advertising \cite{graepel2010web,zhou2018deepa,he2014practical}.
Typically, user response prediction is formulated as a binary classification problem with user response likelihood as the training objective \cite{richardson2007predicting,graepel2010web,agarwal2010estimating,oentaryo2014predicting}.
From the view of methodology, linear models such as logistic regression \cite{lee2012estimating,gai2017learning} and non-linear models such as tree-based models \cite{he2014practical} and factorization machines \cite{menon2011response,oentaryo2014predicting} have been well studied.
Recently, neural network models \cite{qu2016product,zhou2018deepa} have attracted huge attention.
\subsection{Sequential User Modeling}
User modeling, i.e., capturing the latent interests of the user and deriving an adaptive representation for each user, is the key component of user response prediction \cite{zhou2018deepa,zheng2017joint}.
The researchers have proposed many methodologies ranging from latent factor methods \cite{koren2009matrix,rendle2010factorization} to deep representation learning methods \cite{qu2016product,zhou2018deepa}.
These models aggregate all historical behaviors as a whole while ignoring the temporal dynamics of drifting user interests.
Nowadays, sequential user modeling has drawn great attention since the sequences of user behaviors have rich information for the user interests, especially with drifting trends.
It has been a research hotspot for sequential modeling in online systems \cite{zhou2018deepb,ren2018learning,villatel2018recurrent}.
From the perspective of modeling, there are three categories for sequential user modeling.
The first is from the view of temporal matrix factorization \cite{koren2009collaborative}, which considers drifting user preferences but heuristically makes some assumptions about the behavior patterns.
The second stream is based on the Markov-chain methodology \cite{rendle2010factorizing,he2016fusing,he2016vista}, which implicitly models the user state dynamics and derives the outcome behaviors.
The third school is based on deep neural network for its stronger capacity of feature extraction, such as recurrent neural network (RNN) \cite{hidasi2015session,hidasi2017recurrent,wu2017recurrent,jing2017neural,liu2016context,beutel2018latent,villatel2018recurrent} and convolutional neural network (CNN) regarding the behavior history as an image \cite{tang2018personalized,kang2018self}.
However, these methods mainly focus on short-term user modeling which has been constrained in the most recent behaviors.
\citet{zhang2018next} additionally utilized a static user representation for user intrinsic interests along with short-term intent representation. \citet{ying2018sequential} proposed a hierarchical attentional method over a list of user behavior features for modeling long-term interests.
But they capture only simple sequential patterns, without considering long-term and multi-scale behavior dependencies.
Moreover, few of the existing works model the lifelong user behavior history, and thus they cannot properly establish a comprehensive user profile.
\subsection{Memory-augmented Networks}
Memory-augmented networks \cite{hochreiter1997long,weston2015memory,sukhbaatar2015end,kumar2016ask,graves2014neural} have been proposed in natural language processing (NLP) tasks for explicitly remembering the extracted knowledge by maintaining that in an external memory component.
Several works \cite{Ebesu:2018:CMN:3209978.3209991,chen2018sequential,huang2018improving,wang2018neural} utilize memory network for recommendation tasks.
However, these methods directly use the structure of memory network from NLP tasks, which does not consider practical issues in user response prediction.
Specifically, they fail to consider multi-scale knowledge memorization or long-term dependencies.
There is one work of recurrent model with multi-scale pattern mining \cite{chung2016hierarchical} in the NLP field.
The essential difference is that their model was designed for natural language sentence modeling with fixed length, while our model supports lifelong sequential modeling through the maintained user memory and additionally considers long-term dependencies within user behavior sequences of extremely large length.
\section{Methodology}\label{sec:method}
In this section, we first discuss the notations and preliminaries of user response prediction,
then define lifelong sequential modeling and discuss its characteristics.
Finally, we present the overall architecture of lifelong sequential modeling, including the data flow through the Hierarchical Periodic Memory Network (HPMN).
The notations have been summarized in Table~\ref{tab:notation}.
\subsection{Preliminaries}
The data in the online system are formulated as a set of triples $\{ (u,v,y) \}$ each of which includes the user $u \in \mathcal{U}$, item $v \in \mathcal{V}$ and the corresponding label of user behavior indicator
\begin{equation}
y = \left\{
\begin{array}{rcl}
1, & & u~ \text{has interacted with} ~v; \\
0, & & \text{otherwise.} \\
\end{array}
\right.
\end{equation}
Without loss of generality, we take click as the user behavior and the goal is to estimate click-through rate\footnote{In this paper, we focus on the CTR estimation, while the estimation of other responses can be done by following the same tokens.} (CTR) of user $u$ on item $v$ at the given time.
The model approaches CTR prediction through a learned function $f_{\bs{\Theta}}(\cdot)$ with parameter $\bs{\Theta}$.
There are three parts of raw features $(\bs{u}, \bs{v}, \bm{c})$.
Here $\bs{v}$ is the feature vector of the target item $v$ including the item ID and some side information and $\bm{c}$ is the context feature of the prediction request such as web page URL.
User side feature $\bs{u} = (\bar{\bs{u}}, \{\bs{v}_i\}_{i=1}^{T})$ contains some side information $\bar{\bs{u}}$ and a sequence of user interacted (i.e., \textit{clicked}) items of user $u$.
Note that, the historical sequence length $T$ varies among different users.
The goal of sequential user modeling is to learn a function $g_{\bs{\Phi}}(\cdot)$ with parameter $\bs{\Phi}$ for conducting a comprehensive representation for user $u$
\begin{equation}\label{eq:user-model}
\bs{r} = g(\{\bs{v}_i\}_{i=1}^{s}; {\bs{\Phi}})
\end{equation}
taking the recent $s$ user behaviors.
Note that, this user modeling can be drifting since the user continues interacting with online systems and generating new behaviors.
Many sequential user modeling works set a fixed value $s < T$ as the maximal length of user behavior sequence, e.g., $s=5$ in \cite{hidasi2017recurrent} for session-based recommendation and $s=50$ in \cite{zhou2018deepb}, to capture recent user interests.
Thus the final task of user response prediction is to estimate the probability $\hat{y}$ of the user action, i.e., a click, over the given item as
\begin{equation}
\hat{y} = \text{Pr}(y|\bs{u}, \bs{v}, \bm{c}) = \mathit{f}(\bs{r}, \bs{v}, \bm{c}; \bs{\Theta}).
\end{equation}
\begin{table}[t]
\centering
\caption{Notations and descriptions}\label{tab:notation}
\resizebox{\columnwidth}{!}{
\begin{tabular}{c|l}
\hline
Notation & Description \\
\hline
$u, v$ & The target user and the target item. \\
$y, \hat{y}$ & The true label and the predicted probability of user response. \\
$\bs{u}, \bs{v}, \bm{c}$ & Feature of user $u$, item $v$ and the context information. \\
$\bar{\bs{u}}$ & The side information of the user. \\
$\bs{v}_i$ & Feature of the $i$-th interacted item in user's behavior history. \\
$\bs{r}$ & The inferred sequential representation of the user. \\
$T, D$ & The total behavior sequence length and the layer number of HPMN. \\
$i, j$ & The index of sequential behavior and network layer ($i \in [1,T], j \in [1,D]$). \\
$\bs{m}^j_i$ & The maintained memory content in the $j$-th layer at the $i$-th time step. \\
$w^j,t^j$ & The reading weight and the period for the $j$-th memory slot\\
& maintained by the $j$-th layer of HPMN. \\
\hline
\end{tabular}
}
\end{table}
\subsection{Lifelong Sequential Modeling}\label{sec:lsm}
Recall that, most existing works on sequential user modeling focus on the recent $s$ behaviors, while sometimes $s \ll T$ for the whole user behavior sequence with length $T$.
To the best of our knowledge, few of them consider lifelong sequential modeling.
We define it as below.
\begin{definition*}{Lifelong Sequential Modeling (LSM)}
in user response prediction is a process of continuous (online) user modeling with sequential pattern mining upon the lifelong user behavior history.
\end{definition*}
There are three characteristics of LSM.
\begin{itemize}[leftmargin=5mm]
\item LSM supports lifelong memorization of user behavior patterns. It is impossible for the model to maintain the whole behavior history of each user for real-time online inference. Thus it requires highly efficient knowledge preserving of user behavior patterns.
\item LSM should conduct a comprehensive user modeling of both intrinsic user interests and temporal dynamic user tastes, for future behavior prediction.
\item LSM also needs continuous adaptation to the up-to-date user behaviors.
\end{itemize}
Following the above principles, we propose a LSM framework for the whole evolving user behavior history, as is illustrated in Figure~\ref{fig:overall}.
Within the framework, we conduct a personalized memory with several slots for each user.
This memory will be maintained through an incremental updating mechanism (as Steps A and B in the figure) along with the evolving user behavior history.
As for online inference, when a user sends a visit request, the online service will transmit the request including the information of target user and target item.
Each user request triggers a query procedure: we use the feature vector of the target item $\bs{v}$ as the query to obtain, from the memory pool, the user representation associated with this specific item.
Then HPMN model will take the query vector to \textit{read} the lifelong maintained personalized memory of that user, to conduct the corresponding user representation, without inference over the whole historical behavior sequence.
After that, the user representation $\bs{r}$, item vector $\bs{v}$ and context features $\bm{c}$ will be jointly considered for the subsequent user response prediction, which will be described in Section~\ref{sec:pred-loss}.
The details of HPMN will be presented in Section~\ref{sec:hpmn}.
\begin{figure}[t]
\centering
\includegraphics[width=1.0\columnwidth]{figures/frameworks/overall-framework.pdf}
\caption{The LSM framework.}
\label{fig:overall}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=0.95\textwidth]{figures/frameworks/memnet-simple.pdf}
\caption{The framework of HPMN model with four layers maintaining user memory in four ($D=4$) memory slots. The update period $t^j$ of $j$-th layer follows an exponential sequence $\{2^{j-1}\}_{j=1}^D$ as an example. The red part means the incremental updating mechanism; the dotted line means the periodic memorization and forgetting.}
\label{fig:memnet}
\end{figure*}
\subsection{Hierarchical Periodic Memory Network}\label{sec:hpmn}
In this section, we first present the motivations of HPMN model and subsequently discuss the specific architectures.
Generally speaking, we propose HPMN model based on three considerations of the motivation.
\begin{itemize}[leftmargin=5mm]
\item As is stated above, the main goal of LSM is to capture sequential user patterns hidden in user behavior sequences. Many works \cite{he2016vista,hidasi2015session,zhou2018deepb} have been proposed for sequential pattern mining to improve the subsequent prediction. Thus HPMN first introduces sequential modeling through a recurrent component.
\item There also exist long-term dependencies among lifelong user behaviors, i.e., later user decision making may relate to her much earlier actions. We will show some examples in the experimental part of the paper. However, traditional sequential modeling methods either rely only on the recent user behaviors, or update user states too frequently, which may result in memorization saturation and knowledge forgetting \cite{sodhani2018training}. Hence we incorporate a periodic memory updating mechanism to avoid unexpected knowledge drifting.
\item The behavior dependencies may span various time distances, e.g., a user may show preferences for a specific item at different times along the whole history, so multi-scale sequential pattern mining is required. HPMN deals with this by maintaining hierarchical memory slots with different update periods.
\end{itemize}
Moreover, since the personalized memory stores a comprehensive understanding of each user with multi-facet preferences, HPMN incorporates a regularization of memory covariance to preserve diverse knowledge of user interests.
Besides, for each query, the model reads the user memory in an attentional way that tries to match the target item against the multi-facet user modeling knowledge.
Next we will describe the model details from four aspects.
The memory architecture will be introduced in Section~\ref{sec:mem-arc} followed by the description of periodic yet incremental updating mechanism in Section~\ref{sec:mem-write}.
We introduce the usage of the user memory in Section~\ref{sec:mem-read} and the covariance regularization in Section~\ref{sec:cov-reg}.
\subsubsection{Hierarchical Memory for Sequential Modeling}\label{sec:mem-arc}
As is illustrated in Figure~\ref{fig:overall}, for each user $u$, there is a user-specific memory pool containing $D$ memory slots $\left\{ \bs{m}^j \right\}_{j=1}^D$, where each $\bs{m}^j \in \mathbb{R}^p$ is a piece of real-valued user modeling representation.
The idea of the external memory has been used in the NLP field \cite{miller2016key,kumar2016ask} for better memorization of the context information embedded in the previously consumed paragraph.
We utilize this external memory pool for capturing the intrinsic user interests with temporal sequential patterns, yet it is also evolving and supports incremental memory update along with the growing behavior sequences.
Generally speaking, HPMN model is a layer-wise memory network which contains $D$ layers, as is shown in Figure~\ref{fig:memnet}.
Each layer maintains the specific memory slot $\bs{m}^j$.
The output $\bs{m}^j_i$ of the $j$-th layer at the $i$-th time step (i.e., $i$-th sequential user behavior) will be transmitted not only to the next time step, but also to the next layer at the specific time step.
\subsubsection{Continuous Memory Update}\label{sec:mem-write}
Considering the rapidly growing user-item interactions, it is impossible for the model to scan through the complete historical behavior sequence at each prediction time, which is why almost all the existing methods only consider recent short-term user behaviors.
Thus it is necessary to maintain only the latest memories and implement an incremental update mechanism in real time.
After each user behavior on an item at the $i$-th time step, the memory slot at each layer is updated as
\begin{equation}
\bs{m}^j_i =
\left\{
\begin{array}{ccr}
g^j \left(\bs{m}^{j-1}_i, ~ \bs{m}^j_{i-1} \right) & { \text{if}~ i ~ \text{mod} ~ t^j ~ = 0 } ~, \\
\bs{m}^j_{i-1} & {\text{otherwise}} ~,
\end{array}
\right.
\label{eq:mem-write}
\end{equation}
where $j \in \left[ 1, D \right] $ and $t^j$ is the update period of $j$-th layer.
In Eq.~(\ref{eq:mem-write}), the memory writing in each layer is based on the Gated Recurrent Unit (GRU) \cite{cho2014learning} cell $g^j$ as
\begin{equation}
\begin{aligned}
\bm{z}^j_i &= \sigma (\overline{\bm{W}}^j_z \bs{m}^{j-1}_i + \overline{\bm{U}}^j_z \bs{m}^j_{i-1} + \overline{\bm{b}}^j_z) \\
\bm{r}^j_i &= \sigma (\overline{\bm{W}}^j_r \bs{m}^{j-1}_i + \overline{\bm{U}}^j_r \bs{m}^j_{i-1} + \overline{\bm{b}}^j_r) \\
\bs{m}^j_{i} &= (1 - \bm{z}^j_i) \odot \bs{m}^j_{i-1} \\
&~~~~~~~~~~~~~~~~ + \bm{z}^j_i \odot \tanh(\overline{\bm{W}}^j_m \bs{m}^{j-1}_i + \overline{\bm{U}}^j_m (\bm{r}^j_i \odot \bs{m}^j_{i-1}) + \overline{\bm{b}}^j_m) ~.
\end{aligned}
\label{eq:mem-update}
\end{equation}
The parameters $(\overline{\bm{W}}^j, \overline{\bm{U}}^j, \overline{\bm{b}}^j )$ of $g^j$ differ across layers.
Note that this is a soft-writing operation on the memory slot $\bs{m}$, since the last step of $g^j$ uses the ``erase'' vector $\bm{z}^j$, similar to other memory network literature such as the Neural Turing Machine (NTM) \cite{graves2014neural}.
Note that the first layer of memory $\bs{m}^1_i$ will be updated with the raw feature vector $\bs{v}_i$ of the user interacted item and the memory contents from the last time step $\bs{m}^1_{i-1}$.
Moreover, the memory update is periodic where each memory $\bs{m}^j$ at $j$-th memory slot will be updated according to the time step $i$ and the period $t^j$ of each layer.
Here we set the period of each layer $t^j$ as the hyperparameter which is reported in Table~\ref{tab:HPMN-structure} in the experimental setup.
By applying this periodic updating mechanism, the upper layers are updated less frequently, which achieves two goals: (i) it avoids gradient vanishing or explosion, and is thus able to model long sequences better; (ii) it remembers long-term dependencies better than the memory maintained by the lower layers.
The different update behaviors of each layer may capture multi-scale sequential patterns, which is illustrated in Section~\ref{sec:extend-exp}.
A similar idea of clockwork update has been implemented in an RNN model \cite{koutnik2014clockwork}.
However, they simply split the parameters in the recurrent cell and update the hidden states separately.
We make two improvements: (i) we connect the network layers through state transferring so as to enable layer-wise information transmission; (ii) we incorporate the external memory component to preserve both intrinsic and multi-scale sequential patterns for lifelong sequential modeling.
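The periodic layer-wise update of Eqs.~(\ref{eq:mem-write}) and (\ref{eq:mem-update}) can be sketched in plain NumPy as below. This is a minimal illustration rather than the production implementation: the random parameter initialization, the assumption that item features share the memory dimension $p$, and the names \texttt{GRUCell}/\texttt{hpmn\_update} are all illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """One memory-writing cell g^j, with its own parameter set (W, U, b)."""
    def __init__(self, p, rng):
        def mat():
            return rng.standard_normal((p, p)) * 0.1
        self.Wz, self.Uz, self.bz = mat(), mat(), np.zeros(p)
        self.Wr, self.Ur, self.br = mat(), mat(), np.zeros(p)
        self.Wm, self.Um, self.bm = mat(), mat(), np.zeros(p)

    def __call__(self, x, m_prev):
        z = sigmoid(self.Wz @ x + self.Uz @ m_prev + self.bz)     # update ("erase") gate
        r = sigmoid(self.Wr @ x + self.Ur @ m_prev + self.br)     # reset gate
        cand = np.tanh(self.Wm @ x + self.Um @ (r * m_prev) + self.bm)
        return (1.0 - z) * m_prev + z * cand                      # soft write

def hpmn_update(behaviors, periods, p, seed=0):
    """Layer j updates its slot m^j only when i mod t^j == 0; otherwise the
    slot is carried over unchanged. `behaviors` holds the item feature
    vectors v_i (assumed p-dimensional here)."""
    rng = np.random.default_rng(seed)
    D = len(periods)
    cells = [GRUCell(p, rng) for _ in range(D)]
    memory = [np.zeros(p) for _ in range(D)]
    for i, v in enumerate(behaviors, start=1):
        lower = v                           # layer 1 takes the raw item feature
        for j in range(D):
            if i % periods[j] == 0:
                memory[j] = cells[j](lower, memory[j])
            lower = memory[j]               # m^j_i feeds layer j+1 at step i
    return memory
```

Note that when layer $j-1$ did not fire at step $i$, its stored slot already equals $\bs{m}^{j-1}_i = \bs{m}^{j-1}_{i-1}$, so simply passing the stored slot upward realizes the second branch of Eq.~(\ref{eq:mem-write}).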
\subsubsection{Attentional Memory Reading}\label{sec:mem-read}
So far, the model has conducted long-term memorization of the intrinsic properties and multi-scale temporal dynamics of user behaviors, which connects multi-scale patterns of behavior dependency to the current user response prediction.
Besides, we conduct attentional memory usage similar to common memory networks \cite{weston2015memory,sukhbaatar2015end,graves2014neural}.
We calculate the comprehensive user representation $\bs{r}$ as
\begin{equation}
\begin{aligned}
\bs{r} &= \sum_{j=1}^D w^j \cdot \bs{m}^j ~.
\end{aligned}
\label{eq:repr-long}
\end{equation}
Here $\bs{m}^j$ is the maintained memory at the last time step of the long-term sequence, i.e., $i = T$, where $T$ indexes the latest behavior log of the user.
The weight of each memory $w^j$ means the contribution of each memory slot to the final representation $\bs{r}$ and it is calculated as
\begin{equation}\label{eq:attn-weight}
w^j = \frac{\exp(e^j)}{\sum_{k=1}^D \exp(e^k)} ~, \text{where } e^j = E(\bs{m}^j, \bs{v})
\end{equation}
is an energy model which measures the relevance between the query vector $\bs{v}$ and the long-term memory $\bs{m}^j$.
Note that the energy function $E$ is a nonlinear multi-layer deep neural network with the rectifier (ReLU) activation function $\text{ReLU}(x) = \max (0, x)$.
The way we calculate the attention through the energy function $E$ is similar to that in the NLP field \cite{bahdanau2014neural}.
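The attentional read of Eqs.~(\ref{eq:repr-long}) and (\ref{eq:attn-weight}) can be sketched as follows. Since the text only specifies $E$ as a multi-layer ReLU network, the concrete one-hidden-layer scorer and all weight shapes here are illustrative assumptions.

```python
import numpy as np

def attention_read(memory, query, W1, b1, w2, b2=0.0):
    """r = sum_j w^j m^j, with w = softmax over energies e^j = E(m^j, v)."""
    energies = []
    for m in memory:
        h = np.maximum(0.0, W1 @ np.concatenate([m, query]) + b1)  # ReLU hidden layer
        energies.append(w2 @ h + b2)                               # scalar energy e^j
    e = np.array(energies)
    w = np.exp(e - e.max())                                        # stable softmax
    w = w / w.sum()
    r = sum(wj * mj for wj, mj in zip(w, memory))                  # weighted read-out
    return r, w
```

Reading thus touches only the $D$ slots, never the raw behavior sequence, which is what keeps inference cheap.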
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{figures/frameworks/prediction.pdf}
\caption{The overall user response prediction.
}\label{fig:dual}
\vspace{-10pt}
\end{figure}
\subsubsection{Memory Covariance Regularization}\label{sec:cov-reg}
As is described in the previous sections, the maintained user memory captures long-term sequential patterns with multi-facet user interests.
Recall that our model uses $D$ memory slots with $p$ dimensions to memorize user behavior patterns.
We expect that different memories store knowledge of user interests from different perspectives.
However, unlike models such as NTM \cite{graves2014neural}, HPMN does not utilize an attention mechanism to reduce redundancy when updating memory slots.
In order to facilitate memorization utility, we apply a covariance regularization on the memories, following \cite{cogswell2016reducing}.
Specifically, we first define $\bm{C}$ as the covariance matrix of the memory contents as
\begin{equation}
\bm{C} = \frac{1}{p}(\bm{M} - \overline{\bm{M}})(\bm{M} - \overline{\bm{M}})^\top, \text{where } \bm{M}=[\bs{m}^1,...,\bs{m}^j,...,\bs{m}^D]^\top
\end{equation}
is the matrix of memories, $\overline{\bm{M}}$ is the matrix of row-wise means of $\bm{M}$, and $p$ is the dimension of each memory slot. Note that $\overline{\bm{M}}$ has the same shape as $\bm{M}$.
After that, we define the loss $\mathcal{L}_c$ to regularize the covariance as
\begin{equation}\label{eq:mem-cov}
\mathcal{L}_c = \frac{1}{2} (\|\bm{C}\|_{F}^2 - \|\text{diag}(\bm{C})\|_2^2)
\end{equation}
where $\|\cdot\|_F$ is the Frobenius norm of matrix.
We need to minimize the covariance between different memory slots, which corresponds to penalizing the off-diagonal entries of $\bm{C}$.
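Eq.~(\ref{eq:mem-cov}) translates directly into code; this NumPy sketch stacks the $D$ memory slots row-wise into $\bm{M}$ and penalizes the squared off-diagonal covariances.

```python
import numpy as np

def covariance_loss(M):
    """L_c = 0.5 * (||C||_F^2 - ||diag(C)||_2^2), C the row covariance of M (D x p)."""
    p = M.shape[1]
    centered = M - M.mean(axis=1, keepdims=True)   # subtract each slot's mean
    C = centered @ centered.T / p                  # D x D covariance matrix
    return 0.5 * (np.sum(C ** 2) - np.sum(np.diag(C) ** 2))
```

The loss is zero exactly when every pair of distinct slots is uncorrelated, pushing the slots to store complementary facets of user interest.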
\subsection{Prediction Function and Losses}\label{sec:pred-loss}
For each prediction request, we obtain the comprehensive representations $\bs{r}$ through querying the personalized memory for the target user by Eqs.~(\ref{eq:user-model}) and (\ref{eq:repr-long}).
The final estimation for the user response probability will be calculated as that in Figure~\ref{fig:dual} as
\begin{equation}\label{eq:dual-func}
\hat{y} = f(\bs{r}, \bs{v}, \bm{c}; \bs{\Theta}) ~,
\end{equation}
where $f$ is implemented as a multi-layer deep network with three layers, whose widths are 200, 80 and 1 respectively. The first and second layers use ReLU as the activation function, while the third layer uses the sigmoid function $\text{Sigmoid}(x)=\frac{1}{1+e^{-x}}$.
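A minimal sketch of the stated 200-80-1 prediction network $f$ of Eq.~(\ref{eq:dual-func}) follows; the dictionary-based parameter container and its shapes are illustrative, not part of the paper's specification.

```python
import numpy as np

def predict_ctr(r, v, c, params):
    """f(r, v, c): concatenate the inputs, apply ReLU-ReLU-sigmoid layers."""
    x = np.concatenate([r, v, c])
    h1 = np.maximum(0.0, params["W1"] @ x + params["b1"])   # ReLU layer, width 200
    h2 = np.maximum(0.0, params["W2"] @ h1 + params["b2"])  # ReLU layer, width 80
    logit = params["w3"] @ h2 + params["b3"]                # output layer, width 1
    return 1.0 / (1.0 + np.exp(-logit))                     # sigmoid -> CTR in (0, 1)
```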
As for the loss function, we take an end-to-end training and introduce (i) the widely used cross entropy loss \cite{zhou2018deepa,zhou2018deepb,ren2018bid} $\mathcal{L}_{\text{ce}}$ over the whole dataset with (ii) the covariance regularization $\mathcal{L}_c$ and (iii) the parameter regularization $\mathcal{L}_r$.
We utilize gradient descent for optimization.
Thus the final loss function is
\begin{equation}
\begin{aligned}
\min_{\bs{\Theta}, \bs{\Phi}} \mathcal{L} &= \mathcal{L}_{\text{ce}} + \lambda \mathcal{L}_c + \mu \mathcal{L}_r \\
&= -\sum_{k=1}^N \big[ y_k \log \hat{y}_k + (1-y_k) \log (1-\hat{y}_k)\big] \\
& + \frac{1}{2} \lambda \left( \|\bm{C}\|_{F}^2 - \|\text{diag}(\bm{C})\|_2^2 \right) + \frac{1}{2} \mu \left( \|\bs{\Theta} \|_2^2 + \| \bs{\Phi} \|_2^2 \right) ~,
\end{aligned}
\end{equation}
where $\lambda$ and $\mu$ are the weights of the two regularization losses, $\bs{\Phi} = \{(\overline{\bm{W}}^j, \overline{\bm{U}}^j, \overline{\bm{b}}^j )\}_{j=1}^D$ is the set of model parameters of HPMN and $N$ is the size of training dataset.
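The full objective can be assembled as below; the default regularization weights \texttt{lam} and \texttt{mu} are placeholders, since $\lambda$ and $\mu$ are left as hyperparameters in the text.

```python
import numpy as np

def total_loss(y, y_hat, cov_loss, params, lam=1e-3, mu=1e-4):
    """Cross entropy over all samples + lambda * L_c + mu * L2 on parameters."""
    y = np.asarray(y, dtype=float)
    p = np.clip(np.asarray(y_hat, dtype=float), 1e-12, 1.0 - 1e-12)  # numerical safety
    ce = -np.sum(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))        # L_ce
    l2 = 0.5 * sum(np.sum(w ** 2) for w in params)                   # L_r
    return ce + lam * cov_loss + mu * l2
```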
\minisection{Discussions}
We propose the lifelong sequential user modeling with the personalized memory for each user.
The memory is updated periodically to capture long-term yet multi-scale sequential patterns of user behaviors.
For user response prediction, the maintained user memory will be queried with the target item to forecast the user preference over that item.
Note that LSM has some essential differences from the lifelong machine learning (LML) proposed by \cite{chen2016lifelongb}. First, the retained knowledge in LSM is user-specific while in LML it is model-specific; second, LSM is conducted for user modeling while LML aims at continuous multi-task learning \cite{chen2016lifelonga}; finally, the user behavior patterns drift in LSM while the data samples and tasks change in LML.
The retained yet compressed memory guarantees that the time complexity of our model is acceptable for industrial production.
The personalized memory will be created from the first registration of the user and maintained by HPMN model as lifelong modeling.
For each prediction, the model only needs to query the maintained memory, rather than inferring over the whole behavior sequence as adopted by the other related works \cite{hidasi2015session,zhou2018deepb}.
Meanwhile, our model has the advantage of sequential behavior modeling over aggregation-based models, such as traditional latent factor models \cite{koren2009collaborative,koren2008factorization}.
For memory updating, the time complexity is $O(DC)$ where $C$ is the calculation time of the recurrent component.
All the matrix operations can be parallelly executed on GPUs.
The model parameters of HPMN can be updated in a normal way as common methods \cite{qu2016product,zhou2018deepa} where the model is retrained periodically depending on the specific situations.
The number of memory slots $D$ is a hyperparameter and the specific slot number depends on the practical situation.
Along with the lifelong sequential user modeling, the memory of each user can be expanded accordingly.
We conduct an experiment on the relation between the number of memory slots and the task performance and discuss it in Section~\ref{sec:extend-exp}.
We may follow \cite{sodhani2018training} and expand the memory when the performance drops by some margin.
However, we only need to add one layer with a larger updating period on the top, without retraining all the parameters of HPMN as in \cite{sodhani2018training}.
\section{Experiments}\label{sec:exp}
In this section, we present the details of the experiment setups and the corresponding results.
We also make some discussions with an extended investigation to illustrate the effectiveness of our model.
Moreover, we have also published our code\footnote{Reproducible code link: https://github.com/alimamarankgroup/HPMN.}.
We start with three research questions (RQs) to lead the experiments and discussions.
\begin{itemize}
\item [\textbf{RQ1}] Does the incorporation of lifelong behavior sequences contribute to the final user response prediction?
\item [\textbf{RQ2}] Under the comparable experimental settings, does HPMN achieve the best performance?
\item [\textbf{RQ3}] What patterns does HPMN capture from user behavior sequences? Does it have the ability to capture long-term, short-term and multi-scale sequential patterns?
\end{itemize}
\subsection{Experimental Setups}
In this part, we present the experiment setups including dataset description, preprocessing method, evaluation metrics, experiment flow and the discussion of the compared settings.
\subsubsection{Datasets}
We evaluate all the compared models over three real-world datasets.
The statistics of the three datasets are shown in Table \ref{tab:dataset-statistics}.
\begin{description}[leftmargin=15pt]
\item [Amazon] \cite{McAuley2015IRS} is a collection of user browsing logs over e-commerce products with reviews and product metadata from Amazon Inc. We use the subset of Electronic products which contains user behavior logs from May 1999 to July 2014.
Moreover, we regard all the user reviews as user click behaviors. This processing method has been widely used in the related works \cite{zhou2018deepb, zhou2018deepa}.
\item [Taobao] \cite{zhu2018learning} is a dataset of user behaviors from the commercial platform of Taobao. The dataset contains several types of user behaviors including click, purchase, add-to-cart and item favoring. It consists of the behavior sequences of nearly one million users from November 25 to December 3, 2017.
\item [XLong] is sampled from the click logs of more than twenty thousand users on Alibaba e-commerce platform from April to September 2018. It contains relatively longer historical behavior sequences than the other two datasets. Note that there is no public dataset containing such long behavior history of each user for sequential user modeling. We have published this dataset for further research\footnote{Dataset download link: https://tianchi.aliyun.com/dataset/dataDetail?dataId=22482.}.
\end{description}
\noindent\textbf{Dataset Properties}.
These datasets are selected as typical examples in real-world applications.
\textbf{Amazon} dataset covers a very long time range of user behaviors, about fifteen years, while some of the users were inactive and generated relatively sparse behaviors during this long period.
For \textbf{XLong} dataset, each user has a behavior sequence of one thousand clicks that happened within half a year, and modeling such long sequences is a major challenge for lifelong sequential modeling.
As for \textbf{Taobao} dataset, although it only covers nine days' logs, the users in it generated quite a few behaviors, which reflects that they are quite active.
\noindent\textbf{Dataset Preprocessing}.
To simulate the environment of lifelong sequential modeling, for each dataset, we sort the behaviors of each user by the timestamp to form the lifelong behavior sequence for each user.
Assuming there are $T$ behaviors of user $u$, we use this behavior sequence to predict the user response probability on the target item of the $(T+1)$-th behavior.
Note that 50\% of the target items at the prediction time in each dataset have been replaced with another item from the non-clicked item set of each user, to build the negative samples.
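The negative sampling step can be sketched as follows. The function and parameter names are illustrative, and drawing uniformly from the non-clicked item set is an assumption, since the text does not specify the replacement distribution.

```python
import random

def build_samples(user_clicks, all_items, neg_ratio=0.5, seed=7):
    """For each user, the last click is the prediction target; with probability
    `neg_ratio` it is replaced by a random non-clicked item (label 0)."""
    rng = random.Random(seed)
    samples = []
    for user, clicks in user_clicks.items():
        history, target, label = clicks[:-1], clicks[-1], 1
        if rng.random() < neg_ratio:                 # turn this sample negative
            clicked = set(clicks)
            negatives = [it for it in all_items if it not in clicked]
            target, label = rng.choice(negatives), 0
        samples.append((user, history, target, label))
    return samples
```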
\noindent\textbf{Training \& Test Splitting}.
We split the training and test dataset according to the timestamp of the prediction behavior.
We set a cut time within the time range covered by the full dataset.
If the prediction behavior of a sequence took place before the cut time, the sequence is put into the training set; otherwise, it goes into the test set. In this way, the training set comprises about 70\% of the whole dataset and the test set about 30\%.
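The timestamp-based split can be sketched as below; this is a minimal illustration with an assumed per-sample prediction timestamp, and in practice the cut time is chosen so that the training set covers roughly 70\% of the samples:

```python
def time_split(samples, cut_time):
    """Split samples into train/test by the timestamp of the
    prediction behavior: sequences whose prediction happened before
    the cut time go to training, the rest to test."""
    train = [s for s in samples if s["pred_time"] < cut_time]
    test = [s for s in samples if s["pred_time"] >= cut_time]
    return train, test
```
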
\begin{table}[t]
\centering
\caption{The dataset statistics. $T$: length of the whole lifelong sequence (maximal length in the dataset). $s$: length of recent behavior sequence. }\label{tab:dataset-statistics}
\resizebox{0.55\columnwidth}{!}{
\begin{tabular}{c|c|c|c}
\hline
Dataset & Amazon & Taobao & XLong \\
\hline\hline
User \# & 192,403 & 987,994 & 20,000 \\
\hline
Item \# & 63,001 & 4,162,024 & 3,269,017 \\
\hline
$s$ & 10 & 44 & 232 \\
$T$ & 100 & 300 & 1,000 \\
\hline
\end{tabular}
}
\vspace{-10pt}
\end{table}
\subsubsection{Evaluation Metrics}
We use two measurements for the user response prediction task.
The first metric is area under ROC curve (\textbf{AUC}) which assesses the pairwise ranking performance of the classification results between the clicked and non-clicked samples.
The other metric is \textbf{Log-loss} calculated as
\begin{equation}\label{eq:obj-func}\small
\text{Log-loss} = \sum_{k=1}^N \big[- y_k \log \hat{y}_k - (1-y_k) \log(1 - \hat{y}_k) \big] ~.
\end{equation}
Here $N$ is the number of samples in the test set. Log-loss measures the overall likelihood of the whole test data and has been widely used for classification tasks \cite{ren2016user,qu2016product}.
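Both metrics can be computed directly from their definitions; below is a minimal pure-Python sketch, with AUC computed via pairwise comparison of positive and negative scores and Log-loss as the sum in the equation above:

```python
import math

def auc(y_true, y_score):
    """Pairwise-ranking AUC: fraction of (positive, negative) pairs
    where the positive sample receives the higher score (ties 0.5)."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def log_loss(y_true, y_score, eps=1e-12):
    """Summed negative log-likelihood, as in the equation above
    (clipped by eps for numerical safety)."""
    return sum(-y * math.log(max(p, eps))
               - (1 - y) * math.log(max(1 - p, eps))
               for y, p in zip(y_true, y_score))
```
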
\subsubsection{Experiment Flow}
Recall that each sample of user behaviors contains at most $T$ interacted items.
As some of our baseline models were proposed to model short recent behavior sequences, we first take the most recent $s$ user behaviors ($s<T$) as the short-term sequential data for baseline model evaluation, as shown in Table~\ref{tab:dataset-statistics}.
Moreover, for fair comparison, we also conduct the experiments over the whole lifelong sequences with length $T$ for all the baselines.
Note that, all the compared models are fed with the same features including contextual features and side information for fair comparison.
Finally, we conduct the \textbf{significance test} to verify the statistical significance of the performance improvement of our model against the baseline models.
Specifically, we deploy a Mann--Whitney U test \cite{mason2002areas} under the AUC metric, and a t-test \cite{bhattacharya2002median} under the Log-loss metric.
\subsubsection{Compared Settings}\label{sec:comp-models}
To show the effectiveness of our method, we compare it with eight baselines in three groups. The first group consists of aggregation-based models, which aggregate user behaviors for user modeling and response prediction without considering sequential patterns.
\begin{itemize}[leftmargin=35pt]
\item [\textbf{DNN}] is a multi-layer feed-forward deep neural network which has been widely used as the base model in recent works \cite{zhou2018deepa,zhang2016deep,qu2016product}.
We follow \cite{zhou2018deepa} and use a sum pooling operation to integrate all the sequential behavior features, concatenated with the other features, as the user representation.
\item [\textbf{SVD++}] \cite{koren2008factorization} is a MF-based model that combines the user clicked items and latent factors for response prediction.
\end{itemize}
The second group contains short-term sequential modeling methods including RNN-based models, CNN-based models and a memory network model. These methods either use the behavior data within a session or truncate the behavior sequence to a fixed recent length.
\begin{itemize}[leftmargin=35pt]
\item [\textbf{GRU4Rec}] \cite{hidasi2015session} is based on RNN and is the first work using recurrent cells to model sequential user behaviors. It was originally proposed for session-based recommendation.
\item [\textbf{Caser}] \cite{tang2018personalized} is a CNN-based model, using horizontal and vertical convolutional filters to capture behavior patterns at different scales.
\item [\textbf{DIEN}] \cite{zhou2018deepb} is a two-layer RNN structure with attention mechanism. It uses the calculated attention values to control the second RNN layer to model drifting user interests.
\item [\textbf{RUM}] \cite{chen2018sequential} is a memory network model which uses an external memory, following architectures similar to those in NLP tasks \cite{miller2016key,graves2014neural}, to store the user's behavior features.
We implement feature-level RUM as it performed best in the paper \cite{chen2018sequential}.
\end{itemize}
The third group consists of long-term sequential modeling methods. Note, however, that our HPMN model is the first work on lifelong sequential modeling for user response prediction.
\begin{itemize}[leftmargin=35pt]
\item [\textbf{LSTM}] \cite{hochreiter1997long} is the first model for long-term sequential modeling, though its memory capacity is limited.
\item [\textbf{SHAN}] \cite{ying2018sequential} is a hierarchical attention network. It uses two attention layers to handle user's long- and short-term sequences, respectively. However, this model does not capture sequential patterns.
\item [\textbf{HPMN}] is our proposed model described in Section ~\ref{sec:method}.
\end{itemize}
We first evaluate the models in the second group over the short-length data, as they were proposed for short-term sequential modeling. Then we test all the models over the whole-length data in comparison with our proposed model.
Some state-based user models \cite{rendle2010factorizing,he2016fusing} have already been compared in \cite{tang2018personalized}, so we only compare with the state-of-the-art \cite{tang2018personalized}.
We omit comparison with the other memory-based models \cite{ebesu2018collaborative,huang2018improving} since they do not aim at sequential user modeling.
For online inference, all of the baselines except memory models, i.e., RUM and HPMN, need to load the whole user behavior sequence to further conduct user modeling for response prediction, while the memory-based models only need to read the user's personalized memory contents for the subsequent prediction.
Thus, memory-based models are more space-efficient for online sequential modeling.
The difference between our model and the other memory network model, i.e., RUM, is twofold.
(i) RUM implements the memory architecture of \cite{miller2016key} from NLP tasks, which may not be appropriate for user response prediction since user-generated data differ substantially from language sentences; the experimental results below also reflect this.
(ii) Our model utilizes periodically updated memories in a hierarchical network to capture multi-scale sequential patterns, which RUM does not consider.
\subsubsection{Hyperparameters}\label{sec:hyperparameters}
There are two sets of hyperparameters.
The first set is training hyperparameters, including learning rate and regularization weight. We consider learning rate from $\{1 \times 10^{-4}, 5 \times 10^{-3}, 1 \times 10^{-3}\}$ and regularization weight $\lambda$ and $\mu$ from $\{1 \times 10^{-3}, 1 \times 10^{-4}, 1 \times 10^{-5}\}$. Batch size is fixed on 128 for all the models.
The hyperparameters of each model are tuned and the best performances have been reported below.
The second set is the structure hyperparameters of the HPMN model, including the size of each memory slot and the update period $t^j$ of the $j$-th layer, which are shown in Table~\ref{tab:HPMN-structure}.
The reported update periods are listed from the first (lowest) layer to the last (highest).
\begin{table}[h]
\scriptsize
\centering
\caption{The HPMN structures on different datasets.}\label{tab:HPMN-structure}
\resizebox{0.6\columnwidth}{!}{
\begin{tabular}{c|c|c}
\hline
Dataset & Mem. Size & Update Periods\\
\hline
Amazon & 32 & 3 layers: 1, 2, 4 \\
\hline
Taobao & 32 & 4 layers: 1, 2, 4, 12 \\
\hline
XLong & 32 & 6 layers: 1, 2, 4, 8, 16, 32 \\
\hline
\end{tabular}
}
\vspace{-10pt}
\end{table}
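The update periods in Table~\ref{tab:HPMN-structure} define a simple multi-scale schedule: the $j$-th layer refreshes its memory every $t^j$ steps. A minimal sketch of this scheduling rule (illustrative only, not the memory update itself):

```python
def updated_layers(step, periods):
    """Indices (1-based) of HPMN layers whose memory is refreshed at
    a given step: layer j updates whenever the step count is a
    multiple of its update period t^j."""
    return [j for j, t in enumerate(periods, start=1) if step % t == 0]
```

With the XLong configuration (periods 1, 2, 4, 8, 16, 32), the lowest layer updates at every behavior while the highest layer updates only every 32 behaviors, yielding the multi-scale coverage the model relies on.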
\begin{table}[h]
\centering
\caption{Performance Comparison. (* indicates p-value < $10^{-6}$ in the significance test. $\uparrow$ and $\downarrow$ indicate whether the \textit{performance} over lifelong sequences (with length $T$) is better or worse than that of the same model over short sequences (with length $s$).
AUC: the higher, the better; Log-loss: the lower, the better.
The second best performance of each metric is underlined.)}\label{tab:perf-table}
\resizebox{1.0\columnwidth}{!}{
\begin{tabular}{c|c|c|lll|lll}
\hline
\multirow{2}{*}{Model~Group} & \multirow{2}{*}{Model} & \multirow{2}{*}{Len.} & \multicolumn{3}{c|}{AUC} & \multicolumn{3}{c}{Log-loss}\\
& & & Amazon & Taobao & XLong & Amazon & Taobao & XLong\\
\hline
\multirow{4}{*}{Group 2} &
GRU4Rec & $s$ & 0.7669 & 0.8431 & 0.8716 & 0.5650 & 0.4867 & 0.4583\\
& Caser & $s$ & 0.7509 & 0.8260 & 0.8467 & 0.5795 & 0.5094 & 0.4955\\
& DIEN & $s$ & 0.7725 & 0.8914 & \underline{0.8725} & 0.5604 & 0.4184 & \underline{0.4515}\\
& RUM & $s$ & 0.7434 & 0.8327 & 0.8512 & 0.5819 & 0.5400 & 0.4931\\
\hline
\hline
\multirow{2}{*}{Group 1} &
DNN & $T$ & 0.7546 & 0.7460 & 0.8152 & 0.6869 & 0.5681 & 0.5365\\
& SVD++ & $T$ & 0.7155 & 0.8371 & 0.8008 & 0.6216 & 0.8371 & 1.7054\\
\hline
\multirow{4}{*}{Group 2} &
GRU4Rec & $T$ & 0.7760 $\uparrow$ & 0.8471 $\uparrow$ & 0.8702 $\downarrow$ & 0.5569 $\uparrow$ & 0.4827 $\uparrow$ & 0.4630 $\downarrow$\\
& Caser & $T$ & 0.7582 $\uparrow$ & 0.8745 $\uparrow$ & 0.8390 $\downarrow$ & 0.5704 $\uparrow$ & 0.4550 $\uparrow$ & 0.5050 $\downarrow$\\
& DIEN & $T$ & \underline{0.7770} $\uparrow$ & \underline{0.8934} $\uparrow$ & 0.8716 $\downarrow$ & \underline{0.5564} $\uparrow$ & \underline{0.4155} $\uparrow$ & 0.4559 $\downarrow$\\
& RUM & $T$ & 0.7464 $\uparrow$ & 0.8370 $\uparrow$ & 0.8649 $\uparrow$ & 0.6301 $\downarrow$ & 0.4966 $\uparrow$ & 0.4620 $\uparrow$\\
\hline
\multirow{3}{*}{Group 3} &
LSTM & $T$ & 0.7765 & 0.8681 & 0.8686 & 0.5612 & 0.4603 & 0.4570\\
& SHAN & $T$ & 0.7763 & 0.8828 & 0.8369 & 0.5595 & 0.4318 & 0.5000\\
& HPMN & $T$ & \textbf{0.7809}* & \textbf{0.9240}* & \textbf{0.8929}* & \textbf{0.5535}* & \textbf{0.3487}* & \textbf{0.4150}*\\
\hline
\end{tabular}
}
\vspace{-10pt}
\end{table}
\subsection{Experimental Results and Analysis}
In this section, we present the experiment results in Table~\ref{tab:perf-table} and conduct an analysis from several perspectives.
Recall that the compared models are divided into three groups as mentioned in Sec.~\ref{sec:comp-models}.
\minisection{Comparison between HPMN and baselines}
From Table~\ref{tab:perf-table}, we can see that HPMN significantly outperforms all the baselines and achieves state-of-the-art performance (\textbf{RQ2}).
The aggregation-based models in Group 1, i.e., DNN and SVD++, do not perform as well as the sequential modeling methods, which indicates that sequential patterns exist in user behavior data and that simply aggregating user behavior features may result in poor performance.
Comparing with the other sequential modeling methods of Group 2, HPMN outperforms all of them regardless of the length of user behavior sequences.
Since GRU4Rec was proposed for short-term session-based recommendation, it has the same issue as LSTM and may lose some knowledge of long-term behavior dependencies.
Though the attention mechanism of DIEN improves over GRU4Rec by a large margin, it still ignores multi-scale user behavior patterns, as will be illustrated with an example in the next section.
Moreover, the DIEN model needs to conduct online inference over the whole sequence for each prediction, which lacks practical efficiency for extremely long, especially lifelong, user behavior sequences.
From the results of Caser, which uses CNN to extract sequential patterns, we may conclude that the convolution operation may not be appropriate for sequential user modeling.
As for the RUM model, though it utilizes an external memory for user modeling, it fails to capture sequential patterns, which results in quite poor performance. Moreover, RUM was originally optimized for other metrics \cite{chen2018sequential}, e.g., precision and recall, so it may not perform well for user response prediction.
By comparing HPMN with the models in Group 3, i.e., LSTM and SHAN, we find that although both baselines are proposed to deal with long-term user modeling, HPMN has better performance on the very long sequences.
The likely reason is that LSTM has limited memory capacity to retain the knowledge, while SHAN does not consider sequential patterns in the user behaviors.
\minisection{Analysis about Lifelong Sequential Modeling}
Recall that we evaluate all the short-term sequential modeling methods on both the short sequence data and the lifelong sequence data, as shown in Table~\ref{tab:perf-table}; in the table we mark the performance gain ($\uparrow$) or drop ($\downarrow$) of the latter case compared with the former.
From the table, we find that almost all the models gain an improvement when modeling the lifelong user behavior sequences on the Amazon and Taobao datasets.
However, on the XLong dataset, the performance of GRU4Rec, Caser and DIEN drops, while the memory-based model, i.e., RUM, achieves better performance than it does on short sequences.
Note that our HPMN model performs best.
All these phenomena reflect that incorporating lifelong sequences contributes to better user modeling and response prediction (\textbf{RQ1}). Nevertheless, lifelong modeling also requires a well-designed memory model, and our HPMN achieves satisfactory performance on this problem.
\minisection{Model Convergence}
We plot the learning curves of HPMN model over the three datasets in Figure~\ref{fig:lc}.
As shown in the figure, HPMN converges quickly: the Log-loss values on all three datasets drop to stable convergence after about one iteration over the whole training set.
\begin{figure}[h]
\centering
\includegraphics[width=1.0\columnwidth]{figures/learning_curve.pdf}
\caption{The learning curves on three datasets. Here one epoch means the whole iteration over the training dataset.}\label{fig:lc}
\vspace{-20pt}
\end{figure}
\begin{figure*}[h]
\centering
\includegraphics[width=0.65\textwidth]{figures/atten_seq.pdf}
\caption{An illustration of long-term, short-term and multi-scale sequential patterns that are captured by HPMN.}\label{fig:atten_seq}
\end{figure*}
\begin{figure}[h]
\centering
\includegraphics[width=0.55\columnwidth]{figures/num_of_layer.pdf}
\caption{The performance of HPMN with various memory numbers on XLong Dataset. The update period of each $j$-th layer follows exponential sequence $\{2^{j-1}\}_{j=1}^{11}$.}\label{fig:num_of_layer}
\vspace{-15pt}
\end{figure}
\subsection{Extended Investigation}\label{sec:extend-exp}
In this section, we further investigate the patterns that HPMN captures when dealing with lifelong sequence (\textbf{RQ3}) and the model capacity of memorization.
\minisection{Sequential Patterns with Multi-scale Dependency}
In Figure~\ref{fig:atten_seq}, we plot three real examples of user behavior sequence with length $T=1000$ sampled from XLong dataset.
These three sequences reflect the long-term, short-term and multi-scale sequential patterns captured by HPMN, respectively.
In the first example, the target item is ``lotion'', clicked by the user at the final prediction time. In her behavior history, there are several clicks on lotions at the 31st, 33rd and 37th positions of her behavior sequence, far from her latest behaviors. When HPMN takes the target item as the query to build the user representation, the attention heatmap calculated as in Eq.~(\ref{eq:attn-weight}) shows that the fifth layer of HPMN, whose update period is relatively large, receives the maximum attention. This shows that HPMN captures long-term sequential patterns in the memories maintained by the higher layers.
In the second example, User 2 finally clicked a desk, and some similar items (table, cabinet) were also clicked in the very recent history. However, such furniture was not clicked in the earlier part of the sequence. The first memory of HPMN has the maximum attention value, which shows that the lower layers are better at modeling short-term patterns, since they update their memories more frequently to capture the user's short-term interests.
As for User 3, the click behavior on the target item has both long-term and short-term dependencies: similar items were clicked both in the recent history and in the earlier part of her behavior sequence. After inference through the HPMN model, the second and fifth layers have higher attention values, as they capture the short-term and long-term dependencies, respectively. This demonstrates that HPMN has the ability to capture multi-scale sequential patterns.
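The attentive read described above can be sketched as follows. We assume dot-product scoring between the target-item query and each layer's memory slot as an illustration; the actual scoring function in Eq.~(\ref{eq:attn-weight}) may differ:

```python
import math

def attention_read(query, memories):
    """Softmax-weighted read over per-layer memory slots, using the
    target-item embedding as the query. Dot-product scoring is an
    assumption; the paper's attention equation may use another score."""
    scores = [sum(q * m for q, m in zip(query, mem)) for mem in memories]
    mx = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - mx) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    # weighted sum of the memory slots, one weight per layer
    read = [sum(w * mem[i] for w, mem in zip(weights, memories))
            for i in range(len(query))]
    return weights, read
```
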
\minisection{Memory Capacity}
In Figure \ref{fig:num_of_layer}, we plot the AUC performance of HPMN with different numbers of memory slots on XLong Dataset. Note that the number of memory slots is equal to the number of HPMN layers.
On one hand, when the number of memory slots for each user is less than 5, the prediction performance of the model rises sharply as the memory increases.
This indicates that long behavior sequences require a large memory of sequential patterns, and that increasing the memory with the growth of the user behavior sequence helps HPMN better capture lifelong sequential patterns.
On the other hand, when the number of memory slots is larger than 5, the AUC score drops slightly as the memory grows.
This demonstrates that the appropriate model capacity is constrained by the length of the user behavior sequence.
It provides guidance on memory expansion and on the principle of enlarging the HPMN model for lifelong sequential modeling with evolving user behavior sequences, as discussed in Section~\ref{sec:pred-loss}.
\vspace{-5pt}
\section{Conclusion}\label{sec:conclusion}
In this paper, we present lifelong sequential modeling for user response prediction.
To achieve this goal, we construct a framework with a memory network model maintaining a personalized hierarchical memory for each user.
The model updates the corresponding user memory through a periodic updating mechanism to retain the knowledge of multi-scale sequential patterns.
The user's lifelong memory is attentively read for the subsequent user response prediction.
Extensive experiments have demonstrated the advantage of lifelong sequential modeling, and our model achieves a significant improvement over strong baselines including the state-of-the-art.
In the future, we will apply our lifelong sequential modeling to improve multi-task user modeling, such as the prediction of both user clicks and conversions \cite{ma2018entire}.
We also plan to investigate learning dynamic update periods for each layer, to capture more flexible user behavior patterns.
\minisection{Acknowledgments}
The work is sponsored by Alibaba Innovation Research. The corresponding author Weinan Zhang thanks the support of National Natural Science Foundation of China (61702327, 61772333, 61632017) and Shanghai Sailing Program (17YF1428200).
\bibliographystyle{ACM-Reference-Format}
\balance
\section{Alarming Mechanisms and Empirical Thresholding}
\label{sec:approach}
An alarm system needs two components to minimize the costs of future cases: (1) a probabilistic classifier $\widehat{\mathit{out}}_L\in\mathcal{E}^*\rightarrow[0,1]$ that estimates the likelihood of an undesired outcome for a partial trace based on some historical observations $L$, and (2) an alarming mechanism that, for a given incomplete case, decides whether or not to raise an alarm based on the prediction made by $\widehat{\mathit{out}}_L$. We propose to implement the second component using a function $\mathit{agent}\in[0,1]\rightarrow\{\textsf{true},\textsf{false}\}$ that operates on the estimated likelihood of an undesired outcome, where value $\textsf{true}$ represents the decision to raise an alarm. Together, the two components form an \emph{alarm system}, $\mathit{alarm}(\mathit{hd}^k(\sigma))=\mathit{agent}(\widehat{\mathit{out}}_L(\mathit{hd}^k(\sigma)))$, which makes the decision on whether or not to raise an alarm based on the observed $k$ events of trace $\sigma$.
The first component, function $\widehat{\mathit{out}}_L$, can be implemented using any classification algorithm that is naturally probabilistic, i.e., that outputs likelihood scores on a $[0,1]$-interval instead of a binary outcome. Examples of probabilistic classification algorithms include naive Bayes, logistic regression, and random forest.
The classifier is trained on historical cases recorded in a log $L_\mathit{train}$.
It is easy to see that the decision on whether or not to raise an alarm should be dependent not only on $\widehat{\mathit{out}}_{L_\mathit{train}}(\mathit{hd}^k(\sigma))$, but also on the configuration of $c_\mathit{in}$, $c_\mathit{out}$, $c_\mathit{com}$, and $\mathit{eff}$. When $c_\mathit{in}$ and $c_\mathit{com}$ are very low compared to $c_\mathit{out}$, it might be beneficial to use a lower threshold for the estimated likelihood $\widehat{\mathit{out}}_{L_\mathit{train}}(\mathit{hd}^k(\sigma))$, while one would want to be more certain that the undesired outcome will happen when $c_\mathit{in}$ or $c_\mathit{com}$ is high.
We propose to implement the second component, $\mathit{agent}$, as an \emph{alarming threshold}, i.e., a mechanism that alarms when the estimated likelihood of an undesired outcome is at least $\tau$. We define function $\mathit{alarm}_\tau(\mathit{hd}^k(\sigma))$ to be the alarming function that uses the alarming mechanism $\mathit{agent}_\tau(\widehat{\mathit{out}}_{L_\mathit{train}}(\mathit{hd}^k(\sigma))) = \widehat{\mathit{out}}_{L_\mathit{train}}(\mathit{hd}^k(\sigma)) \ge \tau$.
We aim at finding the optimal value $\overline{\tau}$ of the alarming threshold that minimizes the cost on a log $L_{\mathit{thres}}$ consisting of historical observations such that $L_{\mathit{thres}}\cap L_{\mathit{train}}=\emptyset$ with respect to a given likelihood estimator $\widehat{\mathit{out}}_{L_\mathit{train}}$ and cost model $\mathit{cm}$. The total cost of an alarming mechanism $\mathit{alarm}$ on a log $L$ is defined as $\mathit{cost}(L,\mathit{cm},\mathit{alarm})=\Sigma_{\sigma\in L}\mathit{cost}(\sigma,L,\mathit{cm},\mathit{alarm})$. Using this definition, we define $\overline{\tau} = \arg\min_{\tau\in[0,1]} \mathit{cost}(L_\mathit{thres},\mathit{cm}, \mathit{alarm}_\tau)$. Optimizing a threshold $\tau$ on a separate thresholding set is called \emph{empirical thresholding}~\cite{sheng2006thresholding} and the search for the optimal threshold $\overline{\tau}$ wrt.\ a specified cost model and log $L_{\mathit{thres}}$ can be performed using any hyperparameter optimization technique, such as Tree-structured Parzen Estimator (TPE) optimization~\cite{bergstra2011algorithms}. The resulting approach can be considered to be a form of cost-sensitive learning, since the value $\overline{\tau}$ depends on how the cost model $\mathit{cm}$ is specified.
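A minimal sketch of empirical thresholding, using an exhaustive grid over candidate values of $\tau$ instead of TPE, and an illustrative simplification of the cost model $\mathit{cm}$ (per-case costs for intervention, undesired outcome, and compensation, with mitigation effectiveness $\mathit{eff}$):

```python
def total_cost(scores_and_outcomes, tau, c_in, c_out, c_com, eff):
    """Illustrative total cost of alarming with threshold tau over
    (predicted score, undesired outcome?) pairs. Simplified cost
    model: alarming on a truly undesired case costs c_in plus the
    mitigated damage (1 - eff) * c_out; a false alarm costs
    c_in + c_com; a missed undesired case costs c_out."""
    cost = 0.0
    for score, undesired in scores_and_outcomes:
        if score >= tau:          # alarm raised
            if undesired:
                cost += c_in + (1 - eff) * c_out
            else:                 # false alarm: compensation needed
                cost += c_in + c_com
        elif undesired:           # no alarm, undesired outcome occurs
            cost += c_out
    return cost

def best_threshold(thres_set, c_in, c_out, c_com, eff, grid=101):
    """Empirical thresholding: choose the tau minimizing total cost
    on a held-out thresholding set (grid search as a stand-in for
    the TPE optimization used here)."""
    candidates = [i / (grid - 1) for i in range(grid)]
    return min(candidates,
               key=lambda t: total_cost(thres_set, t, c_in, c_out, c_com, eff))
```

As the text above anticipates, with a cheap intervention and a costly undesired outcome the optimal threshold tends toward lower values, and vice versa when intervention or compensation is expensive.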
Note that as an alternative to a single global alarming threshold $\overline{\tau}$ it is possible to optimize a separate threshold $\overline{\tau_k}$ for each prefix length $k$. We experimentally found a single global threshold $\overline{\tau}$ optimized on $L_\mathit{thres}$ to outperform separate prefix-length-dependent thresholds $\overline{\tau_k}$ optimized on $L_\mathit{thres}$, therefore we propose to use a single optimized threshold.
After creating the fully functional alarm system by training a classifier on $L_\mathit{train}$ and optimizing the alarming threshold on $L_\mathit{thres}$ for the given cost model $\mathit{cm}$, the obtained alarming function $\mathit{alarm}$ can be applied to the continuous stream of events coming from the executions of a business process, thereby reducing the processing costs of the running cases.
\section{Background: Events, Traces, and Event Logs}
\label{sec:background}
For a given set $A$, $A^*$ denotes the set of all sequences over $A$ and $\sigma=\langle a_1,a_2,\dots,a_n\rangle$ a sequence of length $n$; $\langle\rangle$ is the empty sequence and $\sigma_1 \cdot \sigma_2$ is the concatenation of sequences $\sigma_1$ and $\sigma_2$. $\mathit{hd}^k(\sigma)=\langle a_1, a_2, \dots, a_k\rangle$ is the prefix of length $k$ ($0 < k < n$) of sequence $\sigma$. For example, $\mathit{hd}^2(\langle a,b,c,d,e\rangle)=\langle a,b\rangle$.
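The prefix operator translates directly into code; for instance:

```python
def hd(sigma, k):
    """Prefix of length k of sequence sigma (the hd^k operator)."""
    return sigma[:k]
```
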
Let $\mathcal{E}$ be the event universe, i.e., the set of all possible event identifiers, and $\mathcal{T}$ the time domain. We assume that events are characterized by various properties, e.g., an event has a timestamp, corresponds to an activity, is performed by a particular resource, etc. We do not impose a specific set of properties, however, we assume that two of these properties are the timestamp and the activity of an event, i.e., there is a function $\pi_\mathcal{T}\in \mathcal{E}\rightarrow\mathcal{T}$ that assigns timestamps to events, and a function $\pi_\mathcal{A}\in\mathcal{E}\rightarrow\mathcal{A}$ that assigns to each event an activity from a finite set of process activities $\mathcal{A}$. An \emph{event log} is a set of events, each linked to one trace and globally unique, i.e., the same event cannot occur twice in a log. A trace in a log represents the execution of one case.
\begin{definition}[Trace, Event Log]
A \emph{trace} is a finite non-empty sequence of events $\sigma\in\mathcal{E}^*$ such that each event appears only once and time is non-decreasing, i.e., for $1\le i < j \le |\sigma|:\sigma(i)\neq\sigma(j)$ and $\pi_\mathcal{T}(\sigma(i))\le\pi_\mathcal{T}(\sigma(j))$. An \emph{event log} is a set of traces $L\subset\mathcal{E}^*$ such that each event appears at most once in the entire log.\looseness=-1
\end{definition}
\section{Conclusion}
\label{sec:conclusion}
This paper outlined an alarm-based prescriptive process monitoring framework that extends existing predictive process monitoring approaches with the concepts of alarms, interventions, compensations, and mitigation effects.
The framework incorporates a cost model to analyze the tradeoffs between the cost of intervention, the benefit of mitigating or preventing undesired outcomes, and the cost of compensating for unnecessary interventions induced by false alarms. The cost model allows one to estimate the benefits of deploying a prescriptive process monitoring system for the purposes of return on investment analysis.
Additionally, the framework incorporates a technique to optimize the alarm generation mechanism with respect to a given configuration of the cost model and a given event log.
An empirical evaluation on real-life logs showed the benefits of applying this optimization versus a baseline where a fixed likelihood score threshold is used to generate alarms, as considered in previous work in the field.
\section{Evaluation}
\label{sec:evaluation}
In this section, we describe the experimental setup for evaluating the proposed framework and the results of the evaluation. We address the following research questions:
\begin{enumerate}[label=RQ\arabic*,leftmargin=*]
\item Can empirical thresholding find thresholds that consistently lead to a reduction in the average processing cost for different cost model configurations?
\item Does the alarm system consistently yield a benefit over different values of the mitigation effectiveness?
\item Does the alarm system consistently yield a benefit over different values of the cost of compensation?
\end{enumerate}
\subsection{Approaches and Baselines}
We experiment with two different implementations of $\widehat{\mathit{out}}_{L_\mathit{train}}$ by using different well-known classification algorithms, namely, random forest (RF) and gradient boosted trees (GBT). Both algorithms have been shown to be among the top-performing classifiers on a variety of classification tasks~\cite{fernandez2014we,olson2017data}. We employ a single-classifier approach where the features for a given prefix are obtained using the aggregation encoding~\cite{de2016general}, which has been shown to perform better than alternative encodings for event logs~\cite{teinemaa2017outcome}.
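The core idea of the aggregation encoding is that a prefix of any length is mapped to a fixed-length feature vector. A minimal sketch of this idea follows (the function name and the choice of aggregates are illustrative; the actual encoding in~\cite{de2016general} aggregates all event attributes): activity occurrence counts plus simple aggregates of a numeric attribute.

```python
from collections import Counter

def aggregate_encode(prefix, activity_alphabet, numeric_attr="amount"):
    """Map a prefix of arbitrary length to a fixed-length vector:
    per-activity frequencies + min/max/mean/sum of one numeric attribute."""
    counts = Counter(e["activity"] for e in prefix)
    freq = [counts.get(a, 0) for a in activity_alphabet]
    vals = [e.get(numeric_attr, 0.0) for e in prefix]
    if vals:
        agg = [min(vals), max(vals), sum(vals) / len(vals), sum(vals)]
    else:
        agg = [0.0] * 4
    return freq + agg
```

Because the vector length depends only on the activity alphabet and the chosen aggregates, prefixes of different lengths can be fed to a single classifier.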
We apply the TPE optimization procedure for the alarming mechanism to find the optimal threshold $\overline{\tau}$.
We use several fixed thresholds as baselines. First, we compare with the \emph{as-is} situation in which alarms are never raised. Secondly, we compare with the baseline $\tau = 0$, allowing us to compare with the situation where alarms are always raised directly at the start of a case. Finally, we compare with $\tau = 0.5$ enabling the comparison with the cost-insensitive scenario that simply alarms when an undesired outcome is expected. The implementation of the approach and the experimental setup are openly available online.\footnote{\url{https://taxxer.github.io/AlarmBasedProcessPrediction/}}
\subsection{Datasets}
For each event log, we use all available data attributes as input to the classifier. Additionally, we extract the \emph{event number}, i.e., the index of the event in the given case, the \emph{hour, weekday, month, time since case start}, and \emph{time since last event}.
Infrequent values of categorical attributes (occurring less than 10 times in the log) are replaced with value ``other'', to avoid exploding the dimensionality. Missing attributes are imputed with the respective most recent (preceding) value of that attribute in the same trace when available, otherwise with zero.
Traces are cut before the labeling of the case becomes trivially known and are truncated at the 90th percentile of all case lengths to avoid bias from very long traces.
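The preprocessing steps above can be sketched as follows (a pure-Python sketch; the actual pipeline may rely on a dataframe library): rare categorical values are masked, and missing attribute values are filled with the most recent preceding value in the same trace.

```python
from collections import Counter

def mask_rare(values, min_count=10, other="other"):
    """Replace categorical values occurring fewer than min_count times
    in the log with the value 'other'."""
    counts = Counter(values)
    return [v if counts[v] >= min_count else other for v in values]

def impute_forward(trace_values, default=0):
    """Fill missing (None) attribute values with the most recent
    preceding value in the same trace, falling back to a default."""
    out, last = [], default
    for v in trace_values:
        if v is not None:
            last = v
        out.append(last)
    return out
```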
We use the following datasets to evaluate the alarm system:
\begin{description}
\item[BPIC2017.] This log records execution traces from a loan application process in a Dutch financial institution.\footnote{\url{https://doi.org/10.4121/uuid:5f3067df-f10b-45da-b98b-86ae4c7a310b}}
The event log was split into two sub-logs, denoted with \emph{bpic2017\_refused} and \emph{bpic2017\_cancelled}. In the first one, the undesired cases refer to the process executions in which the applicant has refused the final offer(s) by the financial institution and, in the second one, the undesired cases consist of those cases where the financial institution has cancelled the offer(s).
\item[Road traffic fines.] This event log originates from the Italian local police.\footnote{\url{https://doi.org/10.4121/uuid:270fd440-1057-4fb9-89a9-b699b47990f5}}
The desired outcome is that a fine is fully paid, while in the undesired cases the fine needs to be sent for credit collection.
\item[Unemployment.] This event log corresponds to the \emph{Unemployment Benefits} scenario (Box~\ref{ex:UWV} in Section~\ref{sec:costs_model}).
The undesired outcome is that a resident will receive more benefits than entitled, causing the need for a reclamation. Privacy constraints prevent us from making this event log publicly available.
\end{description}
Table~\ref{table:dataset_stats} describes the characteristics of the event logs used. The classes are well balanced in \emph{bpic2017\_cancelled} and \emph{traffic\_fines}, while the undesired outcome is more rare in case of \emph{unemployment} and \emph{bpic2017\_refused}. In \emph{traffic\_fines}, the traces are very short, while in the other datasets the traces are longer.\looseness=-1
\begin{table}[t]
\vspace{-0.2cm}
\caption{Dataset statistics}
\label{table:dataset_stats}
\begin{center}
\resizebox{0.8\textwidth}{!}{
\begin{tabular}{@{}lcccccc@{}}
\toprule
& \# & class & min & med & (trunc.) max & \# \\
dataset name & traces & ratio & length & length & length & events \\ \midrule
bpic2017\_refused & 31\,413 & 0.12 & 10 & 35 & 60 & 1\,153\,398 \\
bpic2017\_cancelled & 31\,413 & 0.47 & 10 & 35 & 60 & 1\,153\,398 \\
traffic\_fines & 129\,615 & 0.46 & 2 & 4 & 5 & 445\,959 \\
unemployment & 34\,627 & 0.2 & 1 & 21 & 79 & 1\,010\,450 \\
\bottomrule
\end{tabular}
}
\end{center}
\vspace{-0.2cm}
\end{table}
\subsection{Experimental Setup}
We apply a temporal split, i.e., we order the cases by their start time and from the first 80\% of the cases randomly select 80\% (i.e., 64\% of the total) for $L_\mathit{train}$ and 20\% (i.e., 16\% of the total) for $L_\mathit{thres}$, and use the remaining 20\% as the test set $L_\mathit{test}$. The events in cases in $L_\mathit{train}$ and $L_\mathit{thres}$ that overlap in time with $L_\mathit{test}$ are discarded in order to not use any information that would not be available yet in a real setting. We use TPE with 3-fold cross validation on $L_\mathit{train}$ to optimize the hyperparameters for RF and GBT.
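The temporal split can be sketched as follows (a minimal sketch; case objects, the start-time accessor, and the seed are illustrative). The first 80\% of cases by start time are randomly divided into 64\%/16\% of the total for $L_\mathit{train}$ and $L_\mathit{thres}$; the latest 20\% form $L_\mathit{test}$.

```python
import random

def temporal_split(cases, start_time_of, seed=42):
    """80/20 temporal split by case start time; the earlier 80% is then
    randomly divided into 64% train / 16% threshold-tuning (of the total)."""
    ordered = sorted(cases, key=start_time_of)
    cut = int(len(ordered) * 0.8)
    early, test = ordered[:cut], ordered[cut:]
    early = early[:]                 # copy before shuffling
    random.Random(seed).shuffle(early)
    k = int(len(early) * 0.8)
    return early[:k], early[k:], test  # L_train, L_thres, L_test
```

Note that this sketch omits the additional step of discarding events in training cases that overlap in time with the test period.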
We optimize the alarming threshold $\overline{\tau}$ by building the final classifiers using all the traces in $L_\mathit{train}$ and search for $\overline{\tau}$ using $L_\mathit{thres}$.
It is common in cost-sensitive learning to apply calibration techniques to the resulting classifier in order to obtain accurate probability estimates and, therefore, more accurate estimates of the expected cost~\cite{zadrozny2001learning}. However, we found that calibrating the classifier using Platt scaling~\cite{platt1999probabilistic} does not consistently improve the estimated likelihood of undesired outcome on the four event logs, and frequently even leads to less accurate likelihood estimates. Therefore, we decided to skip the calibration step. Moreover, since we use empirical thresholding, well-calibrated probabilities are not necessary; it is sufficient that the likelihood scores are reasonably ordered.
Table~\ref{table:cost_models_eval} shows the configurations of the cost model that we explore in the evaluation. To answer RQ1, we vary the ratio between $c_\mathit{out}(\sigma,L)$ and $c_\mathit{in}(k,\sigma,L)$ (keeping $c_\mathit{com}(\sigma,L)$ and $\mathit{eff}(k,\sigma,L)$ unchanged). To answer RQ2, we vary both $\mathit{eff}(k,\sigma,L)$ and the ratio between $c_\mathit{out}(\sigma,L)$ and $c_\mathit{in}(k,\sigma,L)$. To answer RQ3, we vary two ratios: 1) between $c_\mathit{out}(\sigma,L)$ and $c_\mathit{in}(k,\sigma,L)$ and 2) between $c_\mathit{in}(k,\sigma,L)$ and $c_\mathit{com}(\sigma,L)$.
\begin{table}[t]
\vspace{-0.2cm}
\caption{Cost model configurations}
\label{table:cost_models_eval}
\begin{center}
\resizebox{1\textwidth}{!}{
\begin{tabular}{@{}lcccc@{}}
\toprule
& $c_\mathit{out}(\sigma,L)$ & $c_\mathit{in}(k,\sigma,L)$ & $c_\mathit{com}(\sigma,L)$ & $\mathit{eff}(k,\sigma,L)$ \\ \midrule
RQ1 & $\{1, 2, 3, 5, 10, 20\}$ & $1$ & $0$ & $1 - k / |\sigma|$ \\
RQ2 & $\{1, 2, 3, 5, 10, 20\}$ & $1$ & $0$ & $\{0, 0.1, 0.2, \ldots, 1\}$ \\
RQ3 & $\{1, 2, 3, 5, 10, 20\}$ & $1$ & $\{0, 1/20, 1/10, 1/5, 1/2, 1, 2, 5, 10, 20\}$ & $1 - k / |\sigma|$ \\
\bottomrule
\end{tabular}
}
\end{center}
\vspace{-0.2cm}
\end{table}
We measure the average processing cost per case in $L_\mathit{test}$, and aim at minimizing this cost.
Additionally, we measure the \emph{benefit} of the alarm system, i.e., the reduction in the average processing cost of a case when using the alarm system compared to the average processing cost when not using it.
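The two evaluation measures can be sketched as follows (constant cost components, as in the ROI analysis of Section~\ref{sec:roi}; function names are illustrative). A case is represented by its true outcome and the likelihood score at which the alarming decision is taken.

```python
def case_cost(undesired, alarmed, c_in, c_out, c_com, eff):
    """Per-case cost with constant cost components."""
    if alarmed:
        return c_in + (1 - eff) * c_out if undesired else c_in + c_com
    return c_out if undesired else 0.0

def avg_cost(cases, threshold, c_in, c_out, c_com, eff):
    """Average processing cost when alarming whenever the predicted
    likelihood of an undesired outcome exceeds the threshold."""
    total = sum(case_cost(und, score > threshold, c_in, c_out, c_com, eff)
                for und, score in cases)
    return total / len(cases)

def benefit(cases, threshold, c_in, c_out, c_com, eff):
    """Reduction in average cost versus never alarming (threshold > 1)."""
    return (avg_cost(cases, 1.1, c_in, c_out, c_com, eff)
            - avg_cost(cases, threshold, c_in, c_out, c_com, eff))
```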
\subsection{Results}
\figurename~\ref{fig:results_ratios} shows the average cost per case when increasing the ratio of $c_\mathit{out}(\sigma,L)$ and $c_\mathit{in}(k,\sigma,L)$ from left to right. We only present the results obtained with GBT as we found it to slightly outperform RF. When the ratio between these two costs is balanced (i.e., 1:1), the minimal cost is obtained by never alarming. This is in agreement with the ROI analysis, where we found $\mathit{eff}c_\mathit{out}>c_\mathit{in}$ to be a necessary condition for having an advantage from an alarm system.
When $c_\mathit{out} \gg c_\mathit{in}$, the best strategy is to always alarm. When $c_\mathit{out}$ is only slightly higher than $c_\mathit{in}$, the best strategy is to alarm selectively based on $\widehat{\mathit{out}}$.
We found that the optimized $\overline{\tau}$ almost always outperforms the baselines; the only exception is the 2:1 ratio for \emph{traffic\_fines}, where never alarming is slightly better.
\begin{figure}[t]
\vspace{-0.2cm}
\centering
\includegraphics[width=1\textwidth]{results_ratios2}
\caption{Cost over different ratios of $c_\mathit{out}(\sigma,L)$ and $c_\mathit{in}(k,\sigma,L)$ (GBT)}
\label{fig:results_ratios}
\vspace{-0.2cm}
\end{figure}
In \figurename~\ref{fig:results_thresholds}, the average cost per case is plotted against different (fixed) thresholds. The optimized threshold is marked with a red cross and each line represents one particular cost ratio.
We observe that, while the optimized threshold generally obtains minimal costs, there sometimes exist multiple optimal thresholds for a given cost model configuration. For instance, in the case of the 5:1 ratio in \emph{bpic2017\_cancelled}, all thresholds between 0 and 0.4 are cost-wise equivalent. We conclude that the empirical thresholding approach consistently finds a threshold that yields the lowest cost in a given event log and cost model configuration (cf.~RQ1).
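Empirical thresholding can be illustrated with a simple grid search over candidate thresholds on the tuning set (the paper uses TPE for this optimization; plain grid search is shown here for clarity, and the function names are ours):

```python
def empirical_threshold(scored_cases, cost_of, grid=None):
    """Pick the alarming threshold minimizing average cost on a tuning set.
    scored_cases: list of (undesired: bool, likelihood score) pairs.
    cost_of(undesired, alarmed): per-case cost under a given decision."""
    grid = grid if grid is not None else [i / 100 for i in range(101)]
    def avg(tau):
        return sum(cost_of(und, s > tau)
                   for und, s in scored_cases) / len(scored_cases)
    return min(grid, key=avg)
```

As observed above, several thresholds can be cost-wise equivalent; this search simply returns one of the minimizers.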
\begin{figure}[t]
\vspace{-0.2cm}
\centering
\includegraphics[width=1\textwidth]{results_thresholds_selected2}
\caption{Cost over different thresholds ($\overline{\tau}$ is marked with a red cross)}
\label{fig:results_thresholds}
\vspace{-0.2cm}
\end{figure}
\figurename~\ref{fig:results_effectiveness_const_lgbm_selected} shows the benefit of having an alarm system compared to not having it for different (constant) mitigation effectiveness values. As the results are similar for logs with similar class ratios, hereinafter, we only show the results for one log from each of the groups: \emph{bpic2017\_cancelled} (balanced classes) and \emph{unemployment} (imbalanced classes). As expected, the benefit increases both with higher $\mathit{eff}(k,\sigma,L)$ and with higher $c_\mathit{out}(\sigma,L):c_\mathit{in}(k,\sigma,L)$ ratio. For \emph{bpic2017\_cancelled}, the alarm system yields a benefit when $c_\mathit{out}(\sigma,L):c_\mathit{in}(k,\sigma,L)$ is high and $\mathit{eff}(k,\sigma,L) > 0$. Also, a benefit is always obtained when $\mathit{eff}(k,\sigma,L) > 0.5$ and $c_\mathit{out}(\sigma,L) > c_\mathit{in}(k,\sigma,L)$. In the case of \emph{unemployment}, the average benefits are smaller, since there are fewer cases with undesired outcome and, therefore, the number of cases where $c_\mathit{out}$ can be prevented by alarming is lower. In this case, a benefit is obtained when both $\mathit{eff}(k,\sigma,L)$ and $c_\mathit{out}(\sigma,L):c_\mathit{in}(k,\sigma,L)$ are high.
We conducted analogous experiments with linear effectiveness decay, varying the maximum possible effectiveness (at the start of the case), which confirmed that the observed patterns remain the same. We have empirically confirmed our theoretical finding (Section~\ref{sec:roi}) that $\mathit{eff}c_\mathit{out} > c_\mathit{in}$ is a necessary condition to obtain a benefit from using an alarm system, and have shown that a benefit is in practice also obtained under this condition when an optimized alarming threshold is used (cf.~RQ2).
\begin{figure}[t]
\vspace{-0.2cm}
\hspace{-0.25cm}
\subfloat[Varying $\mathit{eff}(k,\sigma,L)$\label{fig:results_effectiveness_const_lgbm_selected}]{\includegraphics[width=0.515\linewidth]{results_effectiveness_const_lgbm_selected_for_subfig}}
\hspace{0.05cm}
\subfloat[Varying $c_\mathit{com}(\sigma, L)$\label{fig:results_compensation_lgbm_selected}]{\includegraphics[width=0.5\linewidth]{results_compensation_lgbm_selected2_for_subfig}}
\vspace{0.505\baselineskip}
\caption{Benefit with different cost model configurations}
\label{fig:results_heatmaps}
\vspace{-0.2cm}
\end{figure}
Similarly, the benefit of the alarm system is plotted in \figurename~\ref{fig:results_compensation_lgbm_selected} across different ratios of $c_\mathit{out}(\sigma,L):c_\mathit{in}(k,\sigma,L)$ and $c_\mathit{in}(k,\sigma,L):c_\mathit{com}(\sigma,L)$. We observe that when $c_\mathit{com}(\sigma,L)$ is high, the benefit decreases due to false alarms. For \emph{bpic2017\_cancelled}, a benefit is obtained almost always, except when $c_\mathit{out}(\sigma,L) : c_\mathit{in}(k,\sigma,L)$ is low (e.g., 2:1) and $c_\mathit{com}(\sigma,L)$ is high (i.e., higher than $c_\mathit{in}(k,\sigma,L)$). For \emph{unemployment}, a benefit is obtained with fewer cost model configurations, e.g., when $c_\mathit{out}(\sigma,L):c_\mathit{in}(k,\sigma,L) = 5:1$ and $c_\mathit{com}(\sigma,L)$ is smaller than $c_\mathit{in}(k,\sigma,L)$.
We conducted analogous experiments with linearly increasing cost of intervention, varying the maximum possible cost (at the end of the case), which confirmed that the patterns described above remain the same. To answer RQ3, we have empirically confirmed that the alarm system achieves a benefit as discussed in Section~\ref{sec:roi} in case the cost of the undesired outcome is sufficiently higher than the cost of the intervention and/or the cost of the intervention is sufficiently higher than the cost of compensation.
\subsection{Return on Investment Analysis}
\label{sec:roi}
In this section, we provide an analysis and guidelines that suggest when it is valuable to invest in developing an alarm system, namely, when the return on investment (ROI) is positive. To this aim, we need to compare the case of a business process execution supported by an alarm system with the \emph{as-is} situation where the business process is executed without this support. For this analysis, we consider a set of cases recorded in an event log $L$, where no interventions were done, and a cost model $cm=(c_\mathit{in},c_\mathit{out},c_\mathit{com},\mathit{eff})$.
The \emph{as-is} situation implies that no interventions are done in any of the cases $\sigma \in L$ that lead to an undesired outcome, yielding a cost $c_\mathit{out}(\sigma,L)$. When applied to the entire log $L$, the cost is $\mathit{cost}_{as\textit{-}is}(L)=\sum_{\sigma \in L \text{ s.t. } \mathit{out}(\sigma)} c_\mathit{out}(\sigma,L)$. Instead, when a certain system $\mathit{alarm}$ is in effect, the cost is $\mathit{cost}_{\mathit{alarm}}(L)=\sum_{\sigma \in L} \mathit{cost}(\sigma,L,\mathit{cm},\mathit{alarm})$ (cf.\
Defs.~\ref{def:costs}, \ref{def:alarm}).
With this setting, the return on investment of the system $\mathit{alarm}$ is $\mathit{ROI}(L,\mathit{cm},\mathit{alarm})=\mathit{cost}_{as\textit{-}is}(L)-\mathit{cost}_{\mathit{alarm}}(L)$, which must be positive to make deploying the system worthwhile.
The question that remains is: \emph{how does the ROI depend on the cost model and the alarm system?}
For the sake of simplicity, in this analysis we assume that every component of the cost model is constant. Furthermore, the initial investment costs are not considered because we assume that the system has already been fully operational for a sufficiently long time, so that the initial costs have been amortized. These assumptions yield the following case cost:
$ \mathit{cost}(\sigma,L,cm,\mathit{alarm})=
\begin{cases}
c_\mathit{in} + (1-\mathit{eff})c_\mathit{out}& \mathit{out}(\sigma)\land\mathcal{I}(\sigma,\mathit{alarm})>0,\\
c_\mathit{in} + c_\mathit{com} & \neg\mathit{out}(\sigma)\land\mathcal{I}(\sigma,\mathit{alarm})>0,\\
c_\mathit{out} & \mathit{out}(\sigma)\land\mathcal{I}(\sigma,\mathit{alarm})=0,\\
0 & \text{otherwise}
\end{cases}$
where $c_\mathit{in}$, $c_\mathit{out}$, $c_\mathit{com}$, and $\mathit{eff}$ are constants. In order for the ROI to be positive, it must hold that
$\mathit{cost}_{as\textit{-}is}(L) > \mathit{cost}_{\mathit{alarm}}(L)$, that is:
\[
\begin{footnotesize}
\begin{array}{l}
|L_{und}|\cdot c_\mathit{out} > |L_{und\&al}|(c_\mathit{in} + (1-\mathit{eff})c_\mathit{out})+|L_{des\&al}|(c_\mathit{in} + c_\mathit{com})+|L_{und\&nal}| \cdot c_\mathit{out}
\end{array}
\end{footnotesize}
\]
where $L_{und\&al}$, $L_{des\&al}$, $L_{und\&nal}$ respectively consist of the traces in $L$ related to the cases with an \underline{und}esired outcome that would be \underline{al}armed, with a \underline{des}ired outcome that would still be \underline{al}armed, with an \underline{und}esired outcome that would \underline{n}ot be \underline{al}armed; also,
$L_{und}= L_{und\&al} \cup L_{und\&nal}$.
After simplification:
\begin{equation}\label{equ:ROIsimpl}
|L_{und\&al}|(\mathit{eff}c_\mathit{out}-c_\mathit{in} ) >|L_{des\&al}|(c_\mathit{in} + c_\mathit{com}).
\end{equation}
Because the right-hand side of Eq.~\ref{equ:ROIsimpl} is non-negative, it follows as a corollary that $\mathit{eff}c_\mathit{out} > c_\mathit{in}$ is a necessary condition for return on investment. In other words, it must be possible to avoid a cost that is higher than the cost of doing the intervention. This provides a validation of our framework: it complies with the \emph{reasonableness condition} in the cost-sensitive learning literature~\cite{elkan2001foundations}, which states that the cost of labeling an example incorrectly should always be greater than the cost of labeling it correctly.
Eq.~\ref{equ:ROIsimpl} also illustrates that the policy of always alarming does not yield a positive ROI, unless the number of cases with undesired outcome and the cost of the undesired outcome are sufficiently high. When the number of cases with an undesired outcome is small (e.g., the unemployment benefits and the financial institution scenarios described in Boxes~\ref{ex:UWV} and~\ref{ex:bank}) and, at the same time, the cost of this undesired outcome is small, the left-hand side of Eq.~\ref{equ:ROIsimpl} is negligible, thus reducing the condition to $c_\mathit{in} + c_\mathit{com}<0$, which can never hold.
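Eq.~\ref{equ:ROIsimpl} can be checked numerically. The sketch below evaluates both sides for illustrative counts of alarmed cases and constant cost components (all numbers are hypothetical):

```python
def roi_positive(n_und_al, n_des_al, c_in, c_out, c_com, eff):
    """Eq. (ROIsimpl): |L_und&al| * (eff*c_out - c_in)
                       > |L_des&al| * (c_in + c_com)."""
    return n_und_al * (eff * c_out - c_in) > n_des_al * (c_in + c_com)
```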
So far we have assumed, for the sake of simplicity, that costs and mitigation effectiveness are constant, similarly to traditional cost-sensitive learning.
However, the novelty of our formulation lies in the fact that costs are functions that depend on the time when an intervention is made. As a result, the reasonableness of the cost matrix is not fixed, but potentially changes over time.
Still, variable costs do not invalidate the ROI analysis.
In fact, in order for the ROI to be positive, it is sufficient that the cost model is reasonable for a certain time period; otherwise, the alarm system would never raise alarms because of the cost model. Clearly, the longer the reasonable-cost period is, the higher the ROI.
\section{Prescriptive Process Monitoring Framework}
\label{sec:framework}
In this section, we introduce a cost model for alarm-based prescriptive process monitoring and illustrate this model using three scenarios (Section~\ref{sec:costs_model}). We then formalize the concept of alarm system (Section~\ref{sec:costInstances}) and discuss conditions under which an alarm system has a positive return on investment (Section~\ref{sec:roi}).
\subsection{Concepts and Cost Model}
\label{sec:costs_model}
An alarm-based prescriptive process monitoring system (\emph{alarm system} for short) is a monitoring system that raises an alarm in relation to a running case of a business process, in order to indicate that the case is likely to lead to an undesired outcome. These alarms are handled by process workers who intervene by performing an action (e.g., calling a customer or blocking a credit card) in order to prevent or mitigate the undesired outcome. These actions may have a cost, which we call \emph{cost of intervention}. If, instead, the case ends in an undesired outcome, a cost is incurred, which we call \emph{cost of undesired outcome}.
As an example, consider a municipality that needs to collect city taxes. If the inhabitants do not pay their taxes on time, the municipality may run into cash flow issues. Accordingly, in case of an unpaid tax debt (undesired outcome), the municipality may decide to outsource the debt collection to an external collection agency, for which it has to pay a recovery fee. These fees constitute the cost of the undesired outcome.
In light of their characteristics and past payment history, certain inhabitants may have a higher risk of missing the payment deadline. Therefore, sending a reminder letter to these high-risk inhabitants may increase the likelihood of receiving the payment on time. However, such an intervention comes with costs related to preparing the letter by an employee (proportional to the employee's hourly salary rate) and the postal costs for sending the letter.
In certain scenarios, the cost of an intervention may increase over time, acknowledging the importance of alarming as early as possible. For instance, in a railway maintenance process, if an alarm about a possible railway disruption is raised early, the problem could be solved with regular maintenance procedures. Conversely, if the alarm is raised when the need for maintenance has become urgent, the maintenance provider could be required to allocate more resources in order to solve the problem on time.
When an alarm is raised, there is a certain probability, but no certainty, that the case will reach an undesired outcome if no intervention is made. If the case does not conclude with an undesired outcome even without interventions, doing the intervention causes unnecessary costs (e.g., a company could lose customers and/or opportunities). The cost related to such unnecessary interventions is referred to as \emph{cost of compensation}.
For instance, financial institutions may block credit card payments when they suspect that a card was cloned. However, in some cases, it may happen that the suspicion was unfounded and that the payment was legitimate. If these cases become too frequent, the reputation of the financial institution could be hampered.\looseness=-1
The purpose of alarming is to avoid an undesired outcome. However, in several scenarios, it is not possible to
fully prevent the cost of the undesired outcome, while the intervention could still help to mitigate it. Based on this rationale, we introduce the concept of \emph{mitigation effectiveness} of an intervention, reflecting the proportion of the cost of an undesired outcome that can be avoided by carrying out the intervention. Oftentimes, the mitigation effectiveness decreases with time, i.e., the earlier the intervention takes place, the higher is the proportion of costs that can be avoided.
Consider, for instance, the process of paying unemployment benefits by a social security institution.
In this case, the aim of an alarm system could be to notify the institution about citizens who might be receiving unentitled benefits. Since the benefits that have already been issued are unlikely to be recollected, the cost of the undesired outcome cannot be avoided completely. Therefore, it is important to raise the alarm as early as possible, in order to effectively mitigate the cost of the undesired outcome.
An alarm system is intended to monitor cases continuously. However, since continuous monitoring is impractical, we assume that cases are monitored after each executed event and, therefore, alarms can only be raised after an event has occurred.
In the remainder, each case is identified by a trace $\sigma$ that is (eventually) recorded in an event log.
Definition~\ref{def:costs} formalizes the costs defined above. Since costs may depend on the position in the case in which the alarm is raised and/or on other cases being executed, we define the costs as functions over the number of already executed events and over the entire set of cases under execution.
\begin{definition}[Alarm-based Cost Model]
\label{def:costs}
An \emph{alarm-based cost model} is a tuple $(c_\mathit{in},c_\mathit{out},c_\mathit{com},\mathit{eff})$ consisting of:
\begin{itemize}[noitemsep,topsep=0pt]
\item a function $c_\mathit{in}\in\mathbb{N}\times\mathcal{E}^*\times 2^{\mathcal{E}^*}\rightarrow \mathbb{R}^+_0$ modeling the \emph{cost of \underline{in}tervention}:
given a trace $\sigma$ belonging to an event log $L$, $c_\mathit{in}(k,\sigma,L)$ indicates the cost of an intervention in $\sigma$ when the intervention takes place after the $k$-th event;
\item a function $c_\mathit{out}\in\mathcal{E}^*\times 2^{\mathcal{E}^*}\rightarrow \mathbb{R}^+_0$ modeling the \emph{cost of undesired \underline{out}come};
\item a function $c_\mathit{com}\in\mathcal{E}^*\times 2^{\mathcal{E}^*}\rightarrow \mathbb{R}^+_0$ modeling the \emph{cost of \underline{com}pensation};
\item a function $\mathit{eff} \in\mathbb{N}\times\mathcal{E}^*\times 2^{\mathcal{E}^*}\rightarrow[0,1]$ modeling the \emph{mitigation \underline{eff}ectiveness} of an intervention:
given a trace $\sigma$ belonging to an event log $L$, $\mathit{eff}(k,\sigma,L)$ indicates the mitigation effectiveness of an intervention in $\sigma$ when the intervention takes place after the $k$-th event.
\end{itemize}
\end{definition}
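Definition~\ref{def:costs} can be mirrored directly as a tuple of functions. The sketch below follows the signatures of the definition (the concrete bodies are illustrative placeholders); the linear effectiveness decay $\mathit{eff}(k,\sigma,L) = 1 - k/|\sigma|$ matches the RQ1 configuration used in the evaluation.

```python
from collections import namedtuple

# (c_in, c_out, c_com, eff), mirroring the alarm-based cost model tuple.
AlarmCostModel = namedtuple("AlarmCostModel", ["c_in", "c_out", "c_com", "eff"])

# A constant-cost instantiation with linearly decaying effectiveness:
# alarming earlier in the case mitigates a larger share of c_out.
constant_cm = AlarmCostModel(
    c_in=lambda k, sigma, log: 1.0,
    c_out=lambda sigma, log: 10.0,
    c_com=lambda sigma, log: 0.0,
    eff=lambda k, sigma, log: 1.0 - k / len(sigma),
)
```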
\begin{scenario}{Unemployment Benefits}\label{ex:UWV}
In several countries, a social security institution is responsible for the execution of a number of employee-related insurances, such as unemployment benefits.
When residents (hereafter customers) become unemployed, they are usually entitled to monthly monetary benefits for a certain period of time.
These payments are stopped when the customer reports that he/she has found a new job. Unfortunately, several customers omit to inform the institution about finding a job and, thus, keep receiving benefits they are not entitled to. Those customers are expected to return the amount of benefits that they have received unlawfully. However, in practice, this rarely happens and the overpaid amount is lost to the institution.
In light of the above, the social security institution would benefit from an alarm system that would inform about customers who are likely to be receiving unentitled benefits.
Let $\mathit{unt}(\sigma)$ denote the amount of unentitled benefits received in a case corresponding to trace $\sigma$.
Based on discussions with the stakeholders of a real social security institution, we designed the following cost model instantiation for such an alarm system.
\begin{description}
\item[Cost of intervention.] For the intervention, an employee needs to check if the customer is indeed receiving unentitled benefits and, if so, fill in the forms for stopping the payments. Let $S$ be the employee's average salary rate per time unit; let $i_s$ and $i_f$ denote the positions of the events in $\sigma$ when the employee started working on the intervention and finished it, respectively. The cost of an intervention can be modeled as: $c_\mathit{in}(k,\sigma,L)=(\pi_\mathcal{T}(\sigma(i_f))-\pi_\mathcal{T}(\sigma(i_s)))\cdot S$.
\item[Cost of undesired outcome.] The total amount of unentitled benefits that the customer would obtain without stopping the payments, i.e.,
$c_\mathit{out}(\sigma,L)=\mathit{unt}(\sigma)$.
\item[Cost of compensation.] The social security institution works in a situation of monopoly, which means that the customer cannot be lost because of moving to a competitor, i.e., there is no cost of compensation: $c_\mathit{com}(\sigma,L)=0$.
\item[Mitigation effectiveness.] The proportion of unentitled benefits that will not be paid thanks to the intervention: $\mathit{eff}(k,\sigma,L)=\frac{\mathit{unt}(\sigma)-\mathit{unt}(\mathit{hd}^k(\sigma))}{\mathit{unt}(\sigma)}$. Note that this function is not employed if there is no undesired outcome (i.e., if $\mathit{unt}(\sigma)=0$).
\end{description}
\end{scenario}
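The mitigation effectiveness of this scenario can be sketched as follows (the $\mathit{unt}$ function over prefixes is assumed given; here it simply sums a hypothetical per-event \texttt{unentitled} amount, and the log argument of $\mathit{eff}$ is omitted for brevity):

```python
def unt(prefix):
    """Unentitled benefits accumulated over a (prefix of a) trace."""
    return sum(e.get("unentitled", 0.0) for e in prefix)

def eff_unemployment(k, sigma):
    """Proportion of unentitled benefits still avoidable when intervening
    after the k-th event: (unt(sigma) - unt(hd^k(sigma))) / unt(sigma).
    Only employed when the outcome is undesired, i.e., unt(sigma) > 0."""
    total = unt(sigma)
    return (total - unt(sigma[:k])) / total
```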
\begin{scenario}{Financial Institution}\label{ex:bank}
Suppose that the customers of a financial institution use their credit cards to make payments online. Each such transaction is associated with a risk that the transaction is made through a cloned card. In this scenario, an alarm system is intended to determine whether the credit card needs to be blocked due to a high risk of being cloned. However, if the card turns out to be legitimate, blocking it causes discomfort to the customer, who may consequently opt to switch to a different financial institution.
Let $\sigma$ be the trace of credit card transactions for a customer and let $\mathit{value}(\sigma)$ be the total amount of money related to malicious transactions in $\sigma$. The following is a possible cost model instantiation for this scenario.
\begin{description}
\item[Cost of intervention.] The card is automatically blocked by the system and, therefore, the intervention costs are limited to \textsc{Post\_Cost}, i.e., to the costs for sending a new credit card to the customer by mail: $c_\mathit{in}(k,\sigma,L)=$ \textsc{Post\_Cost}.
\item[Cost of undesired outcome.] The total amount of money related to malicious transactions that the bank would need to reimburse to the legitimate customer:
$c_\mathit{out}(\sigma,L)=\mathit{value}(\sigma)$.
\item[Cost of compensation.] Denoting the asset value of a customer (consisting of the amount of the investment portfolio, the account balance, etc.) with $\mathit{asset}(\sigma)$ and supposing that a fraction $p$ (i.e., $p \in [0,1]$) of the customers would switch to a different institution, the cost of compensation can be estimated as the value of the lost asset (the customer), multiplied by $p$: $c_\mathit{com}=p\cdot\mathit{asset}(\sigma)$.
\item[Mitigation effectiveness.] The proportion of the total amount of money related to malicious transactions that does not need to be reimbursed by blocking the credit card after $k$ events have been executed: $\mathit{eff}(k,\sigma,L)=\frac{\mathit{value}(\sigma)-\mathit{value}(\mathit{hd}^k(\sigma))}{\mathit{value}(\sigma)}$.
\end{description}
\end{scenario}
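For concreteness, the cost model above can be sketched in code. This is a minimal illustration, not part of the framework itself: the event representation, \texttt{POST\_COST}, the switching fraction and the default asset value are all hypothetical assumptions.

```python
# Hypothetical instantiation of the financial-institution cost model.
# A trace is a list of events; each event is a dict with an "amount"
# and a "malicious" flag. POST_COST, P_SWITCH and the default asset
# value are illustrative assumptions.

POST_COST = 10.0  # cost of mailing a new credit card to the customer
P_SWITCH = 0.05   # fraction p of customers who would switch institution

def value(trace):
    """Total amount of money related to malicious transactions."""
    return sum(e["amount"] for e in trace if e["malicious"])

def c_in(k, trace, log=None):
    """Cost of intervention: the card is blocked automatically."""
    return POST_COST

def c_out(trace, log=None):
    """Cost of undesired outcome: malicious amounts to reimburse."""
    return value(trace)

def c_com(trace, log=None, asset=1000.0):
    """Cost of compensation: expected value of the lost asset."""
    return P_SWITCH * asset

def eff(k, trace, log=None):
    """Proportion of malicious value avoided by blocking after k events."""
    total = value(trace)
    if total == 0:
        return 0.0
    return (total - value(trace[:k])) / total
```

Note that, as in the definition, $\mathit{eff}$ compares the malicious value accumulated in the prefix $\mathit{hd}^k(\sigma)$ with the total over the complete trace.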
\begin{scenario}{Railway Maintenance}\label{ex:railway}
In a process for railway maintenance, an alarm should be raised when there is a risk that the railway may break down within a relatively short time range. Railway breakdowns can cause severe disruptions in train transportation (i.e., trains could be canceled or delayed), thereby causing losses due to ticket reimbursements to travelers.
\begin{description}
\item[Cost of intervention.] The cost of an intervention increases with time because the more urgent the disruption is, the more resources need to be allocated for handling it. We assume that the cost is at its minimum $m$ at the beginning of a trace $\sigma$ (with timestamps measured from the start of the case) and grows exponentially with time: $c_\mathit{in}(k,\sigma,L)=m \cdot \exp(\beta\,\pi_\mathcal{T}(\sigma(k)))$ for some $\beta > 0$.
\item[Cost of undesired outcome.] Let $P$ be the average total price of tickets sold per time unit; let $i_d(\sigma)$ and $i_m(\sigma)$ be the positions of the events in $\sigma$ when the disruption took place and was resolved, respectively. The cost of the undesired outcome can be calculated as $P$ multiplied by the length of the timeframe when the railway service was disrupted: $c_\mathit{out}(\sigma,L)=(\pi_\mathcal{T}(\sigma(i_m))-\pi_\mathcal{T}(\sigma(i_d)))\cdot P$.
\item[Cost of compensation.] Assuming that performing (unnecessary) maintenance actions does not cause inconveniences to the customers, no cost of compensation is present: $c_\mathit{com}(\sigma,L)=0$.
\item[Mitigation effectiveness.] A timely intervention fully avoids the undesired outcome: $\mathit{eff}(k,\sigma,L)=1$ for any $k \in [1,|\sigma|]$.
\end{description}
\end{scenario}
To illustrate the versatility of the above cost model, we discuss three use cases for alarm systems and their corresponding cost model configurations. The first scenario, in Box 1, refers to the provision of unemployment benefits. The cost model for this scenario is based on several discussions with the stakeholders of a real social security institution~\cite{D_dL_M@COOP17}.
The second scenario, in Box 2, refers to the detection of malicious credit card payments in a financial institution. Unlike in the previous scenario, in this case there is a risk of incurring a compensation cost: due to the inconvenience caused by blocking their credit card, customers may switch to competitors.
Box 3 refers to the process of predictive maintenance in railway services.
This scenario is different from the previous ones because, in this case, the cost of an intervention increases over time.
\subsection{Alarm-Based Prescriptive Process Monitoring System}
\label{sec:costInstances}
An alarm-based prescriptive process monitoring system is driven by the outcome of the cases. Hereon, the outcome of the cases is represented by a function $\mathit{out} \in \mathcal{E}^*\rightarrow \{\textsf{true},\textsf{false}\}$: given a case identified by a trace $\sigma$, if the case has an undesired outcome, $\mathit{out}(\sigma)=\textsf{true}$; otherwise, $\mathit{out}(\sigma)=\textsf{false}$. In reality, during the execution of a case, its outcome is not yet known and needs to be estimated based on past executions that are recorded in an event log $L \subset \mathcal{E}^*$. The outcome estimator is a function $\widehat{out}_L\in\mathcal{E}^*\rightarrow[0,1]$ predicting the likelihood $\widehat{out}_L(\sigma')$ that the outcome of a case that starts with prefix $\sigma'$ is undesired. We can define an alarm system as a function that returns true or false depending on whether an alarm is raised based on the predicted outcome or not.
\begin{definition}[Alarm-Based Prescriptive Process Monitoring System]
\label{def:alarm}
Given an event log $L \subset \mathcal{E}^*$, let $\widehat{out}_L$ be an outcome estimator built from $L$.
An \emph{alarm-based prescriptive process monitoring system} is a function $\mathit{alarm}_{\widehat{out}_L} \in \mathcal{E}^*\rightarrow \{\textsf{true},\textsf{false}\}$.
Given a running case identified by a trace $\sigma$ with current prefix $\sigma'$, $\mathit{alarm}_{\widehat{out}_L}(\sigma')$ returns $\textsf{true}$ if an alarm is raised based on the predicted outcome $\widehat{out}_L(\sigma')$, and $\textsf{false}$ otherwise.
\end{definition}
For simplicity, we omit the subscript $L$ from $\widehat{out}_L$ and omit $\widehat{out}_L$ from $\mathit{alarm}_{\widehat{out}_L}$ when they are clear from the context. An alarm system can raise an alarm at most once per case, since we assume that the first alarm already triggers an intervention by the stakeholders.
\looseness=-1
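A simple alarm system of this kind can be built from an outcome estimator and a threshold. The following sketch is illustrative (the toy estimator is a hypothetical stand-in for a model trained on an event log); the empirical choice of the threshold is discussed later in the paper.

```python
def make_alarm(out_estimator, tau=0.5):
    """Build an alarm-based monitoring system from an outcome estimator:
    the alarm fires on a prefix as soon as the estimated likelihood of
    an undesired outcome exceeds the threshold tau."""
    def alarm(prefix):
        return out_estimator(prefix) > tau
    return alarm

# Hypothetical estimator: likelihood grows with the prefix length.
toy_estimator = lambda prefix: min(1.0, len(prefix) / 10.0)
alarm = make_alarm(toy_estimator, tau=0.5)
```

With this estimator, the alarm is raised only once at least six events have been observed.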
\begin{table}[tb]
\centering
\vspace{-0.2cm}
\caption{Cost of a case $\sigma$ based on its outcome and whether an alarm was raised}
\label{table:cost_matrix_business_costs}
\begin{tabular}{c|c|c}
\toprule
& undesired outcome & desired outcome \\
\midrule
alarm raised & $c_\mathit{in}(k,\sigma,L) + (1-\mathit{eff}(k,\sigma,L)) c_\mathit{out}(\sigma,L)$ & $c_\mathit{in}(k,\sigma,L) + c_\mathit{com}(\sigma,L)$ \\
alarm not raised & $c_\mathit{out}(\sigma,L)$ & $0$ \\
\bottomrule
\end{tabular}
\vspace{-0.3cm}
\end{table}
The purpose of an alarm system is to minimize the cost of executing a case. \tablename~\ref{table:cost_matrix_business_costs} summarizes how the cost of a case is determined based on a cost model (cf.\ Def.~\ref{def:costs}), on the case outcome, and on whether an alarm was raised or not.
\begin{definition}[Cost of Case Execution]
\label{sec:instanceExecution}
Let $cm=(c_\mathit{in},c_\mathit{out},c_\mathit{com},\mathit{eff})$ be an alarm-based cost model.
Let $out\in \mathcal{E}^*\rightarrow \{\textsf{true},\textsf{false}\}$ be an outcome function.
Let $\mathit{alarm} \in \mathcal{E}^*\rightarrow \{\textsf{true},\textsf{false}\}$ be an alarm-based prescriptive process monitoring system.
Let $L \subset \mathcal{E}^*$ be the entire set of \emph{complete} (i.e., no longer running) cases.
Let $\sigma \in L$ be a case.
Let $\mathcal{I}(\sigma,\mathit{alarm})$ be the index of the event in $\sigma$ when the alarm was raised or zero if no alarm was raised:
\noindent\resizebox{\linewidth}{!}{
$\mathcal{I}(\sigma,\mathit{alarm})=\begin{cases}
0& \text{if }\forall{k\in[1,|\sigma|]}.\; \neg \mathit{alarm}(\mathit{hd}^k(\sigma)),\\
1 & \text{if } \mathit{alarm}(\mathit{hd}^1(\sigma)),\\
i \in [2,|\sigma|] \text{ s.t. } \mathit{alarm}(\mathit{hd}^i(\sigma)) \land & \text{otherwise.}\\
\quad \forall k\in[1,i-1].\; \neg \mathit{alarm}(\mathit{hd}^k(\sigma)) &
\end{cases}$}
\noindent The \emph{cost of execution of case $\sigma$} supported by the alarm system is:
\noindent\resizebox{\linewidth}{!}{
$ \mathit{cost}(\sigma,L,cm,\mathit{alarm})=
\begin{cases}
c_\mathit{in}(\mathcal{I}(\sigma,\mathit{alarm}),\sigma,L) + (1-\mathit{eff}(\mathcal{I}(\sigma,\mathit{alarm}),\sigma,L))\cdot c_\mathit{out}(\sigma,L)& \mathit{out}(\sigma)\land \mathcal{I}(\sigma,\mathit{alarm})>0, \\
c_\mathit{in}(\mathcal{I}(\sigma,\mathit{alarm}),\sigma,L)+c_\mathit{com}(\sigma,L) & \neg\mathit{out}(\sigma)\land \mathcal{I}(\sigma,\mathit{alarm})>0,\\
c_\mathit{out}(\sigma,L) & \mathit{out}(\sigma)\land \mathcal{I}(\sigma,\mathit{alarm})=0,\\
0 & \text{otherwise.}
\end{cases}$
}
\end{definition}
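The definition above translates almost directly into code. The sketch below assumes a cost model is given as a tuple of the four functions $(c_\mathit{in}, c_\mathit{out}, c_\mathit{com}, \mathit{eff})$ and that traces are Python lists; the helper names are hypothetical.

```python
def alarm_index(trace, alarm):
    """I(sigma, alarm): index of the first event after which the alarm
    is raised on the corresponding prefix, or 0 if it never fires."""
    for k in range(1, len(trace) + 1):
        if alarm(trace[:k]):
            return k
    return 0

def case_cost(trace, log, cm, alarm, out):
    """Cost of executing a completed case, following the cost table."""
    c_in, c_out, c_com, eff = cm
    k = alarm_index(trace, alarm)
    if out(trace) and k > 0:       # undesired outcome, alarm raised
        return c_in(k, trace, log) + (1 - eff(k, trace, log)) * c_out(trace, log)
    if not out(trace) and k > 0:   # desired outcome, alarm raised
        return c_in(k, trace, log) + c_com(trace, log)
    if out(trace):                 # undesired outcome, no alarm
        return c_out(trace, log)
    return 0.0                     # desired outcome, no alarm
```

For instance, with a fully effective intervention ($\mathit{eff}=1$) an alarmed undesired case costs only the intervention, while a missed undesired case costs the full undesired outcome.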
Section~\ref{sec:approach} illustrates how an alarm-based prescriptive process monitoring system can be designed aiming at the minimization of the case execution costs (according to Def.~\ref{sec:instanceExecution}).\looseness=-1
\input{framework_roi_analysis}
\section{Introduction}
\label{sec:intro}
\begin{comment}
Modern organizations often execute their business processes on top of Process-Aware Information Systems (PAIS), such as Enterprise Resource Planning (ERP), Customer Relationship Management (CRM), and Business Process Management (BPM) systems~\cite{FBPM}. These systems record a range of events that occur while executing the processes that they support, e.g., events that signal the start or the completion of an instance (called a \emph{case}) of the business process, or the start and completion of process activities within the cases.
Event records produced by PAIS systems can be extracted and pre-processed to produce business process \emph{event logs}~\cite{van2016process}, each consisting of the sequence (called \emph{trace}) of event records produced by a case of the process.
Each event record contains attributes. Three of such attributes that are present in every event record are the \emph{activity} specifying which activity the event refers to, the \emph{timestamp} specifying when the event occurred, and the \emph{case id} indicating the case of the process that generated this event. In other words, every event represents the occurrence of an \emph{activity} at a particular \emph{point in time} and in the context of a given \emph{case}. An event record may carry additional attributes in its payload.
Case attributes appear in every event of every case and have the same value for all events generated within the same case.
In other words, the value of a case attribute is static, i.e., it does not change throughout the lifetime of a case, as opposed to attributes in the event payload, which are dynamic as they change from an event to the other.
Moreover, data can also be recorded as \emph{case attributes} which remain unchanged throughout each case. For example, in a healthcare process case attributes could specify the diagnosis for a given patient or her age.
\end{comment}
\emph{Predictive process monitoring}~\cite{maggi2014predictive,MetzgerLISFCDP15} is a family of techniques to predict the future state of ongoing cases of a business process based on event logs recording past executions thereof.
A predictive process monitoring technique may provide predictions on the remaining execution time of an ongoing case, the next activity to be executed, or the final outcome of the case wrt.\ a set of possible outcomes. This paper is concerned with the latter type of predictive process monitoring, which we call \emph{outcome-oriented}~\cite{teinemaa2017outcome}. For example, in a lead-to-order process, an outcome-oriented predictive process monitoring technique may predict whether a case will end up in a purchase order (desired outcome) or not (undesired outcome).
Existing outcome-oriented predictive process monitoring techniques are able to predict, after each event of a case, the likelihood that the case will end up in an undesired outcome. However, these techniques are restricted in scope to prediction: they neither suggest nor prescribe how and when process workers should intervene in order to decrease the likelihood of undesired outcomes.
This paper proposes a framework to extend outcome-oriented predictive process monitoring techniques in order to make them prescriptive.
Concretely, the proposed framework extends a given outcome-oriented predictive process monitoring model with a mechanism for generating alarms that lead to interventions, which, in turn, mitigate (or altogether prevent) undesired outcomes.
The proposed framework is equipped with a parameterized cost model that captures, among other factors, the tradeoff between the cost of an intervention and the cost of an undesired outcome.
Based on this cost model, the paper outlines an approach for return on investment analysis of a prescriptive process monitoring system under a configuration of cost parameters and a predictive model trained on a given dataset.
Finally, the paper proposes and empirically evaluates an approach to tune the generation of alarms to minimize the expected cost for a given dataset and set of parameters.
The paper is structured as follows. Section~\ref{sec:background} introduces basic concepts and notations. Next, Section~\ref{sec:framework} presents the prescriptive process monitoring framework, Section~\ref{sec:approach} outlines the approach to optimize the alarm generation mechanism, and Section~\ref{sec:evaluation} reports on our empirical evaluation. Finally, Section~\ref{sec:related} discusses related work, Section~\ref{sec:threats} delineates the limitations of our framework and consequent future work, and Section~\ref{sec:conclusion} summarizes the contributions.
\section{Related Work}
\label{sec:related}
The problem of cost-sensitive training of machine learning models has received significant attention.
For example, Elkan~\cite{elkan2001foundations} analyzes the notion of misclassification cost and defines conditions under which a misclassification cost matrix is reasonable. Turney~\cite{turney2002types} examines a broader range of costs in the context of inductive concept learning. This latter study introduces the notion of cost of intervention, which we include in our proposed cost model. These approaches, however, do not take into account the specific costs that arise in prescriptive process monitoring.
Predictive and prescriptive process monitoring are related to Early Classification of Time Series (ECTS), which aims at classifying a (partial) time series as early as possible, while achieving high classification accuracy~\cite{xing2012early}.
To the best of our knowledge, the approaches in~\cite{mori2017early,dachraoui2015early,tavenard2016cost} are the only ECTS methods that try to balance accuracy-related and earliness-related costs.
However, these approaches assume that predicting a positive class early has the same effect on the cost function as predicting a negative class early, which is not the case in typical business process monitoring scenarios, where earliness matters only when an undesired outcome is predicted.
The approaches in~\cite{metzger2017predictive,di2016clustering} focus on alarm-based prescriptive process monitoring, but only allow alarms to be raised when a given state of the process is reached. This moment might be too late to mitigate consequences that could have been avoided had the alarm been raised earlier. Furthermore, unlike these approaches, ours does not require an explicit modelling of the process states. Last but not least, they rely on a fixed-threshold alarming mechanism provided by process owners, as opposed to our empirical thresholding approach.
Gr\"oger et al.~\cite{groger2014prescriptive} also provide recommendations, but their approach misses the two core elements of our proposed prescriptive process monitoring framework, i.e., cost models and earliness.
\section{Limitations and Future Work}
\label{sec:threats}
While the scenarios discussed in Boxes~1-3 show that the proposed framework is versatile enough to cover a variety of cases, the current version of the framework relies on two main assumptions. First, it assumes that an alarm always triggers an intervention, thus ignoring that a process worker might in some cases decide not to or be unable to intervene. Additionally, the current version of the framework considers each case in isolation, omitting the overall workload of the process workers, which in reality is an important factor for determining the number of alarms that can be acted upon. This limitation can be lifted by, e.g., combining the alarm system with~\cite{CONFORTI20151}, which proposes a recommender system that optimizes suggestions in case of concurrent process executions. A second limitation of the framework is that only one possible type of intervention is envisaged. This assumption can be lifted by extending the framework so that the cost of an intervention can vary depending on the specific action suggested by a recommender system.
Next to these limitations, we acknowledge the importance of further investigation on the applicability of the framework in practice.
In particular, in the future, we aim at collaborating with companies and institutions to study whether process stakeholders are able to define the costs in a natural and simple way. Also, we plan to further investigate the consequences of incorrect and/or imprecise instantiations of the cost models.
Furthermore, the current evaluation is limited to measuring the benefit of the alarm system in an offline manner, while a more thorough evaluation would consist in deploying the alarming mechanism in a real organization and making an end-to-end comparison of the costs before and after the deployment of the alarm system. However, this is a difficult task for two main reasons. First, companies need to be willing to let the technique really influence the process executions. Second, the end-to-end effectiveness analysis cannot be conducted without coupling the alarm system with a recommender system: if the system raises proper alarms, but inappropriate interventions are taken, the system would still be ineffective.
Another avenue for future work is to extend the framework with active learning methods in order to incrementally tune the alarming mechanism based on feedback about the relevance of the alarms and the effectiveness of the interventions.
\section{Introduction}
\label{sec:intro}
Neutrino physics presents one of the biggest puzzles yet to be
addressed in modern particle physics. The extremely small values of the neutrino masses compared to
the masses of the other fermions appear unnatural in the Standard Model
(SM)~\cite{Esteban:2018azc}. The seesaw
mechanism~\cite{Minkowski:1977sc,yanagida1979proc,glashow1980proceedings,gell1980supergravity,Mohapatra:1979ia}
provides an elegant way to give a very small mass $m_\nu$ to each of the SM
neutrinos by introducing a heavy Majorana neutrino with mass
$M$. The spontaneously broken electroweak (EW) symmetry explains the neutrino mass
as a Yukawa coupling.
Three types of seesaw mechanisms have been proposed and their
phenomenology can be tested at collider experiments. The \typeIIIseesaw~\cite{Foot:1988aq} introduces at
least one extra fermionic $\mathrm{SU(2)_L}$ triplet field coupled to
EW gauge bosons. These heavy charged and neutral leptons
can in principle be produced by EW processes at the Large Hadron Collider (LHC).
Type-III seesaw heavy-lepton searches have already been performed in
various decay channels by both the ATLAS and CMS collaborations. In
\RunOne, ATLAS excluded heavy leptons with masses below
\SI{335}{\GeV}~\cite{EXOT-2014-07} using final states containing two
light leptons (electrons or muons) and two jets. This mass limit was then
improved to \SI{470}{\GeV}, still using \RunOne data, by adding the
three-lepton channel~\cite{EXOT-2014-08} as suggested in Ref.~\cite{delAguila:2008cj1}. Using the full \RunTwo data sample of
proton--proton collisions at \(\rts = \SI{13}{\TeV}\), the CMS
Collaboration has excluded heavy-lepton masses up to
\SI{880}{\GeV}~\cite{CMS-EXO-19-002} by analysing three- and four-lepton final
states, while ATLAS has excluded heavy-lepton masses up to
\SI{790}{\GeV}~\cite{EXOT-2018-33} by using only the two-lepton-plus-jets final
state. The analysis presented in this paper searches for
a \typeIIIseesaw heavy
lepton in three- and four-lepton final states. For the first time, a combination with the
two-lepton-plus-jets final state is performed, giving a significant improvement in the
sensitivity of the analysis.
The \typeIIIseesaw model targeted in this search is described
in Ref.~\cite{Biggio:2011ja}. It assumes the pair production of the
neutral Majorana ($\Nz$)
and charged ($\Lpm$) heavy leptons proceeds via the
$s$-channel production of virtual EW gauge bosons. \Nz pairs are not
produced because \Nz has $T_3 = Y = 0$ and thus does not couple to the
$\Zboson$~\cite{delAguila:2008cj1,Strumia:2006db}. The production cross-section depends only on the masses of the $\Nz$ and $\Lpm$, which are assumed to be
degenerate as the mass splitting
due to electroweak radiative corrections is expected to be smaller
than $\sim$\SI{200}{\MeV}~\cite{Arhrib:2009mz}. The decays allowed in this model are $\Lpm\rightarrow H\ell^\pm,Z\ell^\pm,W^\pm\nu$ and
$\Nz \rightarrow Z\nu, H\nu, W^\pm \ell^\mp$, where the SM leptons can be of any flavour, i.e.\ \( \ell = e,\mu,\tau\).
The branching ratios \(\mathcal{B}_\ell \) for the heavy-lepton decays into $\ell$ plus one SM boson are
determined by the parameters \( V_\ell \), which govern the mixing between the new heavy leptons and the SM leptons. Current bounds on \( V_\ell \) can be found in Ref.~\cite{Das:2020uer}. In this analysis,
we assume the \textit{democratic} scenario, where the three mixing parameters are equal, so that \(\mathcal{B}_e =
\mathcal{B}_\mu = \mathcal{B}_\tau = 1/3 \). The branching
ratios $\mathcal{B}_{\Zboson}$, $\mathcal{B}_{\Wboson}$ and $\mathcal{B}_{\Hboson}$
for heavy-lepton decays into an SM lepton or neutrino plus a $\Zboson$, $\Wboson$ or $\Hboson$ boson
are instead independent of the mixing parameters. For \Nz masses
larger than
a few times the \Hboson mass, as considered in this analysis, these branching ratios
are also independent of the heavy-lepton mass,
and \(2 \mathcal{B}_\Hboson \simeq 2 \mathcal{B}_\Zboson \simeq
\mathcal{B}_\Wboson \simeq 1/2 \). Examples of Feynman diagrams
in three- and four-lepton final states are shown in
\Fig{\ref{fig:feynman3l_4l}}. These events are characterised by the production of two
SM bosons ($VV$, $VH$ or $HH$, where $V= \Wboson, \Zboson$) and two charged leptons or neutrinos in the final state. The Majorana nature of \Nz allows
final states with four leptons and a non-zero total lepton electric
charge, as shown in \Fig{\ref{fig:feynman3l_4l}(b)}. This analysis
focuses on events with high light-lepton multiplicity, including light leptons from \( \tau \)-lepton decays.
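The asymptotic branching ratios quoted above fix the relative rates of the boson pairs produced in heavy-lepton pair decays. As a back-of-the-envelope check (assuming, as a simplification, that the two heavy leptons decay independently with $\mathcal{B}_\Wboson = 1/2$ and $\mathcal{B}_\Zboson = \mathcal{B}_\Hboson = 1/4$):

```python
from itertools import product

# Asymptotic branching ratios into SM bosons: B_W = 1/2, B_Z = B_H = 1/4
# (from 2*B_H = 2*B_Z = B_W = 1/2).
B = {"W": 0.5, "Z": 0.25, "H": 0.25}

# Rate of each unordered boson pair VV' from the two independent decays.
pairs = {}
for b1, b2 in product(B, repeat=2):
    key = "".join(sorted((b1, b2)))
    pairs[key] = pairs.get(key, 0.0) + B[b1] * B[b2]
```

Half of the decays thus contain at least one $\Wboson$ paired with a $\Zboson$ or $\Hboson$, and a quarter are $\Wboson\Wboson$, which motivates combining leptonic channels with the two-lepton-plus-jets final state.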
\begin{figure}[tbp]
\centering
\subfloat[]{
\includegraphics[width=0.4\textwidth]{fig_01a.pdf}
\label{fig:feynman3l_4la}
}
\subfloat[]{
\includegraphics[width=0.4\textwidth]{fig_01b.pdf}
}\\
\subfloat[]{
\includegraphics[width=0.4\textwidth]{fig_01c.pdf}
}
\caption{Examples of Feynman diagrams for the considered \typeIIIseesaw model~\cite{Biggio:2011ja}
producing three- and four-lepton final states.
}
\label{fig:feynman3l_4l}
\end{figure}
This paper is structured as follows. The ATLAS detector is described in
\Sect{\ref{sec:detector}}, the data and simulated events used in the analysis are outlined
in \Sect{\ref{sec:data-sim}}, and the event reconstruction procedure is
detailed in \Sect{\ref{sec:definitions}}. The analysis strategy and
background estimation are presented
in \Sects{\ref{sec:regions}}{\ref{sec:background}}, respectively.
The systematic uncertainties are described in \Sect{\ref{sec:systematics}}.
Finally, results and their statistical interpretation are presented in
\Sect{\ref{sec:results}}, followed by the conclusions in \Sect{\ref{sec:conclusion}}.
\FloatBarrier
\newcommand{\AtlasCoordFootnote}{
ATLAS uses a right-handed coordinate system with its origin at the nominal
interaction point (IP) in the centre of the detector and the \(z\)-axis
along the beam pipe. The \(x\)-axis points from the IP to the centre of the
LHC ring, and the \(y\)-axis points upwards. Polar coordinates
\((r,\phi)\) are used in the transverse plane, \(\phi\) being the azimuthal
angle around the \(z\)-axis. The pseudorapidity is defined in terms of the
polar angle \( \theta \) as \( \eta = -\ln \tan(\theta/2) \).
Angular distance is measured in units of
\( \Delta R \equiv \sqrt{{(\Delta\eta)}^{2} + {(\Delta\phi)}^{2}} \).}
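For illustration, the coordinate conventions in the footnote can be evaluated numerically. This is a generic sketch, not part of the ATLAS software; the azimuthal difference is wrapped into $[-\pi,\pi]$ before combining it with the pseudorapidity difference.

```python
import math

def pseudorapidity(theta):
    """eta = -ln tan(theta/2), with theta the polar angle in radians."""
    return -math.log(math.tan(theta / 2.0))

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance Delta R, wrapping the azimuthal difference
    into [-pi, pi] before combining it with the eta difference."""
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)
```

A particle emitted perpendicular to the beam axis ($\theta = \pi/2$) has $\eta = 0$, and the wrapping ensures that two directions on either side of $\phi = 0$ are counted as close.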
\section{ATLAS detector}
\label{sec:detector}
The ATLAS detector~\cite{PERF-2007-01} at the LHC is a multipurpose particle
detector with a near-\( 4\pi \) coverage in solid angle around the
collision point and a cylindrical
geometry\footnote{\AtlasCoordFootnote} coaxial with the beam
axis. It consists of
an inner tracking detector surrounded by a thin superconducting
solenoid providing a 2 T magnetic field,
electromagnetic and hadronic calorimeters, and a muon spectrometer with superconducting toroidal magnets.
The inner detector (ID) provides charged-particle tracking in the
range \( \abseta < 2.5 \) and, going outwards from the beam pipe,
is composed of a high-granularity silicon pixel detector that
typically provides four measurements per track,
the first hit normally being in the insertable B-layer installed
before Run~2~\cite{ATLAS-TDR-19,PIX-2018-001}, a silicon microstrip
tracker, and a
transition radiation tracker that covers the region up to
\(\abseta = 2.0\).
The calorimeter system covers the pseudorapidity range \(|\eta| < 4.9\).
Within the region \(|\eta|< 3.2\), electromagnetic calorimetry is
provided by barrel and
endcap high-granularity lead/liquid-argon (LAr) calorimeters,
with an additional thin LAr presampler covering \(|\eta| < 1.8\)
to correct for energy loss in material upstream of the calorimeters.
Hadron calorimetry is provided by the steel/scintillator-tile calorimeter,
segmented into three barrel structures within \(|\eta| < 1.7\), and
two copper/LAr hadron endcap calorimeters.
The solid angle coverage is completed with forward copper/LAr and
tungsten/LAr calorimeter modules
optimised for electromagnetic and hadronic energy measurements respectively.
The muon spectrometer (MS) instruments the outer part of the
detector and is composed of high-precision tracking
chambers up to \( \abseta =
2.7 \) and fast detectors for triggering up to \( \abseta =
2.4 \). The MS is immersed in a magnetic field produced by
three large superconducting air-core toroidal magnets with eight coils
each.
A two-level trigger system is used to select events that are of
interest for the ATLAS physics programme~\cite{TRIG-2016-01}.
The first-level trigger is implemented in hardware and reduces the
event rate to below \SI{100}{\kHz}. A software-based trigger further
reduces this to a recorded event rate of approximately \SI{1}{\kHz}.
An extensive software suite~\cite{ATL-SOFT-PUB-2021-001} is used in the reconstruction and analysis of real and simulated data, in detector operations, and in the trigger and data acquisition systems of the experiment.
\section{Data and simulated events}
\label{sec:data-sim}
This analysis uses data collected in proton--proton collisions at \(\rts = \SI{13}{\TeV}\)
with proton bunches colliding every \SI{25}{\ns}. After requiring that
all ATLAS subdetectors collected high-quality data and were operating
normally~\cite{DAPR-2018-01}, the total integrated luminosity amounts to \SI{139}{\ifb}.
The uncertainty in the combined 2015--2018 integrated luminosity is 1.7\%~\cite{ATLAS-CONF-2019-021},
obtained using the LUCID-2 detector~\cite{LUCID2} for the primary luminosity measurements.
Events were collected using dilepton triggers selecting pairs of
electrons~\cite{TRIG-2018-05} or muons~\cite{TRIG-2018-01}.
The transverse momentum (\pt) threshold of the unprescaled dilepton trigger
was raised during the data taking, due to the increasing luminosity of the colliding beams,
but was never higher than \SI{24}{\GeV} for the leading electrons and
\SI{22}{\GeV} for the leading muons.
Signal and background events
were modelled using different Monte Carlo (MC) generators as listed in
\Tab{\ref{tab:MC}}. The response of the ATLAS
detector was simulated~\cite{SOFT-2010-01} using the $\GEANT4$
toolkit~\cite{Agostinelli:2002hh} and simulated events were reconstructed with the same
algorithms as those applied to data~\cite{ATL-SOFT-PUB-2021-001}. The
\typeIIIseesaw signal model was implemented in the
\MGNLO~\cite{Alwall:2014hca} generator at leading order (\LO)
using \textsc{FeynRules}~\cite{Alloul:2013bka} and the
\textsc{NNPDF3.0lo}~\cite{Ball:2014uwa} parton distribution function
(PDF) set. All decays of \Lpm and \Nz into the different leptonic flavours
and subsequent decays of the \Wboson, \Zboson and \Hboson are
considered. Matrix element (ME) events were interfaced to
$\PYTHIA\,8.230$~\cite{Sjostrand:2014zea} for parton showering with the
A14 set of tuned parameters~\cite{ATL-PHYS-PUB-2014-021} and the
\textsc{NNPDF2.3lo} PDF set~\cite{Ball:2012cx}.
The signal cross-section and its uncertainty at next-to-leading-order (NLO) plus
next-to-leading-logarithm (NLL) accuracy were calculated from \( \text{SU}(2)\) triplet production
in an electroweak chargino--neutralino model~\cite{Fuks:2012qx,Fuks:2013vua}. The calculated
cross-sections are compatible
within uncertainties with the \typeIIIseesaw NLO
implementation~\cite{Ruiz:2015zca,Cai:2017mow}. The production
cross-sections for 600, 800 and 1000~\GeV\ \typeIIIseesaw heavy leptons are
$29.6\pm 3.0$, $7.0\pm 0.8$ and $1.97 \pm 0.25$~fb, respectively.
Simulated SM background samples include diboson processes, which are the dominant ones, followed by processes
labelled \textit{rare top quark} that include multi-top-quark
production and
top-quark production in association with EW bosons (\( \ttbar V,
\ttbar H, t \Wboson\Zboson\)). Other
SM simulated samples are triboson ($VVV$), \ttbar, single top, and
\DY (\( \qqbar \rightarrow \Zboson/\gam^{*} \rightarrow \ellell \,
(\ell=e,\mu,\tau) \)) production processes. They are mainly used for the estimation
of reducible backgrounds as described in~\Sect{\ref{sec:background}}. The MEPS@NLO
prescription~\cite{Hoeche:2012yf} was used in the generation of
\DY processes to match the ME
to the parton shower.
The generators used in the MC sample production and
the cross-section calculations used for MC sample normalisations are listed in
\Tab{\ref{tab:MC}}. The normalisation
of the dominant backgrounds, diboson and rare top-quark processes, are extracted from the final likelihood fit,
as described in \Sect{\ref{sec:results}}.
\begin{table}[tbp]
\begin{center}
\caption{Configurations used for event generation of signal and
most-relevant background processes. For the cross-section,
the order in the strong coupling constant is shown for the
perturbative calculation. If only one parton distribution
function is shown, the same one is used for both the ME and
parton shower generators; if two are shown, the first is used
for the ME calculation and the second for the parton
shower. Tune refers to the set of tuned underlying-event parameters used by the parton
shower generator. The masses of the top quark and SM Higgs
boson were set to 172.5~\GeV\ and 125~\GeV, respectively. The samples with negligible impact are mentioned in the table but not discussed in the text.}
\label{tab:MC}
\vspace{0.25cm}
\scriptsize
\resizebox{\textwidth}{!}{
\begin{tabular}{l l c c c c}
\toprule
Process & Generator & Cross- & Parton &PDF & Tune \\
& & section & shower &set & \\
\midrule
Type-III seesaw & & & & & \\
$\Lp \Lm,\Lpm \Nz$ & \MGNLO\cite{Alwall:2014hca} & NLO+NLL & $\PYTHIA\,8.230$~\cite{Sjostrand:2014zea}
& \textsc{NNPDF3.0lo}~\cite{Ball:2014uwa} \textsc{NNPDF2.3lo}~\cite{Ball:2012cx}
& A14~\cite{ATL-PHYS-PUB-2014-021} \\
\midrule
Top quark & & & & & \\
$\ttbar$ & \textsc{Powheg\,Box}\,v2~\cite{Frixione:2007nw,Nason:2004rx,Frixione:2007vw,Alioli:2010xd} & NNLO & $\PYTHIA\,8.230$ & \textsc{NNPDF3.0nnlo}~\cite{Ball:2014uwa} \textsc{NNPDF3.0nlo}~\cite{Ball:2014uwa} & A14 \\
Single $t$ & \textsc{Powheg\,Box}\,v2 & NNLO & $\PYTHIA\,8.230$ & \textsc{NNPDF3.0nnlo} \textsc{NNPDF3.0nlo} & A14 \\
\midrule
Rare top quark & & & & \\
3$t$, 4$t$ & \MGNLO & LO & $\PYTHIA\,8.230$ & \textsc{NNPDF3.0lo} & A14 \\
$\ttbar$ + $W/Z/H$, $tWZ$ & \MGNLO & NNLO & $\PYTHIA\,8.230$ &
\textsc{NNPDF3.0nlo} & A14 \\
\midrule
Diboson & & & & & \\
$ZZ$, $WZ$ & $\SHERPA\,2.2.1$~\cite{Bothmann:2019yzt} \& 2.2.2 & NLO & \SHERPA & \textsc{NNPDF3.0nnlo} & \SHERPA default \\
\midrule
Triboson & & & & & \\
$WWW, WWZ, WZZ, ZZZ$ & $\SHERPA\,2.2.1$ \& 2.2.2 & NNLO
& \SHERPA & \textsc{NNPDF3.0nnlo} & \SHERPA default \\
\midrule
Drell--Yan & & & & & \\
$\Zboson/\gam^{*}\rightarrow \ee/\mumu/\tautau$ &
$\SHERPA\,2.2.1$ & NLO & \SHERPA & \textsc{NNPDF3.0nnlo} & \SHERPA default \\
\bottomrule
\end{tabular}
}
\end{center}
\end{table}
Diboson \(VV\) and triboson \(VVV\) events were
simulated with the
$\SHERPA\,2.2.1$ and 2.2.2 generators~\cite{Bothmann:2019yzt}. Off-shell effects and Higgs boson contributions
were included. ME calculations were matched
and merged with the \SHERPA parton shower based on the Catani--Seymour
dipole factorisation~\cite{Gleisberg:2008fv,Schumann:2007mg} using the
MEPS@NLO
prescription~\cite{Hoeche:2011fd,Hoeche:2012yf,Catani:2001cc,Hoeche:2009rj}. The
\OPENLOOPS library~\cite{Cascioli:2011va,Denner:2016kdg-fixed}
provided QCD corrections. Diboson samples
were produced with different lepton multiplicities
using MEs at NLO accuracy in QCD for up to one additional parton
and at LO accuracy for up to three additional parton emissions.
Loop-induced \( gg \to VV \) processes were
generated with LO MEs for emission of up to one additional parton
for both the fully leptonic and semileptonic final states.
Electroweak production of a diboson pair in association with two jets
(\(VVjj\)) was also simulated at LO\@. The PDFs used for the nominal samples
were CT14~\cite{Dulat:2015mca}
and MMHT2014~\cite{Harland-Lang:2014zoa}.
Samples of events from \( \ttbar + V\) (where \( V \) stands for \(\gamma^\ast\), \(\Wboson\), \(\Zboson\) and \(\Hboson\)) and \(t \Wboson \Zboson\) processes were produced using
\MGNLO~\cite{Alwall:2014hca} at NLO accuracy and were
interfaced to the $\PYTHIA\,8.230$ parton
shower with \textsc{NNPDF3.0nlo}~\cite{Ball:2014uwa} PDFs. The \hdamp parameter, which controls
the matching between the ME and the parton shower, was set
to $1.5\,\mtop$~\cite{ATL-PHYS-PUB-2016-020},
using a top quark mass of \( \mtop = \SI{172.5}{\GeV}
\).
All simulated events were overlaid with a simulation of multiple \pp interactions
occurring in the same or neighbouring bunch crossings. These \pp inelastic scattering events were
generated by $\PYTHIA\,8.186$
using the \textsc{NNPDF2.3lo} set of PDFs and the A3 set of tuned
parameters~\cite{ATL-PHYS-PUB-2016-017}. Their effects
are referred to as \textit{pile-up}. The simulated events were reweighted such
that the distribution of the average number of interactions per
bunch crossing is compatible with that observed in the data.
\section{Event reconstruction}
\label{sec:definitions}
Events considered in this analysis are required to have at least one collision vertex
reconstructed with at least two tracks with transverse momentum
greater than \SI{500}{\MeV}. The primary vertex of the
hard-scattering event is the one with the largest sum of the
associated tracks' squared transverse momenta. Events have to satisfy
the quality criteria listed in
Ref.~\cite{DAPR-2018-01}, including rejection of events with a large
amount of calorimeter noise or non-collision background.
Electrons are reconstructed by matching a charged-particle track in the ID with an
energy deposit in the electromagnetic calorimeter. Electron candidates
are required to satisfy a \textit{Loose} likelihood-based identification selection~\cite{EGAM-2018-01} and to be in the fiducial
volume of the inner detector, \( \abseta < 2.47 \). The transition region
between the barrel and endcap calorimeters (\( 1.37 < \abseta < 1.52 \))
is excluded because it is partially non-instrumented due to services infrastructure.
The transverse impact parameter \( \dzero \) of the track associated with
the candidate electron must have a significance satisfying \( |\dzero|/\sigma(\dzero) < 5\).
This is required in order to reduce the number of electrons originating from secondary decays.
Similarly, the track's longitudinal impact parameter $z_0$ relative to the primary vertex must
satisfy $|z_0\sin(\theta)|<\SI{0.5}{\mm}$, where $\theta$ is the track's polar angle.
The electron's transverse energy \ET must exceed \SI{10}{\GeV}. After this preselection, to refine the electron quality,
a \textit{Tight} likelihood-based
identification selection and a set of \textit{Loose} isolation criteria based on both
calorimetric and tracking information are applied to primarily select electrons
coming from the decays of the heavy leptons or the EW bosons.
Track segments in the MS are matched with ID tracks to reconstruct
muons if they
are within the $\eta$ coverage of the ID\@. Muon
candidates with \pt lower than \SI{300}{\GeV} are required to satisfy the
\textit{Medium}
muon identification requirements, while for high-\pt muons, a specific identification
working point is applied~\cite{MUON-2018-03}. Muon candidates are
required to have $\abseta < 2.5$, a transverse
impact parameter significance of \( |\dzero|/\sigma(\dzero) < 3 \) and
a longitudinal impact parameter value of
\( |z_{0}\sin(\theta)| < \SI{0.5}{\mm} \). The minimum muon \pt is \SI{10}{\GeV}.
After this preselection, an isolation requirement based only on
tracking information is applied.
In this analysis, particle-flow objects~\cite{PERF-2015-09} are formed from energy-deposit clusters in the calorimeters
and tracks measured in the ID but not matched to identified leptons.
Particle-flow objects are then clustered into
jets using the \antikt algorithm~\cite{Cacciari:2008gp} with a
radius parameter $R = 0.4$. The measured jet \pt is
corrected for detector effects to measure the particle energy before
interactions with the detector material~\cite{PERF-2016-04}. Energy within jets that is
due to \pileup is estimated and removed by subtracting an amount equal to the mean
pile-up energy deposition density multiplied by the \( \eta
\)--\( \phi \) area of the jet. Pile-up can also produce additional
jets that are identified and rejected by the jet-vertex tagger
(JVT) algorithm~\cite{PERF-2014-03}, which distinguishes them from jets originating from
the hard-scattering primary vertex. Only jets with transverse energy greater
than \SI{20}{\GeV} and $|\eta| < 2.4$ are considered.
Jets originating from heavy-flavour quarks are identified with the
MV2c10 multivariate $b$-tagging algorithm using the
77\% efficiency working point~\cite{PERF-2012-04,ATL-PHYS-PUB-2016-012,ATL-PHYS-PUB-2017-013},
with measured rejection factors of approximately 134, 6 and 22 for
light-quark and gluon jets, \(c\)-jets, and hadronically decaying
\( \tau \)-leptons, respectively.
The missing transverse momentum $\vec{p}_{\mathrm{T}}^{\mathrm{miss}}$ (with magnitude \MET)
is calculated as the negative vectorial sum of the \pt of reconstructed jets and leptons in the event.
A `soft term' taking into account tracks associated with the primary vertex but not with any hard object
is then added to guarantee the best performance in a high \pileup environment~\cite{PERF-2016-07}.
The \MET significance \metsig, calculated with a maximum-likelihood ratio method, is used in
\Sect{\ref{sec:regions}} to define the various analysis regions, taking into account the direction of
the $\vec{p}_{\mathrm{T}}^{\mathrm{miss}}$ and calibrated objects as well as their respective resolutions.
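Schematically, and with notation introduced here for illustration only (the precise object calibrations and the soft-term definition are those of the references above),
\begin{equation*}
  \vec{p}_{\mathrm{T}}^{\mathrm{miss}} = -\sum_{\mathrm{leptons}} \vec{p}_{\mathrm{T}} \,-\, \sum_{\mathrm{jets}} \vec{p}_{\mathrm{T}} \,-\, \sum_{\mathrm{soft\ tracks}} \vec{p}_{\mathrm{T}} \,, \qquad \MET = \big|\vec{p}_{\mathrm{T}}^{\mathrm{miss}}\big| \,.
\end{equation*}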
Different objects reconstructed close together in the $\eta$--$\phi$ plane could in principle have originated from
the same primary object. Possible overlaps are resolved by an algorithm that appropriately removes one of the two closely spaced objects to avoid double-counting. If a muon candidate is found to share an ID track with an electron candidate,
the electron candidate is rejected. If two electron candidates
share an ID track, the one with the lower
\pt is rejected. Jets are rejected if they are within $\DeltaR = 0.2$ of a lepton candidate, except if the candidate is
a muon and three or more collinear tracks are found. Finally, lepton candidates that are within $ \DeltaR= 0.4$
of any remaining jet are removed.
\section{Analysis strategy}
\label{sec:regions}
Once events have been classified according to the presence of either exactly three or
four light leptons,\footnote{Leptons are ordered going from the
highest to the lowest momentum, $\ell_1$ to $\ell_3$ ($\ell_4$ for
four-lepton events).}
the two lepton-multiplicity categories are refined with dedicated
selections. Events with higher lepton multiplicities can be
categorised in lower-multiplicity regions if one or more leptons escape
detection.
Signal regions (SRs) are defined so as to maximise the significance of
the signal event
count predicted by the targeted model relative to the expected number of SM
background events. SM backgrounds are normalised
by performing a simultaneous fit in the SRs and in dedicated
control regions (CRs). The CRs are defined so as to be
enriched in relevant background processes and depleted in events from
signal processes.
The fit uses a kinematic variable chosen to
optimise the sensitivity to the small cross-sections expected for the signal processes.
Validation regions (VRs), also
depleted in signal events, are used to validate the extrapolation of the SM
background expectations
obtained from the background-only fit to
independent regions kinematically
close to the SRs. All CRs and VRs are characterised by a signal contamination below 2\%.
Backgrounds are assigned to two broad categories: reducible and irreducible backgrounds.
Reducible backgrounds include leptons from misreconstructed objects such as jets, or from
light- or heavy-quark decays or, in the electron case, photon conversions.
These are called \textit{fake or non-prompt} (FNP) leptons, and events containing at least one such lepton are referred to as the FNP background.
Its contribution is calculated using a data-driven method.
Irreducible backgrounds are produced by SM processes with three or
four prompt leptons in the final state.
Prompt leptons are leptons produced in the decays of \Wboson bosons, \Zboson bosons, and $\tau$-leptons,
as well as direct decays of the heavy leptons considered as signal in this analysis.
The most important sources are diboson and rare top-quark processes, with
the latter being primarily \ttbar pairs produced in association with an EW or Higgs boson.
For these processes, kinematic distributions are obtained from MC simulation and their normalisation
is extracted from the fit.
Because low-mass heavy leptons have been excluded by previous searches,
this search focuses on higher masses, where signal events are characterised by objects having high momenta.
Details about the three- and four-lepton analysis regions are provided below, while details of the two-lepton analysis regions are
given in Ref.~\cite{EXOT-2018-33}. The analysis regions are all orthogonal to one another.
\newcommand{\MTfootnote}{
The generic transverse mass of one or multiple objects $N_{\mathrm{obj}}$ is defined as:
$m_{\mathrm{T}}^2(N_{\mathrm{obj}}) = \left(\sum_i^{N_{\mathrm{obj}}}E_{\mathrm{T},i} + E_{\mathrm{T}}^{\mathrm{miss}}\right)^2- |\sum_i^{N_{\mathrm{obj}}}{\vec{p}_{\mathrm{T},i}} + \vec{p}_{\mathrm{T}}^{\mathrm{miss}} |^2 $.
}
\subsection{Three-lepton channel}
\Tab{\ref{tab:3l_regions}} summarises the selection criteria used to define the three-lepton SRs, CRs and VRs.
The ZL SR is characterised by a
leptonically decaying \Zboson boson, and thus an opposite-sign, same-flavour (OSSF) lepton pair compatible with the \Zboson boson mass
is required. The SM boson from the decay of the other heavy lepton produced in the event, decays hadronically.
Signal events are expected to have a large three-lepton invariant mass ($m_{\ell\ell\ell}$),
and the transverse masses of the two highest-\pt leptons,
$m_\mathrm{T}(\ell_{1})$ and $m_\mathrm{T}(\ell_{2})$, are also expected to be large.\footnote{\MTfootnote}
An additional
requirement is placed on the
angular distance between the leading and subleading leptons to
further increase the signal-to-background ratio in the SRs.
A complementary ZLveto SR, targeting signals involving leptonic decays
of \Wboson bosons and hadronic decays of \Zboson bosons (including those from \Hboson bosons),
is defined by vetoing events containing OSSF lepton pairs compatible with a leptonic decay of an on-shell \Zboson boson,
requiring the invariant mass of any such pair to be larger than \SI{115}{\GeV}. The \HT variable is defined as the scalar sum of the \pt of all selected objects in the event. In cases where the scalar sum of the \pt is restricted to only a subset of the objects, the subset is specified.
Signal events are characterised by large \HT
and \met values and by a large value of the scalar sum of the momenta of the
same-sign leptons, denoted by $\HT (\mathrm{SS})$.
Since the presence of same-sign (SS) leptons in this region is mainly due
to rare top-quark events and FNP leptons, the $\HT$ of this
pair is used as a discriminating variable, requiring
$\HT(\mathrm{SS}) \ge \SI{300}{\GeV}$. To account for possible hadronic
decays of electroweak bosons from diboson background
sources, an upper limit is placed on the invariant mass of the two
leading jets, $m_{jj}$.
Finally, the JNLow SR targets events where the electroweak bosons
decay leptonically, and therefore events with low jet multiplicity,
as in \Fig{\ref{fig:feynman3l_4la}}, are selected. A lower bound is
imposed on the invariant mass of
the OSSF lepton pair ($m_{\ell\ell}(\mathrm{OSSF})$)\footnote{If two
OSSF lepton pairs are present, the requirement is applied to both pairs.} and a large value of the scalar
sum of the \pt of the three leptons, $\HT(\ell\ell\ell)$, is required.
Fake-lepton background is further reduced by requiring
$m_{\mathrm{T}}(\ell_{1})$ and $m_{\mathrm{T}}(\ell_{2})$ to exceed a minimum value.
The angular separation $\Delta R(\ell_1,\ell_2)$ between the two leptons is required to exceed a
minimum value to reduce the FNP contribution.
Overall selection efficiencies for the production of an 800~\GeV\ \typeIIIseesaw
heavy lepton are 0.29\%, 0.57\% and 0.41\% for the ZL, ZLveto and JNLow
SRs, respectively.
SM backgrounds in the three-lepton SRs consist of diboson events,
which contribute
${\sim}60\%$, ${\sim}80\%$ and ${\sim}40\%$ in the ZL,
JNLow and ZLveto regions, respectively. Another background in the ZL and
ZLveto regions originates from rare top-quark processes involving one or more top quarks, which contribute
${\sim}40\%$ and ${\sim}50\%$ of the background in those regions, respectively.
Therefore, a CR targeting the normalisation of the diboson
background is defined by requiring at least two jets and a low transverse
mass for the subleading lepton such that $m_\mathrm{T}(\ell_2)\le \SI{200}{\GeV}$.
Two VRs are defined in order to validate background estimates for events containing a \Zboson boson decaying into leptons, both obtained by inverting the
$\DeltaR(\ell_1,\ell_2)$ selection of the ZL SR, and applying additional requirements. The DB-VR also requires a $b$-tag
veto, while in the RT-VR the presence of at least one $b$-tagged jet is required. These VRs validate the predictions and normalisation of diboson and rare top-quark processes respectively. An additional JNLow-VR is obtained from the JNLow
SR by inverting the transverse mass requirement on the leading lepton,
$m_\mathrm{T}(\ell_1)\le \SI{240}{\GeV}$. Moreover, a Fake-VR is
defined by inverting the \metsig selection common to all the other
regions without applying any additional requirement except for lepton
\pt ones. This region is enriched in contributions from FNP
backgrounds and is therefore used to validate them.
In the three-lepton channel SRs, the kinematic variable used as the final discriminant in the fit to the data is the transverse mass of the three-lepton system.
\begin{table}[htpb]
\begin{center}
\caption{ Summary of the selection criteria used to define
relevant regions in the three-lepton analysis. No selection is
applied when a dash is present in the corresponding cell.
}
\vspace{0.25cm}
\renewcommand{\arraystretch}{1.3}
\setlength\tabcolsep{3.2pt}
\footnotesize
\begin{tabularx}{\textwidth}{c | *{8}{Y|}}
\toprule
\multicolumn{2}{c |}{ }
& \multicolumn{4}{c |}{\textbf{ZL} }
& \multicolumn{1}{c |}{\textbf{ZLveto} }
& \multicolumn{2}{c |}{\textbf{JNLow} }\\
\hline
& Fake-VR & CR & DB-VR & RT-VR & SR & SR & VR & SR\\
\hline
& \multicolumn{8}{ c |}{ $\pt (\ell_1) > 40 \ \GeV$ } \\
& \multicolumn{8}{ c |}{ $\pt (\ell_2) > 40 \ \GeV$ } \\
& \multicolumn{8}{ c |}{ $\pt (\ell_3) > 15 \ \GeV$ } \\
\hline
$\metsig$ & $<5$ & \multicolumn{7}{ c |}{ $ \ge 5 $ } \\
\hline
$N(\mathrm{jet})$
& -
& \multicolumn{5}{c |}{$\geq 2$}
& \multicolumn{2}{c |}{$\leq 1$} \\
\hline
$N(\bjet)$
& - & - & \multicolumn{1}{c |}{$0$}
& \multicolumn{1}{c |}{$\geq 1$} & - & - & - & - \\
\hline
$m_{\ell\ell} \, (\mathrm{OSSF}) ~ [\GeV]$ & -
& \multicolumn{4}{c |}{80--100 }
& \multicolumn{1}{c |}{$\geq 115$}
& \multicolumn{2}{c |}{ $\geq 80$ } \\
\hline
$H_{\mathrm{T}}+\met ~ [\GeV]$
& - & - & - & - & -
& \multicolumn{1}{c |}{$\geq600$}
& - & - \\
\hline
$m_{\ell\ell\ell} ~ [\GeV]$
& - & - & \multicolumn{3}{c |}{$\geq 300$}
& \multicolumn{1}{c |}{$\geq 300$}
& - & - \\
\hline
$H_{\mathrm{T}}(\mathrm{SS}) ~ [\GeV]$
& - & - & - & - & -
& \multicolumn{1}{c |}{$\geq 300$}
& - & - \\
\hline
$m_{jj} ~ [\GeV]$
& - &- &- &- &-
& \multicolumn{1}{c |}{$< 300$}
& - & - \\
\hline
$H_{\mathrm{T}}(\ell\ell\ell) ~ [\GeV]$
& - & - & - & - & - & -
& \multicolumn{2}{c |}{$\geq 230$} \\
\hline
$m_{\mathrm{T}}(\ell_1) ~ [\GeV]$ &
- & - & \multicolumn{3}{c |}{$\geq 200$}
& -
& \multicolumn{1}{c |}{$< 240$} & \multicolumn{1}{c |}{$\geq 240$} \\
\hline
$m_{\mathrm{T}}(\ell_2) ~ [\GeV]$ & -
& \multicolumn{1}{c |}{$< 200$} & \multicolumn{3}{c |}{$\geq 200$}
& -
& \multicolumn{2}{c |}{$\geq 150$} \\
\hline
$\Delta R (\ell_1, \ell_2)$ &
- & - & \multicolumn{2}{c |}{$< 1.2 $} & \multicolumn{1}{c |}{1.2--3.5}
& -
& \multicolumn{2}{c |}{$\geq 1.3$} \\
\bottomrule
\end{tabularx}
\label{tab:3l_regions}
\end{center}
\end{table}
\subsection{Four-lepton channel}
In the four-lepton channel, the momentum requirement on the three leading leptons is the same as in the three-lepton channel; the momentum of the fourth lepton is required to be larger than $\SI{10}{\GeV}$. Events are classified using the sum of the charges of the
four leptons in the final state: $\sum q_\ell$. The conditions $\sum q_\ell= 0$ and
$|\sum q_\ell|= 2$ identify the \textit{zero charge} (Q0) and
\textit{double charge} (Q2) regions, respectively.
A summary of the selection criteria defining the four-lepton regions
is shown in \Tab{\ref{tab:4l_regions}}.
Signal events are characterised by large \HT and
large invariant mass $m_{\ell\ell\ell\ell}$ of the four-lepton system. The presence of possible neutrinos in the final state is taken into account using the $\HT + \met$ variable; therefore
$m_{\ell\ell\ell\ell}\ge\SI{300}{\GeV}$ and $\HT + \met \ge \SI{300}{\GeV}$
are required in both the Q0 and Q2 signal regions.
Apart from the common lepton \pT requirements, these are the only kinematic selections applied in the Q2 SR,
which has less background than the Q0 SR since it is
very rare for a SM process to produce a doubly charged final state.
To reduce the \textit{ZZ$^\ast$} contribution in
the Q0 SR, no more than one OSSF
lepton pair in an event is allowed to be
compatible with a leptonic \Zboson decay defined by the invariant mass window 80--100\,\GeV.
Background in the Q0 SR is further reduced by requiring $\metsig \ge
5$. Overall selection efficiencies for the production of an 800~\GeV\ \typeIIIseesaw
heavy lepton are 0.14\% and 0.11\% for the Q0 and Q2 SRs, respectively.
Two CRs and two VRs are defined in the zero-charge Q0 kinematic space. A DB-CR targeting diboson
backgrounds is built by requiring a $b$-jet veto and defining an invariant mass window for
the four-lepton system ($\SI{170}{\GeV}\le m_{\ell\ell\ell\ell} <
\SI{300}{\GeV}$). A RT-CR targeting rare top-quark background
is obtained by requiring at least two $b$-tagged jets and
$m_{\ell\ell\ell\ell} < \SI{500}{\GeV}$. To ensure orthogonality to the CRs,
the VRs require events to have exactly one $b$-jet. To increase the
contributions of diboson and rare top-quark backgrounds in the VRs, $m_{\ell\ell\ell\ell}$
must satisfy $\SI{170}{\GeV}\le m_{\ell\ell\ell \ell} <
\SI{300}{\GeV}$ in DB-VR and $ \SI{300}{\GeV} \le m_{\ell\ell\ell\ell} <
\SI{500}{\GeV}$ in RT-VR\@. The RT-VR also requires $\HT + \met \ge \SI{400}{\GeV}$ and $\metsig \ge 5$.
The main sources of background in the Q2 signal region are diboson or rare top-quark events where the electric charge of one of the
electrons is mismeasured. As mentioned above, the only additional kinematic selections used to
define the Q2 SR are that both $\HT + \met$ and the four-lepton invariant mass must exceed $\SI{300}{\GeV}$. A dedicated Q2 VR
is obtained by requiring $m_{\ell\ell\ell\ell} < \SI{200}{\GeV}$ or $\HT + \met < \SI{300}{\GeV}$ in order to validate both the
diboson and FNP background estimates.
In the four-lepton channel SRs, the kinematic variable $\HT+\met$ is used as the final discriminant to fit to the data.
\begin{table}[htpb]
\begin{center}
\caption{ Summary of the selection criteria used to define relevant regions in the four-lepton analysis. $N_\Zboson$ is the number of leptonically reconstructed \Zboson bosons,
using opposite-sign same-flavour leptons. No selection is
applied when a dash is in the corresponding cell.
}
\label{tab:4l_regions}
\vspace{0.25cm}
\renewcommand{\arraystretch}{1.3}
\setlength\tabcolsep{3.2pt}
\footnotesize
\begin{tabularx}{\textwidth}{c | *{8}{Y|}}
\toprule
& \multicolumn{5}{c |}{\textbf{Q0} } & \multicolumn{2}{c |}{\textbf{Q2} } \\
\hline
& DB-CR & RT-CR & DB-VR & RT-VR & SR & VR & SR \\
\hline
& \multicolumn{7}{ c |}{ $\pt (\ell_{1,2}) > 40 \ \GeV$~~\, } \\
& \multicolumn{7}{ c |}{ $\pt (\ell_3) > 15 \ \GeV$ } \\
& \multicolumn{7}{ c |}{ $\pt (\ell_4) > 10 \ \GeV$ } \\
\hline
$| \sum{q_\ell} |$ & \multicolumn{5}{c |}{ 0 } & \multicolumn{2}{c |}{ 2 } \\
\hline
$N(\bjet)$ & 0 & $\geq 2$ & 1 & 1 & 0 & - & - \\
\hline
\multirow{2}{*}{$m_{\ell\ell\ell\ell} ~ [\GeV]$} & \multirow{2}{*}{170--300} & \multirow{2}{*}{$<500$} & \multirow{2}{*}{170--300} & \multirow{2}{*}{300--500} & \multirow{2}{*}{$\geq 300 $} & $<200$ & \multirow{2}{*}{$\geq 300$} \\
& \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & OR & \\
\cline{1-6}\cline{8-8}
$H_{\mathrm{T}}+\met ~ [\GeV]$ & - & - & - & $\geq 400 $ & $\ge 300 $ & $<300$ & $\ge 300 $ \\
\hline
$N_{Z}$ & - & - & - & - & $\leq 1$ & - & - \\
\hline
$\metsig$ & - & - & - & $\geq 5 $ & $\geq 5$ & - & - \\
\bottomrule
\end{tabularx}
\end{center}
\end{table}
\section{Background composition and estimation}
\label{sec:background}
The background estimation techniques used in the analysis, combining simulations and data-driven methods common to all channels, are discussed in this section.
Irreducible-background predictions are obtained directly from simulations, but normalisation of diboson and rare top-quark
processes is obtained from the fit.
To avoid double-counting between background estimates derived from MC simulation and
the data-driven reducible-background
predictions, a specific check is performed: events from irreducible-background MC samples
are considered only if generator-level prompt leptons can be matched to their reconstructed counterparts.
There are two sources of reducible background: events in which at least one lepton charge is misidentified and FNP leptons. The former source is
relevant only in the Q2 four-lepton signal region where an event
in the Q0 category can migrate to the Q2 category if the charge of one of the leptons is mismeasured.
Charge misidentification for
muons is well described by the simulation and occurs only for
high-momentum muons where detector misalignments degrade the muon
momentum resolution. Electrons, however, are more susceptible to charge misidentification, owing to a combination of effects from bremsstrahlung and photon conversions that might not
be adequately described by the detector simulation.
Correction factors (scale factors) accounting for charge misreconstruction are
applied to the simulated background events. They
are derived by comparing the charge misidentification probability
measured in data with the one in simulation and are parameterised as
functions of \pt and $\eta$. The charge
misidentification probability is extracted
by performing a likelihood fit in a dedicated \Zee data sample, as
described in Ref.~\cite{EGAM-2018-01}. The charge misidentification
probability increases from ${\sim}10^{-4}$ to
${\sim}10^{-1}$ with increasing \pt and $|\eta|$.
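As an illustrative sketch (the symbol $s$ and the ratio form below are chosen here for exposition, not taken from the references), the scale factor applied to a simulated electron with a mismeasured charge is
\begin{equation*}
  s(\pt, \eta) = \frac{\epsilon_{\mathrm{misid}}^{\mathrm{data}}(\pt, \eta)}{\epsilon_{\mathrm{misid}}^{\mathrm{MC}}(\pt, \eta)} \,,
\end{equation*}
where $\epsilon_{\mathrm{misid}}$ denotes the charge misidentification probability measured in data or in simulation in bins of \pt and $\eta$.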
FNP leptons are produced by secondary decays
of light- or heavy-flavour mesons into light leptons embedded within
jets. Although the \(b\)-jet veto and lepton isolation significantly
reduce the number of FNP leptons, a fraction
still satisfy the selection requirements.
Significant components of FNP electrons arise from photon conversions and from jets
that are wrongly reconstructed as electrons.
MC samples are not used to estimate these background sources because
the simulation
of jet production and hadronisation has large intrinsic uncertainties.
Instead, the FNP background
is estimated with a data-driven approach, known as the \textit{fake factor} (FF)
method, as described in Ref.~\cite{EXOT-2012-25}.
The FF is measured in dedicated
FNP-enriched regions where the events satisfy single-lepton triggers
without isolation requirements and have low \met,
no \(b\)-jets, and only one reconstructed lepton that
satisfies the lepton identification preselection described in
\Sect{\ref{sec:regions}}. In these regions, two kinds of leptons are identified: $\mathcal{L}$ leptons, which satisfy looser object selection criteria than the ones used for the leptons considered in the analysis regions, and $\mathcal{T}$ leptons, which satisfy the full analysis-level criteria.
Electron and muon FFs are
then defined as the ratio of the number of $\mathcal{T}$ leptons to the number
of $\mathcal{L}$ leptons, and are parameterised as functions of \pt and
$\eta$. The FNP background
is then estimated in the SRs by applying the FF as
an event weight
in a template region defined with the same selection criteria as the
corresponding SRs, except that at least one of the leptons must be an $\mathcal{L}$ lepton but not a $\mathcal{T}$ one.
The prompt-lepton contribution is subtracted from the template region by
using the irreducible-background MC samples to estimate the prompt-lepton
contamination in the adjacent regions~\cite{EGAM-2018-01}.
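In a compact single-lepton form (with notation chosen here for illustration; in practice the factors are applied per event as weights),
\begin{equation*}
  F(\pt, \eta) = \frac{N_{\mathcal{T}}(\pt, \eta)}{N_{\mathcal{L}}(\pt, \eta)} \,, \qquad
  N_{\mathrm{FNP}}^{\mathrm{SR}} = F \times \left( N_{\mathrm{data}}^{\mathrm{template}} - N_{\mathrm{prompt}}^{\mathrm{template}} \right) ,
\end{equation*}
where the template region is defined as above and the prompt contribution is estimated from the irreducible-background simulation.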
The Fake-VR, as defined in \Tab{\ref{tab:3l_regions}}, is used to validate
the data-driven FNP-lepton estimate.
\section{Systematic uncertainties}
\label{sec:systematics}
Uncertainties affect several aspects of this analysis. Experimental
uncertainties related to the trigger selection, lepton reconstruction,
identification, momentum measurement and isolation selection affect
both the global selection efficiency and the shape of the kinematic
distributions used in the fit. The main contributions come from the electron
selection efficiency and the sagitta resolution of the muon spectrometer.
Uncertainties are estimated mainly from $\Zboson\rightarrow \ell \ell$
and $\jpsi \rightarrow \ell \ell$ processes~\cite{EGAM-2018-01,MUON-2018-03}.
Uncertainties in the jet energy scale and
resolution are evaluated from MC simulations and data, using
multi-jet, \Zjets
and $\gamma$\,+\,jets events~\cite{ATLAS:2020cli}; they are estimated to be
less than 2\% in the range of jet transverse momentum of interest and
affect both the selection efficiency measurement and the kinematic
distributions used in the fit.
Uncertainties in the
\btag efficiencies are evaluated from data using dileptonic \ttbar events
\cite{FTAG-2018-01}. They
are estimated to range from 8\% at low momentum to 1\% at high momentum.
These uncertainties affect the analysis region selection
efficiencies. The \MET measurement uncertainties are estimated from data-to-MC
comparison in \Zmm events without jets
as described in \Refn{\cite{ATLAS-CONF-2018-023}}. The \MET uncertainties affect
both the selection efficiencies and the kinematic distributions used in the
fit. The charge misidentification uncertainty
affects only the Q2 analysis regions. The uncertainty in the
charge-misidentification scale factor is estimated to be less
than 10\% from a comparison between same-sign dielectron data and MC events,
with the same electron selection as used in this analysis, and
with $|m_{ee}-m_\Zboson|<\SI{10}{\GeV}$ as described in
\Refn{\cite{EGAM-2018-01}}. The uncertainty in the \pileup simulation,
derived from a comparison of data with simulation, is also taken
into account~\cite{ATLAS-CONF-2014-018}. The limited size of the MC
samples is taken into account as an additional uncertainty.
The FNP background uncertainty comes from the modelling and normalisation
of the prompt-lepton contribution subtracted in the FF estimation. In addition, the composition of the
FNP background is varied by selecting slightly modified FNP-enriched regions
in which the FF is measured. Variations of the FNP-enriched regions
are, for example, obtained by varying the jet multiplicity
requirement and the \MET
selection.
The resulting uncertainty in the FF depends on the lepton momentum and
pseudorapidity and ranges from 5\% to 40\% for electrons and from
10\% to 30\% for muons.
Theoretical uncertainties affect both the signal and background
predictions. For both, the uncertainties from missing higher orders
are evaluated by independently varying the
QCD factorisation and renormalisation scales in the matrix element
by up to a factor of two~\cite{Bothmann:2016nao}.
The PDF uncertainties are evaluated using the LHAPDF
toolkit~\cite{Buckley:2014ana} and the PDF4LHC prescription~\cite{Butterworth:2015oua}.
An additional
uncertainty of 10\% is added to the diboson cross-section to take into
account variations in the level of data-to-MC agreement for \VV
processes in different jet multiplicity regions.
For rare top-quark backgrounds, uncertainties in the
$\ttbar \Wboson $ cross-section are evaluated to be $\pm 50\%$, while
for the $\ttbar \Hboson$ cross-section the uncertainty from varying the
QCD factorisation and renormalisation scales is $^{+5.8}_{-9.2}\%$, with another $\pm 3.6\%$
from PDF+\alphas variations.
Since the yields of the rare top-quark and diboson backgrounds are derived from
the likelihood fit to the data in the CRs, the systematic variations
have little impact on the final yields of the background predictions
in the CR and SR.
\section{Statistical analysis and results}
\label{sec:results}
The \textsc{HistFitter}~\cite{Baak:2014wma} statistical package is
used to fit the predictions to the data in the CRs and SRs. The fit considers the $m_{\mathrm{T},3\ell}$ and $\HT+\met$ distributions for the three- and four-lepton channels, respectively. Lower limits on the heavy-lepton mass are calculated at 95\% confidence level (CL) with a test statistic based on the binned profile likelihood in the asymptotic approximation~\cite{Cowan:2010js}, whose validity is tested using a pseudo-experiment approach. The binning is chosen to
optimise the sensitivity to signal. The various
components of the background
predictions are validated in the corresponding VRs.
Background and signal contributions are modelled by a product of independent Poisson
probability density functions representing the
likelihood of the fit. Systematic uncertainties are modelled by Gaussian probability density functions centred on
the pre-fit prediction of the nuisance parameters, with widths
that correspond to the magnitudes of these uncertainties.
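Schematically, for bins $i$ with observed yields $n_i$, expected signal and background yields $s_i$ and $b_i$, signal-strength parameter $\mu$ and nuisance parameters $\theta_j$ with nominal values $\theta_j^{0}$ and uncertainties $\sigma_j$ (an illustrative form, not the exact implementation),
\begin{equation*}
  \mathcal{L}(\mu, \vec{\theta}) = \prod_{i} \mathrm{Pois}\!\left( n_i \mid \mu\, s_i(\vec{\theta}) + b_i(\vec{\theta}) \right) \, \prod_{j} \mathrm{Gauss}\!\left( \theta_j \mid \theta_j^{0}, \sigma_j \right) .
\end{equation*}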
Four different fitting procedures are
performed: the three-lepton channel on its own, the four-lepton
channel on its own, the three- and four-lepton channels combined, and finally the two-, three- and four-lepton channels
combined, where results for the two-lepton channel are taken from
Ref.~\cite{EXOT-2018-33}. All the contributions from the experimental
uncertainties in the lepton, jet and
\MET selections and reconstruction, pile-up simulation, background
simulation,
theoretical calculations and
irreducible-background estimates are considered correlated among the
different multiplicity channels in multi-channel fits.
After a background-only likelihood fit in the CRs, the three- and four-lepton channel
diboson normalisation factors are found to be $0.80\pm 0.09$ and
$1.08\pm0.03$, respectively. The normalisation and shape
of the $m_{\mathrm{T},3\ell}$ and $\HT+\met$ distributions are
validated in the ZL DB-VR and Q0 DB-VR, respectively, by comparing data and SM
expectations after the fit.
The rare top-quark contribution normalisation is estimated to be
$1.3\pm0.2$ in the four-lepton channel
Q0 RT-CR and is then extrapolated to all the SRs. The
background modelling is validated in the ZL RT-VR and
Q0 RT-VR for the three- and four-lepton channels,
respectively.
Event yields after the likelihood fit for the analysis regions in the three- and four-lepton channels
are shown in \Fig{\ref{fig:34Regions}}.
Good agreement within
statistical and systematic uncertainties
between data and SM predictions is observed in all regions,
demonstrating the validity of the background
estimation procedure as shown in \Tab{\ref{tab:TriFourLeptonSR}}.
\begin{table}
\begin{center}
\caption{Observed data and background yields in the three- and four-lepton signal regions after the background-only fit in the combined three- and four-lepton regions; the combined statistical and systematic uncertainties are reported.
}
\vspace{0.25cm}{\footnotesize
\setlength{\tabcolsep}{3.5pt}
\begin{tabular}{lS[table-format=2.3]
@{\,}@{$\pm$}@{\,} S[table-format=1.3] S[table-format=2.2]
@{\,}@{$\pm$}@{\,} S[table-format=1.2] S[table-format=2.2]
@{\,}@{$\pm$}@{\,} S[table-format=1.2] S[table-format=2.2]
@{\,}@{$\pm$}@{\,} S[table-format=1.2] S[table-format=2.4]
@{\,}@{$\pm$}@{\,} S[table-format=1.4] S[table-format=1.4]}
\toprule
& \multicolumn{6}{c}{Three-lepton signal regions} & \multicolumn{4}{c}{Four-lepton signal regions} \\
\midrule
& \multicolumn{2}{c}{ZL SR} & \multicolumn{2}{c}{ZLveto SR} & \multicolumn{2}{c}{JNLow SR} & \multicolumn{2}{c}{Q0 SR} & \multicolumn{2}{c}{Q2 SR} \\
\midrule
Data & \multicolumn{2}{l}{~~7} & \multicolumn{2}{l}{16} & \multicolumn{2}{l}{25} & \multicolumn{2}{l}{25} & \multicolumn{2}{l}{17} \\
\midrule
Total background & 6.25 & 0.52 & 25.2 & 2.8 & 24.4 & 2.3 & 19.0 & 1.6 & 10.3 & 1.9 \\
\midrule
Diboson & 2.62 & 0.27 & 7.64 & 0.95 & 18.0 & 2.1 & 7.70 & 0.78 & 8.5 & 1.6 \\
Rare top & 3.2 & 0.5 & 11.2 & 1.7 & 1.82 & 0.32 & 9.4 & 1.4 & 1.63 & 0.35 \\
Fakes & 0.29 & 0.05 & 5.98 & 0.85 & 4.3 & 0.5 & 1.37 & 0.36 & 0.07 & 0.37 \\
Other & 0.113 & 0.015 & 0.36 & 0.12 & 0.33 & 0.03 & 0.49 & 0.04 & 0.1001 & 0.0098 \\
\bottomrule
\end{tabular}
}
\label{tab:TriFourLeptonSR}
\end{center}
\end{table}
\begin{figure}[htpb]
\centering
\includegraphics[width=\textwidth]{fig_02.pdf}
\caption{Observed and expected event yields in the CRs, VRs and SRs for the three- and four-lepton channels after the fit procedure described in the text. \textit{Diboson}
indicates background from diboson processes. \textit{Rare
top} indicates background from \( \ttbar + V \) and \(t \Wboson \Zboson\)
processes. \textit{FNP} includes the background from fake or non-prompt leptons.
\textit{Other} indicates all the other considered backgrounds that
contribute less than 2\%. The hatched bands
include systematic uncertainties with the correlations between
various background sources taken into account.
The lower panel shows the ratio of the observed data to the predicted
SM background after the likelihood fit. }
\label{fig:34Regions}
\end{figure}
The post-fit distributions of the $m_{\mathrm{T},3\ell}$ and $\HT+\met$ variables used in the likelihood
fit in the three- and four-lepton channels are shown in
\Figs{\ref{fig:Post_Signal_3L}}{\ref{fig:Post_Signal_4L}}, respectively, for the
signal regions, with the binning used in the fit.
After the fit,
the compatibility of the data and the expected background is assessed.
Good agreement is observed. The \textit{p}-values\footnote{The \(p\)-value is defined as
the probability of observing an excess at least as large as the one observed in data, in the absence of signal.},
evaluated using the distributions in \Figs{\ref{fig:Post_Signal_3L}}{\ref{fig:Post_Signal_4L}},
are 0.38, 0.090 and 0.25 for
the three-, four- and the combined three-and-four lepton channels, respectively. \Fig{\ref{fig:crs_lep}} shows the distributions of these discriminating variables in some of the control and validation regions.
\begin{figure}[tbp]
\centering
\subfloat[]{
\includegraphics[width=0.45\textwidth]{fig_03a.pdf}
\label{fig:Post_Signal_3L_ZL}
}
\subfloat[]{
\includegraphics[width=0.45\textwidth]{fig_03b.pdf}
\label{fig:Post_Signal_3L_ZLVeto}
}\\
\subfloat[]{
\includegraphics[width=0.45\textwidth]{fig_03c.pdf}
\label{fig:Post_Signal_3L_JNLow}
}
\caption{Distributions of $m_{\mathrm{T},3\ell}$ in the three-lepton signal
regions after the combined fit:
\protect\subref{fig:Post_Signal_3L_ZL} the ZL signal region,
\protect\subref{fig:Post_Signal_3L_ZLVeto} the ZLveto signal
region and
\protect\subref{fig:Post_Signal_3L_JNLow} the JNLow signal
region. The coloured lines
correspond to signal samples with the \Nz and \Lpm mass values stated in
the legend. The hatched bands include all statistical and systematic
post-fit uncertainties with the correlations between various background
sources taken into account. The lower panel shows the
ratio of the observed data to the predicted SM background. The
last bin in the distributions contains the overflows.
}
\label{fig:Post_Signal_3L}
\end{figure}
\begin{figure}[tbp]
\centering
\subfloat[]{
\includegraphics[width=0.45\textwidth]{fig_04a.pdf}
\label{fig:Post_Signal_4L_HH}
}
\subfloat[]{
\includegraphics[width=0.45\textwidth]{fig_04b.pdf}
\label{fig:Post_Signal_4L_OD}
}
\caption{Distributions of $\HT + \met$ in the four-lepton signal
regions after the combined fit:
\protect\subref{fig:Post_Signal_4L_HH} the Q0 signal region where
the sum of lepton charges is zero and
\protect\subref{fig:Post_Signal_4L_OD} the Q2 signal region where
the sum of lepton charges is $\pm 2$. The coloured lines
correspond to signal samples with the \Nz and \Lpm mass values stated in
the legend. The hatched bands include all statistical and systematic
post-fit uncertainties with the correlations between various background
sources taken into account.
The lower panel shows the
ratio of the observed data to the predicted SM background. The
last bin in the distributions contains the overflows.
}
\label{fig:Post_Signal_4L}
\end{figure}
\begin{figure}[tbp]
\centering
\subfloat[]{
\includegraphics[width=0.40\textwidth]{fig_05a.pdf}
\label{fig:zlcr}
}
\subfloat[]{
\includegraphics[width=0.40\textwidth]{fig_05b.pdf}
\label{fig:fvr}
}
\\
\subfloat[]{
\includegraphics[width=0.40\textwidth]{fig_05c.pdf}
\label{fig:q0dbcr}
}
\subfloat[]{
\includegraphics[width=0.40\textwidth]{fig_05d.pdf}
\label{fig:rtcr}
}
\caption{Distributions of $m_{\mathrm{T},3\ell}$ in the three-lepton control and validation regions
\protect\subref{fig:zlcr} ZL-CR and \protect\subref{fig:fvr} fake-VR,
and of $\HT+\met$ in the four-lepton control regions \protect\subref{fig:q0dbcr} Q0 DB-CR
and \protect\subref{fig:rtcr} Q0 RT-CR after the combined fit.
The simulated signal contribution was found to
be below 2\% and is not shown in the figure. The hatched
bands include all statistical and systematic
post-fit uncertainties with the correlations between various background
sources taken into account. The lower panel shows the
ratio of the observed data to the predicted SM background. The
last bin in the distributions contains the overflow.
}
\label{fig:crs_lep}
\end{figure}
The relative uncertainties in the background yield estimates are
shown in \Fig{\ref{fig:34RegionsSys}} for all analysis
regions in the three- and four-lepton channels. The dominant uncertainty in the SRs, and in most of the other regions, is
the statistical uncertainty of the data, which varies
from $\SI{20}{\%}$ to $\SI{37}{\%}$ depending on the signal region. By comparison, the MC statistical uncertainty varies from $\SI{2}{\%}$ to $\SI{7}{\%}$.
In the Q2~SR an uncertainty contribution close to the data statistical uncertainty comes from the charge misidentification background, considered in the \textit{Experimental} category.
\begin{figure}[htpb]
\centering
\includegraphics[width=\textwidth]{fig_06.pdf}
\caption{Relative contributions from different sources of statistical
and systematic uncertainty to the total background yield estimates
after the fit. \textit{Experimental} uncertainties are related to
the lepton, jet and \MET selection and reconstruction, and also to lepton
charge misidentification.
\textit{FNP} includes the fake or non-prompt leptons contribution. \textit{Luminosity} is related to the
luminosity uncertainty that affects the background simulation
yields. \textit{Theory} includes theoretical uncertainties
associated with the PDF, \alphas, and renormalisation
and factorisation scales. \textit{Normalisation} is related to the
diboson and rare top-quark normalisation factors extracted by the
likelihood fit. Systematic uncertainties are calculated by changing each nuisance parameter from its fit value by one standard deviation,
keeping all the other parameters at their central values, and comparing
the resulting event yield with the nominal yield.
Individual uncertainties can be correlated within each region, and do not necessarily add
in quadrature to the total background uncertainty, which is shown as
\textit{Total MC uncertainty (correlated)}. \textit{Data Stat.\ uncertainty} refers to the statistical uncertainty of the collected data. }
\label{fig:34RegionsSys}
\end{figure}
In the absence of a significant deviation from SM expectations,
\SI{95}{\%} CL upper limits on the signal
production cross-section are derived using the \(
\mathrm{CL}_\mathrm{s} \)
method~\cite{Read:2002hq}. The upper limits on the production cross-sections of the \( pp \rightarrow W^{*} \rightarrow \Nz \Lpm \)
and
\( pp \rightarrow Z^{*} \rightarrow \Lpm \Lmp \) processes
are evaluated as a function of the heavy-lepton
mass, using the three- and four-lepton channels with the democratic $\mathcal{B}_\ell$ scenario.
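The exclusion logic of the \( \mathrm{CL}_\mathrm{s} \) construction can be sketched as follows (an illustrative fragment with hypothetical function names, not the statistical machinery actually used):

```python
def cls_value(cl_sb, cl_b):
    """CLs = CL_{s+b} / CL_b: the p-value of the signal-plus-background
    hypothesis divided by that of the background-only hypothesis.  The
    ratio protects against excluding signal hypotheses to which the
    search has little sensitivity (Read's CLs prescription)."""
    return cl_sb / cl_b

def excluded_at_95cl(cl_sb, cl_b):
    # a cross-section hypothesis is excluded at 95% CL when CLs < 0.05
    return cls_value(cl_sb, cl_b) < 0.05
```

Scanning the signal cross-section until the hypothesis is no longer excluded yields the upper limit, and intersecting that limit with the theoretical cross-section curve gives the mass limit quoted below.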
By comparing the upper limits on the cross-section with the
theoretical cross-section calculation as a function of the
heavy-lepton mass, a lower limit on the
mass of the \typeIIIseesaw heavy leptons \Nz and \Lpm is derived.
The observed (expected) exclusion
limit is \SI{870}{\GeV} ($900^{+80}_{-80}$\,GeV).
The signal hypothesis in the three- and four-lepton channel result is also
tested in a combined fit with the
similar \typeIIIseesaw search regions in the two-lepton
channel~\cite{EXOT-2018-33}. All the CRs, VRs and SRs in the various lepton
multiplicity regions are statistically independent. The reconstruction algorithms and working points are the same in all cases, and the
FNP and lepton charge misidentification backgrounds are estimated using the same method.
The parameter of interest, namely the
number of signal events,
and common systematic
uncertainties are treated as correlated. Normalisations of the diboson, \ttbar (for the
two-lepton multiplicity region) and rare top-quark (for the three- and four-lepton multiplicity regions) backgrounds are treated as
uncorrelated since they
account for different physics processes and different acceptances in
each final state. The three-lepton channel's limit dominates in the high
heavy-lepton mass region, while the two-lepton channel dominates in the
lower mass region.
The combined observed (expected) exclusion limits on the total cross-section are shown in \Fig{\ref{fig:exclusion_234L}}, excluding heavy-lepton masses lower
than \SI{910}{\GeV} ($960^{+90}_{-80}$\,GeV) at 95\% CL. The observed (expected) exclusion limits on the total cross-section restricted to the three-lepton and four-lepton channels are shown in \Figs{\ref{fig:exclusion_3L}}{\ref{fig:exclusion_4L}}, respectively.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.9\textwidth]{fig_07.pdf}
\caption{Expected and observed
exclusion limits in the two-lepton channel
(from Ref.~\cite{EXOT-2018-33}), the three- and four-lepton channels, and
the two-, three- and four-lepton channels for the
\typeIIIseesaw process with the corresponding
one- and two-standard-deviation uncertainty bands, showing the \SI{95}{\%} CL upper limit
on the cross-section.
The theoretical signal cross-section prediction,
given by the NLO calculation~\cite{Fuks:2012qx,Fuks:2013vua},
with its corresponding uncertainty band is also shown.
}
\label{fig:exclusion_234L}
\end{figure}
\begin{figure}[tbp]
\centering
\includegraphics[width=0.9\textwidth]{fig_08.pdf}
\caption{Expected and observed \SI{95}{\%} \( \mathrm{CL}_\mathrm{s} \)
exclusion limits in the three-lepton channel for the \typeIIIseesaw process with the corresponding
one- and two-standard-deviation bands, showing the \SI{95}{\%} CL upper limit
on the cross-section.
The theoretical signal cross-section prediction,
given by the NLO calculation~\cite{Fuks:2012qx,Fuks:2013vua}, is shown
with the corresponding uncertainty bands for the expected limit.
}
\label{fig:exclusion_3L}
\end{figure}
\begin{figure}[tbp]
\centering
\includegraphics[width=0.9\textwidth]{fig_09.pdf}
\caption{Expected and observed \SI{95}{\%} \( \mathrm{CL}_\mathrm{s} \)
exclusion limits in the four-lepton channel for the \typeIIIseesaw process with the corresponding
one- and two-standard-deviation bands, showing the \SI{95}{\%} CL upper limit
on the cross-section.
The theoretical signal cross-section prediction,
given by the NLO calculation~\cite{Fuks:2012qx,Fuks:2013vua}, is shown
with the corresponding uncertainty bands for the expected limit.
}
\label{fig:exclusion_4L}
\end{figure}
\FloatBarrier
\section{Conclusion}
\label{sec:conclusion}
ATLAS has searched for pair-produced heavy leptons predicted by the
\typeIIIseesaw model in \SI{139}{\ifb} of data from proton--proton collisions at
\( \sqrt{s} = \SI{13}{\TeV} \), recorded during the 2015--2018
data-taking period.
A lower limit on the
mass of the \typeIIIseesaw heavy leptons \Nz and \Lpm is derived for final states with three or four light leptons. No significant deviation from SM expectations is observed. The observed (expected) exclusion
limit on the heavy-lepton mass is \SI{870}{\GeV} ($900^{+80}_{-80}$\,GeV) at the 95\% CL.
This result is combined with the result of
the two-lepton analysis, which used very similar experimental
methodologies and treatment of statistics.
In the full combination, heavy leptons with masses below
\SI{910}{\GeV} are excluded at the
\SI{95}{\%} CL, while the expected lower limit on the mass is
\SI[parse-numbers=false]{960^{+90}_{-80}}{\GeV}. This is the
most stringent limit to date on the \typeIIIseesaw model from events with light leptons at LHC\@.
\clearpage
\section*{Acknowledgements}
We thank CERN for the very successful operation of the LHC, as well as the
support staff from our institutions without whom ATLAS could not be
operated efficiently.
We acknowledge the support of
ANPCyT, Argentina;
YerPhI, Armenia;
ARC, Australia;
BMWFW and FWF, Austria;
ANAS, Azerbaijan;
SSTC, Belarus;
CNPq and FAPESP, Brazil;
NSERC, NRC and CFI, Canada;
CERN;
ANID, Chile;
CAS, MOST and NSFC, China;
Minciencias, Colombia;
MEYS CR, Czech Republic;
DNRF and DNSRC, Denmark;
IN2P3-CNRS and CEA-DRF/IRFU, France;
SRNSFG, Georgia;
BMBF, HGF and MPG, Germany;
GSRI, Greece;
RGC and Hong Kong SAR, China;
ISF and Benoziyo Center, Israel;
INFN, Italy;
MEXT and JSPS, Japan;
CNRST, Morocco;
NWO, Netherlands;
RCN, Norway;
MEiN, Poland;
FCT, Portugal;
MNE/IFA, Romania;
JINR;
MES of Russia and NRC KI, Russian Federation;
MESTD, Serbia;
MSSR, Slovakia;
ARRS and MIZ\v{S}, Slovenia;
DSI/NRF, South Africa;
MICINN, Spain;
SRC and Wallenberg Foundation, Sweden;
SERI, SNSF and Cantons of Bern and Geneva, Switzerland;
MOST, Taiwan;
TENMAK, T\"urkiye;
STFC, United Kingdom;
DOE and NSF, United States of America.
In addition, individual groups and members have received support from
BCKDF, CANARIE, Compute Canada and CRC, Canada;
COST, ERC, ERDF, Horizon 2020 and Marie Sk{\l}odowska-Curie Actions, European Union;
Investissements d'Avenir Labex, Investissements d'Avenir Idex and ANR, France;
DFG and AvH Foundation, Germany;
Herakleitos, Thales and Aristeia programmes co-financed by EU-ESF and the Greek NSRF, Greece;
BSF-NSF and GIF, Israel;
Norwegian Financial Mechanism 2014-2021, Norway;
NCN and NAWA, Poland;
La Caixa Banking Foundation, CERCA Programme Generalitat de Catalunya and PROMETEO and GenT Programmes Generalitat Valenciana, Spain;
G\"{o}ran Gustafssons Stiftelse, Sweden;
The Royal Society and Leverhulme Trust, United Kingdom.
The crucial computing support from all WLCG partners is acknowledged gratefully, in particular from CERN, the ATLAS Tier-1 facilities at TRIUMF (Canada), NDGF (Denmark, Norway, Sweden), CC-IN2P3 (France), KIT/GridKA (Germany), INFN-CNAF (Italy), NL-T1 (Netherlands), PIC (Spain), ASGC (Taiwan), RAL (UK) and BNL (USA), the Tier-2 facilities worldwide and large non-WLCG resource providers. Major contributors of computing resources are listed in Ref.~\cite{ATL-SOFT-PUB-2021-003}.
\clearpage
\printbibliography
\clearpage
\input{atlas_authlist}
\end{document}
\section{Introduction}
Aperiodic subshifts over finite alphabets play a vital role in various branches of mathematics, physics, and computer science. The theory of aperiodic order is a relatively young field of mathematics, which has attracted considerable attention in recent years, see for instance \cite{AR:1991,BG:2013,BaMo:2000,BFMS:2002,FGJ:2015,KLS:2011,KS:2012,Mo:1997,MH:1940,P:1998,S:2015}. It has grown rapidly over the past three decades; on the one hand, due to the experimental discovery of physical solid substances, called quasicrystals, exhibiting such features \cite{INF:1985,SBGC:1984}; and on the other hand, due to intrinsic mathematical interest in describing the very border between crystallinity and aperiodicity. While there is no axiomatic framework for aperiodic order, various types of order conditions have been studied, see \cite{AR:1991,BG:2013,D:2000,BFMS:2002,FGJ:2015,HKW:2015,KS:2012,Lag:1999,Lag:2002,Lag:2003,MH:1940} and references therein. In particular, through the work of Durand \cite{D:2000}, and Lagarias and Pleasants \cite{Lag:2003} it has become apparent that key features of aperiodic minimal subshifts (and their higher-dimensional analogues) to be studied are linearly repetitive, repulsive and power free. Generalisations and extensions of these characteristics, namely $\alpha$-repetitive, $\alpha$-repulsive and $\alpha$-finite ($\alpha \geq 1$), were recently introduced in \cite{GKMSS16}. Indeed, we have that $1$-repetitive is equivalent to aperiodic and linearly repetitive, that $1$-repulsive implies repulsive, and that $1$-finite is equivalent to power free.
For $\alpha \geq 1$, a subshift $Y$ which is $\alpha$-repetitive roughly means that the maximum return time (with respect to the left-shift map) of an infinite word in $Y$ to a cylinder set $U \subset Y$ generated by a finite word $u$ is of the order $\lvert u \rvert^\alpha$; $\alpha$-repulsive loosely means that if $W$ is a factor of an infinite word in $Y$ and if $w \neq W$ is a prefix and a suffix of $W$, then the overlap of these two appearances of $w$ in $W$ is at most of the order $\lvert w \rvert - \lvert w \rvert^{1/\alpha}$; and $\alpha$-finite roughly means that if $n$ is the largest natural number such that the $n$-fold concatenation of a finite word $u$ is a factor of an infinite word in $Y$, then $n$ is at most of the order $\lvert u \rvert^{\alpha -1}$.
In \cite{GKMSS16,KS:2012}, for Sturmian subshifts with slope $\theta$ and for $\alpha \geq 1$, it was shown that the characteristics $\alpha$-repetitive, $\alpha$-repulsive and $\alpha$-finite are equivalent. There, links between the regularity of spectral metrics built from noncommutative representations (spectral triples), the aperiodic behaviour of the subshift and the Diophantine properties of $\theta$ were obtained.
Here, we address the following question. For an arbitrary subshift and for $\alpha \geq 1$, which of the order conditions $\alpha$-repetitive, $\alpha$-repulsive and $\alpha$-finite are equivalent?
We prove that, for $\alpha \geq 1$, a subshift is $\alpha$-repulsive if and only if it is $\alpha$-finite (\Cref{thm:equivalence}). However, for $\alpha > 1$, we establish that $\alpha$-repetitive is not necessarily equivalent to $\alpha$-repulsive, and hence, nor $\alpha$-finite (\Cref{thm:G-alpha-free,thm:G-alpha-repetative}). This latter result is provided by a class of subshifts stemming from Grigorchuk's infinite $2$-group \mbox{$G$ -- the} first known group of intermediate growth introduced by Grigorchuk \cite{GRIGO:1984,G:84b} (see also \cite{G:84a}, where a general class of groups, denoted by $G_{\omega}$, of intermediate growth is introduced). They have been studied, for instance, by Bon \cite{Bon201567}, Grigorchuk, Lenz and Nagnibeda \cite{GLN:16,GLN:16b}, and Lenz and Sell \cite{LenzSell:2016}. These subshifts are determined by an infinite sequence $l = (l_{i})_{i \in \mathbb{N}}$ of natural numbers and we refer to them as $l$-Grigorchuk subshifts.
We show that $l$-Grigorchuk subshifts are aperiodic and minimal (\Cref{prop:minimality,cor:Gri-aperiodic}). Additionally, we establish necessary and sufficient conditions for these new subshifts to be $\alpha$-repetitive and $\alpha$-repulsive, and hence, $\alpha$-finite (\Cref{thm:G-alpha-free,thm:G-alpha-repetative}). More precisely, we prove that an $l$-Grigorchuk subshift is $\alpha$-repulsive (and hence $\alpha$-finite) if and only if
\begin{align*}
\limsup_{n \to \infty} \left\lvert l_{n+1} + (1 - \alpha) \sum_{i = 1}^{n} l_{i} \right\rvert <\infty,
\end{align*}
and that an $l$-Grigorchuk subshift is $\alpha$-repetitive if and only if
\begin{align*}
\limsup_{n \to \infty} \left\lvert l_{n+2} + l_{n+1} + (1 - \alpha) \sum_{i = 1}^{n} l_{i} \right\rvert < \infty.
\end{align*}
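Both conditions are directly checkable for concrete sequences. The following sketch (purely illustrative; the function name is ours) evaluates the first quantity and confirms that, for instance, the constant sequence $l_{i} = 1$ satisfies it precisely for $\alpha = 1$:

```python
def repulsive_quantity(l, alpha, n):
    """|l_{n+1} + (1 - alpha) * sum_{i=1}^{n} l_i| for a 0-indexed list l,
    so that l[i] plays the role of l_{i+1}."""
    return abs(l[n] + (1.0 - alpha) * sum(l[:n]))

l = [1] * 101  # the constant sequence l_i = 1
# the quantity is constant (hence bounded) for alpha = 1,
# and grows linearly in n for any alpha != 1
```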
We also obtain an explicit formula in terms of the sequence $l$ for the complexity function of an \mbox{$l$-Grigorchuk} subshift (\Cref{thm:complexity}), from which we are able to deduce that an \mbox{$l$-Grigorchuk} subshift is uniquely ergodic (\Cref{Cor:unique_ergodic}). Indeed, we show that there exist at least one and at most two right special words per length. We would like to emphasise that, independently, Lenz and Sell \cite{LenzSell:2016} have obtained an explicit formula for the repetitive and complexity functions of an $l$-Grigorchuk subshift. Moreover, they have also computed an explicit formula for the palindromic complexity function. Further, in the case that $l$ is the constant one sequence, results concerning the complexity function have been obtained in \cite{GLN:16,GLN:16b}.
When $l$ is the constant one sequence, the resulting $l$-Grigorchuk subshift is intimately related to the Lysenok presentation of Grigorchuk's infinite $2$-group $G$. By studying this subshift, the spectral type of the Laplacian on the Schreier graphs describing the action of Grigorchuk's infinite $2$-group $G$ on the boundary of the infinite binary rooted tree was recently determined \cite{GLN:16,GLN:16b}, and it has been shown to differ between the isotropic and anisotropic cases. In fact, the spectrum is shown to be a Cantor set of Lebesgue measure zero in the anisotropic case, whereas it consists of one or two intervals in the isotropic case. Here (\Cref{sec:sec4}), we implicitly associate a group to a given $l$-Grigorchuk subshift; we believe that investigating the properties of such groups, and determining whether the results of \cite{GLN:16,GLN:16b} can be extended to encompass our setting, would be a worthwhile and fruitful venture.
\subsection*{Outline} In the next section, we present key definitions and results concerning subshifts and define $\alpha$-repetitive, $\alpha$-repulsive and $\alpha$-finite. In \Cref{sec:GR} we state and prove the equivalence of $\alpha$-repulsive and $\alpha$-finite for arbitrary subshifts over a finite alphabet. We conclude with \Cref{sec:4}, which is divided into five parts. The first part (\Cref{sec:sec4}) is concerned with introducing and defining $l$-Grigorchuk subshifts as well as stating some of their basic properties. In \Cref{sec:section4.2,sec:section4.3} we provide necessary and sufficient conditions on a sequence $l$ which ensures that the associated $l$-Grigorchuk subshift is $\alpha$-repulsive (and hence $\alpha$-finite), and $\alpha$-repetitive respectively; after which, in \Cref{sec:examples}, we present several examples of sequences $l = (l_{n})_{n \in \mathbb{N}}$ for which the associated $l$-Grigorchuk subshift is \mbox{$\alpha$-repetitive}, and $\alpha$-repulsive (and hence $\alpha$-finite) for specific values of $\alpha$. Here, we also show that if an $l$-Grigorchuk subshift is \mbox{$\alpha$-repulsive} and hence $\alpha$-finite, then it is $\alpha^{2}$-repetitive. In our concluding part, \Cref{sec:section4.4}, we obtain an explicit formula for the complexity function (in terms of the sequence $l$) of an $l$-Grigorchuk subshift from which we deduce that any $l$-Grigorchuk subshift is aperiodic and uniquely ergodic.
\subsection*{Acknowledgements}
The authors would like to thank Daniel Lenz and Daniel Sell for bringing the problem to their attention. The fourth author would like to thank AG Dynamical Systems and Geometry at Universit\"at Bremen, Fakult\"at f\"ur Mathematik und Informatik at Friedrich-Schiller-Universit\"at Jena and Institut f\"ur Mathematik at Universit\"at zu L\"ubeck for hosting him and providing a stimulating research environment while working on this project. The last author is grateful to the Institut f\"ur Mathematik at Universit\"at zu L\"ubeck for providing a stimulating working environment during the writing of this article.
\section{Preliminary Definitions}
Here, we review the key definitions of subshifts and define three notions of aperiodic order ($\alpha$-repetitive, $\alpha$-repulsive and $\alpha$-finite, for a given $\alpha \geq 1$) first introduced for Sturmian subshifts in \cite{GKMSS16}, and which generalise and extend the order conditions often referred to as linearly repetitive, repulsive and power free.
\subsection{Subshifts}\label{sec:subshiftt_intro}
Let $\mathscr{A}$ denote a set of $m \in \mathbb{N}$ symbols called the \textit{alphabet}. For $n \in \mathbb{N}$ we define $\mathscr{A}^{n}$ to be the set of all finite words in the alphabet $\mathscr{A}$ of length $n$, and set
\begin{align*}
\mathscr{A}^{*} \coloneqq \bigcup_{n \in \mathbb{N}_{0}} \mathscr{A}^{n},
\end{align*}
where by convention $\mathscr{A}^{0}$ is the set containing only the \textit{empty word} $\varepsilon$. We denote by $\mathscr{A}^{\mathbb{N}}$ the set of all infinite words over the alphabet $\mathscr{A}$ and equip it with the discrete product topology. The continuous map $\sigma \colon \mathscr{A}^{\mathbb{N}} \to \mathscr{A}^{\mathbb{N}}$ defined by $\sigma( x_{1}, x_{2}, \dots ) \coloneqq ( x_{2}, x_{3}, \dots )$ is called the \textit{left-shift}. A closed set $Y \subseteq \mathscr{A}^{\mathbb{N}}$ which is left-shift invariant (that is $\sigma(Y) = Y$) is referred to as a \emph{subshift} and the tuple $(Y, \sigma)$ forms a dynamical system. For an infinite word $x = (x_{n})_{n \in \mathbb{N}}$ over a finite alphabet $\mathscr{A}$, we set
\begin{align*}
\Omega(x) \coloneqq \overline{\{ \sigma^{k}(x) \colon k \in \mathbb{N}_{0} \}},
\end{align*}
where the closure is taken with respect to the discrete product topology. We call $\Omega(x)$ the \textit{subshift generated by} $x$. For a subshift $Y$, the dynamical system $(Y, \sigma)$ is called \textit{minimal} if for all $y \in Y$ the set $\Omega(y)$ is dense in $Y$. If $Y$ does not contain a periodic element (that is, an element $y$, such that there exists $k \in \mathbb{N}$ with $\sigma^{k}(y) = y$), then we call $Y$ \textit{aperiodic}.
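As a minimal computational illustration of these definitions (the encoding of an infinite word as a map $n \mapsto x_{n}$ is our own device, not part of the formal development), the left-shift simply re-indexes:

```python
def shift(x):
    """Left shift sigma on infinite words represented as maps
    n -> symbol for n >= 1: (sigma x)(n) = x(n + 1)."""
    return lambda n: x(n + 1)

# the 2-periodic word x = (a, b, a, b, ...) satisfies sigma^2(x) = x,
# so any subshift containing x is, by definition, not aperiodic
x = lambda n: 'a' if n % 2 == 1 else 'b'
```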
For $w = (w_{1}, \dots, w_{k})$ and $v = (v_{1}, \dots, v_{n}) \in \mathscr{A}^{*}$, we set $w v \coloneqq (w_{1}, \dots, w_{k}, v_{1}, \dots, v_{n})$, that is the \textit{concatenation} of $w$ and $v$. For $m \in \mathbb{N}$, we denote by $v^{m}$ the $m$-fold concatenation of $v$ with itself, namely
\begin{align*}
v^{m} \coloneqq \underbrace{v v \dots v}_{m - \text{times}}.
\end{align*}
Note that $\mathscr{A}^{*}$ together with the operation of concatenation defines a monoid with identity element $\varepsilon$. The \textit{length} of $v$ is denoted by $\lvert v \rvert$, with $\lvert \varepsilon \rvert = 0$, and, for a natural number $k \leq n$, we set $v\lvert_{k} \coloneqq (v_{1}, v_{2}, \dots, v_{k})$. We say that a word $u \in \mathscr{A}^{*}$ is a \textit{factor} of $v$ if there exists an integer $j$ with $u = \sigma^{j-1}(v)\lvert_{\lvert u \rvert}$. We use the same notation when $v$ is an infinite word. The integer $j$ is referred to as an \textit{occurrence} of $u$ in $v$.
An infinite word $x$ over a finite alphabet $\mathscr{A}$ is called \textit{recurrent} if every factor has infinitely many different occurrences in $x$. A \textit{gap} of a factor $u$ of $x$ is an integer $k$ which is a difference between two successive occurrences of $u$ in $x$. We say that $x$ is \textit{uniformly recurrent} if $x$ is recurrent and for each factor $u$ of $x$ there exists an upper bound for the corresponding gaps. This is equivalent to the minimality of the corresponding subshift generated by $x$, see for instance \cite{Combinatorics:2010}.
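For finite words, occurrences and gaps are directly computable; the following sketch (illustrative code with helper names of our choosing) makes the definitions concrete:

```python
def occurrences(u, v):
    """All occurrences j (1-indexed) of the factor u in the finite word v,
    i.e. indices j with u = sigma^{j-1}(v)|_{|u|}."""
    return [j for j in range(1, len(v) - len(u) + 2)
            if v[j - 1:j - 1 + len(u)] == u]

def gaps(u, v):
    """Differences between successive occurrences of u in v."""
    occ = occurrences(u, v)
    return [b - a for a, b in zip(occ, occ[1:])]
```

On a uniformly recurrent infinite word, every factor would produce an infinite occurrence list whose gaps are bounded.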
The \textit{language $\mathcal{L}(Y)$ of a subshift $Y$} is the set of all factors of the elements of $Y$. Similarly, we define the \textit{language $\mathcal{L}(x)$ of an infinite word $x$} to be the set of all factors of $x$. Notice that the language of the subshift $\Omega(x)$ generated by an infinite word $x$ is equal to the language of $x$, namely $\mathcal{L}(\Omega(x)) = \mathcal{L}(x)$. Following convention, the empty word $\varepsilon$ is assumed to be contained in every language. For $s \geq 2$, we call $w = (w_{1}, \dots, w_{k}) \in \mathcal{L}(Y)$ \textit{$s$-right special} if the cardinality of the set \mbox{$\{ a \in \mathscr{A} \colon (w_{1}, \dots, w_{k}, a) \in \mathcal{L}(Y) \}$} is equal to $s$. A word is called \textit{right special} if it is $s$-right special for some $s \geq 2$.
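Right special words can be enumerated mechanically when the language is approximated by the factors of a finite word; the sketch below (illustrative only) does exactly this:

```python
def factors(v):
    """All non-empty factors of a finite word v -- a finite
    approximation of a language."""
    return {v[i:j] for i in range(len(v)) for j in range(i + 1, len(v) + 1)}

def right_special(language, alphabet):
    """Words w admitting at least two right extensions wa in the language."""
    return {w for w in language
            if sum((w + a) in language for a in alphabet) >= 2}
```

For example, among the factors of the word $aabab$ only the letter $a$ is right special, since both $aa$ and $ab$ occur as factors.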
\subsection{Notions of aperiodic order}\label{sec:aperiodic_order_intro}
We begin by stating the definition of $\alpha$-repetitive, first defined in \cite{GKMSS16} for Sturmian subshifts, which generalises the concept of linearly repetitive.
\begin{definition}\label{defn:rep_fun}
The \textit{repetitive function} $R \colon \mathbb{N} \to \mathbb{N}$ of a subshift $Y$ assigns to $r$ the smallest $r'$ such that any element of $\mathcal{L}(Y)$ with length $r'$ contains (as factors) all elements of $\mathcal{L}(Y)$ with length $r$.
\end{definition}
\begin{definition}\label{defn:repetitive}
Let $\alpha \geq 1$ be given and set
\begin{align*}
R_{\alpha} \coloneqq \limsup_{n \to \infty} \frac{R(n)}{n^{\alpha}}.
\end{align*}
A subshift $Y$ is called \textit{$\alpha$-repetitive} if $R_{\alpha}$ is finite and non-zero.
\end{definition}
\begin{remark}\label{rmk:1=linear}
If $1 \leq \alpha < \beta$ and $0 < R_{\beta} < \infty$, then $R_{\alpha} = \infty$. Similarly, if $0 < R_{\alpha} < \infty$, then $R_{\beta} = 0$. Also, recall that a subshift $Y$ is said to be \textit{linearly repetitive} if and only if there exists a positive constant $K$ such that $R(n) \leq K n$ for all $n \in \mathbb{N}$. Since aperiodicity of a subshift guarantees that the number of words of length $n$ is strictly greater than $n$ for all $n \in \mathbb{N}$, see for instance \cite{F:2002}, this yields that linearly repetitive and $1$-repetitive are equivalent for aperiodic subshifts.
\end{remark}
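Restricted to the factors of a finite word, the repetitive function can be computed by brute force; the following sketch (an illustrative finite-word stand-in for $R$, with names of our choosing) shows the idea:

```python
def repetitive_function(v, r):
    """Smallest r' such that every length-r' factor of the finite word v
    contains all of v's length-r factors -- a finite-word stand-in for R(r).
    Returns None if no such r' is witnessed within v."""
    def facs(w, m):
        return {w[i:i + m] for i in range(len(w) - m + 1)}
    target = facs(v, r)
    for rp in range(r, len(v) + 1):
        if all(target <= facs(w, r) for w in facs(v, rp)):
            return rp
    return None
```

For instance, in $ababab$ every factor of length $2$ already contains both letters, so the stand-in for $R(1)$ equals $2$.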
Next, for $\alpha \geq 1$, we state the definition of $\alpha$-repulsive, which generalises the notion of repulsive. We recall that a subshift $Y$ is called \emph{repulsive} if the value
\begin{align*}
\ell \coloneqq \inf \left\{ \frac{\lvert W \rvert - \lvert w \rvert}{\lvert w \rvert} \colon w, W \in \mathcal{L}(Y), w \; \text{is a prefix and suffix of} \; W, \; \text{and} \; W \neq w \neq \varepsilon \right\}
\end{align*}
is non-zero.
\begin{definition}\label{defn:repulsive}
Let $\alpha \geq 1$ be given. For a subshift $Y$ set
\begin{align*}
\ell_{\alpha} \coloneqq \liminf_{n \to \infty} A_{\alpha, n},
\end{align*}
where for a given natural number $n \geq 2$
\begin{align*}
A_{\alpha, n} \coloneqq \inf \left\{ \frac{\lvert W \rvert - \lvert w \rvert}{\lvert w \rvert^{1/\alpha}} \colon w, W \in \mathcal{L}(Y), w \; \text{is a prefix and suffix of} \; W, \; \lvert W \rvert = n \; \text{and} \; W \neq w \neq \varepsilon \right\},
\end{align*}
and if $\ell_{\alpha}$ is finite and non-zero, then we say that $Y$ is \textit{$\alpha$-repulsive}.
\end{definition}
\begin{remark}\label{rmk:repulsive_unique}
Notice that, if $1 \leq \alpha < \beta$ and $0 < \ell_{\beta} < \infty$, then $\ell_{\alpha} = 0$. Similarly, if $0 < \ell_{\alpha} < \infty$, then $\ell_{\beta} = \infty$.
\end{remark}
The next definition is a generalisation of the notion of a subshift being power free; indeed, being $1$-finite is equivalent to being power free.
\begin{definition}\label{defn:free}
For a subshift $Y$ and for $n \in \mathbb{N}$ set
\begin{align*}
Q(n) \coloneqq \sup \{ p \in \mathbb{N} \colon \text{there exists} \; W \in \mathcal{L}(Y) \; \text{with} \; \lvert W \rvert = n \; \text{and} \; W^{p} \in \mathcal{L}(Y) \}.
\end{align*}
Let $\alpha \geq 1$ be given. We say that the subshift $Y$ is \textit{$\alpha$-finite} if the value
\begin{align*}
Q_{\alpha} \coloneqq \limsup_{n \to \infty} \frac{Q(n)}{n^{\alpha - 1}}
\end{align*}
is non-zero and finite. Also, for ease of notation, for a given word $v \in \mathcal{L}(Y)$, we let $Q(v)$ denote the largest integer $p$ such that $v^{p} \in \mathcal{L}(Y)$; in the case that no largest such $p$ exists (that is, $v^{p} \in \mathcal{L}(Y)$ for all $p \in \mathbb{N}$), we set $Q(v) = \infty$.
\end{definition}
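The quantity $Q(n)$ can likewise be approximated by brute force on a finite prefix; the sketch below (ours, illustrative only) necessarily yields a lower bound for the true $Q(n)$, since a finite prefix need not contain every power occurring in the full language. The Thue--Morse word, which is cube-free but contains squares, is used as a stand-in example.

```python
# Brute-force approximation of Q(n), using a long finite prefix w in
# place of the full language; on a finite sample the computed value is
# a lower bound for the true Q(n).

def max_power(w, n):
    """Largest p such that v^p is a factor of w, over all factors v of
    w of length n."""
    best = 0
    for i in range(len(w) - n + 1):
        v, p = w[i:i + n], 1
        while v * (p + 1) in w:
            p += 1
        best = max(best, p)
    return best

# The Thue-Morse word is overlap-free, hence cube-free: no factor occurs
# to the power 3, while squares such as "00" and "0101" do occur.
tm = "".join("1" if bin(i).count("1") % 2 else "0" for i in range(256))
assert max_power(tm, 1) == 2   # "00" occurs, "000" does not
assert max_power(tm, 2) == 2   # "0101" occurs, "010101" does not
```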
\begin{remark}\label{rmk:1=powerfree}
If $1 \leq \alpha < \beta$ and $0 < Q_{\beta} < \infty$, then $Q_{\alpha} = \infty$. Similarly, if $0 < Q_{\alpha} < \infty$, then $Q_{\beta} = 0$.
\end{remark}
To conclude this section, we state the definition of the complexity function.
\begin{definition}
For a subshift $Y$, we define the \textit{complexity function} $p \colon \mathbb{N} \to \mathbb{N}$ of $Y$ by
\begin{align*}
p(n) \coloneqq \operatorname{card}\{ w \in \mathcal{L}(Y) \colon \lvert w \rvert = n \} .
\end{align*}
\end{definition}
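The complexity function is directly computable on finite samples; the following sketch (ours, illustrative only) counts distinct factors of a long Thue--Morse prefix, for which the first values of the (well-known) complexity are $2$, $4$ and $6$.

```python
# The complexity function p(n), approximated from a long finite prefix;
# for small n and a uniformly recurrent word the counts below are exact.

def complexity(w, n):
    """Number of distinct factors of w of length n."""
    return len({w[i:i + n] for i in range(len(w) - n + 1)})

tm = "".join("1" if bin(i).count("1") % 2 else "0" for i in range(256))
# First values of the (well-known) Thue-Morse complexity: 2, 4, 6.
assert [complexity(tm, n) for n in (1, 2, 3)] == [2, 4, 6]
```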
\section{General Results}\label{sec:GR}
\begin{theorem}\label{thm:equivalence}
For $\alpha \geq 1$ and $x$ an infinite word over a finite alphabet, we have that $\Omega(x)$ is $\alpha$-repulsive if and only if it is $\alpha$-finite.
\end{theorem}
\begin{proof}
Let $\alpha \geq 1$ be fixed and let $\Omega(x)$ be $\alpha$-repulsive. Suppose that $Q_{\alpha} = \infty$. In this case there exist sequences of natural numbers $(n_{k})_{k \in \mathbb{N}}$ and $(p_{k})_{k \in \mathbb{N}}$ satisfying
\begin{enumerate}[itemsep=0.1em,topsep=-0.25em,label=(\roman*)]
\item $(n_{k})_{k \in \mathbb{N}}$ is increasing with $p_{k}n_{k}^{1 - \alpha} > k$, and
\item there exists $W_{(k)} \in \mathcal{L}(x)$ with $\lvert W_{(k)} \rvert = n_{k}$ and $W_{(k)}^{p_{k}} \in \mathcal{L}(x)$.
\end{enumerate}
Thus, we have that $p_{k} > 1$, for all $k$ sufficiently large. Since $W_{(k)}^{p_{k}-1}$ is a prefix and a suffix of $W_{(k)}^{p_{k}}$ we have that
\begin{align*}
\frac{\lvert W_{(k)}^{p_{k}} \rvert - \lvert W_{(k)}^{p_{k}-1} \rvert}{{\lvert {W_{(k)}^{p_{k}-1}} \rvert}^{1/\alpha}}
=\frac{\lvert W_{(k)} \rvert}{{\lvert W_{(k)} \rvert}^{1/\alpha}{(p_{k}-1)}^{1/\alpha}}
=\frac{n_k}{{n_k}^{1/\alpha}(p_{k}-1)^{1/\alpha}}
\leq\frac{2^{1/\alpha} {n_k}^{(\alpha-1)/\alpha}}{{p_{k}}^{1/\alpha}}
<\frac{2^{1/\alpha}}{k^{1/\alpha}},
\end{align*}
for all $k$ sufficiently large. Therefore, we have that $\ell_\alpha=0$, contradicting the assumption that $\Omega(x)$ is $\alpha$-repulsive.
Suppose that $Q_{\alpha} = 0$. For $n \in \mathbb{N}$ let $V_{(n)}, v_{(n)} \in \mathcal{L}(x)$ be such that $\lvert V_{(n)} \rvert = n$, $v_{(n)} \neq V_{(n)}$ is a prefix and suffix of $V_{(n)}$ and
\begin{align*}
\frac{\lvert V_{(n)} \rvert - \lvert v_{(n)} \rvert}{{ \lvert v_{(n)} \rvert^{\frac{1}{\alpha}}}} = A_{\alpha , n}.
\end{align*}
Since $0 < \ell_\alpha < \infty$, this means that there exists a sequence $(n_{k})_{k \in \mathbb{N}}$ of natural numbers such that $2 \lvert v_{(n_{k})} \rvert > \lvert V_{(n_k)} \rvert$, for all $k \in \mathbb{N}$. Thus, for each $k \in \mathbb{N}$, there exists a $q_{k} \geq 2$ such that
\begin{align*}
v_{(n_{k})} = \underbrace{u_{(k)} u_{(k)} \cdots u_{(k)}}_{q_{k}-1} z_{(k)} \quad \text{and} \quad V_{(n_{k})} = \underbrace{u_{(k)} u_{(k)} \cdots u_{(k)}}_{q_{k}} z_{(k)},
\end{align*}
where $u_{({k})}, z_{({k})} \in \mathcal{L}(x)$ with $0 < \lvert z_{(k)} \rvert < \lvert u_{(k)} \rvert$. Hence, it follows that
\begin{align}\label{eq:MT100816}
\left( \frac{\lvert V_{(n_{k})} \rvert - \lvert v_{(n_{k})} \rvert}{{\lvert v_{(n_{k})} \rvert^{\frac{1}{\alpha}}}} \right)^\alpha
= \frac{(\lvert V_{(n_{k})} \rvert - \lvert v_{(n_{k})} \rvert)^\alpha}{\lvert v_{(n_{k})} \rvert}
\geq \frac{\lvert u_{(k)} \rvert^\alpha}{q_{k} \lvert u_{(k)} \rvert}
=\frac{\lvert u_{(k)} \rvert^{\alpha -1 }}{q_{k}}
\geq \frac{\lvert u_{(k)} \rvert^{\alpha -1}}{Q( u_{(k)} )}
\geq \frac{\lvert u_{(k)} \rvert^{\alpha -1}}{Q( \lvert u_{(k)} \rvert )},
\end{align}
where the lengths of the $u_{(k)}$ are unbounded, as otherwise $\limsup_{k\to\infty} Q(u_{(k)}) = \infty$.
However, since by assumption $Q_{\alpha} =0$, we have
\begin{align*}
\liminf_{n \to \infty} {\frac{n^{\alpha - 1}}{Q(n)}}=\infty.
\end{align*}
This together with \eqref{eq:MT100816} yields that $\ell_{\alpha} = \infty$, again contradicting the assumption that $\Omega(x)$ is $\alpha$-repulsive. Hence, $0 < Q_{\alpha} < \infty$, that is, $\Omega(x)$ is $\alpha$-finite.
The reverse direction follows from the proof of (3) $\Rightarrow$ (2) in \cite[Theorem 3.4]{GKMSS16}. We note that the statement of \cite[Theorem 3.4]{GKMSS16} is in terms of Sturmian subshifts and it is assumed that $\alpha > 1$, however, the proof of (3) $\Rightarrow$ (2) holds for arbitrary subshifts and for $\alpha =1$.
\end{proof}
\begin{proposition}\label{lem:Lemma2}
Let $\alpha \geq 1$ be given and let $x$ denote an infinite word over a finite alphabet. If $\Omega(x)$ is $\alpha$-repulsive, or equivalently $\alpha$-finite, then it is aperiodic.
\end{proposition}
\begin{proof}
We show the contrapositive. Suppose that there exists a $y \in \Omega(x)$ such that $\sigma^{k}(y) = y$, for some $k \in \mathbb{N}$. This implies that $Q( n k ) = \infty$, for all $n \in \mathbb{N}$, and so, for all $\alpha \geq 1$ we have that $Q_\alpha = \infty$. Therefore, the subshift $\Omega(x)$ is not $\alpha$-finite for any \mbox{$\alpha \geq 1$}.
\end{proof}
\begin{proposition}\label{Prop:lowerbound}
For an aperiodic subshift $Y$ we have that $R(n) > n Q(n)$, for all $n\in\mathbb{N}$.
\end{proposition}
\begin{proof}
Recall that aperiodicity of a subshift guarantees that the number of words of length $n$ is strictly greater than $n$, for all $n\in\mathbb{N}$, see for instance \cite{F:2002}.
Let $n \in \mathbb{N}$ be fixed. Let $w \in \mathcal{L}(Y)$ be such that $\lvert w \rvert = n$ and $w^{Q(n)} \in \mathcal{L}(Y)$. The word $w^{Q(n)}$ has at most $n$ different factors of length $n$. Thus, since $\lvert w^{Q(n)} \rvert = n Q(n)$ and since $Y$ is aperiodic, so that $\mathcal{L}(Y)$ contains more than $n$ words of length $n$, we have that $R(n) > n Q(n)$.
\end{proof}
\begin{corollary}\label{cor:R<Q}
For an aperiodic subshift $Y$ and for $\alpha \geq 1$, we have that $R_{\alpha} \geq Q_{\alpha}$. In particular, $R_{\alpha} = 0$ implies $Q_{\alpha} = 0$ and $Q_{\alpha} = \infty$ implies $R_{\alpha} = \infty$.
\end{corollary}
\begin{remark}\label{rmk:wquivalence}
In general it is not true that if $Q_{\alpha} = 0$, then $R_{\alpha} = 0$ and if $R_{\alpha} = \infty$, then $Q_{\alpha} = \infty$. An infinite word $x$ in which one of the letters occurs exactly once gives rise to a subshift $\Omega(x)$ with this behaviour. However, this subshift is not minimal. The $l$-Grigorchuk subshifts (which we will shortly introduce in the next section) provide examples of uniquely ergodic and minimal subshifts which are \mbox{$\alpha$-finite} (or equivalently \mbox{$\alpha$-repulsive}), but not \mbox{$\alpha$-repetitive}, see \Cref{ex:example}.
\end{remark}
\section{$l$-Grigorchuk subshifts}\label{sec:4}
\subsection{$\boldsymbol{l}$-Grigorchuk subshifts}\label{sec:sec4}
The Grigorchuk subshift is a subshift associated to Grigorchuk's infinite $2$-group $G$. The group $G$ was originally introduced in \cite{GRIGO:1984,G:84b} and is an infinite finitely generated torsion group and so belongs to the class of Burnside groups, see also \cite{G:84a}. It has growth between polynomial and exponential, hence is amenable but not elementary amenable, see \cite{G:84a}. This group therefore provided simultaneous answers to the question of Milnor \cite{M:1968} on existence of groups of intermediate growth, and to the question of Day \cite{Day:1957} on existence of amenable but not elementary amenable groups. Lysenok \cite{L:85} gave a recursive presentation of $G$ by generators and relations using a homomorphism $\kappa$, which we will shortly define, see \eqref{eq:kappa} and \eqref{eq:G_Presentation}. It is remarkable that the homomorphism $\kappa$ serves not only to define $G$ algebraically, but also, as is shown in \cite{GLN:16}, to describe spectral properties of $G$ and to determine $G$ in terms of topological dynamics as a subgroup of the topological full group of a minimal Cantor system.
Following convention we consider the alphabet $\{ a, x, y, z \}$. We define the semi-group homomorphism $\kappa \colon \{ a, x, y, z \}^{*} \to \{ a, x, y, z \}^{*}$ by
\begin{align}\label{eq:kappa}
\kappa(a) \coloneqq (a,x, a), \quad \kappa(x) \coloneqq y, \quad \kappa(y) \coloneqq z, \quad \kappa(z) \coloneqq x,
\end{align}
and for a finite word $w = (w_{1}, \dots, w_{n})$ we set $\kappa(w) \coloneqq \kappa(w_{1}) \cdots \kappa(w_{n})$. The homomorphism $\kappa$ is defined to act on infinite words analogously. It is known that there exists a unique infinite word $\eta \in \{ a, x, y, z \}^{\mathbb{N}}$ such that $\kappa(\eta) = \eta$, see for instance \cite{GLN:16}. We call the subshift $\Omega(\eta)$ the \textit{Grigorchuk subshift}. Alternatively, this subshift can be generated by the three semi-group homomorphisms $\tau_{x}$, $\tau_{y}$ and $\tau_{z}$ defined by
\begin{align*}
\tau_{\beta}(a) \coloneqq (a, \beta, a), \quad \tau_{\beta}(x) \coloneqq x, \quad \tau_{\beta}(y) \coloneqq y, \quad \tau_{\beta}(z) \coloneqq z,
\end{align*}
where $\beta \in \{ x, y, z \}$, and for $w = (w_{1}, \dots, w_{n})$ we set $\tau_{\beta}(w) \coloneqq \tau_{\beta}(w_{1}) \cdots \tau_{\beta}(w_{n})$. Indeed, the word $\eta$ is the unique word with the prefix
\begin{align*}
(\tau_{x} \circ \tau_{y} \circ \tau_{z})^{n}(a) = \underbrace{\underbrace{\tau_{x} \circ \tau_{y} \circ \tau_{z}} \circ \underbrace{\tau_{x} \circ \tau_{y} \circ \tau_{z}} \circ \dots \circ\underbrace{\tau_{x} \circ \tau_{y} \circ \tau_{z}}}_{n - \text{times}} (a),
\end{align*}
for all $n \in \mathbb{N}$. We now introduce a more general class of subshifts based on this latter construction, which we call $l$-Grigorchuk subshifts, where each $l = (l_{k})_{k \in \mathbb{N}}$ is a sequence of natural numbers.
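Both descriptions of the word $\eta$, as the fixed point of $\kappa$ and via the substitutions $\tau_{x}$, $\tau_{y}$, $\tau_{z}$, can be checked against each other numerically. The following Python sketch (ours, not part of the formal development) builds a prefix of $\eta$ in both ways, using for the second the recursion $\tau^{(j+1)}(a) = \tau^{(j)}(a)\,\beta\,\tau^{(j)}(a)$ with $\beta$ cycling through $x$, $y$, $z$.

```python
# Two constructions of a prefix of the Grigorchuk word eta (a sketch):
# (1) iterating the homomorphism kappa from the text on the letter 'a',
# (2) the recursion  tau^{(j+1)}(a) = tau^{(j)}(a) + beta + tau^{(j)}(a),
#     where beta cycles through x, y, z.

KAPPA = {"a": "axa", "x": "y", "y": "z", "z": "x"}

def kappa(w):
    return "".join(KAPPA[c] for c in w)

# (1) iterate kappa on 'a'
u = "a"
for _ in range(6):
    u = kappa(u)

# (2) the tau-recursion with the letters x, y, z in cyclic order
v = "a"
for j in range(6):
    v = v + "xyz"[j % 3] + v

assert u == v                      # both describe the same word
assert len(v) == 2 ** 7 - 1        # |tau^{(j)}(a)| = 2^{j+1} - 1, here j = 6
assert v.startswith("axayaxazaxayaxax")
```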
Let $l = (l_{k})_{k \in \mathbb{N}}$ denote a fixed sequence of natural numbers. For $j \in \mathbb{N}_{0}$, we denote by $N(j)$ and $q(j)$ the unique integers such that
\begin{align*}
j=q(j)+ \sum_{i=1}^{N(j)-1} l_i \quad \text{with } 0 \leq q(j) < l_{N(j)}.
\end{align*}
We define $\tau^{(j)}$ by
\begin{align}\label{eq:tauj}
\tau^{(j)}\coloneqq
\begin{cases}
\tau^{l_{1}}_{x} \circ \tau^{l_{2}}_{y} \circ \tau^{l_{3}}_{z} \circ \dots \circ \tau^{l_{N(j)-1}}_{y} \circ \tau_{z}^{q(j)} & \mbox{if} \; N(j) \equiv 0 \pmod{3},\\
\tau^{l_{1}}_{x} \circ \tau^{l_{2}}_{y} \circ \tau^{l_{3}}_{z} \circ \dots \circ \tau^{l_{N(j)-1}}_{z} \circ \tau_{x}^{q(j)} & \mbox{if} \; N(j) \equiv 1 \pmod{3},\\
\tau^{l_{1}}_{x} \circ \tau^{l_{2}}_{y} \circ \tau^{l_{3}}_{z} \circ \dots \circ \tau^{l_{N(j)-1}}_{x} \circ \tau_{y}^{q(j)} & \mbox{if} \; N(j) \equiv 2 \pmod{3},
\end{cases}
\end{align}
and let $\tau^{(0)}$ be the identity. Additionally, we set
\begin{align}\label{eq:betaj}
\beta^{(j)} \coloneqq
\begin{cases}
x & \mbox{if} \; N(j) \equiv 1 \pmod{3},\\
y & \mbox{if} \; N(j) \equiv 2 \pmod{3},\\
z & \mbox{if} \; N(j) \equiv 0 \pmod{3}.
\end{cases}
\end{align}
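The decomposition $j = q(j) + \sum_{i=1}^{N(j)-1} l_{i}$ and the prefixes $\tau^{(j)}(a)$ admit a direct implementation for an arbitrary sequence $l$. In the sketch below (ours, illustrative only; the helper names are of our choosing) the letter inserted at step $j + 1$ is the letter of the $(j+1)$-st substitution, determined by the block of the sequence $l$ in which it lies.

```python
# Sketch: the decomposition j = q(j) + l_1 + ... + l_{N(j)-1} with
# 0 <= q(j) < l_{N(j)}, and the prefix tau^{(j)}(a) built by the
# recursion tau^{(i+1)}(a) = tau^{(i)}(a) b tau^{(i)}(a), where b is
# the letter of substitution number i + 1 (blocks of x's, y's, z's of
# lengths l_1, l_2, l_3, ...; the list l must be long enough).

def N_q(l, j):
    """Return (N(j), q(j)) for the sequence l = [l_1, l_2, ...]."""
    N, s = 1, 0
    while j - s >= l[N - 1]:
        s += l[N - 1]
        N += 1
    return N, j - s

def tau_prefix(l, j):
    """The word tau^{(j)}(a)."""
    w = "a"
    for i in range(j):
        N, _ = N_q(l, i)            # substitution i + 1 lies in block N(i)
        w = w + "xyz"[(N - 1) % 3] + w
    return w

assert N_q([2, 1, 3], 3) == (3, 0)
assert N_q([2, 1, 3], 4) == (3, 1)
# l = (2, 1, ...): two x-substitutions, then one y-substitution.
assert tau_prefix([2, 1], 3) == "axaxaxayaxaxaxa"
# l = (1, 1, 1): recovers the Grigorchuk prefix.
assert tau_prefix([1, 1, 1], 3) == "axayaxazaxayaxa"
```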
\begin{proposition}\label{Prop:Prop1}
For $l=(l_k)_{k \in \mathbb{N}}$, there exists a unique infinite word $\eta_{l}$ with prefix $\tau^{(j)}(a)$, for all $j \in \mathbb{N}_{0}$.
\end{proposition}
\begin{proof}
This is a consequence of the fact that $\tau^{(j)}(a)$ is a prefix of $\tau^{(j+1)}(a)$, for all $j \in \mathbb{N}_{0}$, and, as we will see in \Cref{prop:length_of_tau}, $\lim_{j \to \infty} \lvert \tau^{(j)}(a) \rvert = \infty$.
\end{proof}
For a given sequence of natural numbers $l = (l_{k})_{k \in \mathbb{N}}$, we refer to the subshift $\Omega(\eta_{l})$ as the \textit{$l$-Grigorchuk subshift}, where $\eta_{l}$ is the unique word given in \Cref{Prop:Prop1}. When it is clear from the context, we will write $\eta$ instead of $\eta_{l}$. Note that the Grigorchuk subshift is an $l$-Grigorchuk subshift with $l$ equal to the constant one sequence, namely $l = (1, 1, 1, \dots )$. By construction, for all $j \in \mathbb{N}$, we observe that $\eta$ has the form
\begin{align}\label{eq:tau_structure_eta}
\raisebox{-1em}{
\begin{tikzpicture}
\draw(0,0)--(6.75,0);
\draw [dotted] (6.75,0)--(8,0);
\foreach \x in {0,1.75,2.25,4,4.5,6.25,6.75}
\draw(\x,0.1)--(\x,-0.1);
\draw[decorate, decoration={brace}, yshift=1.5ex] (0,0) -- node[above=0.4ex] {$\tau^{(j)}(a)$} (1.75,0);
\draw[decorate, decoration={brace}, yshift=1.5ex] (2.25,0) -- node[above=0.4ex] {$\tau^{(j)}(a)$} (4,0);
\draw[decorate, decoration={brace}, yshift=1.5ex] (4.5,0) -- node[above=0.4ex] {$\tau^{(j)}(a)$} (6.25,0);
\draw[decorate, decoration={brace}, yshift=1.5ex] (1.75,0) -- node[above=0.4ex] {$?$} (2.25,0);
\draw[decorate, decoration={brace}, yshift=1.5ex] (4,0) -- node[above=0.4ex] {$?$} (4.5,0);
\draw[decorate, decoration={brace}, yshift=1.5ex] (6.25,0) -- node[above=0.4ex] {$?$} (6.75,0);
\node at (-0.4,-0.05) {$\eta = $};
\node at (8.05,-0.015) {,};
\end{tikzpicture}}
\end{align}
where the letters $x$, $y$ and $z$ occur infinitely often, in a prescribed order determined by the sequence $l$, in place of the question marks. One can also define an $l$-Grigorchuk subshift where elements of $l$ are allowed to take the value zero, see \Cref{rmk:last_remark}.
\begin{proposition}\label{prop:length_of_tau}
For $j \in \mathbb{N}_{0}$ we have that $\displaystyle \lvert \tau^{(j)}(a) \rvert = 2^{j + 1} - 1$.
\end{proposition}
\begin{proof}
We have that $\lvert \tau^{(0)}(a) \rvert = \lvert a \rvert = 1$. Suppose the result holds true for some $j \in \mathbb{N}_{0}$, then
\begin{align*}
\lvert \tau^{(j+1)}(a) \rvert = \lvert \tau^{(j)}(\tau_{\beta^{(j)}}(a)) \rvert = \lvert \tau^{(j)}(a) \beta^{(j)} \tau^{(j)}(a) \rvert = 2 \lvert \tau^{(j)}(a) \rvert + 1 = 2^{(j + 1) + 1} - 1.
\end{align*}
This completes the proof.
\end{proof}
\begin{corollary}\label{cor:rpulsive_iff}
An $l$-Grigorchuk subshift is repulsive if and only if it is $1$-repulsive.
\end{corollary}
\begin{proof}
For an $l$-Grigorchuk subshift, we observe that, since $\tau^{(j)}(a)$ is a prefix and suffix of $\tau^{(j+1)}(a)$ and since $\tau^{(j+1)}(a) \in \mathcal{L}(\eta)$, for $j \in \mathbb{N}$, by \Cref{prop:length_of_tau} we have $A_{1, 2^{j+2}-1} \leq 2^{j+1}/(2^{j+1}-1)$, and hence $\ell_{1} \leq 1$. In particular, $\ell_{1}$ is finite; therefore, an $l$-Grigorchuk subshift is repulsive if and only if it is $1$-repulsive.
\end{proof}
\begin{proposition}\label{prop:minimality}
An $l$-Grigorchuk subshift is minimal.
\end{proposition}
\begin{proof}
For every word $w$ in the language of $\eta$ there exists a $j \in\mathbb{N}$ such that $w$ is a factor of $\tau^{(j)}(a)$. The structure of $\eta$, namely that given in \eqref{eq:tau_structure_eta}, yields that the gap between two successive occurrences of $w$ is bounded, and so, $\eta$ is uniformly recurrent. As uniform recurrence is equivalent to minimality, see for instance \cite{Combinatorics:2010}, this completes the proof.
\end{proof}
While we do not use it in the sequel we would like to highlight the role $\kappa$ and $\tau^{(j)}$, and hence $\tau_{x}$, $\tau_{y}$ and $\tau_{z}$, play in Grigorchuk's infinite $2$-group $G$. Indeed, $\kappa$ is (a version of) the substitution used by Lysenok \cite{L:85} to obtain a presentation of $G$. More specifically, \cite{L:85} shows that
\begin{align}\label{eq:G_Presentation}
G = \langle a, x, y, z \mid 1 = a^{2} = x^{2} = y^{2} = z^{2} = \kappa^{k}((a,z)^{4}) = \kappa^{k}((a,z,a,x,a,x)^{4}), k \in \mathbb{N}_{0} \rangle.
\end{align}
This presentation can be written using $\tau^{(j)}$, and hence $\tau_{x}$, $\tau_{y}$ and $\tau_{z}$, by using the fact that
\begin{align*}
\kappa^{j}(a,z) = \tau^{(j)}(a, \beta^{(j-1)}),
\end{align*}
and that, for all $j \in \mathbb{N}$,
\begin{align*}
\kappa^{j}(a,z,a,x,a,x) =
\tau^{(1)}(a,x)
\tau^{(2)} (a,y)
\tau^{(3)} (a,z)
\tau^{(4)} (a,x)
\dots
\tau^{(j+1)}(a, \beta^{(j)}).
\end{align*}
Here $\tau^{(j)}$ and $\beta^{(j)}$ are as defined in \eqref{eq:tauj} and \eqref{eq:betaj} with $l$ equal to the constant $1$ sequence, that is $l = (l_{i})$ with $l_{i} = 1$.
\subsection{$\boldsymbol{\alpha}$-Finite and $\boldsymbol{\alpha}$-Repulsive}\label{sec:section4.2}
\Cref{thm:G-alpha-free} below gives a necessary and sufficient condition on a given sequence of natural numbers $l$ to guarantee that the associated $l$-Grigorchuk subshift is $\alpha$-finite, which by \Cref{thm:equivalence} is equivalent to the subshift being $\alpha$-repulsive. In particular, we obtain that an $l$-Grigorchuk subshift is $1$-finite (and hence $1$-repulsive) if and only if $l$ is a bounded sequence. Thus, as $1$-repulsive implies repulsive, if $l$ is a bounded sequence, then the associated $l$-Grigorchuk subshift is repulsive.
\begin{theorem}\label{thm:G-alpha-free}
For $\alpha \geq 1$ the following three statements are equivalent.
\begin{enumerate}[itemsep=0.1em,topsep=-0.25em,label=(\roman*)]
\item An $l$-Grigorchuk subshift is $\alpha$-repulsive.
\item An $l$-Grigorchuk subshift is $\alpha$-finite.
\item $\limsup_{n \to \infty} \lvert l_{n+1} + (1 - \alpha) \sum_{i = 1}^{n} l_{i} \rvert <\infty$.
\end{enumerate}
\end{theorem}
\begin{proof}
The result follows from \Cref{thm:equivalence} and \Cref{thm:Q-Bounds} given below.
\end{proof}
\begin{theorem}\label{thm:Q-Bounds}
For $\alpha > 1$, an $l$-Grigorchuk subshift fulfils the following equality.
\begin{align*}
Q_\alpha = \limsup_{m\to\infty} \frac{2^{l_{m+1} + 1}}{ 2^{(\alpha-1)\sum_{i=1}^{m} l_{i}} }.
\end{align*}
Moreover, we have that
\begin{align*}
\limsup_{m\to\infty} \left( 2^{l_{m+1}+1} -1 \right) \leq Q_{1} \leq \limsup_{m\to\infty} 2^{l_{m+1}+1}.
\end{align*}
\end{theorem}
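Before turning to the proof, the first values can be checked numerically in the classic case where $l$ is the constant one sequence. The following Python sketch (ours, not part of the formal development) searches for powers by brute force on a finite prefix of $\eta$; such a computation can only under-approximate $Q$, so agreement with the predicted values $Q(1) = 1$, $Q(2) = 2^{l_{1}+1} - 1 = 3$ and $Q(4) = 3$ is a consistency check rather than a proof.

```python
# Numerical sanity check (sketch) of the first values of Q for the
# classic Grigorchuk subshift, i.e. l = (1, 1, 1, ...), on a prefix of
# eta built via tau^{(j+1)}(a) = tau^{(j)}(a) + beta + tau^{(j)}(a).

def max_power(w, n):
    """Largest p with v^p a factor of w, over all factors v of length n."""
    best = 0
    for i in range(len(w) - n + 1):
        v, p = w[i:i + n], 1
        while v * (p + 1) in w:
            p += 1
        best = max(best, p)
    return best

eta = "a"
for j in range(9):                      # prefix of length 2^10 - 1 = 1023
    eta = eta + "xyz"[j % 3] + eta

assert max_power(eta, 1) == 1           # odd lengths: Q = 1
assert max_power(eta, 2) == 3           # Q(2) = 2^{l_1 + 1} - 1 = 3
assert max_power(eta, 4) == 3           # Q(4) = 3
```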
For the proof of this result we will require the following definition and remark.
\begin{definition}
Fix a sequence $l = (l_{i})_{i \in \mathbb{N}}$ and let $\eta$ denote the unique infinite word given by \Cref{Prop:Prop1}. For $j \in \mathbb{N}$, define $\eta^{(j)}$ to be the infinite word associated to the sequence
\begin{align*}
( \hspace{-0.5em}\underbrace{0, 0, \dots, 0}_{(N(j)-1) - \text{times}}\hspace{-0.5em}, l_{N(j)} - q(j), l_{N(j) +1}, l_{N(j) + 2}, \dots ),
\end{align*}
given by \Cref{Prop:Prop1}.
\end{definition}
\begin{remark}\label{rmk:remark_to_be_added}
Let $(l_{i})_{i \in \mathbb{N}}$ be a sequence of natural numbers. The (generalised) Grigorchuk subshifts associated to the sequences $(0, 0, \dots, 0, l_{1}, l_{2}, l_{3}, \dots)$ and $(l_{1}, l_{2}, l_{3}, \dots)$ are topologically conjugate through the semi-group homomorphism which maps $a$ to $a$ and applies a cyclic permutation to $\{ x, y, z \}$.
\end{remark}
\begin{proof}[{Proof of \Cref{thm:Q-Bounds}}]
We structure the proof as follows. We prove the following five statements from which we will deduce the required result.
\begin{enumerate}[itemsep=0.1em,topsep=-0.25em,label=(\roman*)]
\item\label{item1} $Q(2) = 2^{l_{1} +1} -1$.
\item\label{item3a} If $k \in \mathbb{N}$ is such that $k \equiv 1 \pmod{4}$ or $k \equiv 3 \pmod{4}$, then $Q(k) = 1$.
\item\label{item3b} If $k \in \mathbb{N}$ is such that $k \equiv 2 \pmod{4}$ and $\eta\lvert_{k} \eta\lvert_{k} \in \mathcal{L}(\eta)$, then $\displaystyle Q(\eta\lvert_{k}) = \lfloor (2^{l_{1} + 2} - 2)/ k \rfloor$.
\item\label{item3c}
If $k \in \mathbb{N}$ is such that $k \equiv 0 \pmod{4}$ and $\eta\lvert_{k} \eta\lvert_{k} \in \mathcal{L}(\eta)$, then
\begin{align}\label{eq:k=0mod4}
Q(\eta\lvert_{k}) = \left\lfloor \frac{2^{l_{N(j)}-q(j) + 1}-1}{k/2^{j+1}} \right\rfloor,
\end{align}
where $j$ is the smallest integer such that $k/2^{j} \equiv 2 \pmod{4}$.
\item\label{item2} Let $n \in \mathbb{N}$ and let $0 \leq r < 2^{n}$. For each $v = (v_{1}, v_{2}, \dots, v_{2^{n} + r} )\in \mathcal{L}(\eta)$ with $Q(v) \geq 3$, there exists $1 \leq k \leq 2^{n} + r$ such that $\eta\lvert_{2^n+r} = (v_{k}, \ldots, v_{2^n+r}, v_{1}, \ldots ,v_{k-1})$ and, moreover, $Q(v) - 1 \leq Q(\eta\lvert_{2^n+r}) \leq Q(v)$.
\end{enumerate}
To prove Statement \ref{item1}, notice that $(y,a,y)$ and $(z,a,z)$ are not factors of $\eta$. This follows, since each $(4k + 2)$-th letter of $\eta$ is equal to $x$, for all $k \in \mathbb{N}_{0}$. By definition, we have that $\eta = \tau_{x}^{l_{1}} ( \eta^{(l_{1})})$. Since the $(4k + 2)$-th letter of $\eta^{(l_{1})}$ is equal to $y$, for all $k \in \mathbb{N}_{0}$, it follows that $(x, a, x)$ is not a factor of $\eta^{(l_{1})}$, and hence, by \Cref{prop:length_of_tau},
\begin{align*}
Q((a, x))
= \frac{\lvert \tau^{l_{1}}_{x}(a) x \tau^{l_{1}}_{x}(a) \rvert - 1}{2}
= 2^{l_{1}+1} - 1.
\end{align*}
Since every second letter of $\eta$ is equal to $a$, it follows that if $n \equiv 1 \pmod{2}$, then $Q(n) = 1$.
Assume that the conditions of Statement \ref{item3b} hold, that is $k = 2^{n} + r \equiv 2 \pmod{4}$, where $n \in \mathbb{N}$ and $0 \leq r < 2^{n}$. By construction we have that $\eta_{i} = x$ for all $i \equiv 2 \pmod{4}$. Thus,
\begin{align*}
\raisebox{-1em}{
\begin{tikzpicture}
\draw(0,0)--(6.75,0);
\draw [dotted] (6.75,0)--(8,0);
\draw(8,0)--(11.4166,0);
\foreach \x in {0,1.75,2.25,4,4.5,6.25,6.75,8,9.75,10.25,11.4166}
\draw(\x,0.1)--(\x,-0.1);
\draw[decorate, decoration={brace}, yshift=1.5ex] (0,0) -- node[above=0.4ex] {$(a, x, a)$} (1.75,0);
\draw[decorate, decoration={brace}, yshift=1.5ex] (2.25,0) -- node[above=0.4ex] {$(a, x, a)$} (4,0);
\draw[decorate, decoration={brace}, yshift=1.5ex] (4.5,0) -- node[above=0.4ex] {$(a, x, a)$} (6.25,0);
\draw[decorate, decoration={brace}, yshift=1.5ex] (8,0) -- node[above=0.4ex] {$(a, x, a)$} (9.75,0);
\draw[decorate, decoration={brace}, yshift=1.5ex] (10.25,0) -- node[above=0.4ex] {$(a, x)$} (11.4166,0);
\draw[decorate, decoration={brace}, yshift=1.5ex] (1.75,0) -- node[above=0.4ex] {$?$} (2.25,0);
\draw[decorate, decoration={brace}, yshift=1.5ex] (4,0) -- node[above=0.4ex] {$?$} (4.5,0);
\draw[decorate, decoration={brace}, yshift=1.5ex] (6.25,0) -- node[above=0.4ex] {$?$} (6.75,0);
\draw[decorate, decoration={brace}, yshift=1.5ex] (9.75,0) -- node[above=0.4ex] {$?$} (10.25,0);
\node at (-0.4,-0.05) {$\eta\lvert_{k} = $};
\end{tikzpicture}}
\end{align*}
Since $\eta\lvert_{k} \eta\lvert_{k} \in \mathcal{L}(\eta)$, we have that $\eta\lvert_{k} = (a, x)^{k/2}$. This in tandem with Statement \ref{item1} yields that
\begin{align*}
Q(\eta\lvert_{k}) = \left\lfloor \frac{2^{l_1+1}-1}{k/2} \right\rfloor.
\end{align*}
For Statement \ref{item3c}, notice that for all $j \in \mathbb{N}$ with $k\equiv 0 \pmod{2^{j}}$, we have
\begin{align*}
\eta\lvert_{k} = \tau^{(j)}(\eta^{(j)}\lvert_{k/2^{j}}).
\end{align*}
Since $\tau^{(j)}$ is a semi-group homomorphism on $\{a,x,y,z\}^{*}$, it follows that $Q(\eta\lvert_{k}) = Q(\eta^{(j)}\lvert_{k/2^{j}})$. (Note here that $Q(\eta\lvert_{k})$ is taken with respect to the language $\mathcal{L}(\eta)$ and $Q(\eta^{(j)}\lvert_{k/2^{j}})$ is taken with respect to the language $\mathcal{L}(\eta^{(j)})$.) This in tandem with \Cref{rmk:remark_to_be_added} and Statement \ref{item3b} yields that
\begin{align*}
Q(\eta\lvert_{k}) = Q(\eta^{(j)}\lvert_{k/2^{j}}) = \left\lfloor \frac{2^{l_{N(j)}-q(j) + 1}-1}{k/2^{j+1}} \right\rfloor,
\end{align*}
where $j$ is the smallest integer such that $k/2^{j} \equiv 2 \pmod{4}$.
We now turn to the proof of Statement \ref{item2}. By Statement \ref{item3a} it is sufficient to consider words of even length. To this end, let $v \in \mathcal{L}(\eta)$ with $Q(v) \geq 3$ and with $\lvert v \rvert = 2^{n}+r$, for some $n\in\mathbb{N}$, and $0 \leq r < 2^{n}$. Due to the structure of $\eta$ given in \eqref{eq:tau_structure_eta}, where we set $j = n -1$, and since every $(2m + 1)$-th question mark in \eqref{eq:tau_structure_eta} is equal to $\beta^{(j)}$, for all $m \in \mathbb{N}_{0}$, we have that $\tau^{(j)}((a, \beta^{(j)}))$ is a factor of $v^{Q(v)}$. Thus there exists a natural number $k \leq 2^{n} + r$ such that
\begin{align*}
\eta\lvert_{2^{n}+r} = (v_{k}, \dots, v_{2^{n}+r}, v_{1}, \dots, v_{k-1}),
\end{align*}
see \eqref{eq:fig.1}, which yields that $Q(v)-1 \leq Q(\eta\lvert_{2^{n}+r}) \leq Q(v)$.
\begin{align}\label{eq:fig.1}
\raisebox{-2.45em}{
\begin{tikzpicture}
\draw(0.75,0)--(10.25,0);
\draw [dotted] (0.75,0)--(0,0);
\draw [dotted] (10.25,0)--(11,0);
\foreach \x in {1,4,7,10}
\draw(\x,0.2)--(\x,0);
\draw[decorate, decoration={brace}, yshift=2ex] (1,0) -- node[above=0.4ex] {$v$} (4,0);
\draw[decorate, decoration={brace}, yshift=2ex] (4,0) -- node[above=0.4ex] {$v$} (7,0);
\draw[decorate, decoration={brace}, yshift=2ex] (7,0) -- node[above=0.4ex] {$v$} (10,0);
\draw(2,0)--(2,-0.2);
\draw(5,0)--(5,-0.2);
\draw[decorate, decoration={brace, mirror}, yshift=-2ex] (2,0) -- node[below=0.4ex] {$\eta\lvert_{2^{n}+r}$} (5,0);
\draw[decorate, decoration={brace, mirror}, yshift=-2ex] (5,0) -- node[below=0.4ex] {$\eta\lvert_{2^{n}+r}$} (8,0);
\end{tikzpicture}}
\end{align}
With Statements \ref{item1}, \ref{item3a}, \ref{item3b}, \ref{item3c} and \ref{item2} at hand we can now prove the required result. If $k \equiv 0 \pmod{4}$, then the left-hand side of \eqref{eq:k=0mod4} is maximised on the set $[2^{n}, 2^{n+1}) \cap \mathbb{N}$, at $j = n -1$, namely when $k = 2^{n}$. Further, \eqref{eq:k=0mod4} in tandem with \eqref{eq:tau_structure_eta} and \Cref{prop:length_of_tau} yields
\begin{align*}
Q(\eta\lvert_{2^{n}}) = 2^{l_{N(n-1)}-q(n-1) + 1} - 1.
\end{align*}
The function $n \mapsto Q(\eta\lvert_{2^{n}})$ is maximised on the set $[\sum_{i = 1}^{m} l_{i}, \sum_{i=1}^{m+1} l_{i}) \cap \mathbb{N}$ when $n - 1 = \sum_{i = 1}^{m} l_{i}$. Indeed, we have that
\begin{align}\label{eq:explaination_lower}
Q(\eta\lvert_{k}) = 2^{l_{m+1}+1} - 1,
\end{align}
where $k = 2^{1 + \sum_{i = 1}^{m} l_{i}}$.
Hence,
\begin{align*}
\limsup_{m\to\infty} \frac{2^{l_{m+1}+1} - 1}{{\left(2^{1 + \sum_{i=1}^{m} l_{i}}\right)}^{\alpha-1}}
\leq \limsup_{n\to\infty} \frac{Q(n)}{n^{\alpha-1}}
\leq \limsup_{n\to\infty} \frac{Q(\eta\lvert_{n}) + 1}{n^{\alpha-1}}
\leq \limsup_{m\to\infty} \frac{2^{l_{m+1}+1}}{{\left(2^{1 + \sum_{i=1}^{m} l_{i}}\right)}^{\alpha-1}}.
\end{align*}
Here
the first inequality follows from \eqref{eq:explaination_lower};
the second inequality follows from the latter results of Statement \ref{item2};
the last inequality follows from Statements \ref{item1}, \ref{item3a}, \ref{item3b}, \ref{item3c} together with \eqref{eq:explaination_lower}.
\end{proof}
\begin{corollary}
An $l$-Grigorchuk subshift satisfies $Q(2^{j + 1}) = 2^{l_{N(j)} - q(j) + 1} - 1$, for all $j \in \mathbb{N}$.
\end{corollary}
\begin{proof}
The result follows from \eqref{eq:tau_structure_eta}, \Cref{prop:length_of_tau} and Statement \ref{item3c} in the proof of \Cref{thm:Q-Bounds}, together with an argument by contradiction.
\end{proof}
\subsection{$\boldsymbol{\alpha}$-Repetitive}\label{sec:section4.3}
Our next result gives a necessary and sufficient condition on a given sequence of natural numbers $l = (l_{i})_{i = 1}^{\infty}$ to guarantee that the associated $l$-Grigorchuk subshift is $\alpha$-repetitive. In particular, we obtain that an $l$-Grigorchuk subshift is $1$-repetitive if and only if $l$ is a bounded sequence. Thus, as $1$-repetitive implies linearly repetitive, if $l$ is a bounded sequence, then the associated $l$-Grigorchuk subshift is linearly repetitive. We would also like to mention here that an exact formula for the repetitive function of an $l$-Grigorchuk subshift has been obtained, independently, in \cite{LenzSell:2016}, and hence they have also obtained a criterion similar to ours for an $l$-Grigorchuk subshift to be $\alpha$-repetitive.
\begin{theorem}\label{thm:G-alpha-repetative}
For $\alpha \geq 1$ an $l$-Grigorchuk subshift is $\alpha$-repetitive if and only if
\begin{align*}
\limsup_{n \to \infty} \left\lvert l_{n+2} + l_{n+1} + (1 - \alpha) \sum_{i = 1}^{n} l_{i} \right\rvert <\infty.
\end{align*}
\end{theorem}
We prove \Cref{thm:G-alpha-repetative} by using the following bounds on the repetitive function.
\begin{lemma}\label{lem:repetative_function_bounds}
Let $l = (l_{i})_{i \in \mathbb{N}}$ denote a sequence of natural numbers. The repetitive function for an $l$-Grigorchuk subshift satisfies the following inequalities, for $j \in \mathbb{N}$,
\begin{align*}
2^{l_{N(j) + 1} + l_{N(j)} - q(j) + j + 1} \leq R\left( 2^{j + 1} - 1 \right)
\leq 2^{l_{N(j) + 1} + l_{N(j)} - q(j) + j + 2}.
\end{align*}
\end{lemma}
\begin{proof}
By \eqref{eq:tau_structure_eta} we have that $\tau^{(j-1)} \circ \tau_{x} (a)$, $\tau^{(j-1)} \circ \tau_{y} (a)$ and $\tau^{(j-1)} \circ \tau_{z} (a)$ all belong to $\mathcal{L}(\eta)$ and that
\begin{align*}
\left\lvert \tau^{(j-1)} \circ \tau_{x} (a) \right\rvert =
\left\lvert \tau^{(j-1)} \circ \tau_{y} (a) \right\rvert =
\left\lvert \tau^{(j-1)} \circ \tau_{z} (a) \right\rvert =
\left \lvert \tau^{(j)}(a) \right\rvert =
2^{1 + q(j) + \sum_{i = 1}^{N(j) - 1} l_{i}} - 1 = 2^{j + 1} - 1.
\end{align*}
We claim that, for all $k \in \{ 1, 2, \dots, l_{N(j)} - q(j)\}$, the word
\begin{align*}
\tau^{(j + k)}(a) = \tau^{(j)} \circ \tau_{\beta^{(j)}}^{k}(a)
\end{align*}
does not contain as factors both the words
\begin{align}\label{eq:factors}
\tau^{(j-1)} \circ \tau_{\beta^{(j + l_{N(j)} - q(j))}}(a)
\quad \text{and} \quad
\tau^{(j-1)} \circ \tau_{\beta^{(j + l_{N(j)} - q(j) + l_{N(j) + 1})}}(a).
\end{align}
For if this were the case, then, since the first letter of the words in \eqref{eq:factors} is equal to $a$ and both $\tau^{(j-1)}(a)$ and $\tau^{(j)}(a)$ are palindromes, there exists an integer $m \in [2^{j-1}+1, 2^{j}-1]$ with
\begin{align}\label{eq:repetative_x_z1}
\sigma^{2m}(\tau^{(j)}(a) \beta^{(j)} \tau^{(j)}(a)) \lvert_{2^{j+1}-1} = \tau^{(j-1)}(a) \beta^{(j + l_{N(j)} - q(j))} \tau^{(j-1)}(a)
\end{align}
or, such that
\begin{align}\label{eq:repetative_x_z2}
\sigma^{2m}(\tau^{(j)}(a) \beta^{(j)} \tau^{(j)}(a)) \lvert_{2^{j+1}-1} = \tau^{(j-1)}(a) \beta^{(j + l_{N(j)} - q(j) + l_{N(j) + 1})} \tau^{(j-1)}(a).
\end{align}
Thus, the $(2^{j+1}-2m)$-th letter of $\tau^{(j-1)}(a)$ is equal to $\beta^{(j)}$ and the $(2m - 2^{j})$-th letter of $\tau^{(j-1)}(a)$ is equal to $\beta^{(j + l_{N(j)} - q(j))}$ in the case of \eqref{eq:repetative_x_z1} and $\beta^{(j + l_{N(j)} - q(j) + l_{N(j) + 1})}$ in the case of \eqref{eq:repetative_x_z2}. As $\tau^{(j-1)}(a)$ is a palindrome, $\beta^{(j)} \neq \beta^{(j + l_{N(j)} - q(j))}$ and $\beta^{(j)} \neq \beta^{(j + l_{N(j)} - q(j)+ l_{N(j) + 1})}$, this yields a contradiction to the initial assumption.
Similarly, for all $k \in \{1, 2, \dots, l_{N(j)+1} \}$, the word
\begin{align*}
\tau^{(j + l_{N(j)} - q(j) + k)}(a) = \tau^{(j)} \circ \tau^{l_{N(j)} - q(j)}_{\beta^{(j)}} \circ \tau^{k}_{\beta^{(j + l_{N(j)} - q(j))}}(a)
\end{align*}
does not contain as a factor the word
\begin{align*}
\tau^{(j-1)} \circ \tau_{\beta^{(j + l_{N(j)} - q(j)+ l_{N(j) + 1})}}(a).
\end{align*}
This yields the lower bound for the repetitive function, namely that
\begin{align*}
R(2^{j + 1} - 1) \geq \left\lvert \tau^{(j + l_{N(j)} - q(j) + l_{N(j) + 1})}(a) \right\rvert + 1 = 2^{j + l_{N(j)} - q(j) + l_{N(j) + 1}}.
\end{align*}
Due to the structure of $\eta$, given a word of length $2^{j+1}-1$ in $\mathcal{L}(\eta)$ it is necessarily a factor of $\tau^{(j)} \circ \tau_{x}(a)$, $\tau^{(j)} \circ \tau_{y}(a)$ or $\tau^{(j)} \circ \tau_{z}(a)$. Thus, any word of length $2^{j+1}-1$ is a factor of
\begin{align*}
\tau^{(j + l_{N(j)} - q(j) + l_{N(j) + 1} + 1)}(a) = \tau^{(j)} \circ \tau^{l_{N(j)} - q(j)}_{\beta^{(j)}} \circ \tau^{l_{N(j)}}_{\beta^{(j + l_{N(j)} - q(j))}} \circ \tau_{\beta^{(j + l_{N(j)} - q(j)+ l_{N(j) + 1})}} (a).
\end{align*}
This in tandem with \eqref{eq:tau_structure_eta} and \Cref{prop:length_of_tau} yields that
\begin{align*}
R(2^{j + 1} - 1)
\leq 2 \left\lvert \tau^{(j + l_{N(j)} - q(j) + l_{N(j) + 1} + 1)}(a) \right\rvert
< 2^{j + l_{N(j)} - q(j) + l_{N(j) + 1} + 2},
\end{align*}
which completes the proof.
\end{proof}
\begin{proof} [{Proof of \Cref{thm:G-alpha-repetative}}]
For $n \in \mathbb{N}$, let $j = j(n)$ denote the unique natural number such that $2^{j - 1} \leq n < 2^{j}$. By definition, the repetitive function is monotonically increasing, and so $R(2^{j-1} - 1) \leq R(n) \leq R(2^{j} - 1)$. Combining this with \Cref{lem:repetative_function_bounds}, yields that
\begin{align*}
2^{1 - \alpha} 2^{l_{N(j-1)} - q(j) + l_{N(j-1)+1} - (j - 1) (\alpha - 1)}
\leq \frac{R(n)}{n^{\alpha}}
\leq 2^{2 + \alpha} 2^{l_{N(j)} - q(j) + l_{N(j) + 1} - j(\alpha - 1)}.
\end{align*}
Since
\begin{align*}
l_{N(j)} - q(j) + l_{N(j) + 1} - j(\alpha - 1) \leq l_{N(j)} + l_{N(j) + 1} - (\alpha - 1) \sum_{k = 1}^{N(j) - 1} l_{k},
\end{align*}
we have that $0 < R_{\alpha} < \infty$, if and only if,
\begin{align*}
\limsup_{j \to \infty} \left\lvert l_{N(j)} + l_{N(j) + 1} - (\alpha - 1) \sum_{k = 1}^{N(j) - 1} l_{k} \right\rvert < \infty.
\end{align*}
This completes the proof.
\end{proof}
\subsection{Examples}
\label{sec:examples}
Here we discuss several examples of sequences $l = (l_{n})_{n \in \mathbb{N}}$ for which the associated $l$-Grigorchuk subshift exhibits different order characteristics.
\begin{example}\label{ex:example} \mbox{ }
\begin{enumerate}[itemsep=0.1em,topsep=-0.25em,label=(\roman*)]
\item\label{(1)} If $l$ is a bounded sequence, then the associated $l$-Grigorchuk subshift is $1$-repetitive and $1$-repulsive, and hence, $1$-finite.
\item\label{(3)} Let $b \geq 2$ denote a fixed integer. If $l = ( b^{n} )_{n \in \mathbb{N}}$, then the associated $l$-Grigorchuk subshift is $b^{2}$-repetitive, and $b$-repulsive (and hence $b$-finite). Thus, in general, $\alpha$-repetitive is not equivalent to $\alpha$-repulsive, and hence not to $\alpha$-finite.
\item\label{(4)} Let $( b_{n} )_{n \in \mathbb{N}}$ denote a bounded sequence, and set $l_{n} = 2^{n/2} - b_{n/2}$ if $n$ is even, and set $l_{n} = b_{(n+1)/2}$ otherwise. The associated $l$-Grigorchuk subshift is $2$-repetitive, however, it is not $\alpha$-repulsive nor $\alpha$-finite, for any value of $\alpha \geq 1$.
\item Let $l_{n} = 2^{n/2}-n$ if $n$ is even and $l_{n} = n$ otherwise. The associated $l$-Grigorchuk subshift is neither $\alpha$-repetitive, $\alpha$-repulsive nor $\alpha$-finite for any value of $\alpha \geq 1$.
\item If $l = (l_{n})_{n \in \mathbb{N}}$ is a sequence of natural numbers such that there exists a non-constant polynomial $P$ with $ l_{n}=P(n)$, then the $l$-Grigorchuk subshift is neither \mbox{$\alpha$-repulsive}, \mbox{$\alpha$-finite} nor $\alpha$-repetitive, for any value of $\alpha \geq 1$. This is a consequence of Faulhaber's formula \cite{CG:95}.
\end{enumerate}
\end{example}
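The boundedness condition of \Cref{thm:G-alpha-repetative} can also be checked numerically for \ref{(3)}. The following sketch (illustrative only, not part of the proofs; the function name \texttt{q} is ours) evaluates $l_{n} + l_{n+1} - (\alpha - 1)\sum_{k=1}^{n-1} l_{k}$ for $l_{n} = b^{n}$ with $b = 2$, and confirms that it is constant for $\alpha = b^{2} = 4$ but unbounded for $\alpha = 2$.

```python
# Numerical illustration of the boundedness condition in the repetitivity
# criterion for l_n = b**n:
#   q(n, alpha) = l_n + l_{n+1} - (alpha - 1) * sum(l_1 .. l_{n-1})
# stays bounded precisely when alpha = b**2.

def q(n, alpha, b=2):
    l = lambda k: b ** k
    return l(n) + l(n + 1) - (alpha - 1) * sum(l(k) for k in range(1, n))

# With b = 2 and alpha = b**2 = 4 the quantity is constant ...
values_bounded = [q(n, alpha=4) for n in range(1, 12)]
# ... while for alpha = 2 it grows without bound (it equals 2**(n+1) + 2).
values_unbounded = [q(n, alpha=2) for n in range(1, 12)]
```

For $b = 2$ one has $\sum_{k=1}^{n-1} 2^{k} = 2^{n} - 2$, so $q(n, 4) = 3\cdot 2^{n} - 3(2^{n} - 2) = 6$ for every $n$, matching the $b^{2}$-repetitivity claimed in \ref{(3)}.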
From \Cref{ex:example}~\ref{(3)} and~\ref{(4)}, for $\alpha> 1$, we see that the $l$-Grigorchuk subshifts provide examples which demonstrate that $\alpha$-repulsive, and hence $\alpha$-finite, is not equivalent to \mbox{$\alpha$-repetitive}. This gives rise to the question of how the notions of $\alpha$-repetitive and $\beta$-repulsive, and hence $\beta$-finite, are connected in terms of $l$-Grigorchuk subshifts. We address this in the following proposition; indeed, the connection which we have observed in \Cref{ex:example}~\ref{(1)} and~\ref{(3)} is in fact true in general.
\begin{proposition}
Let $l$ be a sequence of natural numbers. If the $l$-Grigorchuk subshift is $\alpha$-repulsive, and hence $\alpha$-finite, then it is $\alpha^2$-repetitive.
\end{proposition}
\begin{proof}
Observe that, for all $n \in \mathbb{N}$,
\begin{align}\label{eq:bounds_equivalence_1}
l_{n+2} + (1 - \alpha) \sum_{i = 1}^{n+1} l_{i}
= l_{n+2} + l_{n+1}+ \left(1 - \alpha \left( 1+\frac{l_{n+1}}{\sum_{i = 1}^{n} l_{i}} \right) \right)\sum_{i = 1}^{n} l_{i}.
\end{align}
By the hypothesis and \Cref{thm:G-alpha-free}, we have that $\limsup_{n \to \infty} \lvert l_{n+1} + (1 - \alpha) \sum_{i = 1}^{n} l_{i} \rvert$ is a finite real number. In the following, we denote this value by $c$. Given $\epsilon >0$, there exists an $N \in \mathbb{N}$, such that, for all $n\geq N$,
\begin{align*}
-c - \epsilon \leq l_{n+1} + (1 - \alpha) \sum_{i = 1}^{n} l_{i} \leq c + \epsilon,
\quad \text{and hence,} \quad
\alpha - \frac{ c + \epsilon}{\sum_{i=1}^{n}l_i} \leq 1 + \frac{l_{n+1}}{\sum_{i=1}^{n}l_i} \leq \alpha + \frac{ c + \epsilon}{\sum_{i=1}^{n}l_i}.
\end{align*}
This in tandem with \Cref{eq:bounds_equivalence_1} yields for $\delta\geq 1$ that
\begin{align*}
-\delta(c+\epsilon) + l_{n+2} + (1 - \delta) \sum_{i = 1}^{n+1} l_{i}
\leq l_{n+2} +l_{n+1} + (1 - \delta \alpha) \sum_{i = 1}^{n} l_{i}
\leq \delta(c+\epsilon) + l_{n+2} + (1 - \delta) \sum_{i = 1}^{n+1} l_{i},
\end{align*}
for all $n\geq N$. This in combination with the hypothesis of the proposition and the \Cref{thm:G-alpha-free,thm:G-alpha-repetative} yields the required result.
\end{proof}
\subsection{Aperiodicity, Complexity and Ergodicity}\label{sec:section4.4}
We now turn to computing the value of the complexity function for a given $l$-Grigorchuk subshift. Knowing the behaviour of the complexity function allows us to conclude that any $l$-Grigorchuk subshift is aperiodic and uniquely ergodic. We note that in \cite{LenzSell:2016} explicit formulas for the complexity and the palindromic complexity functions have also been obtained independently. The proof of the following theorem is a generalisation of that given in \cite{GLN:16,GLN:16b}, where the case when $l$ is the constant one sequence is considered.
In the sequel, for ease of notation, for $n \in \mathbb{N}_{0}$, we set $M(n)\coloneqq \lvert\tau^{(\sum_{i=1}^{n} l_i )} (a) \rvert=2^{1+\sum_{i=1}^{n} l_i }-1$.
\begin{theorem}\label{thm:complexity}
For $m \in \mathbb{N}_{0}$ and $0 \leq r < M({m+1}) - M(m)$, the $l$-Grigorchuk subshift satisfies,
\begin{align*}
p(M(m) + 1 + r) =
\begin{cases}
2M(m)+M({m-1}) +3r & \mbox{if} \; 0 \leq r< M(m)- M({m-1}),\\
3M(m) +2r & \mbox{if} \; M(m) - M(m-1) \leq r < M(m+1) - M(m).
\end{cases}
\end{align*}
\end{theorem}
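As an illustrative sanity check (not part of the proof), the formula can be evaluated for the constant sequence $l \equiv 1$, where $M(n) = 2^{n+1} - 1$; the successive increments $p(n+1) - p(n)$ then lie in $\{2, 3\}$, consistent with the $3$-special factors of \Cref{lem:lemma534} below.

```python
# Illustrative evaluation of the complexity formula for the constant
# sequence l = (1, 1, ...), where M(n) = 2**(n + 1) - 1.  (This parameter
# choice is ours, made only to visualise the formula.)

def M(n):
    return 2 ** (n + 1) - 1

def p(n):
    """Complexity p(n) for n = M(m) + 1 + r, restricted here to m >= 1."""
    m = 1
    while n > M(m + 1):
        m += 1
    r = n - M(m) - 1
    if r < M(m) - M(m - 1):
        return 2 * M(m) + M(m - 1) + 3 * r
    return 3 * M(m) + 2 * r

increments = [p(n + 1) - p(n) for n in range(M(1) + 1, M(6))]
```

Note that the two branches agree at $r = M(m) - M(m-1)$ and that $p$ jumps by $2$ across $n = M(m+1)$, so $p$ is strictly increasing with slope at most $3$.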
For the proof of this result we will use the following lemma.
\begin{lemma}
\label{lem:lemma534}
The factor $\tau^{(j)}(a)$ is $3$-special for every $j\in\mathbb{N}_0$.
\end{lemma}
\begin{proof}
This follows from the structure of $\eta$ given in \eqref{eq:tau_structure_eta}.
\end{proof}
\begin{proof}[{Proof of \Cref{thm:complexity}}]
For $m=1$, every word of length $\lvert \tau^{( l_{1} )}(a) \rvert + 1$ in $\mathcal{L}(\eta)$ is a factor of at least one of the following words belonging to $\mathcal{L}(\eta)$.
\begin{align*}
\tau^{(l_1) }(a) x \tau^{(l_1)}(a)& = (\underbrace{a, x, a, \dots, a, x, a}_{\tau^{(l_1) }(a)}, x, \underbrace{a, x, a, \dots, a, x, a}_{\tau^{(l_1)}(a)})\\
\tau^{(l_1) }(a) y \tau^{(l_1)}(a)& = (\underbrace{a, x, a, \dots, a, x, a}_{\tau^{(l_1) }(a)}, y, \underbrace{a, x, a, \dots, a, x, a}_{\tau^{(l_1)}(a)})\\
\tau^{(l_1) }(a) z \tau^{(l_1)}(a)& = (\underbrace{a, x, a, \dots, a, x, a}_{\tau^{(l_1) }(a)}, z, \underbrace{a, x, a, \dots, a, x, a}_{\tau^{(l_1)}(a)})
\end{align*}
This yields that $p( \lvert \tau^{(l_1) }(a) \rvert + 1)=2 \lvert \tau^{(l_1)}(a) \rvert + \lvert \tau^{(0)}(a) \rvert$. In the same way, for a fixed $m \in \mathbb{N}$, every word of length $\lvert \tau^{(\sum_{i=1}^{m} l_i )}(a) \rvert + 1$ in $\mathcal{L}(\eta)$ is a factor of at least one of the following words
\begin{align*}
\tau^{(\sum_{i=1}^{m} l_i )}(a) x \tau^{(\sum_{i=1}^{m} l_i )}(a), \quad
\tau^{(\sum_{i=1}^{m} l_i )}(a) y \tau^{(\sum_{i=1}^{m} l_i )}(a) \quad \text{and} \quad
\tau^{(\sum_{i=1}^{m} l_i )}(a) z \tau^{(\sum_{i=1}^{m} l_i )}(a),
\end{align*}
which are all contained in $\mathcal{L}(\eta)$ by \eqref{eq:tau_structure_eta}. Additionally, we have
\begin{align*}
\tau^{(\sum_{i=1}^{m} l_i )}(a) \beta^{(\sum_{i=1}^{m- 1} l_i)} \tau^{(\sum_{i=1}^{m} l_i )}(a)
= \tau^{(\sum_{i=1}^{m} l_i )}(a)\underbrace{
\underbrace{\beta^{(\sum_{i=1}^{m- 1} l_i)} \tau^{(\sum_{i=1}^{m-1} l_i )}(a)}
\cdots
\underbrace{\beta^{(\sum_{i=1}^{m- 1} l_i)}
\tau^{(\sum_{i=1}^{m-1} l_i )}(a)}}_{2^{l_{m}} - \text{times}}.
\end{align*}
With this we obtain that, for all $m\in\mathbb{N}$,
\begin{align}
\label{eq:mylab357}
p( \lvert \tau^{(\sum_{i=1}^{m} l_i )}(a) \rvert + 1) \leq 2 \lvert \tau^{(\sum_{i=1}^{m} l_i )}(a) \rvert +\lvert \tau^{(\sum_{i=1}^{m-1} l_i )}(a) \rvert.
\end{align}
By Lemma \ref{lem:lemma534} the factor $\tau^{(\sum_{i=1}^{m} l_i )}(a)$ is $3$-right special, for all $m\in\mathbb{N}$, and so
\begin{align*}
\tau^{(1+\sum_{i=1}^{m} l_i)}(a)=\tau^{(\sum_{i=1}^{m} l_i )}(a)\beta^{(\sum_{i=1}^{m} l_{i})}\tau^{(\sum_{i=1}^{m} l_i )}(a)
\end{align*}
is $3$-right special as it is a suffix of $\tau^{(\sum_{i=1}^{m+1} l_i )}(a)$. Notice that
\begin{align*}
\tau^{(\sum_{i=1}^{m} l_i )}(a)\beta^{(\sum_{i=1}^{m-1} l_{i})}\tau^{(\sum_{i=1}^{m} l_i )}(a),
\end{align*}
has the same length as $\tau^{(1+\sum_{i=1}^{m} l_i )}(a)$, but it is not right special because, by \eqref{eq:tau_structure_eta}, the only possible right-extension is
\begin{align*}
\tau^{(\sum_{i=1}^{m} l_i )}(a)\beta^{(\sum_{i=1}^{m-1} l_{i})}\tau^{(\sum_{i=1}^{m} l_i )}(a)\beta^{(\sum_{i=1}^{m} l_{i})}.
\end{align*}
However, due to the structure of $\eta$ given in \Cref{Prop:Prop1} and \eqref{eq:tau_structure_eta}, the prefix
\begin{align*}
\tau^{(\sum_{i=1}^{m} l_i )}(a)\underbrace{
\underbrace{\beta^{(\sum_{i=1}^{m- 1} l_i)}\tau^{(\sum_{i=1}^{m-1} l_i )}(a)}
\cdots
\underbrace{\beta^{(\sum_{i=1}^{m- 1} l_i)}\tau^{(\sum_{i=1}^{m-1} l_i )}(a)}}_{(2^{l_{m}} - 1) - \text{times}},
\end{align*}
whose length is equal to $2 \lvert \tau^{(\sum_{i=1}^{m} l_i )}(a)\rvert - \lvert\tau^{(\sum_{i=1}^{m-1} l_i )}(a)\rvert$, is $2$-right special. Further, it is not a suffix of $\tau^{(\sum_{i=1}^{m+1} l_i )}(a)$. Using these right special words and their respective suffixes of length strictly greater than $\lvert \tau^{(\sum_{i=1}^{m} l_i )}(a) \rvert$ we obtain that
\begin{align}\label{eq:mylab358}
\hspace{-0.35em}p(\lvert \tau^{(\sum_{i=1}^{m+1} l_i )}(a) \rvert + 1) - p( \lvert \tau^{(\sum_{i=1}^{m} l_i )}(a) \rvert + 1) \geq 2 \lvert \tau^{(\sum_{i=1}^{m+1} l_i )}(a) \rvert - \lvert \tau^{(\sum_{i=1}^{m} l_i )}(a) \rvert - \lvert \tau^{(\sum_{i=1}^{m-1} l_i )}(a) \rvert.
\end{align}
The result follows by combining $\eqref{eq:mylab357}$ and \eqref{eq:mylab358} together with an inductive argument.
\end{proof}
\begin{corollary}\label{cor:Gri-aperiodic}
Every $l$-Grigorchuk subshift is aperiodic.
\end{corollary}
\begin{proof}
By \Cref{prop:minimality} we know that every $l$-Grigorchuk subshift is minimal. Therefore, if an $l$-Grigorchuk subshift was not aperiodic, then its complexity function would be bounded, contradicting \Cref{thm:complexity}.
\end{proof}
\begin{corollary}\label{Cor:unique_ergodic}
Every $l$-Grigorchuk subshift is uniquely ergodic.
\end{corollary}
\begin{proof}
Given an $l$-Grigorchuk subshift $\Omega(\eta)$ we define the associated two-sided subshift $\Omega'(\eta)$ by $\Omega'(\eta) \coloneqq \{ \omega \in \{ a, x, y, z \}^{\mathbb{Z}} \colon \mathcal{L}(\omega) \subseteq \mathcal{L}(\eta) \}$. Here $\{ a, x, y, z \}^{\mathbb{Z}}$ denotes the set of all bi-infinite words over the alphabet $\{ a, x, y, z \}$ equipped with the discrete product topology. Since $\eta$ is uniformly recurrent (see \Cref{prop:minimality}), we have that $\Omega'(\eta)$ is minimal. (For the latter result, see for instance \cite{Combinatorics:2010}.) The existence of an invariant measure supported on $\Omega'(\eta)$ is guaranteed by \cite{B:1984/85}. By \Cref{lem:lemma534,thm:complexity} and \cite[Theorem 2.2]{B:1984/85}, where in this latter result we set $\alpha = 4$ and $k = 1$, it follows that $\Omega'(\eta)$ has at most one ergodic measure $\mu$. Therefore, $(\Omega'(\eta),\sigma)$ is a uniquely ergodic dynamical system. Since as a dynamical system, $(\Omega(\eta),\sigma)$ is a topological factor of $(\Omega'(\eta),\sigma)$ via the factor map $\pi \colon \Omega'(\eta)\to \Omega(\eta)$ given by $\pi(\dots, x_{-2}, x_{-1}, x_{0}, x_{1}, x_{2}, \dots) = (x_{1}, x_{2}, \dots)$, it follows that also $(\Omega(\eta),\sigma)$ is uniquely ergodic. To see this fix a continuous function $f\colon\Omega(\eta)\to \mathbb{R}$ and $x\in \Omega(\eta)$. Then there exists $y\in \Omega'(\eta)$ with $x=\pi(y)$ and we have
\[
\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}f\circ \sigma^{k}(x)=
\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}f\circ \sigma^{k}\circ \pi(y)=
\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}f\circ \pi\circ\sigma^{k}(y)=\int f \;\mathrm{d}\mu\circ\pi^{-1}.
\]
This characterises unique ergodicity as stated e.g. in \cite[Theorem 6.19]{W:2013}.
\end{proof}
Alternatively, one can show that any $l$-Grigorchuk subshift is a regular Toeplitz subshift, and so it is uniquely ergodic, see \cite{PK:2003}.
\begin{remark}\label{rmk:last_remark}
In most sections of this article, we assumed that $l_i\ne 0$ for all $i\in\mathbb{N}$. We believe that all of our results hold under slightly weaker assumptions, namely that if $l_{i} = 0$, for some index $i$, then $l_{i-1}$ and $l_{i+1}$ are non-zero, and the homomorphisms $\tau_x$, $\tau_y$ and $\tau_z$ all occur infinitely often in the construction of $\eta$.
\end{remark}
\section{Introduction}\label{sec:introduction}
According to a report by the World Health Organization~\cite{who}, cardiac arrest and cardiovascular diseases (CVDs) are a leading cause of mortality worldwide. According to the statistics in this report, 17.3 million people died in 2008 and 15.2 million in 2016 due to ischaemic heart disease and stroke, and the number predicted for 2030 is 23.3 million, an alarming indicator for medical care. Cardiovascular diseases are a group of syndromes mainly concerning the heart and blood vessels, and in combination with atherosclerosis they can result in a stroke. Early detection of cardiac anomalies and relevant preventive measures can significantly decrease the mortality rate caused by CVDs and the associated healthcare cost. It is imperative to mention that 47\% of CVD-related deaths occur outside cardiac care units. Out-of-hospital cardiac monitoring has therefore triggered a massive interest in the development of wearable devices and sensors for proactive healthcare monitoring systems.\par
Whenever a person encounters a cardiac anomaly or a symptom pertaining to CVD, a cardiologist or practitioner usually checks physiological parameters such as blood pressure, oxygen saturation, and heart rate, and analyzes the electrocardiogram (ECG). Even patients admitted to a cardiac care unit after a cardiac arrest are monitored continuously for the aforesaid physiological signals. However, clinicians require numerous devices, such as an ECG recorder, a Holter monitor, and a pulse oximeter, for the observation of these parameters after a CVD event. Thanks to the latest industrial advancements and to the research community, the conception and miniaturization of biomedical sensors make such devices capable of collecting and transmitting health data remotely using wireless communication. These devices commonly record numerous physiological signals, for instance oxygen saturation, temperature, heart rate, blood pressure, the ElectroMyoGram (EMG), and the ElectroCardioGram (ECG).
In a traditional healthcare monitoring system, these wireless sensors are employed in a WBAN (Wireless Body Area Network). The sensors are affixed to specific parts of the human body for the collection of the relevant physiological parameter, with wearable comfort and convenience in mind. They transmit the collected data to a cloud, to other computational resources, or to healthcare professionals via a gateway device (e.g., a smartphone or tablet) for further insights, expert opinions, analysis, and automated anomaly detection, as depicted in Figure~\ref{fig:wban}.
\par
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.60]{./Definitions/figures/pdf/WBAN.pdf}
\caption{A WBAN architecture for ECG monitoring}
\label{fig:wban}
\end{figure}
The recent technical advancements in integrated circuits and WBANs are improving quality of life, remote monitoring, and real-time healthcare services across numerous applications, and body sensors and wearable devices are widely used for CVD monitoring. However, these devices have not yet reached their full efficacy and face specific challenges. One of the prominent challenges with respect to CVD is the automatic and early detection of cardiac anomalies. Early detection and prediction of an associated cardiac anomaly can save lives and reduce healthcare costs substantially.
There are various kinds of CVDs. For instance, arrhythmia is a CVD related to irregular heartbeats. For a healthy person, the heart rate is between 60 and 100 beats per minute, but sometimes the heart starts beating at a very fast or very slow pace, which is called arrhythmia or dysrhythmia. Arrhythmia is accordingly classified into tachycardia (very fast heartbeat) and bradycardia (very slow heartbeat). Arrhythmia impacts the rhythm in various forms, commonly Premature Atrial Contraction (PAC) and Premature Ventricular Contraction (PVC), which are triggered by the premature expulsion of electrical impulses in the atrium and ventricle, respectively~\cite{PVC}~\cite{PAC}. Myocardial Infarction (MI), also known as a heart attack, is another severe heart anomaly. MI is mostly triggered by an inadequate supply of blood and oxygen to the heart muscles, the main reason for which is narrowed or blocked arteries. MI is an acute ischaemic heart disease that damages a portion of the heart muscle, a process termed necrosis. For more details about CVDs, see~\cite{morris2009abc}. \par
To verify a CVD symptom, a cardiologist or clinician advises a series of examinations. Usually, the ECG is the first diagnostic tool used to check for irregular rhythms. Generally, the ECG is transcribed on a graph which represents the contraction and relaxation of cardiac muscles due to the depolarization and repolarization of myocardial cells. These electrical changes are recorded via electrodes placed on the limbs and chest wall of a patient.
The most commonly used ECG technique is the 12-lead ECG~\cite{morris2009abc}. The leads are placed on 10 sites of the human body for the recording of 12 ECG signals. The 12 leads are described as:
\begin{itemize}[leftmargin=*,label={--}]
\item V1, V2, V3, V4, V5, V6 collectively called as Precordial Leads.
\item I, II, III known as Limb Leads.
\item aVR, aVL, aVF recognized as Augmented Limb Leads.
\end{itemize}
\par
Each of the 12 leads views the heart from a different angle and gathers the corresponding wave signals. The recorded ECG signal comprises five waves, represented as $P$, $Q$, $R$, $S$, and $T$. Figure~\ref{fig:ecg} portrays a general one-cycle ECG waveform containing the main intervals and segments of the corresponding five waves. Atrial depolarization generates the $P$ wave, while the $PR$ segment depicts the duration of atrioventricular (AV) conduction. The $QRS$ complex represents ventricular depolarization, and the $ST$-$T$ segment represents ventricular repolarization. The ECG plays a vital role in the diagnosis and treatment of cardiac abnormalities, so early acquisition and correct interpretation of the ECG are necessary. However, it is usually recorded by a professional once the patient arrives at a hospital, or using a portable device called a Holter ECG, which a patient takes home for 24 or 48 hours of continuous recording and then brings back to the doctor for in-depth analysis. Generally, these ECG monitoring approaches record the data for a short period (usually minutes to hours), and some episodic symptoms may not occur during the monitoring window. Furthermore, the ECG is generally recorded after the initial symptoms have been encountered, so some damage to the heart may already have occurred before the patient reports to a cardiologist. It is therefore deemed necessary to have ECG recording systems capable of recording data remotely with real-time analysis, early detection of anomalies, and alerting of the relevant healthcare providers.\par
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.60]{./Definitions/figures/pdf/ECG_2.pdf}
\caption{A typical heartbeat (one-cycle ECG)} \label{fig:ecg}
\end{figure}
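The $R$ wave is the most prominent landmark of this waveform, and locating it is the usual first step of automatic ECG analysis. A minimal threshold-based R-peak detector can be sketched as follows (a simplified, Pan--Tompkins-style illustration; the threshold factor and refractory period are illustrative choices, not values from this work):

```python
# Minimal R-peak detector sketch (illustrative; simplified Pan-Tompkins style).
# Assumptions: `signal` is a baseline-corrected single-lead ECG sampled at
# `fs` Hz; the 0.5 threshold factor and 200 ms refractory period are ours.

def detect_r_peaks(signal, fs, refractory_s=0.2, threshold_factor=0.5):
    # Emphasise the sharp QRS slopes with the squared first difference.
    energy = [0.0] + [(b - a) ** 2 for a, b in zip(signal, signal[1:])]
    threshold = threshold_factor * max(energy)
    refractory = int(refractory_s * fs)
    peaks, last = [], -refractory
    for i in range(1, len(energy) - 1):
        is_local_max = energy[i] >= energy[i - 1] and energy[i] > energy[i + 1]
        if is_local_max and energy[i] >= threshold and i - last >= refractory:
            peaks.append(i)
            last = i
    return peaks

# Toy usage: three unit spikes at samples 100, 350 and 600 (fs = 250 Hz).
example = [0.0] * 800
for pos in (100, 350, 600):
    example[pos] = 1.0
r_peaks = detect_r_peaks(example, fs=250)
```

The refractory window suppresses double detections on the falling edge of the same QRS complex.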
The main contribution of this research work is the detection and classification of CVDs into four classes: Normal, PAC, PVC, and MI. The presented work uses the Discrete Wavelet Transform (DWT) for preprocessing of the ECG signal and extracts features using the UWT. After filtering and feature extraction, classification is performed using a Bayesian Belief Network. To remove false alarms, a Tukey-box temporal-consistency analysis is applied: by extracting features from sliding windows retaining the latest $N$ past values, it measures the deviation of the current heartbeat from robust statistical values.\par
The remaining part of the paper is structured as follows: Section~\ref{sec:related} reviews the related work, Section~\ref{sec:approach} presents the proposed approach for CVD detection and classification, Section~\ref{sec:results} reports experimental results on real ECG data sets with CVDs, and Section~\ref{sec:conclusion} summarizes the research work.\par
\section{Related work} \label{sec:related}
Numerous vendors and nano-tech industries have introduced different varieties of WBANs and ECG monitoring systems over the last decade. These ECG systems and WBANs are widely used in healthcare applications for the collection of patient data and its transmission to the relevant healthcare service provider, allowing medical practitioners to monitor their patients' health remotely. A thorough investigation shows that, although the use of these devices is prevailing in healthcare, many improvements can still be incorporated. These ECG devices are not capable of processing the ECG signals, nor do they provide a mechanism to detect and notify a CVD anomaly. Another limitation is the lack of Heart Rate Variability (HRV) analysis. These are the main limitations of existing healthcare monitoring systems, for instance
LifeMonitor~\cite{Equivital}, MyHeart~\cite{luprano2006combination}, CodeBlue~\cite{malan2004codeblue}, LiveNet~\cite{sung2004livenet}, MEDiSN~\cite{ko2010medisn}, AliveCor~\cite{saxon2013ubiquitous}, PhysioMem~\cite{PhysioMem}, etc.\par
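Since HRV analysis is one of the capabilities missing from the devices above, it is worth recalling that, once R-peaks are located, standard time-domain HRV metrics reduce to simple statistics over RR intervals. A minimal sketch (metric definitions per common convention; the function name and units are ours):

```python
import math

# Time-domain HRV metrics from R-peak sample indices (illustrative sketch).
# SDNN: standard deviation of RR intervals; RMSSD: root mean square of
# successive RR-interval differences.  Both are reported in milliseconds.

def hrv_time_domain(r_peaks, fs):
    rr_ms = [(b - a) * 1000.0 / fs for a, b in zip(r_peaks, r_peaks[1:])]
    mean_rr = sum(rr_ms) / len(rr_ms)
    sdnn = math.sqrt(sum((x - mean_rr) ** 2 for x in rr_ms) / len(rr_ms))
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return {"mean_rr": mean_rr, "sdnn": sdnn, "rmssd": rmssd}
```

For a perfectly regular rhythm both SDNN and RMSSD are zero; elevated RMSSD reflects strong beat-to-beat variation.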
Lately, a few approaches have been proposed to address these gaps by providing automatic anomaly detection and alarm generation for irregular rhythms in WBANs, with real-time remote ECG recording and monitoring. The HeartSaver system, proposed in~\cite{sankari2011heartsaver}, has similar intentions: it analyzes the ECG in real time and can detect the cardiac pathologies atrioventricular block, atrial fibrillation (A-fib), and myocardial infarction. The authors in~\cite{ashrafuzzaman2013heart} introduced a mobile application which uses the phone camera for heart attack detection by placing the index finger on the camera. This approach works only with the heart rate, calculated from blood-flow peaks; it is unable to detect CVDs, because CVD detection involves complex feature representation and analysis of the ECG signal.\par
A wireless real-time ECG monitoring system named RECAD was proposed for arrhythmia detection in~\cite{zhou2006real}. Another work, presented in~\cite{oresko2010wearable}, uses a smartphone platform for the detection of cardiac anomalies in real time; it classifies CVDs into paced beat (PACE), PFUS beat, right bundle branch block beat (RBBB), and PVC. In addition to the approaches discussed above, numerous portable and unobtrusive devices providing arrhythmia detection amenities are commonly available on the market, for instance BodyGuardian~\cite{BodyGuardian}, CardioNet~\cite{cardionet}, and Smartheart~\cite{Smartheart}. However, these devices are unable to detect first-hand or initial symptoms relevant to PAC, PVC, and MI.
The authors in~\cite{jovic2011electrocardiogram} proposed a classification approach based on the analysis of 11 HRV features to distinguish between normal and abnormal ECG, further classified into four categories: supraventricular arrhythmia, congestive heart failure, arrhythmia, and normal heartbeat. Seven clustering and classification algorithms were evaluated on ECG records from online databases. The three most accurate methods reported for the binary classes are Random Forest (RF) with 99.7\%, Artificial Neural Network (ANN) with 99.1\%, and Support Vector Machines (SVM) with 98.9\% accuracy; in the case of four classes, SVM with 98.4\%, BNC with 99.4\%, and RF with 99.6\%.
In~\cite{jadhav2010artificial}, an Artificial Neural Network (ANN) classifier is used to detect arrhythmia in 12-lead ECG data. The authors used the UCI arrhythmia dataset to train three different classifiers. The first two classifiers attained an accuracy of 86.67\% with a reported sensitivity of 3.75\%, while the third classifier achieved a specificity of 93.1\%. However, the problem with the aforesaid approaches is that they require high computational power and are not suitable for real-time wireless systems.\par
Another approach, for the automatic detection of Coronary Artery Disease (CAD), was presented in~\cite{giri2013automated}. It used four classifiers (PNN, KNN, SVM, and GMM) and dimensionality reduction techniques, for instance ICA, LDA, and PCA. The authors tried several combinations of classifiers and reduction techniques; according to their results, the combination of ICA with GMM outperformed the others with the highest accuracy of 96.8\%.
All the approaches and models reported in this section yielded very good accuracies. However, they are not feasible for real-time wireless systems due to their requirement of high computational power and resources.\par
In this research work, we present a system capable of monitoring and analyzing the ECG signal remotely with a blend of supervised machine learning techniques and statistical analysis.
First, we use signal processing techniques to filter the captured ECG from noise and extract nine heartbeat features; a Bayesian Network Classifier is then used to distinguish a normal heartbeat from the three anomalies. The main contributions are to:
\begin{itemize}[leftmargin=*,label={--}]
\item Monitor real-time ECG data and process it remotely in a manner feasible for a wireless ECG monitoring device.
\item Reduce false alarms using an optimized prediction model.
\item Detect and predict the cardiac anomalies PVC, PAC, and MI.
\end{itemize}
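As a simplified stand-in for the Bayesian classification step described above, a Gaussian naive Bayes classifier over per-beat features can be sketched as follows (the actual system uses a Bayesian Belief Network; the two features and toy training data below are purely illustrative):

```python
import math

# Simplified stand-in for the Bayesian classification step: Gaussian naive
# Bayes over per-beat features.  (The actual system uses a Bayesian Belief
# Network; this toy sketch, its two features and its data are illustrative.)

def fit(samples):
    """samples: {label: [[f1, f2, ...], ...]} -> per-class (means, vars, prior)."""
    total = sum(len(rows) for rows in samples.values())
    model = {}
    for label, rows in samples.items():
        n, dim = len(rows), len(rows[0])
        means = [sum(r[d] for r in rows) / n for d in range(dim)]
        variances = [max(sum((r[d] - means[d]) ** 2 for r in rows) / n, 1e-9)
                     for d in range(dim)]
        model[label] = (means, variances, len(rows) / total)
    return model

def predict(model, x):
    def log_post(means, variances, prior):
        ll = math.log(prior)
        for xi, mu, var in zip(x, means, variances):
            ll += -0.5 * math.log(2 * math.pi * var) - (xi - mu) ** 2 / (2 * var)
        return ll
    return max(model, key=lambda label: log_post(*model[label]))

# Toy data: (illustrative) per-beat features such as an RR ratio and a rate.
train = {
    "normal": [[0.80, 60.0], [0.90, 62.0], [0.85, 61.0]],
    "pvc":    [[0.40, 95.0], [0.45, 100.0], [0.50, 98.0]],
}
model = fit(train)
```

A full Bayesian network additionally models dependencies between features, which naive Bayes deliberately ignores.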
\section{Proposed Approach} \label{sec:approach}
Our proposed system comprises five layers. Wireless sensors at the first layer measure the ECG and send the measurements to a smartphone in real time, as depicted in Figure~\ref{fig:architecture}; we have used a real development environment for real-time monitoring and remote data transmission with these sensors. Brief details of the five layers are given below:
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.7]{./Definitions/figures/pdf/System_Architecture.pdf}
\caption{Architecture of the proposed system} \label{fig:architecture}
\end{figure}
\begin{itemize}[leftmargin=*,labelsep=5.8mm]
\item Sensor Layer: the first layer contains the installed ECG sensors with wireless functionality to record, sample, and transmit ECG data. Contrary to the traditional 12-lead ECG systems available in hospitals, these miniature sensors are manufactured with portability and mobility in mind and are specifically designed for smartphone applications. They contain two or three leads for ECG monitoring, and sometimes a single lead, as in the latest device introduced by Apple.
\item Communication Layer: this layer uses a smartphone as a gateway to receive and transmit the sampled ECG signal over Bluetooth. The ECG signal captured by the sensors is transmitted, along with alarms and notifications, to the respective cardiologist or care provider. The transmitted data also contains information about the physical condition and position of the patient. Smartphones are a prominent choice here because they can act as a gateway.
\item Application Layer: this layer represents the mobile application, which can be developed for iOS and Android smartphones. The application and smartphone resources are utilized for the preprocessing of the ECG signal and the extraction of its features. Based on the extracted features, the application detects cardiac anomalies for further expert opinion.
\item Data Layer: this layer contains the EHR data and deals with database connectivity. The associated healthcare databases residing here are responsible for the storage and retrieval of patient data, which constitutes the patient's medical history, present illness state, general information, the captured ECG, alerts, and extracted parameters.
\item User Layer: this layer provides the end-user interface in the smartphone application. It is mainly centered on patients and healthcare providers, and it is responsible for delivering the alerts and CVD detections which prompt the cardiologist to provide the appropriate medical assistance to the patient as needed.
\end{itemize}
Our approach mainly revolves around the application layer.
The main contribution of our proposed work comprises various steps: preprocessing of the sampled ECG using the DWT, extraction of the features of the ECG signal ($P$, $Q$, $R$, $S$, $T$) using the UWT, and analysis and prediction of CVDs on these extracted features with a blend of machine learning, using a supervised classification method and a temporal analysis. A probabilistic model, a Bayesian Network, is used for the detection of cardiac anomalies and the discrimination of MI, PVC, and PAC from a regular heartbeat. A dual verification and confirmation mechanism checks the validity of each detected cardiac anomaly using Tukey-box analysis. The purpose of using the Tukey-box check as a wrapper around the Bayesian Network is to make the system more robust, minimize false alarms, and add confidence to prediction and detection. The Bayesian Network is trained on real ECG data, and functionality is added to make it reliable and adaptive to new ECG features. The flowchart of the proposed approach is given in Figure~\ref{fig:design}.\par
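The Tukey-box consistency check can be sketched as follows: keep the last $N$ feature values and confirm an anomaly only when the new value falls outside the fences $[Q_1 - k\cdot\mathrm{IQR},\, Q_3 + k\cdot\mathrm{IQR}]$ (the window size $N = 8$ and the conventional factor $k = 1.5$ below are illustrative choices):

```python
from collections import deque

# Sketch of the Tukey-box temporal-consistency check: keep the last N
# feature values and flag the current one if it falls outside the fences
# [Q1 - k*IQR, Q3 + k*IQR].  N = 8 and k = 1.5 are conventional choices.

def quartiles(values):
    s = sorted(values)
    def pct(p):  # linear interpolation between closest ranks
        idx = p * (len(s) - 1)
        lo, hi = int(idx), min(int(idx) + 1, len(s) - 1)
        return s[lo] + (idx - lo) * (s[hi] - s[lo])
    return pct(0.25), pct(0.75)

class TukeyChecker:
    def __init__(self, window=8, k=1.5):
        self.history = deque(maxlen=window)
        self.k = k

    def is_outlier(self, value):
        if len(self.history) < self.history.maxlen:
            self.history.append(value)
            return False  # not enough context yet to judge
        q1, q3 = quartiles(self.history)
        iqr = q3 - q1
        outlier = value < q1 - self.k * iqr or value > q3 + self.k * iqr
        if not outlier:
            self.history.append(value)  # only consistent beats extend the window
        return outlier

# Usage: fill the window with regular RR-interval-like values, then test.
checker = TukeyChecker(window=8)
for rr in (60, 61, 59, 60, 62, 60, 61, 60):
    checker.is_outlier(rr)
```

Rejecting outliers from the window keeps the fences anchored to the patient's recent baseline, which is what suppresses repeated false alarms.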
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.70]{./Definitions/figures/pdf/Design_Steps_v2.pdf}
\caption{Design steps of the application layer} \label{fig:design}
\end{figure}
\subsection{Pre-Processing}\label{sec:processing}
Pre-processing is required to remove the noise from the
ECG signal acquired from WBAN sensors. The existing noise interferes with cardiac equipment operating in the frequency range of [0.01-150] Hz. The most common interfering noise signals are:
\begin{itemize}[leftmargin=*,label={--}]
\item Electromyographic (EMG) noise signals range from 25Hz to 100Hz.
\item Electrode motion artifacts varying between 1 Hz and 10 Hz.
\item Baseline wandering with a frequency below 0.5 Hz, caused by the movement of the patient's body parts and the natural respiration process. This noise displaces the iso-electric line of the measured ECG. It can be suppressed without loss in the original signal by a high-pass digital filter or by a standard Wavelet Transform (WT). In this study, we use a 0.5 Hz FIR (Finite Impulse Response) high-pass digital filter.
\item Power line noise: the electronic circuits of ECG sensors generate this kind of noise.
\end{itemize} \par
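As a concrete illustration of baseline-wander suppression, the following Python sketch estimates the slow (below roughly 0.5 Hz) baseline with a moving average and subtracts it. This is a simplified stand-in for the 0.5 Hz FIR high-pass filter used in this work; the function name and window length are illustrative assumptions, not the actual implementation.

```python
import numpy as np

def remove_baseline(sig, fs, win_s=1.0):
    """Subtract a moving-average estimate of the slow baseline drift.

    A simplified stand-in for a 0.5 Hz FIR high-pass filter: a 1-second
    moving average keeps only components well below ~1 Hz, so removing
    it from the signal suppresses baseline wander.
    """
    w = int(fs * win_s)
    kernel = np.ones(w) / w
    baseline = np.convolve(sig, kernel, mode="same")
    return sig - baseline

# A linear drift is almost entirely removed away from the window edges.
fs = 250                                   # sampling rate in Hz
drift = np.linspace(0.0, 1.0, 2 * fs)      # pure baseline drift, no ECG
detrended = remove_baseline(drift, fs)
```

In practice the FIR filter is preferable near the record edges, where a plain moving average is biased; the sketch only conveys the idea of separating the sub-0.5 Hz component from the ECG content.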
Among the above, baseline wandering and power line interference are the two primary noise signals that substantially degrade the detection accuracy of the ECG features; in particular, they badly affect the QRS complex. Other noises, including EMG and electrode motion artifacts, are wide-band and usually complex stochastic processes that distort the ECG signal; they are the most difficult to remove. Traditional digital filtering schemes are unable to remove these noises without potential interference with the ECG frequency content. In contrast to traditional digital filters, we use the well-known Wavelet Transform (WT) for time-frequency analysis; it is applicable in many domains, especially denoising, due to its property of associating frequency content with timestamps. There are two main types of WT: the Discrete Wavelet Transform (DWT) and the Continuous Wavelet Transform (CWT). For the preprocessing of the data, we apply the DWT to account for the discrete nature of the ECG signal.\par
In DWT, the signal $S$ is convolved and decimated by passing it through a series of filters. The signal $S$ is passed concurrently through a high-pass filter $H$ and a low-pass filter $L$. The outputs of these filters yield the approximation coefficients $A$ for the low-pass filter and the detail coefficients $D$ for the high-pass filter. Each output is then sub-sampled by a factor of two, which removes half of the frequencies, as specified in the following equations.
\begin{equation}\label{eq1}
{s_L}[n] = {A_1}[n] = \sum\limits_{k = - \infty }^\infty {s[k]L[2n - k]}
\end{equation}
\begin{equation}\label{eq2}
{s_H}[n] = {D_1}[n] = \sum\limits_{k = - \infty }^\infty {s[k]H[2n - k]}
\end{equation}
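A minimal numerical sketch of Equations~\ref{eq1} and~\ref{eq2}, one decomposition level with filter-then-downsample-by-two; the orthonormal Haar filter pair is an illustrative assumption for the example:

```python
import numpy as np

def dwt_level(s, low, high):
    """One DWT level: A1[n] = sum_k s[k]*L[2n-k], D1[n] = sum_k s[k]*H[2n-k].

    Full convolution followed by keeping every second sample implements
    the filter-then-downsample-by-2 step of the equations above.
    """
    a = np.convolve(s, low)[1::2]   # approximation coefficients A1
    d = np.convolve(s, high)[1::2]  # detail coefficients D1
    return a, d

# Orthonormal Haar filters (an illustrative choice of L and H)
low = np.array([1.0, 1.0]) / np.sqrt(2)
high = np.array([1.0, -1.0]) / np.sqrt(2)

s = np.array([4.0, 6.0, 10.0, 12.0])
A1, D1 = dwt_level(s, low, high)
```

For an orthonormal filter pair the level preserves energy, $\sum A_1^2 + \sum D_1^2 = \sum s^2$, and each output has half as many samples as the input, matching the sub-sampling by two described above.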
For further decomposition, the yielded approximation coefficients are decimated again through a series of low-pass and high-pass filters, as shown in Figure~\ref{fig:dwt}. Lastly, the denoised signal is reconstructed using the wavelet function: a linear combination of wavelet functions weighted by the wavelet coefficients. \par
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.6]{./Definitions/figures/pdf/DWT.pdf}
\caption{Discrete Wavelet Decomposition} \label{fig:dwt}
\end{figure}%
The classical DWT does not exhibit the shift-invariance property: the DWT of a translated version of a signal $S$ is not a translated version of the DWT of the original signal. Shift invariance is vital for denoising and pattern-recognition applications.
To overcome this drawback and retain the complete characteristics of the original signal, we use a variant of the DWT named the Undecimated Wavelet Transform (UWT). The advantage of this method is that it does not decimate the signal, while it offers a better balance between smoothness and accuracy than the DWT.
We apply the same DWT technique as in our previous work on cardiac anomalies~\cite{hadjem2014ecg}. In the DWT, the decimation retains the even-indexed elements. However, the decimation could equally be carried out by choosing the odd-indexed elements instead. This choice arises at every step of the decomposition process, so at every level we choose odd or even. If we perform all the possible decompositions of the original signal, we obtain $2^J$ different decompositions for a given maximum level $J$.
Let $\epsilon_j = 1$ or $0$ denote the choice of odd or even indexed elements at step $j$. Every decomposition is then labeled by a sequence of 0s and 1s: $\epsilon = \epsilon_1, \ldots, \epsilon_J$. This transform is called the $\epsilon$-decimated DWT, as described in our previous work~\cite{hadjem2014ecg}. The basis vectors of the $\epsilon$-decimated DWT can be obtained from the standard DWT. Further details about the DWT and its applicability in various domains are comprehensively explained in~\cite{wavelet}.\par
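The shift-variance of the decimated DWT, and the shift-equivariance gained by skipping the decimation (the idea behind the UWT), can be seen in a few lines; the Haar high-pass filter is an illustrative choice:

```python
import numpy as np

H = np.array([1.0, -1.0]) / np.sqrt(2)  # illustrative Haar high-pass filter

def detail_decimated(s, eps=0):
    # epsilon-decimated detail: keep even (eps=0) or odd (eps=1) indices
    return np.convolve(s, H)[eps::2]

def detail_undecimated(s):
    # UWT-style: no downsampling, so all translates of the filter are kept
    return np.convolve(s, H)

s = np.zeros(6); s[1] = 1.0               # a unit impulse
s_shift = np.zeros(6); s_shift[2] = 1.0   # the same impulse shifted by one

# Decimated DWT is NOT shift-invariant: the coefficient pattern changes
dec, dec_shift = detail_decimated(s), detail_decimated(s_shift)

# Undecimated coefficients of the shifted signal are simply shifted
undec, undec_shift = detail_undecimated(s), detail_undecimated(s_shift)
```

Here `dec` and `dec_shift` are not shifts of each other, whereas `undec_shift` equals `undec` translated by one sample, which is why the undecimated transform is preferred for denoising and pattern recognition.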
The UWT first decomposes the ECG signal into several sub-bands, then modifies each wavelet coefficient by applying a threshold function, and finally reconstructs the denoised signal. This technique ensures no loss of the sharpest signal features, discarding only the portions of the details that exceed a certain limit due to the occurrence of noise.
In this research work, we employ the UWT for both feature extraction and the pre-processing of signals. Its applicability to this research is explained in detail in the next subsection.\par
\subsection{Features Extraction}\label{sec:features}
Our proposed technique for feature extraction is inspired by Boosting (tackling the most complex data points first), a supervised learning technique. After preprocessing and denoising, the technique first detects all the complex peaks, i.e., the $QRS$ complex. After extracting the features of the $QRS$ complex, it moves to the relatively simpler $P$ and $T$ waves. It is imperative to mention that the features of all waves except the $QRS$ complex are extracted from the original ECG signal; for the $QRS$ complex we use the pre-processed ECG signal, because accurate detection of the $QRS$ complex from the original signal is cumbersome. Our feature extraction algorithm is inspired by~\cite{bhyri2009estimation}. We use the DWT variant named UWT for feature extraction; the rationale behind using the UWT is that it maintains the complete characteristics of the original signal, as discussed in~\cite{nason1995stationary}.\par
The UWT is applied using the Daubechies wavelet to decompose the ECG signal into eight levels in the first step. In the second step, the signal is reconstructed using the detail and approximation coefficients of all frequency bands. The next step is peak detection, in which the algorithm detects the $P$, $QRS$ complex, and $T$ peaks by specifying an appropriate threshold. This threshold is used to accept or reject peaks of a particular amplitude. Moreover, the algorithm uses a window of a given width, specified as a number of samples of the signal, to detect the onsets and offsets. The high-level diagram of feature extraction is given in Figure~\ref{fig:features}.
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.7]{./Definitions/figures/pdf/Features_Extraction.pdf}
\caption{ECG signal processing and features extraction} \label{fig:features}
\end{figure}
\par
The algorithmic description of the peak detection for $QRS$, $P$, and $T$ is as follows:
\begin{itemize}[leftmargin=*,labelsep=5.8mm]
\item Apply the Daubechies wavelet for an eight-level UWT on the input ECG signal.
\item Detect the zero crossing points in the detail coefficients at all levels yielded by the high-pass filter.
\item For a coarse estimate of the real peaks, set the zero crossing point at the largest scale.
\item For each detected point, search the next finer scale for the nearest zero crossing point.
\item Repeat the last step until the optimal scale is reached.
\end{itemize}
To obtain the zero crossing points for calculating the onsets and offsets, a time window is used; the optimal window size we have used is 50 milliseconds. Based on the detected $R$ peaks, the signal is scanned for 50 milliseconds on the left side for the minimum value, which corresponds to the $Q$ peak. Similarly, the minimum value within 50 milliseconds on the right side of the $R$ peak corresponds to the $S$ peak. In the specific case where a signal does not cross the zero line, we take the minimum value inside the window as the onset or offset. This mechanism extracts six features from each one-cycle ECG beat, the first six features in Table~\ref{table:ecg_parameters}.\par It is then straightforward to calculate the amplitude, duration, segment, and interval of each ECG wave, giving the nine features presented in Table~\ref{table:ecg_parameters}. Moreover, it is worth highlighting that these nine features are generally of high interest and importance to cardiologists~\cite{morris2009abc}.
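The 50 ms window search for the $Q$ and $S$ peaks around a detected $R$ peak can be sketched as follows; the function name and the synthetic beat are illustrative, not the actual implementation:

```python
import numpy as np

def q_s_peaks(sig, r_idx, fs, win_ms=50):
    """Locate Q and S as the minima in a 50 ms window left/right of R."""
    w = max(1, int(fs * win_ms / 1000))          # window width in samples
    lo = max(0, r_idx - w)
    q_idx = lo + int(np.argmin(sig[lo:r_idx]))   # minimum left of R -> Q
    right = sig[r_idx + 1:r_idx + 1 + w]
    s_idx = r_idx + 1 + int(np.argmin(right))    # minimum right of R -> S
    return q_idx, s_idx

# Synthetic beat: R peak at sample 20, Q dip at 15, S dip at 25 (fs = 250 Hz)
beat = np.zeros(50)
beat[20], beat[15], beat[25] = 1.0, -0.3, -0.4
q, s = q_s_peaks(beat, r_idx=20, fs=250)
```

At 250 Hz the 50 ms window spans 12 samples, so only dips close enough to the $R$ peak are considered, which is exactly the behavior described above.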
\begin{table}[H]
\begin{tabular*}{\textwidth}{|p{1.5cm}|p{13.2cm}|@{\extracolsep{\fill}}}
\hline
\textbf{Parameter} & \textbf{Description} \\
\hline
\textbf{$P_{amp}$} & Amplitude in \textit{mV} of the P wave calculated by searching the peak between Ponset and Poffset \\
\hline
\textbf{$P_{dur}$} & Duration in \textit{seconds} of the P wave between Ponset and Poffset \\
\hline
\textbf{$QRS_{amp}$} & Amplitude in \textit{mV} of the QRS wave calculated by searching the peak between QRSonset and QRSoffset \\
\hline
\textbf{$QRS_{dur}$} & Duration in \textit{seconds} of the QRS wave between QRSonset and QRSoffset \\
\hline
\textbf{$T_{amp}$} & Amplitude in \textit{mV} of the T wave calculated by searching the peak between Tonset and Toffset \\
\hline
\textbf{$T_{dur}$} & Duration in \textit{seconds} of the T wave between Tonset and Toffset \\
\hline
\textbf{$PR_{dur}$} & Duration in \textit{seconds} of the PR Interval between Ponset and QRSonset \\
\hline
\textbf{$ST_{amp}$} & Amplitude in \textit{mV} of the ST segment calculated based on local maxima between QRSoffset and Tonset \\
\hline
\textbf{$QT_{dur}$} & Duration in \textit{seconds} of the QT interval between QRSonset and the Toffset \\
\hline
\end{tabular*}
\caption{Nine parameters calculated for each ECG beat}
\label{table:ecg_parameters}
\end{table}
These nine extracted features are used as input variables to BNC for CVD detection. The details are given in the following subsection.
\subsection{Bayesian Network Classifier (BNC)}\label{sec:bnc}
A Bayesian Network is a machine learning technique based on statistical probabilities. The well-known Bayes theorem is the backbone of a Bayesian Network, which represents a set of random variables and their conditional dependencies. A Directed Acyclic Graph (DAG) usually represents the random variables and their conditional dependencies, which is why it is also known as a probabilistic directed acyclic graphical model. In bioinformatics, Bayesian Networks are instrumental in modeling the relationships between symptoms and diseases: given the symptoms and their conditional probabilities, the network can determine the presence of a particular disease. The nodes in the Bayesian Network represent the variables, which may be observable quantities or unknown parameters, and the edges represent the conditional dependencies. Non-connected nodes represent variables that are conditionally independent of each other. A probability function is associated with each node; it takes as input a particular set of values of the parent nodes and returns the probability of the variable represented by the node. There is another variant, the Dynamic Bayesian Network, which is used to model sequences of variables. In our methodology, the nine features extracted by DWT and UWT are the nodes of the BNC.\par
Let $G = (H, E)$ be a DAG, where $E$ represents the edges and $H$ the nodes, and let $X = (X_h)_{h \in H}$ be a set of variables indexed by $H$, as in our previous work~\cite{hadjem2014ecg}. If $X$ is a Bayesian Network with respect to $G$, its joint probability density function can be expressed as a product of the individual density functions, conditional on their parent variables, as given in Equation~\ref{eq10}.\par
\begin{equation}\label{eq10}
p(x) = \prod\limits_{h \in H} p\left( x_h \mid x_{\mathrm{pa}(h)} \right)
\end{equation}
where pa($h$) in Equation~\ref{eq10} refers to the parent set of $h$.
To calculate the conditional probability of any member of a joint distribution of a given set of random variables, the following chain rule can be used:
\begin{multline}\label{eq3}
\mathrm{P}(X_1 = x_1, \ldots, X_n = x_n) = \\
\prod\limits_{h = 1}^{n} \mathrm{P}\left( X_h = x_h \mid X_{h+1} = x_{h+1}, \ldots, X_n = x_n \right)
\end{multline}
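A toy numerical instance of the factorization in Equation~\ref{eq10}, for a two-node network; the structure and the conditional probabilities are purely illustrative, not the trained model of this work:

```python
# Hypothetical two-node DAG: ST_elevated -> MI, with made-up CPDs
p_st = {True: 0.1, False: 0.9}                    # P(ST elevated)
p_mi_given_st = {True: {True: 0.7, False: 0.3},   # P(MI | ST elevated)
                 False: {True: 0.02, False: 0.98}}

def joint(st, mi):
    # p(x) = prod_h p(x_h | x_pa(h)): here p(st) * p(mi | st)
    return p_st[st] * p_mi_given_st[st][mi]

# Marginalizing over the parent gives P(MI) = 0.1*0.7 + 0.9*0.02
p_mi = sum(joint(st, True) for st in (True, False))
```

The actual classifier uses the nine extracted ECG parameters as nodes; this toy network only illustrates how the joint distribution factorizes along the DAG.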
One can refer to~\cite{Bayesian_Approch} for a thorough understanding of the applications of Bayesian Networks in various domains.
\subsection{Box-and-Whisker plot}\label{sec:box}
The Box-and-Whisker plot, also known as a boxplot, is a statistical method commonly used to detect and represent outliers in a given dataset or set of observations. To detect abnormal measurements, let $X_{i}^{w}=\{x_{i,t-w},\ldots,x_{i,t}\}$ denote a temporal sliding window of the last $w$ values of the $i^{th}$ monitored ECG parameter. The lower quartile $Q_1$ (the 25th percentile) and the upper quartile $Q_3$ (the 75th percentile) of $X_{i}^{w}$ are used to obtain a robust estimate of the mean, $\hat\mu =(Q_{1}+Q_{3})/2$, while the standard deviation is replaced by the interquartile range, $\hat\sigma= IQR= Q_{3}- Q_{1}$. The visual representation is given in Figure~\ref{fig:boxplot}.\par
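These robust quartile-based estimates translate directly into an outlier test; a minimal sketch, where the fence factor $k = 1.5$ is the conventional Tukey choice and is assumed here:

```python
import numpy as np

def tukey_outlier(window, x, k=1.5):
    """Flag x as an outlier w.r.t. the sliding window X_i^w.

    Uses the robust location (Q1 + Q3)/2 and scale IQR = Q3 - Q1;
    values outside the Tukey fences [Q1 - k*IQR, Q3 + k*IQR] are outliers.
    """
    q1, q3 = np.percentile(window, [25, 75])
    iqr = q3 - q1
    return bool(x < q1 - k * iqr or x > q3 + k * iqr)

window = np.arange(1.0, 101.0)   # illustrative window of 100 past values
```

Because quartiles are insensitive to a few extreme values, the fences stay stable even when the window itself contains occasional abnormal beats, which is the point of using them instead of the sample mean and standard deviation.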
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.55]{./Definitions/figures/pdf/boxplot.pdf}
\caption{Boxplot} \label{fig:boxplot}
\end{figure}
In our proposed approach, we use a training set containing Normal ECG beats to determine the reference mean and variance of the nine extracted parameters of each ECG beat ($P_{amp}$, $P_{dur}$, $QRS_{amp}$, $QRS_{dur}$, $T_{amp}$, $T_{dur}$, $PR_{dur}$, $ST_{amp}$, $QT_{dur}$). As mentioned in the earlier subsection, our approach uses a Bayesian Network model to detect and classify the abnormality (MI, PVC or PAC) of each ECG beat. It is imperative to mention that, according to the medical literature, some parameters are more likely to be abnormal for certain ECG anomalies than others, for example the ST segment elevation for MI.
After the Bayesian Network classifies a beat as abnormal (PVC, PAC or MI), we apply the univariate boxplot to the nine extracted features of the ECG beat. Only if the boxplot confirms the abnormality is the alarm variable $AlarmClass$ corresponding to the anomaly class (PVC, PAC, MI) incremented. Each ECG beat contains nine parameters; if a deviation is detected in at least one parameter and the boxplot validates it, the beat is treated as abnormal. If no deviation is detected in any of the nine parameters, the beat is considered a Normal ECG beat regardless of the Bayesian prediction; in this case, the output of the Bayesian Network is treated as a false alarm and is not considered. Our system generates an alarm when the value of the $AlarmClass$ variable is greater than $r$; in our implementation we have used $r = 3$.
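The dual-verification logic described above can be sketched as follows; `bn_predict` and `boxplot_outlier` are hypothetical stand-ins for the trained Bayesian Network and the per-parameter boxplot test:

```python
def confirm_and_alarm(beats, bn_predict, boxplot_outlier, r=3):
    """Count only BN detections confirmed by the boxplot; alarm past r.

    bn_predict(features)      -> 'Normal', 'MI', 'PVC' or 'PAC'  (stub)
    boxplot_outlier(i, value) -> True if parameter i deviates     (stub)
    """
    alarm_class = 0
    for features in beats:
        label = bn_predict(features)
        deviating = any(boxplot_outlier(i, x) for i, x in enumerate(features))
        if label != 'Normal' and deviating:
            alarm_class += 1        # confirmed abnormal beat
        # unconfirmed BN detections are treated as false alarms
    return alarm_class > r

# Illustrative stubs: BN always says 'MI'; boxplot flags values above 1.0
always_mi = lambda f: 'MI'
flag_big = lambda i, x: x > 1.0
```

The wrapper only decides whether a Bayesian detection is kept or discarded; the classification itself is still entirely the Bayesian Network's.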
Algorithm ~\ref{alg:alg1} specifies the complete steps of our proposed approach.\par
\begin{algorithm}[!htb]
\caption{CVD Prediction Algorithm including Preprocessing of ECG signal and its classification}
\label{alg:alg1}
\begin{algorithmic}[1]
\State Build a BN classification model using the training set $TS$
\State Calculate the boxplot parameters for Normal ECG beats in $TS$
\State Set the window size $win$, $i=0$
\While {Captured ECG Beat $EB$}
\State Remove noise from $EB$ with a 0.5 Hz high-pass filter and DWT;
\State Extract ${P,QRS,T}$ peaks, onsets and offsets using UWT;
\State Calculate the nine $EB$ features;
\If{$i = win$}
\For {each $EB$ in the window $win$}
\State Apply the trained BN model on the nine extracted parameters $X_j$ of the $EB$;
\If { $Min < X_j < Max$ for all $j \in 1,2,\ldots,9$ }
\State Normal $EB$, false alarm;
\Else
\State Abnormal $EB$;
\State $AlarmClass++$;
\EndIf
\EndFor
\If {$AlarmClass > r$}
\State Generate an alarm;
\EndIf
\State Update $TS$ (training set) and the model with the $EB$ data of the last window $win$;
\State $i=0$;
\State $AlarmClass=0$;
\Else
\State $i=i+1$;
\EndIf
\EndWhile
\end{algorithmic}
\end{algorithm}
\section{Experimental results} \label{sec:results}
The datasets used for the evaluation of our proposed approach are as follows.
Three datasets are used from Physionet~\cite{PhysioNet}. The first one is the European ST-T Database (EDB)~\cite{EDB}. This dataset contains ECG recordings of two hours each, recorded with two leads (different for each record) and sampled at 250 samples per second. It consists of 90 annotated excerpts of ambulatory ECG recordings from 79 subjects. We use the EDB to obtain records containing ECG beats with Myocardial Infarction (MI) and Normal ECG beats.
The second dataset is the St. Petersburg Institute of Cardiological Technics 12-lead Arrhythmia Database (INCARTDB)~\cite{PhysioNet}. ECG beats are recorded with the traditional 12 leads, and each record is sampled at 257 Hz. This dataset consists of 75 annotated excerpts of 30 minutes each, extracted from 32 Holter records. We use this dataset to obtain ECGs with PAC, PVC, and Normal beats. The third dataset is the MIT-BIH Arrhythmia Database (MITDB)~\cite{MITDB}, used for various ECGs. It contains 48 half-hour excerpts of two-channel ambulatory ECG recordings obtained from 47 subjects; the recordings are digitized at 360 samples per second.
Figure~\ref{fig:various_ecg} depicts various ECG signals from the Physionet datasets, including a Normal and an MI ECG from the EDB; the MI ECG is characterized by an elevation of the ST segment, as shown in the figure. A PVC and a PAC ECG from the INCARTDB are also shown.\par
\begin{figure}[!htbp]
\centering
\parbox{0.45\textwidth}{
\subfigure[Normal ECG]{\includegraphics[scale=0.43]{./Definitions/figures/eps/Normal_ECG-eps-converted-to.pdf}
\label{fig:Normal_ecg}}}
\parbox{0.45\textwidth}{
\subfigure[MI ECG]{\includegraphics[scale=0.43]{./Definitions/figures/eps/MI_ECG-eps-converted-to.pdf}
\label{fig:MI_ecg}}
}
\parbox{0.45\textwidth}{
\subfigure[PVC ECG]{\includegraphics[scale=0.43]{./Definitions/figures/eps/PVC_ECG-eps-converted-to.pdf}
\label{fig:PVC_ecg}}
}
\parbox{0.45\textwidth}{
\subfigure[PAC ECG]{\includegraphics[scale=0.43]{./Definitions/figures/eps/PAC_ECG-eps-converted-to.pdf}
\label{fig:PAC_ecg}}}
\caption{Various ECGs from Physionet DB}
\label{fig:various_ecg}
\end{figure}
Figure~\ref{fig:processed_ecg} portrays the ECG signal processed by the 0.5 Hz high-pass filter, a succession of high-pass and low-pass filters, and the DWT. The goal of preprocessing is to remove the noise from the digitized signal without losing its main characteristics, as depicted in the figure. One can notice in Figure~\ref{fig:processed_ecg} that only a slight baseline wander remains in the resulting preprocessed ECG signal, while its main features are preserved. Moreover, the preprocessing step removes the power line interference, baseline wandering, EMG noise, and electrode motion artifacts.\par
\begin{figure}[!htb]
\centering
\parbox{0.45\textwidth}{
\subfigure[Normal ECG]{\includegraphics[scale=0.43]{./Definitions/figures/eps/Normal_ECG_Processed-eps-converted-to.pdf}
\label{fig:Normal_ecg_processed}}}
\parbox{0.45\textwidth}{
\subfigure[MI ECG]{\includegraphics[scale=0.43]{./Definitions/figures/eps/MI_ECG_Processed-eps-converted-to.pdf}
\label{fig:MI_ecg_processed}}}
\parbox{0.45\textwidth}{
\subfigure[PVC ECG]{\includegraphics[scale=0.43]{./Definitions/figures/eps/PVC_ECG_Processed-eps-converted-to.pdf}
\label{fig:PVC_ecg_processed}}}
\parbox{0.45\textwidth}{
\subfigure[PAC ECG]{\includegraphics[scale=0.43]{./Definitions/figures/eps/PAC_ECG_Processed-eps-converted-to.pdf}
\label{fig:PAC_ecg_processed}}}
\caption{ECG Pre-processing stage}
\label{fig:processed_ecg}
\end{figure}
Figure~\ref{fig:features_ecg} depicts the features extracted from the preprocessed signal for the $QRS$ complex, and from the original wave for the $P$ and $T$ peaks, by applying the UWT as mentioned in Section~\ref{sec:approach}. The extracted $P_{onset}$, $P_{offset}$, $R_{peak}$, $QRS_{onset}$, $QRS_{offset}$, $T_{onset}$ and $T_{offset}$ of the two previous ECGs (Normal and MI) can be seen in Figure~\ref{fig:features_ecg}.\par
\begin{figure}[!tb]
\centering
\parbox{0.45\textwidth}{
\subfigure[Normal ECG]{\includegraphics[scale=0.43]{./Definitions/figures/eps/Normal_ECG_Features-eps-converted-to.pdf}
\label{fig:normal_ecg_features}}}
\parbox{0.45\textwidth}{
\subfigure[MI ECG]{\includegraphics[scale=0.43]{./Definitions/figures/eps/MI_ECG_Features-eps-converted-to.pdf}
\label{fig:MI_ecg_features}}}
\caption{ECG features extraction stage}
\label{fig:features_ecg}
\end{figure}
Figure~\ref{fig:parameters_amp_ecg} depicts the amplitude parameters $P_{amp}$, $QRS_{amp}$, $T_{amp}$, and $ST_{amp}$ calculated during 15 minutes of the previous Normal, PAC, PVC, and MI ECG recordings, corresponding to 1000 ECG heartbeats. It is evident in the figure that the amplitudes vary significantly for the different anomalies while remaining stable for the Normal ECG. For instance, the $ST_{amp}$ parameter of the PVC and PAC ECGs is around $0$ mV or negative, while in the case of MI it is greater than $0.5$ mV.\par
\begin{figure}[!t]
\centering
\parbox{0.45\textwidth}{
\subfigure[Variation of $P_{amp}$]{\includegraphics[scale=0.43]{./Definitions/figures/eps/Pamp-eps-converted-to.pdf}
\label{fig:Pamp_ecg}}
}
\parbox{0.45\textwidth}{
\subfigure[Variation of $QRS_{amp}$]{\includegraphics[scale=0.43]{./Definitions/figures/eps/QRSamp-eps-converted-to.pdf}
\label{fig:QRSamp_ecg}}
}
\parbox{0.45\textwidth}{
\subfigure[Variation of $T_{amp}$]{\includegraphics[scale=0.43]{./Definitions/figures/eps/Tamp-eps-converted-to.pdf}
\label{fig:Tamp_ecg}}
}
\parbox{0.45\textwidth}{
\subfigure[Variation of $ST_{amp}$]{\includegraphics[scale=0.43]{./Definitions/figures/eps/STamp-eps-converted-to.pdf}
\label{fig:STamp_ecg}}
}
\caption{ECG amplitude parameters calculation stage}
\label{fig:parameters_amp_ecg}
\end{figure}
Figure~\ref{fig:parameters_dur_ecg} represents the duration parameters $P_{dur}$, $QRS_{dur}$, $T_{dur}$, $PR_{dur}$, and $QT_{dur}$ calculated for the previous ECGs, corresponding to the same 1000 ECG heartbeats discussed above. The difference between Normal and abnormal ECGs is observable in the figure, particularly for $QT_{dur}$ and $T_{dur}$: for ECGs with the MI abnormality, $QT_{dur}$ is greater than $0.45$ s and $T_{dur}$ is greater than $0.3$ s, whereas for ECGs with PAC and PVC, $QT_{dur}$ is less than $0.4$ s and $T_{dur}$ is less than $0.25$ s. It is imperative to mention that the amplitude and duration parameters depicted in Figures~\ref{fig:parameters_amp_ecg} and~\ref{fig:parameters_dur_ecg} constitute the nine statistical variables of our anomaly detection and prediction model, which uses a Bayesian Network for classification together with the boxplot to remove false alarms. \par
\begin{figure}[!t]
\centering
\parbox{0.45\textwidth}{
\subfigure[Variation of $P_{dur}$]{\includegraphics[scale=0.43]{./Definitions/figures/eps/Pdur-eps-converted-to.pdf}
\label{fig:Pdur_ecg}}}
\parbox{0.45\textwidth}{
\subfigure[Variation of $QRS_{dur}$]{\includegraphics[scale=0.43]{./Definitions/figures/eps/QRSdur-eps-converted-to.pdf}
\label{fig:QRSdur_ecg}}}
\parbox{0.45\textwidth}{
\subfigure[Variation of $T_{dur}$]{\includegraphics[scale=0.43]{./Definitions/figures/eps/Tdur-eps-converted-to.pdf}
\label{fig:Tdur_ecg}}}
\parbox{0.45\textwidth}{
\subfigure[Variation of $QT_{dur}$]{\includegraphics[scale=0.43]{./Definitions/figures/eps/QTdur-eps-converted-to.pdf}
\label{fig:STdur_ecg}}}
\caption{ECG duration parameters calculation stage}
\label{fig:parameters_dur_ecg}
\end{figure}
To examine the outcome of our prediction model, we feed it the preprocessed signals and the nine extracted parameters from all records of the two Physionet datasets discussed above, i.e., EDB and INCARTDB. From these datasets we formulate our corpus, comprising the nine parameters together with the anomaly class (PVC, PAC, MI, Normal) of each ECG beat. The annotation of the anomaly class is provided in the Physionet database for each record; further details are available in~\cite{PhysioAnnot}. The brief specification of each class in the annotation file is as follows:
\begin{enumerate}[leftmargin=*,labelsep=4.9mm]
\item MI Class: refers to the annotated beats ``s'' (ST segment change) or ``T'' (T-wave change). The rationale behind their selection is medical knowledge~\cite{morris2009abc}, according to which ``s'' and ``T'' annotations are significant symptoms contributing to Myocardial Infarction.
\item PVC Class: stands for the annotated beats ``V'', which refer to a Premature Ventricular Contraction.
\item PAC Class: corresponds to the annotated beats ``A'', which refer to an Atrial Premature Contraction.
\item Normal Class: represents the annotated beats ``N'', which refer to Normal beats.
\end{enumerate}
The Physionet databases do not necessarily contain all the annotations cited above, which is why we selected two databases. Furthermore, to obtain more relevant results, we grouped the records of each database by lead, in order to analyze the performance per lead with a larger number of beats. However, each database contains a different number and type of leads, which can vary from one record to another within the same database.\par
The EDB dataset contains ECG records recorded with two leads, but these two leads differ between records, so that seven distinct leads appear in the whole database (I, III, V1, V2, V3, V4, V5). The INCARTDB dataset contains records captured with 12-lead ECG. To draw a fair comparison of results, we selected from INCARTDB only the seven leads mentioned above that are also present in the EDB dataset. Moreover, to address the class imbalance problem, we reduced the proportion of the prevailing classes, for instance the Normal class. Tables~\ref{table:corpus_edb} and~\ref{table:corpus_incartdb} give the details of the corpus built for each database. \par
\begin{table} [!htb]
\caption{EDB Corpus}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
\textbf{ECG Lead} & \textbf{Total records} & \textbf{Total beats} & \textbf{Total ``N'' beats} & \textbf{Total ``MI'' beats} \\
\hline
\textbf{I} & 19 & 25821 & 12059 & 13762 \\
\hline
\textbf{III}& 46 & 69763 & 37597 & 32166 \\
\hline
\textbf{V1} & 11 & 25302 & 11598 & 13704 \\
\hline
\textbf{V2} & 10 & 32472 & 15921 & 16551 \\
\hline
\textbf{V3} & 7 & 14932 & 8114 & 6818 \\
\hline
\textbf{V4} & 34 & 77858 & 38192 & 39666 \\
\hline
\textbf{V5} & 51 & 120801 & 61002 & 59799 \\
\hline
\end{tabular}
\label{table:corpus_edb}
\end{table}
\begin{table} [!htb]
\caption{INCARTDB Corpus}
\centering
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\textbf{ECG Lead} & \textbf{Total records} & \textbf{Total beats} & \textbf{Total ``N'' beats} & \textbf{Total ``V'' beats} & \textbf{Total ``A'' beats} \\
\hline
\textbf{I} & 75 & 37322 & 19008 & 16474 & 1840 \\
\hline
\textbf{III} & 75 & 41380 & 22248 & 17261 & 1871 \\
\hline
\textbf{V1} & 75 & 39258 & 20445 & 17064 & 1749 \\
\hline
\textbf{V2} & 75 & 38320 & 19877 & 16617 & 1826 \\
\hline
\textbf{V3} & 75 & 40585 & 21695 & 17036 & 1854 \\
\hline
\textbf{V4} & 75 & 40502 & 21228 & 17398 & 1876 \\
\hline
\textbf{V5} & 75 & 41059 & 21769 & 17421 & 1869 \\
\hline
\end{tabular}
\label{table:corpus_incartdb}
\end{table}
For the evaluation, each ECG beat from the aforementioned datasets is represented by its nine extracted parameters. The datasets were split into training and test data. From the performance evaluation perspective, we are mainly concerned with the overall Accuracy of the model, the Precision (Positive Predictive Value), the Sensitivity (True Positive Rate), the Error Rate (Err), and the False Positive Rate (Far). The mathematical equations for these metrics are given below:
\begin{equation}\label{eq:ACC}
\text{Accuracy(Acc)} =\frac{TP + TN}{TP + TN + FP + FN}
\end{equation}%
\begin{equation}\label{eq:Err}
\text{Error Rate (Err)} =\frac{FP + FN}{TP + TN + FP + FN}
\end{equation}
\begin{equation}\label{eq:TPR}
\text{True Positive Rate (Sensitivity)} =\frac{TP}{TP + FN}
\end{equation}
\begin{equation}\label{eq:FAR}
\text{False Positive Rate (Far)} = \frac{FP}{FP + TN}
\end{equation}
\begin{equation}\label{eq:TNR}
\text{True Negative Rate (Specificity)} = \frac{TN}{FP + TN} = {1 - FPR}
\end{equation}
\begin{equation}\label{eq:PPV}
\text{Positive Predictive Value (Precision)} = \frac{TP}{TP + FP}
\end{equation}
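Equations~\ref{eq:ACC}--\ref{eq:PPV} can be computed from a confusion matrix as a quick sanity check; the counts below are illustrative, not results of this work:

```python
def metrics(tp, tn, fp, fn):
    """Compute the six evaluation metrics from confusion-matrix counts."""
    total = tp + tn + fp + fn
    return {
        "accuracy":    (tp + tn) / total,
        "error_rate":  (fp + fn) / total,
        "sensitivity": tp / (tp + fn),   # true positive rate
        "far":         fp / (fp + tn),   # false positive rate
        "specificity": tn / (fp + tn),   # = 1 - far
        "precision":   tp / (tp + fp),   # positive predictive value
    }

m = metrics(tp=90, tn=85, fp=15, fn=10)   # illustrative counts
```

Note that accuracy and error rate sum to one, as do specificity and the false positive rate, which provides a built-in consistency check on the computed values.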
Tables~\ref{table:results_MI},~\ref{table:results_V}, and~\ref{table:results_A} synthesize the accuracy results obtained for the MI, PVC, and PAC classification, respectively. These tables present the results achieved for the Normal class and the three anomalies, compared across the seven leads cited above.\par
Moreover, by using different sizes of the training set, we obtain different performance measures for our classification model. For instance, using a 30\% TS, we obtain an average TPR of 98.7\% with a 0.5\% FPR; for a TS of 50\%, we obtain an average TPR of 99.3\% with a 0.3\% FPR. We periodically update the TS of our system with the newly captured ECG data, and the results show that the accuracy achieved per anomaly class and per lead is excellent. A minor variation between leads is observable; for instance, the results of lead III are slightly better than the others for all four classes. It is also noticeable that the performance of our prediction model is slightly better for MI and PAC than for PVC. The reason behind this lower performance for PVC is the non-specific variation of the ECG parameters, which are more specific in the case of MI and PAC. The same argument holds for the Sensitivity, Specificity, Error Rate, and False Alarm Rate. Lastly, we present the True Positive Rate (TPR) versus the False Positive Rate at different thresholds using the Receiver Operating Characteristic (ROC). Figure~\ref{fig:roc_ecg} presents the ROC curves for the PAC and MI classes.\par
\begin{figure}[!htbp]
\centering
\parbox{0.45\textwidth}{
\subfigure[ROC Curve for the MI Class]{\includegraphics[scale=0.43]{./Definitions/figures/eps/ROC_MI-eps-converted-to.pdf}
\label{fig:ROC_MI}}
}
\parbox{0.45\textwidth}{
\subfigure[ROC Curve for the PAC Class]{\includegraphics[scale=0.43]{./Definitions/figures/eps/ROC_PAC-eps-converted-to.pdf}
\label{fig:ROC_PAC}}}
\caption{ROC curves obtained}
\label{fig:roc_ecg}
\end{figure}
\begin{table} [!htb]
\caption{Prediction model results for MI class}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\textbf{Lead} & \textbf{Acc\%} & \textbf{Err\%} & \textbf{Class} & \textbf{Se\%} & \textbf{Far\%} & \textbf{Prec\%}\\
\hline
I&93.19\%&6.80\%&MI&93.9\%&7.6\%&93.4\%\\
\cline{4-7}
& & &N&92.4\%&6.1\%&93\%\\
\hline
III&94.18\%&5.81\%&MI&93.1\%&4.5\%&96\%\\
\cline{4-7}
& & &N&95.1\%&6.8\%&92.2\%\\
\hline
V1&94\%&5.9\%&MI&94.5\%&7.3\%&96.9\%\\
\cline{4-7}
& & &N&92.7\%&5.4\%&87.8\%\\
\hline
V2&91.7\%&8.3\%&MI&83.3\%&5.6\%&82.7\%\\
\cline{4-7}
& & &N&94.4\%&16.7\%&94.6\%\\
\hline
V3&92.16\%&7.83\%&MI&93.2\%&9.5\%&94.3\%\\
\cline{4-7}
& & &N&90.4\%&6.7\%&88.9\%\\
\hline
V4&92.84\%&7.15\%&MI&92.5\%&6.3\%&96.8\%\\
\cline{4-7}
& & &N&93.7\%&7.5\%&85.7\%\\
\hline
V5&92.73\%&7.26\%&MI&92.5\%&6.7\%&96.5\%\\
\cline{4-7}
& & &N&93.3\%&7.5\%&86.3\%\\
\hline
\end{tabular}
\label{table:results_MI}
\end{table}
\begin{table} [!htb]
\caption{Prediction model results for PVC class}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\textbf{Lead} & \textbf{Acc\%} & \textbf{Err\%} & \textbf{Class} & \textbf{Se\%} & \textbf{Far\%} & \textbf{Prec\%}\\
\hline
I&87.32\%&12.6\%&V&83.6\%&9.2\%&89.6\%\\
\cline{4-7}
& & &N&90.8\%&16.4\%&85.4\%\\
\hline
III&87.87\%&12.1\%&V&85.7\%&10.2\%&87.9\%\\
\cline{4-7}
& & &N&89.8\%&14.3\%&87.8\%\\
\hline
V1&87.62\%&12.3\%&V&84.7\%&9.8\%&88.7\%\\
\cline{4-7}
& & &N&90.2\%&15.3\%&86.8\%\\
\hline
V2&87.56\%&12.4\%&V&84\%&9.1\%&89.5\%\\
\cline{4-7}
& & &N&90.9\%&16\%&85.9\%\\
\hline
V3&87.56\%&12.4\%&V&85.6\%&10.7\%&87.4\%\\
\cline{4-7}
& & &N&89.3\%&14.4\%&87.7\%\\
\hline
V4&85.89\%&14.1\%&V&85.2\%&13.4\%&85.2\%\\
\cline{4-7}
& & &N&86.6\%&14.8\%&86.5\%\\
\hline
V5&86.13\%&13.8\%&V&86.1\%&13.8\%&84.7\%\\
\cline{4-7}
& & &N&86.2\%&13.9\%&87.4\%\\
\hline
\end{tabular}
\label{table:results_V}
\end{table}
\begin{table} [!htb]
\caption{Prediction model results for PAC class}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\textbf{Lead} & \textbf{Acc\%} & \textbf{Err\%} & \textbf{Class} & \textbf{Se\%} & \textbf{Far\%} & \textbf{Prec\%}\\
\hline
I&96.54\%&3.45\%&A&96.8\%&3.8\%&96.8\%\\
\cline{4-7}
& & &N&96.2\%&3.2\%&96.2\%\\
\hline
III&97.78\%&2.2\%&A&95.6\%&0.5\%&99.3\%\\
\cline{4-7}
& & &N&99.5\%&4.4\%&96.6\%\\
\hline
V1&96.53\%&3.4\%&A&97.3\%&4.4\%&96.3\%\\
\cline{4-7}
& & &N&95.6\%&2.7\%&96.9\%\\
\hline
V2&95.88\%&4.1\%&A&95.7\%&3.9\%&95.8\%\\
\cline{4-7}
& & &N&96.1\%&4.3\%&95.9\%\\
\hline
V3&96.37\%&3.6\%&A&95.7\%&3.1\%&96.5\%\\
\cline{4-7}
& & &N&96.9\%&4.3\%&96.3\%\\
\hline
V4&96.68\%&3.3\%&A&95.1\%&1.9\%&97.8\%\\
\cline{4-7}
& & &N&98.1\%&4.9\%&95.7\%\\
\hline
V5&96.84\%&3.1\%&A&94\%&0.8\%&99\%\\
\cline{4-7}
& & &N&99.2\%&6\%&95.2\%\\
\hline
\end{tabular}
\label{table:results_A}
\end{table}
\section{Conclusion} \label{sec:conclusion}
In this research work, we have proposed a system for the detection and prediction of cardiac anomalies, focusing on the MI, PAC and PVC anomalies. The proposed system can collect live ECG data from wireless body sensors installed in a WBAN environment. The novel contribution of this work comprises four main parts: preprocessing of the ECG signal, feature extraction, prediction of cardiac anomalies, and verification of anomalies by removing false alarms. For preprocessing of the ECG signal, we apply a succession of high-pass and low-pass filters and the Discrete Wavelet Transform (DWT) to reduce noise and improve the signal-to-noise ratio. For feature extraction, we apply the Undecimated Wavelet Transform (UWT). After preprocessing and feature extraction, we apply a Bayesian Belief Network to predict Normal and Abnormal beats and then further classify cardiac anomalies into MI, PAC, and PVC. In addition to the BNC, we also apply Tukey box analysis for the removal of false alarms. For experimental purposes, we have used two annotated datasets of real-time ECGs from the PhysioNet database, EDB and INCARTDB, using seven leads. The experimental results achieved an average accuracy of 96.6\% for PAC, 92.8\% for MI and 87\% for PVC, with an average error rate of 3.3\% for PAC, 6\% for MI and 12.5\% for PVC. Future work is to prepare an end-to-end prototype using market sensors to validate our experimental results in a user environment.\par
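As a rough illustration of the false-alarm verification step, an outlying feature value can be tested against Tukey fences ($Q_1 - 1.5\,\mathrm{IQR}$, $Q_3 + 1.5\,\mathrm{IQR}$) computed over recent normal beats. This is only a sketch of the general technique — the feature values below are invented, and this is not the exact procedure used in the system:

```python
def tukey_fences(values, k=1.5):
    """Return (low, high) Tukey fences: Q1 - k*IQR, Q3 + k*IQR."""
    s = sorted(values)
    def quantile(p):
        idx = p * (len(s) - 1)
        lo, hi = int(idx), min(int(idx) + 1, len(s) - 1)
        frac = idx - lo
        return s[lo] * (1 - frac) + s[hi] * frac
    q1, q3 = quantile(0.25), quantile(0.75)
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr

def is_false_alarm(feature_value, normal_history, k=1.5):
    """Flag an alarm as false if the triggering feature value still
    lies within the Tukey fences of recent normal measurements."""
    low, high = tukey_fences(normal_history, k)
    return low <= feature_value <= high

# Invented RR-interval history (seconds) for illustration:
rr_history = [0.78, 0.80, 0.79, 0.82, 0.81, 0.80, 0.79, 0.83]
print(is_false_alarm(0.81, rr_history))  # within fences: likely a false alarm
print(is_false_alarm(1.40, rr_history))  # far outside: keep the alarm
```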
\bibliographystyle{unsrt}
\section{Introduction}\label{introduction}
In recent years, largely thanks to the Initiative for Open Citations (I4OC)\footnote{\url{https://i4oc.org}}, most major scholarly publishers have made their bibliographic reference data open, resulting, for example, in more than 700 million citations now being made openly available in the OpenCitations Index of Crossref open DOI-to-DOI citations (COCI) \cite{heibi2019software}. As a consequence, scholarly data providers and bibliometric analysis software have started to integrate open citation data into their services, thereby offering an alternative to the current reliance on proprietary citation indexes.
Open bibliographic and citation metadata are beneficial because they enable anyone to perform meta-research studies on the evolution of scholarly knowledge, and allow national and international research assessment exercises characterized by transparent and reproducible processes. Within this context, bibliographic citations are essential components of scholarly discourse, since they “remain the dominant measurable unit of credit in science” \cite{fortunato2018science}. They carry evidence of scholarly networks and of the progress of theories and methods, and are fundamental aids in tenure evaluation and recommendation systems.
To perform open bibliometric research and analysis, the publications upon which the work is based should be FAIR, namely Findable, Accessible, Interoperable, and Reusable \cite{wilkinson2016fair}. Ideally, such data should be made available without any restrictions, licensed under a Creative Commons CC0 waiver\footnote{\url{https://creativecommons.org/publicdomain/zero/1.0/legalcode}}, and the software for programmatically accessing and analysing them should be also released with open source licences.
However, data suppliers use a variety of licenses, technologies, and vocabularies for representing the same bibliographic information, or use ontology terms defined in the same ontologies with different nuances, thereby generating diversity in data representation. The adoption of a common, generic, open and documented data model that employs clearly defined ontological terms would ensure data consistency and facilitate integration tasks.
In this paper we present the OpenCitations Data Model (OCDM), a data model based on existing ontologies for describing information in the scholarly bibliographic domain with a particular focus on citations. OCDM has been developed by OpenCitations \cite{peroni2020opencitations}, an infrastructure organization for open scholarship dedicated to the publication of open bibliographic and citation data using Semantic Web technologies. Herein, we propose a holistic approach for evaluating the reusability of OCDM according to ontology evaluation methodologies, and we discuss its uptake, impact, and trustworthiness.
We compared OCDM to similar existing solutions and found that, to the best of our knowledge, OCDM (a) has the broadest vocabulary coverage, (b) is the best documented data model in this area, and (c) already has significant uptake in the scholarly community. The main advantages of OCDM, in addition to the consistency of data description that it facilitates, are that it was designed from the outset to be usable both by Semantic Web practitioners and by those who are not, that it is properly documented, and that it is provided with accompanying software for managing the entire life-cycle of data created according to OCDM.
The paper is organized as follows. In Section \ref{background} we clarify the scope and motivations for this work. In Section \ref{ocdm} we present the data model and its documentation, software and current early adopters. In Section \ref{reusability} we present the criteria we have used to evaluate OCDM reusability and we present results, including figures about OCDM views, downloads and citations according to Figshare and Altmetrics, which are further discussed in Section \ref{discussion}.
\section{Background}\label{background}
The OpenCitations Data Model (OCDM) \cite{peroni2018opencitations} was initially developed in 2016 to describe the data in the OpenCitations Corpus (OCC). In recent years OpenCitations has developed other datasets and OCDM has been adopted by external projects, and the model has been expanded to accommodate these changes. We have recently further extended OCDM to meet the additional metadata requirements of the Open Biomedical Citations in Context Corpus (CCC) project. This project has developed an exemplar Linked Open Dataset that includes detailed information on citations, in-text reference pointers such as “Berners-Lee et al. 2011”, and identifiers of the citation contexts (e.g. sentences, paragraphs, sections) within which in-text reference pointers are located, in order to facilitate textual analysis of citation contexts.
The citations are treated as first-class data entities \cite{ocidefinition}, enriched with open bibliographic metadata released under a CC0 waiver that can be mined, stored and republished. These metadata include identifiers specifying the positions of the various in-text reference pointers within the text. However, the literal text of these contexts is not stored within the Open Biomedical Citations in Context Corpus, and regrettably in many cases the full text of the published entities cannot be openly mined from elsewhere, even for some (view-only) Open Access articles, because of copyright, licensing and other Intellectual Property (IP) restrictions.
Table~\ref{tab1} shows the representational requirements (hereinafter, for the sake of simplicity, called citation properties and numbered P1-P8) that we were interested in recording for each citation instantiated from within a single paper.
\begin{table}[!h]
\centering
\begin{tabularx}{\columnwidth}{|l|X|}
\hline
\textbf{ID} & \textbf{Description} \\
\hline
P1 & A classification of the type of citation (e.g. self-citation). \\
\hline
P2 & The bibliographic metadata of the citing and cited bibliographic entities (e.g. type of published entity, identifiers, authors, contributors, publication date, publication venues, publication formats). \\
\hline
P3 & The bibliographic reference, typically found within the reference list of the citing bibliographic entity, that references a cited bibliographic entity. \\
\hline
P4 & The separate identifiers of all the in-text reference pointers included in the text of the citing entity, that denote bibliographic references within the reference list. \\
\hline
P5 & The co-occurrence of in-text reference pointers within each in-text reference pointer lists (e.g. “[3,5,12]”). \\
\hline
P6 & The identifiers of structural elements (e.g. XPath of sentences, paragraphs, captions) that specify where, in the full text, an in-text reference pointer appears.\\
\hline
P7 & The function or purpose of the citation (e.g. to cite as background, extend, or agree with the cited entity) to which each in-text reference pointer relates.\\
\hline
P8 & Provenance information of the citation extraction process (e.g. responsible agents, data sources, extraction dates).\\
\hline
\end{tabularx}
\caption{Representational requirements of the OpenCitations Data Model}\label{tab1}
\vspace{-25pt}
\end{table}
\section{The OpenCitations Data Model}\label{ocdm}
The OCDM permits one to record metadata about bibliographic references and their textual contexts, bibliographic entities (citing and cited publications) and the citations that link them, agents and their roles (e.g. author, editor), identifiers for the foregoing entities, provenance metadata and much more, as shown diagrammatically in Fig. \ref{fig:ocdm}. All terms described in the OCDM are brought together in the OpenCitations Ontology (OCO)\footnote{\url{https://w3id.org/oc/ontology}}. OCO aggregates terms from the SPAR (Semantic Publishing and Referencing) Ontologies \cite{peroni2018spar} and other well-known ontologies, such as PROV-O \cite{belhajjame2013prov} and Web Annotation Ontology \cite{sanderson2013designing}.
Citations are instances of the class \verb|cito:Citation| defined in CiTO, the Citation Typing Ontology\footnote{\url{http://purl.org/spar/cito}}. Subclasses (not shown in Fig. \ref{fig:ocdm}), relevant for P1, include \verb|cito:AuthorSelfCitation|, \verb|cito:JournalSelfCitation|, \verb|cito:FunderSelfCi|\-\verb|tation|, \verb|cito:AffiliationSelfCitation|, and \verb|ci|\-\verb|to:AuthorNetworkSelfCita|\-\verb|tion|. In addition, citations can be characterized with a purpose or function with respect to the related citation context, by means of the property \verb|cito:hasCita|\-\verb|tionCharacterisation| and the use of one or more CiTO properties (e.g. \verb|cito:|\-\verb|usesMethodIn|) (P7).
Instances of the class \verb|fabio:Expression|, defined in the FRBR-aligned Bibliographic Ontology (FaBiO)\footnote{\url{http://purl.org/spar/fabio}}, can be linked to bibliographic metadata such as publication dates, authors, and venues. Instances of \verb|fabio:Manifestation| aggregate information on specific editions and formats (P2).
Instances of \verb|oa:Annotation|, defined in the Web Annotation Ontology (OA)\footnote{\url{https://www.w3.org/ns/oa}}, link instances of the class \verb|cito:Citation| to instances of \verb|biro:Bibliographic|\-\verb|Reference| (P3), defined in BiRO, the Bibliographic Reference Ontology\footnote{\url{http://purl.org/spar/biro}}, and individuals of \verb|c4o:InTextReferencePointer| (P4), defined in C4O, the Citation Counting and Context Characterisation Ontology\footnote{\url{http://purl.org/spar/c4o}}. Lists of in-text reference pointers are represented by the class \verb|c4o:SingleLocationPointer|\-\verb|List| (P5).
Structural elements wherein in-text reference pointers appear are represented as individuals of \verb|deo:DiscourseElement|, defined in DEO, the Discourse Element Ontology\footnote{\url{http://purl.org/spar/deo}}. Elements are uniquely identified (P6) by means of instances of \verb|datacite:Identifier|, defined in the DataCite Ontology\footnote{\url{http://purl.org/spar/datacite}}.
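For readers less familiar with Semantic Web tooling, the shape of an OCDM citation can be illustrated by assembling a few N-Triples statements by hand. This is a minimal sketch: the entity IRIs below are hypothetical placeholders, while the class and property IRIs are the CiTO terms named above.

```python
CITO = "http://purl.org/spar/cito/"
RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"

def ntriple(s, p, o):
    """Serialize one triple of IRIs in N-Triples syntax."""
    return f"<{s}> <{p}> <{o}> ."

# Hypothetical entity IRIs, for illustration only.
ci = "https://w3id.org/oc/index/coci/ci/1-2"
citing = "https://w3id.org/oc/meta/br/1"
cited = "https://w3id.org/oc/meta/br/2"

triples = [
    ntriple(ci, RDF_TYPE, CITO + "Citation"),
    ntriple(ci, CITO + "hasCitingEntity", citing),
    ntriple(ci, CITO + "hasCitedEntity", cited),
]
print("\n".join(triples))
```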
\begin{figure}[!h]
\centering
\includegraphics[scale=0.245]{imgs/core_compact.png}
\caption{Main classes and properties of the OpenCitations Ontology }
\label{fig:ocdm}
\end{figure}
Finally, as summarized in Fig.~\ref{fig:prov}, OCDM provides guidance for describing the provenance and versioning of each entity under consideration, and also enables the specification of the main metadata related to the datasets containing such entities (P8). To this end, the OCDM reuses terms from PROV-O, the Provenance Ontology\footnote{\url{http://www.w3.org/ns/prov}}, VoID, the Vocabulary of Interlinked Datasets\footnote{\url{http://rdfs.org/ns/void}} \cite{alexander2009describing}, and DCAT, the Data Catalog Vocabulary\footnote{\url{http://www.w3.org/ns/dcat}} \cite{maali2014data}.
\begin{figure}[!h]
\centering
\includegraphics[scale=0.20]{imgs/provenance_core.png}
\caption{Provenance, versioning, and dataset description in the OCDM}
\label{fig:prov}
\end{figure}
Each bibliographic entity described by the OCDM is annotated with one or more provenance snapshots (i.e. instances of \verb|prov:Entity|, each snapshot intended as a specialisation of the bibliographic entity via \verb|prov:specialization|\-\verb|Of|) as defined in \cite{peroni2016document}. In particular, each snapshot records the set of statements having the bibliographic entity as its subject at a fixed point in time, validity dates, responsible agents for either the creation or the modification of the metadata, primary data sources, and a SPARQL query summarising changes with respect to any prior snapshot.
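A simplified illustration of the change-summarising query mentioned above: given two snapshots of the same bibliographic entity, each represented as a set of (subject, predicate, object) strings, a SPARQL update describing the delta can be derived by set difference. This is a sketch under assumed inputs (the graph IRI and triples are invented for the example), not OpenCitations' actual implementation.

```python
def snapshot_delta_query(graph_iri, before, after):
    """Build a SPARQL update summarising the change between two
    snapshots, each given as a set of (s, p, o) triples. Objects are
    passed pre-serialised (IRIs in <>, literals in quotes)."""
    added = after - before
    removed = before - after
    fmt = lambda ts: " ".join(f"<{s}> <{p}> {o} ." for s, p, o in ts)
    return (f"DELETE DATA {{ GRAPH <{graph_iri}> {{ {fmt(removed)} }} }}; "
            f"INSERT DATA {{ GRAPH <{graph_iri}> {{ {fmt(added)} }} }}")

# Invented snapshots: a title correction on one bibliographic resource.
before = {("https://w3id.org/oc/meta/br/1",
           "http://purl.org/dc/terms/title", '"Old title"')}
after = {("https://w3id.org/oc/meta/br/1",
          "http://purl.org/dc/terms/title", '"Corrected title"')}
print(snapshot_delta_query("https://example.org/graph", before, after))
```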
Lastly, a dataset (\verb|dcat:Dataset|) containing information about the bibliographic entities is described with cataloguing information (e.g. title, description, publication and change dates, subjects, webpage, SPARQL endpoint) and distribution information (\verb|dcat:Distribution|) which also includes the specification of licenses, dumps, media types, and data volumes.
\subsection{OCDM documentation and resources}
In order to make the OCDM understandable and reusable by both the Semantic Web community and communities with no expertise in Semantic Web technologies, support material has been produced. All materials are available at \url{http://opencitations.net/model} and include the following resources.
\begin{figure}
\centering
\includegraphics[scale=0.45]{imgs/ocdm_graph.pdf}
\caption{Overview of OpenCitations ecosystem and acronyms used in this paper}
\label{fig:ocdm_overview}
\end{figure}
\textbf{Human-readable documentation.} The OCDM documentation \cite{peroni2018opencitations} provides (1) detailed definitions of terms characterising open citation data and open bibliographic metadata, (2) naming conventions and URI patterns, and (3) real-world examples. OCDM is supplemented by two additional specifications, i.e. the definition of the Open Citation Identifier (OCI) \cite{ocidefinition} and the definition of the In-Text Reference Pointer Identifier (InTRePID) \cite{intrepid}.
\textbf{OCDM-compliant data examples.} All the data introduced in the OCDM documentation are expressed and provided in JSON-LD to make it easily understandable both to RDF experts and other Web users. In addition, CSV templates have been adopted so as to express and share parts of the OCDM – e.g. to store the citation data in COCI \cite{heibi2019software}.
\textbf{Ontology development documentation.} The first version of the OCDM, released in 2016, addressed citation properties P1-P3 and P8, by directly reusing the SPAR Ontologies and other vocabularies \cite{peroni2018spar}. Within the context of the CCC project described above, we used SAMOD \cite{peroni2016simplified}, an agile data-driven methodology for ontology development, to extend OCO with terms relevant to P4-P7. Motivating scenarios, competency questions, and a glossary of terms of all the new entities included in the OCDM, are available for reproducibility purposes.
\textbf{Open source software leveraging the data model.} The source code of the knowledge extraction and data re-engineering pipeline for managing data according to OCDM is available at \url{http://opencitations.net/tools}. The pipeline includes software originally developed for creating the OpenCitations Corpus (BEE and SPACIN) and the OpenCitations Indexes (Create New Citations – CNC), and a user-friendly web application (BCite)\cite{demidova2018creating} for creating OCDM-compliant RDF data from lists of bibliographic references. In addition, we have released tools to support the development of applications leveraging data organized according to OCDM: RAMOSE (to create RESTful APIs over SPARQL endpoints), OSCAR (to create user-friendly search interfaces for querying SPARQL endpoints \cite{heibi2017oscar}) and LUCINDA (a configurable browser for RDF data). Configuration files for setting up these tools are available in their GitHub repositories.
\textbf{Licenses for reuse.} OCDM (both the documentation and OCO) is released under a CC-BY license. Software solutions are released under the ISC license. The OCDM-compliant data served by OpenCitations are made open under CC0.
\subsection{OCDM early adopters}
To date, OCDM is central to the work of OpenCitations. The OpenCitations datasets modelled using OCDM include: the OpenCitations Corpus (OCC), including about 13 million citation links and the OpenCitations Indexes, which include more than 721 million citations. Forthcoming datasets, that will be released later in 2020, include OpenCitations Meta, which stores metadata of the citing and cited entities involved in the citations included in the Indexes, and the Open Biomedical Citations in Context Corpus (CCC), mainly derived from the Open Access corpus of biomedical articles provided by PubMed Central, that will include detailed information on in-text reference pointers denoting each reference in the reference list, and their textual contexts.
Moreover, OCDM has three external acknowledged early adopters. The Extraction of Citations from PDF Documents (EXCITE) project \cite{hosseini2019excite} is run by GESIS and the University of Koblenz. The aim of EXCITE is to extract and match citations from social science publications. To date, EXCITE has extracted around 1 million citations, has converted the data to RDF according to OCDM, and has then published it by ingestion into the OCC.
The Linked Open Citation Database (LOC-DB) \cite{lauscher2018linked} is a project which aims to demonstrate that it is possible for academic libraries to catalogue citation relations sustainably, accurately, and cooperatively. So far, the project has stored bibliographic and citation data for about 7000 published entities. LOC-DB has used a customisation of the OCDM as the data model for defining its data, and exports data in OCDM/JSON-LD so as to be ingested into the OCC.
The Venice Scholar Index (VSI)\footnote{\url{https://scholarindex.eu/}} is an instance of the Scholar Index, which originated from the “Linked Books” project \cite{colavizza2019citation} funded by the Swiss National Science Foundation. The citation index includes about 4 million references to publications cited in the historiography of Venice. VSI exports data into RDF formats according to OCDM so as to be integrated into the OCC.
\section{Analysis of OCDM reusability}\label{reusability}
A holistic approach has been used to evaluate the OCO ontology and to infer properties relevant to OCDM. We adopted seminal definitions and classifications of ontology evaluation approaches \cite{brank2005survey,gomez2004ontology} and we selected the following dimensions and approaches that are representative with respect to OCDM reusability.
\textbf{[E1] Lexical keyword similarity.} This addresses the similarity of definitions (labels of terms) in OCO with respect to the real-world knowledge to be mapped. We adopted a data-driven evaluation \cite{brewster2004data} to map OCO definitions with terms included in a corpus of documents encoded in the Journal Article Tag Suite (JATS) XML schema\footnote{\url{https://jats.nlm.nih.gov/}}. JATS is used by Europe PubMed Central (EPMC)\footnote{\url{https://europepmc.org/downloads/openaccess}} to encode scholarly documents, that are in turn harvested by OpenCitations.
\textbf{[E2] Vocabulary coverage.} This addresses the coverage of concepts, instances, and facts of OCO with respect to the domain to be covered. \textbf{[E2.1]} We validated OCO coverage by comparing it with competing ontologies \cite{maedche2001comparing}. \textbf{[E2.2]} Secondly, we adopted an application-based approach \cite{porzel2004task} to address OCO coverage in four sources that leverage it: OpenCitations, EXCITE, LOC-DB, and ScholarIndex.
Also, we addressed aspects peculiar to OCDM reusability, namely:
\textbf{[E3] Usability-profiling.} This encompasses the communication context of OCDM, i.e. its pragmatics. We evaluated OCDM recognition level \cite{gangemi2006modelling}, i.e. the efficiency of access to OCDM ontologies, documentation, and software, by comparing it with competing ontologies \cite{maedche2001comparing}.
Lastly, we addressed current uptake, potential impact, and trustworthiness of OCDM, including metrics about OCDM views, downloads and citations according to Figshare and Altmetrics.\footnote{Source code and results of this analysis are available at \url{https://github.com/opencitations/metadata}}
\subsection{E1: Lexical keyword similarity}
We created a randomized corpus of 2800 JATS documents taken from the Open Access Subset of biomedical literature hosted by Europe PubMed Central. We extracted the list of XML elements used in the documents within this corpus (117 elements), and we expanded element names with definitions scraped from the online XML schema guidelines (e.g. \verb|<p>| became “Paragraph”). We manually pruned non-relevant elements such as MathML markup, text style elements (e.g. \verb|<italic>|), redundant wrapping elements (\verb|<keywordGroup>|) and elements that are out of scope (e.g. \verb|<biography>|), resulting in a refined list of 45 terms.
Secondly, we extracted the definitions from OCO (118 terms). We manually pruned terms that were not relevant (e.g. annotation properties, provenance, and distribution related terms), terms that represent hierarchy, sequences, and linguistic aspects not available in XML (e.g. “partOf”, “hasNext”, “Sentence”), and terms dependent on post-processing activities (e.g. “self-citation”, “hasCitationCharacterisation”), resulting in a refined list of 77 OCO definitions.
We then used WordNet\footnote{\url{https://wordnet.princeton.edu/}} to automatically expand both the XML and ontology definitions with synonyms, and we matched synset similarities. We used a symmetric similarity score to find the best matches between the synsets. We considered two thresholds for the similarity match, 0.7 and 0.5, and we manually computed precision and recall. Table~\ref{precision} shows the results.
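The matching step can be illustrated with a simplified stand-in that replaces the WordNet synset comparison with plain token overlap (the Dice coefficient, which is symmetric); the term lists below are invented for illustration.

```python
def dice(a, b):
    """Symmetric token-overlap similarity between two term labels —
    a simplified stand-in for the WordNet synset comparison."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return 2 * len(ta & tb) / (len(ta) + len(tb))

def best_matches(xml_terms, onto_terms, threshold):
    """For each XML term, keep its best-scoring ontology definition
    if the score reaches the threshold."""
    out = {}
    for x in xml_terms:
        best = max(onto_terms, key=lambda o: dice(x, o))
        if dice(x, best) >= threshold:
            out[x] = best
    return out

# Invented example terms:
xml_terms = ["journal article", "reference list", "article title"]
onto_terms = ["Journal Article", "Bibliographic Reference List", "Title"]
print(best_matches(xml_terms, onto_terms, threshold=0.5))
```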
The coverage of JATS terms in OCO was 55.5\% when the threshold was greater than 0.7, with high precision (96\%) and average recall (53.3\%). The coverage was 73.3\% when the threshold was greater than 0.5, with still high precision (93.9\%) and average recall (68.8\%).
False negative results included acronyms (e.g. “issn”) that did not have a match in Wordnet, and terms of the taxonomy that were underrepresented in the corpus (e.g. “book”). Likewise, false positive results were due to acronyms used in XML definitions that were not correctly parsed (e.g. “URI for This Same Article Online” was incorrectly matched with “fabio:JournalArticle”).
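The reported percentages follow directly from the match counts; for instance, at threshold 0.7 there were 25 matches out of 45 JATS terms, of which 24 were true positives:

```python
matches, terms, tp = 25, 45, 24
coverage = matches / terms   # 25/45; the table truncates this to 55.5%
precision = tp / matches     # 24/25 = 96.0%
recall = tp / terms          # 24/45 = 53.3%
print(f"{coverage:.1%} {precision:.1%} {recall:.1%}")
```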
\begin{table}[]
\centering
\begin{tabularx}{\columnwidth}{|l|X|X|X|}
\hline
\textbf{Threshold} & \textbf{Matches} & \textbf{Precision} & \textbf{Recall}\\
\hline
0.7 & (25/45) 55.5\% & (24/25) 96\% & (24/45) 53.3\%\\
\hline
0.5 & (33/45) 73.3\% & (31/33) 93.9\% & (31/45) 68.8\% \\
\hline
\end{tabularx}
\caption{Lexical similarity between JATS/XML elements and OCO terms}\label{precision}
\vspace{-25pt}
\end{table}
\subsection{E2: Vocabulary coverage}
\textbf{[E2.1] Vocabulary coverage in existing vocabularies.} Since gold standard ontologies are not available, we referred to existing data models and relevant ontologies used by citation data providers. For the sake of completeness, we addressed both open and non-open citation data providers\footnote{See the definition of ``open'' at \url{https://opendefinition.org/licenses/}.}, and both graph data providers and others. We reviewed the vocabulary coverage with respect to P1-P8. We did not take into account discipline coverage or citation counting. The complete list of data models and references is available at \url{https://github.com/opencitations/metadata}. Table~\ref{vocabs} summarizes the comparison of vocabulary coverage, an “x” indicating that the source had metadata of relevance to the citation properties P1-P8 (Table~\ref{tab1}).
\begin{table}[]
\fontsize{8}{9}\selectfont
\begin{tabularx}{\columnwidth}{|l|X|X|X|X|X|X|X|X|}
\hline
& \textbf{P1} & \textbf{P2} & \textbf{P3} & \textbf{P4} & \textbf{P5} & \textbf{P6} & \textbf{P7} & \textbf{P8}\\
\hline
Google Scholar & & x & & & & & & x\\
\hline
Scopus & & x& & & & & & x\\
\hline
Web of Science & x& x& x& & & & & x\\
\hline
CiteseerX & x& x& x& & & x& &x\\
\hline
Dimensions& & x& x& & & & &x\\
\hline
Crossref& & x& x& & & & &x\\
\hline
EPMC& & x& x& & & & &x\\
\hline
Datacite& &x &x & & & &x &x\\
\hline
DBLP & & x& & & & & &x\\
\hline
MAKG& & x& x& & & x& &\\
\hline
ORC & &x & & & & & &x\\
\hline
GORC& &x &x &x &x &x & &x\\
\hline
SciGraph& &x & & & & & &x\\
\hline
WikiCite& &x & & & & & &x\\
\hline
OpenCitations&x &x &x &x &x &x &x &x\\
\hline
\end{tabularx}
\caption{Vocabulary coverage in existing vocabularies according to P1-P8}\label{vocabs}
\vspace{-25pt}
\end{table}
Non-open citation data providers include Google Scholar, Scopus \cite{scopus}, Web of Science (WoS) \cite{wos}, CiteSeerX \cite{li2006citeseerx} and Dimensions \cite{herzog2020dimensions}. Their data models cover a few aspects of bibliographic metadata (P2) and provenance data (P8). WoS, CiteSeerX, and Dimensions also include bibliographic references (P3). In addition, WoS and CiteSeerX cover types of citations (P1), and only CiteSeerX includes citation context sentences (P6).
Open citation data providers include Crossref \cite{hendricks2020crossref}, Europe PubMed Central (EPMC), DataCite, DBLP, Microsoft Academic Knowledge Graph (MAKG) \cite{makg} (which is based on Microsoft Academic Graph \cite{wang2020microsoft} and which reuses the SPAR Ontologies and links to resources in Wikidata and OpenCitations), the Semantic Scholar Open Research Corpus (ORC) \cite{ammar2018orc}, the Semantic Scholar’s Graph of References in Context (GORC) \cite{lo2019gorc}, Springer Nature’s SciGraph \cite{scigraph} (which is based on Schema.org), WikiCite (which includes terms aligned to SPAR Ontologies and interlinks with the OpenCitations Corpus), and the OpenCitations datasets \cite{peroni2020opencitations}. All data models cover P2, and all except MAKG also cover P8. Only OpenCitations covers P1. In addition, Crossref, Europe PMC, DataCite, MAKG, GORC, and OpenCitations cover P3. MAKG, GORC, and OpenCitations cover P6, while the latter two also include in-text reference pointers (P4) and related lists (P5). DataCite and OpenCitations allow the tracking of citation functions (P7).
\textbf{[E2.2] Vocabulary coverage in early adopters.} We separately analysed the vocabulary coverage in acknowledged adopters of OCDM (Table \ref{vocabs_adopters}).
\begin{table}[]
\fontsize{8}{9}\selectfont
\begin{tabularx}{\columnwidth}{|l|X|X|X|X|X|X|X|X|}
\hline
& \textbf{P1} & \textbf{P2} & \textbf{P3} & \textbf{P4} & \textbf{P5} & \textbf{P6} & \textbf{P7} & \textbf{P8}\\
\hline
EXCITE& &x &x & & & & &x\\
\hline
LOC-DB& &x &x & & & & &x\\
\hline
VSI& &x &x &x & &x & &x\\
\hline
\end{tabularx}
\caption{Vocabulary coverage in OCDM early adopters according to P1-P8}\label{vocabs_adopters}
\vspace{-15pt}
\end{table}
EXCITE data fully covers P2, P3 and P8. Its local data model also includes information about the data quality of extracted references, which is not currently mapped to OCDM. LOC-DB data fully covers P2, P3, and P8. The OCDM was extended in its local data model so as to cover information about its OCR activities performed on PDF scans. The Venice Scholar Index (VSI) aligned data to OCDM terms so as to fully cover P2, P3, P4, P6, and P8. In order to cover specific needs of the project relevant to P2, the classes \verb|fabio:Work| and \verb|fabio:Expression| defined in the SPAR Ontologies (and reused in OCO) were specialized so as to include the following sub-classes: \verb|fabio:ArchivalRecord|, \verb|fabio:ArchivalRecordSet|, \verb|fabio:ArchivalDocument|, and \verb|fabio:Archival|\-\verb|DocumentSet|\footnote{As documented at \url{https://github.com/SPAROntologies/fabio/issues/1}.}.
\subsection{E3: Usability profiling}
We compared the documentation available for existing graph data providers, namely MAKG, ORC and GORC (Semantic Scholar), SciGraph, and WikiCite. We considered the same dimensions used to address the OCDM documentation, namely: human-readable documentation, machine-readable data model and examples, ontology development documentation, open source software leveraging the model, and licenses for reuse (see Table \ref{usability}).
\begin{table}[]
\fontsize{8}{9}\selectfont
\begin{tabularx}{\columnwidth}{|l|X|X|X|X|X|}
\hline
& \textbf{HR docum.} & \textbf{MR data model} & \textbf{ontology dev. docum.} & \textbf{software} & \textbf{licenses} \\
\hline
\hline
MAKG&x & & & &\\
\hline
ORC and GORC&x &x & & &\\
\hline
SciGraph&x &x & & &x\\
\hline
WikiCite&x & & &x &x\\
\hline
OCDM&x &x &x &x &x\\
\hline
\end{tabularx}
\caption{Usability of existing ontologies and data models}\label{usability}
\vspace{-15pt}
\end{table}
The MAKG data model is graphically represented in \cite{makg}. Software for creating RDF data is available, but no machine-readable data model and examples are provided. Likewise, the development of the data model is not described. Moreover, according to Färber \cite{makg}, the property \verb|c4o:hasContext| is used to annotate instances of \verb|cito:Citation|, rather than \verb|c4o:InTextReferencePointer| as prescribed in C4O, preventing it from consistently representing P3, P4, and P7 in future works, and from merging third-party data with OpenCitations. Lastly, no license is specified for the data model.
The Semantic Scholar Open Research Corpus data model is described in \cite{ammar2018orc}. A machine-readable example of the data model is presented in a dedicated web page\footnote{\url{http://s2-public-api-prod.us-west-2.elasticbeanstalk.com/corpus/}}. No further documentation is available. Similarly, GORC is described in \cite{lo2019gorc}, where an example of JSON data is presented. Both datasets are released under ODC-BY (i.e. an open license), although programmatically accessing data through their APIs requires one to subscribe to a more restrictive and non-open license (comparable to CC-BY-NC-ND). No license associated with the data model is stated.
The Schema.org main classes reused in SciGraph are described in a dedicated web page\footnote{\url{https://scigraph.springernature.com/explorer/datasets/ontology/}}. While the ontology is reused as-is, the SciGraph data model\footnote{\url{https://github.com/springernature/scigraph}} is released as a JSON-LD file and machine-readable examples are available under a CC-BY license. Development documentation of the data model is not available.
Sources addressing the Wikidata model used by WikiCite include templates\footnote{\url{https://www.wikidata.org/wiki/Template:Bibliographic\_properties}} and examples\footnote{\url{https://www.wikidata.org/wiki/Wikidata:WikiProject\_Source\_MetaData}}. However, neither dedicated documentation nor a machine-readable version of the model scoped to citations is separately available. Data, software, and the general data model are all released under the CC0 license.
Lastly, OCDM \cite{peroni2018opencitations} is described in dedicated human-readable documentation, including a machine-readable data model and examples, available under a CC-BY license. The ontology development documentation and the open source software leveraging the model are available on GitHub (ISC license). All materials are gathered on the official page of the OCDM data model\footnote{\url{http://opencitations.net/model}}.
\subsection{OCDM uptake, potential impact, and trustworthiness}
We can quantify current uptake of the OCDM documentation by using statistics provided by Figshare and Altmetrics, and the number of users’ views of the model description page in the OpenCitations website. As of 18 August 2020, the Figshare document \cite{peroni2018opencitations} has been viewed 10,852 times, downloaded 1,508 times, and cited 5 times. 100 tweets from 65 users include links to the document. The web page (\url{http://opencitations.net/model}) dedicated to the model has received 13,844 views from 8,202 unique users since 2018.
We can estimate the potential impact of OCDM by considering (a) different types of possible reuse of the model, (b) the number of current reusers of the data model, (c) projects and applications leveraging data created according to OCDM, and (d) the kind of users of data created according to OCDM.
In detail, OCDM can be reused ‘as is’, via alignment for interchange purposes, and as a JSON data model for non-Semantic Web users. Currently OCDM is used by OpenCitations for all its datasets, and by the three acknowledged early adopters, namely: EXCITE and LOC-DB, which reuse OCDM ‘as is’, and VSI, which aligned terms to OCDM. EXCITE data have been ingested in the OpenCitations Corpus, while LOC-DB and VSI data are going to be ingested soon. VOSViewer\footnote{\url{https://www.vosviewer.com/}}, CitationGecko\footnote{\url{https://citationgecko.com/}}, VisualBib\footnote{\url{https://visualbib.uniud.it/en/project/}}, and OAHelper\footnote{\url{https://www.otzberg.net/oahelper/}} are applications that leverage OpenCitations data conforming to OCDM retrieved via the OpenCitations REST APIs or directly through its SPARQL endpoints. Moreover, OpenAIRE\footnote{\url{https://www.openaire.eu/}}, MAKG, and WikiCite align data to OpenCitations. Both DBLP and Lens.org\footnote{\url{https://lens.org}} use citation data from OpenCitations to enrich their bibliographic metadata records.
Users of OpenCitations data include scholars in scientometrics, life sciences, biomedicine, the physical sciences, and the information technology domain. Open\-Citations is currently expanding its coverage to include the social science and the arts and humanities disciplines. The main users of EXCITE data are researchers in the social sciences, while those of the data held by LOC-DB and the Venice Scholar Index include librarians and researchers in the humanities.
Lastly, we address trustworthiness of OCDM. Long-term availability of ontologies is crucial for the development of the Semantic Web, and the trustworthiness of the ontology creators is important. OCDM, OCO, and the SPAR Ontologies are all maintained by OpenCitations, which has been recently selected by the Global Sustainability Coalition for Open Science Services (SCOSS)\footnote{\url{https://scoss.org/}} as an open infrastructure deserving of crowdfunding support from the scholarly community, thereby helping to ensure its long-term sustainability.
Along with trustworthiness, another important factor is the general interest in the community towards research topics and outputs that can leverage OCDM. So far, two OpenCitations projects dedicated to the enhancement of the OpenCitations Corpus and the creation of the Open Biomedical Citations in Context Corpus have been funded by the Alfred P. Sloan Foundation\footnote{See \url{https://sloan.org/grant-detail/8017}} and the Wellcome Trust respectively, as mentioned above in Section “Background”. Moreover, the Internet Archive and Figshare have both offered to archive backup copies of the OpenCitations datasets without charge.
\section{Discussion and conclusions}\label{discussion}
First, we evaluated the lexical similarity of OCO definitions against the knowledge included in data sources encoded in JATS/XML, a gold standard for academic publications [E1]. While the recall is only average, mainly due to mistakes in parsing acronyms, for the terms that were correctly matched the lexical similarity precision is high, showing that OCO is appropriate for representing data sources organised according to the gold standard. One of the known limits of data-driven evaluation methodologies is that these do not address possible changes in the domain knowledge over time. To date, early adopters of OCO continuously contribute new scenarios to be represented in the model, which is expanded correspondingly. As a result, OCO will remain a comprehensive reference point for future developments. Other statistical semantic approaches will be evaluated in the future.
Secondly, we evaluated OCO vocabulary coverage as compared with competing data models [E2.1] and in the context of early adopters [E2.2]. Only OCDM fully covers P1-P8. In particular, only one other provider covers P4 and P5 (identifiers for in-text references and groups of these), three providers cover property P6 (although they only store full-text sentences, and lack identifiers for in-text reference pointers), and only one other provider covers property P7 (citation function). Two graph-data providers reuse terms from SPAR Ontologies (either directly or by alignment) in different ways, generating heterogeneity in data.
Among early adopters, LOC-DB required extensions in order to represent special information related to the cataloguing of digital objects, and VSI required us to expand the FaBiO ontology to permit the description of unpublished archival entities. While such changes can be deemed marginal, they are relevant hints for future developments in the humanities domain and will require further analysis. Nonetheless, the OCDM vocabulary coverage is satisfactory and strengthens its reusability across domains and applications.
We showed how alternative citation data providers grant access to their data models [E3]. Peer-reviewed articles are the main access point to descriptions of those data models, with additional information scattered across various web pages. While machine-readable data models and examples are mostly available, none of the other providers offers detailed development documentation. Moreover, the licenses for reusing the data models are not always defined. In summary, OCDM appears to be the best documented and most findable data model.
No comparison of the uptake of the alternative models in the community was possible. We showed that OCDM has been relatively popular on community social networks, and that its documentation has been downloaded and read by many people. At present we cannot measure the purposes for which the OCDM documentation has been reused, with the exception of the three early-adopter projects listed in this paper.
We have shown that OCDM is potentially of significant use to several communities and fosters reuse in combination with legacy technologies, and we have highlighted ongoing interest from several parties in the maintenance and further development of OCDM in support of several projects.
In future work, we will (a) create ShEx shapes to help reusers map their data to OCDM, and (b) trace OCDM usage scenarios by asking users to fill in a form for statistical purposes.
\section*{Acknowledgements}
This work was funded by the Wellcome Trust (Wellcome-214471\_Z\_18\_Z). We thank Ludo Waltman (Centre for Science and Technology Studies - CWTS, Leiden University) and Vincent Larivière (École de bibliothéconomie et des sciences de l'information, l’Université de Montréal) for supervising aspects of this work, and Ivan Heibi (University of Bologna) for contributing with suggestions.
\bibliographystyle{splncs04}
\section*{Acknowledgements}
This work was funded in part by the National Science Foundation (NSF) awards CAREER IIS-1943364 and CCF-1918483, the Frederick N.\ Andrews Fellowship, and the Wabash Heartland Innovation Network. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the sponsors.
We would like to thank our reviewers, who gave excellent suggestions to improve the paper.
Further, we would like to thank Ryan Murphy for many insightful discussions, and Mayank Kakodkar and Carlos H. C.\ Teixeira for their invaluable help with the subgraph function estimation.
\section{Proof of \Cref{prop:Einv}}
\propEinv*
\begin{proof}
First note that $Y$ is only a function of $W$ and an independent random noise (following \Cref{def:trainG,def:testG}, depicted in \Cref{fig:SCM}).
\update{Therefore, $Y$ is E-invariant, and thus
\begin{equation*}
\begin{split}
{\rm P}(Y|{\mathcal{G}}^\text{te}_{N^\text{te}} = G_{N^\text{te}}^\text{te})
={\rm P}(Y|{\mathcal{G}}^\text{tr}_{N^\text{tr}}=G_{N^\text{tr}}^\text{tr}),
\end{split}
\end{equation*}
since the observed graphs in test ($G_{N^\text{te}}^\text{te}$) and training ($G_{N^\text{tr}}^\text{tr}$) only differ due to the change of environments while sharing the same graphon variable $W$ and the other random variables.
The definition of E-invariance states that $\forall e \in \text{supp}(E^\text{tr})$, $\forall e^\dagger \in \text{supp}(E^\text{te})$,
\[
\Gamma({\mathcal{G}}^\text{tr}_{N^\text{tr}}|E^\text{tr}=e)=\Gamma({\mathcal{G}}^\text{te}_{N^\text{te}}|E^\text{te}=e^\dagger).
\]
So, the E-invariance of $\Gamma$ yields
$$\rho(y, \Gamma({\mathcal{G}}^\text{tr}_{N^\text{tr}})) = \rho(y, \Gamma({\mathcal{G}}^\text{te}_{N^\text{te}})),$$ concluding our proof.}
\end{proof}
\section{Proof of \Cref{thm:sizeExtrapolationBound}}
\thmsizeExtrapolationBound*
\begin{proof}
\input{sections/appendix/proofs_size_extrapolation_bound}
\end{proof}
\section{Biases in estimating induced homomorphism densities}
\label{sec:BiasAppendix}
\input{sections/appendix/appx_biasedestimates}
\subsection{Model implementation}
All neural network approaches, including the models proposed in this paper, are implemented in PyTorch~\citep{pytorchcitation} and Pytorch Geometric~\citep{FeyLenssen2019}.
Our GIN~\citep{xu2018powerful}, GCN~\citep{Kipf2016} and PNA~\citep{corso2020principal} implementations are based on their Pytorch Geometric implementations. We consider sum, mean, and max READOUTs as proposed by~\citet{xu2020neural} for extrapolations (denoted by {\em XU-READOUT}). For RPGIN~\citep{pmlr-v97-murphy19a}, we implement the permutation and concatenation with one-hot identifiers (of dimension 10) and use GIN as before. Other than a few hyperparameters and architectural choices,
we use standard choices (e.g.~\citet{hu2020open}) for neural network architectures. If the graphs are unattributed, we follow convention and assign a constant ${\bf 1}$ dummy feature to every vertex.
We use the WL graph kernel implementations provided by the \emph{graphkernels} package~\citep{Sugiyama-2017-Bioinformatics}. All kernel methods use a Support Vector Machine on scikit-learn~\citep{scikit-learn}.
The Graphlet Counting kernel (GC kernel), as well as our own procedure, relies on being able to efficiently count attributed or unattributed connected induced homomorphisms within the graph. We use ESCAPE~\citep{pinar2017escape} and R-GPM~\citep{teixeira2018graph} as described in the main text. The source code of ESCAPE is available online and the authors of~\citet{teixeira2018graph} provided us their code. We pre-process each graph beforehand and save the obtained estimated induced homomorphism densities. Note that R-GPM takes around $20$ minutes per graph in the worst case considered, but graphs can be pre-processed in parallel. ESCAPE takes up to one minute per graph.
All the models learn graph representations $\Gamma({\mathcal{G}}^\text{*}_{N^\text{*}})$, which we pass to an $L$-hidden-layer feedforward neural network (MLP) with softmax outputs ($L \in \{0,1\}$ depending on the task) to obtain the prediction. For $\Gamma_\text{GIN}$ and $\Gamma_\text{RPGIN}$, we use respectively GIN and RPGIN as our base models to obtain latent representations for each $k$-sized connected induced subgraph. Then, we sum over the latent representations, each weighted by its corresponding induced homomorphism density, to obtain the graph representation. For $\Gamma_\text{1-hot}$, the representation $\Gamma_{\text{1-hot}}({\mathcal{G}}^\text{*}_{N^\text{*}})$ is a vector containing the density of each (possibly attributed) $k$-sized connected subgraph. To map this into a graph representation, we apply $\Gamma_{\text{1-hot}}({\mathcal{G}}^\text{*}_{N^\text{*}})^{\text{T}} {\bm{W}}$, where ${\bm{W}}$ is a learnable weight matrix whose rows are subgraph representations. Note that this effectively learns a unique weight vector for each subgraph type.
We use the Adam optimizer to optimize all the neural network models. When an in-distribution validation set is available (see below), we use the weights that achieve best validation-set performance for prediction. Otherwise, we train for a fixed number of epochs.
The specifics of hyperparameter grids and downstream architectures are discussed in each section below.
\subsection{Schizophrenia Task: Size extrapolation}
The results of these experiments are reported in \Cref{tab:unattributed} (left).
The data was graciously provided by the authors of~\citet{de2016mapping}, which they pre-processed from publicly available data from The Center for Biomedical Research Excellence.
There are 145 graphs, which represent the functional connectivity brain networks of 71 schizophrenic patients and 74 healthy controls. Each graph has 264 vertices representing spherical regions of interest (ROIs). Edges represent functional connectivity. Originally, edges reflected a time-series coherence between regions: if the coherence between signals from two regions was above a certain threshold, the authors created a weighted edge; otherwise, no edge was created. For simplicity, we converted these to unweighted edges. Extensive pre-processing must be done over fMRI data to create brain graphs, including discarding signals from certain ROIs. As described by the authors, these choices have a highly significant impact on the resulting graph. We refer the reader to~\citet{de2016mapping} for details.
Note that there are numerous methods for constructing a brain graph, and in ways that change the number of vertices. The measurement strategy taken by the lab can result in measuring about 500 ROIs, 1000 ROIs, or 264 as in the case we consider~\citep{hagmann2007mapping, wedeen2005mapping, de2016mapping}.
For our purposes, we wish to create an extrapolation task, where a change in environment leads to an extrapolation set that contains smaller graphs.
For this, we randomly select 20 of the 145 graphs in the dataset, balanced among the healthy and schizophrenic patients, to be used as test.
For each healthy-group graph among these 20 graphs, we sample (with replacement) $\lfloor 0.4\times 264 \rfloor$ vertices to be removed. On average, the new size of the healthy-group graphs among these 20 graphs is $178.2$.
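This vertex-removal step can be sketched as follows (a standard-library illustration; the function name and fixed seed are ours). Because the sampling is with replacement, slightly fewer than 40\% of the vertices are actually dropped, which is why the shrunken sizes vary per graph:

```python
import random

def shrink_vertices(n, frac=0.4, seed=0):
    # Sample floor(frac * n) vertex indices *with* replacement; duplicate
    # draws mean fewer than frac * n distinct vertices get removed, so the
    # shrunken graph sizes vary (178.2 on average in the paper's data).
    rng = random.Random(seed)
    to_drop = {rng.randrange(n) for _ in range(int(frac * n))}
    return [v for v in range(n) if v not in to_drop]

kept = shrink_vertices(264)  # the brain graphs have 264 ROIs
```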
We hold out the test graphs that are later used to assess the extrapolation capabilities. Over the remaining data, we use a stratified 5-fold cross-validation to choose the hyperparameters and to report the validation accuracy.
Once the best hyperparameters are chosen, we re-train the model on the entire training data using 10 different initialization seeds, and predict on the test.
For $\Gamma_\text{GIN}$ and $\Gamma_\text{RPGIN}$, in their GNNs, the aggregation MLP of \Cref{eq:app_gnn} has a number of hidden neurons chosen among $\{32, 64, 128, 256\}$ and a number of layers (i.e., recursions of message passing) among $\{1, 2\}$.
The learning rate is chosen in $\{0.001, 0.0001\}$. The value of $k$ is treated as a hyperparameter chosen in $\{4, 5\}$.
For $\Gamma_\text{1-hot}$, recall that we wish to learn the matrix ${\bm{W}}$ whose rows are subgraph representations. We choose the dimension of the representations among $\{32, 64, 128, 256\}$ and the learning rate in $\{0.001, 0.0001\}$. The value of $k$ is treated as a hyperparameter chosen in $\{4, 5\}$.
For the GNNs, we tune the learning rate in $\{0.01, 0.001\}$, the number of hidden neurons of the MLP in \Cref{eq:app_gnn} in $\{32, 64, 128\}$, and the number of layers among $\{1, 2, 3\}$.
For all these models, we use a batch size of 32 graphs and a single final linear layer with a softmax activation as the downstream classifier. We optimize for 400 epochs.
For the graph kernels, following~\citet{kriege2020survey}, we tune the regularization hyperparameter C in SVM over the set $\{10^{-3}, 10^{-2}, 10^{-1},1, 10, 10^{2},10^{3}\}$. We tune the number of Weisfeiler-Lehman iterations of the WL kernel to be in $\{1, 2, 3, 4\}$ (see~\citet[Section 3.1]{kriege2020survey}).
\subsection{Erd{\H o}s-R\'enyi\xspace Connection Probability: Size Extrapolation}
We simulated Erd{\H o}s-R\'enyi\xspace graphs (Gnp model) using NetworkX~\citep{SciPyProceedings_11_networkx}.
\update{The task is to classify the edge probability $p \in \{0.2, 0.5, 0.8 \}$ of the generated graph.}
\Cref{tab:unattributed} shows results for a single environment task (middle), where graphs in training have all size $80$, and a multiple environment task (right), where training graphs have sizes in $\{70, 80\}$ chosen uniformly at random. In both cases, the test is composed of graphs of size 140. The training, validation, and test sets are fixed. The number of graphs in training, validation, and test are 80, 40, and 100, respectively.
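The data-generating process above can be sketched as follows (a standard-library illustration; the paper uses NetworkX's Gnp generator, and the helper name, dataset sizes, and seed here are ours):

```python
import random

def sample_er_dataset(sizes, ps=(0.2, 0.5, 0.8), n_per_class=10, seed=0):
    # Each example is (n, edges, label): an Erdos-Renyi G(n, p) graph whose
    # class label is the index of the edge probability p that generated it.
    rng = random.Random(seed)
    data = []
    for label, p in enumerate(ps):
        for _ in range(n_per_class):
            n = rng.choice(sizes)  # environment: graph size drawn uniformly
            edges = [(i, j) for i in range(n) for j in range(i + 1, n)
                     if rng.random() < p]
            data.append((n, edges, label))
    return data

train = sample_er_dataset(sizes=[70, 80])   # multi-environment training graphs
test = sample_er_dataset(sizes=[140])       # larger out-of-distribution graphs
```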
The induced homomorphism densities are obtained for subgraphs of a fixed size $k = 5$.
For $\Gamma_\text{1-hot}$, we hyperparameter tune the dimension of the subgraph representations in $\{32, 64, 128, 256\}$ and the learning rate in $\{0.1, 0.01, 0.001\}$.
For the GNNs, $\Gamma_\text{GIN}$, and $\Gamma_\text{RPGIN}$, we hyperparameter tune
the number of hidden neurons in the MLP of the GNN (\Cref{eq:app_gnn}) in $\{32, 64, 128, 256\}$ (the GNN is used to learn the representation of each $k$-sized subgraph for $\Gamma_\text{GIN}$ and $\Gamma_\text{RPGIN}$). The number of layers is also a hyperparameter in $\{1, 2, 3\}$ (3 layers only for the GNNs), and the learning rate in $\{0.1, 0.01, 0.001\}$. We also hyperparameter tune the presence or absence of the Jumping Knowledge mechanism from~\citet{xuJumpingKnowledge}.
For IRM, we consider the two distinct graph sizes to be the two training environments. We tune the regularizer $\lambda$~\citep[Section 3]{arjovsky2019invariant} in $\{4, 8, 16, 32\}$, stopping at 32 because increasing its value decreased performance.
We train all neural models for 500 epochs with batch size equal to the full training data. The downstream classifier is composed by a single linear layer with softmax activations.
We perform early stopping as per~\citet{hu2020open}.
The hyperparameter search is performed by training all models with 10 different initialization seeds and selecting the configuration that achieved the highest mean accuracy on the validation data.
Then, we report the mean (and standard deviation) accuracy over the training, the validation, and the test data in \Cref{tab:unattributed} (right).
For the graph kernels, following~\citet{kriege2020survey}, we tune the regularization hyperparameter C in SVM over the set $\{10^{-3}, 10^{-2}, 10^{-1},1, 10, 10^{2},10^{3}\}$. We tune the number of Weisfeiler-Lehman iterations of the WL kernel to be among $\{1, 2, 3, 4\}$ (see~\citet[Section 3.1]{kriege2020survey}).
\subsection{Extrapolation performance over SBM attributed graphs}
We sample Stochastic Block Model graphs (SBM) using NetworkX~\citep{SciPyProceedings_11_networkx}.
Each graph has two blocks, having a within-block edge probability of ${\bm{P}}_{1,1} = {\bm{P}}_{2,2} = 0.2$. The cross-block edge probability is ${\bm{P}}_{1,2} = {\bm{P}}_{2,1} \in \{0.1, 0.3\}$.
The label of a graph is its cross-block edge probability, i.e., $Y = {\bm{P}}_{1,2}$.
Vertex color distributions change with train and test environments. In training, vertices in the first block
are either red or blue, with probabilities $\{0.9, 0.1\}$, respectively, while
vertices in the second block are either green or yellow, with probabilities $\{0.9, 0.1\}$, respectively.
In test, the probability distributions are reversed: Vertices in the first block
are either red or blue, with probabilities $\{0.1, 0.9\},$ respectively, and vertices in
the second block are green or yellow with probabilities $\{0.1, 0.9\},$ respectively.
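The train/test attribute shift can be sketched as follows (a simplified standard-library illustration; the paper samples graphs with NetworkX, and the function name and fixed seed are ours):

```python
import random

def sample_sbm_graph(n, p_cross, env="train", seed=0):
    # Two equal blocks with within-block edge probability 0.2 and
    # cross-block probability p_cross in {0.1, 0.3} (the label Y).
    # Vertex colours shift with the environment: each block's majority
    # colour has probability 0.9 in training and 0.1 in test.
    rng = random.Random(seed)
    p_major = 0.9 if env == "train" else 0.1
    block = [v < n // 2 for v in range(n)]
    colors = []
    for v in range(n):
        major, minor = ("red", "blue") if block[v] else ("green", "yellow")
        colors.append(major if rng.random() < p_major else minor)
    edges = [(u, v) for u in range(n) for v in range(u + 1, n)
             if rng.random() < (0.2 if block[u] == block[v] else p_cross)]
    return colors, edges

colors, edges = sample_sbm_graph(40, 0.3, env="test")  # a test graph, Y = 0.3
```

Note that the edge-generating process is identical in train and test; only the colour distributions (and graph sizes) change, which is exactly the kind of environment shift the E-invariant representations are meant to survive.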
\Cref{tab:attr} shows results for the three scenarios we considered:
\begin{enumerate*}
\item A single environment, where training graphs are of size 20 (left),
\item A multiple environment, where training graphs have size 14 or 20, chosen uniformly at random (middle),
\item A multiple environment, where training graphs are of size 20 or 30, chosen uniformly at random (right).
\end{enumerate*}
The test is the same in all cases, and contains graphs of size 40.
The number of graphs in training, validation, and test are 80, 20, and 100, respectively.
We obtain the induced homomorphism densities for $\Gamma_\text{GIN}$, $\Gamma_\text{RPGIN}$, and $\Gamma_\text{1-hot}$ for a fixed subgraph size $k = 5$.
For the GNNs, $\Gamma_\text{GIN}$, and $\Gamma_\text{RPGIN}$, we choose the number of hidden neurons in the MLP of the GNN (\Cref{eq:app_gnn}) in $\{32, 64, 128, 256\}$ and the number of layers in $\{1, 2, 3\}$ (3 layers only for the GNNs), and hyperparameter tune the presence or absence of the Jumping Knowledge mechanism from~\citet{xuJumpingKnowledge}. We add the regularization penalty in \Cref{eq:regul} for $\Gamma_\text{GIN}$ and $\Gamma_\text{RPGIN}$ in these experiments.
For $\Gamma_\text{GIN}$ and $\Gamma_\text{RPGIN}$, we choose the learning rate in $\{0.01, 0.001\}$
and the regularization weight in $\{0.1, 0.15\}$. For the GNNs, we choose the learning rate in $\{0.1, 0.01, 0.001\}$.
For IRM, we consider the two distinct graph sizes to be the two training environments. We cannot treat vertex attributes as environments here since we only have a single vertex-attribute distribution in training. We tune the regularizer $\lambda$~\citep[Section 3]{arjovsky2019invariant} in $\{4, 8, 16, 32\}$, stopping at 32 because increasing its value decreased performance.
For $\Gamma_\text{1-hot}$, we hyperparameter tune the dimension of the subgraph representations in $\{32, 64, 128, 256\}$ and the learning rate in $\{0.01, 0.001\}$.
We optimize all neural models for 500 epochs with batch size equal to the full training data. We use a single layer with softmax outputs as the downstream classifier.
We perform early stopping as per~\citet{hu2020open}.
The hyperparameter search is performed by training all models with 10 different initialization seeds and selecting the configuration that achieved the highest mean accuracy on the validation data.
Then, we report the mean (and standard deviation) accuracy over the training, the validation, and the test data in \Cref{tab:attr}.
For the graph kernels, following~\citet{kriege2020survey}, we tune the regularization hyperparameter C in SVM over the set $\{10^{-3}, 10^{-2}, 10^{-1},1, 10, 10^{2},10^{3}\}$. We tune the number of Weisfeiler-Lehman iterations of the WL kernel to be among $\{1, 2, 3, 4\}$ (see~\citet[Section 3.1]{kriege2020survey}).
\begin{table*}
\caption{Dataset statistics, Table from~\citet{yehudai2020size}.}
\label{stat}
\begin{small}
\begin{sc}
\begin{center}
\resizebox{\textwidth}{!}{
\begin{subtable}{\textwidth}
\centering
\begin{tabular}{|l|r|r|r|r|r|r|}
\cline{2-7}
\multicolumn{1}{c|}{} & \multicolumn{3}{c|}{\textbf{NCI1}} & \multicolumn{3}{c|}{\textbf{NCI109}} \\
\cline{2-7}
\multicolumn{1}{c|}{} & \textbf{all} & \textbf{Smallest} $\mathbf{50\%}$ & \textbf{Largest $\mathbf{10\%}$} & \textbf{all} & \textbf{Smallest} $\mathbf{50\%}$ & \textbf{Largest $\mathbf{10\%}$} \\
\hline
\textbf{Class A} & $49.95\%$ & $62.30\%$ & $19.17\%$ & $49.62\%$ & $62.04\%$ & $21.37\%$ \\
\hline
\textbf{Class B} & $50.04\%$ & $37.69\%$ & $80.82\%$ & $50.37\%$ & $37.95\%$ & $78.62\%$ \\
\hline
\textbf{Num of graphs} & 4110 & 2157 & 412 & 4127 & 2079 & 421 \\
\hline
\textbf{Avg graph size} & 29 & 20 & 61 & 29 & 20 & 61 \\
\hline
\end{tabular}
\end{subtable}
}
\bigskip
\resizebox{\textwidth}{!}{
\begin{subtable}{\textwidth}
\centering
\begin{tabular}{|l|r|r|r|r|r|r|}
\cline{2-7}
\multicolumn{1}{c|}{} & \multicolumn{3}{c|}{\textbf{PROTEINS}} & \multicolumn{3}{c|}{\textbf{DD}} \\
\cline{2-7}
\multicolumn{1}{c|}{} & \textbf{all} & \textbf{Smallest} $\mathbf{50\%}$ & \textbf{Largest $\mathbf{10\%}$} & \textbf{all} & \textbf{Smallest} $\mathbf{50\%}$ & \textbf{Largest $\mathbf{10\%}$} \\
\hline
\textbf{Class A} & $59.56\%$ & $41.97\%$ & $90.17\%$ & $58.65\%$ & $35.47\%$ & $79.66\%$ \\
\hline
\textbf{Class B} & $40.43\%$ & $58.02\%$ & $9.82\%$ & $41.34\%$ & $64.52\%$ & $20.33\%$ \\
\hline
\textbf{Num of graphs} & 1113 & 567 & 112 & 1178 & 592 & 118 \\
\hline
\textbf{Avg graph size} & 39 & 15 & 138 & 284 & 144 & 746 \\
\hline
\end{tabular}
\end{subtable}
}
\end{center}
\end{sc}
\end{small}
\end{table*}
\subsection{Extrapolation performance in real world tasks that violate our causal model}
The results on graphs that violate our causal model are reported in \Cref{REAL-WORLD-MATTH}.
We use the datasets from~\citet{Morris2020}, split into train, validation and test as proposed by~\citet{yehudai2020size}.
In particular, the training set is obtained by considering the graphs with sizes smaller than the 50-th percentile, and the test set those with sizes larger than the 90-th percentile. Additionally, $10\%$ of the training graphs are held out from training and used as validation. For statistics on the datasets and corresponding splits, see~\citet{yehudai2020size}.
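The size-based split can be sketched as follows (an illustrative implementation; for simplicity it holds out a deterministic first 10\% of the training graphs as validation, whereas the actual held-out subset may be chosen randomly):

```python
def size_split(graph_sizes):
    # Train: graphs below the 50th size percentile; test: graphs above the
    # 90th percentile; 10% of the training graphs become validation.
    s = sorted(graph_sizes)
    p50, p90 = s[len(s) // 2], s[int(0.9 * len(s))]
    train = [i for i, sz in enumerate(graph_sizes) if sz < p50]
    test = [i for i, sz in enumerate(graph_sizes) if sz > p90]
    val, train = train[: len(train) // 10], train[len(train) // 10:]
    return train, val, test

# toy example: 100 graphs with sizes 1..100
train_idx, val_idx, test_idx = size_split(list(range(1, 101)))
```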
We obtain the homomorphism densities for a fixed subgraph size $k = 4$. We observed that larger subgraph sizes, $k \geq 5$, imply a larger number of distinct subgraphs and consequently a smaller proportion of subgraphs shared by different graphs.
To further reduce the number of distinct subgraphs seen by the models, we only consider the most common subgraphs in training and validation when necessary.
Specifically, for \textsc{NCI1} and \textsc{NCI109}, we only use the top 100 subgraphs (out of a total of around 300), and for \textsc{DD} only the 30k most common (out of a total of around 200k). For \textsc{PROTEINS} we keep all the distinct subgraphs (which are around 180).
For the GNNs, we follow the setup proposed in~\citet{yehudai2020size}, where all the GNNs have 3 layers and a final classifier composed of a feedforward neural network (MLP) with 1 hidden layer and softmax outputs. \update{We also use a dropout of 0.3}. We tune the batch size in $\{64, 128\}$, the learning rate in $\{0.01, 0.005, 0.001\}$ and the network width in $\{32, 64\}$. For $\Gamma_\text{GIN}$ $ $ and $\Gamma_\text{RPGIN}$, the setup is the same, except for the number of GNN layers that is set to 2. For \textsc{DD} we use a fixed batch size of 256 to reduce the number of times the subgraphs are passed to the network, in order to speed up training.
For $\Gamma_\text{1-hot}$, we choose the batch size in $\{64, 128\}$, the learning rate in $\{0.01, 0.005, 0.001\}$ and the dimension of the subgraph representations in $\{32, 64\}$.
For IRM we tune the regularizer $\lambda$~\citep[Section 3]{arjovsky2019invariant} in $\{8, 32, 128, 512\}$. The two environments are taken to be the training graphs smaller than the median training-graph size and those larger than it, respectively.
To mitigate the imbalance between classes in training, we reweight the classes in the loss with the training proportions for each class.
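One common way to implement such reweighting is with inverse-frequency class weights (a sketch of one standard scheme; the text states only that classes are reweighted using the training proportions, so the exact normalisation below is an assumption):

```python
from collections import Counter

def class_weights(labels):
    # Inverse-frequency weights, normalised so that every class contributes
    # equally to the loss in aggregate: weight(c) = N / (num_classes * N_c),
    # giving the minority class a proportionally larger per-example weight.
    counts = Counter(labels)
    total = len(labels)
    return {c: total / (len(counts) * k) for c, k in counts.items()}

weights = class_weights([0] * 90 + [1] * 10)  # imbalanced: 90% vs 10%
```

The resulting per-class weights would then be passed to the (cross-entropy) loss.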
We train all neural models for 1000 epochs \update{using early stopping as per~\citet{hu2020open}}. We test the models on the epoch achieving the highest mean Matthews Correlation Coefficient on validation, because of the significant class imbalance in the test set (see \Cref{stat}).
For the graph kernels, following~\citet{kriege2020survey}, we tune the regularization hyperparameter C in SVM over the set $\{10^{-3}, 10^{-2}, 10^{-1},1, 10, 10^{2},10^{3}\}$. We
fix the number of Weisfeiler-Lehman iterations of the WL kernel to $3$ (see~\citet[Section 3.1]{kriege2020survey}), which is comparable to the $3$ GNN layers.
\section{Conclusions}
\label{sec:conclusions}
\vspace{-2pt}
In this work we looked at the task of out-of-distribution (OOD) graph classification, where train and test data have different distributions.
By introducing a structural causal model inspired by graphon models~\citep{lovasz2006limits}, we defined a representation that is approximately invariant to the train/test distribution changes of our causal model, empirically showing its benefits on both synthetic and real-world datasets against standard graph classification baselines.
Finally, our work contributed a blueprint for defining graph extrapolation tasks through causal models.
\section{Empirical Results}
\label{sec:experiments}
This section is dedicated to the empirical evaluation of our theoretical claims, including the ability of the representations
in \Cref{eq:Gamma-1hot,eq:Gamma-kgnn,eq:Gamma-krpgnn} to extrapolate as predicted by \Cref{prop:Einv} for tasks
that abide by \Cref{def:trainG,def:testG}. Due to space constraints, our results are summarised here, while further details are relegated to \Cref{sec:ExperimentAppendix}.
\update{Our code is also available\footnote{\small \url{https://github.com/PurdueMINDS/size-invariant-GNNs}}}.
We explore the extrapolation power of $\Gamma_\text{1-hot}$, $\Gamma_\text{GIN}$ and $\Gamma_\text{RPGIN}$ of \Cref{eq:Gamma-1hot,eq:Gamma-kgnn,eq:Gamma-krpgnn} using the Graph Isomorphism Network (GIN)~\citep{xu2018powerful} as our base GNN model, and Relational Pooling GIN (RPGIN)~\citep{pmlr-v97-murphy19a} as a more expressive GNN. The graph representations are then passed to an $L$-hidden-layer feedforward neural network (MLP) with softmax outputs that gives the predicted classes, $L\in\{0, 1\}$.
\update{As described in~\Cref{sec:intuitive}, we obtain induced homomorphism densities of {\em connected} graphs. For practical reasons, we focus only on densities of graphs of size {\em exactly} $k$, which is treated as a hyperparameter. Note that the number of parameters for our $\Gamma_\text{GNN}$ and $\Gamma_{\text{GNN}^+}$ does not depend on $k$ (for $\Gamma_\text{1-hot}$ it does), and the forward pass on
the $k$-sized graphs can be performed in parallel.}
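To make the $k$-sized-subgraph construction concrete, the sketch below computes an empirical induced $k$-subgraph distribution in the spirit of $\Gamma_\text{1-hot}$. For brevity it enumerates all $k$-subsets (not only connected patterns) and indexes subgraphs by their sorted degree sequence, which is a complete isomorphism invariant only for $k\le 3$; the representation above uses exact isomorphism classes of connected graphs:

```python
from itertools import combinations

def induced_density(adj, k):
    # Hedged sketch of a 1-hot-style representation: the empirical
    # distribution of induced k-vertex subgraphs, keyed by a crude
    # isomorphism signature (sorted degree sequence). `adj` maps each
    # vertex to the set of its neighbours.
    counts, total = {}, 0
    for S in combinations(sorted(adj), k):
        degs = tuple(sorted(sum(1 for u in S if u != v and u in adj[v])
                            for v in S))
        counts[degs] = counts.get(degs, 0) + 1
        total += 1
    return {sig: c / total for sig, c in counts.items()}

path = {0: {1}, 1: {0, 2}, 2: {1}}  # path graph 0-1-2
dens = induced_density(path, 2)
```

For the path on three vertices with $k=2$, two of the three pairs are edges, so the edge pattern has density $2/3$ and the non-edge pattern $1/3$.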
\vspace{-5pt}
\paragraph{Baselines.} Our baselines include the Graphlet Counting kernel~(GC Kernel)~\citep{shervashidze2009efficient}, which uses the $\Gamma_\text{1-hot}$ representation as input to a downstream classifier.
We report $\Gamma_\text{1-hot}$ separately from {GC Kernel} since $\Gamma_\text{1-hot}$ differs from {GC Kernel} in that we add the same feedforward neural network (MLP) classifier used in the $\Gamma_\text{GNN}$ model.
We also include GIN~\citep{xu2018powerful},
GCN~\citep{Kipf2016} and PNA~\citep{corso2020principal}, considering the sum, mean, and max READOUTs as proposed by~\citet{xu2020neural} for extrapolation (which we denote as {\em XU-READOUT} to avoid confusion with our $\text{READOUT}_\Gamma$). We also examine a more expressive GNN, RPGIN~\citep{pmlr-v97-murphy19a}, and the WL Kernel~\citep{shervashidze2011weisfeiler}.
We do not use the method of~\citet{yehudai2020size} as a baseline since it is a covariate shift adaptation approach that requires samples from ${\rm P}({\mathcal{G}}^\text{te}_{N^\text{te}})$, which are not available in our setting.
\vspace{-5pt}
\paragraph{Experiments with single and multiple graph sizes in training.}
Our single-environment experiments consist of a single graph size in training, and different sizes in test (different from the training size).
Whenever multiple environments are available in training ---multiple environments imply different graph sizes---, we employ Invariant Risk Minimization (IRM), considering the penalty proposed by~\citet{arjovsky2019invariant} for each environment (defined empirically as a range of training examples with similar graph sizes).
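A minimal sketch of the per-environment IRMv1 penalty, simplified to a scalar dummy classifier and a squared loss for readability (our experiments follow the original formulation of~\citet{arjovsky2019invariant}):

```python
def irm_penalty(preds_and_labels):
    # Hedged sketch of the IRMv1 penalty for one environment, using a
    # squared loss: with scalar dummy classifier w, the per-environment
    # risk is R(w) = mean((w*z - y)^2) and the penalty is
    # (dR/dw evaluated at w = 1)^2.
    grad = sum(2.0 * (z - y) * z
               for z, y in preds_and_labels) / len(preds_and_labels)
    return grad ** 2
```

The penalties of all environments are summed, scaled by the regularizer $\lambda$, and added to the empirical risk; the penalty vanishes exactly when the classifier is already optimal in that environment.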
For each task, we report
\begin{enumerate*}[label=(\alph*)]
\item \emph{training} accuracy;
\item \emph{validation} accuracy, computed on new examples sampled from ${\rm P}(Y,{\mathcal{G}}^\text{tr}_{N^\text{tr}})$; and
\item \emph{extrapolation test} accuracy, computed on new OOD examples sampled from ${\rm P}(Y,{\mathcal{G}}^\text{te}_{N^\text{te}})$.
\end{enumerate*}
\update{In our experiments we perform early stopping as per~\citet{hu2020open}}.
\subsection{Size extrapolation tasks for unattributed graphs}
\label{sec:unattributed}
{\em Schizophrenia task.} %
We use the fMRI brain graph data on 71 schizophrenic patients and 74 controls for classifying individuals with schizophrenia~\citep{de2016mapping}.
Vertices represent brain regions (voxels) with edges as functional connectivity.
We process the graphs differently between training and test data: training graphs have exactly 264 vertices (a single environment), while \update{control-group graphs in test} have around 40\% fewer vertices. We employ 5-fold cross-validation for hyperparameter tuning.
{\em Erd{\H o}s-R\'enyi\xspace task.} %
We simulate Erd{\H o}s-R\'enyi\xspace graphs~\citep{gilbert1959random, erdds1959random}
as a simple graphon random graph model.
The task is to classify the edge probability $p \in \{0.2, 0.5, 0.8 \}$ of the generated graph.
First we consider a single-environment version of the task, where we train and validate on graphs of size 80 and extrapolate to graphs with size 140 in test.
We also consider another experiment with training/validation graph sizes uniformly selected from $\{70,80\}$ (so we can use IRM), with the same test data as before (graphs of size 140).
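The data generation for this task can be sketched as follows (function names are ours, not from our released code); labels index the edge probability, and only the graph sizes change between the training and test splits:

```python
import random

def sample_er(n, p, rng):
    # Erdos-Renyi graph as a set of edges: each of the n*(n-1)/2 pairs
    # is an edge independently with probability p.
    return {(u, v) for u in range(n) for v in range(u + 1, n)
            if rng.random() < p}

def make_split(sizes, ps, reps, rng):
    # Hypothetical data generator: `reps` graphs per edge probability,
    # label y = index of p, sizes drawn uniformly from `sizes`.
    return [(sample_er(rng.choice(sizes), p, rng), y)
            for _ in range(reps) for y, p in enumerate(ps)]

rng = random.Random(0)
train = make_split([80], [0.2, 0.5, 0.8], 10, rng)   # single environment
test = make_split([140], [0.2, 0.5, 0.8], 10, rng)   # size extrapolation
```

The multi-environment variant simply replaces `[80]` with `[70, 80]` in the training call.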
{\bf Results.}
\Cref{tab:unattributed} shows that all methods perform well in validation (generalization over the training distribution). However, only $\Gamma_\text{1-hot}$ (GC Kernel and our simple classifier), $\Gamma_\text{GIN}$, $\Gamma_\text{RPGIN}$ are able to extrapolate, while displaying very similar ---often identical--- accuracies in validation (sampled from ${\rm P}({\mathcal{G}}^\text{tr}_{N^\text{tr}})$) and test (sampled from ${\rm P}({\mathcal{G}}^\text{te}_{N^\text{te}})$) in all experiments, as predicted by combining the theoretical results in \Cref{prop:Einv} and \Cref{thm:sizeExtrapolationBound}.
Using IRM in the Erd{\H o}s-R\'enyi\xspace task shows no improvement over not using IRM in the multi-environment setting.
\begin{table}[t]
\caption{Extrapolation performance over real-world graph datasets with OOD tasks violating \Cref{def:trainG,def:testG} and conditions of \Cref{thm:sizeExtrapolationBound}. \update{One of our E-invariant representations $\Gamma_\text{GIN}$ and $\Gamma_\text{RPGIN}$ is always amongst the top-4 methods in all datasets except \textsc{NCI109}}. Table shows mean (standard deviation) Matthews correlation coefficient (MCC) of the classifiers over the OOD test data. Bold emphasises the top-4 models (in average MCC) for each dataset.}
\vspace{-5pt}
\label{REAL-WORLD-MATTH}
\begin{small}
\begin{sc}
\begin{center}
\resizebox{\columnwidth}{!}{
\begin{tabular}{lrrrr}
\toprule
\textbf{Datasets} & \multicolumn{1}{c}{\textbf{NCI1}} & \multicolumn{1}{c}{\textbf{NCI109}} & \multicolumn{1}{c}{\textbf{PROTEINS}} & \multicolumn{1}{c}{\textbf{DD}} \\
\midrule
RANDOM & 0.00 (0.00) & 0.00 (0.00) & 0.00 (0.00) & 0.00 (0.00) \\
PNA & 0.21 (0.06) & \textbf{0.24 (0.06)} & \textbf{0.26 (0.08)} & 0.24 (0.10) \\
PNA (mean xu-readout) & 0.12 (0.05) & 0.21 (0.04) & 0.25 (0.06) & \textbf{0.29 (0.08)} \\
PNA (max xu-readout) & 0.16 (0.05) & 0.18 (0.07) & 0.20 (0.05) & 0.12 (0.14) \\
PNA + IRM & 0.21 (0.07) & \textbf{0.27 (0.08)} & \textbf{0.26 (0.10)} & \textbf{0.26 (0.08)} \\
GCN & 0.20 (0.06) & 0.15 (0.06) & 0.21 (0.09) & 0.23 (0.05) \\
GCN (mean xu-readout) & 0.20 (0.04) & 0.15 (0.09) & 0.23 (0.07) & 0.19 (0.06) \\
GCN (max xu-readout) & 0.20 (0.04) & 0.19 (0.07) & 0.20 (0.14) & 0.09 (0.08) \\
GCN + IRM & 0.12 (0.05) & \textbf{0.22 (0.06)} & 0.20 (0.07) & 0.23 (0.07) \\
GIN & \textbf{0.25 (0.06)} & 0.18 (0.05) & 0.23 (0.05) & 0.25 (0.09) \\
GIN (mean xu-readout) & 0.16 (0.05) & 0.14 (0.05) & 0.24 (0.05) & \textbf{0.27 (0.12)} \\
GIN (max xu-readout) & 0.15 (0.08) & 0.18 (0.08) & \textbf{0.28 (0.11)} & 0.19 (0.07) \\
GIN + IRM & 0.18 (0.08) & 0.16 (0.04) & \textbf{0.26 (0.06)} & 0.21 (0.09) \\
RPGIN & 0.15 (0.04) & 0.19 (0.05) & 0.24 (0.09) & 0.22 (0.09) \\
\hline
WL kernel & \textbf{0.39 (0.00)} & 0.21 (0.00) & 0.00 (0.00) & 0.00 (0.00) \\
GC kernel & 0.02 (0.00) & 0.01 (0.00) & \textbf{0.29 (0.00)} & 0.00 (0.00) \\
\hline
$\Gamma_\text{1-hot}$ & 0.17 (0.08) & \textbf{0.25 (0.06)} & 0.12 (0.09) & 0.23 (0.08) \\
$\Gamma_\text{GIN}$ & \textbf{0.24 (0.04)} & 0.18 (0.04) & \textbf{0.29 (0.11)} & \textbf{0.28 (0.06)} \\
$\Gamma_\text{RPGIN}$ & \textbf{0.26 (0.05)} & 0.20 (0.04) & 0.25 (0.12) & 0.20 (0.05) \\
\bottomrule
\end{tabular}
}
\end{center}
\end{sc}
\end{small}
\vspace{-8pt}
\end{table}
\subsection{Size/attribute extrapolation for attributed graphs}
\label{sec:attribute}
We now define a Stochastic Block Model (SBM) task with vertex attributes. The SBM has two blocks. Our goal is to classify the cross-block edge probability ${\bm{P}}_{1,2} = {\bm{P}}_{2,1} \in \{0.1, 0.3\}$ of a sampled graph.
Vertex attribute distributions depend on the blocks.
In block 1 vertices are randomly assigned red and blue attributes, while in block 2 vertices are randomly assigned green and yellow attributes (see {\bf SBM with vertex attributes} in \Cref{sec:family}).
The change in environments between training and test introduces a joint attribute-and-size distribution shift: in training, the vertices are $90\%$ red (resp.\ green) and $10\%$ blue (resp.\ yellow) in block 1 (resp.\ block 2), while in test the distribution is flipped and vertices are $10\%$ red (resp.\ green) and $90\%$ blue (resp.\ yellow) in block 1 (resp.\ block 2).
We consider three scenarios, with the same test data made of graphs of size 40:
\begin{enumerate*}
\item[(a)] A single-environment case, where all training graphs have size 20;
\item[(b)] A multi-environment case, where training graphs have sizes 14 and 20;
\item[(c)] A multi-environment case, where training graphs have sizes 20 and 30. These differences in training data let us check whether having graphs of sizes closer to the test graph sizes improves the performance of traditional graph representation methods.
\end{enumerate*}
{\bf Results.} \Cref{tab:attr} shows how traditional graph representations and $\Gamma_\text{1-hot}$ (both GC Kernel and our neural classifier) tap into the easy correlation between $Y$ and the density of red and green vertex attributes in the training graphs, while $\Gamma_\text{GIN}$ and $\Gamma_\text{RPGIN}$, with their attribute regularization (\Cref{eq:regul}), are approximately E-invariant, resulting in higher test accuracy that more closely matches their validation accuracy. Moreover, applying IRM has no beneficial impact, while adding larger graphs in training (closer to test graph sizes) increases the extrapolation accuracy of most methods.
\subsection{Experiments with real-world datasets that violate our causal model}
Finally, we test our E-invariant representations on datasets that violate \Cref{def:trainG,def:testG} and the conditions of \Cref{thm:sizeExtrapolationBound}.
We consider four vertex-attributed datasets (\textsc{NCI1}, \textsc{NCI109}, \textsc{DD}, \textsc{PROTEINS}) from~\citet{Morris2020}, and split the data as proposed by~\citet{yehudai2020size}.
As mentioned earlier,~\citet{yehudai2020size} is not part of our baselines since it requires samples from the test distribution ${\rm P}({\mathcal{G}}^\text{te}_{N^\text{te}})$.
Training and test data are created as follows: Graphs with sizes smaller than the $50$-th percentile are assigned to training, while graphs with sizes larger than the $90$-th percentile are assigned to test. A validation set for hyperparameter tuning consists of $10\%$ held out examples from training.
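This size-based split can be sketched as follows (a simplified version with a naive percentile rule; the exact split follows~\citet{yehudai2020size}):

```python
def size_split(graph_sizes):
    # Hedged sketch of the split above: graphs smaller than the 50th size
    # percentile go to training, graphs larger than the 90th go to test;
    # returns the two lists of dataset indices.
    s = sorted(graph_sizes)
    p50 = s[len(s) // 2]
    p90 = s[(9 * len(s)) // 10]
    train = [i for i, n in enumerate(graph_sizes) if n < p50]
    test = [i for i, n in enumerate(graph_sizes) if n > p90]
    return train, test

train_idx, test_idx = size_split(list(range(1, 101)))
```

A held-out 10\% of the training indices then serves as the validation set.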
{\bf Results.} \Cref{REAL-WORLD-MATTH} shows the test results using the Matthews correlation coefficient (MCC) --- MCC was chosen due to significant class imbalances in the OOD shift of our test data, see \Cref{sec:ExperimentAppendix} for more details. \update{We observe that one of our E-invariant representations $\Gamma_\text{GIN}$ and $\Gamma_\text{RPGIN}$ is always amongst the top-4 methods in all datasets except \textsc{NCI109}. We also note that the \textsc{WL Kernel} performs very well on \textsc{NCI1} but no better than random on \textsc{PROTEINS} and \textsc{DD}, showcasing the importance of consistency across datasets}.
\update{{\em Comments on \Cref{REAL-WORLD-MATTH}.} Counterfactual-driven extrapolations have their representation methods tailored to a specific extrapolation mechanism. Unlike in-distribution tasks (and covariate shift adaptation tasks, where one sees test distribution examples of the input graphs), counterfactual-driven extrapolations rely on being robust to the distribution-shift mechanism given by the causal model. Hence, it is expected that the causal extrapolation mechanism that works for a molecular task may not work as well for a social network (unless they share a universal graph-formation mechanism). The schizophrenia task~(\Cref{sec:unattributed}) has the same mechanism as our causal model (hence, good performance). Further research may show that every single dataset in this subsection has its own distinct extrapolation mechanism. We think that although these datasets violate our assumptions, this subsection is important (and we hope will be copied by future work) to show which datasets may need different extrapolation mechanisms.}
\section{Graph Classification: A Causal Model Based on Random Graphs}
\label{sec:family}
\paragraph{Out-of-distribution (OOD) shift.} For any joint distribution ${\rm P}(Y,{\mathcal{G}})$ of graphs ${\mathcal{G}}$ and labels $Y$, there are infinitely many causal models that give the same joint distribution~\citep{pearl2009causality}.
This phenomenon is known as model underspecification.
Hence, if the training data distribution ${\rm P}^\text{tr}(Y,{\mathcal{G}})$ does not have the same support as the test distribution ${\rm P}^\text{te}(Y,{\mathcal{G}})$,
a model trained with samples drawn from ${\rm P}^\text{tr}(Y,{\mathcal{G}})$ needs to be able to extrapolate in order to correctly predict ${\rm P}^\text{te}(Y|{\mathcal{G}})$.
In this work, we assume Independence between Cause and Mechanism (ICM): ${\rm P}^\text{tr}(Y|{\mathcal{G}}) = {\rm P}^\text{te}(Y|{\mathcal{G}})$, which is a common assumption in the causal deep learning literature~\citep{bengio2019meta,besserve2018group,johansson2016learning,louizos2017causal,raj2020causal,scholkopf2019causality,arjovsky2019invariant}.
In inductive graph classification tasks, ICM implies that the shift between train and test distributions ${\rm P}^\text{tr}(Y,{\mathcal{G}}) \neq {\rm P}^\text{te}(Y,{\mathcal{G}})$ comes from ${\rm P}^\text{tr}({\mathcal{G}}) \neq {\rm P}^\text{te}({\mathcal{G}})$, since ${\rm P}^\text{tr}(Y|{\mathcal{G}}) = {\rm P}^\text{te}(Y|{\mathcal{G}})$.
Because our task is inductive, i.e., we have no data from ${\rm P}^\text{te}({\mathcal{G}})$ nor a proxy variable, we must make assumptions about the causal mechanisms in order to extrapolate.
\paragraph{Causal model.}
A graph representation that is robust (invariant) to shifts in ${\rm P}^\text{te}({\mathcal{G}})$ must know how the distribution shifts.
Either we are given some examples from ${\rm P}^\text{te}({\mathcal{G}})$ (a.k.a.\ covariate shift adaptation~\citep{sugiyama2007covariate}) or we are given a causal structure that describes how the test distribution can shift.
Our paper focuses on the latter by giving a Structural Causal Model (SCM) for the data generation process \update{in \Cref{def:trainG,def:testG}. The definition of the Structural Causal Model (SCM) is needed since the observational probability itself does not provide any causal information (see observational equivalence in~\citet[Theorem 1.2.8]{pearl2009causality}).}
\Cref{fig:SCM} depicts the Directed Acyclic Graph (DAG) of our causal model.
It uses the twin network DAGs structure first proposed by~\citet{balke1994twinnets} (see~\citet[Chapter 7.1.4]{pearl2009causality}) in order to define how the test distribution can change.
In what follows we detail the SCM in \Cref{def:trainG,def:testG}.
Our causal model is inspired by Stochastic Block Models (SBMs)~\citep{diaconis1981statistics,snijders1997estimation} and their connection to graphon random graph models~\citep{airoldi2013stochastic,lovasz2006limits}:
\begin{definition}[Training Graph ${\mathcal{G}}^\text{tr}_{N^\text{tr}}$] \label{def:trainG}
The training graph SCM is depicted at the left side of the twin network DAG in \Cref{fig:SCM}.
\begin{itemize}[leftmargin=*]
\item The training graph is characterized by a graphon $W \sim {\rm P}(W)$, where $W:[0,1]^2\rightarrow[0,1]$ is a random symmetric measurable function~\citep{lovasz2006limits} sampled (according to some distribution) from ${\mathbb{D}}_W$, the set of all symmetric measurable functions on $[0,1]^2\rightarrow[0,1]$.
$W$ defines both the graph's target label and some of its structural and attribute characteristics, but $W$ is unknown.
\item The {\bf training environment} $E^\text{tr} \sim {\rm P}^\text{tr}(E)$ is a hidden environment variable that represents specific graph properties that change between the training and test.
$E^\text{tr}\in\mathbb{E}$ for some properly defined environment space $\mathbb{E}$.
\item The graph's size is determined by its environment $N^\text{tr} := \eta(E^\text{tr})$, where $\eta$ is an unknown deterministic function.
\item The graph's target label is given by $Y := h(W,Z_Y)$, $Y\in {\mathbb{Y}}$, with ${\mathbb{Y}}$ some properly defined discrete target space. $Z_Y$ is an independent random noise variable and $h$ is a deterministic function on the input space ${\mathbb{D}}_W\times {\mathbb{R}}$.
\item The vertices are numbered $V^\text{tr}=\{1,\ldots,N^\text{tr}\}$. Each vertex $v \in V^\text{tr}$ has an associated hidden variable $U_v \sim \text{Uniform}(0,1)$, sampled i.i.d.
The graph is undirected and its
adjacency matrix $A^\text{tr}\in \{0,1\}^{N^\text{tr}\times N^\text{tr}}$ is defined by
\begin{equation} \label{eq:Xtr}
A_{u,v}^\text{tr} \!:= \mathds{1}(Z_{u,v}\!>\!W(U_u,U_v)), \forall u,v \in V^\text{tr}, u\!\neq\!v.
\end{equation}
The diagonal entries are set to $0$ because there are no self-loops. Here $\mathds{1}$ is the indicator function, and $\{Z_{u,v}=Z_{v,u}\}_{u,v\in V^\text{tr}}$ are independent uniform noises on $[0,1]$.
\item The graph may contain discrete vertex attributes $X^\text{tr}\in {\mathbb{X}}^{N^\text{tr}}$ defined as $$X_{v}^\text{tr} := g_X(E^\text{tr}, W(U_v,U_v)), \:\forall v\in V^\text{tr},$$
where $X_{v}^\text{tr}\in {\mathbb{X}}$, and ${\mathbb{X}}$ is some properly defined attribute space. $g_X$ is a deterministic function that determines a vertex attribute using $W(U_v,U_v) \in [0,1]$ via, say, inverse sampling~\citep{tweedie1945inverse} the vertex attribute distribution.
\item Then, the training graph is $${\mathcal{G}}^\text{tr}_{N^\text{tr}} :=(A^\text{tr},X^\text{tr}).$$
\end{itemize}
\end{definition}
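The sampling procedure of the definition above can be sketched in a few lines (the function signature is ours; `env` and `g_X` are optional since attributes may be absent). Note that we keep the paper's convention $A_{u,v}=\mathds{1}(Z_{u,v}>W(U_u,U_v))$:

```python
import random

def sample_graph(W, n, rng, g_X=None, env=None):
    # Hedged sketch of the training-graph SCM: latent U_v ~ Uniform(0,1),
    # edge noises Z_{u,v} = Z_{v,u} ~ Uniform(0,1), symmetric adjacency
    # A[u][v] = 1(Z_{u,v} > W(U_u, U_v)), zero diagonal (no self-loops),
    # optional attributes X_v = g_X(env, W(U_v, U_v)).
    U = [rng.random() for _ in range(n)]
    A = [[0] * n for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            A[u][v] = A[v][u] = int(rng.random() > W(U[u], U[v]))
    X = [g_X(env, W(U[v], U[v])) for v in range(n)] if g_X else None
    return A, X

empty, _ = sample_graph(lambda x, y: 1.0, 5, random.Random(0))
full, _ = sample_graph(lambda x, y: 0.0, 5, random.Random(0))
```

With a constant graphon, $W\equiv 1$ yields the empty graph and $W\equiv 0$ the complete graph under this convention, which is a quick sanity check for the edge rule.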
The test data comes from the following (coupled) distribution; that is, the model reuses some of the random variables of the training graph model, effectively only replacing $E^\text{tr}$ with $E^\text{te}$, as shown in the DAG
of \Cref{fig:SCM}.
\begin{definition}[Test Graph ${\mathcal{G}}^\text{te}_{N^\text{te}}$]
\label{def:testG}
The SCM of the test graph is given by the right side of the twin network DAG in \Cref{fig:SCM}, changing the following variables from \Cref{def:trainG}:
\begin{itemize}[leftmargin=*]
\item The {\bf test environment} $E^\text{te} \sim {\rm P}^\text{te}(E)$, and $E^\text{te}\in \mathbb{E}$ belongs to the same space as $E^\text{tr}$.
It represents specific properties of the graphs that change between the test and training data. Denote $\text{supp}(\cdot) := \{x \mid {\rm P}(x) > 0\}$ as the support of a random variable.
The supports of $E^\text{te}$ and $E^\text{tr}$ may not overlap (i.e., $\text{supp}(E^\text{te}) \cap \text{supp}(E^\text{tr}) = \emptyset$).
\item The change in environment from $E^\text{tr}$ to $E^\text{te}$ may change the graph's size as $N^\text{te} := \eta(E^\text{te})$, where $\eta$ is the same unknown deterministic function as in \Cref{def:trainG}.
\item The vertices are numbered $V^\text{te}=\{1,\ldots,N^\text{te}\}$. The adjacency matrix $A^\text{te}\in \{0,1\}^{N^\text{te}\times N^\text{te}}$ is defined as in \Cref{eq:Xtr}.
\item The graph may contain discrete vertex attributes $X^\text{te}\in{\mathbb{X}}^{N^\text{te}}$ defined as
$$X_{v}^\text{te} := g_X(E^\text{te}, W(U_v,U_v)), \:\forall v \in V^\text{te},$$
with $g_X$ as given in \Cref{def:trainG}.
\item Then, the test graph is
$${\mathcal{G}}^\text{te}_{N^\text{te}} :=(A^\text{te},X^\text{te}).$$
\end{itemize}
\end{definition}
Our SCM has a direct connection with the graphon random graph model~\citep{lovasz2006limits}, and extend\update{s} it by considering vertex attributes. Next, we introduce examples of our graph classification tasks based on \Cref{def:trainG,def:testG} using two classic random graph models.
{\bf Notation:} (${\mathcal{G}}^\text{*}_{N^\text{*}}, E^\text{*}, A^\text{*}, V^\text{*}, X^\text{*}$) In what follows we use the superscript $\text{*}$ as a wildcard to describe both train and test random variables. For instance, ${\mathcal{G}}^\text{*}_{N^\text{*}}$ is a variable that is a wildcard for referring to either ${\mathcal{G}}^\text{tr}_{N^\text{tr}}$ or ${\mathcal{G}}^\text{te}_{N^\text{te}}$.
Also, from now on we define ${\rm P}^\text{te}({\mathcal{G}}) = {\rm P}({\mathcal{G}}^\text{te}_{N^\text{te}})$ and ${\rm P}^\text{tr}({\mathcal{G}}) = {\rm P}({\mathcal{G}}^\text{tr}_{N^\text{tr}})$.
\paragraph{Erd{\H o}s-R\'enyi\xspace example.}
Consider a random training environment $E^\text{tr}$ such that $N^\text{tr} = \eta (E^\text{tr})$ is the number of vertices for graphs in our training data. Let $p$ be the probability that any two distinct vertices of the graph have an edge. Define $W$ as a constant function that always outputs $p$.
Sample independent uniform noises $Z_{u,v} \sim \text{Uniform}(0,1)$ (for each possible edge, $Z_{u,v}=Z_{v,u}$).
An Erd{\H o}s-R\'enyi\xspace graph can be defined as a graph whose adjacency matrix $A^\text{tr}$ is $A^\text{tr}_{u,v} = \mathds{1}(Z_{u,v}>W(U_u,U_v))=\mathds{1}(Z_{u,v}>p)$, $\forall u,v \in V^\text{tr},u\neq v$. Here vertex attributes are not considered and \update{we} can define $X^\text{tr}_v=\O, \forall v\in V^{\text{tr}}$ as the null attribute.
In the test data, we have a different environment $E^\text{te}$ and graph size $N^\text{te} = \eta (E^\text{te})$, with $\text{supp}(N^\text{te}) \cap \text{supp}(N^\text{tr}) = \emptyset$.
The variables $\{Z_{u,v}\}_{u,v\in \{1,\ldots,\max(\text{supp}(N^\text{tr}) \cup \text{supp}(N^\text{te}))\}}$
can be thought of as the seed of a random number generator
that determines whether two distinct vertices $u$ and $v$
are connected by an edge.
The above defines our training and test data as a set of Erd{\H o}s-R\'enyi\xspace random graphs of sizes $N^\text{tr}$ and $N^\text{te}$ with probability
$p$.
The targets of the Erd{\H o}s-R\'enyi\xspace graphs can be, for instance, the value $Y = p$ in \Cref{def:trainG}, which is determined by $W$ and invariant to graph sizes.
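The twin-network coupling can be illustrated in code (the function name is ours): because train and test share the same noises $Z_{u,v}$ and only the environment ---and hence the size--- changes, the training graph is exactly the test graph induced on the first $N^\text{tr}$ vertices. The edge rule $Z_{u,v}>p$ follows the convention above:

```python
import random

def coupled_er(p, n_tr, n_te, seed=0):
    # Hedged sketch of the coupled train/test Erdos-Renyi graphs: a single
    # table of shared edge noises Z (the "random seed"), restricted to the
    # first n_tr (train) or n_te (test) vertices. Edges are ordered pairs
    # (u, v) with u < v.
    rng = random.Random(seed)
    n = max(n_tr, n_te)
    Z = {(u, v): rng.random() for u in range(n) for v in range(u + 1, n)}
    A_tr = {e for e, z in Z.items() if e[1] < n_tr and z > p}
    A_te = {e for e, z in Z.items() if e[1] < n_te and z > p}
    return A_tr, A_te

A_tr, A_te = coupled_er(0.5, 4, 6)
```

For a constant graphon the latents $U_v$ play no role, so they are omitted here.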
\paragraph{Stochastic Block Model (SBM)~\citep{snijders1997estimation}.} An SBM can be seen as a generalization of Erd{\H o}s-R\'enyi\xspace graphs.
SBMs partition the vertex set into disjoint subsets $S_1,S_2,...,S_r$ (known as blocks or communities) with an associated $r\times r$ symmetric matrix ${\bm{P}}$, where the probability of an edge $(u,v)$, $u\in S_i$ and $v\in S_j$ is ${\bm{P}}_{ij}$, for $i,j\in \{1,\ldots,r\}$.
In the training and test data, we still have i.i.d.\ sampled $Z_{u,v}=Z_{v,u}$ and different environments $E^\text{tr}$, $E^\text{te}$.
Divide the interval $[0,1]$ into disjoint convex sets $[t_0,t_1), [t_1,t_2),\ldots,[t_{r-1},t_r]$, where $t_0=0$ and $t_r=1$, such that if $U_v \sim \text{Uniform}(0,1)$ satisfies $U_v \in [t_{i-1},t_i)$, then vertex $v$ belongs to block $S_i$. Thus $W(U_u,U_v)=\sum_{i,j\in \{1,\ldots,r\}}P_{ij}\mathds{1}(U_u \in [t_{i-1},t_i))\mathds{1}(U_v \in [t_{j-1},t_j))$. An SBM graph in training or test can be defined as a graph whose adjacency matrix $A^\text{*}$ is $A^\text{*}_{u,v} = \mathds{1}(Z_{u,v} > W(U_u,U_v))$, $\forall u,v \in V^\text{*}, u\neq v$. This yields a set of SBM random graphs of sizes $N^\text{tr}$ and $N^\text{te}$ with edge-probability matrix ${\bm{P}}$. If there are only two blocks, the target $Y$ can be ${\bm{P}}_{1,2}$, the probability of an edge connecting vertices of different blocks, which is determined by $W$ and invariant to graph sizes.
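The step-function graphon above can be written compactly as follows (a sketch, with the block boundaries $t$ and edge-probability matrix $P$ passed explicitly):

```python
def sbm_graphon(P, t):
    # Hedged sketch of the SBM step graphon: t = [t_0, ..., t_r] with
    # t_0 = 0 and t_r = 1; W(x, y) = P[i][j] when x falls in block i
    # and y in block j.
    def block(x):
        for i in range(len(t) - 2):
            if t[i] <= x < t[i + 1]:
                return i
        return len(t) - 2  # last block is the closed interval [t_{r-1}, 1]
    return lambda x, y: P[block(x)][block(y)]

W = sbm_graphon([[0.9, 0.1], [0.1, 0.9]], [0.0, 0.5, 1.0])
```

Composing this $W$ with any graphon sampler reproduces an SBM with within-block probability $0.9$ and cross-block probability $0.1$ in this two-block example.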
\paragraph{SBM with vertex attributes.} For the SBM, assume the vertex attributes are tied to blocks, and are distinct for each block.
The environment variable operates on changing
the distributions of attributes assigned in each block.
Consider the following SBM example with two blocks:
Define $W(U_v,U_v)=\frac{U_v}{2t_1}\mathds{1}(U_v\in [0,t_1))+ (\frac{1}{2}+\frac{U_v-t_1}{2(1-t_1)})\mathds{1}(U_v\in [t_1,1])$, so that $W(U_v,U_v)< \frac{1}{2}$ if and only if $v$ belongs to the first block. This changes the values of $W$ only on a set of measure zero.
Let $g_X$ be such that it defines constants \update{as} $0 < \alpha_{E^\text{*}\!,1} < \frac{1}{2} < \alpha_{E^\text{*}\!,2} < 1$, and
vertex attributes as
$$X^\text{*}_v\! =\! g_X(E^\text{*} \!,W(U_v,U_v))\!=\!\negthickspace
\begin{bmatrix*}[l]
\mathds{1}(W(U_v,U_v) \! \in \! [0,\alpha_{E^\text{*}\!,1}))\\
\mathds{1}(W(U_v,U_v) \! \in \! [\alpha_{E^\text{*}\!,1},.5))\\
\mathds{1}(W(U_v,U_v) \! \in \! [.5,\alpha_{E^\text{*}\!,2}))\\
\mathds{1}(W(U_v,U_v) \! \in \! [\alpha_{E^\text{*}\!,2},1])
\end{bmatrix*}\negthickspace,$$
where the attribute of vertex $v$, $X^\text{*}_v$, is one-hot encoded to represent 4 colors: red and blue (if $v$ is in block $1$) and green and yellow (if $v$ is in block 2).
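A sketch of $g_X$, with the environment-dependent thresholds $\alpha_{E,1}$ and $\alpha_{E,2}$ passed explicitly (since block-1 diagonal values $W(U_v,U_v)$ are uniform on $[0,\frac{1}{2})$, choosing, e.g., $\alpha_{E^\text{tr}\!,1}=0.45$ would reproduce a $90\%$-red training block):

```python
def make_g_X(alpha1, alpha2):
    # Hedged sketch of the attribute function above, with thresholds
    # 0 < alpha1 < 1/2 < alpha2 < 1 determined by the environment.
    # Input w = W(U_v, U_v); output is the one-hot colour.
    def g_X(w):
        if w < alpha1:
            return (1, 0, 0, 0)   # red    (block 1)
        if w < 0.5:
            return (0, 1, 0, 0)   # blue   (block 1)
        if w < alpha2:
            return (0, 0, 1, 0)   # green  (block 2)
        return (0, 0, 0, 1)       # yellow (block 2)
    return g_X

g_tr = make_g_X(0.45, 0.95)
```

Flipping the thresholds (e.g., to $0.05$ and $0.55$) yields the test-time attribute distribution without touching the graph structure, which is exactly the shift studied in our attributed experiments.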
\section{Introduction}
\label{introduction}
In general, graph representation learning methods assume that the train and test data come from the same distribution.
Unfortunately, this assumption is not always valid in real-world deployments~\citep{hu2020open,koh2020wilds,d2020underspecification}.
When the test distribution is different from training, the test data is described as {\em out of distribution (OOD)}.
Differences in train/test distribution may be due to environmental factors such as those related to the way the data is collected or processed.
Particularly, in graph classification tasks, where ${\mathcal{G}}$ is the graph and $Y$ its label, we often see different graph sizes and/or distinct arrangements of vertex attributes associated with the same target label.
{\em How should we learn a graph representation for out-of-distribution inductive tasks (extrapolations), where the graphs in training and test (deployment) have distinct characteristics (i.e., ${\rm P}^\text{tr}({\mathcal{G}}) \neq {\rm P}^\text{te}({\mathcal{G}})$)? }
Are inductive graph neural networks (GNNs) robust to distribution shifts between ${\rm P}^\text{tr}({\mathcal{G}})$ and ${\rm P}^\text{te}({\mathcal{G}})$?
If not, is it possible to design a graph classifier that is robust to such OOD shifts without access to samples from ${\rm P}^\text{te}({\mathcal{G}})$?
\begin{figure}[t!!]
\centering
\begin{minipage}{\columnwidth}
\centering
\includegraphics[width=3.2in]{figs/BeatriceSEM.pdf}\hfill
\end{minipage}
\caption{The twin network DAG~\citep{balke1994twinnets} of our structural causal model (SCM). Gray (resp.\ white) vertices represent observed (resp.\ hidden) random variables.}
\label{fig:SCM}
\vspace{-10pt}
\end{figure}
In this work we consider an OOD graph classification task with different train and test distributions based on graph sizes and vertex attributes. Our work focuses on simple (no self-loops) undirected graphs with discrete vertex attributes.
We make the common assumption of independence between cause and mechanisms~\citep{bengio2019meta,besserve2018group,johansson2016learning,louizos2017causal,raj2020causal,scholkopf2019causality,arjovsky2019invariant}, which states that ${\rm P}(Y|{\mathcal{G}})$ remains the same between train and test.
We also assume we do not have access to samples from ${\rm P}^\text{te}({\mathcal{G}})$, hence covariate shift adaptation methods (such as~\citet{yehudai2020size}) are unfit for our scenario.
In our setting we need to learn to extrapolate from a causal model.
\vspace{-5pt}
\paragraph{Contributions.}
Our contributions are as follows:
\begin{enumerate}[leftmargin=*]
\vspace{-5pt}
\item We provide a causal model that formally describes a class of graph classification tasks where the training (${\rm P}^\text{tr}({\mathcal{G}})$) and test (${\rm P}^\text{te}({\mathcal{G}})$) graphs have different size and vertex attribute distributions.
\item Assuming Independence between Cause and Mechanism (ICM)~\citep{louizos2017causal,shajarisales2015telling}, we introduce a graph representation method based on the work of~\citet{lovasz2006limits} and Graph Neural Networks (GNNs)~\citep{Kipf2016, Hamilton2017, pmlr-v97-you19b}
that is invariant to the train/test distribution shifts of our causal model.
Unlike existing invariant representations, this representation can perform extrapolations from a single training environment (e.g., all training graphs have the same size).
\item Our empirical results show that, in most experiments, neither Invariant Risk Minimization (IRM)~\citep{arjovsky2019invariant} nor the GNN extrapolation modifications proposed by~\citet{xu2020neural} are able to perform well in graph classification tasks over the OOD test data.
\end{enumerate}
\section{E-Invariant Graph Representations}
\label{sec:methods}
In this section we discuss shortcomings of traditional graph representation methods for out-of-distribution (OOD) graph classification tasks. We will base our discussion on our Structural Causal Model (SCM) (described in \Cref{def:testG,def:trainG} and \Cref{fig:SCM}).
We show that there is an approximately environment-invariant graph representation that is able to extrapolate to OOD test data.
\paragraph{The shortcomings of standard graph representation methods.}
\Cref{fig:SCM} shows that our target variable $Y$ is a function only of the {\em graphon} variable $W$, rather than the training or test environments, $E^\text{tr}$ and $E^\text{te}$, respectively.
However, $Y$ is not independent of $E^\text{tr}$ given ${\mathcal{G}}^\text{tr}_{N^\text{tr}}$, since both $E^\text{tr}$ and $W$ affect $A^\text{tr}$ and $X^\text{tr}$ (which are colliders), and $Y$ depends on $W$.
Hence, traditional graph representation learning methods can pick up this easy spurious correlation in the training data (via shortcut learning~\citep{geirhos2020shortcut}), which would prevent the model from learning the correct OOD test predictor.
\update{To correctly predict $Y$ in our OOD test data regardless of spurious correlations between the variables, we need an estimator that is robust to them.
In what follows we focus on {\bf environment-invariant (E-invariant)} graph representations.
To show the ability of E-invariant representations to extrapolate to OOD test data, we state the definition and its effect on downstream OOD classification tasks in the following proposition.}
\begin{restatable}{proposition}{propEinv}[E-invariant Representation's Effect on \update{OOD} Classification]\label{prop:Einv}
Consider a permutation-invariant graph representation $\Gamma:\cup_{n=1}^\infty \{0,1\}^{n\times n} \times {\mathbb{X}}^n \to {\mathbb{R}}^d$, $d \geq 1$, and a downstream function $\rho:{\mathbb{Y}} \times {\mathbb{R}}^d \to [0,1]$ (e.g., a feedforward neural network (MLP) with softmax outputs) such that, for some $\epsilon, \delta > 0$, the generalization error over the training distribution is: $\forall y \in {\mathbb{Y}}$,
\[
{\rm P}(~\vert {\rm P}(Y = y | {\mathcal{G}}^\text{tr}_{N^\text{tr}}) - \rho(y,\Gamma({\mathcal{G}}^\text{tr}_{N^\text{tr}})) \vert~ \leq \epsilon) \geq 1 - \delta ,
\]
$\Gamma$ is said to be {\bf environment-invariant (E-invariant)} if $\forall e \in \text{supp}(E^\text{tr}),\forall e^\dagger \in \text{supp}(E^\text{te})$,
\[
\Gamma({\mathcal{G}}^\text{tr}_{N^\text{tr}}|E^\text{tr}=e)=\Gamma({\mathcal{G}}^\text{te}_{N^\text{te}}|E^\text{te}=e^\dagger).
\]
\update{If $\Gamma$ is E-invariant}, then the OOD test error is the same as the generalization error over the training distribution, i.e., $\forall y \in {\mathbb{Y}}$,
\begin{equation} \label{eq:predY_Einv}
{\rm P}(\vert {\rm P}(Y = y| {\mathcal{G}}^\text{te}_{N^\text{te}}) - \rho(y,\Gamma({\mathcal{G}}^\text{te}_{N^\text{te}}))\vert \leq \epsilon) \geq 1 - \delta.
\end{equation}
\end{restatable}
\Cref{prop:Einv} shows that an E-invariant representation will perform no worse on the OOD test data (extrapolation
samples from $(Y,{\mathcal{G}}^\text{te}_{N^\text{te}})$) than on a test dataset having the same environment distribution as the training data (samples from $(Y,{\mathcal{G}}^\text{tr}_{N^\text{tr}})$). {\em Our task now becomes finding an E-invariant graph representation $\Gamma$ that can be used to predict $Y$.}
\paragraph{The shortcomings of Invariant Risk Minimization (IRM).}
Invariant Risk Minimization (IRM)~\citep{arjovsky2019invariant} aims to learn a representation that is invariant across all training environment\update{s}, $\forall e \in \text{supp}(E^\text{tr})$, by adding a regularization penalty on the empirical risk. However, IRM will fail if: (i) $\text{supp}(E^\text{te}) \not \subseteq \text{supp}(E^\text{tr})$, since the penalty provides no guarantee that the representation will still be invariant w.r.t.\ $e^\dagger \in \text{supp}(E^\text{te})\backslash \text{supp}(E^\text{tr})$ if the representation is a nonlinear function of the input~\citep{rosenfeld2020risks}; and
(ii) the training data contains only a single environment, i.e., $\text{supp}(E^\text{tr})=\{e\}$. For instance, the training data may contain only graphs of a single size; in this case, IRM cannot be applied for size extrapolation.
Our experiments show that the IRM procedure
does not seem to work for graph representation learning.
In what follows we leverage the stability of subgraph densities (more precisely, induced homomorphism densities) in graphon random graph models~\citep{lovasz2006limits} to learn E-invariant representations for the SCM defined in \Cref{def:testG,def:trainG}, whose DAG is illustrated in \Cref{fig:SCM}.
\subsection{Approximately E-Invariant Graph Representations for Our Model}\label{sec:intuitive}
Let ${\mathcal{G}}^\text{*}_{N^\text{*}}$ denote either an $N^\text{tr}$-sized train or $N^\text{te}$-sized test graph from the SCM in \Cref{def:trainG,def:testG}.
For a given $k$-vertex graph $F_k$ $(k<N^\text{*})$, let $\text{ind}(F_k,{\mathcal{G}}^\text{*}_{N^\text{*}})$ be the number of induced homomorphisms of $F_k$ into ${\mathcal{G}}^\text{*}_{N^\text{*}}$, informally, the number of injective mappings from $V(F_k)$ to $V({\mathcal{G}}^\text{*}_{N^\text{*}})$ such that the corresponding subgraph induced in ${\mathcal{G}}^\text{*}_{N^\text{*}}$ is isomorphic to $F_k$. The induced homomorphism density is defined as
\begin{equation}\label{eq:tinj}
t_{\rm ind}(F_k,{\mathcal{G}}^\text{*}_{N^\text{*}})=\frac{{\rm ind}(F_k,{\mathcal{G}}^\text{*}_{N^\text{*}})}{N^\text{*}!/(N^\text{*} - k)!},
\end{equation}
where the denominator is the number of possible mappings. Let $\mathcal{F}_{\leq k}$ be the set of all connected vertex-attributed graphs of size $k' \leq k$.
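For intuition, the induced homomorphism density of \Cref{eq:tinj} can be evaluated by brute force for small graphs. The sketch below is only an illustration (it ignores vertex attributes and is not the counting machinery used in the experiments): it enumerates every injective vertex map and checks that edges and non-edges of $F_k$ are exactly preserved.

```python
from itertools import permutations

def t_ind(F_edges, k, G_edges, n):
    """Induced homomorphism density t_ind(F_k, G_n): fraction of injective
    maps V(F_k) -> V(G_n) under which both edges AND non-edges of F_k are
    preserved (i.e., the image induces a copy of F_k)."""
    F = {frozenset(e) for e in F_edges}
    G = {frozenset(e) for e in G_edges}
    hits, total = 0, 0
    for phi in permutations(range(n), k):   # all n!/(n-k)! injective maps
        total += 1
        hits += all(
            (frozenset((phi[u], phi[v])) in G) == (frozenset((u, v)) in F)
            for u in range(k) for v in range(u + 1, k)
        )
    return hits / total

# Example: triangle density in the complete graph K_4 is 1.
tri = [(0, 1), (1, 2), (0, 2)]
K4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(t_ind(tri, 3, K4, 4))  # -> 1.0
```

The factorial in the denominator of \Cref{eq:tinj} is exactly the `total` counter above, so this matches the definition term by term.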
Using the subgraph densities (induced homomorphism densities) $\{t_{\rm ind}(F_{k'},{\mathcal{G}}^\text{*}_{N^\text{*}})\}_{F_{k'} \in {\mathcal{F}}_{\leq k}}$ we will construct a (feature vector) representation for ${\mathcal{G}}^\text{*}_{N^\text{*}}$, similar to~\citet{hancock2020survey,pinar2017escape},
\begin{equation}
\label{eq:Gamma-1hot}
\Gamma_\text{1-hot}({\mathcal{G}}^\text{*}_{N^\text{*}})\! =\!\!\!\!\!\!\! \sum_{F_{k'}\in \mathcal{F}_{\leq k}} \!\! t_\text{ind}(F_{k'}, {\mathcal{G}}^\text{*}_{N^\text{*}}) {\bm{1}}_\text{one-hot}\{F_{k'}, \mathcal{F}_{\leq k}\},
\end{equation}%
%
where ${\bm{1}}_\text{one-hot}\{F_{k'}, \mathcal{F}_{\leq k}\}$ assigns a unique one-hot vector to each distinct graph $F_{k'}$ in $\mathcal{F}_{\leq k}$. For instance, for $k=4$, the one-hot vectors could be (1,0,\ldots,0)=\VeeNbrg, (0,1,\ldots,0)=\Kfourbrgg, (0,0,\ldots,1,\ldots,0)=\VeeNbrr, (0,0,\ldots,1)=\TrigNbrg, etc.
In \Cref{sec:conditions} we show that the (feature vector) representation in \Cref{eq:Gamma-1hot} is approximately environment-invariant in our SCM model.
An alternative approach is to replace the one-hot vector representation with learnable graph representation models. We first use Graph Neural Networks (GNNs)~\citep{Kipf2016, Hamilton2017, pmlr-v97-you19b} to learn representations that can capture information from vertex attributes.
Simply speaking, GNNs proceed by having vertices pass messages to one another through a learnable function such as an MLP, repeated over $L \in \mathbb{Z}_{\ge 1}$ layers.
Consider the following simple GNN example. Let $V^\text{*}$ be the set of vertices. At each iteration $l \in \{1, 2, \ldots, L\}$, all vertices $v\in V^\text{*}$ are associated with a learned vector ${\bm{h}}_v^{(l)}$. Specifically, we begin by initializing a vector as ${\bm{h}}_{v}^{(0)} = X_{v}$ for every vertex $v \in V^\text{*}$. Then, we recursively compute an update such as the following $\forall v \in V^\text{*}$,
\begin{equation}\label{eq:WL1}%
{\bm{h}}^{(l)}_v \!=\mathrm{MLP}^{(l)} \! \Big({\bm{h}}^{(l-1)}_v\!, \text{READOUT}_\text{Neigh}(({\bm{h}}^{(l-1)}_u)_{u \in \mathcal{N}\!(v)}) \!\Big),
\end{equation}%
where $\mathcal{N}(v) \subseteq V^\text{*}$ denotes the neighborhood set of $v$ in the graph, $\text{READOUT}_\text{Neigh}$ is a permutation-invariant function (e.g., sum) of the neighborhood learned vectors, and $\mathrm{MLP}^{(l)}$ denotes a multi-layer perceptron whose superscript $l$ indicates that the MLP at each recursion layer may have different learnable parameters.
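A minimal numpy sketch of one update of \Cref{eq:WL1}, with a single linear map plus ReLU standing in for $\mathrm{MLP}^{(l)}$ and sum as $\text{READOUT}_\text{Neigh}$ (a deliberate simplification of the learnable architectures used in practice):

```python
import numpy as np

def gnn_layer(H, A, W_self, W_neigh):
    """One message-passing step: h_v <- ReLU(W_self^T h_v + W_neigh^T m_v),
    where m_v is the sum of the neighbors' current vectors (sum-READOUT)."""
    M = A @ H                                   # m_v = sum_{u in N(v)} h_u
    return np.maximum(H @ W_self + M @ W_neigh, 0.0)

# toy example: triangle graph, 2-dimensional initial features h_v^{(0)} = X_v
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
H0 = np.eye(3, 2)
rng = np.random.default_rng(0)
H1 = gnn_layer(H0, A, rng.normal(size=(2, 2)), rng.normal(size=(2, 2)))
print(H1.shape)  # one learned vector per vertex
```

Stacking $L$ such calls (with layer-specific weights) gives the recursion described above.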
There are other alternatives to \Cref{eq:WL1} that we will also test in our experiments.
Then, we arrive at the following
representation of ${\mathcal{G}}^\text{*}_{N^\text{*}}$:
\begin{equation}
\label{eq:Gamma-kgnn}
\begin{aligned}[b]
&\Gamma_\text{GNN}({\mathcal{G}}^\text{*}_{N^\text{*}} ) =\\
&\sum_{F_{k'}\in \mathcal{F}_{\leq k}} t_\text{ind}(F_{k'},{\mathcal{G}}^\text{*}_{N^\text{*}}) \text{READOUT}_\Gamma(\text{GNN}(F_{k'})),
\end{aligned}
\end{equation}
where
$\text{READOUT}_\Gamma$ is a permutation-invariant function that maps the vertex-level outputs of a GNN to a graph-level representation (e.g. by summing all vertex embeddings). Unfortunately, GNNs are not most-expressive representations of graphs~\citep{morris2019weisfeiler,pmlr-v97-murphy19a,xu2018powerful} and thus $\Gamma_\text{GNN}(\cdot)$ is less expressive than $\Gamma_\text{1-hot}(\cdot)$. A representation with greater expressive power is
\begin{equation} \label{eq:Gamma-krpgnn}
\begin{aligned}[b]
&\Gamma_{\text{GNN}^+}({\mathcal{G}}^\text{*}_{N^\text{*}})=\\
&\sum_{F_{k'}\in \mathcal{F}_{\leq k}} t_\text{ind}(F_{k'}, {\mathcal{G}}^\text{*}_{N^\text{*}} ) \text{READOUT}_\Gamma(\text{GNN}^+(F_{k'})),
\end{aligned}
\end{equation}
where $\text{GNN}^+$ is a most-expressive $k'$-vertex graph representation, which can be achieved by any of the methods of~\citet{vignac2020building, maron2019provably,pmlr-v97-murphy19a}.
Since $\text{GNN}^+$ is most expressive, $\text{GNN}^+$ can ignore attributes and map each $F_{k'}$ to a one-hot vector ${\bm{1}}_\text{one-hot}\{F_{k'}, \mathcal{F}_{\leq k}\}$; therefore, $\Gamma_{\text{GNN}^+}(\cdot)$ generalizes $\Gamma_\text{1-hot}(\cdot)$ of \Cref{eq:Gamma-1hot}. But {\em note that greater expressiveness does not imply better extrapolation}.
More importantly, GNN and $\text{GNN}^+$ representations allow us to increase their E-invariance by adding a penalty for having different representations of two graphs $F_{k'}$ and $H_{k'}$ with the same topology but different vertex attributes (say, $F_{k'}\!\!=$~\Kfourbrgg and $H_{k'}\!\!=$~\Kfourbrgr), as long as these differences do not significantly impact downstream model accuracy in the training data.
\update{Note that this is more powerful than simply masking vertex attributes, since it allows same-topology graphs with distinct vertex attributes to have different representations if it is important to distinguish them for the target prediction (see \Cref{sec:attribute}).}
We will discuss more about these theoretical underpinnings in the next section.
Hence, for each $k'$-sized vertex-attributed graph $F_{k'}$, we consider the set \({\mathcal{H}}(F_{k'})\) of all $k'$-sized vertex-attributed graphs having the same underlying topology as $F_{k'}$ but with all possible different vertex attributes.
We then define the regularization penalty
\begin{align}
\frac{1}{|\mathcal{F}_{\leq k}|}&\sum_{F_{k'}\in \mathcal{F}_{\leq k}}
\mathbb{E}_{H_{k'} \in {\mathcal{H}}(F_{k'})} \bigl[ \Vert \text{READOUT}_\Gamma(\text{GNN}^{*}(F_{k'})) \nonumber \\
&- \text{READOUT}_\Gamma(\text{GNN}^{*}(H_{k'})) \Vert_2 \bigr] \label{eq:regul},
\end{align}
where $\text{GNN}^{*} = \text{GNN}$ if we choose the representation $\Gamma_\text{GNN}$, or $\text{GNN}^{*} = \text{GNN}^{+}$ if we choose the representation $\Gamma_{\text{GNN}^+}$.
In practice, we assume $H_{k'}$ is uniformly sampled from ${\mathcal{H}}(F_{k'})$ and we sample one $H_{k'}$ for each $F_{k'}$
in order to obtain an unbiased estimator of \Cref{eq:regul}.
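The one-sample estimator of \Cref{eq:regul} can be sketched as follows. Here `embed` is a toy stand-in for $\text{READOUT}_\Gamma \circ \text{GNN}^{*}$, and $H_{k'}$ is drawn from ${\mathcal{H}}(F_{k'})$ by resampling vertex attributes uniformly on the same topology (an assumption about how the uniform sample is realized):

```python
import random

def invariance_penalty(subgraphs, attr_values, embed):
    """One-sample unbiased estimate of the E-invariance regularizer:
    for each pattern F (edges, attrs), draw H with the same topology but
    resampled vertex attributes, and average the L2 embedding gaps."""
    total = 0.0
    for edges, attrs in subgraphs:
        new_attrs = [random.choice(attr_values) for _ in attrs]  # H in H(F)
        zF, zH = embed(edges, attrs), embed(edges, new_attrs)
        total += sum((a - b) ** 2 for a, b in zip(zF, zH)) ** 0.5
    return total / len(subgraphs)

# toy embed: edge count plus sum of attribute codes (stand-in for a GNN)
embed = lambda edges, attrs: (len(edges) + sum(attrs),)
print(invariance_penalty([([(0, 1)], [0, 0])], [0, 1], embed))
```

If `embed` ignores attributes entirely, the penalty is identically zero, which is exactly the fully E-invariant extreme discussed above.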
\paragraph{Practical considerations.}
\update{Efficient algorithms exist to obtain {\em induced} homomorphism densities over all possible {\em connected} $k$-vertex subgraphs~\citep{ahmed2016estimation, bressan2017counting, chen2018mining, chen2016general, rossi2019heterogeneous, wang2014efficiently}.
For unattributed graphs and $k \leq 5$, we use ESCAPE~\citep{pinar2017escape} to obtain {\em exact} densities. For attributed graphs or unattributed graphs with $k > 5$, exact counting becomes intractable, so we use R-GPM~\citep{teixeira2018graph} to obtain unbiased estimates of densities. Finally, \Cref{prop:bias} in \Cref{sec:BiasAppendix} shows that certain biased estimators can also be used if $\text{READOUT}_\Gamma$ is the sum of vertex embeddings.}
\section{Related Work}\label{sec:relatedwork}
This section presents an overview of the related work. Due to space constraints, a more in-depth discussion with further references is given in \Cref{sec:RelatedWorkAppendix}.
\vspace{-5pt}
\paragraph{OOD extrapolation in graph classification and size extrapolation in GNNs.}
Our work ascertain\update{s} a causal relationship between graphs and their target labels. We are unaware of existing work on this topic.
\citet{xu2020neural} is interested in a geometric (non-causal) definition of extrapolation for a class of graph algorithms.
\citet{hu2020open} introduces a large graph dataset presenting significant challenges of OOD extrapolation; however, the distribution shift there is over the two-dimensional structural frameworks of the molecules, and no causal model is provided.
The parallel work of~\citet{yehudai2020size} improves size extrapolation in GNNs using self-supervised and semi-supervised learning on both the training and test domain, which is orthogonal to our problem.
Previous works also examine empirically the ability of graph neural networks to extrapolate in various applications, such as physics~\citep{battaglia2016interaction,sanchez2018graph}, mathematical and abstract reasoning~\citep{santoro2018measuring,saxton2018analysing}, and graph algorithms~\citep{bello2016neural,nowak2017note,battaglia2018relational,joshi2020learning, velivckovic2019neural, hao2020towards}.
These works do not provide guarantees of test extrapolation performance, a causal model, or a proof that the tasks require extrapolation over different environments.
\vspace{-5pt}
\paragraph{Causal reasoning and invariances.}
Recent efforts have brought counterfactual inference to machine learning models, including {\em Independence of causal mechanism (ICM)} methods~\citep{bengio2019meta,besserve2018group,johansson2016learning,louizos2017causal,parascandolo2018learning,raj2020causal,scholkopf2019causality}, {\em Causal Discovery from Change (CDC)} methods~\citep{tian2001causal}, and {\em representation disentanglement} methods~\citep{bengio2019meta,goudet2017causal,locatello2019challenging}.
Invariant risk minimization (IRM)~\citep{arjovsky2019invariant} is a type of ICM~\citep{scholkopf2019causality}. \update{Risk
Extrapolation (REx)~\citep{krueger2021outofdistribution} optimizes by focusing on the training environments that have the largest impact on training.}
Broadly, the above efforts look for representations (or mechanism descriptions) that are invariant across multiple environments observed in the training data.
In our work, we are interested in techniques that can work with a single training environment \update{and when the test support is not a subset of the train support} --- a common case in graph data.
To the best of our knowledge, the only representation learning work considering single environment extrapolations is \citet{mouli2021neural}.
However, none of these methods is specifically designed for graphs, and it is unclear how they can be efficiently adapted for graph tasks.
Finally, we also note that domain adaptation techniques and recent work on domain-predictors~\citep{chuang2020estimating} aim to learn invariances that can be used for the predictions. However, these require access to test data during training, which is not our scenario.
\vspace{-5pt}
\paragraph{Graph classification using induced homomorphisms.}
A related set of works looks at induced homomorphism densities as graph features for a kernel~\citep{shervashidze2009efficient,yanardag2015deep,wale2008comparison}. These methods can perform poorly in some tasks~\citep{kriege2018property}. \update{Recent work has also shown an interest in induced subgraphs, which are used to improve predictions of GNNs~\citep{bouritsas2020improving} or treated as inputs for newly-proposed architectures~\citep{toenshoff2021graph}. Also note that the graph representations $\Gamma_\text{GNN}(\cdot)$ and $\Gamma_{\text{GNN}^+}(\cdot)$ in \Cref{eq:Gamma-kgnn,eq:Gamma-krpgnn} respectively,
have similarities to $k$-ary Relational Pooling~\citep{pmlr-v97-murphy19a} with the main difference being that the subgraph representations are weighted in our case.}
None of these methods focus on invariant representations or extrapolations.
\vspace{-5pt}
\paragraph{Expressiveness of graph representations.}
The expressiveness of a graph representation method is a measure of model family bias~\citep{morris2019weisfeiler,xu2018powerful,gartner2003graph,maron2019provably,pmlr-v97-murphy19a}.
That is, given enough training data, a neural network from a more expressive family can achieve smaller generalization error over the training distribution than a neural network from a less expressive family, assuming appropriate optimization.
However, this power is a measure of generalization capability over the training distribution, not OOD extrapolation.
Hence, the question of representation expressiveness is orthogonal to our work.
\subsection{Theoretical Description of our E-Invariant Graph Representations}
\label{sec:conditions}
In this section, we show that the graph representations seen in the previous section are approximately environment-invariant in our SCM model under mild assumptions.
\begin{restatable}[Approximate\update{ly} E-invariant Graph Representation]{theorem}{thmsizeExtrapolationBound}
\label{thm:sizeExtrapolationBound}
Let ${\mathcal{G}}^\text{tr}_{N^\text{tr}}$ and ${\mathcal{G}}^\text{te}_{N^\text{te}}$ be two samples of graphs of sizes ${N^\text{tr}}$ and ${N^\text{te}}$ from the training and test distributions, respectively, both defined over the same graphon variable $W$ and satisfying \Cref{def:trainG,def:testG}.
Assume the vertex attribute function $g_X(\cdot,\cdot)$ of \Cref{def:trainG,def:testG} is invariant to $E^\text{tr}$ and $E^\text{te}$ (the reason for this assumption will be clear later).
Let $||\cdot||_\infty$ denote the $L$-infinity norm.
For any integer $k\leq \min(N^\text{tr},N^\text{te})$, and any constant $0<\epsilon<1$,
\begin{equation}
\begin{aligned}[b]
{\rm P}(\Vert\Gamma_\text{1-hot}&({\mathcal{G}}^\text{tr}_{N^\text{tr}}) -\Gamma_\text{1-hot}({\mathcal{G}}^\text{te}_{N^\text{te}})\Vert_\infty>\epsilon)\leq \\ &2|\mathcal{F}_{\leq k}|(\exp(-\frac{\epsilon^2 N^\text{tr}}{8k^2})+\exp(-\frac{\epsilon^2 N^\text{te}}{8k^2})).
\end{aligned}
\end{equation}
\end{restatable}
\Cref{thm:sizeExtrapolationBound} shows how the graph representations given in \Cref{eq:Gamma-1hot} are approximately E-invariant.
Note that for unattributed graphs, we can define $g_X(\cdot,\cdot)=\emptyset$ as the null attribute, which is invariant to any environment by construction. For graphs with attributed vertices, $g_X(\cdot,\cdot)$ being invariant to $E^\text{tr}$ and $E^\text{te}$ means \update{that} for any two environments $e\in \text{supp}(E^\text{tr}), e^\dagger \in \text{supp}(E^\text{te})$, $g_X(e,\cdot)=g_X(e^\dagger,\cdot)$.
\Cref{thm:sizeExtrapolationBound} shows that for $k \ll \min({N^\text{tr}},{N^\text{te}})$, the representations $\Gamma_\text{1-hot}(\cdot)$ of two possibly different-sized graphs with the same $W$ are nearly identical, \update{indicating that}
$\Gamma_\text{1-hot}({\mathcal{G}}^\text{*}_{N^\text{*}})$ is an approximately E-invariant representation.
\Cref{thm:sizeExtrapolationBound} also exposes a trade-off, however.
If the observed graphs tend to be relatively small, then $k$ must also be kept small for the representations to remain approximately E-invariant, which compromises the expressiveness of $\Gamma_\text{1-hot}(\cdot)$.
That is, the ability of $\Gamma_\text{1-hot}({\mathcal{G}}^\text{*}_{N^\text{*}})$ to extract information about $W$ from ${\mathcal{G}}^\text{*}_{N^\text{*}}$ reduces as $k$ decreases. Finally, this guarantees that for appropriate $k$, passing the representation $\Gamma_\text{1-hot}({\mathcal{G}}^\text{*}_{N^\text{*}})$ to a downstream classifier provably approximates the classifier in \Cref{eq:predY_Einv} of \Cref{prop:Einv}.
Note that when the vertex attributes are not invariant to the environment variable, $\Gamma_\text{1-hot}(\cdot)$ is not E-invariant and we cannot extrapolate using $\Gamma_\text{1-hot}(\cdot)$.
Thankfully, for the GNN-based graph representations $\Gamma_\text{GNN}({\mathcal{G}}^\text{*}_{N^\text{*}})$ and $\Gamma_{\text{GNN}^+}({\mathcal{G}}^\text{*}_{N^\text{*}})$ in \Cref{eq:Gamma-kgnn,eq:Gamma-krpgnn}, respectively, the regularization penalty in \Cref{eq:regul} pushes the graph representation
to be more E-invariant, making it more likely to satisfy the conditions of E-invariance in \Cref{thm:sizeExtrapolationBound}.
\Cref{eq:regul} is inspired by the {\em asymmetry learning} procedure of \citet{mouli2021neural}, which induces symmetry priors in the neural network that can be broken (making the neural network asymmetric) only when imposing the symmetry significantly increases the training loss.
To understand the effect of our {\em asymmetry learning} in regularizing towards topology, consider the attributed SBM example in \Cref{sec:family}.
The environment operates by changing the distributions of attributes assigned within each block. If we are going to achieve E-invariance (and correctly predict cross-block edge probabilities in the test data (see \Cref{sec:attribute})), we need graph representations that \update{treat} attributes assigned to the same block as equivalent.
By regularizing the GNN-based graph representations towards focusing only on topology rather than vertex attributes, the regularization forces the GNN to treat all within-block vertex attributes as equivalent, and achieve an approximately E-invariant representation in this setting.
Since treating the across-block vertex attributes as equivalent hurts the training loss in this setting, these attributes will not be considered equivalent by the GNN.
Rapid innovation and technological disruption in the manufacturing of
low-cost, high-quality commercial unmanned aerial vehicles (UAVs), or drones, have
opened up many business opportunities in consumer applications such as
goods delivery, passenger transport, aerial surveillance and inspection, and rescue operations \cite{HayatYanmMuz}. With growing efforts from governments to facilitate a regulatory framework \cite{uSpace,Faa}, the UAV market is projected to reach \$63.6 billion by 2025 \cite{market}.
Ensuring ultra-reliable and low-latency links between
UAVs and their ground control stations plays a pivotal role in making these
businesses a reality, as many of the above-mentioned application scenarios require UAVs to be autonomous or semi-autonomous. Integrating UAVs into
ubiquitous existing or future cellular networks as user terminals and connecting them with base stations (BSs) offers a simple and cost-effective solution to the UAV connectivity problem \cite{ZengLyuZhang}.
Despite promising results demonstrating the feasibility of supporting UAVs in current cellular networks, several new challenges have been highlighted in serving aerial users in networks that were developed for terrestrial users \cite{Qcom,Eric,HayBetFak}.
In particular, aerial users experience interference and abrupt changes in signal strength (compared to terrestrial users) because BS antennas are typically tilted slightly downwards (intended for terrestrial users), so aerial users are often served by antenna side lobes.
However, the inherent advantage offered by UAVs in terms of 3D mobility can be exploited to efficiently design UAV paths that avoid
outage areas and exploit good channel conditions while not deviating too far from the trajectories planned for the original tasks. Motivated by this, several recent works have considered the problem of communication-aware trajectory design for cellular-connected UAVs \cite{zhang2018cellular, BasVinPol,zhang2019radio,BulGue,zeng2019path,ChalSadBet}.
Specifically, the problem of finding an
optimal path in the sense of a shortest path between a departing point and a given destination such that the UAV consistently gets a reliable connection from the cellular network has been considered in \cite{zhang2018cellular,BasVinPol,zhang2019radio,BulGue,zeng2019path}.
The works in \cite{zhang2018cellular,BulGue} have considered the problem of finding the shortest path under cellular coverage constraints, assuming that the UAV terminal experiences line-of-sight (LoS) channels from the BSs at all times,
independent of the UAV and BS locations. Convex optimization and graph-based approaches are used to optimize the trajectory. However, the chosen radio propagation model
is not applicable in urban environments, where it has been shown
that air-to-ground channels switch between LoS and non-line-of-sight (NLoS)
conditions depending on the UAV and BS locations; NLoS conditions are caused by signal blockage, reflection, and diffraction due to city buildings
\cite{QiMcGTamNix,AlAitKanJam}.
To overcome the drawback arising from using simple LoS channel models in urban environments, the works in \cite{BasVinPol,zhang2019radio}
have utilized a radio map of the environment, which carries fine-grained information about the channel gains from all BSs, in the trajectory optimization.
While \cite{BasVinPol} considers only the altitude optimization of the UAV, \cite{zhang2019radio} optimizes the trajectory in 2D at a fixed altitude.
Both works rely on discretizing the radio map of the overall flight region into fine grids and then use graph-based algorithms to find the shortest path from the initial location to the destination.
The complexity-performance trade-off of the shortest-path algorithm depends on the number of nodes in the constructed graph, which in turn depends on the grid resolution used in discretizing the radio map. Note that radio maps are not available on the fly but need to be estimated offline by collecting a large number of radio measurements from users in the environment \cite{ChenYanGes}.
Another approach to obtaining realistic trajectories in
complex urban environments is to use model-free learning approaches \cite{ChalSadBet,zeng2019path}. However, the drawback of such techniques is that they require a relatively large number of learning episodes to obtain the desired results.
In this work, we consider the problem of finding the shortest path between a starting location and a given destination such that a UAV flying at a constant altitude consistently receives a reliable quality of service (QoS) from the cellular network. The key contributions of this work are:
\begin{itemize}
\item Instead of relying on a radio map, which contains rich channel-gain information but is difficult to model analytically and is generally unavailable for arbitrary areas, we use the 3D city map along with a segmented pathloss model to construct coverage maps that serve as a high-quality approximation of the radio map while admitting an analytical structure.
\item Making use of the convexity of sub-regions within the coverage map, we prove that the optimal trajectory has a piecewise linear structure.
\item By leveraging this optimal structure, we
propose a low-complexity graph-based shortest-path algorithm that does not
require discretizing the entire coverage map.
\end{itemize}
\section{System Model}
We consider a cellular-connected UAV that flies over
an urban area consisting of a number of city buildings for a mission duration $T$. The position of the UAV at time $t \in [0, T]$
is denoted by ${\bf v}(t)=[x(t),y(t),h]^{{\Transpose}}\in\mathbb{R}^{3}$,
where $h$ denotes the altitude of the UAV. For simplicity, the altitude is fixed to a value determined by the tallest building in the city so as to avoid collisions.
We assume that the UAV is equipped with a GPS receiver, hence
${\bf v}(t)$ is known.
The UAV flies from a pre-determined initial position ${\bf{v}}_{\text{I}}$
at time $t=0$ and has to reach a terminal location ${\bf{v}}_{\text{F}}$ by the end of the mission. The UAV flies at
a constant speed; hence, its trajectory ${\bf v}(t), t \in [0, T]$, is determined solely by the path it takes.
During the mission, the UAV needs to remain connected to one of the
$K$ static outdoor base stations (BSs), which are
scattered uniformly at random over the city.
The $k$-th BS, $k\in[1,K]$, is located at
${\bf u}_{k}=[x_{k},y_{k}, h_g]^{{\Transpose}}\in\mathbb{R}^{3}$,
where $h_g$ stands for the height of the BS and is assumed to be the same for all BSs\footnote{This is by no means a restriction; the results presented in this paper can easily be extended to the case of different BS heights.}. Moreover, we denote by $\hat{\bf{u}}_k = [x_k,y_k,h]^{{\Transpose}}, \,k\in[1,K]$, the projections of the BS locations onto the 2D plane at the same altitude as the UAV.
\subsection{Communication Model}
We consider a cellular down-link scenario where the time-varying signal-to-noise ratio (SNR) at the UAV from the $k$-th BS is given by
\begin{equation}\label{eq:snrModel}
\rho_{k}({\bf{v}}(t))=\frac{P \gamma_{k,s}(t)}{\sigma^2},\, 0\le t\le T,
\end{equation}
where $P$ is the transmission power of the BS, $\gamma_{k,s}(t)$ is the channel gain between the $k$-th BS and the UAV flying at location ${\bf{v}}(t)$, $\sigma^2$ represents the noise power, and finally $s\in\left\{\text{LoS},\text{NLoS}\right\}$
emphasizes the strong dependence of the propagation conditions on the LoS or NLoS condition of the link~\cite{ChenYanGes}.
The channel gain between the UAV and the $k$-th BS is modeled as\cite{ChenYanGes,ChenGesb}
\begin{equation}
\gamma_{k,s}(t)=\frac{\beta_{s}}{d_k(t)^{\alpha_{s}}}, \label{eq:CH_Model}
\end{equation}
where
$$
d_k(t) = {\| {\bf{v}}(t)-{\bf u}_{k}\|}_2
$$
represents the distance between the $k$-th BS and the UAV.
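For concreteness, the SNR model of \eqref{eq:snrModel}--\eqref{eq:CH_Model} can be evaluated as in the sketch below. The numerical values of $P$, $\sigma^2$, $\beta_s$, and $\alpha_s$ are illustrative assumptions, not parameters used in this paper:

```python
import math

# Illustrative parameters (assumptions): Tx power [W], noise power [W],
# and (beta_s, alpha_s) for each propagation segment.
P, SIGMA2 = 0.1, 1e-10
PARAMS = {"LoS": (1e-4, 2.2), "NLoS": (1e-5, 3.3)}

def snr(uav, bs, s):
    """rho_k(v) = P * gamma_{k,s} / sigma^2, with gamma = beta_s / d^alpha_s."""
    beta, alpha = PARAMS[s]
    d = math.dist(uav, bs)                  # d_k = ||v - u_k||_2
    return P * beta / (SIGMA2 * d ** alpha)

# UAV at 50 m altitude, BS at 10 m height and 100 m horizontal offset:
print(10 * math.log10(snr((0, 0, 50), (100, 0, 10), "LoS")))  # SNR in dB
```

As expected from the model, the NLoS segment (smaller $\beta_s$, larger $\alpha_s$) yields a lower SNR than LoS at the same distance.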
Regarding the LoS/NLoS classification of the UAV-BS links, we leverage
the knowledge of a 3D city map. Based on such a map,
we can predict LoS (un)availability on any given UAV-BS link through a simple geometric argument: for a given UAV
position, the BS is considered in LoS to the UAV if
the straight line passing through the UAV's and the BS's positions lies higher than any building in between.
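This geometric LoS test can be sketched as follows, where `height_map(x, y)` is a hypothetical helper returning the building height at ground coordinate $(x, y)$ from the 3D city map:

```python
import math

def is_los(uav, bs, height_map, step=1.0):
    """The UAV-BS link is LoS iff the straight line between the two
    positions stays above every building it passes over."""
    (xu, yu, hu), (xb, yb, hb) = uav, bs
    length = math.hypot(xb - xu, yb - yu)
    n = max(int(length / step), 1)          # sample points along the link
    for i in range(1, n):
        t = i / n
        x, y = xu + t * (xb - xu), yu + t * (yb - yu)
        if hu + t * (hb - hu) <= height_map(x, y):
            return False                    # a building blocks the line
    return True
```

The sampling resolution `step` trades accuracy for speed; an exact test would intersect the line with the building footprints directly.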
\subsection{Problem Formulation}
We consider the problem of finding the
shortest trajectory for the UAV between a predefined starting point ${\bf{v}}_{\text{I}}$ and a terminal point ${\bf{v}}_{\text{F}}$, while satisfying a
minimum SNR requirement $\Bar{\rho}$ throughout the mission:
\begin{equation}
\min_{0\le t \le T}\,\max_{k\in[1,K]}\rho_k({\bf{v}}(t)) \ge \Bar{\rho}. \label{eq:SNR_THR_constraint}
\end{equation}
Since the UAV moves with a constant velocity, the trajectory optimization can be formulated as follows
\begin{subequations}\label{eq:TRJ_problem_Continuous}
\noindent
\begin{align}
\min_{T,\{{\bf{v}}(t),0\le t\le T\}} & \quad T\\
\mbox{s.t.\ } & \quad \eqref{eq:SNR_THR_constraint},\\
& \quad {\bf{v}}(0) = {\bf{v}}_{\text{I}}, \, {\bf{v}}(T) = {\bf{v}}_{\text{F}}.
\end{align}
\end{subequations}
\begin{figure}[t]
\begin{centering}
\includegraphics[width=0.7\columnwidth]{IMG/CVXProb2.png}
\par\end{centering}
\caption{Coverage area of a given BS and the sectors. \label{fig:CVX_TRJ_problem}}
\end{figure}
This problem is not convex, since the SNR in constraint
\eqref{eq:SNR_THR_constraint} is a non-differentiable and non-smooth function of the UAV position due to the binary classification variable $s\in\{\text{LoS},\, \text{NLoS}\}$; this function is therefore neither convex nor concave. Moreover, \eqref{eq:TRJ_problem_Continuous} is a functional optimization problem and is hence challenging to solve
optimally in general.
In the following, we show that the optimal trajectory has a structure that can be exploited to make problem \eqref{eq:TRJ_problem_Continuous} more tractable. To this end, the following definitions and results are helpful.
\begin{definition}{\bf{Coverage area}:} \label{defi:def_1}
The coverage area of the BS is defined as a set of points with the same altitude as the UAV in which the SNR of the UAV-BS link will remain greater than or equal to $\bar{\rho}$. The coverage area of the $k$-th BS, $k\in[1,K]$ is defined as
\begin{equation}
A_k = \{ {\bf{v}}=[x,y,h]^{{\Transpose}}\in \mathbb{R}^3\,|\, \rho_k({\bf{v}})\ge \bar{\rho} \}.
\end{equation}
Using the SNR expression in \eqref{eq:snrModel},
the set of points $[x,y]$ that belong to the set $A_k$ can be written as
\begin{equation}\label{eq:distThold}
(x-x_k)^2+(y-y_k)^2 \leq d_s,
\end{equation}
where $d_s \triangleq
\left({\frac{P\beta_s}{\sigma^2 \Bar{\rho}}}\right)^{\frac{2}{\alpha_s}}-(h_g-h)^2$.
The squared radius $d_s$ therefore depends on whether the point ${\bf{v}}$ is in LoS or NLoS with respect to the BS, which in turn depends on the building distribution around that BS. Based on \eqref{eq:distThold} and the 3D map, without loss
of generality, the coverage areas $A_k$ can be
divided into $M_k$ sectors
\begin{equation}\label{eq:sub-region_def}
A_k=\{a_{k,1} \cup \cdots \cup a_{k,M_k}\},
\end{equation}
where each $a_{k,i}$ is a convex circular sector spanning the angles $\theta_{k,i}$ to $\theta_{k,i+1}$ with radius $r_{k,i}$.
The radius $r_{k,i}$ depends on the building distribution and \eqref{eq:distThold}.
For illustration, such a coverage area of a BS is shown in Fig.~\ref{fig:CVX_TRJ_problem} and
in Fig.~\ref{fig:CoverageArea}.
For instance, for the coverage area depicted in Fig.~\ref{fig:CVX_TRJ_problem}, we can write $ A_k=\{a_{k,1} \cup a_{k,2} \cup a_{k,3} \cup a_{k,4}\}$.
\end{definition}
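From \eqref{eq:distThold}, the horizontal coverage radius under propagation condition $s$ is $\sqrt{d_s}$, which the following sketch computes (the parameter values in the usage line are illustrative assumptions, not values used in this paper):

```python
def coverage_radius(P, sigma2, rho_bar, beta, alpha, h_g, h):
    """Points with (x - x_k)^2 + (y - y_k)^2 <= d_s are covered, so the
    horizontal coverage radius is sqrt(d_s), or 0 if the link budget is
    insufficient at the UAV altitude (d_s < 0)."""
    d_s = (P * beta / (sigma2 * rho_bar)) ** (2.0 / alpha) - (h_g - h) ** 2
    return max(d_s, 0.0) ** 0.5

# illustrative LoS parameters: P = 0.1 W, sigma^2 = 1e-10 W, rho_bar = 10
print(coverage_radius(0.1, 1e-10, 10, 1e-4, 2.2, 10, 50))
```

Evaluating this radius per angular direction, with LoS or NLoS parameters chosen from the 3D map, yields the sector radii $r_{k,i}$ of \eqref{eq:sub-region_def}.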
\begin{definition}{\bf{Coverage border}:} \label{defi:def_2}
The coverage border is the perimeter of a coverage area of a given base station. The coverage border of the $k$-th BS, $k\in[1,K]$ is denoted $B_k$.
\end{definition}
\begin{definition}{\bf{Common areas and common borders}:} \label{defi:def_3}
The common area between $k$-th and $j$-th BSs, $k,j\in[1,K], k\neq j$ represents the overlap regions of their coverage areas, i.e.,
\begin{equation}
C_{j,k} = C_{k,j} = \left\{ A_k \cap A_j\right\}.
\end{equation}
The border of the common area $C_{j,k}$ is defined as the common border, which we denote by $D_{j,k}$.
\end{definition}
\begin{figure}[t]
\begin{centering}
\includegraphics[width=1\columnwidth]{IMG/CoverageArea.png}
\par\end{centering}
\caption{Top view of the city, the base stations positions, coverage area of each base station, and the common area. The UAV flies at 50m and the base stations are on the ground level. \label{fig:CoverageArea}}
\end{figure}
In Fig. \ref{fig:CoverageArea}, an example of the coverage areas, coverage borders, common areas, and common borders of two base stations is illustrated. The coverage area of each BS is depicted with a highlighted surface, and the coverage borders are shown with solid black lines.
\begin{proposition}
Problem \eqref{eq:TRJ_problem_Continuous} is equivalent to the following problem:
\begin{subequations}\label{eq:TRJ_problem_original}
\noindent
\begin{align}
\min_{N,\mathcal{V}} & \quad \sum_{n\in [1,N-1]} \|{\bf{v}}_n - {\bf{v}}_{n+1}\|_2\\
\mbox{s.t.\ } & \quad \rho({\bf{v}}_n,{\bf{v}}_{n+1}) \ge \Bar{\rho}\,,n\in[1,N-1],\label{eq:TRJ_problem_original_C1}\\
& \quad {\bf{v}}_1 = {\bf{v}}_{\text{I}}, \, {\bf{v}}_N = {\bf{v}}_{\text{F}},\label{eq:TRJ_problem_original_C3}
\end{align}
\end{subequations}
where
\begin{equation}
\rho({\bf{x}},{\bf{y}}) = \min_{0\le \lambda \le 1}\,\max_{k\in[1,K]}\rho_k\left(\lambda{\bf{x}} + (1-\lambda){\bf{y}}\right),
\end{equation}
and $\mathcal{V}=({\bf{v}}_n)_{n=1}^{N}$ is the sequence of UAV trajectory points in $\mathbb{R}^3$ such that any two consecutive points are connected with a straight line.
\end{proposition}
\begin{proof}
We now provide a sketch of the proof. Let ${\bf{v}}^*(t), 0\le t \le T$ be the optimal trajectory which traverses the $k$-th BS's coverage area $A_k$. Without loss of generality, let us assume that within coverage area $A_k$ the trajectory traverses the $n$-th sector. We denote the intersections of ${\bf{v}}^*(t)$ with the borders of sector $a_{k,n}$ as points ${\bf{v}}_{k,n}, {\bf{v}}_{k,n+1}$. For instance, in Fig. \ref{fig:CVX_TRJ_problem}, the optimal trajectory intersects the border of the sector $a_{k,1}$ in points ${\bf{v}}_{k,1}, {\bf{v}}_{k,2}$.
Since ${\bf{v}}_{k,n}$ and ${\bf{v}}_{k,n+1}$ are both inside $a_{k,n}$ and each sector has a convex shape, the straight line connecting them also lies inside $a_{k,n}$; mathematically, we can write
\begin{equation}
\lambda{\bf{v}}_{k,n} + (1-\lambda){\bf{v}}_{k,n+1} \in a_{k,n}, \,\forall \lambda,\, 0\le\lambda \le 1.\label{eq:cons_points_in_CVX_sub_reg}
\end{equation}
This implies that the constraint \eqref{eq:SNR_THR_constraint} is satisfied for any point on the straight line between ${\bf{v}}_{k,n}$ and ${\bf{v}}_{k,n+1}$. Since our objective is to minimize the travel time (or, equivalently, the length of the trajectory), the optimal trajectory between ${\bf{v}}_{k,n}$ and ${\bf{v}}_{k,n+1}$ is the straight line. Note that \eqref{eq:cons_points_in_CVX_sub_reg} can equivalently be written as
\begin{equation}
\rho({\bf{v}}_{k,n},{\bf{v}}_{k,n+1}) \ge \Bar{\rho}.
\end{equation}
Consequently, without loss of optimality, the optimal trajectory can be represented as a sequence of points such that any two consecutive points are connected with a straight line
\begin{equation}
\mathcal{V}=({\bf{v}}_n)_{n=1}^{N}\,|\,\rho({\bf{v}}_n,{\bf{v}}_{n+1}) \ge \Bar{\rho},n\in [1,N-1].
\end{equation}
Hence, problem \eqref{eq:TRJ_problem_Continuous} is equivalent to \eqref{eq:TRJ_problem_original}.
\end{proof}
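The segment-wise constraint $\rho({\bf{x}},{\bf{y}})\ge\bar{\rho}$ in the proposition can be approximated numerically by sampling $\lambda$ on a uniform grid. The Python sketch below uses a hypothetical distance-based SNR model with placeholder gain and exponent values (not the paper's parameters):

```python
def segment_min_max_snr(x, y, snr_fns, num_samples=101):
    """Approximate rho(x, y) = min over lambda in [0,1] of max over BSs of
    rho_k(lambda*x + (1-lambda)*y) on a uniform lambda grid (a discretized sketch)."""
    worst = float("inf")
    for i in range(num_samples):
        lam = i / (num_samples - 1)
        v = tuple(lam * a + (1.0 - lam) * b for a, b in zip(x, y))
        worst = min(worst, max(fn(v) for fn in snr_fns))
    return worst

def make_snr(bx, by, h_g=20.0, gain=1e7, alpha=2.2):
    """Hypothetical SNR of a BS at ground position (bx, by); gain/alpha are placeholders."""
    def snr(v):
        d2 = (v[0] - bx) ** 2 + (v[1] - by) ** 2 + (v[2] - h_g) ** 2
        return gain / d2 ** (alpha / 2.0)
    return snr

fns = [make_snr(0.0, 0.0), make_snr(1000.0, 0.0)]
rho = segment_min_max_snr((0.0, 0.0, 80.0), (1000.0, 0.0, 80.0), fns)
```

As expected, the worst-case point of the segment lies near its midpoint, farthest from both base stations.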
To solve \eqref{eq:TRJ_problem_original}, we only need to optimize over a limited number of variables; however, this problem is still difficult to solve since constraint \eqref{eq:TRJ_problem_original_C1} is neither convex nor concave. In what follows, we develop a graph theory-based solution to this problem. First, we check the feasibility of problem \eqref{eq:TRJ_problem_original} by proposing a graph theory-based approach
in a similar manner to the one proposed in \cite{zhang2018cellular}. We then derive a method to find a sub-optimal and efficient solution to problem \eqref{eq:TRJ_problem_original}.
\section{Feasibility check \label{sec:Feasibility}}
In this section, we investigate the feasibility of problem \eqref{eq:TRJ_problem_original} by leveraging a graph theory approach. A trajectory sequence $\mathcal{V}=({\bf{v}}_n)_{n=1}^{N}$ is a feasible solution to problem \eqref{eq:TRJ_problem_original} if constraint \eqref{eq:TRJ_problem_original_C1} is satisfied. In general, obtaining a feasible solution to problem \eqref{eq:TRJ_problem_original} is not trivial, since the coverage areas of the BSs have non-convex shapes and an exhaustive search cannot inherently be avoided. For further simplification, we uniformly discretize the coverage border of each BS, defined in Definition \ref{defi:def_2}, into $Q$ samples. The discretized coverage border of the $k$-th BS, $k\in[1,K]$, is denoted by $\hat{B}_k$, with $|\hat{B}_k| = Q$, where $|.|$ is the cardinality function. We then define $\hat{D}_{k,j}$ as the set of discrete points on the common border between the $k$-th and $j$-th BSs, $k,j\in[1,K], k\neq j$, which is given by
\begin{equation}
\hat{D}_{k,j} = {D}_{k,j} \cap \hat{B}_k \cap \hat{B}_j,
\end{equation}
where ${D}_{k,j}$ was defined in Definition \ref{defi:def_3}. We now propose a method to check the feasibility of the original problem by leveraging graph theory approaches. Let us denote an undirected graph by $G=(\mathcal{N},
\mathcal{E})$. We define $\mathcal{N}$ as a set of graph's nodes which is given by $\mathcal{N} = \{{\bf{v}}_{\text{I}}\cup \,{\mathcal{U}}\cup \mathcal{D} \cup{\bf{v}}_{\text{F}} \}$,
where ${\mathcal{U}} = \{\hat{\bf{u}}_k,\,k\in[1,K]\}$ is a set comprising the projections of the BSs locations, and $\mathcal{D}$ is defined as
\begin{equation}
\mathcal{D} = \bigcup_{k,j\in[1,K], k\neq j}\hat{D}_{k,j}.
\end{equation}The set of the graph's edges is denoted by $\mathcal{E}$ which is given by
\begin{equation}\label{eq:Feas_graph_edges}
\begin{aligned}
\mathcal{E}&= \{(\hat{\bf{u}}_k,{\bf{v}}_{\text{I}}) |\,{\bf{v}}_{\text{I}}\in A_k, \,k\in[1,K] \}\\
&\cup\{(\hat{\bf{u}}_k,{\bf{x}}_{k,j}) |\,\forall{\bf{x}}_{k,j}\in \hat{D}_{k,j}, \,k,j\in[1,K],\,k\neq j \}\\
&\cup\{(\hat{\bf{u}}_k,{\bf{v}}_{\text{F}}) |\,{\bf{v}}_{\text{F}}\in A_k, \,k\in[1,K] \}.
\end{aligned}
\end{equation}
We also assign a weight value to each edge of the graph corresponding to its length.
Note that the edge $({\bf{v}}_{\text{I}},\hat{\bf{u}}_k)$ exists if the starting point ${\bf{v}}_{\text{I}}$ lies in the coverage area of the $k$-th BS. Moreover, $(\hat{\bf{u}}_k,{\bf{x}}_{k,j})$ represents an edge between the $k$-th BS and each point ${\bf{x}}_{k,j}$ on its discretized common border with the neighboring BS $j$.
\begin{proposition}\label{prop:Coverage_under_feas_graph}
All the edges defined in \eqref{eq:Feas_graph_edges}
satisfy the constraint \eqref{eq:TRJ_problem_original_C1}.
\end{proposition}
\begin{proof}
Without loss of generality, consider the $k$-th BS with coverage area $A_k$.
By definition, we can see that $\hat{\bf{u}}_k,{\bf{x}}_{k,j}, k \neq j$ lie
inside $A_k$. Since the coverage area $A_k$ can be represented by
a union of convex non-overlapping sectors as defined in \eqref{eq:sub-region_def},
by construction, there always exists a straight-line path connecting
$\hat{\bf{u}}_k$ and ${\bf{x}}_{k,j}$ which lies inside the coverage region
$A_k$. Therefore, all edges $(\hat{\bf{u}}_k,{\bf{x}}_{k,j}), k \neq j$ satisfy the coverage constraint. Since we assume that the initial and terminal points of the
UAV are always in the coverage area of at least one BS, it can easily be seen that
edges of the form $({\bf{v}}_{\text{I}},\hat{\bf{u}}_k)$ and $(\hat{\bf{u}}_k,{\bf{v}}_{\text{F}})$ also satisfy the constraint in \eqref{eq:TRJ_problem_original_C1}.
\end{proof}
Since all edges of the graph $G$ satisfy the SNR feasibility constraint, the trajectory optimization problem \eqref{eq:TRJ_problem_original} is feasible if we can find a path from the starting node ${\bf{v}}_{\text{I}}$ to the terminal node ${\bf{v}}_{\text{F}}$ in the graph $G$. To this end, we employ the Dijkstra algorithm \cite{cormen2009introduction} with the worst-case complexity of $\mathcal{O}(|\mathcal{E}|+|\mathcal{N}|\log|\mathcal{N}|)$, which obtains a shortest path between ${\bf{v}}_{\text{I}}$ and ${\bf{v}}_{\text{F}}$. We denote such a solution as the base trajectory $\mathcal{V}_b = ({\bf{v}}_n^b)_{n=1}^N$. Note that if the algorithm cannot find a path between ${\bf{v}}_{\text{I}}$ and ${\bf{v}}_{\text{F}}$, problem \eqref{eq:TRJ_problem_original} is infeasible.
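The feasibility check thus reduces to a shortest-path query. The following minimal sketch illustrates it with Python's standard library on a toy graph; node labels and weights are hypothetical, whereas in the actual construction the weights are the edge lengths defined above:

```python
import heapq

def dijkstra(edges, src, dst):
    """Shortest path on an undirected weighted graph given as {(u, v): w}.
    Returns (path, length), or (None, inf) if dst is unreachable (infeasible)."""
    adj = {}
    for (u, v), w in edges.items():
        adj.setdefault(u, []).append((v, w))
        adj.setdefault(v, []).append((u, w))
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    if dst not in dist:
        return None, float("inf")
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]

# Toy instance: vI -> BS1 -> common-border point -> BS2 -> vF (weights = lengths)
edges = {("vI", "u1"): 4.0, ("u1", "x12"): 3.0, ("x12", "u2"): 3.0, ("u2", "vF"): 4.0}
path, length = dijkstra(edges, "vI", "vF")
```

On this toy graph the returned base trajectory visits BS1, a common-border point, and BS2 in order, with total length 14.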
The base trajectory starts from the initial point ${\bf{v}}_{\text{I}}$ and goes on top of the BS closest to ${\bf{v}}_{\text{I}}$. The UAV then tries to reach the terminal point by visiting the minimum number of BSs. To move from one BS to another, the UAV crosses over a point inside the discretized common border of the two BSs.
An illustration of the base trajectory between the starting point and the terminal point is shown in Fig. \ref{fig:CoverageArea}. For ease of exposition, we consider only two BSs. It can be seen that the base trajectory starts from ${\bf{v}}_{\text{I}}$ and heads towards the closest BS, which is BS1 here, and then it goes to the neighboring base station by passing over the common border between the BSs. Finally, the trajectory terminates by going from BS2 in a straight line towards ${\bf{v}}_{\text{F}}$.
We denote the base stations which are sequentially visited by the base trajectory as:
\begin{equation}
\mathcal{U}^b = (\hat{\bf{u}}_k)\, | \, \hat{\bf{u}}_k\in \mathcal{V}_b.
\end{equation}
We also define an index set $I^b = (I_{b,1},\cdots,I_{b,K^{'}})$, where $I_{b,j}$ is the BS's index of the $j$-th element in $\mathcal{U}^b$, and $K^{'}=|\mathcal{U}^b|$. As an example, let us assume that the base trajectory visits the sequence of BSs $\mathcal{U}^b=(\hat{\bf{u}}_1,\hat{\bf{u}}_3,\hat{\bf{u}}_4,\hat{\bf{u}}_7)$; then the index set $I^b$ is given by
\begin{equation}
I^b =(1,3,4,7).
\end{equation}
As shown in Fig. \ref{fig:CoverageArea}, the base trajectory is not an efficient solution since it is forced to fly over the BSs to reach the terminal point. In the next section, we propose a method to improve the base trajectory.
\section{Trajectory Optimization}\label{sec:trajectory_Optimization}
In this section, we aim to find a sub-optimal and high-quality approximate solution to \eqref{eq:TRJ_problem_original} by improving the base trajectory. As mentioned earlier, the base trajectory is not an efficient solution since it requires visiting the BSs to get to the terminal location. For example, in Fig. \ref{fig:CoverageArea}, the optimal trajectory is a straight line from ${\bf{v}}_{\text{I}}$ to ${\bf{v}}_{\text{F}}$. To tackle this problem, in this section we improve the base trajectory obtained in Section \ref{sec:Feasibility} by employing graph-theoretic methods.
We then construct an undirected graph $G=(\mathcal{N},
\mathcal{E})$. For ease of exposition, we use the same notation as in Section \ref{sec:Feasibility}. The nodes of the graph are defined as follows
\begin{equation}
\mathcal{N} = \{{\bf{v}}_{\text{I}}\cup {\mathcal{U}}^b \cup {\mathcal{D}}^b \cup{\bf{v}}_{\text{F}} \},
\end{equation}
where ${\mathcal{D}}^b\subset\mathcal{D}$ is defined as
\begin{equation}
{\mathcal{D}}^b = \left\{ \bigcup _{j\in[1,K^{'}-1]} \hat{D}_{I_{b,j},I_{b,j+1}} \right\}.
\end{equation}
The edges of the graph are given by
\begin{equation}\label{eq:Graph_Opt_labels} \small
\begin{aligned}
\mathcal{E}&= \{({\bf{v}}_{\text{I}},\hat{\bf{u}}_{I_{b,1}})\}\\
&\cup\{({\bf{v}}_{\text{I}},{\bf{x}}_{1,2}) |L({\bf{v}}_{\text{I}},{\bf{x}}_{1,2})\in A_{I_{b,1}},\forall{\bf{x}}_{1,2}\in \hat{D}_{I_{b,1},I_{b,2}}\}\\
&\cup\{({\bf{x}}_{k-1,k},{\bf{x}}_{k,k+1}) |L({\bf{x}}_{k-1,k},{\bf{x}}_{k,k+1})\in A_{I_{b,k}},\\ &\kern3pt\forall{\bf{x}}_{k-1,k}\in\hat{D}_{I_{b,k-1},I_{b,k}},\forall{\bf{x}}_{k,k+1}\in\hat{D}_{I_{b,k},I_{b,k+1}},k\in[2,K^{'}-1]\}\\
&\cup\{({\hat{\bf{u}}}_k,{\bf{x}}_{k,j}) |\,\forall{\bf{x}}_{k,j} \in \hat{D}_{I_{b,k},I_{b,j}} ,\,k,j\in[1,K^{'}], k\neq j\}\\
&\cup\{({\bf{v}}_{\text{F}},{\bf{x}}_{K^{'}-1,K^{'}}) |L({\bf{v}}_{\text{F}},{\bf{x}}_{K^{'}-1,K^{'}})\in A_{I_{b,K^{'}}},\\ &\kern18pt\forall{\bf{x}}_{K^{'}-1,K^{'}}\in \hat{D}_{I_{b,K^{'}-1},I_{b,K^{'}}}\}\\
&\cup\{({\bf{v}}_{\text{F}},\hat{\bf{u}}_{I_{b,K^{'}}})\},
\end{aligned}
\end{equation}
where $L({\bf{x}},{\bf{y}})$ is a line segment between two points ${\bf{x}},{\bf{y}}$ which is defined as follows:
\begin{equation}
L({\bf{x}},{\bf{y}}) = \left\{\lambda{\bf{x}} + (1-\lambda){\bf{y}}, \forall{\lambda},\, 0\le \lambda\le 1 \right\}.
\end{equation}
We also assign a weight value to each edge of the graph corresponding to its length. All the edges $({\bf{v}}_{\text{I}},\hat{\bf{u}}_{I_{b,1}}),\,({\hat{\bf{u}}}_k,{\bf{x}}_{k,j}),\,({\bf{v}}_{\text{F}},\hat{\bf{u}}_{I_{b,K^{'}}})$ are defined in a similar manner to \eqref{eq:Feas_graph_edges}, and similar to Proposition \ref{prop:Coverage_under_feas_graph}, it can be shown that the constraint \eqref{eq:TRJ_problem_original_C1} is always satisfied for any of these edges. $({\bf{v}}_{\text{I}},{\bf{x}}_{1,2})$ is the edge between the initial location ${\bf{v}}_{\text{I}}$ and any point inside the discretized common border of the $I_{b,1}$-th and the $I_{b,2}$-th BSs, and it exists if this edge lies inside $A_{I_{b,1}}$. The edge $({\bf{v}}_{\text{F}},{\bf{x}}_{K^{'}-1,K^{'}})$ is defined similarly. The edge $({\bf{x}}_{k-1,k},{\bf{x}}_{k,k+1})$ connects the points in the discretized common borders of the $I_{b,k}$-th BS with its neighbor BSs $I_{b,k-1}$ and $I_{b,k+1}$; it belongs to $\mathcal{E}$ if the line $L({\bf{x}}_{k-1,k},{\bf{x}}_{k,k+1})$ lies inside $A_{I_{b,k}}$, which can be efficiently checked by the following result.
\begin{lemma} \label{lemma:Graph_TRJ_lemma}
Let ${\bf{x}},{\bf{y}}\in A_k$, to determine if the line $L({\bf{x}},{\bf{y}})$ is inside coverage area $A_k$, only a limited number of points along $L({\bf{x}},{\bf{y}})$ need to be evaluated.
\end{lemma}
\begin{proof}
Let us assume that the line $L({\bf{x}},{\bf{y}})$ sequentially traverses some sectors in $A_k$, denoted by $(a_{k,1},\ldots, a_{k,N^{'}})$
with starting location ${\bf{x}} \in a_{k,1}$ and ending location ${\bf{y}} \in a_{k,N^{'}}$.
The set of intersections of the line with the boundaries of the sectors
is denoted by a sequence of the points $({\bf{x}}_j)_{j=1}^{J}$.
Since all the sectors are convex, it can be shown that if
${\bf{x}}_j,{\bf{x}}_{j+1}$, $j\in[1,J-1]$, belong to the same sector, then
the line $L({\bf{x}}_j,{\bf{x}}_{j+1})$ lies inside $A_k$.
Therefore, to check if the line $L({\bf{x}},{\bf{y}})$ is inside
the coverage area, it is enough to evaluate a limited number of points.
\end{proof}
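As a sketch of how the lemma can be exploited in practice, the following Python fragment tests a finite number of sampled points of $L({\bf{x}},{\bf{y}})$ against hypothetical circular sectors. The sector geometry and the sample count are illustrative assumptions, not the paper's exact procedure (which evaluates only the boundary-crossing points):

```python
import math

def in_sector(p, center, th1, th2, r):
    """Membership test for a circular sector of radius r between angles th1 <= th2 (rad)."""
    dx, dy = p[0] - center[0], p[1] - center[1]
    if math.hypot(dx, dy) > r:
        return False
    ang = math.atan2(dy, dx) % (2.0 * math.pi)
    return th1 <= ang <= th2

def line_in_coverage(x, y, sectors, num_checks=64):
    """Sampled sketch of Lemma 1's idea: accept L(x, y) iff every tested point
    lies in some sector of the coverage area."""
    for i in range(num_checks + 1):
        lam = i / num_checks
        p = (lam * x[0] + (1.0 - lam) * y[0], lam * x[1] + (1.0 - lam) * y[1])
        if not any(in_sector(p, *s) for s in sectors):
            return False
    return True

# Two adjacent sectors of one BS jointly covering the right half-plane near (0, 0)
sectors = [((0.0, 0.0), 0.0, math.pi / 2.0, 100.0),
           ((0.0, 0.0), 1.5 * math.pi, 2.0 * math.pi, 100.0)]
```

A vertical segment in the right half-plane is accepted, while a horizontal segment crossing into the uncovered left half-plane is rejected.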
Having constructed graph $G$ using Lemma \ref{lemma:Graph_TRJ_lemma}, since every edge of the graph is covered by at least one base station, constraint \eqref{eq:TRJ_problem_original_C1} is always satisfied if the UAV moves along any edge of the graph. Hence, problem \eqref{eq:TRJ_problem_original} is cast as finding a shortest path between ${\bf{v}}_{\text{I}}$ and ${\bf{v}}_{\text{F}}$ in graph $G$. Similar to Section \ref{sec:Feasibility}, we use the Dijkstra algorithm to find the shortest trajectory.
\section{Numerical Results\label{sec:Numerical-Results}}
We consider a dense urban Manhattan-like area of size $2\times2\, \text{km}^2$, consisting of a regular street grid and buildings.
The building heights are Rayleigh distributed within
the range of $5$ to $70$ m \cite{AlAitKanJam}. Propagation parameters for the UAV-BS links are selected
as $\alpha_{\text{los}}=2.2,\,\alpha_{\text{nlos}}=2.8,\,\beta_{\text{los}}=10^{-4},\, \text{and } \beta_{\text{nlos}}=10^{-4}$ according to an urban micro scenario in \cite{haneda20165g}.
The UAV's path originates at ${\bf v}_{\text{I}}=(300,300,80)~\text{m}$ and terminates
at ${\bf v}_{\text{F}}=(1500,1500,80)~\text{m}$.
The cellular network consists of $K=25$ BSs which are randomly scattered over the city. All the BSs have the same height $h_g=20~\text{m}$, and we assume that the UAV flies at a fixed altitude $h=80~\text{m}$.
Fig. \ref{fig:Top_VIEW_CITY} illustrates BSs and the coverage map where the highlighted regions represent the areas where the minimum SNR constraint \eqref{eq:SNR_THR_constraint} is satisfied.
\begin{figure}[t]
\begin{centering}
\includegraphics[width=1\columnwidth]{IMG/TopCityView.eps}
\par\end{centering}
\caption{Top view of the city, BS locations, the generated trajectories and its lengths for different algorithms. The coverage area of each BS is highlighted with green color. \label{fig:Top_VIEW_CITY}}
\end{figure}
The base trajectory and the optimized trajectory described in Sections \ref{sec:Feasibility} and \ref{sec:trajectory_Optimization}
are shown in Fig. \ref{fig:Top_VIEW_CITY}. We have compared our method to
the other graph based approaches
proposed in \cite{zhang2019radio} where
the whole map within the flying area needs to be quantized into a grid.
We consider the quantization unit to be
$10\times10\, \text{m}^2$, which results
in a total of $\Delta ^ 2 = 4\times10^{4}$ nodes in the graph.
It can be seen from Fig. \ref{fig:Top_VIEW_CITY} that our method provides the best solution in terms of the path length.
The base trajectory has the maximum length among all the solutions as it is forced to visit BSs along its way to the destination.
\begin{figure}[t]
\begin{centering}
\includegraphics[width=1\columnwidth]{IMG/outage.eps}
\par\end{centering}
\caption{Outage versus the trajectory length for different algorithms. \label{fig:outage}}
\end{figure}
In Fig. \ref{fig:outage}, we evaluate the performance of the different approaches in terms of the outage over 1000 Monte-Carlo simulations with different BS locations.
The outage is defined as the amount of time the SNR constraint
in \eqref{eq:SNR_THR_constraint} is not satisfied while following the devised trajectory.
The outage of the straight trajectory between the starting and the terminal points is illustrated as well. It can be seen that constraint \eqref{eq:SNR_THR_constraint} is always guaranteed when the UAV moves along our proposed trajectories while there is no hard guarantee for the other approaches. In general, our graph-based trajectory performs better than the other methods.
Finally, we compare the complexity of our proposed algorithms.
Our approach requires only discretizing the coverage border of each BS
into $Q$ samples (see Sec. \ref{sec:Feasibility}), which are later used as nodes in the graph. An upper bound on the complexity of our graph-based algorithm is given by $\mathcal{O}(| {\mathcal{U}^b}|Q^2+KQ\log KQ)$.
It is shown that the complexity of the optimal algorithm introduced in \cite{zhang2019radio} is given by $\mathcal{O}\left( K\Delta^2+\Delta^2\log \Delta\right)$, where $\Delta$ relates to the quantization of
the map. In this simulation, we assumed
the grid size to be $10\times10\, \text{m}^2$, which resulted
in a total of $\Delta ^ 2 = 4\times10^{4}$ nodes.
It is clear that the complexity of our proposed algorithm is considerably less than that of the method in \cite{zhang2019radio}, since $Q\ll \Delta$. Moreover, the complexity of our algorithm increases only with the number of BSs
rather than the size of the flying area, since $Q$ does not change by increasing the size of the flying area.
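As a rough numerical illustration of this comparison (the values of $Q$ and $|\mathcal{U}^b|$ below are placeholders, since the text does not fix them; $\Delta^2 = 4\times10^4$ is taken from the simulation setup):

```python
import math

def ours(num_visited, Q, K):
    """Upper bound O(|U^b| Q^2 + K Q log(K Q)) of the proposed graph-based method."""
    return num_visited * Q ** 2 + K * Q * math.log(K * Q)

def grid_based(K, Delta2):
    """Bound O(K Delta^2 + Delta^2 log Delta) of the grid-quantization method."""
    return K * Delta2 + Delta2 * math.log(math.sqrt(Delta2))

c_ours = ours(num_visited=4, Q=50, K=25)  # placeholder Q and |U^b|
c_grid = grid_based(K=25, Delta2=4e4)     # Delta^2 = 4e4 as in the text
```

With these placeholder values the grid-based bound exceeds the proposed bound by well over an order of magnitude.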
\section{Conclusion}
This study investigated the problem of UAV trajectory design under a cellular connectivity constraint to minimize the trajectory length between a pre-determined initial location and a given destination point in an urban environment. We proposed a novel approach to trajectory design that strikes a trade-off between performance (i.e., path-length reduction) and complexity by exploiting the 3D map of the environment and employing graph theory. We established a graph theory-based framework to first evaluate the feasibility of the problem and then obtain a high-quality approximate solution to the UAV trajectory design problem. The performance of the proposed solutions was validated with a set of Monte-Carlo simulations.
\bibliographystyle{IEEEtran}
\vspace{-0.10cm}
In spin-1 hadrons, there are structure functions in addition to
the ones of the spin-1/2 nucleons, and they are related to
their tensor polarizations. These new structure functions were proposed
in 1980's; however, experimental progress was rather slow except
for the HERMES $b_1$ measurement in 2005 \cite{Airapetian:2005cb}.
In spite of this situation, we have a bright future prospect
because there are experimental projects in 2020's and 2030's
to investigate polarized deuteron structure functions
at various accelerator facilities, such as
Thomas Jefferson National Accelerator Facility (JLab),
Fermilab (Fermi National Accelerator Laboratory),
Nuclotron-based Ion Collider fAcility (NICA),
LHC (Large Hadron Collider)-spin,
and electron-ion colliders (EIC, EicC)
\cite{spin-1-exp}.
Therefore, time has come to investigate them theoretically
before experimental measurements.
In this report, we discuss a gluon transversity, especially how to
find it in a proton-deuteron Drell-Yan process.
There are already significant works on a quark transversity
both theoretically and experimentally.
However, there is no experimental measurement on the gluon transversity
because it does not exist in the spin-1/2 nucleons.
The gluon transversity is defined by an amplitude with a gluon-spin flip,
namely the difference of two unit of spin ($\Delta s=2$), so that
the hadron spin needs to be larger than or equal to one.
The most stable spin-1 target is the deuteron, which can be used
for experimental measurements.
Since the proton and neutron cannot contribute directly,
the gluon transversity is an appropriate observable to find
any exotic signature in the deuteron beyond the simple bound
system of the nucleons. If a finite distribution is found experimentally,
it could lead a new field of hadron physics.
On this topic, the purpose of our study is to provide a theoretical
formalism for investigating the gluon transversity at hadron accelerators,
for example, by the Drell-Yan process \cite{ks-trans-g-2020,pd-drell-yan}
as discussed in Sec.\,3,
whereas the lepton scattering measurement was already considered at JLab
\cite{spin-1-exp,gluon-trans-2}.
The second topic is on
transverse-momentum-dependent parton distribution functions (TMDs)
and parton distribution functions (PDFs)
of tensor-polarized spin-1 hadrons up to twist 4 \cite{ks-tmd-2021}
as explained in Secs.\,4 and 5.
Polarized PDFs of the nucleons have been investigated
up to twist 4 \cite{tmds-nucleon}; however,
for spin-1 hadrons they had been investigated only at the twist-2 level
\cite{bm-2000} until recently.
The purpose of our study is to provide full TMDs, PDFs, and fragmentation
functions up to twist 4 for the spin-1 hadrons \cite{ks-tmd-2021}.
Due to the time-reversal (T) invariance in the collinear PDFs,
there are sum rules for T-odd TMD distributions.
For the collinear PDFs, a useful twist-2 relation and a sum rule were
found \cite{ks-ww-bc-2021} in a similar way to
the Wandzura-Wilczek relation and the Burkhardt-Cottingham sum rule.
Furthermore, the equation of motion for quarks was used for obtaining
relations among the collinear parton- and multiparton-distribution functions
for spin-1 hadrons \cite{eq-motion}.
We explain these results in this paper.
\vspace{-0.10cm}
\section{Polarizations of spin-1 hadrons}
\label{polarizations}
\vspace{-0.10cm}
Polarizations of spin-1 hadrons are described by the spin vector $\vec S$
and tensor $T_{ij}$ defined by the polarization vector $\vec E$ as
\ \vspace{-0.35cm}
\begin{align}
& \vec S
= \text{Im} \, (\, \vec E^{\, *} \times \vec E \,)
= (S_{T}^x,\, S_{T}^y,\, S_L) ,
\nonumber \\[-0.15cm]
& T_{ij}
= \frac{1}{3} \delta_{ij}
- \text{Re} \, (\, E_i^{\, *} E_j \,)
\nonumber \\[-0.02cm]
& \ \hspace{-0.15cm}
= \frac{1}{2}
\left(
\begin{array}{ccc}
- \frac{2}{3} S_{LL} + S_{TT}^{xx} & S_{TT}^{xy} & S_{LT}^x \\[+0.20cm]
S_{TT}^{xy} & - \frac{2}{3} S_{LL} - S_{TT}^{xx} & S_{LT}^y \\[+0.20cm]
S_{LT}^x & S_{LT}^y & \frac{4}{3} S_{LL}
\end{array}
\right) ,
\label{eqn:spin-1-vector-tensor}
\end{align}
where $S_{T}^x$, $S_{T}^y$, $S_L$,
$S_{LL}$, $S_{TT}^{xx}$, $S_{TT}^{xy}$, $S_{LT}^x$, and $S_{LT}^y$
are parameters to express the vector and tensor polarizations.
The polarizations of the spin-1 hadrons, for example the deuteron,
are listed in Table \ref{table:polarizations} by showing
the polarization $\vec E$ and the polarization parameters
for the longitudinal, transverse, and linear polarizations
of a spin-1 hadron.
The longitidutinal polarizations contain both $S_L$ and $S_{LL}$,
and the transverse ones do $S^i_T$, $S_{LL}$, and $S^{xx}_{TT}$.
It is interesting to see that these polarizations partially have
the tensor polarization parameter $S_{LL}$ and
that the linear polarization parameter $S^{xx}_{TT}$ is contained
in the transverse polarization.
The linear polarizations are defined by the polarization vector
$\vec E_x = \left ( \, 1,\, 0,\, 0 \, \right )$ and
$\vec E_y = \left ( \, 0,\, 1,\, 0 \, \right )$.
They also have the parameter $S_{LL}$ in addition to $S^{xx}_{TT}$
as shown in Table \ref{table:polarizations}, so that the $S_{LL}$
terms should be cancelled in order to extract
the gluon transversity defined in association
with $S^{xx}_{TT}$.
\vspace{0.20cm}
\footnotesize
\begin{center}
\renewcommand{\arraystretch}{1.6}
\bottomcaption{Longitudinal, transverse, and linear polarizations
of a spin-1 hadron, polarization vectors, and parameters
of the spin vector and tensor \cite{ks-trans-g-2020,spin-1-exp}.}
\label{table:polarizations}
\begin{supertabular}{|l|c|ccccc|} \hline
Polarizations & $\vec E$ & $S_T^x$
\hspace{-0.20cm} & \hspace{-0.20cm} $S_T^y$ \hspace{-0.20cm} & \hspace{-0.20cm} $S_L$ \hspace{-0.20cm} & \hspace{-0.20cm} $S_{LL}$ \hspace{-0.20cm} & \hspace{-0.20cm} $S_{TT}^{xx}$ \\ \hline
Longitudinal $+z$ & $\frac{1}{\sqrt{2}} (-1,\, -i,\, 0)$ &
0 \hspace{-0.20cm} & \hspace{-0.20cm} 0 \hspace{-0.20cm} & \hspace{-0.20cm} $+$1 \hspace{-0.20cm} & \hspace{-0.20cm} $+\frac{1}{2}$ \hspace{-0.20cm} & \hspace{-0.20cm} 0 \\ \hline
Longitudinal $-z$ & $\frac{1}{\sqrt{2}} (+1,\, -i,\, 0)$ &
0 \hspace{-0.20cm} & \hspace{-0.20cm} 0 \hspace{-0.20cm} & \hspace{-0.20cm} $-$1 \hspace{-0.20cm} & \hspace{-0.20cm} $+\frac{1}{2}$ \hspace{-0.20cm} & \hspace{-0.20cm} 0 \\ \hline
Transverse $+x$ & $\frac{1}{\sqrt{2}} (0,\, -1,\, -i)$ &
$+$1 \hspace{-0.20cm} & \hspace{-0.20cm} 0 \hspace{-0.20cm} & \hspace{-0.20cm} 0 \hspace{-0.20cm} & \hspace{-0.20cm} $-\frac{1}{4}$ \hspace{-0.20cm} & \hspace{-0.20cm} $+\frac{1}{2}$ \\ \hline
Transverse $-x$ & $\frac{1}{\sqrt{2}} (0,\, +1,\, -i)$ &
$-1$ \hspace{-0.20cm} & \hspace{-0.20cm} 0 \hspace{-0.20cm} & \hspace{-0.20cm} 0 \hspace{-0.20cm} & \hspace{-0.20cm} $-\frac{1}{4}$ \hspace{-0.20cm} & \hspace{-0.20cm} $+\frac{1}{2}$ \\ \hline
Transverse $+y$ & $\frac{1}{\sqrt{2}} (-i,\, 0,\, -1)$ &
0 \hspace{-0.20cm} & \hspace{-0.20cm} $+$1 \hspace{-0.20cm} & \hspace{-0.20cm} 0 \hspace{-0.20cm} & \hspace{-0.20cm} $-\frac{1}{4}$ \hspace{-0.20cm} & \hspace{-0.20cm} $-\frac{1}{2}$ \\ \hline
Transverse $-y$ & $\frac{1}{\sqrt{2}} (-i,\, 0,\, +1)$ &
0 \hspace{-0.20cm} & \hspace{-0.20cm} $-1$ \hspace{-0.20cm} & \hspace{-0.20cm} 0 \hspace{-0.20cm} & \hspace{-0.20cm} $-\frac{1}{4}$ \hspace{-0.20cm} & \hspace{-0.20cm} $-\frac{1}{2}$ \\ \hline
Linear $x$ & $(1,\, 0,\, 0)$ &
0 \hspace{-0.20cm} & \hspace{-0.20cm} 0 \hspace{-0.20cm} & \hspace{-0.20cm} 0 \hspace{-0.20cm} & \hspace{-0.20cm} $+\frac{1}{2}$ \hspace{-0.20cm} & \hspace{-0.20cm} $-1$ \\ \hline
Linear $y$ & $(0,\, 1,\, 0)$ &
0 \hspace{-0.20cm} & \hspace{-0.20cm} 0 \hspace{-0.20cm} & \hspace{-0.20cm} 0 \hspace{-0.20cm} & \hspace{-0.20cm} $+\frac{1}{2}$ \hspace{-0.20cm} & \hspace{-0.20cm} $+1$ \\ \hline
\end{supertabular}
\end{center}
\normalsize
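The parameters in Table \ref{table:polarizations} can be verified numerically from Eq.~\eqref{eqn:spin-1-vector-tensor}. The following Python sketch computes $\vec S$ and $T_{ij}$ from a polarization vector and inverts the tensor for $S_{LL}$ and $S_{TT}^{xx}$ (the helper names are ours, introduced only for this check):

```python
def spin_params(E):
    """Compute S = Im(E* x E) and T_ij = delta_ij/3 - Re(E_i* E_j) from a
    polarization vector E, then invert Eq. (1) for the parameters:
    S_LL = (3/2) T_zz and S_TT^xx = 2 T_xx + (2/3) S_LL."""
    Ec = [complex(z).conjugate() for z in E]
    S = [(Ec[1] * E[2] - Ec[2] * E[1]).imag,   # S_T^x
         (Ec[2] * E[0] - Ec[0] * E[2]).imag,   # S_T^y
         (Ec[0] * E[1] - Ec[1] * E[0]).imag]   # S_L
    T = [[(1.0 / 3.0 if i == j else 0.0) - (Ec[i] * E[j]).real
          for j in range(3)] for i in range(3)]
    S_LL = 1.5 * T[2][2]
    S_TTxx = 2.0 * T[0][0] + (2.0 / 3.0) * S_LL
    return S[2], S_LL, S_TTxx

r2 = 2 ** -0.5
long_pz = [-r2, -1j * r2, 0]   # longitudinal +z polarization
trans_px = [0, -r2, -1j * r2]  # transverse +x polarization
lin_x = [1, 0, 0]              # linear x polarization
```

For these three vectors the extracted $(S_L, S_{LL}, S_{TT}^{xx})$ reproduce the corresponding rows of Table \ref{table:polarizations}.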
\section{Gluon transversity in Drell-Yan process}
\label{gluon-transversity}
The gluon transversity has not been measured yet, although
there are global analysis results on the quark transversity.
In principle, it exists in the spin-1 deuteron although
it does not for the spin-1/2 nucleons, so that it is
a unique quantity for probing new hadron physics
within the deuteron.
The gluon transversity $\Delta_T g$ is defined by the matrix element
between the linearly polarized ($E_x$) deuteron as
\cite{ks-trans-g-2020}
\begin{align}
\Delta_T g (x)
& = \varepsilon_{TT,\alpha\beta}
\int \frac{d \xi^-}{2\pi} \, x p^+ \, e^{i x p^+ \xi^-}
\nonumber \\[-0.10cm]
& \ \hspace{0.50cm}
\times
\langle \, p \, E_{x} \left | \, A^{\alpha} (0) \, A^{\beta} (\xi)
\right | p \, E_{x} \, \rangle
_{\xi^+=\vec\xi_\perp=0} ,
\label{eqn:pdf-definitions}
\end{align}
where $x$ is the momentum fraction for a gluon,
$\varepsilon_{TT}^{\alpha\beta}$ is the transverse parameter
given by $\varepsilon_{TT}^{11}=+1$ and $\varepsilon_{TT}^{22}=-1$,
$\xi$ is the space-time coordinate expressed by the lightcone
coordinates $\xi^\pm = (\xi^0 \pm \xi^3)/\sqrt{2}$ and $\vec\xi_\perp$,
$p$ is the deuteron momentum, and $A^\mu$ is the gluon field.
It is expressed by the gluon distribution difference as
\begin{align}
\Delta_T g (x) = g_{\hat x/\hat x} (x) - g_{\hat y/\hat x} (x) ,
\label{eqn:gluon-transversity-linear}
\nonumber \\[-0.70cm]
\end{align}
where $g_{\hat y/\hat x}$ is the distribution of gluons with linear polarization
$\varepsilon_y$ in the deuteron with the polarization $E_x$, and similarly for $g_{\hat x/\hat x}$.
In terms of parton-hadron forward scattering amplitudes
$A_{\Lambda_i \lambda_i ,\, \Lambda_f \lambda_f}$
with the initial and final hadron helicities
$\Lambda_i$ and $\Lambda_f$ and parton ones
$\lambda_i$ and $\lambda_f$,
the gluon transversity is given by
\begin{align}
\Delta_T g (x) \sim \text{Im} \, A_{++,\, - \hspace{0.03cm} -} \ .
\label{eqn:delta-deltaT-amplitudes}
\end{align}
Namely, it is defined by the amplitude with a gluon-helicity flip,
so that a change of two spin units ($\Delta s=2$) is needed
between the initial and final states. This is the reason why
the spin-1/2 nucleons cannot accommodate this distribution.
The gluon transversity will be measured in charged-lepton scattering
by looking at the angle dependence of the deuteron spin
in the cross section \cite{spin-1-exp}, namely the angle between
the lepton-scattering plane and the target-spin orientation.
The intention of our studies is to make the measurement possible
at hadron accelerator facilities by supplying a theoretical formalism
for the Drell-Yan process \cite{ks-trans-g-2020,pd-drell-yan}.
As an example, the proton-deuteron Drell-Yan process was investigated
because it is possible at Fermilab. The formalism details are explained
in the paper \cite{ks-trans-g-2020}, where the cross section of
$p(A)+d(B) \to \mu^+ \mu^- +X$ is given by
the difference $d\sigma (E_x) - d\sigma (E_y)$ as
\begin{align}
& \frac{ d \sigma_{pd \to \mu^+ \mu^- X} }{d\tau \, d \vec q_T^{\, 2} \, d\phi \, dy}
(E_x-E_y )
= - \frac{\alpha^2 \, \alpha_s \, C_F \, q_T^2}{6\pi s^3} \cos (2\phi)
\nonumber \\
&
\times
\int_{\text{min}(x_a)}^1 \! dx_a
\frac{ \sum_{q} e_q^2 \, x_a \!
\left[ \, q_A (x_a) + \bar q_A (x_a) \, \right ] x_b \Delta_T g_B (x_b)}
{ (x_a x_b)^2 \, (x_a -x_1) (\tau -x_a x_2 )^2} ,
\label{eqn:cross-5}
\end{align}
\ \vspace{-0.45cm} \
\noindent
by considering the deuteron linear polarizations ($E_x$, $E_y$).
Here, $\tau$ is defined by the dimuon mass or momentum squared as
$\tau=M_{\mu\mu}^2/s=Q^2/s$ with the center-of-mass energy squared $s$,
$\vec q_T^{\,2}$ is the dimuon transverse momentum squared,
$\phi$ is its azimuthal angle, $y$ is the rapidity
in the center-of-mass frame,
$\alpha$ is the fine structure constant,
$\alpha_s$ is the QCD running coupling constant,
$C_F$ is the color factor $C_F=(N_c^2-1)/(2N_c)$ with $N_c=3$,
and $e_q$ is the quark charge.
The momentum fraction $x_b$ is given by
$x_b=(x_a x_2 -\tau)/(x_a-x_1)$, and the minimum
of $x_a$ is $\text{min}(x_a)=(x_1-\tau)/(1-x_2)$
with $x_1 = e^y \sqrt{(Q^2+\vec q_T^{\,2})/s}$
and $x_2 = e^{-y} \sqrt{(Q^2+\vec q_T^{\,2})/s}$.
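As a sanity check of these kinematical definitions, the fractions $x_1$, $x_2$, $\text{min}(x_a)$, and $x_b$ can be evaluated numerically. The sketch below uses illustrative values of $s$, $Q^2$, $\vec q_T^{\,2}$, and $y$ (our own choices, not experimental kinematics) and verifies that $x_b = 1$ at $x_a = \text{min}(x_a)$, which is why the $x_a$ integration starts there.

```python
import math

def dy_kinematics(s, Q2, qT2, y):
    """Kinematic fractions defined in the text: tau = Q^2/s,
    x1 = e^y sqrt((Q^2 + qT^2)/s), x2 = e^{-y} sqrt((Q^2 + qT^2)/s),
    and min(x_a) = (x1 - tau)/(1 - x2)."""
    tau = Q2 / s
    x1 = math.exp(y) * math.sqrt((Q2 + qT2) / s)
    x2 = math.exp(-y) * math.sqrt((Q2 + qT2) / s)
    return tau, x1, x2, (x1 - tau) / (1.0 - x2)

def x_b(x_a, tau, x1, x2):
    """Deuteron-side momentum fraction x_b = (x_a x_2 - tau)/(x_a - x_1)."""
    return (x_a * x2 - tau) / (x_a - x1)
```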
The $q_A (x_a)$ and $\bar q_A (x_a)$
are quark and antiquark distribution functions in the proton,
and $\Delta_T g (x_b)$ is the gluon transversity in the deuteron.
In estimating the cross section numerically, we used
the CT14 PDFs for the unpolarized PDFs of the proton and also
of the deuteron by ignoring nuclear corrections.
Since no gluon transversity is available at this stage,
we assumed that it is equal to the longitudinally-polarized gluon
distribution given by NNPDFpol1.1; this likely leads to
an overestimation of the cross section.
In Fig.\,1, the polarization asymmetry
$A_{E_{xy}}\equiv d\sigma (E_x-E_y)/d\sigma (E_x+E_y)$
is shown for the Fermilab kinematics with $p_p = 120$ GeV
by taking $\phi=0$, $y=0.5$, and $q_T=0.5$ or 1.0 GeV
as a function of $M_{\mu\mu}^2$.
The asymmetry is typically a few percent. However,
if a finite gluon transversity is found in an experiment,
it could lead to interesting new hadron physics.
Fortunately, this experiment will be proposed at Fermilab
within the E-1039 collaboration \cite{spin-1-exp}.
\begin{figurehere}
\begin{center}
\includegraphics[width=6.0cm]{AExy-asymmetry.eps} \\
Figure 1. Polarization asymmetry $ | A_{E_{xy}} |$.
\end{center}
\label{fig:asym-ex-ey}
\end{figurehere}
\section{TMDs and PDFs for spin-1 hadrons}
\label{TMDs-up-to-twist-4}
Next, we discuss the TMDs and PDFs at twists 3 and 4
for tensor-polarized spin-1 hadrons.
Recently, fully consistent investigations were performed to find
the possible twist-3 and twist-4 TMDs and PDFs, whereas the corresponding
higher-twist PDFs were found many years ago for the nucleons.
In general, the TMDs and PDFs are defined from the correlation function
$\Phi_{ij}^{[c]}$,
which is the amplitude to extract a parton from a hadron and
then to insert it into the hadron at a different space-time point:
\begin{align}
& \Phi_{ij}^{[c]} (k, P, T \, | \, n )
= \int \! \frac{d^4 \xi}{(2\pi)^4} \, e^{ i k \cdot \xi}
\nonumber \\[-0.15cm]
& \ \hspace{1.7cm}
\times
\langle \, P , T \left | \,
\bar\psi _j (0) \, W^{[c]} (0, \xi)
\psi _i (\xi) \, \right | P, \, T \, \rangle .
\label{eqn:correlation-q}
\end{align}
Here, $k$ and $P$ are quark and hadron momenta, $T$ indicates
the tensor polarization of a spin-1 hadron,
$n$ is the lightcone vector $n^\mu =(1,0,0,-1)/\sqrt{2}$,
$\xi$ is a space-time coordinate,
$\psi$ is the quark field,
and $W^{[c]} (0, \xi)$ is the gauge link with the integral path $c$.
This correlation function is expanded in a Lorentz invariant way
with the constraints of the Hermiticity and parity invariance.
The time-reversal invariance does not have to be satisfied at the TMD
level due to the existence of the color flow given by the gauge link;
however, it is imposed on the collinear PDFs. Then, we obtain
\cite{ks-tmd-2021}
\begin{align}
\Phi(k, P, T \, | n) & = \frac{A_{13}}{M} T_{kk}
+ \cdots
+ \frac{A_{20}}{M^2} \varepsilon^{\mu\nu P k} \gamma_{\mu} \gamma_5 T_{\nu k}
\nonumber \\
&
+ \frac{B_{21}M}{P\cdot n} T_{kn}
+ \cdots
+ \frac{B_{52}M}{P\cdot n } \sigma_{\mu k} T^{\mu n} ,
\label{eqn:cork4}
\end{align}
for the tensor polarization part.
The twist-2 expression was given in Ref.\,\cite{bm-2000} for spin-1 hadrons.
Higher-twist expressions were investigated for the spin-1/2 nucleons
in Ref.\,\cite{tmds-nucleon} by including the lightcone vector $n$
to accommodate twist-3 and 4 effects.
In the same way, we included $n$ terms for the spin-1 hadrons
for defining the higher-twist TMDs and PDFs.
Here, $A_i$ and $B_i$ are expansion coefficients,
the tensor polarization is expressed by $T^{\mu\nu}$,
and the contraction $X_{\mu k} \equiv X_{\mu \nu} k^{\nu}$ is used.
The TMDs are given by integrating the function over the quark momenta as
\begin{align}
\Phi^{[c]} (x, k_T, P, T ) & = \! \int \! dk^+ dk^- \,
\Phi^{[c]} (k, P, T \, |n ) \, \delta (k^+ \! -x P^+) .
\label{eqn:correlation-tmd}
\\[-1.00cm] \nonumber
\end{align}
The TMDs and collinear PDFs are defined by traces of
the correlation functions with $\gamma$ matrices ($\Gamma$) as
$ \Phi^{\left[ \Gamma \right]} \equiv
\text{Tr} \left[ \, \Phi \Gamma \, \right] /2 $.
The twist-2 TMDs were defined
by the traces $\Phi^{ [ \gamma^+ ] }$,
$\Phi^{ [ \gamma^+ \gamma_5 ] }$, and
$\Phi^{ [ i \sigma^{i+} \gamma_5 ] }$
(or $\Phi^{ [ \sigma^{i+} ] }$) \cite{bm-2000}.
The twist-3 TMDs were obtained by
$\Phi^{ [ \gamma^i ] }$,
$\Phi^{\left[{\bf 1}\right]}$,
$\Phi^{\left[i\gamma_5\right]}$,
$\Phi^{ [\gamma^{i}\gamma_5 ]}$,
$\Phi^{ [ \sigma^{ij} ]}$,
and $\Phi^{ [ \sigma^{-+} ] }$,
and the twist-4 TMDs were obtained by
$\Phi^{[\gamma^-]}$,
$\Phi^{[\gamma^- \gamma_5]}$, and $\Phi^{[\sigma^{i-}]}$
\cite{ks-tmd-2021}.
For example, we have
\begin{align}
&
\Phi^{ [ \gamma^i ] } (x, k_T, T)
=
\frac{M}{P^+} \bigg [ f^{\perp}_{LL}(x, k_T^{\, 2}) S_{LL} \frac{k_T^i}{M}
\! + \! f^{\,\prime} _{LT} (x, k_T^{\, 2})S_{LT}^i
\nonumber \\[-0.10cm]
& \ \hspace{1.0cm}
- f_{LT}^{\perp}(x, k_T^{\, 2}) \frac{ k_{T}^i S_{LT}\cdot k_{T}}{M^2}
- f_{TT}^{\,\prime} (x, k_T^{\, 2}) \frac{S_{TT}^{ i j} k_{T \, j} }{M}
\nonumber \\[-0.10cm]
& \ \hspace{1.0cm}
+ f_{TT}^{\perp}(x, k_T^{\, 2}) \frac{k_T\cdot S_{TT}\cdot k_T}{M^2}
\frac{k_T^i}{M} \bigg ] ,
\label{eqn:cork-3-1a}
\end{align}
as the trace for defining some of the twist-3 TMDs.
Instead of the TMDs with $^\prime$, we define other TMDs by
$
F (x, k_T^{\, 2}) \equiv F^{\,\prime} (x, k_T^{\, 2})
- (k_T^{\, 2} /(2M^2)) \, F^{\perp} (x, k^{\, 2}_T)
$
where $k_T^{\, 2}= - \vec k_T^{\, 2}$,
so that the TMDs with $^\prime$ need not appear
in the actual TMD lists.
From these traces, we find that the following tensor-polarized TMDs exist
\cite{ks-tmd-2021}:
\begin{align}
& \text{Twist-2 TMD:}\ \ f_{1LL},\ f_{1LT},\ f_{1TT},\ g_{1LT},\ g_{1TT},
\nonumber \\[-0.20cm]
& \ \hspace{2.3cm}
h_{1LL}^\perp,\ h_{1LT},\ h_{1LT}^\perp,\ h_{1TT},\ h_{1TT}^\perp ,
\nonumber \\
& \text{Twist-3 TMD:}\ \ f_{LL}^\perp,\ e_{LL},\
f_{LT},\ f_{LT}^\perp,\ e_{LT},\ e_{LT}^\perp,\
f_{TT},\ f_{TT}^\perp,
\nonumber \\[-0.20cm]
& \ \hspace{2.3cm}
e_{TT},\ e_{TT}^\perp,\
g_{LL}^\perp,\ g_{LT},\ g_{LT}^\perp,\ g_{TT},\ g_{TT}^\perp,
\nonumber \\[-0.20cm]
& \ \hspace{2.3cm}
h_{LL},\ h_{LT},\ h_{LT}^\perp,\ h_{TT},\ h_{TT}^\perp,
\nonumber \\
& \text{Twist-4 TMD:}\ \ f_{3LL},\ f_{3LT},\ f_{3TT},\ g_{3LT},\ g_{3TT},
\nonumber \\[-0.20cm]
& \ \hspace{2.3cm}
h_{3LL}^\perp,\ h_{3LT},\ h_{3LT}^\perp,\ h_{3TT},\ h_{3TT}^\perp .
\label{eqn:spin-1-tmds-2-3-4}
\end{align}
Namely, there are 10, 20, and 10 tensor-polarized TMDs
at twists 2, 3, and 4, respectively.
These are classified by chiral even/odd and time-reversal even/odd.
Since the time-reversal invariance should be satisfied
in collinear PDFs by the integral over the transverse momentum $\vec k_T$,
there are sum rules for the T-odd TMDs as
\begin{align}
& \! \int \! d^2 k_T \, h_{1LT} (x, k_T^{\, 2})
= \! \int \! d^2 k_T \, g_{LT} (x, k_T^{\, 2})
\nonumber \\[-0.20cm]
& \ \ \
= \! \int \! d^2 k_T \, h_{LL} (x, k_T^{\, 2})
= \! \int \! d^2 k_T \, h_{3LT}(x, k_T^{\, 2}) = 0 .
\label{eqn:TMD-sum}
\end{align}
The TMD fragmentation functions are also found up to twist 4 \cite{ks-tmd-2021}
simply by changing kinematical variables and function names as
\cite{bm-2000}
\vspace{-0.20cm}
\begin{align}
& \ \hspace{-0.20cm}
\text{Kinematical variables:} \ \
x, k_T, S, T, M, n, \gamma^+, \sigma^{i+}
\nonumber \\[-0.15cm]
& \ \hspace{0.5cm}
\Rightarrow \
z, k_T, S_h, T_h, M_h, \bar n, \gamma^-, \sigma^{i-},
\nonumber \\
& \ \hspace{-0.20cm}
\text{Distribution functions:} \ \ f, g, h, e \hspace{2.30cm}
\nonumber \\[-0.15cm]
& \ \hspace{0.5cm}
\Rightarrow \
\text{Fragmentation functions:} \
D, G, H, E .
\label{eqn:tmd-fragmentation}
\end{align}
In addition, if the TMDs are integrated over $\vec k_T$, we obtain
the tensor-polarized PDFs up to twist 4 as
\vspace{-0.20cm}
\begin{align}
& \text{Twist-2 PDF:}\ f_{1LL}, \
\text{Twist-3:}\ e_{LL},\ f_{LT},
\nonumber \\[-0.15cm]
& \text{Twist-4:}\ f_{3LL}.
\label{eqn:pdfs-2-3-4}
\nonumber \\[-0.90cm]
\end{align}
The collinear fragmentation functions were investigated
in Ref.\,\cite{ji-ffs}.
\section{Useful relations among PDFs and multiparton distribution functions}
\label{useful-relations}
For the new twist-3 PDFs, we derived useful relations.
First, we obtained a twist-2 relation and a sum rule \cite{ks-ww-bc-2021},
analogous to the Wandzura-Wilczek (WW) relation
and the Burkhardt-Cottingham (BC) sum rule,
for the twist-2 and twist-3
tensor-polarized parton distribution functions $f_{1LL}$
and $f_{LT}$, respectively.
Using the formalism of the operator product expansion and defining
multiparton distribution functions for twist-3 terms,
we obtained the relation
\begin{align}
\!
f_{LT}(x)= \frac{3}{2} \int^{\epsilon (x)}_x dy \frac{f_{1LL}(y)}{y}
+\int^{\epsilon (x)}_x dy \frac{f_{LT}^{(HT)}(y)}{y}.
\label{eqn:flt}
\end{align}
Here, $\epsilon (x)$ is defined by $\epsilon (x)=1$ ($-1$) at $x>0$ ($x<0$),
and the last term is the twist-3 effect given by
the multiparton distribution functions.
We define $+$ PDFs by $f^+ (x) = f(x) + \bar f(x)$
in the range $0 \le x \le 1$.
The $f_{1LL}^+$ is related to $b_1$ by
$b_1^{q+\bar q} = - (3/2) f_{1LL}^+$.
Then, neglecting the higher-twist term,
we obtain
\begin{align}
f_{LT}^+(x)= \frac{3}{2} \int^1_x
\frac{dy}{y} \, f_{1LL}^+(y) .
\end{align}
Namely, the twist-2 part of $f_{LT}$ is expressed by an integral of
$f_{1LL}$ (or $b_1$).
If the function $f_{2LT}$ is defined by
$f_{2LT} = \frac{2}{3} f_{LT}-f_{1LL}$,
it leads to the twist-2 relation similar to the WW relation as
\begin{align}
f_{2LT}^+ (x)=-f_{1LL}^+ (x)+ \int^1_x \frac{dy}{y} f_{1LL}^+ (y) .
\nonumber
\end{align}
Integrating this equation over $x$, we obtain the BC-like sum rule as
\begin{align}
& \int_0^1 dx \, f_{2LT}^+(x) =0 .
\nonumber
\end{align}
If the parton-model sum rule for $f_{1LL}$ ($b_1$),
$\int dx f_{1LL}^+ (x) = 0$
($\int dx b_1^{q+\bar q} (x) = 0$) \cite{b1-sum},
is applied by assuming vanishing tensor-polarized antiquark distributions,
another sum rule exists for $f_{LT}$ itself,
$\int_0^1 dx \, f_{LT}^+(x) =0.$
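These sum rules can be checked numerically. The sketch below uses an arbitrary toy model for $f_{1LL}^+$ (our own smooth choice, not a fit to data), constructs $f_{2LT}^+$ from the twist-2 relation above by a midpoint rule, and verifies that its first moment vanishes:

```python
def f1LL_plus(x):
    # toy input model (our own choice, not a fit): smooth, vanishing at x = 1
    return x * (1.0 - x) ** 2

def f2LT_plus(x, n=800):
    # twist-2 relation from the text:
    # f_2LT^+(x) = -f_1LL^+(x) + int_x^1 dy f_1LL^+(y)/y
    ys = (x + (1.0 - x) * (i + 0.5) / n for i in range(n))
    integral = sum(f1LL_plus(y) / y for y in ys) * (1.0 - x) / n
    return -f1LL_plus(x) + integral

def first_moment(f, n=800):
    # midpoint-rule integral of f over [0, 1]
    return sum(f((i + 0.5) / n) for i in range(n)) / n
```

The moment vanishes for any integrable model, since interchanging the order of the $x$ and $y$ integrations cancels the two terms.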
In deriving these relations, we showed that the following
tensor-polarized multiparton distribution functions exist:
$F_{LT} (x,y),\ G_{LT} (x,y),\
H_{LL}^\perp (x,y),\ H_{TT} (x,y) .$
Next, from the equation of motion for quarks, useful relations
were also obtained
(1)
for the twist-3 PDF $f_{LT}$,
the transverse-momentum moment PDF $f_{1LT}^{\,(1)}$, and
the multiparton distribution functions $F_{G,LT}$ and $G_{G,LT}$;
(2)
for the twist-3 PDF $e_{LL}$, the twist-2 PDF $f_{1LL}$,
and the multiparton distribution function $H_{G,LL}^\perp$ as
\cite{eq-motion}
\begin{align}
& \! \!
x f_{LT}(x) - f_{1LT}^{\,(1)}(x)
- {\cal P} \! \! \int_{-1}^1 \! \! \! dy \,
\frac{F_{G,LT}(x, y) + G_{G,LT} (x, y) }{x-y} = 0 ,
\nonumber \\
& \! \!
x \, e_{LL}(x) - 2 {\cal P} \! \! \int_{-1}^1 \! \! \! dy \,
\frac{H_{G,LL}^\perp (x, y)}{x-y}
-\frac{m}{M} f_{1LL} (x)
=0 .
\nonumber
\end{align}
The transverse-momentum moments of the TMDs are defined by
$f^{\, (1)} (x) = \int \! d^2 k_T
(\vec k_T^{\,2} / (2 M^2)) \, f(x,k_T^2)$,
${\cal P}$ denotes the principal-value integral,
and $m$ is the quark mass.
In addition, the Lorentz-invariance relation was obtained as
\cite{eq-motion}
\begin{align}
\! \!
\frac{d f_{1LT}^{\,(1)}(x) }{dx}
- f_{LT}(x) + \frac{3}{2} f_{1LL}(x)
- 2 {\cal P} \! \!
\int_{-1}^1 \! \! \! dy \, \frac{F_{G,LT} (x, y)}{(x-y)^2} =0 .
\nonumber
\end{align}
In these derivations, we also obtained relations
in $F_{D/G,LT} (x, y)$, $G_{D/G,LT} (x, y)$,
$H_{D/G,LL}^\perp (x, y)$, and $H_{D/G,TT} (x, y)$.
\section{Summary}
We explained a possible gluon transversity measurement by the proton-deuteron
Drell-Yan process. Then, possible twist-3 and twist-4 TMDs and PDFs were shown
for tensor-polarized spin-1 hadrons. In addition, the corresponding
TMD fragmentation functions exist at twists 3 and 4.
A useful twist-2 WW-like relation and a BC-like sum rule were derived
by defining multiparton distribution functions at twist 3.
Furthermore, from the equation of motion for quarks,
the twist-3 PDFs are related to other PDFs and multiparton
distribution functions, and the so-called Lorentz-invariance relation
was also obtained. Since there are various experimental projects
to investigate spin-1 hadrons, these studies should be useful.
\section*{Acknowledgments}
S. Kumano was partially supported by
Japan Society for the Promotion of Science (JSPS) Grants-in-Aid
for Scientific Research (KAKENHI) Grant Number 19K03830.
Qin-Tao Song was supported by the National Natural Science Foundation
of China under Grant Number 12005191, the Academic Improvement Project
of Zhengzhou University, and the China Scholarship Council
for visiting Ecole Polytechnique.
\end{multicols}
\medline
\begin{multicols}{2}
\section{Introduction}
Based on the hurricane that struck Puerto Rico in 2017, we developed a transportable disaster response system, "DroneGo", featuring a drone fleet capable of delivering medical packages and videoing roads. Assuming equal weight for both missions, we take the capability of carrying out the former mission as a constraint and a starting point from which reconnaissance routes are built.
The feasibility of fitting packages into cargo bay 1 or 2 is tested by a genetic algorithm. In the scenario where drones carry packages to destinations and fly back unloaded, the maximum reachable distance of each loaded drone is derived from the drone specifications and the loading weight. A k-means clustering algorithm is used for partitioning destinations and deriving centroids as locations of bases. Sampled points from roads are added to take the reconnaissance mission into account. Points of destination are oversampled with increasing weight until centroids fall within the scope of the maximum reachable distance. Drones for each base are selected based on their performance and the medical demand of destinations. The container of each base is filled until full with the minimum drones required and the maximum medicine storage. Transportation of medical packages and backup drones are also included in our system.
A biased random walk model mimicking a drunk man is built to explore feasible routes on a field with altitude and road information. A proposal mechanism guaranteeing stochasticity is combined with an objective function biasing the randomness. Road and home attraction are weighted differently as the walk goes on, to simulate the behavior of an exploratory yet nostalgic drone. In the modified model, we let the nuisance parameters follow a log-normal distribution and use a recurrent neural network to learn from previous routes and enhance performance. The results of the modified model are apparently better, and a combination of two routes is chosen for our system. When analyzing the filtered distribution of nuisance parameters and their contribution to performance, we find them neither contributing nor providing reasonable experience for us to pick their values in the next run. The performance difference caused by the k-nearest-neighbor rule selection is huge, but the underlying mechanism is not explorable on this single field.
\begin{table}[p]
\caption{Symbol Description}
\label{notations}
\centering
\begin{tabular}{cc}
\toprule
Symbol & Description\\
\midrule
MPC & max payload capability of drones\\
MDC & max drivable capability of drones\\
k & the proportional coefficient for MDC/MPC\\
l & the weight of load\\
t & maximum flight time with cargo\\
T & maximum flight time without cargo\\
$P_1$ & the power of flying to destination\\
$P_2$ & the power of flying back\\
$t_1$ & the time of flying to destination\\
$t_2$ & the time of flying back\\
$x_1,x_2,\cdots,x_n$ & the longitude of points to be clustered in K-means clustering\\
$y_1,y_2,\cdots,y_n$ & the latitude of points to be clustered in K-means clustering\\
dist(X,Y) & the Euclidean distance between X and Y\\
$label_i$ & point that has the least squared Euclidean distance\\
$a_i$ & the centroid of each cluster\\
$p_{road}$ & the tendency of drones flying towards roads\\
$p_{home}$ & the tendency of drones going home\\
$\gamma$ & relative weight of $p_{home}$\\
$\hat{p}_{road}$ & enlarged difference of $p_{road}$\\
$\hat{p}_{home}$ & enlarged difference of $p_{home}$\\
$\alpha$ & constant relevant to $\gamma$\\
$\beta$ & constant relevant to $\gamma$\\
MFD & maximum flight distance\\
d & distance from current cell to the origin\\
CNN & convolutional neural network\\
RNN & recurrent neural network\\
t-1,t,t+1 & a Time-Series data set\\
X & input data set\\
W & input weight\\
U & weight of sample in this time\\
V & output weight\\
CCR & combinational coverage rate\\
NC & net coverage\\
A & the area of a specified bounding box\\
$\hat{f}(x)$ & the distribution function after filtering\\
$f(x)$ & the distribution function before filtering\\
$m(\hat{f}(x))$ & measure of $\hat{f}(x)$, from measure theory to probability theory\\
$m(f(x))$ & measure of $f(x)$, from measure theory to probability theory\\
\bottomrule
\end{tabular}
\end{table}
\section{Introduction}
\subsection{Symbol Description and Disambiguity}
\subsubsection{Symbol Description}
For symbol description, see table \ref{notations}.
\subsubsection{Disambiguity}
\begin{enumerate}
\item A base is where a container is located and from which drones depart. Bases store medicine and retrieve drones to process videos for decision-making.
\item A destination is where medical package is delivered to.
\item A package plan of destination A is all the medical packages needed by destination A in a day.\label{package plan}
\item Word "feasible" has different meanings: a feasible base means drones departing from it can reach the destination; a feasible drone plan means the packages can be packed into the drone cargo bay; a feasible cell means the drone can walk on it for a reason defined in context; a feasible route means the drone goes back home in the end.
\item Field: the place on which our drone walks. It has all the factors perceptible and necessary to our drone.
\item Road coverage: the sum of the importance of cells on the route, used for route evaluation.
\end{enumerate}
\subsection{Problem Restatement}
The occurrence of recorded natural disaster events has been increasing exponentially since 1900, despite a recent minor decline \cite{owidnaturaldisasters}. The horrific casualties and economic damage caused by catastrophes concern every Earth citizen who dreams of living a safe and comfortable life. Not only does a timely and adequate response to natural disasters help decrease casualties and property loss, but it also reassures people against anxiety in the context of global climate change.
Required by HELP, Inc., we developed a transportable disaster response system, "DroneGo", based on the hurricane that struck Puerto Rico in 2017. It includes a drone fleet and medicine configured according to the anticipated medical package demand of the destinations. It is carried in ISO standard cargo containers and transported to carefully picked bases from which drones depart with two missions --- medical package delivery and video recording.
\subsection{Our Work}
We divided the ultimate goal into two sub-problems:
\begin{itemize}
\item Selecting base locations and drones and making delivery plan.
\item Making reconnaissance routes.
\end{itemize}
In the first problem, locations are given by k-means clustering of oversampled delivery locations and sampled points on main roads. The maximum reachable distance of each drone is calculated in the scenario that drones carry packages when flying to the delivery locations and are unloaded when flying back. The maximum flight time under a specific loading weight is derived from a linear model according to the max payload capability and the flight time with no cargo given by attachment 2. All combinations of drones and package plans (\ref{package plan}) are enumerated and tested by a genetic algorithm for whether they can be packed into the drone cargo bay. The flight capabilities of drones serve as constraints to our k-means clustering in that the oversampling weight of delivery points increases until each cluster center falls within the overlapping scope of the maximum reachable circles of its destinations. The most suitable drones are selected based on performance and the missions assigned.
In the second problem, we prototyped initial feasible routes by simulating a biased random walk model on a raster featuring altitude and an importance score derived from where the main roads are located. A proposal mechanism is used to move drones from the current cell to a randomly picked adjacent available cell. Drones decide whether to accept or reject the proposal based on an objective function of importance and distance from the starting cell. In the optimization stage, we integrated feasible and well-performed routes into the objective function, weakened the bias by a recurrent neural network, and freed hard-coded parameters by specifying them to follow a log-normal distribution. We also combined 2 routes to get higher coverage and elevate the reconnaissance ability of our system.
\subsection{Assumptions}
\begin{enumerate}
\item When departing for a delivery mission, drones first rise to the height required for the route; when landing, drones move right above the spot and then go straight down. Time for going straight up and down is negligible.
\item Drones carry out missions separately.
\item When filming, drones keep at a specific height above ground to keep the resolution of videos.
\item The maximum distance from which roads can be filmed is $ 50m $\label{50m}.
\item Only bases have charging facilities for drones.
\item One flight has only one destination, and the drone carries all the medical packages the destination needs for a day.
\item Maximum flight time decreases linearly with the total weight of cargo, with coefficient $k \leq 1.3$\label{linear decrease}.
\item Drones' speed is constant, regardless of cargo weight.\label{speed consistancy}
\item The weight of buffering material can be ignored.
\item Medicine does not go bad; therefore we should fill the container with as many medical packages as possible.
\end{enumerate}
\section{Data Manipulation}
\subsection{Data Source}
We obtained a georeferenced raster image displaying elevation data for Puerto Rico and the U.S. Virgin Islands derived from NED data released in December, 2010\cite{altitudedata}. The data is in $100\, m \times 100\,m$ resolution and uses the Albers Equal-Area Conic projection.
Road shape data is from OpenStreetMap, open data licensed under the Open Data Commons Open Database License (ODbL) by the OpenStreetMap Foundation (OSMF)\cite{OpenStreetMap}.
Thanks to thousands of individuals who contributed to the database.
\subsection{Data Wrangling}
We cropped the altitude raster into a $ 700 \times 1900 $ extent, covering the entire mainland of Puerto Rico.
World Geodetic System (WGS84) is the coordinate reference system to which the raster is projected to acquire the longitude and latitude of cells.
We filtered out minor roads in the shape files to include only motorways, divided roads, national roads, and regional roads. To transform the shape files into a single raster for our drones to walk on, we buffered lines with a $50\,m$ radius (\ref{50m}) and masked a template raster with the derived polygons. For each road type, we assigned a progressively decreasing series $(2, \sqrt{3}, \sqrt{2}, 1)$ as its importance and picked the highest importance if one cell belongs to more than one type.\label{importance}
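The per-cell importance assignment can be sketched as follows; the road-type keys are our own labels for the four classes above:

```python
# importance weights from the text; the type names are our own labels
IMPORTANCE = {
    "motorway": 2.0,
    "divided": 3 ** 0.5,
    "national": 2 ** 0.5,
    "regional": 1.0,
}

def cell_importance(road_types):
    """Importance of a raster cell covered by zero or more road types:
    the highest importance wins when types overlap."""
    return max((IMPORTANCE[t] for t in road_types), default=0.0)
```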
To integrate altitude and road data, we stacked them to a brick file with the same extent and CRS as one of the input data of our model.
\section{Part1: Container Configuration}
\subsection{Analysis of the problem}
We came up with three considerations when choosing the best location for bases:
\begin{itemize}
\item Drones should be able to fly from the base to the destination with medicine and come back unloaded;
\item Drones can fly from one base to another to make up for medicine shortage.
\item The number of roads that drones can monitor, measured by the number of road cells.
\end{itemize}
All three factors involve the calculation of the maximum flight distance of drones carrying medical packages, which should be considered first. Because we do not consider time and energy consumption, if a base is feasible for delivering medicines, the actual distance between the base and the destination is ignored when scheduling. In reality, this could be compensated by forcing drones to deliver packages at night, because daytime is more valuable for reconnaissance.
After bases are determined, drones can be selected according to their performance and the medicine demand of the destinations. When filling containers, we first decide the type and number of drones. Then the medical packages are filled into the remaining space of the container in a given proportion until the container is full.
Because one base may have many destinations, and thus consumes medical packages much faster than other bases, we make it possible to deliver medical packages from one base to another and to share drones between bases to save space.
\subsection{Maximum Distance Calculation}
Max payload capability ($MPC$), mentioned in attachment 2, is redefined as the maximum load under which flight is still safety-guaranteed, and we define an imaginary max drivable capability $MDC = k \cdot MPC$, beyond which the drone is not able to fly, where $k$ is the proportional coefficient.
The maximum flight time with cargo, $t$, is negatively related to the cargo weight $l$. Therefore, under any load $l$, the maximum flight time can be derived as:
\begin{equation}
t = - \frac{ T } { MDC } \cdot l + T,
\end{equation}
where T is the maximum flight time without cargo.
The power of flying out differs from that of flying back, but the time consumption of the two legs is the same. In a round trip that approaches the farthest spot, the total energy storage $E$ of a drone is used up:
\begin{equation}
P_1 \cdot t_1 + P_2 \cdot t_2 = E \label{eq:P},
\end{equation}
where $P_1$ is the power of flying to the destination, $P_2$ is the power of flying back, and $t_1$, $t_2$ are the respective leg times:
\begin{equation}
t_1 = t_2 = \frac{D}{2V},
\end{equation}
where $D$ is the maximum reachable distance, and $V$ is the speed constant.
The power of each leg is the total energy divided by the corresponding maximum flight time: the loaded leg uses the maximum flight time $t$ under load $l$ derived above, and the unloaded leg uses $T$:
\begin{equation}
P_1 = \frac{E}{t} = \frac{E}{ - \frac{ T } { MDC } \cdot l + T }
\end{equation}
\begin{equation}
P_2 = \frac{E}{T}
\end{equation}
Finally, we substituted the expressions for $P_1$ and $P_2$ into Eq.\,(\ref{eq:P}) and derived the maximum reachable distance $D$ under load $l$:
\begin{equation}
D = \frac{l - k \cdot MPC}{l - 2k \cdot MPC} \cdot \frac{V \cdot T}{30}
\end{equation}
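A direct implementation of the distance formula above is sketched below, with $V$ in km/h and $T$ in minutes (the factor 30 absorbs the minutes-to-hours conversion). The numeric values in the test are illustrative, not the attachment-2 drone specifications.

```python
def max_reachable_distance(l, MPC, V, T, k=1.3):
    """Maximum reachable distance D (km) from the formula above.
    l, MPC in kg; V in km/h; T (unloaded flight time) in minutes."""
    MDC = k * MPC  # imaginary max drivable capability
    return (l - MDC) / (l - 2.0 * MDC) * V * T / 30.0
```

At zero load the ratio reduces to $1/2$, so $D = VT/60$; the distance then shrinks monotonically and vanishes as the load $l$ approaches $MDC$.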
\subsection{Cargo Bay Filling}
A genetic algorithm is used for this 3-D bin packing problem. We first enumerated all the combinations of drones and package plans and then tested the feasibility of packing each package plan into the cargo bay. The feasible drone package plans for each destination are listed below; only the top two plans with the longest distance for each destination are shown.\label{genetic algorithm}
\begin{table}[htp]
\label{tbl Feasible drone plan with $k$ arbitrarily set as 1.3}
\centering
\caption{Feasible drone plan with $k$ arbitrarily set as 1.3}
\begin{tabular}{cccccccc}
\toprule
Destination& Drone& Bay Type& MED1& MED2& MED3& Weight& Distance
\\
\midrule
Caribbean Medical Center (Fajardo) & B & 1 & 1 & 0 & 1 & 5 & 36\\
Caribbean Medical Center (Fajardo) & C & 2 & 1 & 0 & 1 & 5 & 31.39\\
Hospital HIMA (San Pablo) & C & 2 & 2 & 0 & 1 & 7 & 28.44\\
Hospital HIMA (San Pablo) & F & 2 & 2 & 0 & 1 & 7 & 27.19\\
Hospital Pavia Santurce (San Juan) & B & 1 & 1 & 1 & 0 & 4 & 40.13\\
Hospital Pavia Santurce (San Juan) & C & 2 & 1 & 1 & 0 & 4 & 32.72\\
Puerto Rico Children's Hospital (Bayamon) & F & 2 & 2 & 1 & 2 & 12 & 23.21\\
Puerto Rico Children's Hospital (Bayamon) & C & 2 & 2 & 1 & 2 & 12 & 18.97\\
Hospital Pavia Arecibo (Arecibo) & B & 1 & 1 & 0 & 0 & 2 & 47.06\\
Hospital Pavia Arecibo (Arecibo) & C & 2 & 1 & 0 & 0 & 2 & 35.16\\
\bottomrule
\end{tabular}
\end{table}
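The full 3-D packing feasibility is decided by the genetic algorithm, which is not reproduced here. A cheap necessary-condition pre-filter, sketched below with hypothetical dimensions, can discard clearly infeasible drone/package combinations before the genetic search runs:

```python
from itertools import permutations

def necessary_fit(bay, boxes):
    """Necessary (not sufficient) conditions for packing boxes into a
    cargo bay: total volume fits, and every box fits in some axis-aligned
    rotation. Dimensions are (L, W, H) tuples in the same units."""
    bay_vol = bay[0] * bay[1] * bay[2]
    total = sum(l * w * h for l, w, h in boxes)
    if total > bay_vol:
        return False
    for box in boxes:
        # try all 6 axis-aligned orientations of the box
        if not any(all(b <= s for b, s in zip(rot, bay))
                   for rot in permutations(box)):
            return False
    return True
```

Passing this filter does not guarantee a feasible packing, so surviving combinations are still handed to the genetic algorithm.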
\subsection{Oversampling and K-means Clustering Model}
Since the packing problem is already settled, our focus is the location(s) of the cargo container(s). We first oversampled the locations of hospitals, owing to the small number of hospitals compared with road points, and then located the cargo containers by K-means clustering.
Oversampling in data science is a technique used to adjust the class distribution of a data set. The number of points from roads greatly exceeds the number of hospital locations, which leads to an imbalance in the data set. In detail, if we gave them the same weight when calculating the best locations, the result would ignore the tiny perturbations from the hospital locations. Therefore, we decided to oversample the hospital locations by giving them more weight to emphasize the importance of hospitals. After oversampling, we used K-means clustering to cluster the hospitals and main-road points and find the K-means centers of the clusters.
We now describe the K-means clustering algorithm. Here, the Euclidean distance is defined as:
\begin{equation}
dist(X, Y)=\sqrt{\sum_{i=1}^{n}\left|x_{i}-y_{i}\right|^{2}}
\end{equation}
Let $T=\{x_1, x_2, \cdots, x_m\}$ be the set of points to be clustered. The steps of k-means clustering are as follows:
\begin{enumerate}
\item Initialize $k$ cluster centers $a_1,a_2,\cdots,a_k$;
\item Assign each point to the cluster whose mean has the least squared Euclidean distance.
\begin{equation}
label_i = \mathop{\arg\min}_{1\leq j\leq k} \, dist(x_i, a_j)
\end{equation}
\item Recalculate the centroid of each cluster, averaging over the $n_i$ points assigned to cluster $C_i$:
\begin{equation}
a_i = \left(\frac{1}{n_i}\sum_{j \in C_i} x_{j}, \ \frac{1}{n_i}\sum_{j \in C_i} y_{j}\right)
\end{equation}
\item Check whether a terminal condition is met; it can be evaluated by the number of iterations, the mean squared error, or how often the cluster centers change.
\item Go to step 2.
\end{enumerate}
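Steps 1--5 can be sketched in plain code as follows; oversampling a hospital simply means repeating its coordinates in the input list:

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Plain k-means following steps 1-5 above; points are (x, y) tuples.
    Oversampling hospitals = repeating their coordinates in `points`."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # step 1: initialize k centers
    for _ in range(iters):
        # step 2: assign each point to the nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i: (p[0] - centers[i][0]) ** 2
                                            + (p[1] - centers[i][1]) ** 2)
            clusters[j].append(p)
        # step 3: recalculate centroids (keep old center if cluster empty)
        new = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centers[i]
            for i, c in enumerate(clusters)
        ]
        if new == centers:  # step 4: terminal condition: centers stop moving
            break
        centers = new       # step 5: go back to step 2
    return centers
```

The terminal condition used here is that the cluster centers stop moving, one of the options listed in step 4.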
\begin{figure}[htb]
\centering
\includegraphics[width=0.5\textwidth]{figures/kmeans.pdf}
\caption{K-means}
\label{fig:kmeans}
\end{figure}
In realistic settings, the ratio of the number of hospitals to the number of sampled points on main roads is over 1:10, so we decided to oversample the hospitals to a ratio of 1:1. Then we put these points into K-means clustering, which gives back $k\,(1\leq k\leq 3)$ cluster centers. To determine $k$, we examined the three possible values of $k$. For $k=1,2$, the distances from cluster centers to hospitals are too long for drones to reach on their limited batteries, according to Table 2. So we located 3 cargo containers as the DroneGo disaster response system.
The result of k-means clustering, which indicates the locations of cargo containers, is
(18.3147,-66.8219), (18.298,-66.2065), (18.2698,-65.7574). The locations of cargo containers and sampled points on main roads are shown in figure \ref{fig:kmeansResult}.
Five destinations can be partitioned into 3 clusters:
\begin{itemize}
\item Base 1: Caribbean Medical Center
\item Base 2: Hospital HIMA, Hospital Pavia Santurce, and Puerto Rico Children's Hospital
\item Base 3: Hospital Pavia Arecibo
\end{itemize}
\begin{figure}[htb]
\centering
\includegraphics[width=0.8\textwidth]{figures/figmap.pdf}
\caption{K-means Result}
\label{fig:kmeansResult}
\end{figure}
Bases 1 and 3 each have only one destination assigned, and the optimal drone shown in table \ref{tbl Feasible drone plan with $k$ arbitrarily set as 1.3} for both bases is drone B. Thus drone B is our choice for bases 1 and 3 without doubt.
Base 2 has three destinations assigned, and the best common drone for these three destinations is drone C (ranked first for Hospital HIMA, and second for both Hospital Pavia Santurce and Puerto Rico Children's Hospital). Therefore drone C is chosen for the delivery mission of base 2.
For reconnaissance, drone B is outstanding in terms of maximum flight distance (53 km), so each base should have at least one drone B for filming roads. Even though drones have a service life of more than 2 years, we still decided to double the drone numbers in case accidents happen. Two drone H are also included at each base for communication purposes.
Because the distance between bases 2 and 3 is 41.5 km, which is below the maximum reachable distance of drone B whether it carries MED1 or flies unloaded, we considered transporting medical packages between bases 2 and 3. By moving all MED1 storage from base 2 to base 3, we can balance the ratio of medical package counts. The remaining space of each container is filled with medical packages in the proportion specified by the destinations' medicine demand.
The actual number of each medical package in each base is calculated by the genetic algorithm mentioned in \ref{genetic algorithm}. Our container configuration is shown as follows:
\begin{table}[htb]
\centering
\caption{Packing configuration for cargo containers}
\label{tbl: Packing configuration for cargo containers}
\resizebox{\textwidth}{!}{
\begin{tabular}{cccccccc}
\toprule
Base & Destination & Drone Config & Drone of Delivery & MED 1 & MED 2 & MED 3 & Supporting Days \\
\midrule
1 & HPA & B$\times$4 H$\times$2 & B & 2251 & 0 & 0 & 375\\
2 & HIMA HPS PRCH & B$\times$2 C$\times$2 H$\times$2 & C & 0 & 1080 & 2160 & 375 \\
3 & CMC & B$\times$2 H$\times$2 & B & 1320 & 0 & 1320 & 1320 \\
\bottomrule
\end{tabular}
}
\end{table}
We visualized the packing configuration of each package plan (figure \ref{fig:package}).
\begin{figure}[htp]
\caption{Drone payload packing configurations}
\label{fig:package}
\centering
\subfigure[Drone B with MED1,3]{
\label{fig:package11}
\includegraphics[width=0.3\textwidth]{figures/1-1.png}
}
\subfigure[Drone B with MED1,2]{
\label{fig:package12}
\includegraphics[width=0.3\textwidth]{figures/1-2.png}
}
\subfigure[Drone B with MED1]{
\label{fig:package13}
\includegraphics[width=0.3\textwidth]{figures/1-3.png}
}
\medskip
\subfigure[Drone C with MED1,1,3]{
\label{fig:package21}
\includegraphics[width=0.32\textwidth]{figures/2-1.png}
}
\subfigure[Drone C with MED1,1,2,3,3]{
\label{fig:package22}
\includegraphics[width=0.32\textwidth]{figures/2-2.png}
}
\end{figure}
\section{Part2: Route designing by Biased Random Walk Model}
\subsection{Analysis of Problem}
In the previous section, we decided the drone type for each base. All three bases have drone B, which flies farther than any other drone and has video capability. Therefore, it is our natural choice for reconnaissance.
When we approached the problem, we first came up with a deterministic model of path searching. However, a deterministic model has its own flaws:
\begin{itemize}
\item It is easily trapped in a local optimum.
\item It depends too much on arbitrary parameters.
\item It gives only one feasible route. If accidents happen, e.g., wildfires block some cells, the entire route no longer applies.
\item It cannot amend the route automatically when trapped in dead ends.
\end{itemize}
Thus, we decided to build a stochastic model simulating a random walk of "drunk man" with the following advantages:
\begin{itemize}
\item It is expected to explore all feasible routes and thus find the global optimum for us.
\item It is naturally compatible with stochastic parameters. Thus, our biased walk is less biased by human discretion and more affected by the environment.
\item It gives many feasible routes, many of which are equally good. The alternatives serve as backup routes.
\item We don't have to consider how to design a feasible route because we can easily abandon infeasible routes.
\end{itemize}
We use the metaphor of a drunk man to blueprint our model. Our drunk man should behave as follows:
\begin{enumerate}
\item Biased: attracted by roads;
\item Exploratory and nostalgic: moving away from the origin in the beginning and coming back home in the end;
\item New-trumps-old mindset: reluctant to walk on roads that it has walked on previously.
\end{enumerate}
Accordingly, our drunk man should sense the following factors:\label{factors}
\begin{enumerate}
\item Importance: evaluated by road type.
\item Altitude change: drones fly at a constant height above the ground.
\item Moving distance: drones should return to base before running out of power.
\end{enumerate}
After sensing the necessary factors, our drunk man can finally take his first step! But he still needs to integrate all those factors in his mind. Let's see how it works.
\subsection{Preliminary Model}
\subsubsection{Random Walk Procedure}
Imagine that on Christmas Eve a man gets drunk for some reason. He has nothing to do, so why not take a walk? He decides to wander around the field. Departing from home, he sometimes explores off the roads, but, as a sane man, he tries to spend as much time as possible on roads. Here is what goes on in his mind: first, he looks around and randomly picks a place to step on. He is no fool, so he avoids hitting a tree or falling into a river. After spotting a place, he stares at it for a while to evaluate it, then accepts or rejects the spot based on his evaluation. Keep in mind he is drunk, so he may well accept a spot deviating from the main road. But the man has a good memory even when intoxicated, so he tends to avoid (though not compulsorily) going to the same place twice. After moving to a new spot, he repeats the previous steps.
In parallel universes, sometimes our drunk man falls into a trench (scenario 1); sometimes he gets exhausted (scenario 2); but sometimes he finally gets home (scenario 3)! That is good news, and the track of his walk is what we are looking for.
It's time to implement our drunk man as an algorithm. First, we initialized the drone at one of our bases. The drone keeps iterating the following steps on the field until one of the three scenarios mentioned above happens.
\begin{enumerate}
\item Search all available adjacent cells and propose one of them uniformly at random;
\item Evaluate the proposed cell by an objective function, which gives a value between 0 and 1;
\item Decide whether to accept or reject the proposal: generate another random number. If it is lower than the value from step 2, the drone accepts the proposal, decreases the importance of the adjacent cells and moves to the proposed cell. Otherwise, it rejects the proposal and stays still.
\end{enumerate}
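A minimal sketch of one iteration of this loop (the `objective` callback and the neighborhood list stand in for the components described in the next subsection):

```python
import random

def walk_step(current, neighbors, objective):
    """One iteration of the biased random walk above:
    propose a uniformly random adjacent cell, then accept it with
    probability given by the objective function (a value in [0, 1])."""
    proposal = neighbors[random.randrange(len(neighbors))]  # step 1: uniform proposal
    f = objective(proposal)                                 # step 2: evaluate, 0 <= f <= 1
    if random.random() < f:                                 # step 3: accept with probability f
        return proposal  # move (decreasing visited-cell importance is handled elsewhere)
    return current       # reject: stay still
```

Repeating `walk_step` until the drone falls into a dead end, exhausts its battery, or returns home reproduces the three scenarios of the drunk-man metaphor.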
\subsubsection{Objective Function}
The objective function integrates all the information necessary to our model and is the only source affecting the drones' tendency to film road networks or go home. It evaluates the relative superiority of the proposed cell over the other adjacent feasible cells.
We introduced three elements: $p_{road}$, which drives the tendency of drones to fly toward roads; $p_{home}$, which drives the tendency of drones to go home; and $\gamma$, the relative weight of $p_{home}$. The value $f$ of the objective function is given as:
\begin{equation}
f = p_{road} + \gamma \times p_{home}
\end{equation}
Let $p_{road} = \frac{road_i}{\sum road_i}$ and $p_{home} = \frac{home_i}{\sum home_i}$, where $road_i$ is the significance of the selected point under the K-nearest neighbor rule (described later) and $home_i$ is the distance from the selected point to the origin. Here, $p_{road}$ and $p_{home}$ are nondimensionalized over the feasible cells adjacent to the current cell, so that $\sum p_{road}=\sum p_{home} = 1$ and combining $p_{road}$ with $p_{home}$ is meaningful. We enlarge the differences among the $home_i$ and $road_i$ values by exponentiation. Thus, let
$$\hat{p}_{road} = \frac{e^{road_i}}{\sum e^{road_i}},\quad \hat{p}_{home}=\frac{e^{home_i}}{\sum e^{home_i}}$$
Still, $\hat{p}_{road}$ and $\hat{p}_{home}$ are nondimensionalized\footnote{Detailed calculation is omitted. For the real implementation, please check our code in the appendix}.
To make our drone exploratory and nostalgic, we defined the relative weight of home attraction as $\gamma = \alpha \times (d/MFD - \beta )^3 $, where $\alpha = 0.2$ and $\beta = 0.3$ are set arbitrarily, $d$ is the distance from the current cell to the origin, and $MFD$ is the maximum flight distance ($MFD = 53000$ for drone B). $\gamma$ is negative at the beginning of the route, giving cells far from the origin a higher probability of being accepted. After leaving the origin for some distance, $\gamma$ is close to zero and $p_{road}$ dominates the acceptance or rejection of proposed cells; in this period the drone behaves entirely according to road attraction. In the final stage, when the flight distance approaches $MFD$, $p_{home}$ takes charge and the drone accepts proposed cells pointing toward the origin with higher probability.
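Combining the normalized attractions with $\gamma$, the acceptance value of a proposed cell can be sketched as follows (treating index 0 as the proposed cell is a convention of this illustration, not of our implementation):

```python
import math

def acceptance_value(road, home, d, MFD=53000.0, alpha=0.2, beta=0.3):
    """f = p_road + gamma * p_home for the proposed cell (index 0).
    `road` holds the importance of every feasible adjacent cell,
    `home` the corresponding distances to the origin."""
    # softmax-style normalization: weights over adjacent cells sum to 1
    p_road = math.exp(road[0]) / sum(math.exp(r) for r in road)
    p_home = math.exp(home[0]) / sum(math.exp(h) for h in home)
    # gamma < 0 early on (exploratory), gamma > 0 as d approaches MFD (nostalgic)
    gamma = alpha * (d / MFD - beta) ** 3
    return p_road + gamma * p_home
```

Note that $\gamma$ vanishes exactly at $d = \beta \cdot MFD$, the turning point between the exploratory and nostalgic phases.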
The sensing range of the drone is specified by the K-nearest neighbor rule. With a larger neighborhood, the drone can be attracted to remote roads while wandering in non-road space. However, if roads are dense in the field, the heterogeneity of the field may be blurred out, so a drone that would follow roads under a small rule may move aimlessly. Four-, eight- and twelve-nearest neighbor rules are implemented in our program, and the results are discussed in the sensitivity analysis.
\begin{figure}[htb]
\centering
\subfigure[Four-nearest neighbor rules]{
\includegraphics[width=0.3\textwidth]{figures/n4.pdf}
}
\subfigure[Eight-nearest neighbor rules]{
\includegraphics[width=0.3\textwidth]{figures/n8.pdf}
}
\subfigure[Twelve-nearest neighbor rules]{
\includegraphics[width=0.3\textwidth]{figures/n12.pdf}
}
\caption{K-nearest neighbor rule}
\label{fig:neighbor}
\end{figure}
\subsubsection{Results of Preliminary Model}
After letting off 10000 drones, we picked the routes with the highest coverage and plotted them in figure \ref{fig:routepm}. In figure \ref{fig:route43}, the drone spent too much time off roads, wandering in the bottom-left corner of the picture; this shows the drone is not attracted to roads strongly enough. In figure \ref{fig:route139}, the drone covers part of the nearest road but fails to move farther and explore more territory. We therefore concluded that $\gamma$ is too small at both ends. Besides, the field is discontinuous in most parts, so an unbiased walk is common when drones are trapped in a non-road region, and some irrational looping is inevitable.
\subsection{Modified model}
\subsubsection{Unleashing Parameters}
In defining the relative weight of home attraction $\gamma$, two nuisance parameters $\alpha$ and $\beta$ are used: $$\gamma = \alpha \times (d/MFD - \beta )^3. $$ $\beta$ determines where $\gamma = 0$ and thus pinpoints the turning point at which the drone switches from exploratory (going out) to nostalgic (going back). Rescaling by $\alpha$ increases or decreases $\gamma$, making home attraction more or less important in the proposal mechanism.
We modified our model by unleashing those two parameters from our hands. We set $\alpha$ and $\beta$ to follow log-normal distributions with means $0.5$ and $0.5$ and variances $0.7$ and $0.05$, respectively. These numbers are chosen arbitrarily; since the previous analysis showed $\gamma$ to be too small at both ends, we elevated the mean of the scalar $\alpha$. After simulating a feasible route, we recorded its $\alpha$, $\beta$ and distance for further analysis.
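Sampling these parameters requires one care: standard-library lognormal samplers are parameterized by the mean and standard deviation of the underlying normal distribution, so the desired lognormal mean and variance must be converted first. The sketch below assumes the stated means and variances refer to the lognormal distribution itself:

```python
import math
import random

def lognormal_params(m, v):
    """Convert a desired lognormal mean m and variance v into the
    (mu, sigma) of the underlying normal, as required by samplers
    such as random.lognormvariate."""
    sigma2 = math.log(1.0 + v / m ** 2)
    mu = math.log(m) - sigma2 / 2.0
    return mu, math.sqrt(sigma2)

# alpha ~ lognormal(mean 0.5, var 0.7); beta ~ lognormal(mean 0.5, var 0.05)
mu_a, sig_a = lognormal_params(0.5, 0.7)
mu_b, sig_b = lognormal_params(0.5, 0.05)
alpha = random.lognormvariate(mu_a, sig_a)
beta = random.lognormvariate(mu_b, sig_b)
```

Both draws are strictly positive, which matches the intended roles of $\alpha$ (a scale) and $\beta$ (a turning-point fraction).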
\subsubsection{Experience Learning}
Importance data are naturally discrete, but the experience of previous drones is not. A Convolutional Neural Network (CNN) cannot memorize former choices. To fit our biased random walk model, we need a neural network that can memorize former choices and decide the next move under the influence of those choices.
Here we introduce the Recurrent Neural Network (RNN). An RNN can make use of sequential information, which means it can memorize former choices and decide its next move from a weighted summary of them. In our case, following our Biased Random Walk Model, we feed the farthest distance the drones reach, the repeated routes the drones take, and the standard deviation into the RNN as parameters, aiming to train more stable and more accurate parameters for the previous model.
\begin{figure}[htb]
\centering
\includegraphics[width=0.8\linewidth]{figures/rnn.png}
\caption{The principle of the recurrent neural network\cite{lecun2015deep}}
\label{fig:rnn}
\end{figure}
Figure \ref{fig:rnn} is the unfolded picture of the hidden layer of an RNN. $t-1, t, t+1$ form a time series; $X$ is the input data set; $S_t$ is the memory of the sample at time $t$, $S_t = f(W \cdot S_{t-1} + U \cdot X_t)$, where $W$ is the weight on the previous state, $U$ the weight on the current input, and $V$ the output weight.
When $t=1$, we initialize $S_0=0$, randomly initialize $W, U, V$, and calculated by the equations below:
$$h_1=U \cdot x_1+W \cdot s_0$$
$$s_1=f(h_1)$$
$$o_1=g(V s_1)$$
where $f$ and $g$ are both activation functions; $f$ can be the classic tanh, ReLU or sigmoid function, and $g$ can be the softmax function.
When $t=2$,
$$h_2=U \cdot x_2+W \cdot s_1$$
$$s_2=f(h_2)$$
$$o_2=g(V s_2)$$
As shown above, we can deduce the general equation from this:
$$h_t=U \cdot x_t+W \cdot s_{t-1}$$
$$s_t=f(h_t)$$
$$o_t=g(V s_t)$$
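The recurrence above amounts to the following forward pass (a scalar sketch with tanh as $f$; the softmax $g$ is replaced by a plain linear output, since the softmax of a scalar is trivial):

```python
import math

def rnn_forward(xs, U, W, V, s0=0.0):
    """Unrolled forward pass of the scalar RNN above:
    h_t = U*x_t + W*s_{t-1}, s_t = tanh(h_t), o_t = V*s_t."""
    s, outputs = s0, []
    for x in xs:
        h = U * x + W * s      # combine current input with previous memory
        s = math.tanh(h)       # hidden state carries the "memory"
        outputs.append(V * s)  # per-step output
    return outputs, s
```

The final hidden state `s` summarizes the whole input sequence, which is exactly the property that lets the network weigh former choices when proposing the next move.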
In our random walk model, the drone remembers its former state and then makes its next move. Training the model with an RNN can enhance its performance.
\begin{figure}
\centering
\caption{The results of preliminary model}
\label{fig:routepm}
\subfigure[Route 1]{
\label{fig:route43}
\includegraphics[width=0.45\textwidth]{figures/43.png}
}
\subfigure[Route 2]{
\label{fig:route139}
\includegraphics[width=0.45\textwidth]{figures/139.png}
}
\end{figure}
\begin{figure}
\centering
\caption{The results of modified model}
\label{fig:routemm}
\subfigure[Route 3]{
\label{fig:route25035}
\includegraphics[width=0.45\textwidth]{figures/25035.png}
}
\subfigure[Route 4]{
\label{fig:route25097}
\includegraphics[width=0.45\textwidth]{figures/25097.png}
}
\subfigure[Route 5]{
\label{fig:route66711}
\includegraphics[width=0.45\textwidth]{figures/66711.png}
}
\subfigure[Route 6]{
\label{fig:route75036}
\includegraphics[width=0.45\textwidth]{figures/75036.png}
}
\end{figure}
\subsection{Results of Modified Model}
The results of the modified model are plotted in figure \ref{fig:routemm}(a)-(d). These routes are significantly better than the results of the preliminary model. For example, the drone of the route in figure \ref{fig:route25035} walked along roads for a long distance and crossed the non-road field as if the roads at the bottom-right corner were pulling it. However, some small loops still exist in all four routes.
For better coverage, we tried overlaying 2 routes chosen from all feasible routes. The Combinatorial Coverage Rate (CCR) is defined as $ CCR = \frac{NC}{A} $, where $NC$ is the net coverage, i.e. the coverage of the union of the two routes, and $A$ is the area of a specified bounding box.
The combination of the routes in figures \ref{fig:route25035} and \ref{fig:route25097} has the highest combinatorial coverage rate (0.82) among all possible plans.
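With routes stored as lists of visited grid cells, the CCR of a pair of routes reduces to a set union (the function and cell representation are our own sketch, not part of the contest code):

```python
def combinatorial_coverage_rate(route1, route2, bbox_area):
    """CCR = NC / A: net coverage NC is the number of distinct cells covered
    by the union of two routes; A is the cell count of the bounding box."""
    net_coverage = len(set(route1) | set(route2))
    return net_coverage / bbox_area
```

Evaluating this over all pairs of feasible routes and keeping the maximum yields the combination reported above.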
\section{Sensitivity Analysis of Model}
\subsection{Distribution Change of Behavior Parameters}
In the modified model, we unleashed the behavior parameters $\alpha$ and $\beta$ to follow a lognormal distribution. If we only keep the values of those two parameters from routes whose coverage surpasses some threshold, our model can be seen as a gate that filters out parameters which are intuitively not fit to (1) generate a feasible route or (2) increase the coverage.
First, we ran a regression analysis of $\alpha$ and $\beta$ against coverage. However, there is no significant relationship between any two of them (figure \ref{fig:coverage}). Second, we fitted lognormal distributions to $\alpha$ and $\beta$ respectively (figure \ref{fig:logfit}).
$\alpha$ still fits a lognormal distribution ($R^2 = 0.89$), with a consistent mean and a small deviation of the variance from $0.7$ to $0.61$. $\beta$, in contrast, fails to fit a lognormal distribution, with mean $0.43$ and variance $0.29$.
\begin{figure}[htb]
\centering
\caption{Regression analysis of $\alpha$ and $\beta$}
\label{fig:coverage}
\subfigure[Alpha]{
\includegraphics[width=0.45\textwidth]{figures/alphac.pdf}
}
\subfigure[Beta]{
\includegraphics[width=0.45\textwidth]{figures/betac.pdf}
}
\end{figure}
\begin{figure}[htb]
\centering
\caption{Lognormal fitting of $\alpha$ and $\beta$}
\label{fig:logfit}
\subfigure[Alpha]{
\includegraphics[width=0.45\textwidth]{figures/alpha.pdf}
}
\subfigure[Beta]{
\includegraphics[width=0.45\textwidth]{figures/beta.pdf}
}
\end{figure}
This analysis shows that neither $\alpha$ nor $\beta$ contributes to the performance of the biased random walk process. The distribution of $\alpha$ is not significantly affected by filtering, whereas the distribution of $\beta$ is evened out.
\begin{equation}
\hat{f}(x)=\left\{
\begin{aligned}
&g(x), \quad && x \in K, \ \text{with } g(x)<f(x) \\
&f(x), && x \notin K
\end{aligned}
\right.
\qquad K \in U(\mu, \epsilon)
\end{equation}
So we get
$$M[\hat{f}(x)]=M_{x\in K}[\hat{f}(x)]+M_{x\notin K}[\hat{f}(x)]=M_{x\in K}[g(x)]+M_{x\notin K}[f(x)]\leq M[f(x)]$$
which means that the probability mass after filtering is smaller than that before filtering.
$\beta$ has an explicit meaning in our model --- the time when home attraction surpasses road attraction. Intuitively, we expect our drone to turn from departing to going back around the halfway point. However, this expectation is probably flawed according to our results: a drone that either stays near its origin or goes far away is more likely to produce a feasible route with higher coverage. It occurred to us that the lognormal distribution may not be a good choice for our nuisance parameters; a uniform distribution would generate a more interpretable distribution of results because it is not skewed to begin with. However, a uniform distribution is not an uninformative distribution\cite{gelman2013bayesian}, and thus would still affect the result.
\subsection{Performance Change due to the K-nearest Neighbor Rule}
We ran our model under different K-nearest neighbor rules. The 8-nearest neighbor rule performs best in terms of average distance and coverage, and the 4-nearest neighbor rule worst. The relationship between the rule and performance is not deterministic; field heterogeneity plays a key role. Therefore, we cannot interpret much from the results on a single field, and more artificial fields should be generated for further analysis.
\section{Strengths and Weaknesses}
\subsection{Strengths}
\begin{enumerate}
\item We used real data for analysis and correctly projected points using the coordinate reference system.
\item We used a stochastic model to generate feasible routes, minimizing the anthropogenic bias underlying the evaluation method.
\item The model is space-explicit --- it uses as much spatial data as possible to reflect reality. No simplification is applied on the data side apart from the importance assignment of roads.
\item The model is flexible, in that many parts can be modified to integrate new factors biasing our route design.
\end{enumerate}
\subsection{Weaknesses}
\begin{enumerate}
\item We have no principled way to trade off delivery distance against road coverage, so we arbitrarily emphasized the latter, which may prove penny-wise and pound-foolish.
\item We simulate one drone at a time; therefore, the combined routes may have many overlapping parts.
\item The behavior of our drone is not fully controllable. For example, local looping of routes is inevitable, and smoothing is required afterwards.
\item The algorithm is time-consuming and, even so, does not guarantee finding the best solution.
\item The initial distribution of the nuisance parameters is still somewhat arbitrary, and the filtered distribution is not interpretable.
\end{enumerate}
\bibliographystyle{unsrt}
\section{Introduction}
Based on the hurricane striking Puerto Rico in 2017, we developed a transportable disaster response system, "DroneGo", featuring a drone fleet capable of delivering medical packages and videoing roads. Assuming equal weight for both missions, we take the capability of carrying out the former as a constraint and a starting point from which reconnaissance routes are built.
The feasibility of fitting packages into cargo bay 1 or 2 is tested by a genetic algorithm. In the scenario where drones carry packages out and fly back unloaded, the maximum reachable distance of each loaded drone can be derived from the drone specifications and the loading weight. A K-means clustering algorithm is used to partition destinations and derive centroids as base locations. Sampled points from roads are added to account for the reconnaissance mission. Destination points are oversampled with increasing weight until the centroids fall within the scope of the maximum reachable distance. Drones for each base are selected based on their performance and the medical demand of the destinations. The container of each base is filled to capacity with the minimum drones required and the maximum medicine storage. Transportation of medical packages between bases and backup drones are also included in our system.
A biased random walk model mimicking a drunk man is built to explore feasible routes on a field with altitude and road information, combining a proposal mechanism that guarantees stochasticity with an objective function that biases the randomness. Road and home attraction are weighted differently as the walk proceeds, to simulate the behavior of an exploratory and nostalgic drone. In the modified model, we unleash the nuisance parameters to follow a lognormal distribution and use a recurrent neural network to learn from previous routes and enhance performance. The results of the modified model are clearly better, and a combination of two routes is chosen for our system. Analyzing the filtered distribution of the nuisance parameters and their contribution to performance, we find them neither contributing nor providing reasonable guidance for picking their values in the next run. The performance difference caused by the choice of K-nearest neighbor rule is large, but the underlying mechanism cannot be explored on this single field.
\begin{table}[p]
\caption{Symbol Description}
\label{notations}
\centering
\begin{tabular}{cc}
\toprule
Symbol & Description\\
\midrule
MPC & max payload capability of drones\\
MDC & max drivable capability of drones\\
k & the proportional coefficient, MDC/MPC\\
l & the weight of load\\
t & maximum flight time with cargo\\
T & maximum flight time without cargo\\
$P_1$ & the power of flying to destination\\
$P_2$ & the power of flying back\\
$t_1$ & the time of flying to destination\\
$t_2$ & the time of flying back\\
$x_1,x_2,\cdots,x_n$ & the longitude of points to be clustered in K-means clustering\\
$y_1,y_2,\cdots,y_n$ & the latitude of points to be clustered in K-means clustering\\
dist(X,Y) & the Euclidean distance between X and Y\\
$label_i$ & point that has the least squared Euclidean distance\\
$a_i$ & the centroid of each cluster\\
$p_{road}$ & the tendency of drones flying towards roads\\
$p_{home}$ & the tendency of drones going home\\
$\gamma$ & relative weight of $p_{home}$\\
$\hat{p}_{road}$ & enlarged difference of $p_{road}$\\
$\hat{p}_{home}$ & enlarged difference of $p_{home}$\\
$\alpha$ & constant relevant to $\gamma$\\
$\beta$ & constant relevant to $\gamma$\\
MFD & maximum flight distance\\
d & distance from current cell to the origin\\
CNN & convolutional neural network\\
RNN & recurrent neural network\\
t-1,t,t+1 & a Time-Series data set\\
X & input data set\\
W & input weight\\
U & weight of sample in this time\\
V & output weight\\
CCR & combinatorial coverage rate\\
NC & net coverage\\
A & the area of a specified bounding box\\
$\hat{f}(x)$ & the distribution function after filtering\\
$f(x)$ & the distribution function before filtering\\
$m(\hat{f}(x))$ & the measure of $\hat{f}(x)$, in the sense of measure theory\\
$m(f(x))$ & the measure of $f(x)$, in the sense of measure theory\\
\bottomrule
\end{tabular}
\end{table}
\section{Introduction}
\subsection{Symbol Description and Disambiguity}
\subsubsection{Symbol Description}
For symbol description, see table \ref{notations}.
\subsubsection{Disambiguity}
\begin{enumerate}
\item A base is where container is located and drones depart. Bases store medicine and retrieve drones to process videos for decision-making.
\item A destination is where medical package is delivered to.
\item A package plan of destination A is all the medical packages needed by destination A in a day.\label{package plan}
\item The word "feasible" has several meanings: a feasible base means drones departing from it can reach the destination; a feasible drone plan means the packages can be packed into the drone's cargo bay; a feasible cell means the drone can walk on it, for a reason defined in context; a feasible route means the drone gets back home in the end.
\item Field: The place where our drone walks on. It has all the factors perceptible and necessary to our drone.
\item Road coverage: the sum of the importance of cells on the route, used for route evaluation.
\end{enumerate}
\subsection{Problem Restatement}
The number of recorded natural disaster events has been increasing exponentially since 1900, despite a recent minor decline \cite{owidnaturaldisasters}. The horrific casualties and economic damage caused by catastrophes concern every Earth citizen who dreams of living a safe and comfortable life. Not only does a timely and adequate response to natural disasters help decrease casualties and property loss, it also reassures people amid the anxiety of global climate change.
As required by HELP, Inc., we developed a transportable disaster response system, "DroneGo", based on the hurricane that struck Puerto Rico in 2017. It includes a drone fleet and medicine configured according to the anticipated medical package demand of each destination. It is carried in ISO standard cargo containers and transported to carefully chosen bases, from which drones depart with two missions --- medical package delivery and video recording.
\subsection{Our Work}
We divided the ultimate goal into two sub-problems:
\begin{itemize}
\item Selecting base locations and drones and making delivery plan.
\item Making reconnaissance routes.
\end{itemize}
In the first problem, base locations are given by K-means clustering of oversampled delivery locations and sampled points on main roads. The maximum reachable distance of each drone is calculated in the scenario where drones carry packages when flying to the delivery locations and are unloaded when returning. The maximum flight time under a specific loading weight is derived from a linear model based on the max payload capability and the flight time with no cargo given in attachment 2. All combinations of drones and package plans \ref{package plan} are enumerated and tested by a genetic algorithm to see whether they can be packed into the drone cargo bay. The flight capabilities of the drones serve as constraints on the K-means clustering: the oversampling weight of the delivery points is increased until each cluster center falls within the overlapping scope of the maximum reachable circles of its destinations. The drones that best satisfy the requirements are selected based on performance and the missions assigned.
In the second problem, we prototyped initial feasible routes by simulating a biased random walk on a raster featuring altitude and an importance value derived from where the main roads are located. A proposal mechanism moves drones from the current cell to a randomly picked adjacent available cell. Drones decide whether to accept or reject the proposal based on an objective function of importance and distance from the starting cell. In the optimization stage, we integrated feasible, well-performing routes into the objective function, weakened bias with a recurrent neural network, and unleashed hard-coded parameters by letting them follow a lognormal distribution. We also combined 2 routes for higher coverage to elevate the reconnaissance ability of our system.
\subsection{Assumptions}
\begin{enumerate}
\item When departing for a delivery mission, drones first rise to the height required for the route; when landing, drones move to a point right above the spot and then go straight down. The time spent going straight up and down is negligible.
\item Drones carry out missions separately.
\item When filming, drones keep at a specific height above ground to keep the resolution of videos.
\item The maximum distance from which roads can be filmed is $ 50m $\label{50m}.
\item Only bases have charging facilities for drones.
\item One flight has only one destination and the drone carries all the medical packages the destination needs for a day.
\item Maximum flight time decreases linearly with the total weight of cargo, with coefficient $k \leq 1.3$\label{linear decrease}.
\item Drones' speed is constant, regardless of cargo weight.\label{speed consistancy}
\item The weight of buffering material can be ignored.
\item Medicine does not go bad; therefore we should fill the container with as many medical packages as possible.
\end{enumerate}
\section{Data Manipulation}
\subsection{Data Source}
We obtained a georeferenced raster image displaying elevation data for Puerto Rico and the U.S. Virgin Islands derived from NED data released in December, 2010\cite{altitudedata}. The data is in $100\, m \times 100\,m$ resolution and uses the Albers Equal-Area Conic projection.
Road shape data is from OpenStreetMap, an open dataset licensed under the Open Data Commons Open Database License (ODbL) by the OpenStreetMap Foundation (OSMF)\cite{OpenStreetMap}.
Thanks to thousands of individuals who contributed to the database.
\subsection{Data Wrangling}
We cropped the altitude raster into a $ 700 \times 1900 $ extent, covering the entire mainland of Puerto Rico.
World Geodetic System (WGS84) is the coordinate reference system to which the raster is projected to acquire the longitude and latitude of cells.
We filtered out minor roads in the shape files to include only motorways, divided roads, national roads and regional roads. To transform the shape files into a single raster for our drones to walk on, we buffered the lines with a $ 50\,m $ radius\ref{50m} and masked a template raster with the derived polygons. For each road type, we assigned a progressively decreasing series $(2, \sqrt{3}, \sqrt{2}, 1)$ as importance and picked the highest importance if one cell belongs to more than one type.\label{importance}
To integrate the altitude and road data, we stacked them into a brick file with the same extent and CRS, which serves as one of the inputs to our model.
\section{Part1: Container Configuration}
\subsection{Analysis of the problem}
We came up with three considerations when choosing the best location for bases:
\begin{itemize}
\item Drones should be able to fly from the base to the destination with medicine and come back unloaded;
\item Drones can fly from one base to another to make up for medicine shortage.
\item The number of roads that drones can monitor, measured by the number of road cells;
\end{itemize}
All three factors involve calculating the maximum flight distance of drones carrying medical packages, which should be considered first. Because we do not account for time and energy consumption, if a base is feasible for delivering medicine, the actual distance between the base and the destination is ignored in scheduling. In reality, this can be compensated by having drones deliver packages at night, because daytime is more valuable for reconnaissance.
After the bases are determined, drones can be selected according to performance and the medicine demand of each destination. When filling containers, we first decided the type and number of drones. Then the medical packages
are filled into the remaining space of the container in a given proportion until the container is full.
Because one base may have too many destinations, so that its medical packages are consumed much faster than those of other bases, we considered making it possible to deliver medical packages from one base to another, or to share drones between bases to save space.
\subsection{Maximum Distance Calculation}
The max payload capability $(MPC)$ mentioned in attachment 2 is redefined as the maximum payload under which flight safety is guaranteed, and we defined an imaginary max drivable capability $MDC = k \cdot MPC$, beyond which the drone cannot fly at all, where $ k $ is a proportional coefficient.
The maximum flight time with cargo $ t $ decreases linearly with the cargo weight $ l $. Therefore, under any load $ l $, the maximum flight time can be derived as:
\begin{equation}
t = - \frac{ T } { MDC } \cdot l + T,
\end{equation}
where T is the maximum flight time without cargo.
The power of flying out and flying back differs, but the time consumed in each direction is the same. In a round trip that reaches the farthest spot, the total energy storage $E$ of the drone is used up:
\begin{equation}
P_1 \cdot t_1 + P_2 \cdot t_2 = E \label{eq:P},
\end{equation}
where $P_1$ is the power of flying out, $P_2$ is the power of flying back, and the one-way flight time is
\begin{equation}
t_1 = t_2 = \frac{D}{2V},
\end{equation}
where $D$ is the maximum reachable distance and $V$ is the constant flight speed.
The power of flying out and back can be derived by dividing the energy by the corresponding maximum flight time:
\begin{equation}
P_1 = \frac{E}{t_1} = \frac{E}{ - \frac{ T } { MDC } \cdot l + T }
\end{equation}
\begin{equation}
P_2 = \frac{E}{t_2}=\frac{E}{T}
\end{equation}
Finally, we substituted the expressions for $P_1$ and $P_2$ into equation (\ref{eq:P}) and derived the maximum reachable distance $D$ under load $l$; with $T$ in minutes and $V$ in km/h, the constant $30$ absorbs both the round-trip factor $2$ and the minutes-to-hours conversion:
\begin{equation}
D = \frac{l - k \cdot MPC}{l - 2k \cdot MPC} \cdot \frac{V \cdot T}{30}
\end{equation}
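The derivation above is easy to sanity-check in code. The sketch below (with hypothetical parameter values; $T$ in minutes, $V$ in km/h, matching the $VT/30$ factor) computes $D$ for a given load:

```python
def max_reachable_distance(l, mpc, v, t_max, k=1.3):
    """Maximum reachable distance D (km) under load l.

    Implements D = (l - k*MPC) / (l - 2k*MPC) * V*T/30, where t_max (T) is
    the maximum unloaded flight time in minutes and v (V) the speed in km/h.
    All parameter values passed in are illustrative, not attachment data.
    """
    mdc = k * mpc  # imaginary max drivable capability MDC = k * MPC
    assert 0 <= l < mdc, "load must stay below the max drivable capability"
    return (l - mdc) / (l - 2 * mdc) * v * t_max / 30
```

For $l = 0$ the ratio equals $1/2$, so $D = VT/60$, the distance covered in the full unloaded endurance $T$; heavier loads shrink $D$ monotonically.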
\subsection{Cargo Bay Filling}
A genetic algorithm is used for this 3-D bin packing problem. We first enumerated all combinations of drones and package plans and then tested the feasibility of packing each plan into the cargo bay. The feasible drone package plans for each destination are listed below; only the top two plans with the longest distance for each destination are shown.\label{genetic algorithm}
\begin{table}[htp]
\centering
\caption{Feasible drone plan with $k$ arbitrarily set as 1.3}
\label{tbl Feasible drone plan with $k$ arbitrarily set as 1.3}
\begin{tabular}{cccccccc}
\toprule
Destination& Drone& Bay Type& MED1& MED2& MED3& Weight& Distance
\\
\midrule
Caribbean Medical Center, Fajardo & B & 1 & 1 & 0 & 1 & 5 & 36\\
Caribbean Medical Center, Fajardo & C & 2 & 1 & 0 & 1 & 5 & 31.39\\
Hospital HIMA San Pablo & C & 2 & 2 & 0 & 1 & 7 & 28.44\\
Hospital HIMA San Pablo & F & 2 & 2 & 0 & 1 & 7 & 27.19\\
Hospital Pavia Santurce, San Juan & B & 1 & 1 & 1 & 0 & 4 & 40.13\\
Hospital Pavia Santurce, San Juan & C & 2 & 1 & 1 & 0 & 4 & 32.72\\
Puerto Rico Children's Hospital, Bayamon & F & 2 & 2 & 1 & 2 & 12 & 23.21\\
Puerto Rico Children's Hospital, Bayamon & C & 2 & 2 & 1 & 2 & 12 & 18.97\\
Hospital Pavia Arecibo, Arecibo & B & 1 & 1 & 0 & 0 & 2 & 47.06\\
Hospital Pavia Arecibo, Arecibo & C & 2 & 1 & 0 & 0 & 2 & 35.16\\
\bottomrule
\end{tabular}
\end{table}
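As a simplified stand-in for the genetic algorithm's packing check, the sketch below enumerates package plans and rejects those whose total volume cannot fit the cargo bay, which is a necessary (not sufficient) condition for 3-D packability. All dimensions are illustrative placeholders, not the actual attachment data:

```python
from itertools import product

# Hypothetical cargo bay and package dimensions -- illustrative only.
BAY = {1: (8, 10, 14), 2: (24, 20, 20)}            # bay type -> (w, h, d)
MED = {"MED1": (14, 7, 5, 2),                      # name -> (w, h, d, weight)
       "MED2": (5, 8, 5, 2),
       "MED3": (12, 7, 4, 3)}

def volume(dims):
    w, h, d = dims
    return w * h * d

def feasible(bay_type, plan):
    """Crude necessary condition: total package volume must fit the bay.

    The real 3-D packing test (done by the genetic algorithm in the text)
    is stricter; this only discards plans that cannot possibly fit.
    """
    capacity = volume(BAY[bay_type])
    used = sum(n * volume(MED[m][:3]) for m, n in plan.items())
    return used <= capacity

def enumerate_plans(bay_type, max_each=3):
    """Yield all non-empty package plans passing the volume check."""
    names = list(MED)
    for counts in product(range(max_each + 1), repeat=len(names)):
        plan = dict(zip(names, counts))
        if sum(counts) > 0 and feasible(bay_type, plan):
            yield plan
```

Plans surviving this filter would then be handed to the genetic algorithm for the exact 3-D placement test.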
\subsection{Oversampling and K-means Clustering Model}
Since the packing problem is straightforward, our focus is the location of the cargo containers. We first oversampled the hospital locations, because hospitals are few compared with road points, and then located the cargo containers by K-means clustering.
Oversampling is a technique used in data science to adjust the class distribution of a data set. The number of road points greatly exceeds the number of hospital locations, which makes the data set imbalanced: if both were weighted equally when calculating the best locations, the result would ignore the small perturbations contributed by the hospital locations. Therefore, we oversampled the hospital locations, giving them more weight to emphasize their importance. After oversampling, we used K-means clustering to partition the hospitals and main roads into clusters and found the corresponding cluster centers.
The Euclidean distance used in K-means clustering is defined as:
\begin{equation}
dist(X, Y)=\sqrt{\sum_{i=1}^{n}\left|x_{i}-y_{i}\right|^{2}}
\end{equation}
Let $T=x_1, x_2, \cdots, x_m$ be the points to be clustered. The steps of k-means clustering are as follows:
\begin{enumerate}
\item Initialize $k$ cluster centers $a_1,a_2,\cdots,a_k$;
\item Assign each point to the cluster whose mean has the least squared Euclidean distance.
\begin{equation}
label_i = \mathop{\arg\min}_{1\leq j\leq k} dist(x_i, a_j)
\end{equation}
\item Recalculate the centroid of each cluster given as:
\begin{equation}
a_i = \left(\frac{\sum_{j=1}^{n} x_{j}}{n}, \frac{\sum_{j=1}^{n} y_{j}}{n}\right)
\end{equation}
\item Check whether a terminal condition is met, which can be evaluated by the number of iterations, the mean squared error, or how often the cluster centers change; if so, stop.
\item Otherwise, go to step 2.
\end{enumerate}
\begin{figure}[htb]
\centering
\includegraphics[width=0.5\textwidth]{figures/kmeans.pdf}
\caption{K-means}
\label{fig:kmeans}
\end{figure}
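The oversampling and clustering steps above can be sketched as follows. This is a minimal Lloyd's-algorithm implementation on $(n,2)$ coordinate arrays; the replication factor used for oversampling (matching the road-point count) is an assumption, and empty-cluster handling is omitted:

```python
import numpy as np

def kmeans(points, k, iters=100, seed=0):
    """Plain k-means (Lloyd's algorithm) on an (n, 2) array of points."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Step 2: assign each point to the nearest center.
        dists = np.linalg.norm(points[:, None] - centers[None, :], axis=2)
        labels = dists.argmin(axis=1)
        # Step 3: recompute the centroid of each cluster.
        new = np.array([points[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new, centers):   # step 4: convergence check
            break
        centers = new                   # step 5: iterate again
    return centers, labels

def oversample(hospitals, roads):
    """Replicate hospital points until they roughly match the road count."""
    reps = max(1, len(roads) // len(hospitals))
    return np.vstack([np.repeat(hospitals, reps, axis=0), roads])
```

Running `kmeans(oversample(hospitals, road_points), 3)` would return the three container candidate locations.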
In our case, the ratio of the number of hospitals to the number of sampled points on main roads is below 1:10, so we oversampled the hospitals to a ratio of 1:1. We then fed these points into K-means clustering, which returns $k(1\leq k\leq 3)$ cluster centers. To determine $k$, we computed the results for all three values. For $k=1,2$, the distances from the cluster centers to the hospitals are too long for drones to cover with their limited batteries, according to Table 2. We therefore located 3 cargo containers as the DroneGo disaster response system.
The result of K-means clustering, which gives the locations of the cargo containers, is
(18.3147,-66.8219), (18.298,-66.2065) and (18.2698,-65.7574). The container locations and the sampled points on main roads are shown in figure \ref{fig:kmeansResult}.
Five destinations can be partitioned into 3 clusters:
\begin{itemize}
\item Base 1: Caribbean Medical Center
\item Base 2: Hospital HIMA San Pablo, Hospital Pavia Santurce and Puerto Rico Children's Hospital
\item Base 3: Hospital Pavia Arecibo
\end{itemize}
\begin{figure}[htb]
\centering
\includegraphics[width=0.8\textwidth]{figures/figmap.pdf}
\caption{K-means Result}
\label{fig:kmeansResult}
\end{figure}
Bases 1 and 3 each have only one destination assigned, and the optimal drone shown in table \ref{tbl Feasible drone plan with $k$ arbitrarily set as 1.3} for both is drone B. Thus drone B is our choice for bases 1 and 3.
Base 2 has three destinations assigned, and the best common drone for all three is drone C (ranked first for Hospital HIMA San Pablo and second for both Hospital Pavia Santurce and Puerto Rico Children's Hospital). Therefore drone C is chosen for the delivery missions of base 2.
For reconnaissance, drone B is outstanding in terms of maximum flight distance (53 km), so each base should have at least one drone B for filming roads. Even though drones have a service life of more than 2 years, we decided to double the drone numbers in case accidents happen. Two drone H units are also included at each base for communication purposes.
Because the distance between bases 2 and 3 is 41.5 km, lower than the maximum reachable distance of drone B both when carrying MED1 and when unloaded, we considered transporting medical packages between bases 2 and 3. By moving all MED1 storage from base 2 to base 3, we can balance the medical package ratios. The remaining space of each container is filled with medical packages in the proportion specified by the medicine demand of its destinations.
The actual number of each medical package in each base is calculated by the genetic algorithm mentioned in \ref{genetic algorithm}. Our container configuration is shown as follows:
\begin{table}[htb]
\centering
\caption{Packing configuration for cargo containers}
\label{tbl: Packing configuration for cargo containers}
\resizebox{\textwidth}{!}{
\begin{tabular}{cccccccc}
\toprule
Base & Destination & Drone Config & Drone of Delivery & MED 1 & MED 2 & MED 3 & Supporting Days \\
\midrule
1 & HPA & B$\times$4 H$\times$2 & B & 2251 & 0 & 0 & 375\\
2 & HIMA HPS PRCH & B$\times$2 C$\times$2 H$\times$2 & C & 0 & 1080 & 2160 & 375 \\
3 & CMC & B$\times$2 H$\times$2 & B & 1320 & 0 & 1320 & 1320 \\
\bottomrule
\end{tabular}
}
\end{table}
We visualized the packing configuration of each package plan (figure \ref{fig:package}).
\begin{figure}[htp]
\caption{Drone payload packing configurations}
\label{fig:package}
\centering
\subfigure[Drone B with MED1,3]{
\label{fig:package11}
\includegraphics[width=0.3\textwidth]{figures/1-1.png}
}
\subfigure[Drone B with MED1,2]{
\label{fig:package12}
\includegraphics[width=0.3\textwidth]{figures/1-2.png}
}
\subfigure[Drone B with MED1]{
\label{fig:package13}
\includegraphics[width=0.3\textwidth]{figures/1-3.png}
}
\medskip
\subfigure[Drone C with MED1,1,3]{
\label{fig:package21}
\includegraphics[width=0.32\textwidth]{figures/2-1.png}
}
\subfigure[Drone C with MED1,1,2,3,3]{
\label{fig:package22}
\includegraphics[width=0.32\textwidth]{figures/2-2.png}
}
\end{figure}
\section{Part2: Route designing by Biased Random Walk Model}
\subsection{Analysis of Problem}
In the previous section, we decided the drone type for each base. All three bases have drone B, which flies farther than any other drone and has video capability, making it the natural choice for reconnaissance.
When we first approached the problem, we came up with a deterministic path-searching model. However, a deterministic model has its flaws:
\begin{itemize}
\item It is easily trapped in a local optimum.
\item It depends too much on arbitrary parameters.
\item It gives only one feasible route. If accidents such as wildfires happen and block some cells, the entire route is no longer applicable.
\item It cannot amend the route automatically when trapped in dead ends.
\end{itemize}
Thus, we decided to build a stochastic model simulating a random walk of "drunk man" with the following advantages:
\begin{itemize}
\item It is expected to explore all feasible routes and thus find the global optimum for us.
\item It is naturally compatible with stochastic parameters; thus our biased walk is shaped less by human discretion and more by the environment.
\item It gives many feasible routes, many of which are equally good; the alternatives serve as backup routes.
\item We do not have to design a feasible route directly, because we can simply discard infeasible ones.
\end{itemize}
We use the metaphor of a drunk man to blueprint our model. Our drunk man should behave as follows:
\begin{enumerate}
\item Biased: attracted by roads;
\item Exploratory and nostalgic: moving away from the origin in the beginning and coming back home in the end;
\item New-trumps-old mindset: reluctant to walk on previously visited roads.
\end{enumerate}
Accordingly, our drunk man should sense the following factors:\label{factors}
\begin{enumerate}
\item Importance: evaluated by road type.
\item Altitude change: drones fly at a constant height above the ground.
\item Moving distance: drones should go back to base before it's running out of power.
\end{enumerate}
After sensing the necessary factors, our drunk man can finally take the first step, once all those factors are integrated in his mind. Let's see how that works.
\subsection{Preliminary Model}
\subsubsection{Random Walk Procedure}
Imagine that on Christmas Eve a man gets drunk for some reason. He has nothing to do, so why not take a walk? He decides to wander around the field. Departing from home, he sometimes explores off the roads, but as a sane man he keeps spending as much time as possible on roads. Here is what goes on in his mind. First, he looks around and randomly picks a place to step on. He is no psychopath, so he avoids hitting trees or falling into rivers. After spotting a place, he stares at it for a while to evaluate it, then accepts or rejects the spot based on this evaluation. Keep in mind that he is drunk, so he may well accept a spot deviating from the main road. But the man has a good memory even when unconscious, so he tends to avoid (though not compulsively) going to the same place again. After moving to a new spot, he repeats the previous steps.
In parallel universes, sometimes our drunk man falls into a trench (scenario 1); sometimes he gets exhausted (scenario 2); but sometimes he finally gets home (scenario 3)! That is good news, and the track of his walk is what we are looking for.
It is time to implement our drunk man as an algorithm. First of all, we initialized a drone at one of our bases. The drone keeps iterating the following steps over the field until one of the three scenarios mentioned above happens.
\begin{enumerate}
\item Search for all available adjacent cells and propose one of them uniformly at random;
\item Evaluate the proposed cell by an objective function, which gives a value between 0 and 1;
\item Decide whether to accept or reject the proposal: generate another random number; if it is lower than the value from step 2, the drone accepts the proposal, decreases the importance of the adjacent cells and moves to the proposed cell; otherwise it rejects the proposal and keeps still.
\end{enumerate}
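One iteration of this propose/accept/reject loop can be sketched as below. The four-neighbour adjacency, the `objective` callable and the factor by which importance is decreased are all assumptions for illustration (the text decreases the importance of adjacent cells; here, for brevity, only the chosen cell is discounted):

```python
import random

def adjacent(pos):
    """Four-neighbour adjacency on the raster grid (an assumption)."""
    x, y = pos
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]

def step(pos, grid, objective, visited):
    """One propose/accept/reject move of the 'drunk man' drone.

    grid maps feasible cells to importance values; objective(cell) must
    return a value in [0, 1].
    """
    neighbours = [c for c in adjacent(pos) if c in grid]   # step 1: search
    if not neighbours:
        return pos                                         # dead end (scenario 1)
    proposal = random.choice(neighbours)                   # uniform proposal
    f = objective(proposal)                                # step 2: evaluate
    if random.random() < f:                                # step 3: accept?
        visited.add(proposal)
        grid[proposal] *= 0.5  # discount importance to discourage revisits
        return proposal
    return pos                                             # reject: keep still
```

Calling `step` repeatedly until the drone returns to the base (or runs out of range) traces one candidate route.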
\subsubsection{Objective Function}
The objective function integrates all information necessary to our model and is the only source affecting the drones' tendency to film road networks or go home. It evaluates the relative superiority of the proposed cell over the other adjacent feasible cells.
We defined three quantities: $p_{road}$, which drives the tendency of drones to fly toward roads; $p_{home}$, which drives the tendency of drones to go home; and $\gamma$, the relative weight of $p_{home}$. The value $f$ of the objective function is given as:
\begin{equation}
f = p_{road} + \gamma \times p_{home}
\end{equation}
Let $p_{road} = \frac{road_i}{\sum road_i}$ and $p_{home} = \frac{home_i}{\sum home_i}$, where $road_i$ is the significance of the selected cell under the K-nearest neighbor rule (described later), and $home_i$ is the distance from the selected cell to the origin. Here, $p_{road}$ and $p_{home}$ are nondimensionalized over the feasible cells adjacent to the current cell, so that $\sum p_{road}=\sum p_{home} = 1$, and summing $p_{road}$ and $\gamma\, p_{home}$ is therefore meaningful. We enlarge the differences between the $home_i$ and $road_i$ values by exponentiation. Thus, let
$$\hat{p}_{road} = \frac{e^{road_i}}{\sum e^{road_i}},\quad \hat{p}_{home}=\frac{e^{home_i}}{\sum e^{home_i}}$$
Still, $\hat{p}_{road}$ and $\hat{p}_{home}$ are nondimensionalized\footnote{Detailed calculation is omitted. For the real implementation, please check our code in the appendix.}.
To make our drone exploratory and nostalgic, we defined the relative weight of home attraction as $\gamma = \alpha \times (d/MFD - \beta )^3 $, where $\alpha = 0.2$ and $\beta = 0.3$ are set arbitrarily, $d$ is the distance from the current cell to the origin, and $MFD$ is the maximum flight distance ($MFD = 53000$ for drone B). $\gamma$ is negative at the beginning of the route, giving cells far from the origin a higher probability of being accepted. After leaving the origin for some distance, $\gamma$ is close to zero and $p_{road}$ dominates the accepting or rejecting of proposed cells; in this period the drone behaves purely on road attraction. In the final stage, when the flight distance approaches $MFD$, $p_{home}$ takes charge and the drone accepts proposed cells leading toward the origin with higher probability.
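The objective function can be sketched as below. One detail the text leaves open is the sign convention for $\hat{p}_{home}$; here we feed the negated distance into the softmax, an assumption chosen so that cells closer to the origin score higher when $\gamma > 0$ (late in the flight) and lower when $\gamma < 0$ (early on), matching the described behaviour:

```python
import math

def objective(cells, road, home_dist, d, mfd=53000.0, alpha=0.2, beta=0.3):
    """Softmax-normalised objective over the feasible adjacent cells.

    road[c]: importance sensed via the K-nearest-neighbour rule;
    home_dist[c]: distance from cell c to the origin;
    d: distance travelled so far; mfd: maximum flight distance.
    Returns a dict cell -> f = p_road + gamma * p_home.
    """
    def softmax(vals):
        m = max(vals)                       # subtract max for stability
        e = [math.exp(v - m) for v in vals]
        s = sum(e)
        return [x / s for x in e]

    p_road = softmax([road[c] for c in cells])
    # Negated distance: nearer-to-home cells get larger p_home (assumption).
    p_home = softmax([-home_dist[c] for c in cells])
    gamma = alpha * (d / mfd - beta) ** 3
    return {c: pr + gamma * ph for c, pr, ph in zip(cells, p_road, p_home)}
```

With equal road importance, a cell near the origin outscores a far one when $d$ approaches $MFD$, and the ordering flips at the start of the flight, as intended.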
The sensing range of the drone is specified by the K-nearest neighbor rule. With a larger neighborhood, the drone can be attracted to remote roads even while wandering in a non-road space. However, if roads are dense in the field, the heterogeneity of the field may be blurred out, whereas a drone moving along roads under a small neighborhood rule may wander unintentionally. In this paper, four-, eight- and twelve-nearest neighbor rules are implemented in our program and the results are discussed in the sensitivity analysis.
\begin{figure}[htb]
\centering
\subfigure[Four-nearest neighbor rules]{
\includegraphics[width=0.3\textwidth]{figures/n4.pdf}
}
\subfigure[Eight-nearest neighbor rules]{
\includegraphics[width=0.3\textwidth]{figures/n8.pdf}
}
\subfigure[Twelve-nearest neighbor rules]{
\includegraphics[width=0.3\textwidth]{figures/n12.pdf}
}
\caption{K-nearest neighbor rule}
\label{fig:neighbor}
\end{figure}
\subsubsection{Results of Preliminary Model}
After letting off 10000 drones, we picked the routes with the highest coverage and plotted them in figure \ref{fig:routepm}. As we can see, in figure \ref{fig:route43} the drone spent too much time off roads, wandering in the bottom-left corner of the picture, which indicates that the drone is not attracted to roads strongly enough. In figure \ref{fig:route139}, the drone covers part of the nearest road but fails to move farther to explore more territory. We therefore concluded that $\gamma$ is too small at both ends. Besides, the field is discontinuous in most parts, so an unbiased walk is common whenever drones are trapped in a non-road field; some irrational looping is therefore inevitable.
\subsection{Modified model}
\subsubsection{Unleashing Parameters}
In defining the relative weight of home attraction $\gamma$, two nuisance parameters $\alpha$ and $\beta$ are used: $$\gamma = \alpha \times (d/MFD - \beta )^3. $$ $\beta$ determines when $\gamma = 0$ and thus pinpoints the turning point of the drone from exploratory (going out) to nostalgic (going back). Rescaling by $\alpha$ increases or decreases $\gamma$, making home attraction more or less important in the proposal mechanism.
We modified our model by unleashing these two parameters from our hands. We set $\alpha$ and $\beta$ to follow log-normal distributions with means equal to $0.5$ and $0.5$ and variances equal to $0.7$ and $0.05$, respectively; these numbers were chosen arbitrarily. Since, according to the previous analysis, $\gamma$ was too small in both directions, we elevated the mean of the scalar $\alpha$. Whenever we simulated a feasible route, we recorded its $\alpha$, $\beta$ and distance for further analysis.
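Drawing the behaviour parameters might look like the sketch below. We read the stated means and variances as the parameters $(\mu, \sigma^2)$ of the underlying normal distribution; this is an assumption, since the text does not specify which parameterisation is meant:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_behaviour_params(n):
    """Draw n (alpha, beta) pairs from log-normal distributions.

    mu = 0.5 for both; sigma^2 = 0.7 for alpha and 0.05 for beta,
    interpreted as parameters of the underlying normal (an assumption).
    """
    alpha = rng.lognormal(mean=0.5, sigma=np.sqrt(0.7), size=n)
    beta = rng.lognormal(mean=0.5, sigma=np.sqrt(0.05), size=n)
    return alpha, beta
```

Each simulated drone would draw one `(alpha, beta)` pair, run the biased walk, and record the pair only if the resulting route is feasible.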
\subsubsection{Experience Learning}
Importance data is naturally discrete, but the experience of previous drones is not. A Convolutional Neural Network (CNN) cannot memorize former choices. To fit our biased random walk model, we need a neural network that can memorize former choices and decide its next move under the influence of those choices.
Here we introduce the Recurrent Neural Network (RNN). An RNN can make use of sequential information: it memorizes former choices and decides its next move through a weighting of those choices. In our case, following our biased random walk model, we feed the farthest distance that drones reach, the repeated routes that drones take, and the standard deviation as parameters into the RNN, trying to train more stable and more accurate parameters for the previous model.
\begin{figure}[htb]
\centering
\includegraphics[width=0.8\linewidth]{figures/rnn.png}
\caption{The principle of the recurrent neural network\cite{lecun2015deep}}
\label{fig:rnn}
\end{figure}
Figure \ref{fig:rnn} is the unfolded picture of the hidden layer of an RNN. $t-1, t, t+1$ depict a time series, $X$ is the input data set, and $S_t$ is the memory of the sample at time $t$, with $S_t = f(W \cdot S_{t-1} + U \cdot X_t)$, where $U$ is the input weight, $W$ is the recurrent weight applied to the previous state, and $V$ is the output weight.
When $t=1$, we initialize $S_0=0$, randomly initialize $W, U, V$, and calculate by the equations below:
$$h_1=U \cdot x_1+W \cdot s_0$$
$$s_1=f(h_1)$$
$$o_1=g(Vs_1)$$
where $f$ and $g$ are both activation functions; $f$ can be the classic tanh, ReLU or sigmoid function, and $g$ can be the softmax function.
When $t=2$,
$$h_2=U \cdot x_2+W \cdot s_1$$
$$s_2=f(h_2)$$
$$o_2=g(Vs_2)$$
As shown above, we can deduce the general equations:
$$h_t=U \cdot x_t+W \cdot s_{t-1}$$
$$s_t=f(h_t)$$
$$o_t=g(Vs_t)$$
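The general recurrence can be sketched as a plain NumPy forward pass, taking $f = \tanh$ and $g = \mathrm{softmax}$ (one of the choices the text allows); all weight shapes are illustrative:

```python
import numpy as np

def rnn_forward(xs, U, W, V, s0=None):
    """Unrolled RNN forward pass.

    Implements h_t = U x_t + W s_{t-1}, s_t = tanh(h_t),
    o_t = softmax(V s_t) for a sequence of input vectors xs.
    """
    s = np.zeros(W.shape[0]) if s0 is None else s0
    outputs = []
    for x in xs:
        h = U @ x + W @ s            # h_t = U·x_t + W·s_{t-1}
        s = np.tanh(h)               # s_t = f(h_t), f = tanh
        z = V @ s
        e = np.exp(z - z.max())      # numerically stable softmax
        outputs.append(e / e.sum())  # o_t = g(V·s_t), g = softmax
    return outputs, s
```

Training (updating $U$, $W$, $V$ by backpropagation through time) is omitted here; the sketch only shows how the hidden state carries the memory of former choices forward.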
In our random walk model, the drone remembers its former state and then makes its next move. Training the model with an RNN can enhance its performance.
\begin{figure}
\centering
\caption{The results of preliminary model}
\label{fig:routepm}
\subfigure[Route 1]{
\label{fig:route43}
\includegraphics[width=0.45\textwidth]{figures/43.png}
}
\subfigure[Route 2]{
\label{fig:route139}
\includegraphics[width=0.45\textwidth]{figures/139.png}
}
\end{figure}
\begin{figure}
\centering
\caption{The results of modified model}
\label{fig:routemm}
\subfigure[Route 3]{
\label{fig:route25035}
\includegraphics[width=0.45\textwidth]{figures/25035.png}
}
\subfigure[Route 4]{
\label{fig:route25097}
\includegraphics[width=0.45\textwidth]{figures/25097.png}
}
\subfigure[Route 5]{
\label{fig:route66711}
\includegraphics[width=0.45\textwidth]{figures/66711.png}
}
\subfigure[Route 6]{
\label{fig:route75036}
\includegraphics[width=0.45\textwidth]{figures/75036.png}
}
\end{figure}
\subsection{Results of Modified Model}
The results of the modified model are plotted in figure \ref{fig:routemm}. As we can see, these routes are significantly better than the results of the preliminary model. For example, the drone of the route in figure \ref{fig:route25035} walked along roads for a long distance and crossed the non-road field as if the roads in the bottom-right corner were pulling it. However, some small loops still exist in all four routes.
For better coverage, we tried to overlay 2 routes chosen from all feasible routes. The Combinatorial Coverage Rate (CCR) is defined as $ CCR = \frac{NC}{A} $, where $NC$ is the net coverage, defined as the coverage of the union of the two routes, and $A$ is the area of a specified bounding box.
The combination of the routes in figures \ref{fig:route25035} and \ref{fig:route25097} has the highest combinatorial coverage rate (0.82) among all possible plans.
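Computing the CCR for a pair of routes is straightforward once each route is represented as a collection of covered grid cells:

```python
def combinatorial_coverage_rate(route_a, route_b, bbox_area):
    """CCR = NC / A, where NC is the number of cells in the union of the
    two routes and A is the cell count of the bounding box."""
    net_coverage = len(set(route_a) | set(route_b))
    return net_coverage / bbox_area
```

Evaluating this over all pairs of feasible routes and taking the maximum yields the best two-route overlay.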
\section{Sensitivity Analysis of Model}
\subsection{Distribution Change of Behavior Parameters}
In the modified model, we unleashed the behavior parameters $\alpha$ and $\beta$ to follow a lognormal distribution. If we only save the values of these two parameters for routes whose coverage surpasses some threshold, our model can be seen as a gate that filters out parameters that are intuitively not ``fittable'' for (1) generating a feasible route and (2) increasing the coverage.
First, we did a regression analysis of $\alpha$ and $\beta$ against coverage. However, there is no significant relationship between any two of them (figure \ref{fig:coverage}). Second, we did a lognormal fitting of $\alpha$ and $\beta$ respectively (figure \ref{fig:logfit}).
$\alpha$ still fits a lognormal distribution ($R^2 = 0.89$), with a consistent mean value and a small deviation of the variance from $0.7$ to $0.61$. $\beta$, in contrast, fails to fit a lognormal distribution, with mean value $0.43$ and variance $0.29$.
\begin{figure}[htb]
\centering
\caption{Regression analysis of $\alpha$ and $\beta$}
\label{fig:coverage}
\subfigure[Alpha]{
\includegraphics[width=0.45\textwidth]{figures/alphac.pdf}
}
\subfigure[Beta]{
\includegraphics[width=0.45\textwidth]{figures/betac.pdf}
}
\end{figure}
\begin{figure}[htb]
\centering
\caption{Lognormal fitting of $\alpha$ and $\beta$}
\label{fig:logfit}
\subfigure[Alpha]{
\includegraphics[width=0.45\textwidth]{figures/alpha.pdf}
}
\subfigure[Beta]{
\includegraphics[width=0.45\textwidth]{figures/beta.pdf}
}
\end{figure}
This analysis shows that neither $\alpha$ nor $\beta$ contributes to the performance of the biased random walk process. The distribution of $\alpha$ is not significantly affected by filtering, whereas the distribution of $\beta$ is flattened. The filtering effect can be sketched as follows: on some region $K$, the filter replaces the density $f(x)$ by a smaller $g(x)$:
\begin{equation}
\hat{f}(x)=
\begin{cases}
g(x)<f(x), & x \in K \\
f(x), & x \notin K
\end{cases}
\qquad K \subset U(\mu, \epsilon);
\end{equation}
So, we can get that
$$M[\hat{f}(x)]=M_{x\in K}[\hat{f}(x)]+M_{x\notin K}[\hat{f}(x)]=M_{x\in K}[g(x)]+M_{x\notin K}[f(x)]\leq M[f(x)],$$
which means that the total probability mass after filtering is no larger than that before filtering.
$\beta$ has an explicit meaning in our model --- the time when home attraction surpasses road attraction. Intuitively, we expected our drone to turn from departing to going back at the halfway point. However, this expectation is probably flawed according to our results: a drone that tends either to move near its origin or to go far away is apparently more likely to make a feasible route and achieve higher coverage. It occurred to us that the lognormal distribution may not be a good choice for our nuisance parameters; a uniform distribution would generate a more interpretable distribution of results because it is not skewed at the outset. However, a uniform distribution is not an uninformative distribution\cite{gelman2013bayesian}, and thus it would still affect the results.
\subsection{Performance Change due to the K-nearest Neighbor Rule}
We ran our model under different K-nearest neighbor rules. The 8-nearest neighbor rule performs best in terms of average distance and coverage, and the 4-nearest neighbor rule worst. The relationship between the rule and performance is not deterministic; field heterogeneity plays a key role. Therefore, we cannot interpret much from the results of a single field, and more artificial fields should be generated for further analysis.
\section{Strengths and Weaknesses}
\subsection{Strengths}
\begin{enumerate}
\item We used real data for analysis and correctly projected points based on coordinate reference system.
\item We used stochastic model to generate feasible routes to minimize anthropogenic bias underlying evaluation method.
\item The model is space-explicit --- it uses as much spatial data as possible to reflect reality. No simplification on the data side is applied apart from the importance assignment of roads.
\item The model is flexible: many parts can be modified to integrate new factors biasing our route design.
\end{enumerate}
\subsection{Weaknesses}
\begin{enumerate}
\item We have no principled way to trade off delivery distance against road coverage, so we arbitrarily emphasized the latter, which may be penny wise and pound foolish.
\item We simulate one drone at a time; therefore, the combination of routes may have too many overlapping parts.
\item The behavior of our drone is not fully controllable. For example, local looping of routes is inevitable, and smoothing is required afterwards.
\item The algorithm is time-consuming and still does not guarantee finding the best solution.
\item The initial distribution of the nuisance parameters retains some arbitrariness, and the filtered distribution is not interpretable.
\end{enumerate}
\bibliographystyle{unsrt}
\section{Introduction}
In order to cope with the growing wireless traffic volume demands, significant changes in wireless technology deployments are expected in the near future.
Two important trends can be distinguished: (i)~the already ubiquitous wireless networks are predicted to undergo extreme densification~\cite{Nokia2016}, and (ii)~an increasing number of spectrum bands are being targeted by multiple wireless technologies, e.g. LTE was recently proposed to operate in the 5 GHz unlicensed band~\cite{3GPP2016, 3GPP2016a, Forum2015}, the 3.5~GHz Citizens Broadband Radio Service (CBRS) band in the U.S. is under discussion for being open to more technologies~\cite{FCC15-472015}.
These trends will, in turn, increase the level of interference and the complexity of wireless inter-technology interactions, which have to be managed through efficient spectrum sharing mechanisms.
Traditionally, wireless technologies have operated in either licensed, or unlicensed bands. Licensed bands are granted by spectrum regulators to single entities, e.g. cellular operators, which then individually deploy and manage their networks in dedicated spectrum bands. Consequently, inter-technology coexistence has not been an issue in these bands.
By contrast, in the unlicensed bands any technology and device has equal rights to access the spectrum, as long as basic regulatory restrictions are met, e.g. maximum transmit power. As such, mutual interference among different technologies is inherent to the unlicensed bands and has typically been managed by rather simple distributed spectrum sharing schemes, e.g. between \mbox{Wi-Fi} and Bluetooth.
Recently, due to the growing need for higher network capacity, several regulatory and technical changes have been introduced for wireless technologies.
Firstly, spectrum regulators have opened an increasing number of bands to \emph{multiple} technologies, and have authorised novel access right frameworks, other than pure exclusive use or equal rights, i.e. different variants of primary/secondary access.
Some examples of bands where such frameworks exist are: TV white space (TVWS)~\cite{Ofcom2015, FCC2010}, the recently proposed 3.5~GHz Citizens Broadband Radio Service (CBRS) band in the U.S.~\cite{FCC15-472015}, and the 2.3--2.4~GHz band in Europe, where recent coexistence trials under Licensed Shared Access (LSA) have been conducted~\cite{Guiducci2017}.
New challenging coexistence cases are also expected in the unlicensed bands, where LTE has recently been proposed and standardized to operate in the unlicensed 5~GHz band~\cite{3GPP2016, 3GPP2016a, Forum2015}, where it must coexist with \mbox{Wi-Fi}.
As both LTE and \mbox{Wi-Fi} are broadband technologies designed to carry high traffic loads, this is different to prior inter-technology coexistence cases in unlicensed bands (\emph{cf.} \mbox{Wi-Fi}/Bluetooth coexistence).
Furthermore, a \emph{second} technology, i.e. NB-IoT (Narrowband Internet of Things)~\cite{3GPP2016a}, has been recently designed to coexist with LTE in the same licensed cellular bands where LTE used to operate exclusively.
As demonstrated by these examples, a significant number of \emph{heterogeneous} wireless devices, in terms of technologies and traffic requirements, is expected to be deployed in shared spectrum bands. It follows that new inter-technology interactions are currently emerging, and they are too complex to be efficiently managed by traditional spectrum sharing mechanisms designed for either licensed cellular bands, or unlicensed bands with low to moderate traffic volumes.
It is thus crucial to design novel inter-technology spectrum sharing mechanisms that: (i)~allow multiple devices and technologies to access the spectrum; and (ii)~facilitate an efficient overall use of the spectrum, while fulfilling the requirements of each device/technology.
\begin{table*}[t!]
\begin{center}
\caption{Classification of inter-technology surveys in the literature. This survey addresses the categories shaded in green.
\label{table_0} }
\begin{tabular}{|l|l|l|l|c|}
\cline{5-5}
\multicolumn{4}{c|}{} & \textsc{Prior Surveys}\\
\hline
\multirow{2}{*}{\rotatebox[origin=c]{90}{\parbox{3.5cm}{\centering\textsc{Inter-technology \\Interactions}}}}
& \multirow{6}{*}{\parbox{3.5cm}{\textbf{Inter-technology coexistence} \\\textbf{(in shared spectrum bands)}}}
& \multirow{3}{*}{\parbox{1.5cm}{\textbf{hierarchical} \\\textbf{regulatory} \\\textbf{framework}}}
& \parbox{3cm}{\textbf{different access rights} \\ (i.e. \emph{primary/secondary})} &
\begin{tabular}{p{6.5cm}} dynamic spectrum access (DSA), e.g.~\cite{Paisana2014, Tehrani2016, Akyildiz2006, Akyildiz2008, Zhao2007, Wang2008, Yucek2009, Gavrilovska2014, Ren2012}
\end{tabular}
\\
\hhline{|~|~|~|--}
& & & {\cellcolor{green!25}}\parbox{3cm}{\textbf{equal access rights} \\(i.e. \emph{primary/primary}, \\\emph{secondary/secondary})} &
\begin{tabular}{p{6.5cm}} converged heterogeneous mobile networks with focus on M2M \emph{integration}~\cite{Jo2014}
\end{tabular}
\\
\hhline{|~|~|---}
& & \multicolumn{2}{l|}{\cellcolor{green!25}\parbox{4.5cm}{\textbf{flat regulatory framework with equal access rights} (i.e. \emph{spectrum commons})}}
& \begin{tabular}{p{6.5cm}} converged heterogeneous mobile networks with focus on M2M \emph{integration}~\cite{Jo2014};\\
early literature on \mbox{Wi-Fi}/LTE coexistence in the unlicensed bands~\cite{Ho2017}
\end{tabular}
\\
\hhline{|~|====}
& \multicolumn{3}{l|}{\textbf{Integration of technologies operating in different spectrum bands}}
& \begin{tabular}{p{6.5cm}} mobile cellular and vehicular communications~\cite{Zheng2015};\\
interworking architectures for wireless technologies~\cite{Atayero2012} \end{tabular}
\\
\hline
\end{tabular}
\end{center}
\end{table*}
Furthermore, the design of inter-technology spectrum sharing mechanisms depends not only on purely technical aspects, but also on regulatory constraints, business models, and social practices. For instance, regulators impose limits on the spectrum access rights for different devices/networks and in some cases even on the spectrum sharing mechanisms, e.g. listen-before-talk (LBT) being mandatory for the 5~GHz unlicensed band in Europe~\cite{ETSI2015}.
Business models and social practices affect the design of spectrum sharing mechanisms, as the most efficient mechanisms from a technical perspective may not be practically feasible due to e.g. lack of agreements among the involved network managers/device owners.
Two important questions arise, pertinent to designing future wireless technologies: \textbf{(i)~how to design in a systematic manner efficient spectrum sharing mechanisms especially for inter-technology coexistence, by taking into account technical and non-technical parameters}; and \textbf{(ii)~how to evaluate their coexistence performance, with respect to a given technology itself, and its impact on other coexisting technologies}.
In this survey we explore the first question by means of a multi-layer \emph{technology circle} that incorporates in a system-level view all relevant technical and non-technical aspects of a wireless technology.
The technology circle, as proposed in~\cite{Mahonen2012} and illustrated in Fig.~\ref{fig_techCircle}, includes the seven layers of the OSI stack and introduces the regulatory framework at Layer~0, and business models and social practices at Layer~8. The technology circle thus represents a unified design space for spectrum sharing, consisting of parameters at different layers.
Next, we identify the layers at which spectrum sharing is implemented, and the layers that impose constraints. We then discuss the individual effect of each layer on spectrum sharing and the feasibility of different design parameter combinations at different layers.
To this end we present a classification of the literature on inter-technology coexistence, based on individual spectrum sharing design parameters at different layers.
We focus on coexistence under a regulatory framework with \emph{equal} spectrum access rights and especially on a spectrum commons. Importantly, these are the most challenging coexistence cases, as the limitations imposed by regulators tend to be more relaxed, but multiple, diverse technologies may share the same band, so that the design of spectrum sharing mechanisms must take into account interactions with a wide range of other technologies.
We address the second posed question by discussing the choice of performance evaluation methods and metrics in the literature on inter-technology coexistence with equal spectrum access rights.
Finally, we reflect on the reviewed literature to determine suitable design approaches for future wireless technologies and we identify challenges and possible research directions.
\subsection{Related Surveys in the Literature}
Earlier surveys addressing inter-technology spectrum sharing~\cite{Paisana2014, Tehrani2016, Akyildiz2006, Akyildiz2008, Zhao2007, Wang2008, Yucek2009, Gavrilovska2014, Ren2012, Zheng2015, Jo2014, Atayero2012, Ho2017} focused only on specific coexistence cases and did not present a comprehensive view of inter-technology wireless coexistence in general, as summarized in Table~\ref{table_0}.
These surveys considered: spectrum that is shared in a primary/secondary manner, i.e. through dynamic spectrum access (DSA) techniques, e.g.~\cite{Paisana2014, Tehrani2016, Akyildiz2006, Akyildiz2008, Zhao2007, Wang2008, Yucek2009, Gavrilovska2014, Ren2012}; coexistence solutions in the form of integrated, coordinated technologies, e.g. converged heterogeneous mobile networks operating in shared spectrum bands~\cite{Jo2014}; or early literature on \mbox{Wi-Fi}/LTE coexistence in the unlicensed bands~\cite{Ho2017}.
Other surveys addressed inter-technology interactions for integrated technologies operating in different spectrum bands, e.g. mobile cellular and vehicular communications~\cite{Zheng2015}, and interworking architectures for wireless technologies~\cite{Atayero2012}.
Therefore, the existing literature lacks a general and comprehensive view of inter-technology coexistence, which is especially important for the most challenging coexistence case, i.e. spectrum sharing when multiple technologies have the same rights to access the spectrum.
\textbf{Our survey, instead, presents inter-technology coexistence from a unified, system-level perspective}, which is essential for answering the two posed research questions on systematically designing efficient spectrum sharing mechanisms and evaluating their coexistence performance.
Moreover, we focus on technologies with equal spectrum access rights and especially on coexistence in a spectrum commons, which we expect to be of high practical relevance in the near future.
\begin{table*}[t!]
\caption{Interference classification and terminology based on the relative shift between the spectrum portions where the interferer transmitter and victim receiver operate}
\label{table_1}
\centering
\begin{tabular}{|p{3.5cm}|p{3.3cm}|c|}
\hline
\diagbox[width=3.9cm, height=1cm]{\textbf{Terminology scope}}{\textbf{Spectrum used}}
& \centering Same frequency for interferer Tx and victim Rx
& Different frequencies for interferer Tx and victim Rx\\
\hline
Generic & \parbox{3cm}{\textbf{\emph{in-band interference}}~\cite{AgilentTechnologies2011}}
& \begin{tabular}{p{9cm}} $\bullet$ \textbf{\emph{out-of-band emissions}}~\cite{ITU-R2015a} (also \textbf{\emph{out-of-band interference}}~\cite{FCCTechnologicalAdvisoryCouncil2015}) -- due to Tx\\
$\bullet$ \textbf{\emph{spurious emissions}}~\cite{ITU-R2015a} (sometimes included in out-of-band interference~\cite{AgilentTechnologies2011}) -- due to Tx \\
$\bullet$ \textbf{\emph{adjacent band interference}}~\cite{FCCTechnologicalAdvisoryCouncil2015} -- due to Rx
\end{tabular}\\
\hline
Technology-oriented & \parbox{3cm}{\textbf{\emph{co-channel}} \\ \textbf{\emph{interference}} \cite{FCCTechnologicalAdvisoryCouncil2015, AgilentTechnologies2011}}
& \begin{tabular}{p{9cm}} \textbf{\emph{adjacent channel interference}}~\cite{3GPP2015, AgilentTechnologies2011}:\\
$\bullet$ \textbf{\emph{adjacent channel leakage}}~\cite{3GPP2015} -- due to Tx \\
$\bullet$ \textbf{\emph{adjacent channel selectivity}}~\cite{3GPP2015}/ \textbf{\emph{rejection}}~\cite{IEEE2016} -- due to Rx
\end{tabular}\\
\hline
\end{tabular}
\end{table*}
\subsection{Survey Structure}
The rest of this survey is structured as follows.
In Section~\ref{interf_tax} we define the inter-technology coexistence problem in terms of interference and we present an interference taxonomy.
In Section~\ref{tech_circle} we present the technology circle and we discuss the impact of different layers on spectrum sharing mechanisms in general.
Section~\ref{litHierFr} presents our literature review of inter-technology coexistence within a hierarchical regulatory framework with a focus on technologies with the same spectrum access rights, i.e. primary/primary and secondary/secondary.
Section~\ref{litSpecComm} presents our literature review of inter-technology coexistence in a spectrum commons.
In Section~\ref{sec_discussion} we discuss the main findings of this survey and we identify challenges and potential future research directions.
Section~\ref{sec_conclusions} concludes the survey.
\section{Interference Taxonomy \& Problem Statement}
\label{interf_tax}
In this section we present an interference taxonomy and we define our problem statement for wireless inter-technology coexistence in terms of interference types.
We also present the spectrum management terminology used in this survey.
\textbf{\emph{Interference}} consists of perturbing signals that arrive at a receiver at the same time as the signal of interest. Consequently, the signal-to-interference-and-noise ratio (SINR) is decreased at the victim receiver, such that decoding the useful signal becomes more difficult.
Spectrum regulators consider interference when establishing operational bounds for devices, or technologies.
From an engineering perspective, interference is important for determining the achievable data rates, depending on the capabilities of the radio hardware (e.g. filter characteristics, receiver noise figure).
Increased interference thus decreases the link capacity, which in turn affects the overall network capacity.
We note that although interference fundamentally occurs at the Physical (PHY) layer, interference mitigation techniques are also implemented at other layers, especially at the MAC. The final link capacity is thus affected by such techniques, as well.
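To make the link between interference, SINR, and link capacity concrete, the following toy sketch (with purely hypothetical power values, not drawn from any of the cited standards) computes the SINR at a victim receiver and the resulting Shannon capacity bound:

```python
import math

def sinr(signal_mw, interference_mw, noise_mw):
    """SINR as a linear ratio of received powers (all in mW)."""
    return signal_mw / (sum(interference_mw) + noise_mw)

def shannon_capacity(bandwidth_hz, sinr_linear):
    """Upper bound on link capacity in bit/s: B * log2(1 + SINR)."""
    return bandwidth_hz * math.log2(1 + sinr_linear)

def dbm_to_mw(dbm):
    return 10 ** (dbm / 10)

# Hypothetical link: 20 MHz channel, -60 dBm signal, one -80 dBm
# interferer, -95 dBm noise floor (values chosen only for illustration).
s = sinr(dbm_to_mw(-60), [dbm_to_mw(-80)], dbm_to_mw(-95))
c_with_interf = shannon_capacity(20e6, s)
c_no_interf = shannon_capacity(20e6, dbm_to_mw(-60) / dbm_to_mw(-95))

print(f"SINR: {10 * math.log10(s):.1f} dB")
print(f"Capacity with interference:    {c_with_interf / 1e6:.0f} Mbit/s")
print(f"Capacity without interference: {c_no_interf / 1e6:.0f} Mbit/s")
```

Even a single interferer 20~dB below the signal noticeably lowers the capacity bound in this example, illustrating why mitigating co- and adjacent channel interference matters for network capacity.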
Interference can be classified according to the imperfections of the transmitter and receiver, and the relative portions of the spectrum where the interfering transmitter and the victim receiver operate.
There are a few terms widely used in this context, but their meaning is sometimes loosely defined, as summarized in Table~\ref{table_1}. We identify two types of terminology used to refer to interference, based on the scope: (i)~\textbf{generic} terms typically used in the regulatory domain, which is concerned with interference from another frequency band and limits imposed on the transmitters; and (ii)~\textbf{technology-oriented} terms that refer mostly to interference among devices within given technologies with further channel partitioning of the same spectrum band, where each device is allowed to access any of these channels. Fig.~\ref{fig_1} shows examples of different types of interference among IEEE~802.11ac~\mbox{Wi-Fi}, Licensed Assisted Access (LAA) LTE, and radars operating in the 5 GHz band.
\begin{figure}[!t]
\centering
\subfloat[]{\includegraphics[width=1\columnwidth]{fig_bandChannel_v2} \label{fig_1a}}
\\
\subfloat[]{\includegraphics[width=1\columnwidth]{fig_interfTypes_v2} \label{fig_1b}}
\caption{Example of (a) two spectrum bands, where one is used as unlicensed by IEEE 802.11ac \mbox{Wi-Fi}~\cite{IEEE2016} and LAA-LTE~\cite{3GPP2016a}, and the other is allocated to radar services in Europe~\cite{ECC2016}; and (b) types of mutual interference occurring between different nodes operating in these bands: Nodes 1, 2, and 3 are IEEE 802.11ac \mbox{Wi-Fi} nodes operating on the 40 MHz channels in the same respective colour (blue and yellow), Node 4 is an LAA-LTE node operating on a 20 MHz channel (red), and Node 5 is a radar node operating on a channel in the radar band (green).}
\label{fig_1}
\end{figure}
Interference from a \textbf{generic} perspective can be \textbf{\emph{in-band}}, if both interfering transmitter and victim receiver operate in the same spectrum band~\cite{AgilentTechnologies2011}.
In case the transmitter and receiver do not operate in the same band, the interference can be in the form of: \textbf{\emph{out-of-band emissions/interference}}, \textbf{\emph{spurious emissions}}, or \textbf{\emph{adjacent band interference}}~\cite{ITU-R2015a, FCCTechnologicalAdvisoryCouncil2015, AgilentTechnologies2011}.
Out-of-band and spurious emissions refer to interference caused by imperfections in the filters of the transmitter.
We note that spectrum regulators are typically concerned with these kinds of interference, since regulation traditionally imposes operational limits on the transmitters and not the receivers (see e.g.~\cite{Vries2013}).
Adjacent band interference was used in~\cite{FCCTechnologicalAdvisoryCouncil2015} to refer to the interference experienced by the receiver due to its own inability to perfectly filter out the received power in a band adjacent to the one it operates on.
From a \textbf{technology-oriented} perspective, where several channels are defined within a given band, the interference is defined with respect to the channel, not the band. We thus identify \textbf{\emph{co-channel interference}} for operation over the same channel~\cite{AgilentTechnologies2011, FCCTechnologicalAdvisoryCouncil2015} and \textbf{\emph{adjacent channel interference}} (ACI) for operation on adjacent channels~\cite{AgilentTechnologies2011, 3GPP2015}.
We note that co- and adjacent channel interference can occur both among devices of the same technology, and among devices of different technologies (\emph{cf.} Fig.~\ref{fig_1}).
Furthermore, it is important to distinguish between ACI caused by the imperfections of the interferer transmitter and imperfections of the victim receiver -- as shown in Fig.~\ref{fig_ACI_survey} -- as the performance of a technology in terms of link-level data rates depends on both.
For instance, 3GPP~\cite{3GPP2015} distinguishes, in case of LTE, between \textbf{\emph{adjacent channel leakage}} (at the transmitter) and \textbf{\emph{adjacent channel selectivity}} (at the receiver).
The IEEE 802.11 standard~\cite{IEEE2016} defines a similar concept to the receiver selectivity, i.e. \textbf{\emph{adjacent channel rejection}}, and specifies the transmitter spectrum mask as an equivalent of the allowed adjacent channel leakage.
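The combined effect of transmitter leakage and receiver selectivity is commonly captured by the adjacent channel interference ratio (ACIR), obtained in linear terms as $1/\mathrm{ACIR} = 1/\mathrm{ACLR} + 1/\mathrm{ACS}$ (the combining rule used e.g. in 3GPP coexistence analyses). A small sketch, with hypothetical filter values chosen only for illustration:

```python
import math

def db_to_lin(db):
    return 10 ** (db / 10)

def lin_to_db(lin):
    return 10 * math.log10(lin)

def acir_db(aclr_db, acs_db):
    """Adjacent channel interference ratio from transmitter leakage (ACLR)
    and receiver selectivity (ACS): 1/ACIR = 1/ACLR + 1/ACS in linear terms."""
    acir_lin = 1 / (1 / db_to_lin(aclr_db) + 1 / db_to_lin(acs_db))
    return lin_to_db(acir_lin)

# Hypothetical values: 45 dB ACLR at the interferer Tx, 40 dB ACS at the victim Rx.
print(f"ACIR: {acir_db(45, 40):.1f} dB")
```

As the formula shows, the ACIR is always below the weaker of the two values, so the less selective side (transmitter or receiver) dominates the adjacent channel interference that the link experiences.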
\begin{figure}[!t]
\centering
\includegraphics[width=1\columnwidth]{fig_ACI_survey}
\caption{Illustration of ACI as determined by the filters of the interferer transmitter (blue)
and victim receiver (red). The ACI caused by the power leaked by the transmitter
is shown as the area coloured in light blue. The ACI due to imperfect receiver filtering is
shown as the area coloured in light red.}
\label{fig_ACI_survey}
\end{figure}
\subsection*{Problem Statement}
In the context of multiple wireless technologies operating in the same spectrum band, an important aspect is achieving \textbf{\emph{inter-technology coexistence}}, which refers to the ability of two or more co-located technologies to carry out their communication tasks without significant negative impact on their performance. A consistent informal definition is reported in~\cite{Cypher2000}.
We note that the definition of coexistence that we adopt in this survey is intentionally broad, in order to span the wide range of interpretations in the literature: some works use specific coexistence goals and metrics (e.g. achieving a minimum throughput value), whereas others study the coexistence impact on the performance of each technology in terms of various metrics (e.g. throughput, delay, packet collision probability, etc.), but do not target a specific coexistence goal. We discuss this further in our literature review in Section~\ref{litSpecComm} (see especially Tables~\ref{table_review_3b}, \ref{table_review_4b}, \ref{table_review_4c}, \ref{table_review_4d}).
Wireless inter-technology coexistence can be achieved by mitigating \emph{co- and adjacent channel interference}, as these types of interference occur when multiple devices of different technologies share the same spectrum band.
In order to mitigate this inter-technology interference and allow access to the spectrum for multiple devices, spectrum sharing mechanisms are typically implemented at Layer~2, in a similar manner to traditional MAC schemes mitigating intra-technology interference.
Such solutions allow each device to use only a portion of the spectrum resources, e.g. in time or frequency, while experiencing lower levels of interference.
It follows that each device will still experience a decrease in capacity when other portions of the spectrum resources are occupied by other devices.
For example, a single device is allowed to transmit for a shorter time duration (e.g. time division multiple access -- TDMA -- in cellular networks, carrier sense multiple access with collision avoidance -- CSMA/CA -- in \mbox{Wi-Fi}), or over a portion of the frequency band (e.g. frequency division multiple access -- FDMA -- in cellular networks, channel selection in \mbox{Wi-Fi}).
We discuss spectrum sharing mechanisms at Layer~2 further in Section~\ref{spec_sharing}.
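As a minimal illustration of a time-domain Layer~2 sharing mechanism, the following sketch implements a listen-before-talk procedure with random backoff and an exponentially growing contention window. It is a deliberately simplified, CSMA/CA-like toy model (not compliant with any of the cited ETSI or IEEE specifications); the channel model is a hypothetical function that reports the medium busy independently per slot:

```python
import random

def lbt_transmit(channel_busy, cw_min=16, cw_max=1024, max_attempts=7):
    """Simplified listen-before-talk with binary exponential backoff.
    `channel_busy()` returns True when carrier sensing detects another
    transmission. Returns the attempt number on success, or None."""
    cw = cw_min
    for attempt in range(1, max_attempts + 1):
        backoff = random.randrange(cw)        # draw random backoff slots
        while backoff > 0:
            if not channel_busy():
                backoff -= 1                  # count down only on idle slots
        if not channel_busy():
            return attempt                    # medium idle: transmit
        cw = min(2 * cw, cw_max)              # medium busy: double the CW
    return None

# Hypothetical channel that is busy 30% of the time, independently per slot.
random.seed(0)
attempts = lbt_transmit(lambda: random.random() < 0.3)
print(f"Transmitted after {attempts} attempt(s)")
```

The key sharing behaviour is visible in the two mechanisms: the backoff countdown freezes while the medium is sensed busy, and the contention window doubles after an unsuccessful attempt, which statistically spreads the transmissions of contending devices over time.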
\begin{figure*}[!t]
\centering
\includegraphics[width=0.7\linewidth]{techCircleWithText.pdf}
\caption{Technology circle~\cite{Mahonen2012} as a general system-level framework for considering the design space of inter-technology spectrum sharing. Most of the spectrum sharing mechanisms (yellow) are implemented at Layer~2 and a few at Layer~1. The main constraints (blue) on spectrum sharing design are found at Layer~0 and some at Layers~7~and~8. The main features of each layer are summarized in the figure. This classification is further used in Tables~\ref{table_review_1}, \ref{table_review_2}, \ref{table_review_3}, and~\ref{table_review_4} in our literature review.}
\label{fig_techCircle}
\end{figure*}
\begin{table*}[t!]
\caption{General spectrum sharing taxonomy based on the technology circle. Specific mechanisms considered in the literature for inter-technology coexistence are further presented in Tables~\ref{table_review_1}, \ref{table_review_2}, \ref{table_review_3}, and~\ref{table_review_4}.}
\label{table_2}
\centering
\begin{tabular}{|c|p{1.5cm}|p{1cm}|p{9cm}|c|}
\hline
\multicolumn{2}{|c|}{\textbf{Scope}} & \multicolumn{2}{c|}{\textbf{Spectrum sharing techniques}} & \textbf{Layer} \\
\hline
\multirow{11}{*}{Intra-technology} & \multirow{3}{*}{link level} & \multicolumn{2}{l|}{in time: TDD} & 0 \\
\cline{3-5}
& & \multicolumn{2}{l|}{in frequency: FDD} & 0 \\
\cline{3-5}
& & \multicolumn{2}{l|}{full duplex} & 1 \\
\cline{2-5}
& \multirow{9}{*}{network level}& \multicolumn{2}{l|}{in frequency: FDMA, OFDMA (and NC-OFDMA), channel selection, frequency reuse} & 2 \\
\cline{3-5}
& & \multirow{5}{*}{in time} & periodic transmissions: TDMA, (adaptive) duty cycle & 2 \\
\cline{4-5}
& & & random access: without spectrum sensing, e.g. ALOHA, slotted ALOHA; LBT and no random backoff, e.g. ETSI frame based equipment (FBE); LBT with random backoff and fixed contention window (CW), e.g. ETSI load based equipment (LBE) B; LBT with random backoff and adaptive CW, e.g. CSMA/CA, ETSI LBE A & \multirow{4}{*}{2}\\
\cline{3-5}
& & \multicolumn{2}{l|}{in code: CDMA} & 2 \\
\cline{3-5}
& & \multicolumn{2}{l|}{in space: SDMA} & 2 \\
\cline{3-5}
& & \multicolumn{2}{l|}{other: power control} & 2\\
\hline
\multicolumn{2}{|c|}{\multirow{5}{*}{Inter-technology}} & \multicolumn{2}{l|}{in frequency: distributed channel selection, DSA techniques (database, spectrum sensing)} & 0, 2\\
\cline{3-5}
\multicolumn{2}{|c|}{} & \multicolumn{2}{l|}{in time: random access, distributed periodic, DSA techniques (database, spectrum sensing)} & 0, 2 \\
\cline{3-5}
\multicolumn{2}{|c|}{} & \multicolumn{2}{l|}{in code: FHSS, DSSS} & 1\\
\cline{3-5}
\multicolumn{2}{|c|}{} & \multicolumn{2}{l|}{in space: geolocation \& DSA techniques (database, spectrum sensing)} & 0, 2 \\
\cline{3-5}
\multicolumn{2}{|c|}{} & \multicolumn{2}{l|}{other: power control -- distributed, DSA techniques (database, spectrum sensing)} & 0, 2 \\
\hline
\end{tabular}
\end{table*}
\subsection*{Spectrum Management Terminology}
\label{terms}
Spectrum management refers to the manner in which the spectrum is used in general, in order to facilitate wireless communication among different devices~\cite{ECC2014}. We identify the most important terms describing aspects of spectrum management as follows: spectrum rights, spectrum allocation, and spectrum sharing.
\textbf{\emph{Spectrum rights}} is a term typically used by spectrum regulators to describe the conditions under which a party can use a spectrum band and the entitlements it has, e.g. for how long, with what power level, and whether it has priority over other spectrum users when transmitting in this band.
Spectrum rights are also relevant for engineers, who have to design and deploy technologies and devices that use the spectrum within the limits set by the spectrum regulators.
\textbf{\emph{Spectrum allocation/assignment}} is used in a regulatory context, to express that the spectrum regulator grants a certain party some rights to use a particular portion of the spectrum~\cite{ECC2014}, e.g. bands that are allocated to individual cellular operators. A related term that is widely used, but does not have a regulatory connotation, is \textbf{\emph{channel allocation}}, i.e. the channels on which different devices operate within a band, as configured by network managers.
\textbf{\emph{Spectrum sharing}} is broadly defined by ECC ``as common usage of the same spectrum resource by more than one user. Sharing can be performed with respect to
all three domains: frequency, time and place.''~\cite{ECC2014}
In this survey we adopt a similar definition as the one given by ECC, but we also include spectrum sharing via coding.
\section{Spectrum Sharing: A System-Level View}
\label{tech_circle}
The design, implementation, and performance of spectrum sharing schemes are determined by a multitude of inter-related factors beyond the pure technical approach. As such, in order to maximize the spectrum utility for individual devices and/or networks, spectrum sharing should be analysed from a system-level perspective that takes into account the technical, regulatory, and business aspects of wireless technologies.
In this section we present such a system-level perspective and some general classifications for spectrum sharing that can be further applied for inter-technology coexistence with equal spectrum access rights, as in Sections~\ref{litHierFr} and~\ref{litSpecComm}.
In~\cite{Mahonen2012} the technical and non-technical aspects of wireless technologies were identified and grouped into nine layers forming a \emph{technology circle}, as shown in Fig.~\ref{fig_techCircle}. Layers~\mbox{1--7} are the technical layers of the OSI stack (i.e. Physical, Data Link, Network, Transport, Session, Presentation, Application\footnote{In real implementations (e.g. TCP/IP stack), the functionality of Layers~5--6 is integrated in Layer~7; we thus discuss only the \emph{Application} layer.}), whereas Layers~0~and~8 model the regulatory, and business and social aspects, respectively. As the circular representation suggests, there is an inter-dependence between all these layers, which together form a large design parameter space that determines the candidate spectrum sharing mechanisms:
some layers correspond to the actual implementation of these mechanisms, whereas other layers impose design constraints.
Specifically, the major spectrum sharing mechanisms are implemented at Layer~2, and some at Layer~1, as summarized in Table~\ref{table_2}.
Nonetheless, there are exceptions where sharing mechanisms are implemented at other layers, e.g. duplexing and DSA databases at Layer~0.
Most of the design constraints for spectrum sharing are specified at Layer~0, but also at Layers~7~and~8.
We note that Layers~3~and~4 may have an indirect influence on the efficiency of inter-technology spectrum sharing mechanisms, e.g. by limiting the size of transmitted packets through fragmentation, or by varying the data rate of the traffic flow; however, this is outside the scope of this survey.
Importantly, not all combinations of technical and non-technical parameters at different layers are feasible when designing spectrum sharing mechanisms for inter-technology coexistence, and out of those that are feasible, some may be preferred over others.
For instance, when deploying traditional cellular networks, each operator has exclusive rights to access the spectrum at Layer~0. This case is thus suitable for implementing spectrum sharing mechanisms at Layer~2 that are centrally coordinated, as a single operator manages the entire network at Layer~8.
Consequently, cellular networks are ideally suited to carry delay-sensitive traffic such as voice (at Layer~7), as the performance of the centrally-managed network can be readily predicted and optimized.
We note that, for this example, inter-technology coexistence only occurs for multiple integrated technologies (e.g. LTE and NB-IoT), which are deployed by the same operator (at Layer~8).
Let us now consider a spectrum band where different networks operate based on primary/secondary spectrum rights at Layer~0. The primary network can then implement coordinated spectrum sharing mechanisms at Layer~2 as a result of typically having a single network manager at Layer~8 (i.e. similarly to cellular networks). The operation of the secondary networks is strictly limited at Layer~0 to ensure primary protection, e.g. by specifying a maximum allowable interference power from the secondary networks to the primary.
As such, the access of the secondary networks to the spectrum can be coordinated at Layer~0 through a reliable database operated by a third party at Layer~8, e.g. for TVWS.
By contrast, interactions among secondary devices can be managed by distributed spectrum sharing mechanisms at Layer~2, as they do not have the right to any protection at Layer~0; it follows that it is not straightforward to guarantee the quality of the services offered by these secondary networks at Layer~7~\cite{Akyildiz2006, Liang2011}.
Lastly, in a spectrum commons like the unlicensed bands, where various technologies coexist and have the same spectrum access rights at Layer~0, distributed spectrum sharing mechanisms have been a popular choice at Layer~2. Fully centralized coordination is typically not feasible for this example, due to the lack of business agreements among numerous network managers at Layer~8.
As illustrated by these examples, there is a tight interconnection between the technical and non-technical design parameters and constraints at different layers. It is critical to consider these interconnections in a unified system-level view, as different parameter combinations result in specific inter-technology interactions.
Correctly identifying and evaluating these interactions lays the foundation for a robust development framework for new wireless technologies that result in efficient spectrum use.
In this survey we adopt the technology circle proposed in~\cite{Mahonen2012} as a framework to facilitate our system-level analysis of inter-technology spectrum sharing.
In the following we briefly describe each layer of the technology circle and we highlight its impact on the design of spectrum sharing mechanisms in general.
We first discuss Layer~0, which specifies the main constraints on spectrum sharing, but which also includes a few sharing mechanisms; we then present Layer~2 where most of the spectrum sharing mechanisms are implemented; subsequently, further sharing mechanisms at Layer~1 are presented; lastly, we discuss further constraints at Layers~7~and~8.
In Sections~\ref{litHierFr} and~\ref{litSpecComm} we then apply these general spectrum sharing classifications to our literature review on inter-technology coexistence with equal access rights.
\subsection{Regulatory Framework Constraints \& Spectrum Sharing at Layer~0}
\label{reg_framework}
Layer~0 primarily defines regulatory constraints for spectrum sharing mechanisms at Layers~2~and~1. However, a few spectrum sharing mechanisms are actually implemented at this layer. In this section we first discuss the regulatory constraints and then spectrum sharing at Layer~0.
\subsubsection{Constraints at Layer~0}
The regulatory framework consists of the regulatory limitations imposed on the use of spectrum. These determine who is allowed to use the spectrum, for how long,
and within which technical parameter constraints, e.g. transmit power. Consequently, spectrum sharing mechanisms have to be designed and optimized under these constraints.
\begin{figure}[!t]
\centering
\includegraphics[width=1\columnwidth]{regFramework}
\caption{Spectrum access rights based on the regulatory framework.}
\label{fig_regFramework}
\end{figure}
As shown in Fig.~\ref{fig_regFramework}, spectrum access rights span a continuum of access models, from exclusive use of spectrum, i.e. exclusive spectrum access rights for a single network or technology, to a spectrum commons, where all devices/networks/technologies have the same rights to access the spectrum.
Spectrum access rights between these extremes include the primary/secondary spectrum use model, where secondary networks must give priority to the dominant primary network.
We note that the vast regulatory literature on spectrum access rights is out of the scope of this survey and we instead refer the interested reader to~\cite{Peha2005, Zhao2007, ElectronicCommunicationsCommitte(ECC)2009, Buddhikot2007}.
Traditionally, the spectrum access rights applied in practice have been at the two extremes in Fig.~\ref{fig_regFramework}.
On the one hand, exclusive rights to access the spectrum have been granted to e.g. mobile cellular networks, where each operator buys a license for a given spectrum band. Since there is a single operator deploying and managing the network, the regulators do not need to impose rules on the spectrum sharing techniques; the regulatory restrictions instead largely focus on transmit power levels and filter masks, in order to limit the interference towards other out-of-band networks/services.
On the other hand, in the unlicensed bands -- an example of a spectrum commons -- any device/technology/network has the same rights to access the spectrum (e.g. the 2.4~GHz and the 5~GHz unlicensed bands). Since such bands are open in principle to any technology, the spectrum regulators may decide to impose some restrictions also on the spectrum sharing mechanisms at Layer~2, such that multiple coexisting technologies have the opportunity to access the spectrum. For instance, in Europe ETSI requires devices to implement LBT at the MAC layer, where each device must sense and detect the medium free from other transmissions before starting its own transmission~\cite{ETSI2015}. Additionally, for the channels in the 5~GHz unlicensed bands where radar systems operate, mechanisms like dynamic frequency selection (DFS) and transmit power control (TPC) are specified in regulation~\cite{ETSI2015, ECC2004, 47CFR15.4072016}, in order to protect radar operations.
Over the last fifteen years, several measurement studies have investigated how efficiently spectrum is used~\cite{FederalCommunicationsCommissionSpectrumPolicyTaskForce2002, Palaios2013, McHenry2006, Valenta2009, Islam2008}. The main findings revealed that some of the allocated spectrum with exclusive rights is not used to its full capacity. Consequently, other models for spectrum access rights have emerged, with the general aim of allowing more dynamic access to the spectrum, based on demand. However, incumbent technologies operating in these bands still have priority when accessing the spectrum, such that hierarchical primary/secondary regulatory frameworks are needed.
Three recent examples where hierarchical regulatory frameworks are applicable are: TVWS, the CBRS band, and other bands granted through LSA, e.g. 2.3--2.4~GHz~\cite{Guiducci2017}.
TVWS refers to the spectrum initially allocated for TV broadcasting, the coverage of which is not uniform, such that in particular locations the spectrum could be reused by other technologies with secondary access rights~\cite{Ofcom2015, FCC2010}, e.g. IEEE~802.11af \mbox{Wi-Fi}~\cite{IEEE2016}, LTE~\cite{ETSI2011}, IEEE 802.19.1~\cite{IEEE2014}, or IEEE 802.22~\cite{2011}.
We note that this spectrum access framework has only recently been adopted by a few regulators, i.e. FCC in the U.S. and Ofcom in the U.K., and practical deployments are still in their infancy~\cite{Matsumura2017}.
A three-layer hierarchical regulatory model is currently under discussion for the 3.5~GHz CBRS band in the U.S.~\cite{FCC15-472015} with: incumbent access, priority access, and general authorized access (GAA). The spectrum access system (SAS) manages the spectrum access of the secondary systems, corresponding to the two latter spectrum access layers.
LSA is specified by the ECC in Europe~\cite{ECC2014} and is primarily intended for mobile broadband operators that are willing to share spectrum with existing incumbents.
We note that other models for spectrum access rights have also been proposed in the literature~\cite{Tehrani2016}, but have thus far largely not been adopted in practice.
Importantly, inter-technology coexistence can occur for any model of spectrum access rights. However, the most challenging coexistence cases are expected in the unlicensed bands as an example of a spectrum commons, where any technology is allowed to transmit while complying with rather relaxed rules. As such, there is also a growing tendency for different technologies to make extensive use of the unlicensed bands.
One example trend is to aggregate unlicensed spectrum, e.g. LTE in the licensed bands aggregates carriers in the unlicensed bands (as carrier \mbox{Wi-Fi}~\cite{3gpp2014wifi}, LAA~\cite{3GPP2016, 3GPP2016a}, or LTE-U~\cite{Forum2015}); and \mbox{Wi-Fi} aggregates multiple 20~MHz channels in the 5 GHz unlicensed band~\cite{IEEE2016}.
Moreover, both future 5G cellular technologies~\cite{3GPP2017} and IEEE 802.11ad \mbox{Wi-Fi}~\cite{IEEE2016} aim at extending their operation to the unlicensed 60~GHz band.
\subsubsection{Spectrum Sharing at Layer~0}
In hierarchical regulatory frameworks, spectrum sharing between primary/secondary networks is implemented through DSA mechanisms, where secondary networks access the spectrum opportunistically, whenever it is not occupied by primary networks.
In such deployments where the primary users are protected from the secondary users, e.g. TVWS, the secondary users typically acquire knowledge on the availability of channels from a database operated by a third party~\cite{Ofcom2015, FCC2010}.
In fact, there is a strong inter-connection among spectrum sharing in \textbf{frequency}, \textbf{time}, \textbf{space}, and \textbf{power} in such networks, i.e. the DSA database is a central coordinator that gives information on the availability of the channels in certain locations and imposes limits on the transmit power and duration of use for the secondary networks.
We consider these to be fundamental constraints imposed by the database on how the secondary networks access the spectrum and we include such spectrum sharing mechanisms at Layer~0.
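To make these database-driven constraints concrete, the following minimal Python sketch models how a secondary device might query a geolocation database and receive per-location grants that jointly bound frequency, power, and duration. The \texttt{ChannelGrant} fields, the location keys, and the in-memory \texttt{DB} are purely illustrative assumptions, not any standardized TVWS or SAS interface.

```python
from dataclasses import dataclass

@dataclass
class ChannelGrant:
    """One entry returned by a (hypothetical) geolocation database."""
    channel_mhz: float    # centre frequency of the available channel
    max_eirp_dbm: float   # transmit power limit imposed by the database
    valid_seconds: int    # how long the grant may be used before re-querying

# Toy in-memory "database": availability depends on the secondary's location.
DB = {
    ("52.1N", "4.3E"): [ChannelGrant(474.0, 20.0, 7200),
                        ChannelGrant(490.0, 17.0, 7200)],
}

def query_available_channels(lat, lon):
    """Return the channels the database allows at this location (empty if none)."""
    return DB.get((lat, lon), [])

for g in query_available_channels("52.1N", "4.3E"):
    print(f"channel {g.channel_mhz} MHz, EIRP <= {g.max_eirp_dbm} dBm, "
          f"re-query after {g.valid_seconds} s")
```

Real databases are remote services with richer protocols (e.g. the IETF PAWS protocol for white-space databases); the point here is only that frequency, power, and time limits arrive together as one grant.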
We note that primary/secondary spectrum sharing could also be implemented in a solely distributed manner using spectrum sensing, or spectrum sensing could be used as additional input for DSA databases, but we consider such techniques as belonging to Layer~2, similarly to other sensing-based spectrum access mechanisms, e.g. CSMA/CA.
Notably, many DSA and supporting cognitive radio techniques have been proposed in the literature~\cite{Ren2012, Gavrilovska2014, Yucek2009, Wang2008, Akyildiz2006, Liang2011}, but have not yet been implemented in commercial deployments.
Finally, duplexing can be considered a spectrum sharing mechanism between the two directions of a single link, that is implemented at Layer~0 through regulatory and technical restrictions on channelization. Here we can distinguish frequency division duplexing (FDD) and time division duplexing (TDD), as shown in Table~\ref{table_2}.
\begin{figure*}[!t]
\centering
\includegraphics[width=0.7\linewidth]{layer1_v2}
\caption{General classification of Layer~1 techniques that can be used for facilitating wireless inter-technology coexistence. Specific Layer~1 techniques considered in the reviewed literature for inter-technology coexistence are presented in Tables~\ref{table_review_1}, \ref{table_review_2}, \ref{table_review_3}, and~\ref{table_review_4}.}
\label{fig_layer1}
\end{figure*}
\subsection{Spectrum Sharing at Layer~2}
\label{spec_sharing}
The majority of spectrum sharing mechanisms are implemented at Layer~2 of the technology circle. Although the focus of this survey is on inter-technology spectrum sharing, here we also present and discuss a taxonomy of intra-technology spectrum sharing, since the mechanisms implemented by devices within a technology can also affect the interactions with other technologies.
\subsubsection{Intra-Technology Spectrum Sharing}
From an intra-technology network-level perspective, multiple devices within the same network have to access the same spectrum. In this context spectrum sharing is performed by the MAC sub-layer of Layer~2. Spectrum sharing in such a case can be performed in: \textbf{(i)~frequency}; \textbf{(ii)~time}; \textbf{(iii)~code}; or \textbf{(iv)~space}.
\paragraph*{Spectrum sharing in frequency}
The traditional technique is frequency division multiple access (FDMA), which divides the allocated band into multiple sub-bands that are then assigned to different users, e.g. in GSM. A similar concept, but with a finer frequency division granularity, is orthogonal frequency division multiple access (OFDMA), which divides the band into closely-spaced orthogonal sub-carriers, e.g. in LTE and WiMAX.
Furthermore, frequency division can be used as a spectrum sharing mechanism between devices, without necessarily being implemented as a MAC protocol, e.g. channel selection/allocation for \mbox{Wi-Fi}, which can increase capacity and reduce interference among \mbox{Wi-Fi} devices~\cite{SurachaiChieochan2010}.
Frequency reuse techniques have been applied analogously for cellular networks~\cite{Damnjanovic2011, Saquib2012, Katzela1996, Hamza2013}.
We note that, for modern and emerging wireless networks, implementing channel selection for interference management may not be straightforward, due to advanced features like channel bonding (in e.g. IEEE~802.11n/ac \mbox{Wi-Fi}, and LTE), where several channels are dynamically merged to form larger-bandwidth channels~\cite{Bukhari2016}. Consequently, partially overlapping channels of different widths may be used and reconfigured dynamically by different coexisting devices, which increases the complexity of network-wide interference interactions.
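As a toy illustration of the channel selection idea, the sketch below picks the channel with the fewest overheard neighbouring APs; this is a generic least-congested heuristic of our own, not a scheme from the cited works, and it ignores the partial-overlap and channel-bonding complications noted above.

```python
def least_congested_channel(channels, ap_channels):
    """Pick the channel with the fewest neighbouring APs (ties -> lowest channel)."""
    load = {c: 0 for c in channels}
    for c in ap_channels:
        if c in load:
            load[c] += 1
    return min(channels, key=lambda c: (load[c], c))

# Neighbouring APs are heard on channels 1 and 6; channel 11 is empty.
print(least_congested_channel([1, 6, 11], [1, 1, 6]))  # -> 11
```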
\paragraph*{Spectrum sharing in time}
This has traditionally been implemented among users in cellular networks through scheduled time division multiple access (TDMA), which is an instance of periodic transmissions that are centrally coordinated.
A more general concept is duty cycling, which also refers to non-coordinated or only locally-coordinated periodic transmissions. Originally, duty cycling was proposed for sensor networks~\cite{Carrano2014} with the aim of reducing energy consumption. Recently, it has also been adopted by broadband technologies such as LTE-U, which implements adaptive duty cycling~\cite{Forum2015}.
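A minimal sketch of the adaptive duty cycling idea: the ON fraction is nudged down when the sensed neighbour airtime exceeds a target share, and up otherwise, within clamping bounds. The target share, step size, and bounds are illustrative assumptions; the actual LTE-U adaptation (CSAT) is vendor-specific.

```python
def adapt_duty_cycle(current_on_fraction, neighbour_airtime,
                     target_share=0.5, step=0.05,
                     min_on=0.05, max_on=0.8):
    """Toy adaptive duty cycle: back off when neighbours use more than the
    target share of airtime, otherwise claim a little more (clamped)."""
    if neighbour_airtime > target_share:
        current_on_fraction -= step
    else:
        current_on_fraction += step
    return max(min_on, min(max_on, current_on_fraction))

dc = 0.5
for sensed in [0.7, 0.7, 0.2]:   # measured neighbour airtime per period
    dc = adapt_duty_cycle(dc, sensed)
print(round(dc, 2))              # 0.5 -> 0.45 -> 0.40 -> 0.45
```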
A fundamentally different approach is random access in time, e.g. ALOHA, where each device transmits whenever there is traffic to be sent from the upper layers, and its variant slotted ALOHA, where transmissions are aligned to slot boundaries. Also random, but implementing carrier sensing, are LBT mechanisms, where each device first listens to the channel and transmits only if no other ongoing transmission is detected, e.g. CSMA/CA for \mbox{Wi-Fi} and several other LBT variants specified by ETSI~\cite{ETSI2015}, \emph{cf.} Table~\ref{table_2}.
We note that, in order to reduce the number of colliding transmissions from different devices, some LBT mechanisms vary the time for which a device has to sense the channel, based on a random backoff selected by each device randomly within a given interval, e.g. [0, CW], where CW (contention window) is a design parameter. Furthermore, the CW itself can be adapted, e.g. for CSMA/CA in IEEE~802.11 the CW is doubled every time a collision occurs (i.e. binary exponential random backoff).
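The backoff rules just described can be sketched as follows, assuming IEEE~802.11-style window bounds; the doubling uses the common $\mathrm{CW} \leftarrow 2\,\mathrm{CW}+1$ form so that windows stay of the form $2^k-1$.

```python
import random

CW_MIN, CW_MAX = 15, 1023  # IEEE 802.11-style contention window bounds

def next_backoff(cw):
    """Draw a random backoff (in slots) from [0, cw]."""
    return random.randint(0, cw)

def on_collision(cw):
    """Binary exponential backoff: double the window, capped at CW_MAX."""
    return min(2 * cw + 1, CW_MAX)

def on_success(_cw):
    """Reset the window after a successful transmission."""
    return CW_MIN

cw = CW_MIN
for _ in range(3):   # three consecutive collisions
    cw = on_collision(cw)
print(cw)            # 15 -> 31 -> 63 -> 127
```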
\paragraph*{Spectrum sharing via coding}
For multi-user networks this is known as code division multiple access (CDMA) and it is based on spread spectrum techniques at Layer~1. CDMA is implemented by allocating a unique code for each user and allowing all users to transmit over the same wide bandwidth. This was implemented in 3G systems like UMTS and CDMA2000, based on direct sequence spread spectrum (DSSS) at Layer~1.
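A toy illustration of the CDMA principle: two users spread their bits with mutually orthogonal length-4 Walsh codes, transmit simultaneously over the same band, and each is recovered by correlating the received chips against its own code. Real DSSS systems use much longer codes and must handle asynchrony and fading; none of that is modelled here.

```python
# Length-4 Walsh codes (rows of a Hadamard matrix): mutually orthogonal.
WALSH = [
    [ 1,  1,  1,  1],
    [ 1, -1,  1, -1],
    [ 1,  1, -1, -1],
    [ 1, -1, -1,  1],
]

def spread(bits, code):
    """Spread each data bit (+1/-1) over the chips of the user's code."""
    return [b * c for b in bits for c in code]

def despread(chips, code):
    """Correlate the received chips with one user's code to recover its bits."""
    n = len(code)
    out = []
    for i in range(0, len(chips), n):
        corr = sum(ch * c for ch, c in zip(chips[i:i + n], code))
        out.append(1 if corr > 0 else -1)
    return out

# Two users transmit simultaneously; the channel simply adds their chips.
rx = [a + b for a, b in zip(spread([1, -1], WALSH[1]),
                            spread([-1, -1], WALSH[2]))]
print(despread(rx, WALSH[1]))  # [1, -1]   user 1's bits
print(despread(rx, WALSH[2]))  # [-1, -1]  user 2's bits
```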
\begin{table*}[t!]
\caption{General classification of applications based on user requirements. Requirements for traffic volume are further used to classify existing literature on inter-technology coexistence in a spectrum commons in Section V.}
\label{table_app}
\centering
\begin{tabular}{|p{2cm}|p{2cm}|p{12cm}|}
\hline
\centering\textbf{Requirement} & \centering\textbf{Classification} & \centering\let\newline\\\arraybackslash\hspace{0pt} \textbf{Examples} \\
\hline
\multirow{2}{*}{traffic volume} & high traffic load (broadband) & file sharing, video streaming, and video conferencing through cellular broadband (e.g. LTE, 5G), and IEEE 802.11 \mbox{Wi-Fi}; some industrial applications for sensor networks with need for high sampling rate \\
\cline{2-3}
& low traffic load & home and industrial applications for sensor networks (e.g. IEEE 802.15.4, ZigBee, NB-IoT, Bluetooth), M2M applications \\
\hline
\multirow{2}{*}{delay} & delay-tolerant & web browsing, file transfer, email, some sensor applications \\
\cline{2-3}
& delay-sensitive & voice calls, streaming, some industrial IoT applications~\cite{Schulz2017} \\
\hline
\multirow{2}{*}{target end-user} & human & web browsing, video conferencing \\
\cline{2-3}
& machine & IoT, D2D, M2M \\
\hline
\end{tabular}
\end{table*}
\paragraph*{Spectrum sharing in space}
This is based on antenna directivity at Layer~1.
Deploying directional antennas facilitates e.g. sectorization in cellular networks, and thus interference reduction and more aggressive frequency reuse~\cite{Chan1992}.
As such, sectorization in cellular networks is used for combined spectrum sharing in space and frequency.
A more recent multiple access technique is space division multiple access (SDMA) which emerged together with advanced antenna techniques at Layer~1. SDMA is based on using narrow beams pointed in the direction of the desired receiver, such that interference in other directions is reduced, which allows a higher number of simultaneous links over the same area, i.e. increases spatial reuse. We note that although the underlying Layer~1 techniques of sectorization and beamforming are similar, beamforming is a dynamic mechanism, whereas sectorization assumes a static antenna configuration.
Multi-user multiple-input-multiple-output (MU-MIMO), i.e. an example of SDMA, has been standardized as an option in IEEE~802.11ac \mbox{Wi-Fi}~\cite{IEEE2016}, LTE~\cite{3GPP2017b}, and IEEE 802.16 WiMAX~\cite{IEEE802.16-2012}. Example MU-MIMO MAC protocols for \mbox{Wi-Fi} were reviewed in~\cite{Liao2016}.
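The spatial-reuse benefit of beamforming can be illustrated with the idealized gain of a uniform linear array (half-wavelength spacing, narrowband): steering towards one user yields full array gain, while other directions see a much smaller sidelobe gain. This is a textbook array model of our own, not any standardized MU-MIMO procedure.

```python
import cmath
import math

def array_gain(n_antennas, steer_deg, target_deg, spacing=0.5):
    """Normalized gain of a uniform linear array steered to steer_deg,
    evaluated at target_deg (spacing in wavelengths; idealized, narrowband)."""
    def phase(deg, k):
        return 2 * math.pi * spacing * k * math.sin(math.radians(deg))
    s = sum(cmath.exp(1j * (phase(target_deg, k) - phase(steer_deg, k)))
            for k in range(n_antennas))
    return abs(s) / n_antennas

print(round(array_gain(8, 30, 30), 2))   # 1.0: full gain towards the user
print(round(array_gain(8, 30, -40), 2))  # well below 1 towards other directions
```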
Another method to share the spectrum, but not a MAC protocol, is transmit \textbf{power} control, which determines the transmission and interference range, and thus affects spatial reuse. Many technologies implement it as a mandatory or an optional feature, e.g. UMTS, LTE, \mbox{Wi-Fi}, sensor networks.
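The effect of transmit power control on spatial reuse can be sketched with a toy link-budget search for the lowest power that still meets an SNR target; the path-loss model, noise floor, and power bounds are illustrative assumptions, not any technology's TPC algorithm.

```python
def min_sufficient_power(path_loss_db, noise_dbm, required_snr_db,
                         p_min=0.0, p_max=23.0, step=1.0):
    """Lowest transmit power (dBm) meeting the SNR target, or None.
    Toy model: rx = tx - path_loss; lower power shrinks the footprint."""
    p = p_min
    while p <= p_max:
        if (p - path_loss_db) - noise_dbm >= required_snr_db:
            return p
        p += step
    return None

# Transmitting at 6 dBm suffices here, instead of the 23 dBm maximum,
# which reduces the interference caused to other links.
print(min_sufficient_power(path_loss_db=90, noise_dbm=-94, required_snr_db=10))
```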
\subsubsection{Inter-Technology Spectrum Sharing}
For distributed spectrum access, the intra-technology spectrum sharing mechanisms can coincide with the inter-technology mechanisms at Layer~2, especially in the unlicensed bands, but also for secondary/secondary inter-technology coexistence in hierarchical regulatory models.
Consequently, inter-technology spectrum sharing mechanisms can be implemented through (but are not restricted to) MAC protocols.
Inter-technology spectrum sharing can be performed in \textbf{frequency} through channel selection schemes. An example is LTE-U/LAA performing channel selection to avoid co-channel \mbox{Wi-Fi} devices~\cite{Forum2015, 3GPP2016, 3GPP2016a}.
Inter-technology spectrum sharing in \textbf{time} can be implemented in distributed networks at the MAC layer, through duty cycle transmissions (e.g. LTE-U) or through LBT MAC protocols (e.g. CSMA/CA for \mbox{Wi-Fi} and LBT for LAA). These mechanisms share the spectrum both within and among technologies.
Another mechanism that facilitates both intra- and inter-technology coexistence for distributed networks is \textbf{power control}, which affects spatial reuse within and among technologies. This is considered for e.g. LAA~\cite{3GPP2015}, and the upcoming \mbox{Wi-Fi} amendment IEEE~802.11ax~\cite{Omar2016, Bellalta2016}.
Importantly, most current technologies implement more than one spectrum sharing mechanism at Layer~2 to facilitate (both intra- and inter-technology) coexistence. Examples include GSM (FDMA and TDMA), LTE (OFDMA and TDMA), \mbox{Wi-Fi} (CSMA/CA, and optionally channel selection and SDMA), LTE in the unlicensed bands (duty cycle or LBT, and channel selection). We note that most technologies implement a variant of spectrum sharing in time and frequency, which suggests that these mechanisms are able to efficiently mitigate interference.
\subsection{Spectrum Sharing and Interference Mitigation at Layer~1}
\label{phy}
The PHY layer can affect inter-technology coexistence through techniques that influence the design and performance of spectrum sharing mechanisms, as shown in Fig.~\ref{fig_layer1}. Furthermore, some of these techniques, i.e. spread spectrum techniques and full-duplex, can be seen as spectrum sharing mechanisms implemented directly at Layer~1, as summarized in Table~\ref{table_2}. We briefly discuss the PHY techniques in Fig.~\ref{fig_layer1} in the following.
The PHY layer determines the manner in which the data is sent over the wireless channels, primarily through modulation and coding. Different combinations of modulation and coding schemes affect spectrum reuse, since they may provide increased robustness to interference, allowing a larger number of links to be simultaneously active.
Other mechanisms at Layer~1 that affect spectrum sharing are antenna techniques.
Deploying directional antennas facilitates sectorization in cellular networks and beamforming based on multiple antennas supports SDMA mechanisms at Layer~2, as discussed in Section~\ref{spec_sharing}.
Also, using multiple-input-multiple-output (MIMO) antenna systems enables multiple data streams per link, which can increase the link capacity and reduce the effect of channel quality fluctuations through spatial diversity.
Spatial diversity is also exploited through cooperative communication, which proposes virtual antenna arrays built with single-antenna devices. The impact of such techniques on the MAC in general is surveyed in~\cite{Sami2016}.
Spread spectrum techniques have been used at Layer~1 to increase robustness against interference in intra- and inter-technology coexistence scenarios. Frequency hopping spread spectrum (FHSS) allows rather low data rates and is thus implemented by technologies like Bluetooth~\cite{Bluetooth2016}, which transports lower volumes of traffic. Direct sequence spread spectrum (DSSS) was used instead for technologies that transport moderate to high traffic volumes, e.g. IEEE 802.11b and code division multiple access (CDMA) systems like UMTS and CDMA2000.
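A minimal sketch of the FHSS idea: both ends of a link derive the same pseudo-random hop sequence from shared state and change channels in lockstep, so that a narrowband interferer corrupts only a few hops. The seed-based derivation below is a toy stand-in for the address- and clock-derived sequences that Bluetooth actually uses.

```python
import random

def hop_sequence(n_channels, length, seed):
    """Pseudo-random hop sequence, reproducible from a shared seed."""
    rng = random.Random(seed)
    return [rng.randrange(n_channels) for _ in range(length)]

# Both ends derive the same sequence over 79 channels from the shared seed.
tx = hop_sequence(79, 6, seed=0xC0FFEE)
rx = hop_sequence(79, 6, seed=0xC0FFEE)
print(tx == rx)  # True: both ends hop through the same channels in lockstep
```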
\begin{figure*}[!t]
\centering
\includegraphics[width=0.65\linewidth]{literatureClassif_v3}
\caption{Classification of research work in our literature review in Sections~\ref{litHierFr}~and~\ref{litSpecComm}, where we focus on inter-technology coexistence with \emph{equal} spectrum access rights within a hierarchical regulatory framework (i.e. primary/primary and secondary/secondary) and a flat regulatory framework (i.e. spectrum commons).}
\label{fig_classifLit}
\end{figure*}
Finally, recent interference cancellation techniques at Layer~1, which allow full-duplex communication, i.e. bidirectional for the same link at the same time, are a promising solution to increase spectrum utilization efficiency (see, e.g.~\cite{Kim2015} for full-duplex from the perspective of the PHY and MAC layers, and~\cite{Amjad2017} for full-duplex with cognitive radios). Full-duplex would impact spectrum sharing techniques at Layer~2, which would have to be redesigned (e.g. CSMA/CA for \mbox{Wi-Fi}~\cite{Xie2014}).
We note that full-duplex can be considered a spectrum sharing technique at Layer~1, since it refers to sharing spectrum resources at the link level, by means of PHY techniques. By contrast, other duplexing techniques like FDD and TDD share the resources as determined by regulations at Layer~0.
\subsection{Constraints of Applications at Layer~7}
This layer can have a major impact on the design and performance of spectrum sharing mechanisms, since the specific requirements for each target application in a given network should be reflected in the choice of spectrum sharing technique.
The applications can be grouped in different categories, according to their requirements in terms of traffic volume, delay, and target end-users, as shown in Table~\ref{table_app}.
The application type affects the selected environment where the networks are deployed, their mobility patterns, and thus the interference characteristics. Coexistence in these specific conditions has to be managed by the spectrum sharing mechanisms.
\subsection{Constraints of Business Models and Social Practices at Layer~8}
\label{business_models}
Business models and social practices affect the network deployment likelihood, topology, ownership, and level of coordination. These result in different interference
conditions. A taxonomy of business models is outside the scope of this survey, but we provide some examples to illustrate such interactions between technical and non-technical requirements.
Outdoor public cellular networks are owned and managed by mobile operators, as they have the financial resources to acquire a license for the cellular
bands. Consequently, the spectrum sharing techniques can be centrally coordinated. However, outdoor base station (BS) deployments in private locations, e.g. on top of buildings, are restricted by the existence of an agreement with the building owners. The optimization of the spectrum sharing parameters depends, in turn, on the physical locations of BSs and resulting propagation conditions.
By contrast, in private deployments, e.g. indoor residential \mbox{Wi-Fi}, there are multiple distributed networks, which operate individually, often with the default configuration.
In \mbox{Wi-Fi} business deployments, a higher level of coordination is expected than in private deployments, e.g. for channel allocation or client-access point (AP) association, as there is a single manager configuring the network. Hotspot deployments are a similar example. However, it may also occur that multiple hotspots from different operators transmit over the same spectrum, such that coordination can be achieved within a network managed by a single operator, but not among networks.
Finally, based on Layer~8 considerations we identify inter-technology interactions of two types: (i)~integration and (ii)~competition.
Inter-technology \emph{integration} refers to different technologies that interconnect, in order to increase capacity, or extend the range of the offered services, e.g. carrier \mbox{Wi-Fi} (i.e. integration of \mbox{Wi-Fi} into the 3GPP cellular networks for data offloading purposes~\cite{He2016, Rebecchi2015}); standardization of \mbox{LAA-LTE} operating in the 5~GHz unlicensed band for capacity increase; and NB-IoT in LTE Advanced Pro (i.e. Release 13) for supporting device-to-device (D2D) IoT applications.
Inter-technology \emph{competition} occurs among different technologies that share the same spectrum, but for their individual offered services, e.g. secondary technologies operating within hierarchical regulatory frameworks; IEEE 802.15.4, Bluetooth, and \mbox{Wi-Fi} sharing the 2.4 GHz unlicensed band; LTE-U, LAA, and \mbox{Wi-Fi} sharing the 5~GHz unlicensed band.
Importantly, interactions of the competition type lead to the most challenging inter-technology coexistence cases, where optimizing the overall spectrum utility is not trivial, due to the potentially greedy or conflicting individual goals for each technology.
\subsection{Literature Review Structure}
In the following sections we present a review of the literature addressing inter-technology coexistence with equal spectrum access rights and we classify the work according to different layers of the technology circle, as shown in Fig.~\ref{fig_classifLit}. We first differentiate the work based on the regulatory framework at Layer~0, i.e. hierarchical in Section~\ref{litHierFr}, and flat in Section~\ref{litSpecComm}. For the hierarchical regulatory framework, we distinguish coexistence between \emph{primary/primary} (Section~\ref{litPrimPrim}) and \emph{secondary/secondary} (Section~\ref{litSecSec}) technologies. Although this is not the focus of our survey, in Section~\ref{litPrimSec} we also give some illustrative examples of \emph{primary/secondary} coexistence from the recent literature, in order to show how increasingly more bands are being considered for operation of multiple technologies.
For coexistence within a flat regulatory framework, where all technologies have the same rights (i.e. \emph{spectrum commons}), we classify the work further based on the Application Layer~7, i.e. \emph{low-traffic} and \emph{broadband} technologies. For each of the identified categories we review the spectrum sharing mechanisms at Layers~2~and~1.
\section{Literature Review of Inter-Technology Spectrum Sharing within a Hierarchical Regulatory Framework}
\label{litHierFr}
This section focuses on inter-technology coexistence within a hierarchical regulatory framework. We first review existing work on inter-technology coexistence with equal spectrum rights, i.e. primary/primary in Section~\ref{litPrimPrim} and secondary/secondary in Section~\ref{litSecSec}. Then we give some examples of work on primary/secondary coexistence in Section~\ref{litPrimSec}.
We note that primary/secondary inter-technology coexistence has already been extensively addressed in previous surveys, e.g.~\cite{Paisana2014, Tehrani2016, Akyildiz2006, Akyildiz2008, Zhao2007, Wang2008, Yucek2009, Gavrilovska2014, Ren2012}, and so we give only a few representative examples from the recent literature, in order to show that there is an increasing number of bands considered for operation of multiple technologies, which may open the possibility to also accommodate inter-technology coexistence with equal spectrum rights in the future.
Finally, in Section~\ref{hier_frame_sum} we summarize and discuss the main findings.
\begin{table*}[!t]
\caption{Literature review of inter-technology spectrum sharing with equal rights, within a hierarchical regulatory framework}
\label{table_review_1}
\centering
\begin{tabular}{|p{2.2cm}|p{2cm}|p{0.61cm}|p{5cm}|p{2.3cm}|p{3.2cm}|}
\hline
\centering \textbf{Spectrum Rights at Layer~0}
& \centering \textbf{Technologies}
& \centering \textbf{Ref.}
& \centering \textbf{Coexistence at Layer~2}
& \centering \textbf{Coexistence at Layer~1}
& \centering\let\newline\\\arraybackslash\hspace{0pt} \textbf{Coordination at Layer~2 based on constraints at Layer~8} \\
\hline
\multirow{2}{*}{primary/primary} & \multirow{2}{2cm}{LTE/NB-IoT} & \cite{Mangalvedhe2016} & \textbf{LTE}: resource blanking in time and frequency; \textbf{NB-IoT}: power boosting & -- & centralized\\
\cline{3-6}
& & \cite{Wang2016} & \textbf{both technologies}: adjacent frequencies & -- & -- \\
\hline
\multirow{7}{*}{secondary/secondary} & GAA users/GAA users in the CBRS band & \cite{Sahoo2017, Ying2017} & \textbf{both technologies}: channel allocation through SAS & -- & centralized \\
\cline{2-6}
& \multirow{2}{2cm}{IEEE 802.22/802.11af in TVWS} & \cite{Kang2011} & \textbf{802.11af}: likely CSMA/CA & -- & --\\
\cline{3-6}
& & \cite{Feng2013} & \textbf{802.11af}: CSMA/CA; \textbf{802.22}: busy tone & \textbf{802.11af}: signal pattern comparison & distributed \\
\cline{2-6}
& \multirow{3}{2cm}{\mbox{Wi-Fi}/LTE in TVWS} & \cite{Cavalcante2013, Paiva2013} & \textbf{\mbox{Wi-Fi}}: CSMA/CA~\cite{Cavalcante2013}; \textbf{LTE}: none & -- & distributed \\
\cline{3-6}
& & \cite{Almeida2013} & \textbf{\mbox{Wi-Fi}}: CSMA/CA; \textbf{LTE}: fixed duty cycle (0--80\%) with different subframe blanking patterns & -- & distributed \\
\cline{3-6}
& & \cite{Beluri2012} & \textbf{\mbox{Wi-Fi}}: CSMA/CA; \textbf{LTE}: fixed and adaptive duty cycle, LBT, channel selection & -- & distributed \\
\hline
\end{tabular}
\end{table*}
\subsection{Primary/Primary Coexistence}
\label{litPrimPrim}
This section reviews the literature on primary/primary inter-technology coexistence, as summarized in Table~\ref{table_review_1}.
We first present a literature overview in Section~\ref{litprimprim}. We then review in detail the work on LTE/NB-IoT coexistence in Section~\ref{ltenbiot}.
\subsubsection{\textbf{Literature Overview}}
\label{litprimprim}
Primary/primary inter-technology coexistence was considered in the literature for different technologies that are integrated, such that exclusive spectrum access rights at Layer~0 are assigned to a single entity that deploys and manages at Layer~8 a multi-technology network in the same spectrum band, e.g. cellular networks that incorporate LTE and NB-IoT.
As such, designing inter-technology spectrum sharing mechanisms is less challenging and only a few papers addressed this by considering centralized mechanisms specific to single-technology cellular networks, i.e. channel allocation, power control, resource blanking.
\subsubsection{\textbf{LTE/NB-IoT}}
\label{ltenbiot}
The authors in~\cite{Mangalvedhe2016} identified interference problems occurring when LTE coexists with an in-band \mbox{NB-IoT} deployment (i.e. both technologies use the same subcarriers) of the same operator, for the case where only some of the BSs are \mbox{NB-IoT-capable}. NB-IoT devices could thus associate to only some BSs, such that they may suffer from strong interference from BSs that are only LTE-capable.
As coexistence solutions, the authors investigated power boosting, i.e. increasing the downlink power for the NB-IoT resource blocks compared to that for LTE resource blocks; and resource blanking, i.e. not scheduling LTE transmissions on resource blocks that are used for NB-IoT by neighbouring BSs.
The simulation results in~\cite{Mangalvedhe2016} showed that LTE resource blanking was an efficient method to avoid co-channel interference for NB-IoT users.
We note that this technique is essentially a dynamic variant of spectrum sharing in time and frequency among different BSs.
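A minimal sketch of such resource blanking: the LTE scheduler builds a time-frequency mask and leaves unscheduled the resource blocks that neighbouring cells use for NB-IoT. The grid sizes and blanked positions below are illustrative, not the configuration evaluated in~\cite{Mangalvedhe2016}.

```python
def blanked_schedule(n_subframes, n_rbs, nbiot_rbs, nbiot_subframes):
    """Build an LTE scheduling mask that leaves the resource blocks used by
    neighbouring NB-IoT cells unscheduled (True = LTE may transmit)."""
    mask = [[True] * n_rbs for _ in range(n_subframes)]
    for sf in nbiot_subframes:
        for rb in nbiot_rbs:
            mask[sf][rb] = False
    return mask

mask = blanked_schedule(n_subframes=4, n_rbs=6,
                        nbiot_rbs=[0], nbiot_subframes=[1, 3])
print(sum(not x for row in mask for x in row))  # 2 blanked resource blocks
```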
By contrast, the authors in~\cite{Wang2016} considered LTE/NB-IoT coexistence for the complementary case, where the two technologies transmitted on different frequency channels. The effects of ACI were evaluated for different filter capabilities of the transmitter (i.e. ACLR) and of the receiver (i.e. ACS). The authors found through simulations that the effect of ACI on the LTE and NB-IoT networks was in general negligible.
\subsection{Secondary/Secondary Coexistence}
\label{litSecSec}
This section reviews the literature on secondary/secondary inter-technology coexistence, as summarized in Table~\ref{table_review_1}. We first present a literature overview in Section~\ref{lit_ov_ss}. We then review in detail the work on: (i)~the newly-available CBRS band in the U.S. in Section~\ref{CBRS_review}; and (ii)~TVWS in Section~\ref{TVWS_review}.
\subsubsection{\textbf{Literature Overview}}
\label{lit_ov_ss}
Only a few works have addressed secondary/secondary inter-technology coexistence in the CBRS band and they considered centralized channel allocation through a database. However, they only presented preliminary results and the addressed allocation issue is similar to that in any other centrally managed network.
Most of the work focusing on secondary/secondary inter-technology coexistence in TVWS assumed that protection of the primary technology had been met, such that the addressed coexistence issues are in fact equivalent to those in a spectrum commons.
We thus emphasize that secondary/secondary inter-technology coexistence mechanisms in the literature are similar to either those used for primary/primary or spectrum commons coexistence.
\subsubsection{\textbf{CBRS band}}
\label{CBRS_review}
As discussed in Section~\ref{reg_framework}, spectrum access in the CBRS band is managed through SASs, where there is a three-layer hierarchical regulatory framework for access rights. An example of coexisting secondary technologies is coexistence among GAA users of different technologies. This was briefly addressed in~\cite{Sahoo2017, Ying2017}, which are short poster papers that provide at most preliminary results.
The authors in~\cite{Sahoo2017} proposed schemes for fair allocation of the channels among GAA users managed by a SAS, i.e. static and max-min fair allocations.
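As an illustration of the max-min fairness objective (not the specific scheme of~\cite{Sahoo2017}), the sketch below allocates channels to GAA networks round-robin, which for equal demands maximizes the minimum number of channels any network receives.

```python
def max_min_fair_channels(channels, networks):
    """Round-robin allocation: for equal, divisible demands this maximizes the
    minimum number of channels any network receives (max-min fairness)."""
    alloc = {n: [] for n in networks}
    for i, ch in enumerate(channels):
        alloc[networks[i % len(networks)]].append(ch)
    return alloc

alloc = max_min_fair_channels([36, 40, 44, 48, 52], ["A", "B", "C"])
print(alloc)  # {'A': [36, 48], 'B': [40, 52], 'C': [44]}
```

No network is more than one channel ahead of any other, which is the max-min fair outcome for identical demands.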
In~\cite{Ying2017} another scheme was proposed for the SAS to allocate channels dynamically to coexisting GAA devices, but this required the devices to perform carrier sensing and was based on graph theory and the transmission activity of each device. Examples of such users included \mbox{Wi-Fi}, \mbox{LTE-U}, LAA.
Since channel allocation is centrally performed by the SAS, we emphasize that such allocation problems are similar to those in other centralized networks, e.g. cellular networks.
\subsubsection{\textbf{TVWS}}
\label{TVWS_review}
The authors in~\cite{Kang2011, Feng2013} addressed coexistence between IEEE 802.11af and IEEE 802.22 in TVWS. We note that IEEE 802.11af accesses the spectrum based on CSMA/CA, whereas IEEE 802.22 implements scheduled transmissions.
In~\cite{Kang2011} an evaluation of co-channel interference from IEEE 802.11af to 802.22 was presented, where no additional inter-technology coexistence mechanism was implemented, and it was found via OPNET simulations that the IEEE 802.22 upstream throughput was severely degraded. No results were presented for IEEE 802.11af. Also, it is not clear to what extent CSMA/CA for 802.11af was implemented in~\cite{Kang2011}.
The authors in~\cite{Feng2013} proposed implementing a busy tone by the IEEE 802.22 nodes, in order to avoid 802.11af hidden nodes. Additionally, IEEE 802.11af compared the signal pattern of the busy tone and the 802.22 signal, in order to detect 802.22 exposed terminals. The proposed scheme was shown via simulations to provide an increase in the aggregate throughput over the case without busy tone, especially for high traffic loads.
\begin{table*}[!t]
\caption{Examples from the literature on inter-technology coexistence with primary/secondary spectrum access rights}
\label{table_review_2}
\centering
\begin{tabular}{|p{3cm}|c|p{4cm}|p{4cm}|p{3cm}|}
\hline
\centering \textbf{Technologies}
& \centering \textbf{Ref.}
& \centering \textbf{Coexistence for Secondary Technology at Layer~0}
& \centering \textbf{Coexistence for Secondary Technology at Layer~2}
& \centering\let\newline\\\arraybackslash\hspace{0pt} \textbf{Coordination based on constraints at Layer~8}\\
\hline
shipborne radars/CBRS devices & \cite{Nguyen2017}, \cite{Palola2017} & in frequency and space based on SAS database and additional sensing network & -- & centralized\\
\hline
(non-)governmental incumbents/LTE & \cite{Guiducci2017} & time, frequency, and space based on an LSA system & -- & centralized\\
\hline
radar/LTE & \cite{Labib2016} & channels restricted for indoor use & LBT, TPC, DFS & distributed\\
\hline
radar/IoT & \cite{Khan2017} & frequency, time, and space based on REM and SA database & -- & centralized\\
\hline
\multirow{3}{3cm}{IEEE 802.11p DSRC (ITS)/\mbox{Wi-Fi}} & \cite{Lansford2013} & -- & standardized CSMA/CA & distributed\\
\cline{2-5}
& \cite{Naik2017} & -- & real-time channelization & distributed\\
\cline{2-5}
& \cite{Khan2017a} & -- & LBT with lower priority; reduced \mbox{Wi-Fi} transmit power & distributed \\
\hline
TV/LTE & \cite{Holland2014}, \cite{Ibuka2015} & frequency, time, space through database & -- & centralized\\
\hline
TV/IEEE 802.11af & \cite{Mizutani2015, Holland2014} & frequency, time, space through database & -- & centralized\\
\hline
TV/next-generation cognitive radio TV (ATSC 3.0) & \cite{Rempe2017} & time, frequency, space based on database & spectrum sensing & centralized \\
\hline
\end{tabular}
\end{table*}
A number of papers addressed \mbox{Wi-Fi}/LTE coexistence in TVWS~\cite{Beluri2012, Almeida2013, Cavalcante2013, Paiva2013}. Importantly, these papers did not consider the incumbent TV transmissions, so the addressed coexistence problem and the proposed solutions are the same as for \mbox{Wi-Fi}/LTE coexistence in the 5~GHz unlicensed band, which has been extensively studied in the literature and is reviewed in Section~\ref{litrev_hightraffic} and summarized in Table~\ref{table_review_4}.
In \cite{Cavalcante2013} an evaluation of the impact of \mbox{Wi-Fi}/LTE coexistence in TVWS was presented, where LTE did not implement any inter-technology coexistence mechanism. The authors found via simulations that \mbox{Wi-Fi} was severely affected, due to its CSMA/CA mechanism through which \mbox{Wi-Fi} deferred to LTE, whereas LTE transmitted almost continuously.
By contrast, \cite{Paiva2013} evaluated the mutual interference between \mbox{Wi-Fi} and LTE at the PHY layer only (i.e. CSMA/CA was not modelled) and found via simulations that the performance of both technologies was degraded.
The authors in~\cite{Almeida2013} simulated blank subframe allocation for LTE to coexist with \mbox{Wi-Fi} in TVWS, with fixed duty cycle and different blank subframe patterns. Their main finding was that there was a tradeoff between \mbox{Wi-Fi} and LTE performance and that duty cycle tuning depended on deployment and requirements.
The authors in~\cite{Beluri2012} proposed fixed and adaptive duty cycle for LTE when coexisting with \mbox{Wi-Fi} and compared these schemes with LBT through simulations. Additionally, LTE could select less loaded channels. The authors found that LBT was more efficient than duty cycle for high traffic load, but claimed that LBT was not justified, given that LTE would likely avoid loaded channels.
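The duty cycle vs. LBT trade-off discussed above can be made concrete with a toy slotted-channel model in Python; all parameters below (slot counts, loads, duty cycle, period) are invented for illustration and are not those of~\cite{Beluri2012}:

```python
import random

def simulate(lte_mode, wifi_load, slots=100_000, duty=0.5, period=10):
    """Toy slotted channel: in each slot, Wi-Fi transmits with prob. wifi_load.
    LTE follows either a blind duty cycle or LBT (defer if Wi-Fi is busy)."""
    random.seed(0)
    wifi_ok = lte_tx = collisions = 0
    for t in range(slots):
        wifi = random.random() < wifi_load
        if lte_mode == "duty":
            lte = (t % period) < duty * period   # on/off pattern, ignores Wi-Fi
        else:                                    # "lbt"
            lte = not wifi                       # transmit only on sensed-idle slots
        lte_tx += lte
        if wifi and lte:
            collisions += 1                      # both lose the slot
        elif wifi:
            wifi_ok += 1                         # Wi-Fi slot succeeds
    return wifi_ok / slots, collisions / max(lte_tx, 1)

for mode in ("duty", "lbt"):
    for load in (0.1, 0.8):
        ok, col = simulate(mode, load)
        print(f"{mode:4} load={load}: Wi-Fi success={ok:.2f}, LTE collision share={col:.2f}")
```

Under this simplified model LBT never collides with \mbox{Wi-Fi}, whereas a fixed duty cycle collides in proportion to the \mbox{Wi-Fi} load, consistent with the finding in~\cite{Beluri2012} that LBT is more efficient at high traffic load.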
We note that the overall results reported for \mbox{Wi-Fi}/LTE coexistence in TVWS are consistent with those for \mbox{Wi-Fi}/LTE coexistence in the 5~GHz unlicensed band in Section~\ref{litrev_hightraffic}.
\subsection{Primary/Secondary Coexistence}
\label{litPrimSec}
This section presents some representative examples from the recent literature on primary/secondary inter-technology coexistence, as summarized in Table~\ref{table_review_2}, illustrating that an increasing number of bands are opening for multiple technologies.
This trend is relevant to our survey, as it can also potentially lead to an increasing number of coexistence cases among technologies with the same spectrum access rights, i.e. secondary/secondary, or even to opening more bands as a spectrum commons in the future.
We note that an extensive survey of primary/secondary coexistence is not our focus, but the interested reader may refer to e.g.~\cite{Paisana2014, Tehrani2016, Akyildiz2006, Akyildiz2008, Zhao2007, Wang2008, Yucek2009, Gavrilovska2014, Ren2012}.
We first present a literature overview in Section~\ref{litovprimsec}.
We then present in detail works addressing coexistence in: the CBRS band in Section~\ref{revcbrs}; LSA bands in Section~\ref{revlsa}; radar bands in Section~\ref{revradar}; the 5.9~GHz band allocated to Dedicated Short-Range Communications (DSRC) in Section~\ref{revdsrc}; and TVWS in Section~\ref{revtvws}.
\subsubsection{\textbf{Literature Overview}}
\label{litovprimsec}
Most of the proposed spectrum sharing mechanisms are based on central coordination, due to the constraints at Layers~0~and~8. We note that for most of the primary/secondary coexistence cases, spectrum sharing is implemented at Layer~0 through a database that imposes fundamental restrictions on the way that the secondary technologies access the spectrum. However, for some coexistence cases, e.g. DSRC/\mbox{Wi-Fi}, distributed mechanisms were implemented at Layer~2. We note, however, that opening the DSRC band for \mbox{Wi-Fi} is still under discussion~\cite{FCC2016}, so it has not yet been clarified through regulation what level of protection must be offered to DSRC.
\subsubsection{\textbf{CBRS Band}}
\label{revcbrs}
The authors in~\cite{Nguyen2017} considered via simulations the case of incumbent shipborne radars coexisting with secondary CBRS devices, for which an additional sensing network had to detect the incumbents and report their presence to the SAS.
Several algorithms were proposed for determining the sensing capabilities of these sensors and their placement.
The work in~\cite{Palola2017} addressed a similar coexistence case and experimentally evaluated the evacuation time and the reconfiguration performance in the CBRS band for an SAS, where the incumbents were shipborne radars and the secondary users implemented LTE.
\subsubsection{\textbf{LSA Bands}}
\label{revlsa}
The first large-scale LSA implementation was presented in~\cite{Guiducci2017}, where an LTE deployment coexisted with several incumbents in the 2.3--2.4~GHz band (e.g. fixed services, Programme-Making and Special Events -- PMSE -- video links). Several drive tests and simulations were conducted for functional and regulatory compliance verification.
\subsubsection{\textbf{Radar Bands}}
\label{revradar}
The authors in~\cite{Labib2016} reviewed the spectrum sharing techniques imposed by regulators for LTE (but also valid for any other technology) to coexist with radars in the 5~GHz band.
The work in~\cite{Khan2017} proposed coexistence of IoT and rotating radars through an SAS with radio environmental maps, which shared the spectrum in frequency, time, and space. Results from a measurement campaign on spectrum usage by rotating radars were presented, in order to show the coexistence potential with IoT.
\subsubsection{\textbf{DSRC/Wi-Fi}}
\label{revdsrc}
The authors in~\cite{Lansford2013, Naik2017, Khan2017a} addressed coexistence between DSRC (or Intelligent Transportation Systems -- ITS) devices and \mbox{Wi-Fi} in the 5.9~GHz band, which is currently under consideration for becoming open to \mbox{Wi-Fi} operations.
In~\cite{Lansford2013} potential DSRC/\mbox{Wi-Fi} coexistence issues were discussed, if \mbox{Wi-Fi} implemented its original CSMA/CA coexistence mechanism.
The authors in~\cite{Naik2017} proposed a real-time channelization algorithm for IEEE 802.11ac \mbox{Wi-Fi} to coexist with DSRC devices, where the \mbox{Wi-Fi} APs selected a primary channel and bandwidth, such that the \mbox{Wi-Fi} throughput was maximized. Both experimental and simulation results were presented, showing that the \mbox{Wi-Fi} throughput was increased via the proposed scheme compared to static channel allocation.
In \cite{Khan2017a} the performance of two mechanisms proposed in Europe for \mbox{Wi-Fi} to coexist with ITS was evaluated via simulations. Both mechanisms were based on LBT, as follows: the first mechanism used longer sensing durations when detecting ITS; the second mechanism probed for hidden ITS stations, and was able to vacate the channel. The authors found that there were three ITS transmitter-receiver distance ranges, corresponding to different coexistence characteristics: for short distances there were no coexistence problems; for medium distances outdoor \mbox{Wi-Fi} coexisted better than indoor \mbox{Wi-Fi}; and for long distances the ITS packet loss was high, but this was not considered problematic for safety applications.
\subsubsection{\textbf{TVWS}}
\label{revtvws}
The works in~\cite{Mizutani2015, Holland2014, Ibuka2015, Rempe2017} addressed coexistence with primary TV services, where spectrum resources were shared through a centralized database, in the frequency, time, and space domains.
The authors in~\cite{Mizutani2015, Holland2014, Ibuka2015} experimentally verified the correct operation and performance of IEEE 802.11af and/or LTE in TVWS, through Ofcom's TVWS trial pilot program.
In~\cite{Rempe2017} a different topic was addressed, i.e. an implementation of a next-generation cognitive radio TV based on the ATSC 3.0 standard, coexisting with legacy TV devices. We note that~\cite{Rempe2017} is a short poster paper that did not present performance evaluation results.
\subsection{Summary \& Insights}
\label{hier_frame_sum}
There are few works that have addressed inter-technology coexistence with equal spectrum access rights within a hierarchical regulatory framework, i.e. primary/primary and secondary/secondary. Specifically, for primary/primary coexistence only integrated LTE/NB-IoT deployments have been considered, where centralized spectrum sharing mechanisms were implemented similarly to single-technology cellular networks. Resource blanking, i.e. sharing in time and frequency with fine granularity, was found to be efficient. We note that this technique has the advantage of already being standardized for LTE. Furthermore, ACI from LTE had a negligible effect on the network performance.
For secondary/secondary coexistence, the spectrum sharing mechanisms were proposed to be implemented either in a centralized manner via databases, or in a distributed manner as for a spectrum commons like the unlicensed bands. We note that centralized spectrum sharing can be applied in a straightforward way due to the requirement that secondary devices cooperate in any case with the database, in order to protect the primary. However, managing resource allocation also among secondary devices increases the computational effort for the database and is less dynamic with respect to the offered traffic, i.e. higher delays are expected due to the communication overhead between secondary devices and the database.
By contrast, distributed spectrum sharing is implemented directly in the wireless secondary devices. It was found that sharing in frequency (i.e. channel selection) is an efficient way to protect different technologies from each other. For co-channel coexistence, LBT performed better than duty cycling for high traffic load, but duty cycling may be sufficient. We note that the choice of implementing LBT or duty cycling may also depend on the required changes for existing standards. For instance, \mbox{Wi-Fi} already implements CSMA/CA as an LBT variant, whereas for LTE rather complex changes were needed to implement LBT in LAA. Implementing duty cycling for LTE via the already standardized resource blanking is a more straightforward technical solution.
Primary/secondary coexistence is not the focus of this survey, but a few examples from the literature were presented, in order to show the large number of bands that are being targeted by multiple technologies, where coexistence with the same access rights may also become an issue in the future. Such bands are the 3.5~GHz CBRS band in the U.S., the 2.3--2.4~GHz LSA band in Europe, 5~GHz radar bands, the 5.9~GHz DSRC band, and TVWS. Spectrum sharing for primary/secondary coexistence was implemented either in a centralized manner via databases, or via distributed sensing mechanisms. Sharing via databases is considered safer for protecting primary technologies in TVWS, the CBRS band, or the LSA bands, especially since different technologies may be deployed as secondary ones. By contrast, secondary \mbox{Wi-Fi} devices could protect primary DSRC devices by implementing distributed channel selection or sensing mechanisms, as \mbox{Wi-Fi} already implements variants of such mechanisms.
\begin{table*}[!t]
\modcounter
\caption{Literature review of inter-technology spectrum sharing with low-traffic technologies in a spectrum commons}
\label{table_review_3}
\centering
\begin{tabular}{|p{2.3cm}|p{1cm}|p{5cm}|p{4cm}|p{3cm}|}
\hline
\centering \textbf{Technologies}
& \centering \textbf{Ref.}
& \centering \textbf{Coexistence at Layer~2}
& \centering \textbf{Coexistence at Layer~1}
& \centering\let\newline\\\arraybackslash\hspace{0pt} \textbf{Coordination at Layer~2 based on constraints at Layer~8}\\
\hline
\multirow{6}{2.3cm}{\mbox{Wi-Fi}/ IEEE~802.15.4} & \cite{Pollin2008, Yuan2007, SikoraOttawa2005, Shuaib2006, PetrovaLasVegas2006} & \textbf{\mbox{Wi-Fi}}: CSMA/CA; \textbf{802.15.4}: CSMA/CA & -- & distributed \\
\cline{2-5}
& \cite{Angrisani2008} & \textbf{\mbox{Wi-Fi}}: CSMA/CA; \textbf{802.15.4}: CSMA/CA, polling & -- & distributed \\
\cline{2-5}
& \cite{Howitt2003} & \textbf{both}: frequency selection & -- & distributed or centralized \\
\cline{2-5}
& \cite{Won2005} & \textbf{\mbox{Wi-Fi}}: CSMA/CA; \textbf{802.15.4}: adaptive channel allocation & -- & local coordination for 802.15.4\\
\cline{2-5}
& \cite{Petrova2007} & \textbf{both}: CSMA/CA, static channel allocation & \textbf{\mbox{Wi-Fi}}: beamforming & distributed \\
\cline{2-5}
& \cite{Hauer2009} & \textbf{\mbox{Wi-Fi}}: CSMA/CA; \textbf{802.15.4}: CSMA/CA, adaptive power control & -- & distributed \\
\hline
\multirow{6}{2.3cm}{\mbox{Wi-Fi}/Bluetooth} & \cite{Shuaib2006, Sydanheimo2002, Lansford2001, Golmie2003a} & \textbf{\mbox{Wi-Fi}}: CSMA/CA & \textbf{Bluetooth}: FHSS & distributed \\
\cline{2-5}
& \cite{Chiasserini2002} & \textbf{\mbox{Wi-Fi}}: CSMA/CA; \textbf{both}: MAC traffic scheduling & \textbf{Bluetooth}: FHSS & collaborative or non-collaborative \\
\cline{2-5}
& \cite{Golmie2003s} & \textbf{\mbox{Wi-Fi}}: CSMA/CA; \textbf{Bluetooth}: scheduling & \textbf{Bluetooth}: adaptive FHSS & Bluetooth local coordination \\
\cline{2-5}
& \cite{Park2003} & -- & \textbf{\mbox{Wi-Fi}}: coded OFDM; \textbf{Bluetooth}: FHSS & distributed \\
\cline{2-5}
& \cite{Arumugam2003} & -- & \textbf{\mbox{Wi-Fi}}: weighting sub-carriers; \textbf{Bluetooth}: FHSS, antenna diversity & distributed \\
\cline{2-5}
& \cite{Ghosh2003} & -- & \textbf{\mbox{Wi-Fi}}: interference cancellation against Bluetooth & distributed \\
\hline
IEEE~802.15.4/ Bluetooth & \cite{SikoraOttawa2005} & \textbf{802.15.4}: CSMA/CA & \textbf{Bluetooth}: FHSS & distributed \\
\hline
IEEE 802.15.4/ microwave oven & \cite{SikoraOttawa2005} & \textbf{802.15.4}: CSMA/CA & -- & distributed \\
\hline
Bluetooth/ \{WCAM, RFID, microwave oven\} & \cite{Sydanheimo2002} & -- & \textbf{Bluetooth}: FHSS & distributed \\
\hline
\mbox{Wi-Fi}/LTE D2D & \cite{Wu2016} & \textbf{LTE D2D}: LBT, interference avoidance routing, switch to licensed band & -- & distributed\\
\hline
5G/IEEE~802.15.4 & \cite{Lackpour2017} & -- & \textbf{5G}: non-contiguous-OFDM, reconfigurable antennas & distributed \\
\hline
LTE/ZigBee & \cite{Parvez2016} & \textbf{LTE}: two 0.5 ms guard periods per frame; \textbf{802.15.4}: CSMA/CA & -- & distributed \\
\hline
IEEE 802.15.4/any interfering signal & \cite{Vermeulen2017} & -- & \textbf{802.15.4:} collision detection at transmitter with full duplex (self-interference cancellation) & distributed \\
\hline
\end{tabular}
\end{table*}
\begin{table*}[!t]
\addtocounter{table}{-1}
\modcounter
\caption{Inter-technology coexistence goals and performance for literature review of spectrum sharing with low-traffic technologies in a spectrum commons in Table~\ref{table_review_3}}
\label{table_review_3b}
\centering
\begin{tabular}{|p{2.3cm}|p{4cm}|p{2.2cm}|p{3cm}|p{3.5cm}|}
\hline
\centering \multirow{2}{*}{\textbf{Technologies}}
& \centering \multirow{2}{*}{\textbf{Coexistence Goals}}
& \multicolumn{3}{c|}{\textbf{Performance Evaluation}}\\
\cline{3-5}
&
& \centering \textbf{method}
& \centering \textbf{metric}
& \centering\let\newline\\\arraybackslash\hspace{0pt} \textbf{network size} \\
\hline
\begin{tabular}{p{2cm}}
\mbox{Wi-Fi}/ IEEE~802.15.4 \\ \\ \\ \\
\scriptsize\cite{Pollin2008, Howitt2003, Yuan2007, SikoraOttawa2005, PetrovaLasVegas2006, Petrova2007, Hauer2009, Shuaib2006, Angrisani2008, Won2005}
\end{tabular}
& \begin{tabular}{p{3.3cm}}
\emph{Impact on \mbox{Wi-Fi}}\\
$-$\textbf{(implicitly) vs. standalone} \scriptsize\cite{Pollin2008, Howitt2003, Shuaib2006, Angrisani2008, PetrovaLasVegas2006} \\
\hline
\emph{Impact on IEEE 802.15.4}\\
$-$\textbf{(implicitly) vs. standalone} \scriptsize\cite{Yuan2007, SikoraOttawa2005, PetrovaLasVegas2006, Petrova2007, Hauer2009, Shuaib2006, Angrisani2008, Won2005} \\
\hline
\emph{Other}\\
$-$\textbf{\mbox{Wi-Fi} packet error rate below 8\%} \scriptsize\cite{Howitt2003} \\
$-$\textbf{solve performance degradation of 802.15.4} \scriptsize\cite{Won2005}
\end{tabular}
& \begin{tabular}{p{2cm}}
$-$\textbf{measurements} \scriptsize\cite{Pollin2008, SikoraOttawa2005, PetrovaLasVegas2006, Petrova2007, Hauer2009, Shuaib2006, Angrisani2008, Won2005}\\ \\
$-$\textbf{analytical} \scriptsize\cite{Howitt2003, Yuan2007}\\ \\
$-$\textbf{simulations} \scriptsize\cite{Yuan2007, Won2005}
\end{tabular}
& \begin{tabular}{p{2.8cm}}
$-$\textbf{throughput} \scriptsize\cite{Pollin2008, Yuan2007, Shuaib2006} \\
$-$\textbf{packet error rate/loss} \scriptsize\cite{Pollin2008, Howitt2003, SikoraOttawa2005, Hauer2009, Angrisani2008, PetrovaLasVegas2006}\\
$-$\textbf{packet delivery ratio/success rate} \scriptsize\cite{Petrova2007, Hauer2009, Won2005}\\
$-$\textbf{received power} \scriptsize\cite{Petrova2007}\\
$-$\textbf{channel power} \scriptsize\cite{Angrisani2008}\\
$-$\textbf{SIR} \scriptsize\cite{Angrisani2008}\\
$-$\textbf{delay} \scriptsize\cite{Won2005}
\end{tabular}
& \begin{tabular}{p{3.3cm}}
$-$~\textbf{1 link of each technology} \scriptsize\cite{Pollin2008, Yuan2007, SikoraOttawa2005, PetrovaLasVegas2006, Petrova2007, Hauer2009, Shuaib2006, Won2005} \\ \\
$-$~\textbf{1 \mbox{Wi-Fi} link \& several 802.15.4 devices} \scriptsize\cite{Howitt2003, Angrisani2008}\\ \\
$-$~\textbf{100 802.15.4 devices and abstract interference} \scriptsize\cite{Won2005}
\end{tabular} \\
\hline
\begin{tabular}{p{2cm}}
\mbox{Wi-Fi}/Bluetooth \\ \\ \\ \\
\scriptsize\cite{Shuaib2006, Sydanheimo2002, Lansford2001, Golmie2003a, Chiasserini2002, Golmie2003s, Park2003, Arumugam2003, Ghosh2003}
\end{tabular}
& \begin{tabular}{p{3.3cm}}
\emph{Impact on \mbox{Wi-Fi}}\\
$-$\textbf{vs. standalone} \scriptsize\cite{Shuaib2006, Sydanheimo2002, Lansford2001, Golmie2003a, Park2003, Arumugam2003} \\
$-$\textbf{vs. coexistence without additional spectrum sharing mechanisms} \scriptsize\cite{Chiasserini2002, Golmie2003s, Ghosh2003}\\
\hline
\emph{Impact on Bluetooth}\\
$-$\textbf{vs. standalone} \scriptsize\cite{Shuaib2006, Sydanheimo2002, Lansford2001, Golmie2003a, Arumugam2003} \\
$-$\textbf{vs. coexistence without additional spectrum sharing mechanisms} \scriptsize\cite{Chiasserini2002, Golmie2003s} \\
$-$\textbf{vs. other coexistence mechanisms} \scriptsize\cite{Golmie2003s}\\
\end{tabular}
& \begin{tabular}{p{2cm}}
$-$\textbf{measurements} \scriptsize\cite{Shuaib2006, Sydanheimo2002, Lansford2001}\\ \\
$-$\textbf{analytical} \scriptsize\cite{Park2003}\\ \\
$-$\textbf{simulations} \scriptsize\cite{Lansford2001, Golmie2003a, Chiasserini2002, Golmie2003s, Arumugam2003, Ghosh2003}
\end{tabular}
& \begin{tabular}{p{2.8cm}}
$-$\textbf{throughput} \scriptsize\cite{Shuaib2006, Sydanheimo2002, Lansford2001} \\
$-$\textbf{packet error rate/loss} \scriptsize\cite{Sydanheimo2002, Golmie2003a, Golmie2003s, Arumugam2003, Ghosh2003}\\
$-$\textbf{delay} \scriptsize\cite{Chiasserini2002, Golmie2003s}\\
$-$\textbf{jitter} \scriptsize\cite{Golmie2003s} \\
$-$\textbf{goodput} \scriptsize\cite{Chiasserini2002, Golmie2003s} \\
$-$\textbf{bit error probability} \scriptsize\cite{Park2003}
\end{tabular}
& \begin{tabular}{p{3.3cm}}
$-$~\textbf{1 link of each technology} \scriptsize\cite{Shuaib2006, Sydanheimo2002, Lansford2001, Park2003, Arumugam2003, Ghosh2003}\\ \\
$-$~\textbf{1 Bluetooth link \& up to 2 \mbox{Wi-Fi} links} \scriptsize\cite{Golmie2003s}\\ \\
$-$~\textbf{up to 10 \mbox{Wi-Fi} devices and several Bluetooth links} \scriptsize\cite{Golmie2003a, Chiasserini2002}
\end{tabular} \\
\hline
\begin{tabular}{p{2cm}}
IEEE~802.15.4/ Bluetooth \\
\scriptsize\cite{SikoraOttawa2005}
\end{tabular}
& study mutual impact on both technologies (implicitly vs. standalone) & measurements & packet loss & two Bluetooth links and one 802.15.4 link \\
\hline
\begin{tabular}{p{2cm}}
IEEE 802.15.4/ microwave oven \\
\scriptsize\cite{SikoraOttawa2005}
\end{tabular}
& study impact on 802.15.4 (implicitly vs. standalone) & measurements & packet loss & one 802.15.4 link and one microwave oven \\
\hline
\begin{tabular}{p{2cm}}
Bluetooth/ \{WCAM, RFID, microwave oven\} \\
\scriptsize\cite{Sydanheimo2002}
\end{tabular}
& study impact on Bluetooth vs. standalone & measurements & data rate, packet error rate & one Bluetooth link and one interferer of another technology \\
\hline
\begin{tabular}{p{2cm}}
\mbox{Wi-Fi}/LTE D2D \\
\scriptsize\cite{Wu2016}
\end{tabular}
& increase D2D throughput vs. different licensed/unlicensed spectrum use strategies & simulations & throughput & one \mbox{Wi-Fi} link and one multi-hop D2D flow \\
\hline
\begin{tabular}{p{2cm}}5G/IEEE~802.15.4 \\
\scriptsize\cite{Lackpour2017}
\end{tabular}
& mitigate mutual interference vs. standalone \& vs. coexistence with 5G without spectrum sharing mechanisms & simulations & throughput & one ZigBee and one 5G link\\
\hline
\begin{tabular}{p{2cm}}LTE/ZigBee \\
\scriptsize\cite{Parvez2016}
\end{tabular}
& study mutual impact between LTE and ZigBee vs. standalone & simulations & throughput, SINR & 18 LTE BSs and 54 ZigBee APs\\
\hline
\begin{tabular}{p{2cm}}
IEEE 802.15.4/any interfering signal \\
\scriptsize\cite{Vermeulen2017}
\end{tabular}
& detect collisions while transmitting & measurements & detection and false alarm probabilities & one 802.15.4 link and one 802.15.4 interferer\\
\hline
\end{tabular}
\end{table*}
\section{Literature Review of Inter-Technology Coexistence in a Spectrum Commons}
\label{litSpecComm}
This section presents a review of the literature addressing inter-technology coexistence in a spectrum commons. We focus on the unlicensed bands as an example of a spectrum commons where the most diverse interactions between technologies occur, due to the largely technology-agnostic regulatory framework that allows any technology to operate in these bands without license costs, provided that the technical regulatory constraints at Layer~0 are met.
We classify the abundant literature on this topic according to the Application-layer criteria in Table~\ref{table_app}, i.e. work that addresses: (i)~coexistence with low traffic technologies in Section~\ref{litrev_lowtraffic}, Tables~\ref{table_review_3} to \ref{table_review_3b}; and (ii)~coexistence among high traffic technologies in Section~\ref{litrev_hightraffic}, Tables~\ref{table_review_4} to \ref{table_review_4d}.
In Section~\ref{sumSpecComm} we summarize and discuss the main findings.
\subsection{Coexistence with Low Traffic Technologies}
\label{litrev_lowtraffic}
We first present in Section~\ref{over_lowtraffic} an overview of our literature review on coexistence with low traffic technologies.
We then review in detail work on: (i)~IEEE~802.11 \mbox{Wi-Fi}/IEEE 802.15.4 in Section~\ref{litrev_802.15.4}; (ii)~IEEE~802.11 \mbox{Wi-Fi}/Bluetooth in Section~\ref{rev_bluetooth}; and (iii)~other technologies in Section~\ref{rev_other}.
Table~\ref{table_review_3} summarizes the spectrum sharing mechanisms and Table~\ref{table_review_3b} summarizes coexistence performance evaluation aspects, where \emph{standalone} is sometimes considered as a baseline case where there is no other coexisting technology present.
\subsubsection{\textbf{Literature Overview}}
\label{over_lowtraffic}
For coexistence with low traffic technologies in a spectrum commons, roughly as many works considered spectrum sharing mechanisms at Layer~1 as at Layer~2 (\emph{cf.} Table~\ref{table_review_3}). This shows the importance of Layer~1 techniques for mitigating interference, especially for coexistence cases where at least one technology carries a low traffic volume.
Furthermore, most of the work assumed distributed spectrum sharing mechanisms at Layer~2 as influenced by ownership at Layer~8, as expected in a spectrum commons.
In terms of coexistence goals (\emph{cf.} Table~\ref{table_review_3b}), most of the works compared the coexistence performance with either the standalone case, or coexistence without additional spectrum sharing mechanisms. We note that such an approach does not facilitate the performance comparison of different mechanisms among themselves, so that selecting an efficient mechanism for future coexistence cases is not straightforward.
The preferred performance evaluation methods were measurements and simulations. We emphasize that conducting measurements was facilitated by the existence of commercially available hardware (for e.g. Bluetooth, \mbox{Wi-Fi}, and IEEE 802.15.4), especially for works that did not propose new coexistence mechanisms. However, most of the work based on measurements considered very simplistic deployments of one link for each technology.
\subsubsection{\textbf{IEEE 802.11 \mbox{Wi-Fi}/IEEE 802.15.4}}
\label{litrev_802.15.4}
Coexistence between these technologies was addressed in~\cite{Won2005, Yuan2007, Pollin2008, SikoraOttawa2005, Angrisani2008, PetrovaLasVegas2006, Petrova2007, Shuaib2006, Hauer2009, Howitt2003}.
The authors in~\cite{Pollin2008, Yuan2007, SikoraOttawa2005, Shuaib2006, PetrovaLasVegas2006} addressed coexistence for basic standardized specifications, whereas~\cite{Howitt2003, Angrisani2008, Won2005, Petrova2007, Hauer2009} evaluated or proposed more advanced features to mitigate interference.
Specifically, the authors in~\cite{Pollin2008, PetrovaLasVegas2006} measured the impact of IEEE 802.15.4 on \mbox{Wi-Fi} performance. In~\cite{Pollin2008} it was found that the \mbox{Wi-Fi} throughput significantly decreased when the IEEE 802.15.4 transmitter was located close to the \mbox{Wi-Fi} receiver, due to the slow responsiveness of IEEE 802.15.4 when sensing the channel, which resulted in collisions. For other location configurations, both~\cite{Pollin2008, PetrovaLasVegas2006} found that the \mbox{Wi-Fi} packet loss was only marginally increased by coexistence, due to the much higher transmit power of \mbox{Wi-Fi} vs. IEEE 802.15.4.
The works in~\cite{Yuan2007, SikoraOttawa2005, Shuaib2006, PetrovaLasVegas2006, Angrisani2008} reported complementary results, i.e. that the IEEE 802.15.4 performance in terms of throughput and packet loss rate degraded significantly when coexisting with \mbox{Wi-Fi}, especially for high \mbox{Wi-Fi} load. This was explained by the higher transmit power, higher sensing threshold, and shorter backoff time slot for \mbox{Wi-Fi} vs. IEEE 802.15.4.
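The MAC asymmetry behind these results can be illustrated with a minimal contention model in Python; the slot durations and contention windows below are illustrative order-of-magnitude values, not the exact standardized parameters:

```python
import random

# Illustrative MAC parameters: Wi-Fi decrements its backoff in far shorter
# slots than IEEE 802.15.4, so on a contended idle channel it almost always
# finishes its countdown and transmits first.
WIFI_SLOT_US, ZB_SLOT_US = 9, 320   # backoff slot durations in microseconds
WIFI_CW, ZB_CW = 16, 8              # contention window sizes (draw 0..CW-1)

def wifi_win_rate(trials=50_000, seed=1):
    """Fraction of contentions in which Wi-Fi seizes the channel first."""
    rng = random.Random(seed)
    wins = sum(
        rng.randrange(WIFI_CW) * WIFI_SLOT_US < rng.randrange(ZB_CW) * ZB_SLOT_US
        for _ in range(trials)
    )
    return wins / trials

print(f"Wi-Fi seizes the channel first in {wifi_win_rate():.1%} of contentions")
```

Even with this crude model, the shorter \mbox{Wi-Fi} backoff slot alone is enough to let \mbox{Wi-Fi} win most contentions, before accounting for its higher transmit power and sensing threshold.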
Also, \cite{Shuaib2006} reported that the \mbox{Wi-Fi} performance was affected by Bluetooth more than by IEEE~802.15.4. Although this effect was not explained in~\cite{Shuaib2006}, it was likely caused by Bluetooth frequency hopping, \emph{cf.} Section~\ref{rev_bluetooth}.
Two solutions were evaluated in~\cite{Angrisani2008} to improve the performance of IEEE~802.15.4 when coexisting with \mbox{Wi-Fi}: reducing the \mbox{Wi-Fi} duty cycle (i.e. the duration of a frame vs. total time between two frames) by reducing the \mbox{Wi-Fi} packet size, or increasing the time duration of the IEEE~802.15.4 polling window.
We note, however, that adjusting the \mbox{Wi-Fi} packet size is not a practical solution, especially since this also depends on the application type, which is not controlled by the network manager, but by the end user. As such, adjusting the IEEE~802.15.4 polling window could be more feasible for real deployments.
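The duty cycle notion used in~\cite{Angrisani2008} can be stated compactly; the following sketch (with made-up timing values) shows how reducing the \mbox{Wi-Fi} packet size lowers the duty cycle for a fixed inter-frame gap:

```python
def duty_cycle(frame_us, gap_us):
    """Duty cycle as defined above: frame duration divided by the total
    time between the starts of two consecutive frames (frame + gap)."""
    return frame_us / (frame_us + gap_us)

# Shrinking the Wi-Fi packet at a fixed inter-frame gap lowers the duty
# cycle, leaving more air time for IEEE 802.15.4 (timing values invented):
for frame in (1500, 500, 100):
    print(f"frame={frame:4d} us, gap=300 us -> duty cycle={duty_cycle(frame, 300):.2f}")
```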
The authors in~\cite{Howitt2003, Won2005, Petrova2007} considered different channel selection schemes for enabling coexistence.
Specifically, in~\cite{Howitt2003} the impact of IEEE 802.15.4 on 802.11b was evaluated with generic frequency management and it was reported that \mbox{Wi-Fi} was only marginally affected when the channels were allocated such that the inter-technology interference was reduced.
In~\cite{Won2005} an adaptive channel allocation scheme was proposed for multi-hop IEEE 802.15.4 networks, in order to protect them from IEEE 802.11b. The scheme required local coordination among IEEE 802.15.4 nodes, which temporarily formed a group and changed their channel if a high level of interference was detected. This scheme was found to be effective for improving IEEE 802.15.4 coexistence performance especially in large-scale networks.
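A minimal sketch of the local-coordination idea in~\cite{Won2005} follows; the channel list, trigger threshold, and interference readings are hypothetical:

```python
# Nodes that sense interference above a threshold form a group and jointly
# move to the least-interfered channel; otherwise they stay put.
CHANNELS = [11, 15, 20, 25, 26]     # IEEE 802.15.4 channel numbers (2.4 GHz)
THRESHOLD_DBM = -75.0               # assumed trigger level for switching

def pick_channel(current, interference_dbm):
    """interference_dbm maps channel -> sensed interference power (dBm)."""
    if interference_dbm[current] <= THRESHOLD_DBM:
        return current              # interference tolerable: stay
    # group decision: switch everyone to the quietest channel
    return min(CHANNELS, key=lambda ch: interference_dbm[ch])

readings = {11: -60.0, 15: -82.0, 20: -70.0, 25: -90.0, 26: -85.0}
print(pick_channel(11, readings))   # heavy Wi-Fi on channel 11 -> move away
```

The temporary grouping in~\cite{Won2005} amounts to all affected nodes applying such a decision consistently, so that a multi-hop network does not fragment across channels.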
An experimental evaluation was presented in~\cite{Petrova2007}, which focused on the coexistence impact of IEEE 802.11g/n on 802.15.4 networks. Overlapping and non-overlapping channel configurations were considered and it was reported that the IEEE 802.15.4 network severely suffered in case of high co-channel \mbox{Wi-Fi} traffic load and that interference from adjacent channels may also be critical.
This shows overall that spectrum sharing in frequency is efficient for enabling inter-technology coexistence, but the technique requires a larger portion of spectrum, in which multiple non-overlapping channels can be accommodated with sufficient separation, since ACI from \mbox{Wi-Fi} is not negligible.
Also, the extent to which \mbox{Wi-Fi} beamforming decreased the IEEE~802.15.4 packet delivery ratio differed greatly depending on the beam orientations~\cite{Petrova2007}. This suggests that SDMA via beamforming at PHY cannot be used as a stand-alone spectrum sharing technique, especially for wireless networks with mobile nodes, where different beams may be oriented in the same direction. However, beamforming can be used as an additional spectrum sharing technique to improve the coexistence performance for deployments with enough spatial separation between interfering devices.
Unlike previous work, \cite{Hauer2009} focused on the impact of IEEE~802.11b \mbox{Wi-Fi} on 802.15.4 body area networks and found that the 802.15.4 packet loss was significantly affected only for the very low power regime. Adaptive power control was suggested as a solution.
\subsubsection{\textbf{IEEE 802.11 \mbox{Wi-Fi}/Bluetooth}}
\label{rev_bluetooth}
Coexistence between these technologies was addressed in~\cite{Sydanheimo2002, Lansford2001, Golmie2003a, Chiasserini2002, Golmie2003s, Shuaib2006, Park2003, Arumugam2003, Ghosh2003}. The authors in~\cite{Shuaib2006, Sydanheimo2002, Lansford2001, Golmie2003a} assumed standard specifications, whereas in~\cite{Chiasserini2002, Golmie2003s, Park2003, Arumugam2003, Ghosh2003} advanced features were proposed.
The authors in~\cite{Sydanheimo2002} measured the impact of mutual interference between IEEE 802.11b and Bluetooth. They found that the decrease in data rate was in general tolerable for both technologies.
In~\cite{Lansford2001} it was found through simulations and measurements that Bluetooth was less affected by \mbox{Wi-Fi} than vice-versa, for closely spaced \mbox{Wi-Fi} and Bluetooth links. This showed that the FHSS technique implemented by Bluetooth is quite effective when the hopping channels cover a wider band than a \mbox{Wi-Fi} channel. Also, the CSMA/CA MAC was less efficient at mitigating interference from a signal hopping at a high rate.
Consistently, \cite{Golmie2003a} reported that a slower Bluetooth hopping rate caused less interference to \mbox{Wi-Fi}. Furthermore, increasing the \mbox{Wi-Fi} transmit power did not reduce the \mbox{Wi-Fi} packet loss, so lower transmit power was found to be desirable.
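A back-of-envelope calculation clarifies why the hopping rate matters: a roughly 22~MHz \mbox{Wi-Fi} channel overlaps 22 of the 79 1-MHz Bluetooth hop channels, so a \mbox{Wi-Fi} packet that spans more hop intervals is more likely to be hit at least once. The sketch below assumes uniform, independent hops, which is a simplification of the models in~\cite{Lansford2001, Golmie2003a}:

```python
# Per-hop overlap: a ~22 MHz Wi-Fi channel covers 22 of the 79 1-MHz
# Bluetooth hop channels, so a uniformly random hop hits it with p = 22/79.
P_HIT = 22 / 79

def p_wifi_packet_hit(hops_per_packet):
    """Probability that at least one Bluetooth hop collides with a single
    Wi-Fi packet spanning `hops_per_packet` hop intervals (independent hops)."""
    return 1 - (1 - P_HIT) ** hops_per_packet

for hops in (1, 2, 4):   # faster hopping -> more hops per Wi-Fi packet
    print(f"{hops} hop(s) per packet -> P(collision) = {p_wifi_packet_hit(hops):.2f}")
```

The monotone increase with the number of hops per packet matches the observation in~\cite{Golmie2003a} that a slower Bluetooth hopping rate causes less interference to \mbox{Wi-Fi}.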
The authors in~\cite{Chiasserini2002} proposed two MAC traffic scheduling algorithms to cope with the interference between DSSS-based IEEE 802.11 (i.e. IEEE 802.11b) and Bluetooth: the first algorithm scheduled and adjusted the \mbox{Wi-Fi} packets when coexisting with Bluetooth voice links, whereas the second one adjusted Bluetooth packets for data links when coexisting with \mbox{Wi-Fi}.
Both schemes reportedly require only slight modifications of the IEEE 802.11 and Bluetooth standards.
The simulation results showed a significant increase in goodput for both technologies.
However, these schemes require \mbox{Wi-Fi} and Bluetooth to have information about each other's traffic. Although~\cite{Chiasserini2002} suggested that both collaborative information exchange and non-collaborative sensing and interference pattern recognition are possible solutions, it may be difficult to implement either of them in practice, especially if multiple devices are active.
Another scheduling scheme was considered in~\cite{Golmie2003s}, which postponed Bluetooth transmissions until a time slot associated with a good-quality frequency channel. This was compared with an adaptive frequency hopping mechanism for Bluetooth, which avoided channels used by \mbox{Wi-Fi}.
The proposed frequency hopping scheme required Bluetooth specification modifications and was found to be more suitable for environments where the interference conditions did not change fast, such that the same hopping sequence could be used for longer. By contrast, the scheduling scheme was found to be more suitable for the opposite case and did not require specification modifications.
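The adaptive frequency hopping mechanism considered in~\cite{Golmie2003s} essentially restricts the Bluetooth hop set to channels not occupied by \mbox{Wi-Fi}. A minimal sketch of this idea follows; the channel numbering, the occupied set, and the minimum hop-set size are illustrative assumptions, not the parameters of the cited work:

```python
# Minimal sketch of adaptive frequency hopping (AFH): restrict the
# Bluetooth hop set to "good" channels, i.e. channels not classified
# as occupied by Wi-Fi. All concrete values here are hypothetical.
import random

ALL_CHANNELS = list(range(79))        # Bluetooth: 79 channels of 1 MHz
wifi_occupied = set(range(0, 22))     # e.g. one 22 MHz 802.11b channel

def adapted_hop_set(all_channels, occupied, min_channels=20):
    """Keep only good channels; fall back to the full set if too few
    remain (a minimum hop-set size is assumed to be mandated)."""
    good = [ch for ch in all_channels if ch not in occupied]
    return good if len(good) >= min_channels else all_channels

def next_hop(hop_set, rng=random):
    """Draw the next hop pseudo-randomly from the adapted set."""
    return rng.choice(hop_set)

hop_set = adapted_hop_set(ALL_CHANNELS, wifi_occupied)
```

This illustrates why such a scheme suits slowly changing interference: the channel classification (here, `wifi_occupied`) must stay valid long enough to amortize the cost of re-deriving the hop set.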
The authors in~\cite{Park2003, Arumugam2003, Ghosh2003} considered PHY techniques for coexistence between Bluetooth and OFDM-based \mbox{Wi-Fi}, i.e. IEEE~802.11g.
Specifically, \cite{Park2003} found through an analytical model that coding significantly decreased the bit error probability for \mbox{Wi-Fi}, when interference from Bluetooth occurred.
Also, \cite{Arumugam2003} found that the packet error rate could be decreased for both technologies through antenna diversity for Bluetooth, and through weighting bits according to the interference level of the respective subcarriers for \mbox{Wi-Fi}.
Finally, an interference cancellation technique was proposed for \mbox{Wi-Fi} in~\cite{Ghosh2003}, where the multipath channel and interference characteristics were estimated, in order to reduce the impact of interference from Bluetooth. A reported advantage was the potentially higher throughput compared to MAC schemes, since \mbox{Wi-Fi} could operate simultaneously with Bluetooth. However, the proposed PHY scheme was only evaluated for a single link of each technology, so its performance in realistic, larger deployments remains unclear.
We note that features like coding and using antenna diversity are already a part of modern wireless communication standards, i.e. Wi-Fi and LTE.
\subsubsection{\textbf{Other Technologies}}
\label{rev_other}
Coexistence between other technologies where at least one of them is low-traffic was addressed in~\cite{Wu2016, Lackpour2017, Parvez2016, SikoraOttawa2005, Vermeulen2017, Sydanheimo2002}.
The authors in~\cite{SikoraOttawa2005} evaluated the performance of IEEE~802.15.4 when coexisting with Bluetooth or microwave ovens and reported that IEEE 802.15.4 was only marginally affected in terms of packet loss.
The work in~\cite{Sydanheimo2002} reported measurement results for Bluetooth coexisting with a wireless camera (WCAM), RFID, and a microwave oven and showed that the data rate of Bluetooth could be significantly reduced, especially for short distances between Bluetooth devices and coexisting devices of a different technology.
This shows that spatial separation has a significant impact on the performance of low-power networks.
The routing performance of LTE-based multi-hop D2D communications coexisting with \mbox{Wi-Fi} in the unlicensed band was investigated in~\cite{Wu2016}. Three coexistence mechanisms were considered for D2D: LBT with sensing until the channel is available; interference avoidance routing (i.e. routing around \mbox{Wi-Fi}, so as to avoid contention); and switching to the licensed cellular band. The authors found that LTE-based D2D in the unlicensed band could increase the LTE network-wide capacity, but suggested that efficient algorithms to select the D2D transmission time are needed, as they may impact \mbox{Wi-Fi} negatively.
The authors in~\cite{Lackpour2017, Vermeulen2017} proposed PHY techniques for coexistence with IEEE 802.15.4.
In~\cite{Lackpour2017} non-contiguous OFDM and reconfigurable antennas were proposed for 5G to coexist with IEEE~802.15.4, whereas in~\cite{Vermeulen2017} self-interference cancellation with an in-band full-duplex radio was proposed for IEEE 802.15.4 transmitters, so that a transmitter can stop transmitting upon detecting a collision with any other signal and thus save energy.
We note, however, that \cite{Lackpour2017, Vermeulen2017} only considered one link of each coexisting technology, so the performance of these techniques in realistic deployments with multiple active links remains unclear.
The coexistence performance of LTE and ZigBee (i.e. IEEE 802.15.4 at MAC and PHY layers) was evaluated in~\cite{Parvez2016}, for the 2.4~GHz band. Two guard periods were proposed in each LTE frame, so that ZigBee could access the channel. The authors found that ZigBee's performance was degraded more than that of LTE, but that the requirements for smart meter communications with ZigBee were still met.
This shows the efficiency of time-sharing schemes.
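The guard-period approach in~\cite{Parvez2016} amounts to a fixed time split within each LTE frame. A toy timeline makes the resulting airtime split explicit; the frame length, guard positions, and durations below are illustrative assumptions, not the values of the cited work:

```python
# Toy time-sharing timeline: each 10 ms LTE frame contains two guard
# periods during which ZigBee may transmit. All durations are
# hypothetical and chosen only to illustrate the airtime split.
FRAME_MS = 10.0
GUARDS = [(3.0, 4.0), (8.0, 9.0)]   # (start, end) in ms within a frame

def owner(t_ms):
    """Return which technology owns the channel at absolute time t (ms)."""
    t = t_ms % FRAME_MS
    for start, end in GUARDS:
        if start <= t < end:
            return "zigbee"
    return "lte"

def airtime_share(tech):
    """Deterministic airtime fraction implied by the guard periods."""
    guard = sum(end - start for start, end in GUARDS)
    return guard / FRAME_MS if tech == "zigbee" else 1.0 - guard / FRAME_MS
```

With these assumed values ZigBee is guaranteed 20\% of the airtime regardless of LTE load, which is exactly the kind of deterministic protection that makes such time-sharing schemes attractive for low-traffic networks.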
\begin{table*}[!t]
\addtocounter{mysubtable}{-2}
\modcounter
\caption{Literature review of inter-technology spectrum sharing among broadband technologies in a spectrum commons}
\label{table_review_4}
\centering
\begin{tabular}{|p{2.2cm}|p{1.5cm}|p{7cm}|p{1.8cm}|p{3cm}|}
\hline
\centering \textbf{Technologies}
& \centering \textbf{Ref.}
& \centering \textbf{Coexistence at Layer~2}
& \centering \textbf{Coexistence at Layer~1}
& \centering\let\newline\\\arraybackslash\hspace{0pt} \textbf{Coordination at Layer~2 based on constraints at Layer~8}\\
\hline
\mbox{Wi-Fi} EDCA/DCF & \cite{Hwang2006, Bianchi2005} & \textbf{both}: CSMA/CA with different sensing time & -- & distributed \\
\hline
\multirow{2}{2.2cm}{\mbox{Wi-Fi}/ IEEE~802.16} & \cite{Fu2007} & \textbf{\mbox{Wi-Fi}}: CSMA/CA, transmit power; \textbf{802.16}: transmit power & \textbf{both}: modulation & distributed \\
\cline{2-5}
& \cite{Berlemann2006} & \textbf{\mbox{Wi-Fi}}: CSMA/CA; \textbf{802.16}: channel blocking, ordering contention slots & -- & distributed \\
\hline
\multirow{6}{2.2cm}{\mbox{Wi-Fi}/LTE} & \cite{Babaei2014, Jian2015, Capretti2016, Gomez-Miguelez2016, BhorkarNewOrleans2015, Jeon2014, SagariLondon2015, ChenGlasgowMay2015, Nhtilae2013, Rupasinghe2014, VoicuLondon2015, Sagari2015, FuadM.Abinader2014, Li2016a, Voicu2016, Jian2017, JiaLondon2015, Bhorkar2014} & \textbf{\mbox{Wi-Fi}}: CSMA/CA; \textbf{LTE}: none & -- & distributed \\
\cline{2-5}
& \cite{SagariLondon2015, VoicuLondon2015} & \textbf{\mbox{Wi-Fi}}: CSMA/CA; \textbf{both}: channel allocation (random~\cite{SagariLondon2015, VoicuLondon2015}; graph coloring~\cite{SagariLondon2015}; avoid occupied channels~\cite{VoicuLondon2015}) & -- & distributed\cite{SagariLondon2015, VoicuLondon2015}; coordinated\cite{SagariLondon2015} \\
\cline{2-5}
& \cite{Hajmohammad2013, Cai2016} & \textbf{both}: spectrum splitting between technologies (subcarrier granularity~\cite{Cai2016}) & -- & likely cooperative~\cite{Hajmohammad2013}; cooperative~\cite{Cai2016} \\
\cline{2-5}
& \cite{Chaves2013} & \textbf{\mbox{Wi-Fi}}: CSMA/CA; \textbf{LTE}: power control in the uplink & -- & distributed \\
\cline{2-5}
& \cite{Yun2015} & \textbf{\mbox{Wi-Fi}}: modified CSMA/CA & \textbf{both}: decoding & colocated LTE and \mbox{Wi-Fi} receivers \\
\cline{2-5}
& \cite{Li2016c} & \textbf{\mbox{Wi-Fi}}: CSMA/CA & \textbf{LTE}: beamforming & distributed; LTE nodes also have 802.11 receivers \\
\hline
\multirow{7}{2.2cm}{\mbox{Wi-Fi}/LBT-LTE} & \cite{Li2016, VoicuLondon2015, Cano2016a, Sandoval2016} & \textbf{\mbox{Wi-Fi}}: CSMA/CA; \textbf{LTE}: \emph{generic LBT} -- different ED thresholds~\cite{Li2016}, ideal MAC and different channel selection schemes (random, least interfered)~\cite{VoicuLondon2015}, ETSI LBE~\cite{Cano2016a} & optimized topologies~\cite{Sandoval2016} & distributed \cite{Li2016, VoicuLondon2015, Cano2016a}, likely centralized~\cite{Sandoval2016} \\
\cline{2-5}
& \cite{JiaLondon2015, Zhang2015a, Xiao2016, Song2016} & \textbf{\mbox{Wi-Fi}}: CSMA/CA; \textbf{LTE}: \emph{LBT without random backoff} -- \cite{JiaLondon2015} two time granularity levels; \cite{Zhang2015a} ETSI FBE; adaptive transmission duration~\cite{Xiao2016}; dynamic channel switch~\cite{Xiao2016} & -- & distributed~\cite{JiaLondon2015, Zhang2015a, Song2016}, centralized~\cite{Xiao2016} \\
\cline{2-5}
& \cite{Li2016a, Zhang2015c, Jeon2014, ChenGlasgowMay2015, Mushunuri2017, Bhorkar2014, LiHongKong2015, Song2016, Gao2016} & \textbf{\mbox{Wi-Fi}}: CSMA/CA; \textbf{LTE}: \emph{LBT with random backoff within fixed interval (or fixed CW)} -- different ED thresholds~\cite{Li2016a, Bhorkar2014}, adaptive ED threshold~\cite{LiHongKong2015}, variable transmission duration~\cite{Zhang2015c}, different channel selection schemes (random, least power at AP or UE)~\cite{Bhorkar2014} & -- & distributed \\
\cline{2-5}
& \cite{Yoon2017, BhorkarNewOrleans2015, Ali2016, Falconetti2016, Simic2016, Voicu2016, Voicu2017, Voicu2017a, Li2016b, Gao2016} & \textbf{\mbox{Wi-Fi}}: CSMA/CA; \textbf{LTE}: \emph{LBT with binary exponential random backoff} -- backoff freeze~\cite{Falconetti2016}, ETSI LBE~\cite{Gao2016}, different ED thresholds~\cite{Falconetti2016}, different channel selection~\cite{Simic2016, Voicu2016, Voicu2017, Voicu2017a, Li2016b}, different transmit power~\cite{Simic2016, Voicu2016} & -- & distributed \\
\cline{2-5}
& \cite{TaoHongKong2015, Yin2015, Li2017, Hasan2016} & \textbf{\mbox{Wi-Fi}}: CSMA/CA; \textbf{LTE}: \emph{LBT with random backoff and adaptive contention window (other than binary exponential)} & -- & cooperative~\cite{TaoHongKong2015}, LTE coordination~\cite{Yin2015, Li2017, Hasan2016} \\
\hline
\multirow{4}{2.3cm}{\mbox{Wi-Fi}/ duty-cycle-LTE} & \cite{Chaves2013, Gomez-Miguelez2016, Nhtilae2013, Ali2016, RupasingheNewOrleans2015, Simic2016} & \textbf{\mbox{Wi-Fi}}: CSMA/CA; \textbf{LTE}: \emph{fixed duty cycle} -- 80\% with subframe granularity~\cite{Chaves2013}; 0-100\% with mean period 150~ms~\cite{Gomez-Miguelez2016}; 0-100\%~\cite{Nhtilae2013}; 50\% with period 80 ms and maximum 20 ms ON time~\cite{Ali2016}; 20-100\%\cite{RupasingheNewOrleans2015}; 50\%~\cite{Simic2016}; different channel selection \& Tx power~\cite{Simic2016} & -- & distributed \\
\cline{2-5}
& \cite{Abdelfattah2017, Capretti2016, Jeon2014, Li2016a, Voicu2016} & \textbf{\mbox{Wi-Fi}}: CSMA/CA; \textbf{LTE}: \emph{fixed duty cycle with different transmission patterns} -- 50\%~\cite{Abdelfattah2017} and 60\%~\cite{Capretti2016} consecutive/alternative active subframes; 50\% successive/alternative and synchronous/asynchronous~\cite{Jeon2014}; 50\% coordinated/uncoordinated~\cite{Voicu2016}; 33-67\% synchronous/asynchronous~\cite{Li2016a} & -- & distributed~\cite{Abdelfattah2017, Capretti2016, Jeon2014, Voicu2016, Li2016a}, LTE coordination~\cite{Voicu2016, Jeon2014, Li2016a} \\
\cline{2-5}
& \cite{Sadek2015, Voicu2017, Voicu2017a, Guan2016, Zhang2015a, Cano2016a, Cano2015, Jian2017, Sagari2015, Sriyananda2016, RupasingheNewOrleans2015, Simic2016, Voicu2016} & \textbf{\mbox{Wi-Fi}}: CSMA/CA; \textbf{LTE}: \emph{adaptive duty cycle} -- channel selection\cite{Sadek2015, Voicu2017, Voicu2017a, Guan2016, Simic2016, Voicu2016}; carrier aggregation (channel width)~\cite{Guan2016}; power control~\cite{Sagari2015, Sriyananda2016}; ideal TDMA (perfect scheduling)~\cite{Simic2016, Voicu2016}; different Tx power~\cite{Simic2016} & -- & distributed\cite{Sadek2015, Voicu2017, Voicu2017a, Zhang2015a, Cano2016a, Jian2017, Sriyananda2016, Simic2016, Voicu2016}; centralized~\cite{Sagari2015}; LTE coordination~\cite{Simic2016, Voicu2016, Guan2016, Cano2015} \\
\hline
\end{tabular}
\end{table*}
\begin{table*}[!t]
\addtocounter{table}{-1}
\modcounter
\caption{Inter-technology coexistence goals and performance for literature review of spectrum sharing among broadband technologies in a spectrum commons in Table~\ref{table_review_4}: \mbox{Wi-Fi} EDCA/DCF, \mbox{Wi-Fi}/IEEE 802.16, \mbox{Wi-Fi}/LTE}
\label{table_review_4b}
\centering
\begin{tabular}{|p{1.7cm}|p{4cm}|p{3cm}|p{3cm}|p{3.5cm}|}
\hline
\centering \multirow{2}{*}{\textbf{Technologies}}
& \centering \multirow{2}{*}{\textbf{Coexistence Goals}}
& \multicolumn{3}{c|}{\textbf{Performance Evaluation}}\\
\cline{3-5}
&
& \centering \textbf{method}
& \centering \textbf{metric}
& \centering\let\newline\\\arraybackslash\hspace{0pt} \textbf{network size} \\
\hline
\mbox{Wi-Fi} EDCA/DCF \scriptsize\cite{Hwang2006, Bianchi2005}
& study mutual impact between technologies vs. each other
& \begin{tabular}{p{2.8cm}}
$-$\textbf{analytical} \scriptsize\cite{Hwang2006, Bianchi2005};\\
$-$\textbf{OPNET simulations} \scriptsize\cite{Hwang2006}
\end{tabular}
& \begin{tabular}{p{2.8cm}}
$-$\textbf{throughput} \scriptsize\cite{Hwang2006, Bianchi2005}; \\
$-$\textbf{slot occupancy probability} \scriptsize\cite{Bianchi2005}
\end{tabular}
& 20--30 stations of each technology \\
\hline
\mbox{Wi-Fi}/ IEEE~802.16 \scriptsize\cite{Fu2007}
& study mutual impact between technologies vs. each other & analytical, simulations & bit error rate & one \mbox{Wi-Fi} and one 802.16 link \\
\hline
\begin{tabular}{p{1.5cm}}
\mbox{Wi-Fi}/LTE \\ \\ \\ \\
\scriptsize\cite{Babaei2014, Jian2015, Capretti2016, Gomez-Miguelez2016, BhorkarNewOrleans2015, Jeon2014, SagariLondon2015, ChenGlasgowMay2015, Nhtilae2013, Rupasinghe2014, VoicuLondon2015, Sagari2015, FuadM.Abinader2014, Li2016a, Voicu2016, Jian2017, JiaLondon2015, Bhorkar2014, Chaves2013, Yun2015, Li2016c, Hajmohammad2013, Cai2016}
\end{tabular}
& \begin{tabular}{p{3.7cm}}
\emph{Impact of coexistence with unmodified LTE on \mbox{Wi-Fi}} \\
$-$\textbf{no baseline} \scriptsize\cite{Babaei2014, ChenGlasgowMay2015, Jian2017};\\
$-$\textbf{vs. standalone} \scriptsize\cite{Jian2015, Capretti2016, Gomez-Miguelez2016, BhorkarNewOrleans2015, Jeon2014, SagariLondon2015, Nhtilae2013, Rupasinghe2014, VoicuLondon2015, Sagari2015, FuadM.Abinader2014, JiaLondon2015};\\
$-$\textbf{vs. coexistence with itself} \scriptsize\cite{BhorkarNewOrleans2015, Sagari2015, Voicu2016, Li2016a, Bhorkar2014} \\
\hline
\emph{Impact of coexistence on unmodified LTE}\\
$-$\textbf{no baseline} \scriptsize\cite{Capretti2016, ChenGlasgowMay2015, Sagari2015, Li2016a, Voicu2016, Jian2017};\\
$-$\textbf{vs. standalone} \scriptsize\cite{Jeon2014, SagariLondon2015, Nhtilae2013, Rupasinghe2014, VoicuLondon2015, FuadM.Abinader2014, JiaLondon2015};\\
$-$\textbf{vs. coexistence with itself} \scriptsize\cite{BhorkarNewOrleans2015, Bhorkar2014}\\
\hline
\emph{Other} \\
$-$\textbf{increase aggregate throughput vs. coexistence with unmodified LTE} \scriptsize\cite{SagariLondon2015}\\
$-$\textbf{mutual coexistence impact vs. channel selection and vs. LBT} \scriptsize\cite{VoicuLondon2015}\\
$-$\textbf{mutual coexistence impact vs. standalone \& vs. duty cycle} \scriptsize\cite{Chaves2013}\\
$-$\textbf{enable simultaneous \mbox{Wi-Fi} and LTE transmissions and compare aggregate throughput with time division} \scriptsize\cite{Yun2015}\\
$-$\textbf{enable simultaneous LTE and \mbox{Wi-Fi} transmissions and compare with an LBT variant} \scriptsize\cite{Li2016c}\\
$-$\textbf{maximize total capacity, ensure fairness and QoS for both technologies} \scriptsize\cite{Hajmohammad2013}\\
$-$\textbf{maximize overall resource utilization vs. an LBT variant} \scriptsize\cite{Cai2016}
\end{tabular}
& \begin{tabular}{p{3cm}}
$-$\textbf{simulations} \scriptsize\cite{BhorkarNewOrleans2015, Jeon2014, SagariLondon2015, Nhtilae2013, Rupasinghe2014, VoicuLondon2015, FuadM.Abinader2014, Li2016a, Voicu2016, Jian2017, JiaLondon2015, Bhorkar2014, Chaves2013, Yun2015, Cai2016}; \\ \\ \\
$-$\textbf{analytical} \scriptsize\cite{Babaei2014, SagariLondon2015, ChenGlasgowMay2015, Sagari2015, Li2016a, Bhorkar2014, Hajmohammad2013}; \\ \\ \\
$-$\textbf{measurements} \scriptsize\cite{Jian2015, Capretti2016, Gomez-Miguelez2016, Sagari2015, Yun2015}
\end{tabular}
& \begin{tabular}{p{2.8cm}}
$-$\textbf{throughput} \scriptsize\cite{Jian2015, Capretti2016, Gomez-Miguelez2016, BhorkarNewOrleans2015, Jeon2014, SagariLondon2015, Nhtilae2013, VoicuLondon2015, Sagari2015, FuadM.Abinader2014, Voicu2016, Jian2017, JiaLondon2015, Bhorkar2014, Chaves2013, Yun2015, Hajmohammad2013, Cai2016};\\
$-$\textbf{no. transmitted packets} \scriptsize\cite{Jian2015}; \\
$-$\textbf{channel/medium access probability} \scriptsize\cite{Babaei2014, ChenGlasgowMay2015, Li2016a}; \\
$-$\textbf{number/ probability of successful transmissions/links} \scriptsize\cite{ChenGlasgowMay2015, Li2016a}; \\
$-$\textbf{delay} \scriptsize\cite{Babaei2014}; \\
$-$\textbf{jitter} \scriptsize\cite{Capretti2016};\\
$-$\textbf{SINR} \scriptsize\cite{SagariLondon2015, Rupasinghe2014, Sagari2015, Li2016a, Yun2015, Li2016c}; \\
$-$\textbf{interference} \scriptsize\cite{Li2016a};\\
$-$\textbf{coverage probability} \scriptsize\cite{Li2016a}; \\
$-$\textbf{false sensing probability} \scriptsize\cite{Yun2015};\\
$-$\textbf{mean square error} \scriptsize\cite{Yun2015};\\
$-$\textbf{channel occupancy time} \scriptsize\cite{Li2016c};\\
$-$\textbf{Jain's fairness index} \scriptsize\cite{Hajmohammad2013};\\
$-$\textbf{utility} \scriptsize\cite{Cai2016}
\end{tabular}
& \begin{tabular}{p{3.2cm}}
$-$~\textbf{1 LTE link/eNB \& several \mbox{Wi-Fi} devices} \scriptsize\cite{Jian2015, Gomez-Miguelez2016, Sagari2015, Babaei2014, Capretti2016, Jian2017, Yun2015, Li2016c}; \\ \\ \\
$-$~\textbf{$\leq$10 APs of each technology} \scriptsize\cite{Jeon2014, ChenGlasgowMay2015, Nhtilae2013, FuadM.Abinader2014, Voicu2016, Rupasinghe2014, JiaLondon2015, Chaves2013, Cai2016};\\ \\ \\
$-$~\textbf{$\leq$100 APs or 400--5000~APs/km\textsuperscript{2} of each technology, or 50 total links} \scriptsize\cite{BhorkarNewOrleans2015, Bhorkar2014, SagariLondon2015, Li2016a, VoicuLondon2015, Hajmohammad2013}; \\
\end{tabular} \\
\hline
\end{tabular}
\end{table*}
\begin{table*}[!t]
\addtocounter{table}{-1}
\modcounter
\caption{Inter-technology coexistence goals and performance for literature review of spectrum sharing among broadband technologies in a spectrum commons in Table~\ref{table_review_4}: \mbox{Wi-Fi}/LBT-LTE}
\label{table_review_4c}
\centering
\begin{tabular}{|p{1.7cm}|p{6cm}|p{2cm}|p{3.5cm}|p{2cm}|}
\hline
\centering \multirow{2}{*}{\textbf{Technologies}}
& \centering \multirow{2}{*}{\textbf{Coexistence Goals}}
& \multicolumn{3}{c|}{\textbf{Performance Evaluation}}\\
\cline{3-5}
&
& \centering \textbf{method}
& \centering \textbf{metric}
& \centering\let\newline\\\arraybackslash\hspace{0pt} \textbf{network size} \\
\hline
\begin{tabular}{p{1.5cm}} \mbox{Wi-Fi}/ \mbox{LBT-LTE} \\ \\ \\ \\ \scriptsize\cite{Li2016, VoicuLondon2015, Cano2016a, Sandoval2016, Li2016a, Zhang2015c, JiaLondon2015, Zhang2015a, Xiao2016, Jeon2014, ChenGlasgowMay2015, Mushunuri2017, Bhorkar2014, LiHongKong2015, Song2016, Yoon2017, BhorkarNewOrleans2015, Ali2016, Falconetti2016, Simic2016, Voicu2016, Voicu2017, Voicu2017a, Li2016b, Gao2016, TaoHongKong2015, Yin2015, Li2017, Hasan2016}
\end{tabular}
& \begin{tabular}{p{5.5cm}}
\emph{Impact on \mbox{Wi-Fi}} \\
$-$\textbf{no baseline} \scriptsize\cite{Zhang2015c}; \\
$-$\textbf{vs. coexistence with itself} \scriptsize\cite{Li2016, Li2016a, Mushunuri2017, Bhorkar2014, LiHongKong2015, Song2016, Yoon2017, BhorkarNewOrleans2015, Ali2016, Falconetti2016, Simic2016, Voicu2016, Voicu2017, Voicu2017a, Li2016b, Gao2016, TaoHongKong2015, Yin2015, Zhang2015a} \\
$-$\textbf{vs. plain coexistence} \scriptsize\cite{Li2016, VoicuLondon2015, Li2016a, JiaLondon2015, Zhang2015a, Jeon2014, ChenGlasgowMay2015, Bhorkar2014, BhorkarNewOrleans2015, Simic2016, Voicu2016} \\
$-$\textbf{vs. standalone} \scriptsize\cite{JiaLondon2015, Zhang2015a, Jeon2014, Voicu2017, Voicu2017a, Gao2016} \\
$-$\textbf{vs. channel selection} \scriptsize\cite{VoicuLondon2015, Simic2016, Voicu2016, Voicu2017, Voicu2017a, Xiao2016} \\
$-$\textbf{vs. duty cycle variants} \scriptsize\cite{Li2016a, Zhang2015a, Jeon2014, Ali2016, Simic2016, Voicu2016, Voicu2017, Voicu2017a, Xiao2016} \\
$-$\textbf{vs. other LBT variants} \scriptsize\cite{Jeon2014, Mushunuri2017, Bhorkar2014, LiHongKong2015, Song2016, Yoon2017, Falconetti2016, Li2016b, Gao2016, TaoHongKong2015, Yin2015, Li2017, Hasan2016}\\
\hline
\emph{Impact on LTE} \\
$-$\textbf{no baseline} \scriptsize\cite{Zhang2015c} \\
$-$\textbf{vs. plain coexistence} \scriptsize\cite{Li2016, VoicuLondon2015, Li2016a, JiaLondon2015, Zhang2015a, Jeon2014, ChenGlasgowMay2015, Bhorkar2014, BhorkarNewOrleans2015, Simic2016, Voicu2016} \\
$-$\textbf{vs. coexistence with itself} \scriptsize\cite{Bhorkar2014, LiHongKong2015, BhorkarNewOrleans2015, Falconetti2016, Li2016b, TaoHongKong2015, Li2017} \\
$-$\textbf{vs. standalone} \scriptsize\cite{JiaLondon2015, Zhang2015a, Jeon2014, Voicu2017a} \\
$-$\textbf{vs. channel selection} \scriptsize\cite{VoicuLondon2015, Bhorkar2014, Voicu2016, Xiao2016, Voicu2017a} \\
$-$\textbf{vs. duty cycle variants} \scriptsize\cite{Li2016a, Zhang2015a, Jeon2014, Ali2016, Simic2016, Voicu2016, Xiao2016, Voicu2017a} \\
$-$\textbf{vs. other LBT variants} \scriptsize\cite{Jeon2014, Bhorkar2014, LiHongKong2015, Song2016, Yoon2017, Falconetti2016, Li2016b, Gao2016, TaoHongKong2015, Yin2015, Li2017, Hasan2016}\\
\hline
\emph{Other} \\
$-$\textbf{fairness (implicitly) for \mbox{Wi-Fi} vs. coexistence with itself} \scriptsize\cite{Mushunuri2017, Bhorkar2014, LiHongKong2015, Song2016, Ali2016, Simic2016, Voicu2017, Voicu2017a, TaoHongKong2015}\\
$-$\textbf{proportional fair rate allocation for \mbox{Wi-Fi} and LTE} \scriptsize\cite{Cano2016a} \\
$-$\textbf{fairness as same \mbox{Wi-Fi}/LTE airtime} \scriptsize\cite{Yoon2017}\\
$-$\textbf{proportional fair channel switch} \scriptsize\cite{Li2016b} \\
$-$\textbf{fairness as minimization of collision probability to \mbox{Wi-Fi}} \scriptsize\cite{Yin2015} \\
$-$\textbf{fairness as constant aggregate \mbox{Wi-Fi} throughput} \scriptsize\cite{Li2017} \\
$-$\textbf{airtime fairness for \mbox{Wi-Fi} based on altruistic gains} \scriptsize\cite{Hasan2016}\\
$-$\textbf{maximize aggregate LTE capacity in presence of \mbox{Wi-Fi}} \scriptsize\cite{Sandoval2016} \\
$-$\textbf{enable different levels of protection for \mbox{Wi-Fi}} \scriptsize\cite{Zhang2015c} \\
$-$\textbf{maximize total throughput given requirements of each technology} \scriptsize\cite{Song2016} \\
\end{tabular}
& \begin{tabular}{p{1.5cm}}
$-$\textbf{simulations} \scriptsize\cite{VoicuLondon2015, Cano2016a, Sandoval2016, Li2016a, Zhang2015c, JiaLondon2015, Zhang2015a, Xiao2016, Jeon2014, Mushunuri2017, Bhorkar2014, LiHongKong2015, Yoon2017, BhorkarNewOrleans2015, Ali2016, Falconetti2016, Simic2016, Voicu2016, Voicu2017, Voicu2017a, Li2016b, TaoHongKong2015, Yin2015, Li2017, Hasan2016};\\ \\ \\ \\
$-$\textbf{analytical} \scriptsize\cite{Li2016, Sandoval2016, Li2016a, Zhang2015c, ChenGlasgowMay2015, Mushunuri2017, Bhorkar2014, Song2016, Ali2016, Gao2016, Yin2015, Li2017, Hasan2016} \\
\end{tabular}
& \begin{tabular}{p{3cm}}
$-$\textbf{throughput} \scriptsize\cite{Li2016, VoicuLondon2015, Cano2016a, Sandoval2016, Zhang2015c, JiaLondon2015, Zhang2015a, Xiao2016, Jeon2014, Mushunuri2017, Bhorkar2014, LiHongKong2015, Song2016, Yoon2017, BhorkarNewOrleans2015, Ali2016, Falconetti2016, Simic2016, Voicu2016, Voicu2017,Voicu2017a, Li2016b, Gao2016, TaoHongKong2015, Yin2015, Li2017};\\
$-$\textbf{delay} \scriptsize\cite{Cano2016a, Mushunuri2017, Ali2016, Gao2016, TaoHongKong2015};\\
$-$\textbf{coverage probability} \scriptsize\cite{Sandoval2016, Li2016a};\\
$-$\textbf{successful transmissions} \scriptsize\cite{Li2016a, ChenGlasgowMay2015}; \\
$-$\textbf{protection level} \scriptsize\cite{Zhang2015c}; \\
$-$\textbf{transmission duration} \scriptsize\cite{Zhang2015c} \\
$-$\textbf{channel access probability} \scriptsize\cite{ChenGlasgowMay2015, Mushunuri2017}\\
$-$\textbf{collision probability} \scriptsize\cite{Mushunuri2017, Yin2015}\\
$-$\textbf{SINR} \scriptsize\cite{LiHongKong2015}\\
$-$\textbf{airtime} \scriptsize\cite{Yoon2017, Hasan2016}\\
$-$\textbf{Jain's fairness} \scriptsize\cite{Yoon2017, Voicu2017, Voicu2017a, Hasan2016}\\
$-$\textbf{channel occupation} \scriptsize\cite{Yin2015}\\
$-$\textbf{utility} \scriptsize\cite{Li2017}\\
$-$\textbf{Q-value} \scriptsize\cite{Li2017}\\
$-$\textbf{entropy} \scriptsize\cite{Hasan2016}\\
$-$\textbf{risk-informed interference assessment} \scriptsize\cite{Voicu2017, Voicu2017a}\\
\end{tabular}
& \begin{tabular}{p{1.7cm}}
$-$~\textbf{1 LTE link/AP \& several \mbox{Wi-Fi} devices} \scriptsize\cite{Yoon2017, Cano2016a, Zhang2015c, Mushunuri2017, Yin2015}\\ \\ \\
$-$~\textbf{$\leq$10 APs, or 15 APs/km\textsuperscript{2} of each technology} \scriptsize\cite{Li2016, ChenGlasgowMay2015, Gao2016, Ali2016, Li2016b, Hasan2016, Falconetti2016, JiaLondon2015, Xiao2016, Jeon2014, Song2016, Li2017};\\ \\ \\
$-$~\textbf{10--90 APs, or 400--5000 APs/km\textsuperscript{2} of each technology}: \scriptsize\cite{VoicuLondon2015, Simic2016, Voicu2016, Sandoval2016, Li2016a, Zhang2015a, Bhorkar2014, BhorkarNewOrleans2015, LiHongKong2015, TaoHongKong2015, Voicu2017, Voicu2017a}
\end{tabular}
\\
\hline
\end{tabular}
\end{table*}
\subsection{Coexistence among Broadband Technologies}
\label{litrev_hightraffic}
We first present in Section~\ref{ov_broadband} a literature overview of broadband technology coexistence, and then review in detail various strands of the work as follows.
We review the literature addressing coexistence among technologies of the IEEE 802.x standards in Section~\ref{litrev_802}.
We then focus on IEEE 802.11 \mbox{Wi-Fi}/LTE coexistence in the unlicensed bands, which has been recently extensively investigated in light of the two main proposed LTE variants for the unlicensed bands, i.e. LAA~\cite{3GPP2015} and LTE-U~\cite{Forum2015}. We classify the existing literature based on the main Layer~2 coexistence approaches for LTE\footnote{We note that \mbox{Wi-Fi} always implements CSMA/CA at the MAC layer, i.e. LBT with binary exponential random backoff. Although different approximations were adopted for modelling CSMA/CA in different papers (e.g.~\cite{VoicuLondon2015} does not consider the MAC inefficiency due to sensing time, \cite{Li2016a, Bhorkar2014} assume random backoff with fixed CW, and \cite{Voicu2016} estimates the binary exponential random backoff by means of an analytical model), a detailed review of such modelling techniques is out of the scope of this survey.}:
(i)~no MAC coexistence mechanism, i.e. LTE continuously transmits, in Section~\ref{wifiltecoex}; (ii)~LBT, i.e. the approach adopted by 3GPP for LAA~\cite{3GPP2016, 3GPP2016a}, in Section~\ref{litrev_lbt}; and (iii)~duty cycle, i.e. the approach adopted by the LTE-U Forum, in Section~\ref{litrev_dut}.
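As a reference point for the LBT variants classified below, the binary exponential random backoff at the core of \mbox{Wi-Fi}'s CSMA/CA (and of LAA's Category-4 LBT) can be sketched as follows. The sensing model is idealized and the collision probability is an assumed constant; the CW doubling follows the 802.11-style CWmin/CWmax pattern:

```python
# Idealized sketch of LBT with binary exponential random backoff.
# The fixed 10% collision probability is an assumption for illustration.
import random

def draw_backoff(retries, cw_min=15, cw_max=1023, rng=random):
    """Contention window doubles with each failed attempt, capped at
    cw_max; the backoff counter is drawn uniformly from [0, CW]."""
    cw = min((cw_min + 1) * 2 ** retries - 1, cw_max)
    return rng.randint(0, cw)

def transmit(channel_idle, max_retries=7, rng=random):
    """Sense, count down the backoff in idle slots, then transmit;
    on collision, retry with a doubled contention window."""
    for retries in range(max_retries):
        counter = draw_backoff(retries, rng=rng)
        while counter > 0:
            if channel_idle():       # decrement only in idle slots
                counter -= 1
        if rng.random() > 0.1:       # assumed 10% collision probability
            return retries           # success after `retries` failures
    return None                      # give up after max_retries attempts
```

The distinctions drawn in Table~\ref{table_review_4} (fixed CW, adaptive CW, binary exponential) all concern how `draw_backoff` evolves `cw` across retries, while the sensing loop is common to every LBT variant.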
Table~\ref{table_review_4} summarizes the spectrum sharing mechanisms in the reviewed literature and Tables~\ref{table_review_4b} to \ref{table_review_4d} summarize coexistence performance evaluation aspects, where \emph{standalone} refers to the baseline case with a single technology, i.e. no coexisting technology is considered, and \emph{plain coexistence} refers to \mbox{Wi-Fi}/LTE coexistence where no spectrum sharing mechanism is implemented for LTE.
Furthermore, in Tables~\mbox{\ref{table_review_4b}} to \ref{table_review_4d} we group similar metrics in the literature under a few representative terms, e.g. \emph{throughput} also refers to goodput~\cite{Chiasserini2002}, offered/served load~\cite{Jeon2014}, capacity~\cite{Rupasinghe2014}, normalized throughput~\cite{FuadM.Abinader2014}, etc.
\subsubsection{\textbf{Literature Overview}}
\label{ov_broadband}
Only a few works have addressed coexistence among IEEE 802.x standards, as the dominant IEEE standard in the unlicensed bands is 802.11 \mbox{Wi-Fi}, such that the devices implement similar spectrum sharing mechanisms.
There is a large number of works that have addressed \mbox{Wi-Fi}/LTE coexistence in the unlicensed bands. Some of them consider LTE without any coexistence mechanism and identify the need to implement one, in order to allow \mbox{Wi-Fi} to access the spectrum.
Most works consider different variants of either LBT-LTE, or duty-cycle-LTE and compare them only with standalone technologies, or with coexistence where LTE does not implement sharing mechanisms. We note that this approach does not facilitate a direct comparison between different mechanisms. A few works, however, considered both \mbox{Wi-Fi}/LBT-LTE and \mbox{Wi-Fi}/duty-cycle-LTE coexistence.
The authors report in general that the adaptive sharing mechanisms at Layer~2 (either duty cycle or LBT) achieve the best coexistence performance. However, some of these mechanisms require information that is not trivial to obtain with distributed mechanisms (e.g. traffic requirements, number of nodes, etc.). We note that many works have considered fairness when evaluating the coexistence performance, but different fairness definitions were used (\emph{cf.} Tables~\ref{table_review_4c} and \mbox{\ref{table_review_4d}}). However, a significant number of papers have adopted the fairness criterion used by 3GPP, i.e. ``not impact \mbox{Wi-Fi} services more than an additional \mbox{Wi-Fi} network''~\cite{3GPP2015}. As such, some works found that the most fair coexistence performance was obtained when LTE implemented an LBT mechanism similar to \mbox{Wi-Fi}'s LBT.
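Several of the fairness notions summarized in Tables~\ref{table_review_4c} and \ref{table_review_4d} reduce to Jain's fairness index over per-network throughputs, which is a direct computation (the example throughput values below are made up for illustration):

```python
def jains_index(throughputs):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2).
    Equals 1 for a perfectly equal allocation and 1/n in the worst
    case, where one network captures all the throughput."""
    n = len(throughputs)
    total = sum(throughputs)
    return total * total / (n * sum(x * x for x in throughputs))

# Hypothetical per-network throughputs (Mbps): Wi-Fi vs. coexisting LTE.
print(jains_index([20.0, 20.0]))   # 1.0  -> perfectly fair split
print(jains_index([35.0, 5.0]))    # 0.64 -> LTE starves Wi-Fi
```

By contrast, the 3GPP criterion is a comparative one: it is evaluated against the throughput \mbox{Wi-Fi} would have achieved when coexisting with another \mbox{Wi-Fi} network, not against an equal-share target.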
Furthermore, there have been very few proposals for Layer~1 sharing mechanisms, which suggests that such techniques are not yet developed enough to mitigate interference for broadband technologies, so the most efficient mechanisms remain sharing in time and/or frequency at Layer~2.
Finally, most of the works relied on simulations and analytical tools to evaluate the coexistence performance. Only a few works have conducted basic experimental evaluations, and only for duty-cycle-LTE. This shows the difficulty of obtaining such results due to the lack of devices that implement a fully functional open-source LTE stack, which could be modified in a straightforward manner for research purposes.
\subsubsection{\textbf{Coexistence among Broadband IEEE 802.x Technologies}}
\label{litrev_802}
The authors in~\cite{Hwang2006, Bianchi2005} addressed coexistence between legacy IEEE 802.11 devices implementing the distributed coordination function (DCF) at the MAC layer and new devices implementing enhanced distributed channel access (EDCA), i.e. different sensing durations that grant different channel access priority levels to different traffic categories. The reported performance results validated the channel access priorities associated with the different sensing durations~\cite{Hwang2006}.
Additionally, EDCA had higher channel access priority than DCF, due to the different backoff counter decrement procedure, through which it gained one additional backoff slot~\cite{Bianchi2005}.
Coexistence between IEEE 802.11a and 802.16 was addressed in~\cite{Fu2007, Berlemann2006}. In~\cite{Fu2007} the mutual interference was evaluated at the PHY layer, when transmissions from the two technologies overlapped in time and frequency. Furthermore, the authors suggested varying the transmit power and modulation scheme for coping with this interference.
In~\cite{Berlemann2006} channel blocking and ordering of contention slots was proposed for IEEE 802.16, in order to reserve the channel before 802.11a and thus to guarantee QoS for 802.16. However, no performance evaluation results were presented.
\subsubsection{\textbf{\mbox{Wi-Fi}/LTE Coexistence}}
\label{wifiltecoex}
A number of papers investigated \mbox{Wi-Fi}/LTE coexistence performance when LTE does not implement any coexistence mechanism, e.g.~\cite{Babaei2014, Jian2015, Capretti2016, Gomez-Miguelez2016, BhorkarNewOrleans2015, Jeon2014, SagariLondon2015, ChenGlasgowMay2015, Nhtilae2013, Rupasinghe2014, VoicuLondon2015, Sagari2015, FuadM.Abinader2014, Li2016a, Voicu2016, Jian2017, JiaLondon2015, Bhorkar2014}, either as an individual coexistence case, or as a baseline for comparison with other mechanisms. They all reported that the \mbox{Wi-Fi} performance was severely degraded and that LTE should implement an inter-technology coexistence mechanism when operating in the unlicensed bands.
For an overview of the main coexistence approaches considered for LTE in the unlicensed bands we refer the reader to e.g.~\cite{Cano2016, FuadM.Abinader2014, Chaves2013, Cui2016, Ho2017, Zhang2015a, Chen2017, Kwon2017}, where~\cite{Ho2017} presented a survey of the early literature on \mbox{Wi-Fi}/LTE coexistence, and~\cite{Kwon2017} focused on LAA standardized by 3GPP.
The authors in~\cite{SagariLondon2015, Chaves2013, Yun2015, Li2016c, Hajmohammad2013, Cai2016} proposed \mbox{Wi-Fi}/LTE coexistence solutions different than the MAC-based LBT and duty cycling.
The authors in~\cite{SagariLondon2015, Hajmohammad2013, Cai2016} focused on spectrum sharing in frequency.
In \cite{SagariLondon2015} it was found that even with random channel selection, a significant increase in network throughput can be achieved vs. co-channel deployments. Furthermore, two variants of a channel allocation scheme based on multigraph coloring were proposed, i.e. with intra- or inter-technology coordination. The inter-technology coordination did not improve the network throughput significantly compared to intra-technology coordination, but both were better than random channel selection.
The authors in~\cite{Hajmohammad2013} proposed spectrum splitting between \mbox{Wi-Fi} and LTE and aimed to maximize the total \mbox{Wi-Fi} and LTE femtocell capacity, while taking into account fairness and QoS constraints. This scheme was shown to improve the capacity of the LTE femtocells, compared to licensed spectrum splitting between femtocells and macrocells.
In~\cite{Cai2016} \mbox{Wi-Fi}/LTE coordinated spectrum splitting with subcarrier granularity was assumed. Some network controllers implemented decision trees and repeated games for spectrum splitting, in order to maximize their resource utilization. The scheme was shown to improve the throughput for both technologies compared to other LBT variants.
Although the results in~\cite{SagariLondon2015, Hajmohammad2013, Cai2016} show overall that spectrum sharing in frequency is efficient for facilitating inter-technology coexistence, all the proposed mechanisms require intra- or inter-technology coordination, which cannot be easily achieved in distributed deployments, where the devices are owned and managed by different parties.
In~\cite{Chaves2013} an uplink power control mechanism was proposed for LTE users, which resulted in a similar or somewhat higher mean user throughput for both LTE and \mbox{Wi-Fi}, compared to LTE with a duty cycle of 80\%.
However, selecting LTE with a duty cycle of 80\% as baseline does not prove the efficiency of LTE uplink power control overall, since LTE with such a high duty cycle is expected to cause a significant level of interference, especially in dense deployments, and thus to have a poor coexistence performance. Furthermore, for properly tuning the proposed LTE uplink power control, the network operator needs knowledge of the \mbox{Wi-Fi} network and its traffic. Finally, the proposed technique does not manage the interference caused by LTE downlink transmissions. As such, this power control mechanism could be used in conjunction with other spectrum sharing schemes, but is not sufficient as a standalone coexistence mechanism.
Two different PHY-layer techniques were proposed in~\cite{Yun2015, Li2016c}.
In~\cite{Yun2015} \mbox{Wi-Fi} and LTE could both transmit at the same time, on the same frequency, using a decoding method that enabled the separation of two overlapping OFDM signals (i.e. an interference cancellation technique).
The authors in~\cite{Li2016c} proposed estimating the direction of arrival of \mbox{Wi-Fi} signals by LTE and then applying null steering, such that LTE does not cause interference in the direction of \mbox{Wi-Fi} (i.e. a beamforming technique).
The techniques in~\cite{Yun2015, Li2016c} resulted in good coexistence performance, but they both required co-located LTE and \mbox{Wi-Fi} receivers and were evaluated for a single LTE link. Additionally,~\cite{Yun2015} also required substantial changes to the \mbox{Wi-Fi} CSMA/CA mechanism.
\subsubsection{\textbf{\mbox{Wi-Fi}/LBT-LTE coexistence}}
\label{litrev_lbt}
The works~\cite{Li2016, VoicuLondon2015, Cano2016a, Sandoval2016, JiaLondon2015, Zhang2015a, Xiao2016, Li2016a, Zhang2015c, Jeon2014, ChenGlasgowMay2015, Mushunuri2017, Bhorkar2014, LiHongKong2015, Yoon2017, BhorkarNewOrleans2015, Ali2016, Falconetti2016, Simic2016, Voicu2016, Voicu2017, Song2016, Gao2016, TaoHongKong2015, Yin2015, Li2017, Hasan2016, Li2016b} addressed \mbox{Wi-Fi}/LBT-LTE coexistence.
\paragraph{Generic LBT}
The work in~\cite{Li2016, VoicuLondon2015, Cano2016a, Sandoval2016} assumed LBT models at a level of abstraction for which the specifics of the backoff type are irrelevant, so we refer to this as \emph{generic LBT}.
The authors in~\cite{Li2016} found that proper selection of the sensing threshold was beneficial for coexistence.
We note that the sensing threshold, which is an inherent parameter for LBT technologies, has a critical impact on how much one technology defers to another, and is thus a natural parameter to configure for granting different channel access priorities.
In~\cite{VoicuLondon2015} LBT-LTE was compared with different channel selection schemes for LTE, i.e. random or least-interfered channel. Channel selection was found to be more efficient than LBT at ensuring coexistence, which shows that spectrum sharing in frequency would be preferred over sharing in time in distributed deployments. However, this required a large number of channels, which may not always be available in practice. Furthermore, the rather large building shielding at 5~GHz contributed to reducing interference and ensuring harmonious coexistence.
In~\cite{Sandoval2016} a complementary solution to LBT was proposed, i.e. a framework that statistically optimizes the LTE network topology when coexisting with \mbox{Wi-Fi} in indoor scenarios, such that the aggregate LTE capacity is maximized and the required coverage achieved. However, this requires accurate models for radio propagation, service demand, load levels, and spatial distribution.
Overall,~\cite{Li2016, VoicuLondon2015, Sandoval2016} suggest that spectrum sharing in time, e.g. LBT, is required for \mbox{Wi-Fi}/LTE coexistence, but this can be complemented by other techniques like channel selection or topology optimization.
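The role of the sensing threshold can be illustrated with a minimal energy-detection check; the threshold and signal levels below are purely illustrative values, not parameters taken from the cited works:

```python
def defers(sensed_dbm, threshold_dbm):
    """Energy-detection LBT: a device defers (does not transmit) when the
    power sensed on the channel reaches its sensing threshold."""
    return sensed_dbm >= threshold_dbm

# A Wi-Fi transmission sensed at -72 dBm by an LTE device:
assert defers(-72.0, threshold_dbm=-82.0)      # sensitive threshold: LTE defers
assert not defers(-72.0, threshold_dbm=-62.0)  # lax threshold: LTE transmits anyway
```

Making the threshold more sensitive thus makes one technology yield to the other more often, which is why several of the surveyed works tune it to grant different channel access priorities.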
\paragraph{LBT without random backoff}
The work in~\cite{JiaLondon2015, Zhang2015a, Xiao2016} considered LBT-LTE without random backoff.
The authors in~\cite{JiaLondon2015} proposed two variants of LBT with fixed sensing duration, i.e. \emph{periodic} sensing with OFDM symbol granularity, and \emph{persistent} sensing with subframe granularity. In~\cite{Xiao2016} it was proposed that LTE directly transmits once the medium is sensed idle.
The results in~\cite{JiaLondon2015, Xiao2016} showed a satisfactory LTE and \mbox{Wi-Fi} user throughput, but both works implemented additional spectrum sharing techniques, i.e. \cite{JiaLondon2015} applied a much lower sensing threshold to defer to \mbox{Wi-Fi} than vice versa, and in \cite{Xiao2016} LTE either dynamically switched the channel to allow \mbox{Wi-Fi} to transmit, or adaptively reserved some blank subframes for \mbox{Wi-Fi}.
This suggests that implementing only LBT without additional configuration/adaptation of the sensing time cannot ensure coexistence among broadband technologies in a spectrum commons. Namely, LBT has to be enhanced by tuning further parameters, e.g. sensing threshold, random backoff, or by applying additional spectrum sharing mechanisms, e.g. sharing in frequency.
\paragraph{LBT with random backoff within fixed interval}
The work in~\cite{Li2016a, Zhang2015c, Jeon2014, ChenGlasgowMay2015, Mushunuri2017, Bhorkar2014, LiHongKong2015} addressed \mbox{Wi-Fi} coexistence with \mbox{LBT-LTE} with random backoff within a fixed interval, i.e. with a fixed contention window (CW).
In~\cite{ChenGlasgowMay2015} it was found that \mbox{Wi-Fi} performance was improved when coexisting with LBT-LTE with fixed CW compared to the case where it coexisted with LTE without any coexistence mechanism, as expected.
Furthermore, it is expected that LBT with random backoff and fixed CW can avoid collisions better than LBT without random backoff, especially for broadband technologies with high traffic load and dense deployments.
Nonetheless, coexistence performance via LBT with random backoff and fixed CW was further improved with respect to a given coexistence goal by also tuning other parameters, e.g. the sensing threshold~\cite{Bhorkar2014, LiHongKong2015}, channel selection schemes~\cite{Bhorkar2014}, or the transmission duration~\cite{Zhang2015c}. The results in~\cite{Bhorkar2014, Zhang2015c, LiHongKong2015} showed overall that different capacity gains and tradeoffs between \mbox{Wi-Fi} and LTE performance can be achieved.
Furthermore, the authors in~\cite{Mushunuri2017, Song2016} evaluated coexistence for different fixed CW and found that \mbox{Wi-Fi} and the total system performance could be increased if the CW was properly selected.
From the point of view of the resulting performance, tuning either of two different design parameters may be equivalent, but in practice the choice of parameter to adapt depends on the specific constraints at different layers of the technology circle, for a given deployment. For instance, sensing thresholds are lower-bounded by the minimum sensitivity of the receiver, whereas implementing channel selection requires that a sufficient number of channels are available.
The number of available channels is determined by regulatory constraints at Layer~0, whereas the receiver sensitivity is a PHY parameter, which is arguably in turn determined by equipment cost constraints at Layer~8.
\paragraph{LBT with binary exponential random backoff}
The authors in~\cite{Yoon2017, BhorkarNewOrleans2015, Ali2016, Falconetti2016, Simic2016, Voicu2016, Voicu2017, Li2016b, Gao2016} addressed \mbox{Wi-Fi} coexistence with LBT-LTE with binary exponential random backoff, which is one method to adapt the CW.
As \mbox{Wi-Fi} implements this method, this was also considered for LTE, in order to achieve the same behaviour when the two technologies share the spectrum and thus achieve fairness. For instance, \cite{Gao2016} found that a fixed CW was more beneficial for LTE than binary exponential random backoff, but at the same time degraded \mbox{Wi-Fi} performance more. We note that for LAA, binary exponential random backoff was eventually standardized in 3GPP Release 13.
In this context, further LTE parameters were either directly adopted from \mbox{Wi-Fi} (e.g. the sensing threshold~\cite{BhorkarNewOrleans2015}, varying the channel width by aggregating multiple channels~\cite{Falconetti2016}), or were adapted to match equivalent \mbox{Wi-Fi} parameters (e.g. the transmission time~\cite{Yoon2017}).
For other considered coexistence goals, \cite{BhorkarNewOrleans2015} reported that a suitable sensing threshold could improve the overall performance of \mbox{Wi-Fi} and LTE, and in \cite{Li2016b} a proportional fair dynamic channel selection mechanism was proposed for LBT-LTE in order to coexist with \mbox{Wi-Fi}. A modification to binary exponential LBT was also introduced, i.e. a frozen period to ensure correct channel switching decisions. The scheme was shown to be efficient especially for low traffic load.
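As a reference point for the backoff variants compared in this subsection, binary exponential random backoff can be sketched as follows; the window sizes are illustrative values in the spirit of common \mbox{Wi-Fi} defaults, not the LAA parameters standardized by 3GPP:

```python
import random

def backoff_slots(failed_attempts, cw_min=16, cw_max=1024):
    """Binary exponential random backoff: the contention window (CW) doubles
    after every failed transmission attempt, capped at cw_max, and the
    backoff counter is drawn uniformly from [0, CW - 1]."""
    cw = min(cw_min * (2 ** failed_attempts), cw_max)
    return random.randrange(cw)

# Fresh transmission: counter drawn from [0, 15]; after 3 collisions: [0, 127].
```

The fixed-CW variants of the previous paragraph simply keep the window constant instead of doubling it on each collision.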
\begin{table*}[!t]
\addtocounter{table}{-1}
\modcounter
\caption{Inter-technology coexistence goals and performance for literature review of spectrum sharing among broadband technologies in a spectrum commons in Table~\ref{table_review_4}: \mbox{Wi-Fi}/duty-cycle-LTE}
\label{table_review_4d}
\centering
\begin{tabular}{|p{1.7cm}|p{6cm}|p{2cm}|p{3.5cm}|p{2cm}|}
\hline
\centering \multirow{2}{*}{\textbf{Technologies}}
& \centering \multirow{2}{*}{\textbf{Coexistence Goals}}
& \multicolumn{3}{c|}{\textbf{Performance Evaluation}}\\
\cline{3-5}
&
& \centering \textbf{method}
& \centering \textbf{metric}
& \centering\let\newline\\\arraybackslash\hspace{0pt} \textbf{network size} \\
\hline
\begin{tabular}{p{1.5cm}} \mbox{Wi-Fi}/ duty-cycle-LTE \\ \\ \\ \\ \scriptsize\cite{Chaves2013, Gomez-Miguelez2016, Nhtilae2013, Ali2016, Abdelfattah2017, Capretti2016, Jeon2014, Li2016a, Sadek2015, Voicu2017, Voicu2017a, Guan2016, Zhang2015a, Cano2016a, Cano2015, Jian2017, Sagari2015, Sriyananda2016, RupasingheNewOrleans2015, Simic2016, Voicu2016}
\end{tabular}
& \begin{tabular}{p{5.5cm}}
\emph{Impact on \mbox{Wi-Fi}} \\
$-$\textbf{no baseline} \scriptsize\cite{Cano2015}; \\
$-$\textbf{vs. coexistence with itself} \scriptsize\cite{Ali2016, Li2016a, Sadek2015, Voicu2017, Voicu2017a, Guan2016, Zhang2015a, Simic2016, Voicu2016}; \\
$-$\textbf{vs. plain coexistence} \scriptsize\cite{Chaves2013, Gomez-Miguelez2016, Nhtilae2013, Capretti2016, Jeon2014, Li2016a, Sadek2015, Zhang2015a, Jian2017, Sagari2015, RupasingheNewOrleans2015, Simic2016, Voicu2016};\\
$-$\textbf{vs. standalone} \scriptsize\cite{Chaves2013, Gomez-Miguelez2016, Nhtilae2013, Abdelfattah2017, Capretti2016, Jeon2014, Voicu2017, Voicu2017a, Zhang2015a, Sriyananda2016}; \\
$-$\textbf{vs. channel selection} \scriptsize\cite{Voicu2017, Voicu2017a, Guan2016, Simic2016, Voicu2016};\\
$-$\textbf{vs. other duty cycle variants} \scriptsize\cite{Nhtilae2013, Abdelfattah2017, Capretti2016, Jeon2014, Li2016a, Sriyananda2016, RupasingheNewOrleans2015, Simic2016, Voicu2016};\\
$-$\textbf{vs. LBT variants} \scriptsize\cite{Ali2016, Jeon2014, Li2016a, Voicu2017, Voicu2017a, Zhang2015a, Cano2016a, Simic2016, Voicu2016};\\
$-$\textbf{vs. power control} \scriptsize\cite{Chaves2013, Sagari2015}\\
\hline
\emph{Impact on LTE} \\
$-$\textbf{no baseline} \scriptsize\cite{Cano2015}; \\
$-$\textbf{vs. plain coexistence} \scriptsize\cite{Chaves2013, Gomez-Miguelez2016, Nhtilae2013, Capretti2016, Jeon2014, Li2016a, Sadek2015, Zhang2015a, Jian2017, Sagari2015, RupasingheNewOrleans2015, Simic2016, Voicu2016};\\
$-$\textbf{vs. coexistence with itself} \scriptsize\cite{Sadek2015};\\
$-$\textbf{vs. standalone} \scriptsize\cite{Chaves2013, Nhtilae2013, Jeon2014, Zhang2015a, Sriyananda2016, Voicu2017a};\\
$-$\textbf{vs. channel selection} \scriptsize\cite{Guan2016, Simic2016, Voicu2016, Voicu2017a};\\
$-$\textbf{vs. other duty cycle variants} \scriptsize\cite{Nhtilae2013, Capretti2016, Jeon2014, Li2016a, Sriyananda2016, RupasingheNewOrleans2015, Simic2016, Voicu2016}; \\
$-$\textbf{vs. LBT variants} \scriptsize\cite{Ali2016, Jeon2014, Li2016a, Zhang2015a, Cano2016a, Simic2016, Voicu2016, Voicu2017a}; \\
$-$\textbf{vs. power control} \scriptsize\cite{Chaves2013, Sagari2015}\\
\hline
\emph{Other} \\
$-$\textbf{fairness (implicitly) for \mbox{Wi-Fi} vs. coexistence with itself} \scriptsize\cite{Ali2016, Abdelfattah2017, Voicu2017, Voicu2017a, Simic2016};\\
$-$\textbf{fair coexistence for \mbox{Wi-Fi} as half the throughput of standalone \mbox{Wi-Fi}} \scriptsize\cite{Abdelfattah2017};\\
$-$\textbf{max. network utility with fairness for \mbox{Wi-Fi} as airtime vs. coexistence with itself} \scriptsize\cite{Guan2016};\\
$-$\textbf{proportional fair rate allocation for \mbox{Wi-Fi} and LTE} \scriptsize\cite{Cano2016a, Cano2015};\\
$-$\textbf{maximize overall throughput with fairness as same airtime for LTE \& \mbox{Wi-Fi}} \scriptsize\cite{Jian2017};\\
$-$\textbf{maximize capacity and minimize Tx power} \scriptsize\cite{Sriyananda2016} \\
\end{tabular}
& \begin{tabular}{p{1.5cm}}
$-$\textbf{simulations} \scriptsize\cite{Chaves2013, Nhtilae2013, Ali2016, Abdelfattah2017, Jeon2014, Li2016a, Sadek2015, Voicu2017, Voicu2017a, Zhang2015a, Cano2016a, Jian2017, Sriyananda2016, RupasingheNewOrleans2015, Simic2016, Voicu2016};\\ \\ \\
$-$\textbf{analytical} \scriptsize\cite{Ali2016, Abdelfattah2017, Li2016a, Guan2016, Cano2015, Sagari2015, Sriyananda2016}; \\ \\ \\
$-$\textbf{measurements} \scriptsize\cite{Gomez-Miguelez2016, Capretti2016, Sadek2015, Sagari2015} \\
\end{tabular}
& \begin{tabular}{p{3cm}}
$-$\textbf{throughput} \scriptsize\cite{Chaves2013, Gomez-Miguelez2016, Nhtilae2013, Ali2016, Abdelfattah2017, Capretti2016, Jeon2014, Sadek2015, Voicu2017, Voicu2017a, Zhang2015a, Cano2016a, Cano2015, Jian2017, Sagari2015, Sriyananda2016, RupasingheNewOrleans2015, Simic2016, Voicu2016};\\
$-$\textbf{jitter} \scriptsize\cite{Capretti2016};\\
$-$\textbf{delay} \scriptsize\cite{Ali2016, Cano2016a};\\
$-$\textbf{SINR} \scriptsize\cite{Abdelfattah2017, Sagari2015, RupasingheNewOrleans2015};\\
$-$\textbf{collision probability} \scriptsize\cite{Abdelfattah2017, Cano2015};\\
$-$\textbf{coverage probability} \scriptsize\cite{Li2016a};\\
$-$\textbf{successful links} \scriptsize\cite{Li2016a};\\
$-$\textbf{Jain's fairness index} \scriptsize\cite{Voicu2017, Voicu2017a, Guan2016};\\
$-$\textbf{airtime} \scriptsize\cite{Guan2016, Cano2015};\\
$-$\textbf{channel utilization} \scriptsize\cite{Jian2017};\\
$-$\textbf{energy efficiency} \scriptsize\cite{Sriyananda2016};\\
$-$\textbf{risk-informed interference assessment} \scriptsize\cite{Voicu2017, Voicu2017a}\\
\end{tabular}
& \begin{tabular}{p{1.7cm}}
$-$~\textbf{1 LTE link/AP \& several \mbox{Wi-Fi} devices} \scriptsize\cite{Gomez-Miguelez2016, Abdelfattah2017, Capretti2016, Cano2016a};\\ \\ \\
$-$~\textbf{$\leq$15 APs of each technology} \scriptsize\cite{Chaves2013, Nhtilae2013, Ali2016, Jeon2014, Guan2016, Cano2015, Jian2017, Sagari2015, Sriyananda2016, RupasingheNewOrleans2015}; \\ \\ \\
$-$~\textbf{10--30 APs, or up to 5000 APs/km\textsuperscript{2} of each technology} \scriptsize\cite{Voicu2017, Voicu2017a, Zhang2015a, Simic2016, Voicu2016, Li2016a, Sadek2015}\\
\end{tabular} \\
\hline
\end{tabular}
\end{table*}
\paragraph{LBT with random backoff and other adaptive CW}
\label{lbtOptim}
The authors in~\cite{TaoHongKong2015, Yin2015, Li2017, Hasan2016} considered LBT-LTE with random backoff and contention window adaptation, other than binary exponential.
Specifically, the authors in~\cite{TaoHongKong2015} adapted the CW of LTE based on a target average transmission delay, but since cooperative information exchange among LTE base stations via the X2 interface was required, the CW adaptation could be too slow in practice.
The other works, i.e.~\cite{Yin2015, Li2017, Hasan2016}, solved mathematical optimization problems and also required cooperation at least among LTE devices. In such cases, it is not clear how sensitive the proposed coexistence mechanisms are to conditions in real deployments, e.g. cooperation among only some LTE operators.
In~\cite{Yin2015} the number of LTE users was maximized, while keeping the collision probability with \mbox{Wi-Fi} below a given threshold and in~\cite{Li2017} the LTE throughput was maximized, while keeping the \mbox{Wi-Fi} throughput constant via a genetic algorithm or multi-agent reinforcement learning.
In~\cite{Hasan2016} airtime fairness among LTE and \mbox{Wi-Fi} was considered and two mathematical approaches for characterizing fairness were compared, i.e. the Shapely value and proportional fairness.
We note that estimating the number of \mbox{Wi-Fi} devices was required in all of~\cite{Yin2015, Hasan2016, Li2017}, but it is not clear how efficiently the number of \mbox{Wi-Fi} devices can be estimated, especially in the case of mixed \mbox{Wi-Fi} and LTE traffic sent over the same channel.
Furthermore, since in~\cite{Yin2015} only one LTE AP was assumed, it is not clear what the performance of the proposed mechanism is with multiple LTE APs, which are not necessarily coordinated.
Finally, most works did not compare the performance of the proposed coordinated CW adaptation mechanism to that of an adaptive \emph{distributed} one with low computation complexity, e.g. binary exponential random backoff. Consequently, it is not clear whether these mathematical optimization approaches result in performance improvements over conventional CW adaptation approaches, especially in realistic deployments.
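The proportional fairness criterion used as a coexistence goal in several of these works can be made concrete with a small numeric sketch (an illustration of the objective, not the papers' actual problem formulations): maximizing the sum of log-throughputs over an airtime split between the two technologies.

```python
import math

def pf_airtime_split(r_lte, r_wifi, grid=10_000):
    """Grid-search the LTE airtime share a that maximizes the proportional
    fairness objective log(a * r_lte) + log((1 - a) * r_wifi)."""
    best_a, best_u = 0.0, float("-inf")
    for i in range(1, grid):
        a = i / grid
        u = math.log(a * r_lte) + math.log((1 - a) * r_wifi)
        if u > best_u:
            best_a, best_u = a, u
    return best_a

# With one link per technology the optimum is an equal airtime split,
# regardless of how different the two link rates are.
```

This rate-independence of the equal split is part of what makes proportional fairness in airtime attractive as an inter-technology goal; the surveyed optimization works add further constraints (e.g. a collision probability bound, or a constant \mbox{Wi-Fi} throughput) on top of such objectives.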
\subsubsection{\textbf{\mbox{Wi-Fi}/Duty-Cycle-LTE Coexistence}}
\label{litrev_dut}
The following work in the literature considered \mbox{Wi-Fi}/duty-cycle-LTE coexistence~\cite{Chaves2013, Gomez-Miguelez2016, Nhtilae2013, Ali2016, Abdelfattah2017, Capretti2016, Jeon2014, Li2016a, Sadek2015, Voicu2017, Guan2016, Zhang2015a, Cano2016a, Cano2015, Jian2017, Sagari2015, Sriyananda2016, RupasingheNewOrleans2015, Simic2016, Voicu2016}.
\paragraph{Fixed duty cycle}
The authors in~\cite{Chaves2013, Gomez-Miguelez2016, Nhtilae2013, Ali2016, RupasingheNewOrleans2015, Simic2016} considered LTE with fixed duty cycle.
We note that fixed duty cycling was initially proposed as a coexistence mechanism since it only required minimal modifications to the 3GPP LTE standard, i.e. it could be implemented based on subframe blanking.
However, it was shown that the \mbox{Wi-Fi} performance was significantly affected when coexisting with LTE implementing fixed duty cycling, and that more sophisticated coexistence mechanisms were needed, e.g.~\cite{Nhtilae2013}.
Furthermore, the authors in~\cite{Gomez-Miguelez2016} experimentally varied the fixed duty cycle, the transmit power, and the LTE bandwidth and center frequency. It was found that the results were vendor-specific and that fine tuning for fairness was difficult.
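Subframe blanking, on which fixed duty cycling is based, can be sketched as follows; mapping the duty cycle onto consecutive on-subframes of one 10-subframe (10~ms) LTE frame is only one simple pattern among those possible:

```python
def blanking_pattern(duty_cycle, period_subframes=10):
    """Fixed duty cycling via subframe blanking: LTE transmits in the first
    subframes of each period and blanks (stays silent in) the rest,
    leaving the channel free for Wi-Fi."""
    on = round(duty_cycle * period_subframes)
    return [1] * on + [0] * (period_subframes - on)

# 50% duty cycle over one 10-subframe LTE frame:
# blanking_pattern(0.5) -> [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
```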
\paragraph{Fixed duty cycle with different transmission patterns}
The authors in~\cite{Abdelfattah2017, Capretti2016, Jeon2014, Li2016a, Voicu2016} further considered different transmission patterns for LTE with fixed duty cycle,
in order to either study the coexistence performance of fixed duty cycling itself, or as a baseline for other coexistence mechanisms.
In general it was reported that, regardless of the transmission pattern, a fixed duty cycle for LTE could affect the \mbox{Wi-Fi} performance significantly.
For instance, the authors in~\cite{Abdelfattah2017} estimated the probability of collision and throughput for \mbox{Wi-Fi} via analytical models and ns-3 simulations, for LTE with a fixed duty cycle of 50\% and different sub-frame transmission patterns. It was found that the \mbox{Wi-Fi} performance strongly depended on the packet size. Consequently, adjusting the duty cycle and duty cycle period was suggested, in order to improve \mbox{Wi-Fi} performance.
Furthermore, the authors in~\cite{Capretti2016} performed an empirical evaluation for a fixed duty cycle of 60\% and different consecutive/alternating sub-frame transmission patterns. Importantly, they found that coexistence was possible, but that tuning the network parameters was non-trivial, especially since muting patterns that resulted in higher \mbox{Wi-Fi} throughput also resulted in higher \mbox{Wi-Fi} jitter.
These results suggest overall that fixed duty cycling for LTE is not sufficient to ensure \mbox{Wi-Fi}/LTE coexistence, but that adaptive duty cycling could be a feasible solution.
\paragraph{Adaptive duty cycle}
\label{dutOptim}
The authors in~\cite{Sadek2015, Voicu2017, Guan2016, Zhang2015a, Cano2016a, Cano2015, Jian2017, Sagari2015, Sriyananda2016, RupasingheNewOrleans2015, Simic2016, Voicu2016} considered LTE with adaptive duty cycle.
We note that, although duty cycling for LTE can be implemented based on the existing subframe blanking specifications, algorithms that adapt the duty cycle require more advanced features, e.g. monitoring the channel (potentially via an additional \mbox{Wi-Fi} interface) in order to extract information about the coexisting \mbox{Wi-Fi} devices~\cite{Sadek2015, Guan2016, Jian2017, Cano2015}.
One major proposal for LTE-U was carrier sense with adaptive transmission (CSAT) by Qualcomm in~\cite{Sadek2015}, which implements adaptive duty cycle and channel selection based on estimating the number of active nodes, and their duty cycle and energy. Additional puncturing (i.e. short off-time during the longer on-time) was introduced to protect \mbox{Wi-Fi} delay-sensitive applications. The authors found that LTE could coexist with \mbox{Wi-Fi} at least as well as \mbox{Wi-Fi} coexisting with itself in terms of throughput.
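The CSAT adaptation logic itself is proprietary; a generic duty cycle adaptation loop in its spirit might look as follows, where all parameter names and values are assumptions for illustration, not Qualcomm's actual algorithm:

```python
def adapt_duty_cycle(current, wifi_occupancy, target_share=0.5,
                     step=0.05, dc_min=0.05, dc_max=0.95):
    """Generic CSAT-style adaptation (illustrative sketch): reduce the LTE
    on-time when the monitored Wi-Fi occupancy exceeds the target channel
    share, otherwise claim more airtime."""
    if wifi_occupancy > target_share:
        return max(dc_min, current - step)
    return min(dc_max, current + step)

# Busy channel (70% Wi-Fi occupancy): LTE backs off from a 50% duty cycle.
```

In a real deployment the occupancy estimate would come from channel monitoring, e.g. via the additional \mbox{Wi-Fi} interface mentioned above, which is precisely the extra feature such adaptive schemes require.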
Some proposals considering mathematical formulations of different \mbox{Wi-Fi}/LTE fairness coexistence goals included optimizing the LTE network throughput with \mbox{Wi-Fi} access time constraints through a cognitive coexistence scheme that determines dynamically the transmission time, channel selection, and channel width for LTE~\cite{Guan2016}; and achieving proportional fairness in terms of \mbox{Wi-Fi}/LTE channel airtime, while maximizing the overall aggregate throughput~\cite{Cano2015, Jian2017}.
Although (sub-)optimal solutions were obtained, these works did not compare the performance of their proposed schemes with other, less computationally complex adaptive schemes, so it is not clear whether adopting such formal mathematical approaches significantly improves the coexistence performance.
Specifically, in~\cite{Guan2016} it was suggested to implement the proposed algorithm in a server with powerful computing capabilities in the LTE network, which needed information about \mbox{Wi-Fi} obtained via crowd sourcing from LTE users. As such, it is not clear how sensitive this solution is to increased delay due to data computation and transfer to/from the server. Additionally, crowd-sourced data may be unreliable or insufficient in practice, if the number of users is low.
Furthermore, \cite{Cano2015, Jian2017} considered a single coordinated LTE BS/network, so it is not clear how the proposed algorithms perform for multiple uncoordinated LTE networks that apply the algorithms independently.
Different from previous work, centralized coordination between LTE and \mbox{Wi-Fi} was proposed in~\cite{Sagari2015}, through which the adaptive duty cycle mechanism and transmit power were optimized, such that a similar throughput was achieved for the two technologies. Although algorithms that have information about the entire network result in good network performance in general, they are applicable only for a restricted number of deployments in practice, where a single operator manages all deployments implementing either of the technologies.
Machine learning techniques were also proposed for adapting the LTE duty cycle, i.e. Q-learning for achieving the desired LTE capacity~\cite{RupasingheNewOrleans2015}, and multi-armed bandit machine learning to maximize the LTE average capacity and minimize the LTE transmit power~\cite{Sriyananda2016}. Both schemes were shown to result in considerable gains in the aggregate \mbox{Wi-Fi} and LTE throughput over fixed duty cycling. However, it is not clear how long the learning process takes and whether such complex mechanisms perform better than other adaptive schemes. Furthermore, the proposed learning algorithms do not consider the \mbox{Wi-Fi} performance, but only the LTE target capacity~\cite{RupasingheNewOrleans2015} or LTE minimum capacity~\cite{Sriyananda2016}, for which a single example value was evaluated in the respective works. As such, it is not clear what the \mbox{Wi-Fi} performance would be for other LTE target capacities and \mbox{Wi-Fi}/LTE traffic types.
\subsubsection{\textbf{\mbox{Wi-Fi}/LBT-LTE and \mbox{Wi-Fi}/Duty-Cycle-LTE Coexistence}}
Some work in the literature has investigated coexistence with both LBT- and duty-cycle-LTE~\cite{Cano2016a, Li2016a, Simic2016, Voicu2016, Voicu2017, Voicu2017a, Jeon2014, Ali2016, Zhang2015a}.
We note that this is an important contribution, since it facilitates the comparison of two major distinct time-sharing approaches. Since duty cycling can be adapted based on the number of active nodes (e.g. CSAT), LBT and adaptive duty cycling can implement the same functionality, i.e. facilitate an equal share of the channel for each device. The following tradeoff is expected: LBT has a higher MAC overhead due to its sensing time, but results in a lower number of collisions, whereas adaptive duty cycling has a lower MAC overhead, but also a higher number of collisions.
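The expected tradeoff can be made concrete with a back-of-envelope calculation; the overhead and collision figures below are assumptions chosen only to illustrate why the two approaches can end up equivalent:

```python
def effective_airtime(channel_share, mac_overhead, collision_rate):
    """Back-of-envelope useful airtime: the granted channel share, minus
    the MAC overhead (e.g. sensing time), discounted by the fraction of
    transmissions lost to collisions."""
    return channel_share * (1 - mac_overhead) * (1 - collision_rate)

# Illustrative figures only: LBT pays more overhead but collides less;
# adaptive duty cycling the reverse.
lbt  = effective_airtime(0.5, mac_overhead=0.10, collision_rate=0.02)
csat = effective_airtime(0.5, mac_overhead=0.02, collision_rate=0.10)
# Both yield the same useful airtime here (~0.441), mirroring the reported
# equivalence of the two time-sharing approaches.
```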
It was reported that, in order to achieve \mbox{Wi-Fi}/LTE coexistence fairness (i.e. LTE degraded the \mbox{Wi-Fi} performance at most as much as \mbox{Wi-Fi} coexisting with itself would do), LBT was preferred to fixed 50\% duty cycling~\cite{Ali2016, Jeon2014}.
However, the following LTE coexistence mechanisms were found to be equivalent for improving the \mbox{Wi-Fi} performance: (i)~a low fixed duty cycle; or (ii)~LBT with more sensitive sensing thresholds or lower priority than \mbox{Wi-Fi} when contending for the channel through the random backoff procedure~\cite{Li2016a}.
Overall, a similar \mbox{Wi-Fi} performance was obtained when coexisting with LBT-LTE or with adaptive duty-cycle-LTE based on CSAT~\cite{Cano2016a, Simic2016, Voicu2016, Voicu2017, Voicu2017a, Zhang2015a}.
As an insight, for longer LTE transmission time, the LTE throughput for LBT and CSAT was the same, but this increased the \mbox{Wi-Fi} delay~\cite{Cano2016a}.
Also, adaptive duty cycle was more beneficial in low-density networks, whereas LBT was better in high-density networks~\cite{Voicu2016}.
For the specific case of \mbox{Wi-Fi}/LTE coexistence in the 5~GHz unlicensed band, it was shown that the choice of time-sharing mechanism for LTE (i.e. LBT or adaptive duty cycling) is irrelevant, due to the large number of available channels~\cite{Simic2016}. This was confirmed when adjacent channel interference (ACI) was also modelled~\cite{Voicu2017, Voicu2017a}. We note that \cite{Voicu2017, Voicu2017a} adopted a new evaluation framework, i.e. risk-informed interference assessment, which was relevant for both policy and engineering coexistence goals.
\subsection{Summary \& Insights}
\label{sumSpecComm}
A large number of existing works considered inter-technology coexistence in a spectrum commons and we classified them into works addressing coexistence with low traffic technologies and coexistence among broadband technologies.
In general, it is not straightforward to compare the coexistence performance of different proposed spectrum sharing mechanisms in different works, due to the different considered scenarios, evaluation metrics, and coexistence goals. Moreover, most works compare the coexistence performance of their proposed mechanisms only with the performance of standalone technologies (i.e. not in coexisting deployments) or with the case where the coexisting technologies do not implement any additional sharing mechanism compared to their standard specifications.
For coexistence with low-traffic technologies, a similar number of works considered spectrum sharing techniques at Layer~1 as at Layer~2, where most of them were distributed, as expected for multiple uncoordinated network deployments in unlicensed bands.
Furthermore, a large number of works presented experimental results, which is important for capturing the coexistence performance in real deployments. We note that conducting experiments was facilitated by the availability of commercial hardware, especially for cases where only standard features of different technologies were evaluated.
For coexistence of low-traffic technologies, e.g. IEEE 802.15.4, with broadband technologies, e.g. \mbox{Wi-Fi}, it was found that the mismatch in transmit power between the two technologies was dominant, such that at short separation distances, the low-traffic technology was significantly affected, even if the broadband technology also implemented sharing in time at Layer~2, e.g. CSMA/CA.
Regarding Layer~1 techniques, FHSS was found efficient for Bluetooth when coexisting with \mbox{Wi-Fi}, whereas \mbox{Wi-Fi}'s CSMA/CA did not react fast enough to signals with fast hopping rate. We note, however, that the efficiency of FHSS was facilitated by the availability of some channels that were not occupied by \mbox{Wi-Fi}. It is thus not clear what the performance of FHSS is for very dense deployments and congested channels, as expected in emerging networks.
More advanced PHY techniques were also proposed, e.g. interference cancellation and reconfigurable antennas, but were evaluated for only one link of each technology, which suggests that further investigation is needed, in order to determine the efficiency of such techniques in real deployments.
For coexistence among broadband technologies in a spectrum commons, most of the works considered \mbox{Wi-Fi}/LTE coexistence, as \mbox{Wi-Fi} used to be the only widely deployed broadband technology in such bands. LTE was only recently proposed to operate in the unlicensed bands, where it would thus be the second broadband technology.
Most works on \mbox{Wi-Fi}/LTE coexistence considered Layer~2 spectrum sharing in time and frequency and only few experimental results were reported, due to the limited availability of testbeds where full LTE stacks are implemented and can be modified in a straightforward manner.
In general, it was reported that inter-technology sharing in frequency via channel selection is more efficient than time-sharing mechanisms like LBT or duty cycling. However, this requires a sufficient number of channels, which may not always be available. In such cases, channel selection can only be used to enhance the coexistence performance of time-sharing mechanisms.
For time sharing via LBT or duty cycling, it was found that adaptive mechanisms are required to achieve \mbox{Wi-Fi}/LTE coexistence, e.g. LBT with adaptive sensing duration or sensing threshold, or adaptive duty cycling. Fixed variants were not able to fulfil the considered fairness criteria in different works, as it is expected that they cannot take into account variations of device numbers, traffic, mobility, etc.
The distributed mechanisms of LBT with binary exponential random backoff and adaptive duty cycling based on CSAT for LTE were found to be overall equivalent from the point of view of the resulting \mbox{Wi-Fi} performance. This is due to the fact that LBT causes fewer collisions, but has a higher MAC overhead due to the sensing time, whereas adaptive duty cycling causes more collisions, but has a lower MAC overhead. Further implementation differences are as follows: LBT requires significant changes to the 3GPP LTE standard, whereas adaptive duty cycling can be implemented based on the existing LTE specifications for subframe blanking. Nonetheless, adaptive duty cycling has the additional disadvantage that it does not comply with some regional spectrum regulations at Layer~0 and requires channel monitoring (potentially via a colocated \mbox{Wi-Fi} interface) to estimate the number of coexisting devices. As such, equipment cost considerations at Layer~8 may also affect the choice of LBT or adaptive duty cycling as the spectrum sharing mechanism at Layer~2.
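The qualitative equivalence described above can be illustrated with a deliberately simplified airtime model (our own toy sketch, not taken from the surveyed works; all numbers are hypothetical): LBT pays a per-transmission sensing overhead but avoids most collisions, whereas duty cycling pays no sensing overhead but loses more transmissions to collisions.

```python
# Toy model (illustrative only): useful airtime fraction of one technology,
# given its share of channel access, the fraction of transmissions lost to
# collisions, and the fraction of airtime spent on protocol overhead.
def useful_airtime(access_share, collision_loss, overhead):
    return access_share * (1 - collision_loss) * (1 - overhead)

# Hypothetical numbers: LBT senses before transmitting (overhead) but rarely
# collides; adaptive duty cycling never senses but collides more often.
lbt = useful_airtime(access_share=0.5, collision_loss=0.02, overhead=0.08)
duty_cycle = useful_airtime(access_share=0.5, collision_loss=0.10, overhead=0.00)

print(f"LBT: {lbt:.3f}, adaptive duty cycling: {duty_cycle:.3f}")
```

With these (hypothetical) parameters both mechanisms yield a useful airtime fraction of about 0.45, mirroring the reported equivalence; the point is only that the two loss terms can balance, not that they do so for any particular deployment.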
Other adaptive variants for LBT and duty cycling were based on mathematical optimization, which has higher computation complexity. Most of these proposed solutions require coordination among LTE deployments and sometimes also with the \mbox{Wi-Fi} deployments. It is not clear how feasible such approaches are in practice and how the efficiency of such methods varies with Layer~7~and~8 parameters such as uncoordinated LTE deployments of different operators, different traffic types, mobility, and delay in obtaining the required information about the \mbox{Wi-Fi} network. Furthermore, the coexistence performance of the proposed optimized schemes was not compared with that of fully distributed adaptive schemes with low computation complexity. Consequently, it is not yet understood whether such highly optimized solutions, even in ideal conditions, significantly improve the network performance over conventional distributed schemes.
Furthermore, it was found that power control alone cannot ensure coexistence between different broadband technologies, due to the high data rate requirements and dense deployments, but that it can improve coexistence in conjunction with sharing in time or frequency.
Although PHY techniques were also proposed for coexistence among broadband technologies, e.g. interference cancellation, and beamforming techniques, they were evaluated with one of the technologies implemented for a single link only, so further investigation is needed to determine the efficiency and impact of such techniques in real network-wide deployments.
\section{Discussion \& Future Research Directions}
\label{sec_discussion}
In this section we summarize the insights from our survey on spectrum sharing mechanisms for wireless inter-technology coexistence and we indicate open challenges and possible future research directions.
\subsection{A System-Level View of Inter-Technology Spectrum Sharing}
The design of spectrum sharing mechanisms is influenced by both technical and non-technical aspects, such as regulatory restrictions, business models, and social practices.
Due to non-technical aspects, implementing the most efficient spectrum sharing mechanisms may not be straightforward.
For instance, changes in spectrum regulations were required for TVWS before secondary technologies could share underutilized spectrum with TV services.
Another example is the lack of business agreements among network operators, so that information exchange among e.g. different \mbox{Wi-Fi} hotspots operating in the same band may not be possible.
Consequently, coordinated spectrum sharing mechanisms cannot be implemented; instead, potentially less efficient, distributed sharing schemes must be used.
It is thus critical to \textbf{consider the design of spectrum sharing mechanisms for inter-technology coexistence from a unified, system-level perspective that includes both technical and non-technical aspects}.
The technology circle considered in this survey represents such a system-level framework, which incorporates Layers~1--7 of the OSI stack, regulatory restrictions at Layer~0, and business models and social practices at Layer~8.
\subsection{Recent Trends in Spectrum Sharing}
In our literature review in Sections~\ref{litHierFr} and~\ref{litSpecComm}, we identified three major recent technical and regulatory trends in terms of how spectrum is shared: \textbf{(i)}~more broadband technologies operating in a spectrum commons, i.e. \mbox{Wi-Fi}/LTE coexistence in the unlicensed bands; \textbf{(ii)}~introducing multiple primary technologies with equal access rights in the same spectrum band, which is managed by a single entity, e.g. LTE/NB-IoT coexistence; and \textbf{(iii)}~increasingly more bands set to be open for technologies with primary/secondary access rights, where \emph{secondary/secondary} inter-technology coexistence is also an issue, e.g. TVWS, the CBRS band, the 2.3-2.4 GHz band with LSA.
All three of these major developments represent the case of coexisting technologies with \emph{equal} spectrum access rights, which was the focus of this survey.
We note that, as discussed throughout Section~\ref{litHierFr}, the spectrum sharing mechanisms considered in the existing literature for primary/primary and secondary/secondary coexistence resemble either that of traditional, centrally coordinated cellular networks, or that of distributed networks operating in the unlicensed bands, as an instance of a spectrum commons.
\subsection{Challenges for Spectrum Sharing in a Spectrum Commons}
Designing spectrum sharing mechanisms for inter-technology coexistence in a \emph{spectrum commons} is the most challenging out of the three identified coexistence cases with equal spectrum access rights, due to the high \emph{heterogeneity} of the coexisting devices.
A spectrum sharing mechanism for a technology in such bands has to take into account the (intra-technology) spectrum sharing mechanisms of already existing technologies, but also to anticipate the behaviour of future technologies.
This can be addressed through regulatory limitations at Layer~0 for MAC protocols at Layer~2, e.g. ETSI specifying LBT in the 5~GHz unlicensed band in Europe.
As a result, 3GPP standardized LAA with LBT to coexist with \mbox{Wi-Fi} in the 5~GHz band.
By contrast, no such regulatory limitation on Layer~2 exists in the U.S. for the 5~GHz band, so \mbox{LTE-U} adopted an adaptive duty cycle MAC to facilitate coexistence with \mbox{Wi-Fi}. This was selected due to considerations at Layer~8, i.e. meeting the expectations to protect \mbox{Wi-Fi}~\cite{FCC2015}, while making the minimum changes to the LTE technology, in order to accelerate the time-to-market of commercial deployments.
Importantly, LAA and \mbox{LTE-U} are examples where \emph{inter-technology} spectrum sharing mechanisms also changed the way that \emph{intra-technology} coexistence is achieved; implementing LBT or duty cycling at the MAC layer for LTE enables spectrum sharing with \mbox{Wi-Fi} devices, but also among LTE devices/operators.
We emphasize that inter-technology coexistence among more than two wireless technologies has largely not been investigated in the literature yet. Thus far, this was not of practical interest, as \mbox{Wi-Fi} was the only widely-deployed broadband technology in the unlicensed bands, whereas low-traffic technologies did not pose major coexistence problems among themselves. However, studying coexistence among more than two dominant technologies may become important in the near future, due to the increasing heterogeneity of broadband technologies operating in a spectrum commons, e.g. \mbox{Wi-Fi}, LAA, and \mbox{LTE-U} operating in the 5~GHz unlicensed band.
Our survey also showed that properly evaluating inter-technology interactions in dense deployments is already complex, even for only two dominant technologies. This opens another valid research question, of whether current methodologies and modelling tools are sufficient to reliably capture the key interactions among multiple dominant technologies in the variety of coexistence cases that may arise.
\subsection{Layer~2 Spectrum Sharing in a Spectrum Commons}
Our literature review further revealed that, for inter-technology coexistence in a spectrum commons, the preferred spectrum sharing mechanisms are currently the traditional sharing in frequency and time at Layer~2,
especially for broadband technologies (\emph{cf.} Section~\ref{litrev_hightraffic}). Most of the works acknowledged that achieving coexistence through such mechanisms is possible, e.g. for \mbox{Wi-Fi}/LTE coexistence in the unlicensed bands via LBT, adaptive duty cycling, and channel selection.
It was also found that, whenever a large number of channels is available (e.g. in the 5~GHz unlicensed band), channel selection is an efficient mechanism to manage inter-technology interference, which results in only marginal performance degradation due to coexistence.
For the complementary case of inter-technology \emph{co-channel} transmissions, LBT with adaptive sensing time and adaptive duty cycling were found to provide a similar level of coexistence fairness and performance.
We note that evaluations of dynamic and heterogeneous channel widths across different coexisting technologies, due to advanced features like channel bonding, are largely missing from the literature.
Studying the impact of dynamic, heterogeneous channel-width selection for distributed deployments as expected in a spectrum commons is thus an important future research direction, as this results in more complex network-wide interference relationships among different technologies and devices.
Furthermore, as evident from our literature review in Sections~\ref{lbtOptim} and~\ref{dutOptim}, resource allocation algorithms derived from optimization of formal mathematical problems are difficult to implement, as they would require information exchange at a level that may not be feasible in practice, due to Layer~8 aspects.
Moreover, we believe that perfect spectrum sharing optimization for inter-technology coexistence is in general not applicable for a spectrum commons, due to the limited information about other coexisting technologies and devices, the high level of heterogeneity, the large number of network managers, and the dynamics of the deployments; these factors are a direct effect of equal spectrum access rights at Layer~0 and distributed network ownership at Layer~8.
Nonetheless, many works in the literature studied inter-technology coexistence with respect to a formal optimization goal, e.g. proportional fairness in Sections~\ref{litrev_lbt} and~\ref{litrev_dut}, despite the potential challenges of implementing such solutions in practice.
We emphasize that the validity of these optimum solutions in real (non-idealized) deployments is still an open question, given the variability of system parameters like traffic demand, hardware performance, and network size.
Furthermore, it is not clear whether the performance of these solutions is better than for conventional distributed adaptive mechanisms with lower computation complexity, like LBT with binary exponential random backoff or adaptive duty cycle based on CSAT.
An important future research direction is thus performing a thorough sensitivity analysis, to determine the spectrum sharing mechanisms in the design space that are near-optimal yet robust for practical engineering deployments.
\subsection{Layer~1 Spectrum Sharing in a Spectrum Commons}
In the reviewed literature, Layer~1 spectrum sharing mechanisms were found to be efficient for low-traffic coexisting technologies, e.g. FHSS for Bluetooth in Section~\ref{rev_bluetooth}.
However, Layer~1 techniques such as interference cancellation and beamsteering were found to be less feasible in practice for achieving inter-technology coexistence among broadband technologies in Section~\ref{wifiltecoex}.
Such techniques are based on acquiring information through multiple wireless interfaces that decode signals of other coexisting technologies. Additionally, interference cancellation also requires changes in the MAC layer.
Further research is needed to determine the practical feasibility of achieving inter-technology coexistence via Layer~1 spectrum sharing mechanisms like interference cancellation and beamforming in large-scale heterogeneous deployments.
\subsection{Performance Evaluation of Spectrum Sharing Mechanisms}
We found that comparing the coexistence performance of different candidate spectrum sharing mechanisms is not straightforward, especially given the large amount of research work in the literature, with different assumptions and methods, often referring to different coexistence goals (\emph{cf.} Tables~\ref{table_review_3b}, \ref{table_review_4b}, and \ref{table_review_4c}).
As evident throughout our literature review, most of the works on inter-technology coexistence, and especially those addressing coexistence in a spectrum commons, focus on evaluating only a single or variants of a given main spectrum sharing mechanism (e.g. variants of LBT-LTE, or duty-cycle-LTE).
Moreover, the considered candidate spectrum sharing mechanism is often compared only to the baseline cases where a single technology uses the spectrum, or where newly coexisting technologies do not implement any additional sharing mechanism, e.g. LTE continuously transmitting in the 5~GHz unlicensed band as it traditionally does in dedicated licensed spectrum. Consequently, it is seldom possible to directly compare the coexistence performance reported in different works for different spectrum sharing mechanisms.
In order to address this issue, coexistence goals should be more clearly and explicitly defined in the first place.
Also, it is important to study candidate spectrum sharing mechanisms for different coexisting technologies within the same framework.
Furthermore, only few experimental results were reported for inter-technology coexistence in a spectrum commons, due to the lack of testbeds with fully operational protocol stacks where different coexistence mechanisms can be implemented in a straightforward manner.
We emphasize that empirical studies are crucial for evaluating the performance of inter-technology coexistence in real deployments and revealing potential implementation issues. Consequently, an important future research direction is developing flexible and accessible software and hardware platforms that can be configured with a moderate amount of effort to implement standard and proposed protocol stacks.
\subsection{Other Open Challenges}
The recent introduction of different LTE variants as broadband technologies in a spectrum commons suggests that, for capacity increase, operating in unlicensed bands is straightforward to adopt from a technical perspective. These LTE variants either aggregate unlicensed carriers, i.e. LAA and LTE-U, or operate exclusively in the unlicensed bands, i.e. MulteFire~\cite{MulteFireAlliance2017}.
This opens an interesting spectrum regulatory research question: whether it may be attractive to open more shared bands for traffic offloading and reserve licensed spectrum only for important signalling traffic and QoS-guaranteed services.
Finally, we note that most of the reviewed spectrum sharing mechanisms for inter-technology coexistence in a spectrum commons are fully distributed, and only a few centralized, as summarized in Section~\ref{litSpecComm} and Tables~\ref{table_review_3} and \ref{table_review_4}.
Considering more fundamental performance limits of inter-technology spectrum sharing mechanisms is still missing from the literature. Specifically, investigating the impact of different levels of coordination among networks of different technologies is a rich yet largely unexplored research direction.
\section{Conclusions}
\label{sec_conclusions}
In this survey we explored the design space of spectrum sharing mechanisms for wireless inter-technology coexistence from a unified, system-level perspective, i.e. the technology circle, that integrates technical and non-technical aspects at different layers.
We reviewed the literature on inter-technology coexistence with respect to different layers of the technology circle, where we considered technologies with equal spectrum access rights: (i)~primary/primary; (ii)~secondary/secondary; and (iii)~technologies operating in a spectrum commons.
Throughout the literature review we identified the following three major trends for inter-technology coexistence: (i)~more broadband technologies operating in a spectrum commons; (ii)~introducing multiple primary technologies with equal access rights in the same spectrum band; and (iii)~increasingly more bands set to be open for technologies with primary/secondary access rights, where secondary/secondary inter-technology coexistence may also become an issue.
Spectrum sharing mechanisms for primary/primary and secondary/secondary coexistence in the literature were similar to those in centrally coordinated cellular networks, or to those in a spectrum commons.
Out of the three identified cases of inter-technology coexistence with equal spectrum access rights, coexistence in a spectrum commons is the most challenging, due to the high heterogeneity of coexisting devices and technologies.
For such cases, Layer~2 mechanisms like distributed spectrum sharing in time and frequency (e.g. LBT, adaptive duty cycling, and channel selection) are currently considered efficient for ensuring coexistence, whereas the coexistence performance of advanced PHY layer techniques (e.g. interference cancellation, beamforming) in large, dense deployments has been largely unaddressed.
Furthermore, our survey revealed that the performance of proposed spectrum sharing mechanisms in different works is difficult to compare directly, due to the different assumptions, baselines, scenarios, coexistence goals, and evaluation methods, where only few works assess multiple spectrum sharing approaches within the same framework.
The key open challenges that we identified for inter-technology coexistence with equal spectrum access rights are: investigating the coexistence performance of interference cancellation and beamforming in network-wide deployments; evaluating the performance of more than two coexisting broadband technologies; considering heterogeneous channel widths throughout coexisting deployments; performing sensitivity analyses for highly optimized solutions with respect to parameters at different layers of the technology circle in real deployments; developing accessible software and hardware testing platforms; and considering the impact of different levels of coordination among coexisting technologies.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{intro}
As is well known, Einstein's theory of gravity leads to so-called
gravitomagnetic effects, i.e., gravitational effects caused by moving or rotating masses. Such effects, e.g., those related to a rotating Earth, have been studied intensively in the past. To be mentioned here is the Lense-Thirring effect, which describes perturbations of a satellite orbit in the gravitomagnetic dipole field of a rotating body. It was first described by Lense and Thirring (1918, see \citet{lense:1984} for a translation) after extensive groundwork by Einstein. \citet{ciufolini:2004} found the effect to be confirmed within a 10 \% accuracy by a detailed analysis of the orbits of the two LAGEOS satellites. The results, however, are still subject to an ongoing debate (see \citet{iorio2011}).
Closely related to the Lense-Thirring effect is the precession a torque-free gyroscope experiences due to gravitomagnetism. This Pugh-Schiff effect was proposed as an alternative test for frame-dragging (see \citet{pugh1959} and \citet{schiff1960}). Although unexpected problems arose, \citet{everitt:2011} measured the effect within an accuracy of 13 \%.
The above mentioned effects solely take the spin dipole into account. Naturally one is also interested in the influence of higher spin multipole moments. Such higher multipole moments come into play via multipole expansions of the so-called gravitoelectric potential $\phi$ (a generalisation of the Newtonian potential $U$ which is often also denoted by $w$) and the gravitomagnetic vector potential $\mathbf{w}$. Both are used in order to parametrise the metric tensor in the first post-Newtonian approximation. For more details on the origin and form of the metric tensor in the first post-Newtonian approximation please see \citet{soffel2003} and \citet{soffeldamourxu:1991}.
In this paper $\mathbf{w}$ will be our primary concern. As for the gravitoelectric potential $\phi$ we will just use a post-Newtonian mass monopole, i.e., $\phi=GM/r$, where $G$ is the Gravitational constant, $M$ the Blanchet-Damour mass of the central body and $r = \| \mathbf{x} \|$ the distance to its centre of mass. The gravitomagnetic potential $\mathbf{w}$ is induced by a matter current density $\bfg{\sigma}$ ($\sigma^k := T^{0k}/c$, where $T^{\mu\nu}$ is the body's energy-momentum tensor and $c$ the speed of light). In the stationary case and outside a coordinate sphere ${\cal S}$ that fully covers the energy-momentum tensor of the central body $\mathbf{w}$ admits a multipole expansion of the form (e.g. \citet{blanchet:1989:377})
\begin{equation}
\bfl{w} = w_k \bfl{e}_k = - G \sum_{l=1}^{\infty} \frac{(-1)^l l}{l!(l+1)} \epsilon_{kab} J_{bL-1} \partial_{aL-1} \left(\frac{1}{r} \right) \bfl{e}_k \label{vectorpotential}
\end{equation}
with the spin multipole moments
\begin{equation}
J_L \coloneqq \int_{\mathbb{R}^3} \epsilon_{ab < k_l} \hat{x}_{L-1 > a} \sigma_{b} d^3x.
\end{equation}
In these equations $\epsilon_{abc}$ denotes the fully antisymmetric three-dimensional Levi-Civita symbol, for which we shall use $\epsilon_{123}:=+1$. The vectors $(\bfl{e}_k)$, $k=1,2,3$, stand for the canonical basis of the $\mathbb{R}^3$, i.e., $\bfl{e}_1=(1,0,0)^T$ etc. The spin-moments $J_L$ are Cartesian STF (Symmetric and Trace-free) tensors, where $L$ is a Cartesian multi-index, $L = (k_1, \dots ,k_l)$, of $l$ Cartesian indices, each taking the values $1,2,3$. Symmetric refers to the symmetry with respect to all $l$ indices, while trace-free means that every contraction between two arbitrary indices vanishes. For more information on STF-tensors please also see \citet{thorne1980} and \citet{soffel:1994:139}.
A summation over two equal (dummy) indices is implied automatically (e.g., $A_a B_a := A_1 B_1 + A_2 B_2 + A_3 B_3$) and $\partial_{ab} := \partial^2 /(\partial x^a \partial
x^b)$ etc. The hat on top of a symbol indicates the STF-part. Angle brackets indicate the STF-part with respect to the indices enclosed.
Note that for the first post-Newtonian metric the spin multipole moments have to be defined to Newtonian order only. The spin-dipole moment, $\mathbf{J}$, agrees with the usual intrinsic angular momentum of
the body.
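For $l=1$, expression \eqref{vectorpotential} reduces to the familiar dipole potential $\bfl{w} = G\,(\bfl{J} \times \bfl{x})/(2 r^3)$. This reduction can be spot-checked numerically (our own sketch, with illustrative values and units $G=1$; the derivative $\partial_a (1/r)$ is approximated by central differences):

```python
import numpy as np

G = 1.0
J = np.array([0.1, -0.2, 0.3])      # illustrative spin vector
x = np.array([1.3, -0.7, 2.1])      # field point
h = 1e-5                            # finite-difference step

def inv_r(p):
    return 1.0 / np.linalg.norm(p)

# central-difference gradient of 1/r
grad = np.array([(inv_r(x + h*e) - inv_r(x - h*e)) / (2*h) for e in np.eye(3)])

# l = 1 term of the expansion: w_k = (G/2) eps_{kab} J_b d_a(1/r)
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0
w_dipole = 0.5 * G * np.einsum('kab,b,a->k', eps, J, grad)

# closed form: w = G (J x x) / (2 r^3)
r = np.linalg.norm(x)
w_closed = G * np.cross(J, x) / (2 * r**3)

print(np.allclose(w_dipole, w_closed, atol=1e-8))   # True
```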
In this paper we will study a central axisymmetric body rotating uniformly about its symmetry axis. Perturbations of satellite orbits caused by the corresponding gravitomagnetic field induced by spin multipole moments of arbitrary order will be considered by means of perturbation theory. Such spin multipole moments have been considered by \citet{teyssandier1978} and \citet{panhans:2014paper}. Earlier applications of the mentioned expansions can be found in \citet{letelier2008} for the precession of a gyroscope and in \citet{iorio2001} where an alternative derivation of the Lense-Thirring effect is given.
Here we shall also use a coordinate system with an axial symmetry with respect to the z-axis, i.e., the spin vector of the body points in the z-direction. This simplifies the form of the multipole expansion used. On the other hand, this might be a shortcoming, because generality is lost due to the unknown transformation behaviour of the spin moments. A treatment of a general spin orientation can be found in \citet{iorio2012}, where a general spin-spin interaction was taken into account as well. A discussion of mixed effects connected with an arbitrary spin orientation can be found in \citet{iorio2015}.
\section{Equations of motion and STW-decomposition}
Post-Newtonian satellite equations of motion have been studied in detail in the literature (e.g., \citet{soffeldamourxu:1994}). Considering only a single isolated central body the coordinate acceleration of a satellite $\bfl{a}_{\text{S}}$ (e.g., equation (3.4) of \citet{soffeldamourxu:1994}) has the form
\begin{equation}
\bfl{a}_{\text{S}} = \nabla \phi + \frac{1}{c^2} \left[ - 4 \phi \nabla \phi - \mathbf{v} (3 \dot{\phi} + 4 (\mathbf{v} \nabla) \phi) + 4 \mathbf{\dot{w}} + v^{2} \nabla \phi + \mathbf{v} \times \mathbf{B} \right]
\end{equation}
where the gravitomagnetic field $\bfl{B}$ is defined by
\begin{equation}
\bfl{B} := - 4 \nabla \times \bfl{w} \, .
\end{equation}
Since our central interest are the perturbations induced by some stationary gravitomagnetic field, i.e., $\dot{\mathbf{w}}=0$, we will simplify this equation to
\begin{equation}
\bfl{a}_{\text{S}} = \nabla \phi + \bfl{a}_{\text{per}}
\end{equation}
with the perturbing acceleration
\begin{equation}
\bfl{a}_{\text{per}} := \frac{1}{c^2} \bfl{v} \times \bfl{B} \, .
\end{equation}
Here, $\bfl{v}$ is the satellite's coordinate velocity. Effects connected with a time variation of the spin vector, together with estimates of their orders of magnitude, were studied in \citet{mashoon2008} and \citet{iorio2002}. For satellite orbits around the Earth such time dependencies can be neglected.
As mentioned in the beginning we will take $\phi = GM/r$ so the gravitoelectric part of the potential leads to unperturbed Keplerian orbits. The satellite orbit will be described by the usual set of orbital elements $(a,e,I,\Omega,\omega,M_0)$ and the perturbations by means of a Gaussian STW-perturbation theory of first order
with $S$, $T$, and $W$ being the scalar products of $\bfl{a}_{\text{per}}$ with $\bfl{n}$ for $S$, $\bfl{k}$ for $W$ and $\bfl{k} \times \bfl{n}$ (the in-plane transverse direction) for $T$. These vectors are defined by $\bfl{n} \coloneqq \bfl{x} / r$, $r:=\|\bfl{x}\|$ and $\bfl{k}:= \bfl{C}/C$, $\bfl{C} := \bfl{x} \times \bfl{v}$, $C := \|\bfl{C}\|$.
An explicit calculation yields
\begin{equation}
S = \frac{\mathbf{B} \cdot \mathbf{C}}{rc^2} , \quad
T = -\frac{\mathbf{x} \cdot \mathbf{v}}{C} S, \quad
W = \frac{[(\mathbf{x}\mathbf{v})(\mathbf{v}\mathbf{B})-v^2(\mathbf{x}\mathbf{B})]}{Cc^2}.
\label{STW}
\end{equation}
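These expressions follow from $\bfl{a}_{\text{per}} = \bfl{v} \times \bfl{B}/c^2$ via triple-product identities, using in particular that $\bfl{a}_{\text{per}} \cdot \bfl{v} = 0$. Taking the transverse direction as $\bfl{k} \times \bfl{n}$ (the in-plane direction of motion), the three formulas can be spot-checked numerically with arbitrary vectors (our own sketch; illustrative units with $c^2 = 1$):

```python
import numpy as np

rng = np.random.default_rng(0)
x, v, B = rng.normal(size=(3, 3))   # arbitrary position, velocity, B-field
c2 = 1.0                            # c^2 (illustrative units)

a_per = np.cross(v, B) / c2         # perturbing acceleration v x B / c^2
r = np.linalg.norm(x)
C_vec = np.cross(x, v)
C = np.linalg.norm(C_vec)
n_hat, k_hat = x / r, C_vec / C

S = a_per @ n_hat                   # radial component
T = a_per @ np.cross(k_hat, n_hat)  # transverse (in-plane) component
W = a_per @ k_hat                   # normal component

assert np.isclose(S, (B @ C_vec) / (r * c2))
assert np.isclose(T, -(x @ v) / C * S)
assert np.isclose(W, ((x @ v) * (v @ B) - (v @ v) * (x @ B)) / (C * c2))
print("STW identities verified")
```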
In the next step we want to calculate $\mathbf{B}$ explicitly. Using \eqref{vectorpotential} a calculation of the $\mathbf{B}$-field reveals
\begin{equation}
\mathbf{B} = 4 G \sum_{l=1}^\infty \frac{(-1)^l l}{(l+1)!} J_L \nabla \partial_L \left( \frac{1}{r} \right) = 4 G \sum_{l=1}^{\infty} \frac{l(2l-1)!!}{(l+1)!} \nabla \left(\frac{J_L \hat{n}_L}{r^{l+1}} \right).
\end{equation}
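The second equality uses the standard relation $\partial_L (1/r) = (-1)^l (2l-1)!!\, \hat{n}_L/r^{l+1}$ together with the trace-free character of $J_L$. For $l=2$, where $(2l-1)!! = 3$, the contraction identity can be spot-checked numerically with an arbitrary symmetric trace-free tensor (our own sketch, illustrative values):

```python
import numpy as np

# arbitrary symmetric, trace-free tensor J_ab (trace: 1 - 3 + 2 = 0)
J = np.array([[1.0,  2.0, 0.0],
              [2.0, -3.0, 1.0],
              [0.0,  1.0, 2.0]])
x = np.array([0.4, -1.1, 0.9])
r = np.linalg.norm(x)
n = x / r

# exact Hessian of 1/r: d_a d_b (1/r) = (3 x_a x_b - r^2 delta_ab) / r^5
hess = (3 * np.outer(x, x) - r**2 * np.eye(3)) / r**5

lhs = np.einsum('ab,ab->', J, hess)              # J_ab d_a d_b (1/r)
rhs = 3 * np.einsum('ab,a,b->', J, n, n) / r**3  # (2l-1)!! J_ab n_a n_b / r^{l+1}
assert np.isclose(lhs, rhs)
print("contraction identity verified for l = 2")
```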
The use of spherical coordinates has proven to be of advantage and so we will use spherical spin multipole moments
\begin{equation}
\Xi_{lm}:= \int d^3x \left[r^l (\mathbf{x} \times \boldsymbol{\sigma}) \nabla Y^*_{lm} \right]
\end{equation}
as introduced by \citet{panhans:2014paper}. Here
\begin{equation}
Y_{lm}(\lambda, \phi):=\sqrt{\frac{2l+1}{4 \pi} \frac{(l-m)!}{(l+m)!}} \text{e}^{im \phi} P_{lm}(\lambda)
\end{equation}
stands for spherical harmonics and
\begin{equation}
P_{lm}(\lambda):=\frac{(-1)^m}{2^l l!} (1-\lambda^2)^{\frac{m}{2}} \frac{d^{m+l}}{d\lambda^{m+l}} (\lambda^2-1)^l
\end{equation}
are associated Legendre functions. Legendre polynomials are denoted as
\begin{equation}
P_l(\lambda):=P_{l0}(\lambda).
\end{equation}
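As a consistency check of these definitions (which include the Condon-Shortley phase $(-1)^m$), the Rodrigues-type formula can be evaluated symbolically and compared against known closed forms such as $P_2(\lambda) = (3\lambda^2-1)/2$ and $P_{11}(\lambda) = -\sqrt{1-\lambda^2}$ (our own sketch using \texttt{sympy}):

```python
import sympy as sp

lam = sp.symbols('lambda', real=True)

def P(l, m):
    # Rodrigues-type formula from the text, including the Condon-Shortley phase
    return sp.simplify((-1)**m / (2**l * sp.factorial(l))
                       * (1 - lam**2)**sp.Rational(m, 2)
                       * sp.diff((lam**2 - 1)**l, lam, l + m))

assert sp.simplify(P(2, 0) - (3*lam**2 - 1)/2) == 0
assert sp.simplify(P(1, 1) + sp.sqrt(1 - lam**2)) == 0
assert sp.simplify(P(3, 0) - sp.legendre(3, lam)) == 0
print("Legendre definitions consistent")
```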
In the following the arguments of these functions will be the polar angle $\phi$, while we plug in $\cos(\theta)$ for $\lambda$, where $\theta$ is the azimuth angle. With this convention we will suppress arguments when using these functions.
The spherical spin multipoles are connected with $J_L$ via
\begin{equation}
J_L = \frac{4 \pi (l-1)!}{(2l+1)!!} \sum_{m=-l}^l \hat{Y}^{lm}_L \Xi_{lm}. \label{JL}
\end{equation}
Written in terms of spherical harmonics $\bfl{B}$ becomes
\begin{equation}
\mathbf{B} = 4 G \sum_{l=1}^{\infty} \sum_{m=-l}^l \frac{4 \pi}{(2l+1)(l+1)} \Xi_{lm} \left\{ \frac{1}{r^{l+1}} \nabla Y_{lm}
-(l+1) Y_{lm} \frac{\mathbf{x}}{r^{l+3}} \right\}. \label{bfinal2}
\end{equation}
The assumption of an axial symmetry implies $\Xi_{lm} = \xi_l \delta_{0m}$. Under this assumption expression \eqref{bfinal2} reduces to
\begin{equation}
\mathbf{B} = - 4 G \sum_{l=1}^{\infty} \frac{2 \sqrt{\pi}}{\sqrt{(2l+1)}(l+1)} \xi_{l} \left\{ \frac{\sin(\theta) P_l'}{r^{l+2}} \mathbf{e}_{\theta}+ (l+1) P_l \frac{\mathbf{x}}{r^{l+3}} \right\},
\end{equation}
with
\begin{equation}
\bfl{e}_{\theta} = \cos(\theta) \cos(\phi) \bfl{e}_1 + \cos(\theta) \sin(\phi) \bfl{e}_2 - \sin(\theta) \bfl{e}_3
\end{equation}
while $P_l'$ is the first derivative of $P_l$ with respect to its variable.
As a consequence of this form of $\bfl{B}$ we get for $S$ and $W$ ($T$ is given by $S$ through \eqref{STW})
\begin{align}
S = \; & \frac{8 \sqrt{\pi} C G}{c^2} \sum_{l=1}^{\infty} \frac{\xi_l}{\sqrt{2l+1}(l+1) r^{l+3}} \cos(I) P_l', \label{Sfinal2} \\
W = \; & \frac{8 \sqrt{\pi} C G}{c^2} \sum_{l=1}^{\infty} \frac{\xi_l}{\sqrt{2l+1}(l+1) r^{l+3}} \nonumber \\
& \left[ (l+1)P_l + \frac{re \sin(\nu) \cos(u) \sin(I)}{p} P_l' \right]. \label{Wfinal2}
\end{align}
In this equation $\nu$ denotes the true anomaly and we used $u:=\nu + \omega$ and $p:=a(1-e^2)$. In the next section the perturbations of the orbital elements will be discussed.
\section{Discussion of the single orbital elements}
\label{sec:3}
\subsection{Semi-major axis}
The differential equation for the semi-major axis reads
\begin{equation}
\dot{a} =\frac{2}{n \sqrt{1-e^2}} \left( Se \sin(\nu)+T \frac{p}{r} \right).
\end{equation}
We make use of the second equation in \eqref{STW} and get
\begin{equation}
\dot{a} = \frac{2}{n \sqrt{1-e^2}} \left( e \sin(\nu) - \frac{\bfl{x} \cdot \bfl{v} }{C} \frac{p}{r} \right) S.
\end{equation}
In the last step we furthermore apply
\begin{equation}
\mathbf{x} \cdot \mathbf{v} = \frac{rC}{p}e \sin (\nu),
\end{equation}
an equation we will also need for the eccentricity, and find
\begin{equation}
\dot{a}=0.
\end{equation}
Thus, for this special form of the perturbing acceleration, the semi-major axis will not change over time.
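The relation $\mathbf{x} \cdot \mathbf{v} = (rC/p)\, e \sin(\nu)$ used above is readily verified on a sampled Keplerian ellipse in perifocal coordinates, where $\bfl{x} = r(\cos\nu, \sin\nu, 0)$ and $\bfl{v} = \sqrt{\mu/p}\,(-\sin\nu, e+\cos\nu, 0)$ (our own sketch; $\mu$, $p$ and $e$ are illustrative values):

```python
import numpy as np

mu, p, e = 1.0, 1.5, 0.3                      # illustrative orbit parameters
nu = np.linspace(0.0, 2*np.pi, 13)            # sample of true anomalies

r = p / (1 + e*np.cos(nu))
x = np.stack([r*np.cos(nu), r*np.sin(nu), np.zeros_like(nu)], axis=1)
v = np.sqrt(mu/p) * np.stack([-np.sin(nu), e + np.cos(nu), np.zeros_like(nu)],
                             axis=1)

C = np.linalg.norm(np.cross(x, v), axis=1)    # |x x v|, constant = sqrt(mu p)
xv = np.einsum('ij,ij->i', x, v)              # x . v along the orbit

assert np.allclose(C, np.sqrt(mu*p))
assert np.allclose(xv, r*C/p * e*np.sin(nu))
print("x.v identity verified")
```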
\subsection{Eccentricity}
\label{subsecDiscus:1}
We are using again the second relation of \eqref{STW} and the expression for $\bfl{x} \cdot \bfl{v}$ together with equations known from the description with orbital elements,
\begin{equation}
\cos(E) = \frac{r}{p}(e+\cos(\nu)), \quad \frac{p}{r} = 1+e \cos (\nu),
\end{equation}
and find
\begin{align}
\dot{e} = \frac{8 \sqrt{\pi} G \cos(I)}{c^2} \sum_{l=1}^{\infty} \frac{\xi_l}{\sqrt{2l+1}(l+1)a^{l+2}} \left( \frac{r}{a} \right)^{-1-l} \sin(\nu) P_l' \big( \sin(u) \sin(I) \big).
\end{align}
The last three steps are expressing the $P_l'$ with $P_k$ via
\begin{align}
P_l' = \sum_{k=0}^{l-1} N_{lk} P_k, \quad N_{lk} = \begin{cases} 2k+1 & \quad \text{$k$ even, $l$ odd or $k$ odd, $l$ even} \\ 0 & \quad \text{otherwise} \end{cases},
\end{align}
(a consequence of no. 8.915/2. in \citet{gradstein:2007}) followed by rewriting the $P_k$ in terms of complex inclination functions $F_{kab}$ (see \citet{kaula1961})
\begin{equation}
P_k(\sin(u) \sin(I)) = \sum_{b=0}^k F_{k0b}(I) \text{e}^{i(k-2b)u}
\end{equation}
and finishing the series of conversions by eliminating the implicit time dependency through $\nu$ by using Hansen coefficients $X^{n,m}_s$ (e.g., \citet{hansennumerics:1990}) so we end with
\begin{align}
\dot{e} = & \; \frac{8 \sqrt{\pi} \cos (I) G}{c^2} \sum_{l=1}^{\infty} \frac{\xi_l}{\sqrt{2l+1}(l+1)a^{l+2}} \sum_{k=0}^{l-1} \sum_{b=0}^{k} N_{lk} F_{k0b}(I) \text{e}^{i \omega(k-2b)} \nonumber \\
& \frac{1}{2i} \sum_{s=-\infty}^{\infty} \Big( X^{-1-l,k-2b+1}_s - X^{-1-l,k-2b-1}_s \Big) \text{e}^{isM}. \label{dote}
\end{align}
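The Legendre-derivative expansion $P_l' = \sum_k N_{lk} P_k$ with $N_{lk}=2k+1$ for $l$ and $k$ of opposite parity can be verified numerically; a minimal sketch using numpy's Legendre-series utilities:

```python
import numpy as np
from numpy.polynomial import legendre as L

xs = np.linspace(-1.0, 1.0, 101)
for l in range(1, 8):
    cl = np.zeros(l + 1)
    cl[l] = 1.0                          # Legendre-series coefficients of P_l
    lhs = L.legval(xs, L.legder(cl))     # P_l'(x)
    rhs = np.zeros_like(xs)
    for k in range(l):
        if (l - k) % 2 == 1:             # N_lk = 2k+1 only for opposite parity
            ck = np.zeros(k + 1)
            ck[k] = 1.0
            rhs += (2 * k + 1) * L.legval(xs, ck)
    assert np.allclose(lhs, rhs)
```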
In a first-order perturbation theory we just have to integrate \eqref{dote} with respect to $t$, which is straightforward given the trivial time dependence $M(t)=nt+M_0$. One has to be careful with the term for $s=0$, though. The result is
\begin{align}
\Delta e = & \; \frac{8 \sqrt{\pi} \cos (I) G}{c^2} \sum_{l=1}^{\infty} \frac{\xi_l}{\sqrt{2l+1}(l+1)a^{l+2}} \sum_{k=0}^{l-1} \sum_{b=0}^{k} N_{lk} F_{k0b}(I) \text{e}^{i \omega(k-2b)} \nonumber \\
& \bigg\{ \frac{- 1}{2n} \sum_{\substack{s=-\infty \\ s \neq 0}}^{\infty} \Big( X^{-1-l,k-2b+1}_s - X^{-1-l,k-2b-1}_s \Big) \frac{\text{e}^{isM}}{s} \nonumber \\
& \hspace{0.3cm}+ \frac{1}{2i} \left( X^{-1-l,k-2b+1}_0 - X^{-1-l,k-2b-1}_0 \right)t \bigg\} \label{deltaet}.
\end{align}
The secular perturbations of $e$ vanish because of
\begin{align}
\Delta e_{\text{sec}}(t) & = \sum_{l=1}^{\infty} \frac{8 t G \sqrt{\pi} \cos (I) \xi_{l}}{\sqrt{2l+1}(l+1) a^{l+2}c^2} \sum_{\substack{k=0 \\ k=\text{even}}}^{l-1} \frac{N_{lk} F_{k0 \frac{k}{2}} (I)}{2i} \left( X^{-1-l,1}_0 - X^{-1-l,-1}_0 \right) \nonumber \\
& = 0
\end{align}
where we used $X^{n,m}_0 = X^{n,-m}_0$.
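The Hansen coefficients with $s=0$ appearing here can be evaluated by straightforward quadrature over the true anomaly, using $\dint M = (r/a)^2 \dint \nu / \sqrt{1-e^2}$. The sketch below checks two standard closed forms and the symmetry $X^{n,m}_0 = X^{n,-m}_0$ used above; the eccentricity value is arbitrary.

```python
import numpy as np

def hansen_X0(n, m, e, N=200000):
    """X^{n,m}_0 by quadrature over the true anomaly, using
    dM = (r/a)^2 dnu / sqrt(1 - e^2)."""
    nu = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
    ra = (1.0 - e**2) / (1.0 + e * np.cos(nu))
    return np.mean(ra**(n + 2) * np.cos(m * nu)) / np.sqrt(1.0 - e**2)

e = 0.3   # arbitrary test eccentricity
# Two standard closed forms ...
assert abs(hansen_X0(-2, 0, e) - (1.0 - e**2)**-0.5) < 1e-6
assert abs(hansen_X0(-3, 0, e) - (1.0 - e**2)**-1.5) < 1e-6
# ... and the symmetry responsible for the vanishing secular part of e:
assert abs(hansen_X0(-4, 1, e) - hansen_X0(-4, -1, e)) < 1e-12
```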
\subsection{Inclination}
\label{subsecDiscus:2}
The discussion of the inclination $I$ will follow the same pattern as before but gets slightly more complicated because of the appearance of $W$ rather than $S$. The starting point is the perturbation equation
\begin{align}
\dot{I} = \frac{8 \sqrt{\pi} G}{c^2} \sum_{l=1}^{\infty} \frac{\xi_l}{\sqrt{2l+1}(l+1)r^{l+2}} \cos(u) \left[ (l+1)P_l + \frac{r}{p}e \sin(\nu) \sin(I) \cos(u) P'_l\right].
\end{align}
This time we do not expand $P_l'$ but use
\begin{equation}
\cos(u) \sin(I) P_l'(\sin(u) \sin(I)) = \frac{\partial P_l}{\partial u}(\sin(u) \sin(I))
\end{equation}
instead. We apply the other conversions as before and find
\begin{align}
\dot{I} = \; & \frac{4 \sqrt{\pi} G}{c^2} \sum_{l=1}^{\infty} \frac{\xi_l}{\sqrt{2l+1}(l+1)a^{l+1} p} \left( \frac{r}{a} \right)^{-1-l} \nonumber \\
& \sum_{b=0}^l F_{l0b} \left( \text{e}^{iu}+\text{e}^{-iu} \right) \text{e}^{iu(l-2b)} \Big[ (l+1)(1+e \cos(\nu)) + i e (l-2b) \sin(\nu) \Big] \nonumber
\end{align}
\begin{align}
= \; & \frac{4 \sqrt{\pi} G}{c^2} \sum_{l=1}^{\infty} \frac{\xi_l}{\sqrt{2l+1}(l+1)a^{l+1} p} \left( \frac{r}{a} \right)^{-1-l} \sum_{b=0}^l F_{l0b} \text{e}^{i \omega(l-2b)} \nonumber \\
& \bigg\{ \left[ (l+1) \text{e}^{i \nu(l-2b+1)} + \frac{e}{2} (2l-2b+1) \text{e}^{i \nu(l-2b+2)} +\frac{e}{2} (2b+1) \text{e}^{i \nu(l-2b)} \right] \text{e}^{i \omega} \nonumber \\
& + \left[ (l+1) \text{e}^{i \nu(l-2b-1)} + \frac{e}{2} (2l-2b+1) \text{e}^{i \nu(l-2b)} +\frac{e}{2} (2b+1) \text{e}^{i \nu(l-2b-2)} \right] \text{e}^{-i \omega} \bigg\} \nonumber \\
= & \; \sum_{l=1}^{\infty} \frac{ 4 \sqrt{\pi} G \xi_l}{\sqrt{2l+1}(l+1)a^{l+2} (1-e^2)c^2} \sum_{b=0}^l F_{l0b} \text{e}^{i \omega(l-2b)} \nonumber \\
& \sum_{s=-\infty}^{\infty} \Bigg\{ \left[ (l+1) X^{-1-l,l-2b+1}_s + \frac{e}{2} (2l-2b+1) X^{-1-l,l-2b+2}_s \right. \nonumber \\
& \hspace{1cm} \left. + \frac{e}{2} (2b+1) X^{-1-l,l-2b}_s \right] \text{e}^{i \omega} + \Big[ (l+1) X^{-1-l,l-2b-1}_s \nonumber \\
& \hspace{1cm} + \frac{e}{2} (2l-2b+1) X^{-1-l,l-2b}_s +\frac{e}{2} (2b+1) X^{-1-l,l-2b-2}_s \Big] \text{e}^{-i \omega} \Bigg\} \text{e}^{isM}.
\end{align}
An integration yields the fairly long expression
\begin{align}
\Delta I(t) = \; & \sum_{l=1}^{\infty} \frac{4 G \sqrt{\pi} \xi_l}{\sqrt{2l+1}(l+1)a^{l+2} (1-e^2)c^2} \sum_{b=0}^l F_{l0b} \text{e}^{i \omega(l-2b)} \nonumber \\
& \Bigg\{ \sum_{\substack{s=-\infty \\ s \neq 0}}^{\infty} \bigg\{ \left[ (l+1) X^{-1-l,l-2b+1}_s + \frac{e}{2} (2l-2b+1) X^{-1-l,l-2b+2}_s \right. \nonumber \\
& \hspace{0.5cm} + \left. \frac{e}{2} (2b+1) X^{-1-l,l-2b}_s \right] \text{e}^{i \omega} + \Big[ (l+1) X^{-1-l,l-2b-1}_s \nonumber \\
& \hspace{0.5cm} + \left. \frac{e}{2} (2l-2b+1) X^{-1-l,l-2b}_s +\frac{e}{2} (2b+1) X^{-1-l,l-2b-2}_s \right] \text{e}^{-i \omega} \bigg\} \frac{ \text{e}^{isM}}{i n s} \nonumber \\
& \hspace{0.5cm} + \bigg\{ \left[ (l+1) X^{-1-l,l-2b+1}_0 + \frac{e}{2} (2l-2b+1) X^{-1-l,l-2b+2}_0 \right. \nonumber \\
& \hspace{0.5cm} + \frac{e}{2} \left. (2b+1) X^{-1-l,l-2b}_0 \right] \text{e}^{i \omega} + \Big[ (l+1) X^{-1-l,l-2b-1}_0 \nonumber \\
& \hspace{0.5cm} + \frac{e}{2} (2l-2b+1) X^{-1-l,l-2b}_0 +\frac{e}{2} (2b+1) X^{-1-l,l-2b-2}_0 \Big] \text{e}^{-i \omega} \bigg\} t \Bigg\}. \label{deltaihansen}
\end{align}
As for the secular perturbation we find
\begin{align}
\Delta I_{\text{sec}}(t) = & \; \frac{4 \sqrt{\pi} G t}{c^2} \sum_{\substack{l=1 \\ l=\text{odd}}}^{\infty} \frac{\xi_l}{\sqrt{2l+1}(l+1) a^{l+2} (1-e^2)} \nonumber \\
& \; \left\{ F_{l0 \frac{l+1}{2}} \left[ (l+1) X^{-1-l,0}_0 + \frac{e}{2} l X^{-1-l,1}_0 + \frac{e}{2} (l+2) X^{-1-l,-1}_0 \right] \right. \nonumber \\
& \left. + F_{l0 \frac{l-1}{2}} \left[ (l+1) X^{-1-l,0}_0 + \frac{e}{2} (l+2) X^{-1-l,1}_0 + \frac{e}{2} l X^{-1-l,-1}_0 \right] \right\} \nonumber \\
= & \; \frac{4 \sqrt{\pi} G t}{c^2} \sum_{l=1}^{\infty} \frac{\xi_l}{\sqrt{2l+1}(l+1) a^{l+2} (1-e^2)} \nonumber \\
& \; \left( F_{l0 \frac{l+1}{2}} + F_{l0 \frac{l-1}{2}} \right) \Big( l+1 \Big) \left( X^{-1-l,0}_0 + e X^{-1-l,1}_0 \right) \nonumber \\
= & \; 0,
\end{align}
which vanishes again because $F_{l0 \frac{l+1}{2}} + F_{l0 \frac{l-1}{2}} = 0 $ for odd $l$.
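The cancellation $F_{l0 \frac{l+1}{2}} + F_{l0 \frac{l-1}{2}} = 0$ for odd $l$ can be checked by computing the inclination functions directly from their defining Fourier projection of $P_l(\sin(u)\sin(I))$; the inclination value below is arbitrary.

```python
import numpy as np
from numpy.polynomial import legendre as L

def F_l0b(l, b, I, N=4096):
    """Complex inclination function from the Fourier projection of
    P_l(sin(u) sin(I)) onto exp(i (l - 2b) u)."""
    u = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
    c = np.zeros(l + 1)
    c[l] = 1.0
    Pl = L.legval(np.sin(u) * np.sin(I), c)
    return np.mean(Pl * np.exp(-1j * (l - 2 * b) * u))

I = 0.9   # arbitrary test inclination in radians
for l in (1, 3, 5):
    assert abs(F_l0b(l, (l + 1) // 2, I) + F_l0b(l, (l - 1) // 2, I)) < 1e-12
```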
\subsection{Argument of the ascending node}
\label{subsecDiscus:3}
Fortunately, the above argument can be adapted to the calculation of $\Delta \Omega$ with some minor changes, since the differential equations for $\Omega$ and $I$ differ only by the appearance of $\sin(u)$ rather than $\cos(u)$ and by an additional $\sin(I)$ in the denominator, which has no influence on the calculation at all.
So the result for $\Delta \Omega$ reads
\begin{align}
\Delta \Omega(t) = \; & \sum_{l=1}^{\infty} \frac{-i4 G \sqrt{\pi} \xi_l}{\sin(I) \sqrt{2l+1}(l+1)a^{l+2} (1-e^2) c^2} \sum_{b=0}^l F_{l0b} \text{e}^{i \omega(l-2b)} \nonumber \\
& \Bigg\{ \sum_{\substack{s=-\infty \\ s \neq 0}}^{\infty} \bigg\{ \left[ (l+1) X^{-1-l,l-2b+1}_s + \frac{e}{2} (2l-2b+1) X^{-1-l,l-2b+2}_s \right. \nonumber \\
& \hspace{0.5cm} + \left. \frac{e}{2} (2b+1) X^{-1-l,l-2b}_s \right] \text{e}^{i \omega} - \Big[ (l+1) X^{-1-l,l-2b-1}_s \nonumber \\
& \hspace{0.5cm} + \left. \frac{e}{2} (2l-2b+1) X^{-1-l,l-2b}_s +\frac{e}{2} (2b+1) X^{-1-l,l-2b-2}_s \right] \text{e}^{-i \omega} \bigg\} \frac{\text{e}^{isM}}{i n s} \nonumber \\
& \hspace{0.5cm} + \bigg\{ \left[ (l+1) X^{-1-l,l-2b+1}_0 + \frac{e}{2} (2l-2b+1) X^{-1-l,l-2b+2}_0 \right. \nonumber \\
& \hspace{0.5cm} + \left. \frac{e}{2} (2b+1) X^{-1-l,l-2b}_0 \right] \text{e}^{i \omega} - \Big[ (l+1) X^{-1-l,l-2b-1}_0 \nonumber \\
& \hspace{0.5cm} + \frac{e}{2} (2l-2b+1) X^{-1-l,l-2b}_0 +\frac{e}{2} (2b+1) X^{-1-l,l-2b-2}_0 \Big] \text{e}^{-i \omega} \bigg\} t \Bigg\}. \label{deltaOmega}
\end{align}
This has noticeable consequences in particular for the secular perturbations which read
\begin{align}
\Delta \Omega_{\text{sec}}(t) = & \; \frac{8 \sqrt{\pi} G t}{ c^2 \sin(I)} \sum_{\substack{l=1 \\ l=\text{odd}}}^{\infty} \frac{\xi_l}{\sqrt{2l+1}a^{l+2}} \Im \left( F_{l0\frac{l+1}{2}} \right) X^{-2-l, 0}_0 \nonumber \\
= & \; \frac{8 \sqrt{\pi} G t}{c^2 \sin(I)} \sum_{\substack{l=1 \\ l=\text{odd}}}^{\infty} \frac{\xi_{l}}{\sqrt{2l+1} a^{l+2} \left(1-e^2 \right)^{l+\frac{1}{2}}} \Im \left( F_{l0\frac{l+1}{2}}\right) \nonumber \\
& \sum_{n=0}^{\frac{l-1}{2}} \left( \frac{e}{2} \right)^{2n} \binom{2n}{n} \binom{l}{2n} \label{Omegasnu}
\end{align}
where we used
\begin{equation}
(1-e^2)X^{n,m}_s = X^{n+1,m}_s + \frac{e}{2} \left( X^{n+1,m+1}_s + X^{n+1,m-1}_s \right)
\end{equation}
with $s=m=0$, $n=-l-2$ and
\begin{equation}
X^{-(n+1),m}_0 = \left( \frac{e}{2} \right)^m \frac{1}{(1-e^2)^{n-\frac{1}{2}}} \sum_{b=0}^{\left[ \frac{n-m-1}{2}\right]} \left( \frac{e}{2} \right)^{2b} \binom{2b+m}{b} \binom{n-1}{2b+m}
\end{equation}
for $n \in \mathbb{N}$, $m \in \mathbb{Z}$.
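This closed form can be tested against a direct quadrature of the defining integral; the sketch below compares the two for a few $(n,m)$ pairs with $m \geq 0$ at an arbitrary eccentricity.

```python
import math
import numpy as np

def hansen_X0_num(n, m, e, N=200000):
    # X^{n,m}_0 by direct quadrature over the true anomaly.
    nu = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
    ra = (1.0 - e**2) / (1.0 + e * np.cos(nu))
    return np.mean(ra**(n + 2) * np.cos(m * nu)) / np.sqrt(1.0 - e**2)

def hansen_X0_closed(n, m, e):
    # Closed form for X^{-(n+1),m}_0 with n natural and m a non-negative integer.
    top = (n - m - 1) // 2
    s = sum((e / 2.0)**(2 * b) * math.comb(2 * b + m, b) * math.comb(n - 1, 2 * b + m)
            for b in range(top + 1))
    return (e / 2.0)**m * s / (1.0 - e**2)**(n - 0.5)

e = 0.25   # arbitrary test eccentricity
for n, m in [(1, 0), (2, 0), (3, 1), (4, 0), (4, 2)]:
    assert abs(hansen_X0_num(-(n + 1), m, e) - hansen_X0_closed(n, m, e)) < 1e-8
```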
If only the spin-dipole is kept we recover the well-known result of Lense and Thirring (\citet{lense:1984}):
\begin{equation}
\Delta \Omega_{\text{sec}}^{l=1} = \frac{8 \sqrt{\pi} Gt}{c^2 \sin(I)} \frac{\xi_1}{\sqrt{3}a^3(1-e^2)^{\frac{3}{2}}} \Im \left( F_{101} \right) = \frac{2G}{c^2 a^3 n (1-e^2)^{\frac{3}{2}}} \sqrt{\frac{4\pi}{3}} \xi_1 nt.
\end{equation}
Inserting $\xi_1 = \sqrt{3/4\pi}J$ the Lense-Thirring expression is obtained.
Because of its relevance we want to apply this formula to the LAGEOS 2 satellite. We will treat Earth as a homogeneous body. Since the quadrupole term does not contribute to the secular perturbations at all, the dipole needs to be compared to the octupole term. Evaluating formula \eqref{Omegasnu} gives
\begin{align}
\dot{\Omega}_{\text{sec}}^{l=3} = & \; 3 \sqrt{\frac{\pi}{7}} \frac{G \xi_3 (1+\frac{3}{2}e^2)}{c^2 a^5 (1-e^2)^{\frac{7}{2}}} \nonumber \\ & \left( 4 \cos^5 \left( \frac{I}{2} \right) \sin \left( \frac{I}{2} \right) - 12 \cos^3 \left( \frac{I}{2} \right) \sin^3 \left( \frac{I}{2} \right) + 4 \cos \left( \frac{I}{2} \right) \sin^5 \left( \frac{I}{2} \right) \right)
\end{align}
for the secular drift rate caused by the spin octupole. For the model of a homogeneous Earth one finds
\begin{equation}
\dot{\Omega}^{l=3}_{\text{sec}} = 0.02 \frac{\text{mas}}{\text{yr}}.
\end{equation}
Since our model of a homogeneous Earth overestimates the spin octupole moment, the actual drift rate is even smaller than the calculated number. If one compares this value to the spin-dipole term (\citet{ciufolini:2004}),
\begin{equation}
\dot{\Omega}^{l=1}_{\text{sec}} = 31.5 \frac{\text{mas}}{\text{yr}},
\end{equation}
it becomes obvious that the contribution of the spin octupole and all higher spin multipole moments can be neglected for Earth's satellite orbits at present.
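The quoted dipole rate is easy to reproduce at the order-of-magnitude level. The sketch below evaluates the spin-dipole drift $2GJ/[c^2 a^3 (1-e^2)^{3/2}]$ obtained above, with rough literature values for Earth's spin angular momentum and the LAGEOS 2 orbit; these constants are assumptions for illustration, not numbers taken from the paper.

```python
import math

# Rough literature values, inserted only for illustration (assumptions):
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
J = 5.86e33          # Earth's spin angular momentum, kg m^2 s^-1
a = 1.2163e7         # LAGEOS 2 semi-major axis, m
e = 0.0135           # LAGEOS 2 eccentricity

# Secular Lense-Thirring node drift (spin dipole only), rad/s:
Omega_dot = 2.0 * G * J / (c**2 * a**3 * (1.0 - e**2)**1.5)

# Convert rad/s -> milliarcseconds per year:
mas_per_yr = Omega_dot * 3.156e7 * math.degrees(1.0) * 3600.0 * 1000.0
```

With these inputs the result comes out at roughly 31 mas/yr, in line with the dipole rate quoted above.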
\subsection{Argument of periapsis}
\label{subsecDiscus:4}
Most of the work for this orbital element is already done, because the result from \ref{subsecDiscus:3} can be used. However, an additional term $\dot{\omega}_{\text{add}}$ appears in the differential equation for $\omega$ and needs to be studied. So we calculate
\begin{align}
\dot{\omega}_{\text{add}} := & \; \frac{\sqrt{1-e^2}}{nae}
\Bigg\{ -S \cos(\nu)+T \sin(\nu)\left[1+\frac{r}{p}\right] \Bigg\} \nonumber \\
= & \; - \frac{C}{\mu e} \left[ \cos(\nu) + \frac{r}{p}e \sin^2(\nu) + \left( \frac{r}{p} \right)^2 e \sin^2(\nu)\right]S \nonumber \\
= & \; - \frac{8 \sqrt{\pi} p G \cos(I)}{e c^2} \sum_{l=1}^{\infty} \frac{\xi_l}{\sqrt{2l+1}(l+1)r^{l+3}} \nonumber \\
& \; \left( \cos(\nu) + \frac{r}{p}e \sin^2(\nu) + \left( \frac{r}{p} \right)^2 e \sin^2(\nu) \right) \sum_{k=0}^{l-1} \sum_{b=0}^k N_{lk} F_{k0b}(I) \text{e}^{iu(k-2b)}
\end{align}
\begin{align}
= & \; - \frac{8 \sqrt{\pi} p G \cos(I)}{e c^2} \sum_{l=1}^{\infty} \frac{\xi_l}{\sqrt{2l+1}(l+1)a^{l+3}}
\sum_{k=0}^{l-1} \sum_{b=0}^k N_{lk} F_{k0b}(I) \text{e}^{i\omega(k-2b)} \nonumber \\
& \; \sum_{s=-\infty}^{\infty} \Bigg\{ \frac{1}{2} \left( X^{-l-3,k-2b+1}_s + X^{-l-3,k-2b-1}_s \right) \nonumber \\
& \hspace{0.5cm} + \frac{e}{2(1-e^2)} \left[ X^{-l-2,k-2b}_s - \frac{1}{2} \left( X^{-l-2,k-2b+2}_s + X^{-l-2,k-2b-2}_s \right) \right] \nonumber \\
& \hspace{0.5cm} + \frac{e}{2(1-e^2)^2} \left[ X^{-l-1,k-2b}_s - \frac{1}{2} \left( X^{-l-1,k-2b+2}_s + X^{-l-1,k-2b-2}_s \right) \right] \Bigg\} \text{e}^{isM}. \label{dotomega_2}
\end{align}
We integrate \eqref{dotomega_2} and find
\begin{align}
\Delta \omega_{\text{add}}(t) & = - \sum_{l=1}^{\infty} \frac{8 \sqrt{\pi} G p \cos(I) \xi_l}{e c^2 \sqrt{2l+1}(l+1)a^{l+3}}
\sum_{k=0}^{l-1} \sum_{b=0}^k N_{lk} F_{k0b}(I) \text{e}^{i\omega(k-2b)} \nonumber \\
\Bigg\{ & \sum_{\stackrel{s=-\infty}{s \neq 0}}^{\infty} \bigg\{ \frac{1}{2} \left( X^{-l-3,k-2b+1}_s + X^{-l-3,k-2b-1}_s \right) \nonumber \\
+ & \frac{e}{2(1-e^2)} \left[ X^{-l-2,k-2b}_s - \frac{1}{2} \left( X^{-l-2,k-2b+2}_s + X^{-l-2,k-2b-2}_s \right) \right] \nonumber \\
+ & \frac{e}{2(1-e^2)^2} \left[ X^{-l-1,k-2b}_s - \frac{1}{2} \left( X^{-l-1,k-2b+2}_s + X^{-l-1,k-2b-2}_s \right) \right] \bigg\} \frac{\text{e}^{isM} }{ins} \nonumber \\
+ & \bigg\{ \frac{1}{2} \left( X^{-l-3,k-2b+1}_0 + X^{-l-3,k-2b-1}_0 \right) \nonumber \\
+ & \frac{e}{2(1-e^2)} \left[ X^{-l-2,k-2b}_0 - \frac{1}{2} \left( X^{-l-2,k-2b+2}_0 + X^{-l-2,k-2b-2}_0 \right) \right] \nonumber \\
+ &\frac{e}{2(1-e^2)^2} \left[ X^{-l-1,k-2b}_0 - \frac{1}{2} \left( X^{-l-1,k-2b+2}_0 + X^{-l-1,k-2b-2}_0 \right) \right] \bigg\}t \Bigg\}. \label{deltaomega_2}
\end{align}
So using the result from \ref{subsecDiscus:3} we find for the overall perturbation
\[
\Delta \omega (t) = -\cos(I) \Delta \Omega(t) + \Delta \omega_{\text{add}}(t)
\]
with $\Delta \Omega(t)$ from \eqref{deltaOmega} and $\Delta \omega_{\text{add}}(t)$ from \eqref{deltaomega_2}. The secular perturbations are given by
\begin{align}
\Delta \omega_{\text{sec}}(t) = \; & -\cos{I} \Delta \Omega_{\text{sec}}(t) - \sum_{l=1}^{\infty} \frac{8 \sqrt{\pi} G p \cos(I) t \xi_l}{e c^2 \sqrt{2l+1}(l+1)a^{l+3}}
\sum_{\substack{k=0 \\ k=\text{even}}}^{l-1} N_{lk} F_{k0\frac{k}{2}} \nonumber \\
& \Bigg\{ X^{-l-3,1}_0 + \frac{e\left( X^{-l-2,0}_0 - X^{-l-2,2}_0 \right)}{2(1-e^2)} + \frac{e\left( X^{-l-1,0}_0 - X^{-l-1,2}_0 \right)}{2(1-e^2)^2} \Bigg\}.
\end{align}
Again, for the spin-dipole the result agrees with the classical expression of Lense and Thirring:
\begin{align}
\Delta \omega_{\text{sec}}^{l=1} = & \; -\cos(I) \Delta \Omega_{\text{sec}}^{l=1} - \frac{8 \sqrt{\pi} G p \cos(I)}{e c^2} \frac{\xi_1}{\sqrt{3}a^{4}} \frac{e}{(1-e^2)^{\frac{5}{2}}} t \nonumber \\
= & \; -3 \cos(I) \Delta \Omega_{\text{sec}}^{l=1}.
\end{align}
\section{Summary}
\label{sec:4}
The aim of the paper was to investigate the influence of a central gravity field with a mass monopole and arbitrary spin multipole moments on satellite orbits in a stationary axisymmetric setting. In order to simplify the form of the multipole moments a coordinate system was chosen in which the spin vector points in the z-direction. We found that $\dot{a}=0$ holds in general (and not just in first-order perturbation theory) and that $e$ and $I$ experience perturbations which have no secular contributions. For odd numbers $l$ there are secular perturbations of $\Omega$ and $\omega$ which, for the spin-dipole $l=1$, yield the well-known results of Lense and Thirring. In the case of the LAGEOS 2 satellite we calculated the additional secular drift due to Earth's spin octupole and found it to be negligibly small. Further physical implications and orders of magnitude will be discussed elsewhere.
\section*{Conflict of Interest}
The authors declare that they have no conflict of interest.
\bibliographystyle{apalike}
\section{Introduction}
The average multiplicities of particles produced in high energy
collisions are very useful tools to investigate the process of
hadron production by virtue of some peculiar features. Unlike
momentum spectra, hadron abundances (and correlations) are
Lorentz-invariant quantities; hence they do not depend on
complicated collective motions possibly present in the system
and may be calculated in the local comoving frames. In elementary
collisions, such as \ee, they are a direct and unique probe of
the hadronization process since they are independent of the
perturbative parton dynamics which is inherited by hadrons mainly
in their momentum spectrum. Therefore, it is very important to study
hadron abundances in order to reveal the basic mechanisms governing
hadron production in all kinds of collisions.\\
In the following sections we will sum up briefly the
statistical-thermodynamical approach to the problem of hadron production
and we will show its stunning capability of fitting all existing hadron
average multiplicities data in \ee, pp and \ppb collisions by using
only three free parameters.
A preliminary analysis of hadron abundances measured in heavy
ion collisions in full phase space within the same model will
be discussed in Sect. 4.
\section{The model}
The thermodynamical model of hadron production in \ee, pp, \ppb has
been described in detail elsewhere \cite{beca,erix,behe}; in this
section it is briefly summarized.\\
The basic assumption of the model is the formation of an arbitrary
number of hadron gas fireballs moving away from the primary interaction
region each with its own collective momentum. The parameters describing
the $i^{th}$ hadron gas fireball at thermal and chemical equilibrium
are the temperature $T_i$ and the volume $V_i$ in {\em its rest frame}
as well as its quantum numbers electric charge $Q$, baryon number $N$,
strangeness $S$, charm $C$ and beauty $B$. The partition function of
this system is calculated in the framework of the canonical formalism
of statistical mechanics, namely by summing only over the multi-hadronic
states having the same quantum numbers of the fireball. Therefore,
if $\QGzi=(Q,N,S,C,B)$ is the vector of fireballs quantum numbers
and $\QG$ is the vector of quantum numbers of a particular multi-hadronic
state, the partition function of the fireball reads:
\begin{equation}
Z(\QGzi) = \sum_{\rm{states}} \E^{-E/T_i} \delta_{\QG,\QGzi} \; .
\end{equation}
A parameter $\gamma_s$ accounting for a possibly incomplete strangeness
chemical equilibrium is introduced in the partition function by multiplying
the Boltzmann factors $\E^{-\epsilon_j/T}$ associated with the $j^{th}$ hadron
by $\gamma_s^{s}$, where $s$ is the number of its valence strange quarks
and anti-quarks.\\
The average multiplicity of any hadron species in the $i^{th}$ fireball
can be derived from the partition function (1). As this quantity
depends on the quantum vector $\QGzi$, the overall average multiplicity
depends on the number of fireballs $N$ and on their quantum configuration
$\{\QG_1^0,\ldots,\QG_N^0\}$. In principle any configuration may occur
provided that $\sum_{i=1}^N \QGzi = \QGz$, where $\QGz$ is the quantum
vector fixed by the initial state. However, it can be shown \cite{behe}
that the overall average multiplicity of any hadron indeed depends only on the
global quantities $\QGz$ and $V=\sum_{i=1}^N V_i$ (namely the sum of
all fireball volumes in their rest frames), provided that the temperatures
and the strangeness suppression factors $\gamma_s$ are the same for all
fireballs and the probabilities $w(\QG_1^0,\ldots,\QG_N^0)$ of occurrence
of a given quantum configuration are chosen to be:
\begin{equation}
w(\QG_1^0,\ldots,\QG_N^0) = \frac{\delta_{\zum_i \QGzi,\QGz} \prod_{i=1}^N
Z_i(\QGzi)}{\sum_{\QG_1^0,\ldots,\QG_N^0} \!\!\! \delta_{\zum_i \QGzi,\QGz}
\prod_{i=1}^N Z_i(\QGzi)} .
\end{equation}
It can be proved that this choice corresponds to the minimal deviation of
the system from global (i.e. thermal, chemical and mechanical) equilibrium.
After making use of the probabilities (2) to average the hadron production over
all possible quantum configurations, the overall average multiplicity of
the $j^{th}$ hadron turns out to be:
\begin{eqnarray}
\langle\!\langle n_j \rangle\!\rangle &=& \frac{1}{(2\pi)^5}
\int \dint^5 \phi \,\, \E^{\,\I\, \QGz \cdot \phi} \exp
[ V \sum_j F_j(T,\gamma_s,\phi)]
\nonumber \\
&\times& \frac {(2J_j+1)\,V}{(2\pi)^3} \int
\frac {\dint^3 p}{\gamma_s^{-s_j}
\exp \,(\sqrt{p^2+m^2_j}/T+\I \qj \cdot \phi) \pm 1} \; ,
\end{eqnarray}
where the upper sign is for fermions, the lower for bosons, the sum in the
exponent runs over all hadron species and:
\begin{equation}
F_j(T,\gamma_s,\phi)= \frac{(2J_j+1)V}{(2\pi)^3} \int \dint^3 p \,\,
\log \, (1 \pm \gamma_s^{s_j} \E^{-\sqrt{p^2+m_j^2}/T -
\I \qj \cdot \phi})^{\pm 1} \; .
\end{equation}
Thus, under the previous assumptions, the hadron yields (3) depend only
on three unknown parameters $T$, $\gamma_s$ and $V$; the latter re-absorbs
the dependence on the number of fireballs. These unknown parameters have
to be determined by fitting the calculated multiplicities to the measured
ones at each center of mass energy.
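In the Boltzmann, large-volume limit, and ignoring the chemical factors of eq.~(3), the primary yield per unit volume reduces to a one-dimensional momentum integral. The following sketch evaluates it for a single pion charge state; the temperature of 160 MeV is chosen for illustration near the fitted values reported below, and the $\gamma_s$, $s$ arguments are a hypothetical extension for strange species.

```python
import numpy as np

hbarc = 197.327   # MeV fm

def primary_density(m, T, g=1, gamma_s=1.0, s=0):
    """Boltzmann-limit thermal yield per unit volume (fm^-3):
    n/V = gamma_s^s g/(2 pi^2) Int p^2 exp(-sqrt(p^2 + m^2)/T) dp, natural units."""
    p = np.linspace(0.0, 4000.0, 20000)   # MeV; the integrand is negligible beyond
    f = p**2 * np.exp(-np.sqrt(p**2 + m**2) / T)
    return gamma_s**s * g / (2.0 * np.pi**2) * np.sum(f) * (p[1] - p[0]) / hbarc**3

# One pion charge state at T = 160 MeV (illustrative temperature):
n_pi = primary_density(m=139.57, T=160.0)
```

This gives a few times $10^{-2}$ hadrons per fm$^3$ per species, which sets the scale of the primary yields before resonance decays.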
\section{Results in \ee, pp and \ppbf collisions}
In order to calculate hadron abundances to be compared with experimental
data, the primary yield of each hadron species calculated with eq.~(3)
is added to the contribution stemming from the decay of heavier hadrons,
which is calculated by using experimentally known decay modes and
branching ratios \cite{pdg,jet}. All light-flavored hadrons up to a
mass of 1.7 GeV and all heavy-flavored states inserted in the JETSET
tables \cite{jet} have been used as primary species. The effect of this
cut-off of hadron mass spectrum on final results has been shown to be negligible
\cite{beca,behe}.\\
The primary yield of resonances has been determined by convolving eq.~(3)
with a relativistic Breit-Wigner function within $2\Gamma$ of the central
mass value.\\
The measurements from different experiments have been averaged according to
a procedure described in ref. \cite{dean} taking into account {\it a posteriori}
disagreements and correlations.\\
Since the temperature is expected to be ${\cal O}(100)$ MeV, the thermal
production of heavy-flavored hadrons can be neglected, while the perturbative
production is significant only in \ee collisions, where c and b quarks are
created in the primary interaction and do not re-annihilate. In this case
the presence of one charmed (bottomed) hadron-anti-hadron pair is
required in a fraction of events $\sigma({\rm e}^+{\rm e}^- \rightarrow {\rm c} \overline
{\rm c} ( {\rm b} \overline {\rm b}))/\sigma({\rm e}^+{\rm e}^- \rightarrow
{\rm hadrons})$.\\
The fit is performed by minimizing the $\chi^2$:
\begin{equation}
\chi^2 = \sum_i \frac{(n_i[{\rm theo}] - n_i[{\rm expe}])^2}{\sigma_i^2}
\end{equation}
as a function of $T$, $V$ and $\gamma_s$. The errors $\sigma_i$ include
contributions from uncertainties on masses, widths and branching
ratios of various hadrons involved in the decay chain process; they have been
determined with an iterative fit procedure \cite{beca,behe}.
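The structure of the fit can be illustrated with a toy version of eq.~(5): the sketch below replaces the full canonical yield by its Boltzmann-limit stand-in, generates hypothetical "measurements" from known parameters, and recovers them by a coarse grid scan standing in for the actual minimizer. All species data and parameter values here are placeholders, not numbers from the paper.

```python
import numpy as np

hbarc = 197.327   # MeV fm

def model_yield(m, s, T, V, gamma_s):
    # Boltzmann-limit thermal yield, standing in for the full canonical eq. (3).
    p = np.linspace(0.0, 4000.0, 4000)
    f = p**2 * np.exp(-np.sqrt(p**2 + m**2) / T)
    return gamma_s**s * V / (2.0 * np.pi**2) * np.sum(f) * (p[1] - p[0]) / hbarc**3

# Hypothetical "measurements", generated from the model itself at
# (T, V, gamma_s) = (160 MeV, 125 fm^3, 0.7) with 5% errors, so the
# scan below should recover exactly these values.
species = [(139.57, 0), (493.68, 1), (938.27, 0)]   # (mass in MeV, strange quarks)
truth = (160.0, 125.0, 0.7)
data = [(m, s, model_yield(m, s, *truth), 0.05 * model_yield(m, s, *truth))
        for m, s in species]

def chi2(T, V, gamma_s):
    return sum(((model_yield(m, s, T, V, gamma_s) - n) / sig)**2
               for m, s, n, sig in data)

# Coarse grid scan over (T, V, gamma_s) in place of a proper minimizer.
best = min(((chi2(T, V, g), T, V, g)
            for T in np.arange(140.0, 201.0, 5.0)
            for V in np.arange(75.0, 201.0, 25.0)
            for g in np.arange(0.4, 1.01, 0.1)),
           key=lambda t: t[0])
```

In the real analysis the grid scan is replaced by a proper $\chi^2$ minimizer and the errors $\sigma_i$ include the hadron-parameter uncertainties described above.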
\begin{table*}[htb]
\caption[]{Values of fitted parameters. The parameter $V T^3$
has been used instead of $V$ in hadronic collisions because less
correlated to the temperature. The additional errors within brackets have
been estimated by excluding data points deviating the most from fitted
values and repeating the fit.}
\begin{center}
\begin{tabular}{| c || c | c | c | c |}
\hline
\multicolumn{5}{c}{\ee collisions} \\ \hline
$\sqrt s$ (GeV) & Temp. (MeV) & Volume (fm$^3$) & $\gamma_s$ & $\chi^2$/dof \\ \hline
29 & $163.6\pm3.6 $ & $26.7\pm4.1 $ & $0.724\pm0.045$ & 24.7/13 \\
35 & $165.2\pm4.4 $ & $24.9\pm4.7 $ & $0.788\pm0.045$ & 10.5/8 \\
44 & $169.6\pm9.5 $ & $23.2\pm8.7 $ & $0.730\pm0.060$ & 4.9/4 \\
91 & $160.3\pm1.7(3.3)$ & $50.0\pm3.9 $ & $0.673\pm0.020(0.028)$ & 70.1/22 \\ \hline
\multicolumn{5}{c}{pp collisions} \\ \hline
$\sqrt s$(GeV) & Temp. (MeV) & $V T^3$ & $\gamma_s$ & $\chi^2$/dof \\ \hline
19.4 & $190.8\pm27.4$ & $5.8\pm3.1$ & $0.463\pm0.037$ & 6.4/4 \\
23.6 & $194.4\pm17.3$ & $6.3\pm2.5$ & $0.460\pm0.067$ & 2.4/2 \\
26.0 & $159.0\pm9.5$ & $13.4\pm2.7$ & $0.570\pm0.030$ & 1.9/2 \\
27.4 & $169.0\pm2.1(3.4)$ & $11.0\pm0.69$ & $0.510\pm0.011(0.025)$ & 136.4/27 \\ \hline
\multicolumn{5}{c}{\ppb collisions} \\ \hline
$\sqrt s$(GeV) & Temp. (MeV) & $V T^3$ & $\gamma_s$ & $\chi^2$/dof \\ \hline
200 & $175.0\pm14.8$ & $24.3\pm7.9$ & $0.537\pm0.066$ & 0.70/2 \\
546 & $181.7\pm17.7$ & $28.5\pm10.4$ & $0.557\pm0.051$ & 3.78/1 \\
900 & $170.2\pm11.8$ & $43.2\pm11.8$ & $0.578\pm0.063$ & 1.8/2 \\
\hline
\end{tabular}
\end{center}
\end{table*}
The results of the fit are shown in table 1. The quoted numbers are the same
as in refs. \cite{erix,behe} except at $\sqrt s =91.2$ GeV where the
fit has been repeated with new LEP measurements \cite{newlep} (see fig. 1).\\
The fit quality is remarkably good at all center of mass energy points.
The most interesting result is undoubtedly the uniformity, within the
fit errors, of the freeze-out
temperature values, independent of the kind of reaction and of the center of mass energy.
The fact that $\gamma_s$ is always less than 1 demonstrates that strangeness
chemical equilibrium is not reached in any of the examined collisions.
Nevertheless, it is worth noticing that $\gamma_s$ is higher in \ee collisions
than in hadronic collisions at the same center of mass energy.\\
The use of the canonical formalism is essential since the system turns out
to be small enough to generate charged hadron ($\qj \ne 0$) suppression with
respect to neutral ones ($\qj =0$) even in an initially neutral system ($\QGz = 0$);
for a more detailed discussion see refs. \cite{erix,behe}.\\
All the fits have been performed by using as experimental input the measured
yields of light-flavored hadrons. Once the parameters of the model are
determined it is possible to predict the heavy-flavored hadrons abundances
provided that the production rate of c and b quark pairs is known.
In table 2 predictions for $\sqrt s = 91.2 $ GeV are compared to actual
LEP measurements \cite{hf} averaged according to the procedure
mentioned above; the agreement is indeed very good.
\begin{table*}[htb]
\caption[]{Predictions of heavy flavored hadron abundances at $\sqrt s =91.2$
GeV obtained by using $T$, $V$ and $\gamma_s$ parameters quoted in table 1 and
$R_c =0.17$, $R_b =0.22$ according to LEP measurements \cite{rbb}. The B$_s^{**}$
prediction is affected by the interpretation of the observed peaks as four different
states or two different states (within brackets).}
\begin{center}
\begin{tabular}{| c || c | c | c |}
\hline
{\bf Hadron} & Prediction & Measured & Residual \\ \hline
D$^+$ & 0.0926 & 0.087$\pm$0.008 & -0.67 \\
D$^0$ & 0.233 & 0.227$\pm$0.012 & -0.50 \\
D$_s$ & 0.0579 & 0.066$\pm$0.010 & +0.81 \\
D$^{*+}$ & 0.108 & 0.0880$\pm$0.0054 & -3.7 \\
D$_s^+$/c-jet & 0.103 & 0.128$\pm$0.027 & +0.92 \\
D$_1$/c-jet & 0.0347 & 0.038$\pm$0.009 & +0.37 \\
D$^*_2$/c-jet & 0.0471 & 0.135$\pm$0.052 & +1.7 \\
D$_{s1}$/c-jet & 0.00536 & 0.016$\pm$0.0058 & +1.8 \\
B$^0$/b-jet & 0.412 & 0.384$\pm$0.026 & -1.1 \\
B$^*$/B & 0.692 & 0.747$\pm$0.067 & +0.82 \\
B$^*$/b-jet & 0.642 & 0.65 $\pm$0.06 & +0.13 \\
B$_s$/b-jet & 0.106 & 0.122$\pm$0.031 & +0.52 \\
B$^{**}_{u,d}$/b-jet & 0.206 & 0.26 $\pm$0.05 & +1.0 \\
B$^{**}$/B & 0.251 & 0.27 $\pm$0.06 & +0.32 \\
B$^{**}_s$/b-jet & 0.021(0.011) & 0.048$\pm$0.017 & +1.6 \\
B$^{**0}_s$/B$^+$ & 0.026(0.013) & 0.052$\pm$0.016 & +1.6 \\
$\Lambda_c^+$ & 0.0248 & 0.0395$\pm$0.0084 & +1.7 \\
b-baryon/b-jet & 0.0717 & 0.115 $\pm$0.040 & +1.1 \\
$(\Sigma_b+\Sigma^*_b)$/b-jet & 0.0404 & 0.048 $\pm$0.016 & +0.48 \\
$\Sigma_b/(\Sigma_b^*+\Sigma_b)$& 0.411 & 0.24 $\pm$0.12 & -1.4 \\
\hline
\end{tabular}
\end{center}
\end{table*}
\section{Thermal fits in heavy ion collisions}
The model described in Sect. 2 may be used to fit hadron abundances
measured in heavy ion collisions, provided that the same assumptions still
hold. Comparisons of thermal calculations with experimental data have been
made recently by several authors within a grand-canonical, rather than canonical,
approach and by using multiplicities measured either in a restricted rapidity
range or in full phase space \cite{soll,munz,satz,spiel}.\\
In principle the canonical formalism is the only correct one in that it
ensures the exact conservation of initial quantum numbers. However, if
the volume $V$ is very large, it can be shown (see ref. \cite{behe}) that
the formula (3) giving the average primary $j^{th}$ hadron multiplicity
in the canonical formalism reduces to:
\begin{eqnarray}
\langle\!\langle n_j \rangle\!\rangle &=& (2J_j+1) \, \frac{V}{(2\pi)^3} \,
\sum_{n=1}^{\infty} (\pm 1)^{n+1} \gamma_s^{ns_j} \nonumber \\
&& \times \int \dint^3 p \,\, \E^{-n \sqrt{p^2+m_j^2}/T} \,\,
\E^{n \QG {\sf{A}}^{-1} \qj/2} \,\, \E^{-n^2 \qj {\sf{A}}^{-1} \qj/4} \; ,
\end{eqnarray}
by using a saddle-point approximation of the $\phi$-integrals in eq.~(3).
${\sf{A}}$ is an $N\times N$ matrix, where $N$ is the dimension of the quantum
vectors $\QG,\qj$, whose elements are proportional to $V$.
Hence, in the limit $V \rightarrow \infty$, the second exponential factor
in the above equation goes to 1, as the $\qj$ terms (i.e. the hadron
quantum numbers) are finite. On the other hand, the first exponential factor
can be written $\exp[n \muv \cdot \qj]$ where $\muv$ is a set of $N$ traditional
chemical potentials; the grand-canonical formalism is recovered in the large volume
limit. In heavy ion collisions one expects the canonical factor
$\exp[-n^2 \qj {\sf{A}}^{-1} \qj/4]$ to be a small correction to the grand-canonical
formulae, since the particle multiplicities, and hence the volume, are
very large compared to pp or \ee collisions.\\
We fitted hadron abundances measured in SS \cite{ss} and SAg \cite{sag}
collisions in full phase space by using four free parameters: $T$, $V$,
$\gamma_s$ and $\mu_b$, the baryochemical potential. The strangeness and
electric chemical potentials $\mu_s$ and $\mu_q$ have been determined from
the constraints of strangeness neutrality and conservation of the
initial electric charge to baryon number ratio:
\begin{eqnarray}
&& \sum_j S_j \langle\!\langle n_j \rangle\!\rangle = 0 \nonumber \\
&& \sum_j Q_j \langle\!\langle n_j \rangle\!\rangle = \frac{Z}{A}
\sum_j N_j \langle\!\langle n_j \rangle\!\rangle \; .
\end{eqnarray}
The results of the fit are shown in table 3 while the comparison between
fitted and experimental average multiplicities are shown in table 4. Due
to the strong correlation between $T$ and $V$ we chose to fit the parameter
$VT^3 \exp[-0.7 \, {\rm GeV}/T]$ instead of $V$.
\begin{table*}[htb]
\caption[]{Values of fitted parameters in SS and SAg collisions. Also
quoted the calculated chemical potentials $\mu_s$ and $\mu_q$.}
\begin{center}
\begin{tabular}{| c || c | c |}
\hline
Parameter & SS & SAg \\ \hline
$T$ (MeV) & 182.1$\pm$9.0 & 180.0$\pm$3.2 \\
$VT^3\exp[-0.7 {\rm GeV}/T]$& 3.51$\pm$0.14 & 5.43$\pm$0.35 \\
$\gamma_s$ &0.732$\pm$0.037 & 0.830$\pm$0.061 \\
$\mu_b/T$ &1.243$\pm$0.071 & 1.323$\pm$0.069 \\
$\chi^2/$dof & 17.2/5 & 5.5/3 \\ \hline
$\mu_s/T$ & -0.332 & -0.364 \\
$\mu_q/T$ & -0.0222 & -0.00316 \\ \hline
\end{tabular}
\end{center}
\end{table*}
\begin{table*}[htb]
\caption[]{Comparison between fitted and measured multiplicities in SS and
SAg collisions.}
\begin{center}
\begin{tabular}{| c || c | c | c |}
\hline
{\bf Particles SS} & Fitted & Measured & Residual \\ \hline
Baryons-Antibaryons & 54.57 & 54$\pm$3 & -0.19 \\
$h^-$ & 93.41 & 98$\pm$3 & +1.53 \\
K$^+$ & 12.61 & 12.5$\pm$0.4 & -0.28 \\
K$^-$ & 7.456 & 6.9$\pm$0.4 & -1.39 \\
K$^0_s$ & 9.834 & 10.5$\pm$1.7 & +0.39 \\
$\Lambda$ & 7.798 & 9.4$\pm$1.0 & +1.60 \\
$\bar \Lambda$ & 1.425 & 2.2$\pm$0.4 & +1.94 \\
p - $\bar{\rm p}$ & 22.59 & 21.2$\pm$1.3 & -1.07 \\
$\bar{\rm p}$ & 2.094 & 1.15$\pm$0.4 & -2.36 \\ \hline \hline
{\bf Particles SAg} & Fitted & Measured & Residual \\ \hline
Baryons-Antibaryons & 92.02 & 90$\pm$9 & -0.22 \\
$h^-$ & 152.04 & 160$\pm$8 & +1.00 \\
K$^0_s$ & 17.49 & 15.5$\pm$1.5 & -1.33 \\
$\Lambda$ & 14.39 & 15.2$\pm$1.2 & +0.68 \\
$\bar \Lambda$ & 2.440 & 2.6$\pm$0.3 & +0.53 \\
p - $\bar{\rm p}$ & 36.76 & 34$\pm$4 & -0.68 \\
$\bar{\rm p}$ & 3.043 & 2.0$\pm$0.8 & -1.31 \\ \hline
\end{tabular}
\end{center}
\end{table*}
It should be mentioned that
these results have been obtained by using only the experimental errors
without taking into account the uncertainties arising from hadron parameters
like masses, widths and branching ratios. \\
The resulting elements of the ${\sf{A}}$ matrix range between -0.02 and
0.06 in SS collisions and between -0.012 and 0.039 in SAg, confirming the
proximity to the grand-canonical regime.\\
The fitted temperature is compatible with that found in \ee, pp, and \ppb
collisions and the quality of the fit is good as well.
Strangeness chemical
equilibrium is not reached, as demonstrated by the $\gamma_s$ values $<1$,
although there is a clear increase with respect to pp and \ppb collisions.
Our results differ from those obtained in ref. \cite{soll} mainly because
of the larger number of available data points and the use of updated hadron
parameters in the decay chain.
A new fit performed by one of the authors of ref.
\cite{soll} shows a clear consistency with our results \cite{soll2}.
\section{Conclusions}
The analysis of hadron abundances in \ee, pp and \ppb collisions performed
in a suitable canonical formalism is in very good agreement with the hypothesis
of {\em local} thermal and chemical equilibrium. The most interesting result
of the thermal fits to the experimental data is the constant value of the freeze-out
temperature in all three kinds of collisions, independently of the center of mass
energy. This fact indicates that the transition from quarks and gluons to hadrons
occurs in a purely statistical fashion at critical values of pre-hadronic matter
parameters (such as energy density or pressure) corresponding to a (partially)
equilibrated hadron gas at $T_c \simeq 170$ MeV. Furthermore, evidence is found
for an incomplete strangeness phase space saturation.\\
The preliminary analysis of hadron abundances in full phase space in
SS and SAg heavy ion collisions resulted in a good agreement with the data
as well and a temperature value consistent with that found in elementary
collisions.\\
The strangeness enhancement going from pp to heavy ion collisions is explained
by two different effects: the larger size of the system reduces the
suppression due to strangeness conservation (canonical suppression), whilst
the increase of $\gamma_s$ further raises the yield of particles containing
strange quarks.
\section*{Acknowledgements}
I wish to express my gratitude to M. Gazdzicki, who provided me with the
heavy ion collision data, and for many illuminating discussions
about their analysis. I warmly thank J. Cleymans, U. Heinz, H. Satz,
and J. Sollfrank for useful and stimulating discussions. I would like to
thank all the participants and the organizers of the conference for having provided
a favorable and friendly work atmosphere and for having chosen a wonderful
environment.
\section{Introduction}\label{Intro}
Cosmic reionization is one of the most important but poorly understood
epochs in the history of the Universe. As the first stars form in the
earliest non-linear structures, they illuminate the ambient intergalactic
medium (IGM), create HII regions around them, and start the reionization
process of hydrogen. As the sources become brighter and more numerous,
HII regions grow in number and size, then merge with each other
and eventually percolate throughout the IGM.
Various observations have put constraints on the reionization process.
Based on an instantaneous reionization model, the temperature and
polarization data of the
cosmic microwave background (CMB) constrain the redshift of reionization
to be $z_{\rm reion}= 11.1\pm 1.1$ ($1\sigma$, \citealt{2013arXiv1303.5076P}),
while
the absence of
Gunn-Peterson troughs \citep{1965ApJ...142.1633G} in high redshift quasar (QSO)
absorption spectra suggest that the reionization of hydrogen was very
nearly complete by $z \approx 6$ (e.g. \citealt{2006ARA&A..44..415F}).
Several deep extra-galactic surveys have found more than 200 galaxies
at $z \sim 7-8$, but these are still the tip of the iceberg, i.e. the most
luminous of the galaxy population at those redshifts (e.g. \citealt{2011ApJ...737...90B,
2012ApJ...759..135O,2012arXiv1212.5222M,2012ApJ...744..179S,2013MNRAS.429..150L,
2012ApJ...760..108B,2012ApJ...758...93F}).
Recently, measurements of the kinetic Sunyaev-Zel'dovich
effect with the South Pole Telescope have been used to put limits on the epoch
and duration of the reionization \citep{2012MNRAS.422.1403M,2012ApJ...756...65Z,2012arXiv1211.2832B},
though the obtained limits depend on the detailed
physics of reionization \citep{2012ApJ...756...65Z,2013arXiv1301.3607P}.
The most promising probe of this evolutionary stage is the 21cm transition
of neutral hydrogen (see \citealt{2006PhR...433..181F} for a review).
The EDGES\footnote{Experiment to Detect the Global EoR Signature,
see http://www.haystack.mit.edu/ast/arrays/Edges/} experiment has put
the first observational lower limit on the duration of the
epoch of reionization (EoR) of $\Delta z > 0.06$ \citep{2010Natur.468..796B},
and using the GMRT\footnote{The Giant Metrewave Radio Telescope,
see http://gmrt.ncra.tifr.res.in/}, \citet{2011MNRAS.413.1174P,2013arXiv1301.5906P}
put upper limits on the neutral hydrogen power spectrum.
The upcoming low frequency interferometers such as
LOFAR\footnote{The Low Frequency Array, see http://www.lofar.org/},
PAPER\footnote{The Precision Array for Probing the Epoch of Reionization,
see http://eor.berkeley.edu/}, MWA\footnote{The Murchison Widefield Array,
see http://www.mwatelescope.org/}, and 21CMA\footnote{The 21 Centimeter Array,
see http://21cma.bao.ac.cn/} may be able to detect signatures of reionization,
and the next generation instruments such as HERA\footnote{The Hydrogen Epoch of Reionization Array,
see http://reionization.org/} and
SKA\footnote{The Square Kilometre Array, see http://www.skatelescope.org/}
may be able to map out the reionization process in more detail, and
reveal the properties of the first luminous objects.
Interpreting the upcoming data from these instruments requires
detailed modeling of the reionization process.
Motivated by the results of numerical simulations,
\citet{2004ApJ...613....1F} developed a ``bubble model'' for the
growth of HII regions during the early reionization era.
In this model, at a given moment during the early stage of
reionization, a region is assumed to be ionized if the total number of
ionizing photons produced within it exceeds the average number
required to ionize all the hydrogen in the region;
otherwise it is assumed to be neutral,
though there could be smaller HII regions within it. At the
very beginning, the ionized regions are mostly the
surroundings of the just-formed first stars or galaxies,
but as the high density regions where first stars and galaxies formed are
strongly correlated, very soon these regions would
grow larger and merge to contain several nearby galaxies. The bubble
model treatment can deal with
the fact that a region can be ionized by neighboring sources
rather than only interior galaxies.
In the bubble model the number of star
forming halos and ionizing photons are calculated with the
extended Press-Schechter model \citep{1991ApJ...379..440B,1993MNRAS.262..627L}.
The criterion of ionization is equivalent to the condition that the
average density of the region exceeds a certain threshold value
(ionization barrier). The mass function of the HII region can then
be obtained from the excursion set model, i.e. by calculating
the probability of a random walk trajectory first up-crossing the barrier.
With a linear fit to the ionization barrier, \citet{2004ApJ...613....1F}
obtained the HII bubble mass function during the early stage of reionization
(see the next section for more details). This
analytical model matches simulation results reasonably
well \citep{2007ApJ...654...12Z}, and is much faster to compute than
the radiative transfer numerical simulations, so it
can be used to explore large parameter space.
It also provides an intuitive understanding of the physics of
the reionization process. Instead of the full analytical calculation,
one can also apply the same idea to make semi-numerical simulations
\citep{2007ApJ...654...12Z,2007ApJ...669..663M,
2009ApJ...703L.167A,2009MNRAS.394..960C, 2012arXiv1212.6099Z}.
In these simulations the
density field is generated by the usual N-body simulation or the first order perturbation theory,
the ionization field is then predicted with the same criteria as the analytical model.
The semi-numerical approach allows relatively
fast computation, while at the same
time providing three-dimensional visualization of the reionization process.
The bubble model also has certain limitations. As HII regions
form and grow, they begin to come into contact with each other, and
spherical ``bubbles'' are no longer a good
description of the HII regions. After percolation of the HII regions,
the photons
from more distant regions, i.e. the ionizing background,
become very important. Eventually the total volume fraction of the bubbles
predicted by the model would exceed one, and
slightly before this moment the bubble model breaks down.
Although the bubble model may still be successful in some average
sense after percolation, and \citet{2007ApJ...654...12Z} indeed obtained fairly good
agreement between the model-based semi-numerical simulation and radiative transfer
simulations even after ionized bubbles
overlap, it is necessary to construct a more accurate
model for the late stage of reionization, to account for the non-bubble topology and
the existence of an ionizing background.
One may consider using similar reasoning to construct an
analytical model for the remaining neutral regions after the percolation of
ionized regions. During this epoch, the high density of galaxies and minihalos
allows them to have a higher recombination rate and thus remain neutral.
Besides these compact neutral regions, there are also
large regions with
relatively low density, which remain neutral because fewer galaxies
formed within them. We shall call these neutral regions ``islands'', as they
remain above the rising flood of ionization for a while. This is in some sense similar to the
voids of the large scale structure: just as the extended Press-Schechter model
can predict the number of both halos and voids, we can also develop models of
the neutral islands. However, we do need to change the barrier to
take into account the background ionizing photons in order to
model the island evolution correctly.
On the observational aspect, the island distribution and its evolution
are important for the 21cm signal, which directly relates to the
neutral components in the Universe, and it would be relatively easier for the
upcoming instruments to probe the signal at the late reionization
stages, where the redshifted 21cm line has higher frequencies and weaker foregrounds.
The neutral islands may also contribute to the overall opacity of the IGM in
addition
to the Lyman-limit systems, and in turn affect the evolution of the UV background
and the detectability of high redshift galaxies (e.g. \citealt{2013MNRAS.429.1695B}).
In this paper, we aim to construct an analytical island model, which is
complementary to the bubble model. It applies to the neutral regions left
over after the ionized bubbles overlap with each other, when the neutral
islands are more isolated. Based on the excursion set formalism, we
identify the islands by finding the first-crossings of the random
walks {\it downward} the island barrier, which is deeper than the bubble barrier
because it takes into account the
background ionizing photons in addition to the photons produced by stars
inside the island region. We then use the excursion set model to calculate the
crossing probability at different mass scales, and derive the mass
distribution function of the islands.
However, inside the large neutral islands smaller ionized bubbles may also
form. We investigate this ``bubbles-in-island'' problem by
considering the conditional probability for the excursion trajectory
to first down-cross the island barrier, then up-cross
the original bubble barrier (without the contribution of the ionizing
background) at a smaller scale. It turns out that a large number of bubbles
may form inside the islands, such that a large fraction of the inside of some
``islands'' is ionized. However, we may set
a percolation threshold as an upper limit on the ``bubbles-in-island''
fraction, below which the islands are still relatively simple. We also try
to shed light on the shrinking process of the islands, and obtain a
coherent picture of the late stage of the epoch of reionization.
In the following, we first briefly review the excursion set theory and the bubble
model in \S\ref{reviewEST}, then we generalize it and develop the formalism
of ``island model'' in \S\ref{Model}, and we employ a
simple toy model to illustrate the calculation. An important aspect of the theory
is the treatment of the so-called ``bubbles-in-island'' problem, i.e. self-ionized
bubbles inside the neutral islands; we also discuss how to take this effect into account.
\S\ref{ion_back} presents our treatment of the ionizing background
taking into account the absorption from Lyman-limit systems.
With these tools in hand we study the reionization process in \S\ref{results},
where the consumption rate of background ionizing photons is assumed to be
proportional to the surface area of the island. The size distribution of the
islands is calculated for different redshifts.
We summarize our results and conclude in \S\ref{Discuss}.
Throughout this paper, we adopt the cosmological parameters from the
7-year {\it Wilkinson Microwave Anisotropy Probe} (WMAP7) measurements
combined with BAO and $H_0$ data: $\Omega_b = 0.0455$, $\Omega_c = 0.227$,
$\Omega_\Lambda = 0.728$,
$H_{\rm 0} = 70.2\ensuremath{\,{\rm km}}\ensuremath{\, {\rm s}^{-1}} \ensuremath{\,{\rm Mpc}}^{-1}$, $\sigma_{\rm 8} = 0.807$ and
$n_{\rm s} = 0.961$ \citep{2011ApJS..192...18K}, but the
results are not sensitive to these parameters.
\section{A Brief Review of the Excursion Set Theory and the Bubble Model}\label{reviewEST}
\subsection{The Excursion Set Model}
Our island model is based on the excursion set theory. Here we give a
brief review of the excursion set approach, especially its application
to the reionization process, i.e. the bubble model.
For a more comprehensive review
of the excursion set theory and its extensions and applications, we refer the
interested readers to \citet{2007IJMPD..16..763Z} and references therein.
In what follows, we consider the density contrast field evaluated
at some early time but extrapolated to the present
day using linear perturbation theory.
Consider a point $\mathbf{x}$ in space,
the density contrast $\delta(\mathbf{x})$ around
it depends on the smooth mass scale $M$ under consideration. The variance of
the density fluctuations on scale $M$, $S=\sigma^2(M)$, monotonically
decreases with increasing $M$ in our Universe, so we can
use $S$ to represent the scale $M$. Starting at $M = \infty$,
i.e. $S = 0$, we move to smaller and smaller
scales surrounding the point of interest, and compute the smoothed
density field as we go along. If we use a k-space top hat window function
to smooth the density field, then at each scale $k$
a set of independent Fourier modes
is added, and the trajectory of $\delta$ can be described by a random walk
in which each step is independent, forming random
trajectories on the $S-\delta$ plane. Each of these trajectories starts
from the origin of the $(S,\delta)$ plane, with the variance of all trajectories given
by $\langle \delta^2 (S) \rangle = S$. Two sample trajectories are
shown in Fig.~\ref{Fig.trace}. Typically, the trajectories jitter more and
deviate farther from $\delta=0$ at larger $S$.
\begin{figure}[t]
\centering{
\includegraphics[scale=0.4]{trace.eps}
\caption{Two random walk trajectories in the excursion
set theory. Here $S=\sigma^2(M)$ denotes the variance
of $\delta_{\rm M}$, which is the density fluctuation
smoothed on mass scale $M$. All trajectories originate from
$(S,\delta)=(0,0)$. The horizontal line represents a flat
barrier, motivated by spherical collapse.}
\label{Fig.trace}
}
\end{figure}
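Such trajectories are easy to generate numerically. The following sketch (an illustration, not the code used for the figure) builds random walks with independent Gaussian increments, as appropriate for a sharp k-space filter, and checks that the ensemble variance grows as $\langle \delta^2(S) \rangle = S$:

```python
import math
import random

def trajectory(S_max, dS, rng):
    """One random walk delta(S): independent Gaussian increments of
    variance dS per step, as produced by a sharp k-space filter."""
    S, delta, path = 0.0, 0.0, [(0.0, 0.0)]
    while S < S_max - 1e-12:
        delta += rng.gauss(0.0, math.sqrt(dS))
        S += dS
        path.append((S, delta))
    return path

rng = random.Random(42)  # fixed seed for reproducibility
walks = [trajectory(1.0, 0.01, rng) for _ in range(20000)]

# The ensemble variance of the endpoints should be close to S = 1
var = sum(w[-1][1] ** 2 for w in walks) / len(walks)
print(var)
```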
It is assumed that at redshift $z$ and on scale $M$,
regions with average density above certain threshold value
$\delta_c$ will collapse into halos, while regions with average density below the threshold
would remain uncollapsed. Galaxies form inside sufficiently massive
halos. In some models, $\delta_c$ is only a function of redshift; more generally
it is a function of both redshift and mass scale. The formation of a halo
corresponds to the trajectory up-crossing a barrier $\delta_c(M,z)$
in the $S-\delta$ plane. The excursion set theory was developed
to compute the probabilities for such crossing, and then give the mass
distribution of the corresponding halos.
An important issue which must be addressed is the ``cloud-in-cloud'' problem. For a
given central point, the critical threshold could be exceeded multiple times,
corresponding to possible halos on different mass scales. In the excursion set
theory, one determines the largest smoothing scale $M$ (smallest $S$),
at which a trajectory {\it first} up-crosses the halo barrier at $\delta_c$, and identify it as the
halo at that redshift, while smaller scale crossings are ignored. Physically, it is reasonable to think
that the smaller scale upcrossing corresponds to a small halo which formed earlier and merged into the
bigger halo.
The probability of the barrier crossing can be computed by
solving a diffusion equation with the appropriate boundary conditions, and the {\it first crossing}
probability can be calculated with an absorbing barrier. For a constant density barrier
and a starting point of $(\delta_0,S_0)$, the differential probability of first-crossing
the barrier $\delta_c$ at $S$, known as the ``first-crossing distribution'', can be written as:
\begin{equation}
f(S|\delta_0,S_0) {\rm d}S = \frac{\delta_c-\delta_0}{\sqrt{2\pi}(S-S_0)^{3/2}}\,
\exp \left[ - \, \frac{(\delta_c-\delta_0)^2}{2(S-S_0)}\right]\, {\rm d}S,
\end{equation}
and, averaging over the whole Universe by setting $S_0 = 0$ and $\delta_0 = 0$, the mass function of the virialized halos is
\begin{equation}\label{GeneralMF}
\frac{{\rm d}n}{{\rm d} \ln M} = \bar{\rho}_{\rm m,0} f(S) \left|\frac{{\rm d}S}{{\rm d}M}\right|.
\end{equation}
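The constant-barrier first-crossing distribution can be checked directly against random walks: integrating $f(S|0,0)$ from $0$ to $S$ gives the cumulative crossing probability ${\rm erfc}[\delta_c/\sqrt{2S}]$. The Monte Carlo sketch below (illustrative parameters; the finite step size introduces a small downward bias) reproduces it:

```python
import math
import random

def first_crossing_fraction(delta_c, S_stop, dS, n_walks, seed=1):
    """Fraction of random walks that have first up-crossed the flat
    barrier delta_c by 'time' S_stop (discrete steps of variance dS)."""
    rng = random.Random(seed)
    n_steps = int(round(S_stop / dS))
    crossed = 0
    for _ in range(n_walks):
        delta = 0.0
        for _ in range(n_steps):
            delta += rng.gauss(0.0, math.sqrt(dS))
            if delta >= delta_c:
                crossed += 1
                break
    return crossed / n_walks

delta_c, S_stop = 1.0, 1.0
mc = first_crossing_fraction(delta_c, S_stop, dS=0.002, n_walks=4000)
# Integral of f(S|0,0) from 0 to S_stop = erfc(delta_c / sqrt(2 S_stop))
exact = math.erfc(delta_c / math.sqrt(2.0 * S_stop))
print(mc, exact)
```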
Besides the halo mass function, the excursion set theory can also be used to model the halo
formation and growth \citep{1991ApJ...379..440B,1993MNRAS.262..627L}, and halo clustering
properties \citep{1996MNRAS.282..347M}. Apart from the virialized halos, it could be applied to
various structures in the Universe, such as the voids in the galaxy
distribution \citep{2004MNRAS.350..517S,2012MNRAS.420.1648P,2006MNRAS.366..467F,2007MNRAS.382..860D}
and the ionized bubbles during the
early stages of reionization \citep{2004ApJ...613....1F}. It has also been extended to the case of
moving barriers \citep{2002MNRAS.329...61S,2006ApJ...641..641Z}. Strictly speaking, the
probabilities given above are calculated for uncorrelated steps, which is correct for the
k-space tophat filter but not for the real space tophat filter. The excursion set model with
correlated steps has also been developed \citep{2008MNRAS.389..461P,2012MNRAS.420.1429P,
2012MNRAS.419..132P,2012MNRAS.423L.102M,2013arXiv1303.0337F,2013arXiv1306.0551M}, but below we will still use the uncorrelated model for its
simplicity.
\subsection{The Bubble Model}
In the excursion set model of ionized bubbles during reionization,
i.e. the ``bubble model'', a region is considered ionized if it could emit sufficient
ionizing photons to get all the hydrogen atoms in the region ionized \citep{2004ApJ...613....1F}.
Assuming that the number of the ionizing photons emitted is proportional to the total
collapse fraction of the region, the ionization condition can be written as
\begin{equation}\label{Eq.bubbleCriterion}
f_{\rm coll} \ge \xi^{-1},
\end{equation}
where
\begin{equation}
\xi = f_{\rm esc}\, f_\star\, N_{\rm \gamma/H}\, (1+\bar{n}_{\rm rec})^{-1}
\end{equation}
is an ionizing efficiency factor, in which $f_{\rm esc}$, $f_\star$, $N_{\rm \gamma/H}$,
and $\bar{n}_{\rm rec}$
are the escape fraction, star formation efficiency,
the number of ionizing photons emitted per H atom
in stars, and the average number of recombinations
per ionized hydrogen atom, respectively.
For a Gaussian density field, the collapse fraction of a
mass scale $M$ with the mean linear overdensity $\delta_{\rm M}$ at redshift $z$ can be
written as \citep{1991ApJ...379..440B,1993MNRAS.262..627L}:
\begin{equation}\label{Eq.fcoll}
f_{\rm coll}(\delta_{\rm M}; M,z) = {\rm erfc} \left[
\frac{\delta_c(z)-\delta_{\rm M}}{\sqrt{2[S_{\rm max}-S(M)]}}\right],
\end{equation}
where $S_{\rm max} = \sigma^2(M_{\rm min})$,
in which $M_{\rm min}$ is the minimum collapse scale, and $\delta_c(z)$
is the critical density for
collapse at redshift $z$ linearly extrapolated to the present time.
$M_{\rm min}$ is usually taken to be the mass corresponding to
a virial temperature of $10^4 \ensuremath{\, {\rm K}}$,
at which atomic hydrogen line cooling becomes efficient.
With this collapse fraction, the self-ionization constraint can be written as a barrier on
the density contrast \citep{2004ApJ...613....1F}:
\begin{equation}\label{Eq.bubbleBarrier}
\delta_{\rm M} > \delta_{\rm B}(M,z) \equiv \delta_c(z) - \sqrt{2[S_{\rm
max} - S(M)]} \, {\rm erfc}^{-1} \left(\xi^{-1} \right).
\end{equation}
By solving for the {\it first}-up-crossing distribution of random walks with respect
to this barrier, $f(S,z)$,
the bubble-in-bubble effect is automatically included, and
the size distribution of ionized bubbles can be obtained from Eq.(\ref{GeneralMF});
the average volume fraction of ionized regions can then be written as:
\begin{equation}
Q^{\rm B}_{\rm V} = \int {\rm d}M \,\frac{{\rm d}n}{{\rm d}M}\, V(M).
\end{equation}
In the linear approximation, $\delta_{\rm B}(M,z) = \delta_{\rm B,0} + \delta_{\rm B,1} S$,
with the intercept on the $S=0$ axis given by
\begin{equation}
\delta_{\rm B,0} \equiv
\delta_c(z) - \sqrt{2\, S_{\rm max}}\, {\rm erfc}^{-1} \left(\xi^{-1} \right),
\label{eq:b0}
\end{equation}
and the slope is
\begin{equation}
\delta_{\rm B,1} \equiv \left. \frac{\partial \delta_{\rm B}}{\partial S} \right|
_{S \rightarrow 0} = \frac{
{\rm erfc}^{-1} \left(\xi^{-1} \right)}{\sqrt{2S_{\rm max}}}.
\label{eq:b1}
\end{equation}
The number density of HII bubbles is then given by \citep{2004ApJ...613....1F}
\begin{equation}
M \frac{{\rm d}n}{{\rm d}M} = \frac{1}{\sqrt{2\,\pi}} \
\bar{\rho}_{\rm m,0} \ \left|\frac{{\rm d} S}{{\rm d} M} \right| \
\frac{\delta_{\rm B,0}}{S^{3/2}} \exp
\left[ - \frac{\delta_{\rm B}^2(M,z)}{2\, S} \right].
\end{equation}
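As a numerical illustration, the coefficients $\delta_{\rm B,0}$ and $\delta_{\rm B,1}$ can be checked against the full barrier. The inputs here are assumptions of this sketch, not the paper's actual choices: a fixed $S_{\rm max}$, the EdS scaling $\delta_c(z)=1.686\,(1+z)$, and an inverse complementary error function obtained by bisection.

```python
import math

def erfcinv(y):
    """Inverse of math.erfc by bisection (erfc is monotone decreasing)."""
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if math.erfc(mid) > y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Assumed toy inputs, not the paper's actual values:
XI = 40.0      # ionizing efficiency
S_MAX = 52.0   # variance at the minimum collapse scale

def delta_B(S, z):
    """Full bubble barrier delta_B(M, z), parametrized by S = sigma^2(M);
    EdS growth assumed, i.e. delta_c(z) = 1.686 * (1 + z)."""
    return 1.686 * (1.0 + z) - math.sqrt(2.0 * (S_MAX - S)) * erfcinv(1.0 / XI)

z = 9.0
b0 = 1.686 * (1.0 + z) - math.sqrt(2.0 * S_MAX) * erfcinv(1.0 / XI)  # intercept
b1 = erfcinv(1.0 / XI) / math.sqrt(2.0 * S_MAX)                      # slope
slope_fd = (delta_B(1e-4, z) - delta_B(0.0, z)) / 1e-4  # finite difference
print(b0, b1, slope_fd)
```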
According to the bubble model, at high redshifts the regions of high
overdensity were ionized earlier, because only in such regions
galaxy-harboring halos formed, producing sufficient number of ionizing
photons. In the excursion set theory, this is represented by those trajectories which
excurse over the high barrier $\delta_{\rm B}(S)$. As structures grow, the
barrier function $\delta_{\rm B}(S)$ lowers, thus regions of
relatively lower density become ionized. As the density
and size of bubbles increase, they begin to
overlap. As long as the topology of the bubbles remains mostly discrete, this
description is valid. However, at a certain point the intercept $\delta_{\rm B,0}$ drops
to 0, so that all trajectories, which start from the origin of
the $S-\delta$ plane, would have crossed the barrier,
and even regions of the average density of the Universe would have been ionized. In fact,
the bubble description of HII regions probably fails slightly earlier,
because when the ionized regions occupy a sizable fraction of the total volume, they become
connected, the topology becomes sponge-like, and it is no longer
possible to treat the ionized regions as individual bubbles.
\section{The Excursion Set Model of Neutral Islands}\label{Model}
\subsection{The general formalism}
The bubble model succeeds in describing the growth of HII regions
before their percolation.
As a natural generalization to the bubble model, we develop a model which
is appropriate for the late stage of reionization,
when the HII regions have overlapped with each other,
and the neutral regions are more isolated and embedded in the sea of
photon-ionized plasma and ionizing photons. According to the
bubble model, the regions with higher densities are ionized earlier, and by this
stage even the regions of average density have been ionized, so the remaining large scale
neutral regions (``islands'') are underdense regions. Of course, besides these large
neutral regions, there are also galaxies and minihalos, in which neutral hydrogen exists
because they have very high density and hence high recombination rates, which keep them from
being ionized. We shall not discuss these small, highly dense HI systems in this paper, their
number distribution can be predicted with the usual halo model formalism
(see \citealt{2002PhR...372....1C} for a review).
The neutral islands during the late era of reionization are more likely to be isolated than the ionized bubbles,
similar to the voids at lower redshifts.
In the island model, we assume that most of the Universe has been ionized,
but the reionization has not been completed.
The condition for a region to remain neutral is just the opposite of the ionization
condition, that is, the total number of ionizing photons
is less than the number required to ionize all hydrogen atoms in the region.
At this stage, however, it is also important to include the background ionizing
photons which are produced outside the region.
An island of mass scale $M$ at redshift $z$ has to
satisfy the following condition in order to remain neutral:
\begin{equation}\label{Eq.IslandCondition}
\xi f_{\rm coll}(\delta_{\rm M}; M,z)+ \frac{\Omega_m}{\Omega_b} \frac{N_{\rm back} m_{\rm H}
}
{M X_{\rm H} (1+\bar{n}_{\rm rec})} < 1,
\end{equation}
where $N_{\rm back}$ is the number of background ionizing photons that are
consumed by the island, and $X_{\rm H}$ is the mass fraction of the baryons in hydrogen.
The first term on the L.H.S. is due to self-ionization, while the second term is
due to the ionizing background. Note that in the usual convention of the bubble model,
the number of recombination factor $(1+\bar{n}_{\rm rec})^{-1}$ is absorbed in the $\xi$
parameter, and to be consistent with the literature we follow this convention here, but
we should keep in mind that if one changes $\bar{n}_{\rm rec}$, the adopted $\xi$ value
should be changed accordingly.
Using Eq.~(\ref{Eq.fcoll}),
the condition (\ref{Eq.IslandCondition}) can be rewritten as a
constraint on the overdensity of the region:
\begin{equation}\label{Eq.islandBarrier}
\delta_{\rm M} < \delta_{\rm I}(M,z) \equiv \delta_c(z) - \sqrt{2[S_{\rm
max} - S(M)]} \, {\rm erfc}^{-1} \left[K(M,z)\right],
\end{equation}
where
\begin{equation}\label{Eq.K}
K(M,z) = \xi^{-1} \left[1 - N_{\rm back} (1+\bar{n}_{\rm rec})^{-1}
\frac{m_{\rm H}} {M (\Omega_b / \Omega_m) X_{\rm H}}\right].
\end{equation}
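To see the effect of the background term, note that the correction inside the brackets of $K(M,z)$ is just the fraction of the island's hydrogen atoms that the consumed background photons can ionize. The sketch below shows how the island barrier drops below the bubble barrier as this fraction grows; the fixed $S_{\rm max}$, $\xi$, EdS $\delta_c(z)$ and the background fraction $x_{\rm back}$ are all assumptions of this illustration.

```python
import math

def erfcinv(y):
    """Inverse of math.erfc by bisection (erfc is monotone decreasing)."""
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if math.erfc(mid) > y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

XI, S_MAX = 40.0, 52.0   # assumed toy values

def delta_island(S, z, x_back):
    """Island barrier with K = (1 - x_back) / xi, where x_back is the
    fraction of the island's hydrogen ionizable by the consumed
    background photons (the bracketed term in K(M, z));
    EdS growth assumed, delta_c(z) = 1.686 * (1 + z)."""
    K = (1.0 - x_back) / XI
    return 1.686 * (1.0 + z) - math.sqrt(2.0 * (S_MAX - S)) * erfcinv(K)

# x_back = 0 recovers the bubble barrier; larger x_back lowers the barrier
z, S = 8.0, 10.0
for x_back in (0.0, 0.3, 0.6):
    print(x_back, delta_island(S, z, x_back))
```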
Due to the contribution of the ionizing background photons, in the excursion set model
the barrier for the neutral islands is different from the barrier used in
the bubble model (Eq.~(\ref{Eq.bubbleBarrier})), as the ionizing background would not be present
when the bubbles are isolated. Below, we shall call a barrier with only the
self-ionization term the ``bubble barrier'', denoted by $\delta_{\rm B}(M,z)$, since
it is used to compute the probability of forming bubbles.
Inclusion of the ionizing background would make the barrier
much more negative, and we shall
call the full barrier the ``island barrier'', denoted by $\delta_{\rm I}(M,z)$.
As discussed in the last section, the bubble
barrier lowers as the structure formation
progresses. Even if we simply compute the barrier as in the original bubble model,
i.e. including only the ionizing photons from collapsed halos within the
region being considered, it could have a negative intercept, i.e. $\delta_{\rm B}(S=0)<0$
(see e.g. the thin lines in Fig.~\ref{Fig.barrierV}).
When the bubble barrier passes through the origin of the $\delta - S$ plane, all regions with
the mean density $\delta=0$ are ionized, which means that most of the Universe is ionized.
It is also from this moment onward that a global ionizing background is gradually set up.
We will define the redshift when this occurred as
the ``background onset redshift'' $z_{\rm back}$,
and it can be solved from the following equation:
\begin{equation}
\delta_{\rm I}(S=0;z=z_{\rm back}) = \delta_c(z_{\rm back}) - \sqrt{2\, S_{\rm max}(z_{\rm back})}
\;{\rm erfc}^{-1}(\xi^{-1}) = 0.
\end{equation}
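With the EdS scaling $\delta_c(z) = 1.686\,(1+z)$ and a fixed $S_{\rm max}$ (both simplifying assumptions of this sketch; in the full calculation $S_{\rm max}$ depends on $z$ through $M_{\rm min}$), the equation above can be inverted in closed form. For $\xi=40$ and $S_{\rm max}\simeq 52$ this toy version lands near the quoted $z_{\rm back}=8.6$:

```python
import math

def erfcinv(y):
    """Inverse of math.erfc by bisection (erfc is monotone decreasing)."""
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if math.erfc(mid) > y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def z_back(xi, S_max):
    """Solve 1.686 * (1 + z) = sqrt(2 S_max) * erfcinv(1/xi) for z,
    assuming EdS growth and a z-independent S_max (toy simplification)."""
    return math.sqrt(2.0 * S_max) * erfcinv(1.0 / xi) / 1.686 - 1.0

print(z_back(40.0, 52.0))
```

A more efficient source population (larger $\xi$) sets up the background earlier, i.e. at higher $z_{\rm back}$.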
We take $\{f_{\rm esc}, f_{\star}, N_{\rm \gamma/H}, \bar{n}_{\rm rec}\}
=\{0.2, 0.1, 4000, 1\}$ as the fiducial set of parameters,
so that $\xi=40$ and $z_{\rm back}=8.6$, consistent with the
observations of quasar/gamma-ray burst absorption
spectra \citep{2008MNRAS.386..359G,2008MNRAS.388L..84G} and Lyman alpha emitter
surveys (e.g. \citealt{2006ApJ...647L..95M,2007ApJ...671.1227D}), which
suggest $x_{\rm HI} \ll 1$ at $z \approx 6$.
We note that this background onset redshift is also consistent with our ionizing background
model presented in \S\ref{ion_back}, in which the intensity of the ionizing background
starts to rapidly increase around redshift $z \sim 8 - 9$ (see Fig.~\ref{Fig.Gamma_12}).
However, the exact value of this background onset redshift has little impact on
the final model predictions on the island distribution, as the ionizing background increases
quite rapidly during the late stage of reionization (see \S\ref{ion_back}) and
the main background contribution to the ionizations comes from the redshift range
just above the redshift under consideration.
As all trajectories start from the point $(S,\delta)=(0,0)$, and the island barrier has a negative intercept,
we see that instead of the usual up-crossing condition
in the excursion set model, here the condition of forming a neutral island is represented
by a {\it down-crossing} of the barrier. Once a random walk trajectory hits
the island barrier, we identify an island with the crossing scale,
and assign the points inside this region to a neutral island of the appropriate mass.
Similar to the ``cloud-in-cloud'' problem in the halo
model \citep{1991ApJ...379..440B},
or the ``void-in-void'' problem in the void model \citep{2004MNRAS.350..517S},
there is also an ``island-in-island'' problem. As in those cases, this problem can
also be solved naturally by considering only the
{\it first}-{\it down}-crossings of the barrier curve.
For a general barrier, \citet{2006ApJ...641..641Z} developed an integral equation method
for computing the first-{\it up}-crossing distribution.
Similarly, denoting the island scale with its variance $S_{\rm I}$, the first-{\it down}-crossing
distribution of random trajectories with an arbitrary island barrier can be solved as:
\begin{equation}
f_{\rm I}(S_{\rm I}) = -g_1(S_{\rm I}) - \int_0^{S_{\rm I}} {\rm d}S'
f_{\rm I}(S')\left[g_2(S_{\rm I},S')\right],
\end{equation}
where
\begin{equation}
g_1(S_{\rm I}) = \left[\frac{\delta_{\rm I}(S_{\rm I})}{S_{\rm I}}
-2\frac{{\rm d}\delta_{\rm I}}{{\rm d}S_{\rm I}}\right] P_0[\delta_{\rm I}(S_{\rm I}),S_{\rm I}],
\end{equation}
\begin{equation}
g_2(S_{\rm I},S') = \left[2\frac{{\rm d}\delta_{\rm I}}{{\rm d}S_{\rm I}}
- \frac{\delta_{\rm I}(S_{\rm I})-\delta_{\rm I}(S')}{S_{\rm I}-S'}\right]
P_0[\delta_{\rm I}(S_{\rm I})-\delta_{\rm I}(S'),S_{\rm I}-S'],
\end{equation}
and $P_0(\delta,S)$ is the Gaussian distribution with zero mean and variance $S$, defined as
\begin{equation}
P_0(\delta,S) = \frac{1}{\sqrt{2\pi S}} \exp
\left(-\frac{\delta^2}{2S}\right).
\end{equation}
These integral equations can be solved numerically with the algorithm of
\citet{2006ApJ...641..641Z}; we can then obtain
the mass function of islands at redshift $z$:
\begin{equation}
\frac{{\rm d}n}{{\rm d}\ln M_{\rm I}}(M_{\rm I},z)
= \bar{\rho}_{\rm m,0} f_{\rm I}(S_{\rm I},z) \left|\frac{{\rm d}S_{\rm I}}{{\rm d}M_{\rm I}}\right|.
\label{eq.hostMF}
\end{equation}
With the neutral island mass function, the volume fraction of neutral regions is given by
\begin{equation}
Q^{\rm I}_{\rm V} = \int {\rm d}M_{\rm I}\,\frac{{\rm d}n}{{\rm d}M_{\rm I}}\, V(M_{\rm I}).
\end{equation}
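The first-down-crossing construction can also be checked by brute force: generate Gaussian random walks and histogram the scales at which each walk first drops below the barrier. The Python sketch below does this for a generic linear barrier $\delta_{\rm I}(S)=\delta_{\rm I,0}+\delta_{\rm I,1}S$ with a negative intercept (the barrier coefficients here are illustrative, not the fiducial model values) and compares the crossed fraction with the closed-form first-crossing density of a linear barrier.

```python
import math
import random

def first_down_crossing_mc(delta0, delta1, s_max, ds, n_walks, seed=1):
    """Monte Carlo histogram of first down-crossings of the linear barrier
    delta_I(S) = delta0 + delta1*S (delta0 < 0) by Gaussian random walks."""
    rng = random.Random(seed)
    n_bins = int(round(s_max / ds))
    counts = [0] * n_bins
    step = math.sqrt(ds)            # each increment has variance dS
    for _ in range(n_walks):
        delta = 0.0
        for i in range(n_bins):
            delta += rng.gauss(0.0, step)
            if delta <= delta0 + delta1 * (i + 1) * ds:   # down-crossing
                counts[i] += 1
                break
    return counts

def f_analytic(s, delta0, delta1):
    """Closed-form first-crossing density for a linear barrier."""
    b = delta0 + delta1 * s
    return abs(delta0) / (math.sqrt(2.0 * math.pi) * s**1.5) \
        * math.exp(-b * b / (2.0 * s))

# illustrative barrier: negative intercept, gently rising slope
D0, D1, S_MAX, DS, N = -1.0, 0.05, 4.0, 0.01, 10000
counts = first_down_crossing_mc(D0, D1, S_MAX, DS, N)
frac_mc = sum(counts) / N
frac_th = sum(f_analytic((i + 0.5) * DS, D0, D1) * DS
              for i in range(int(S_MAX / DS)))
```

The Monte Carlo fraction slightly undershoots the analytic value because a discrete walk can miss crossings between steps; the difference vanishes as ${\rm d}S \to 0$.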
\subsection{A toy model with island-permeating ionizing background photons}
\label{toy_model}
To illustrate the basic ideas of the island model, let us
consider a toy model in which the ionizing photons permeate through the
neutral islands with a uniform density. This is not a physically realistic model: if
ionizing photons could permeate through the neutral regions with sufficient flux,
there would be no distinct ionized bubbles or
neutral islands. It may be possible to have a small component of
penetrating radiation, such as hard X-rays, but that would be much smaller than the
total ionizing background.
We consider this model because it admits
a simple analytical solution, which illustrates some aspects of the island model.
The island-permeating ionizing background photons are likely to be hard X-rays,
whose mean free paths are extremely large even in the IGM with a high neutral fraction.
Therefore, here we use an extremely simple model for the ionizing background,
in which the absorptions by dense clumps are neglected, and the mean free path of
these background photons is comparable to the Hubble scale. In any case, this is a toy
model; a more realistic model for the ionizing background will be described in the next section.
Further, we assume that the total number of ionizing photons produced by
redshift $z$ is proportional to the total collapse fraction of
the Universe at that redshift. Some of these photons would have already been consumed by
ionizations that took place before that redshift, and the ionizing background photons are
what is left over. The comoving number density of background ionizing photons is then given by
\begin{equation}
n_\gamma = \bar{n}_{\rm H}\, f_{\rm coll}(z)\,
f_\star\, N_{\rm \gamma/H}\, f_{\rm esc} - (1-Q^{\rm I}_{\rm V})\,\bar{n}_{\rm H}\,(1+\bar{n}_{\rm rec}),
\label{eq.n_gamma}
\end{equation}
where $\bar{n}_{\rm H}$ is the average comoving number density of hydrogen in the Universe,
and the other parameters are the same as those in Eq.(\ref{Eq.bubbleCriterion}).
The number density of ionizing photons given by Eq.~(\ref{eq.n_gamma}) depends on
the global neutral fraction $Q^{\rm I}_{\rm V}$, which is known only after we have assumed an
ionizing background intensity and solved the reionization model, so this equation
should be solved iteratively.
\begin{figure}[t]
\centering{
\includegraphics[scale=0.4]{BarrierV.eps}
\caption{The island barriers in the model with uniform island-permeating ionizing background
photons. The thick curves show the barriers for redshifts 8.2, 8.0 and 7.8, from top to bottom respectively.
Here we assume $\{f_{\rm esc}, f_{\star}, N_{\gamma/H},
\bar{n}_{\rm rec}\}=\{0.2, 0.1, 4000, 1\}$.
The bubble barriers (without ionizing background) at the same set of redshifts
are shown as thin curves. Along the top axis we also show the mass scales corresponding to $S$ for
reference.}
\label{Fig.barrierV}
}
\end{figure}
Suppose that the background ionizing photons are uniformly distributed and consumed
within the islands, then $N_{\rm back}$ is proportional to the island volume.
We see from Eq.(\ref{Eq.K}) that $N_{\rm back}$ cancels
with the island mass $M$ in the denominator, and
we have $N_{\rm back}/M = n_\gamma/ \bar{\rho}_{\rm m}$. Therefore,
in this model, the $K$ factor is essentially independent of $M$, i.e. $K(M,z) = K(z)$,
then the island barrier becomes:
\begin{equation}
\delta_{\rm I}(M,z) = \delta_c(z) - \sqrt{2\, [S_{\rm
max} - S(M)]} \, {\rm erfc}^{-1} \left[K(z)\right].
\end{equation}
For a given redshift $K$ is constant, so, as with the bubble barrier,
the only dependence of the island barrier on the mass scale comes
from $S(M)$. Taking the fiducial set of parameters,
we plot the island barriers at redshift 8.2, 8.0
and 7.8 in Fig.~\ref{Fig.barrierV} with thick curves from top to bottom respectively. The
bubble barriers are also plotted with thin lines in the same figure.
Indeed, in this case the island barriers have a similar shape to the bubble barriers.
Both barriers increase with $S$, as shown in Fig.~\ref{Fig.barrierV}.
\begin{figure}[t]
\centering{
\includegraphics[scale=0.4]{fI_V.eps}}
\caption{The first-down-crossing distribution in the island-permeating photon model
as a function of the island scale at redshifts 8.2, 8.0 and 7.8 from top to bottom
respectively.}
\label{Fig.fI_V}
\end{figure}
As the redshift decreases, the linearly extrapolated critical overdensity
$\delta_c(z)$ decreases, and both barriers move downward.
For a given set of parameters, as the redshift decreases, $n_\gamma$
increases and $\bar{\rho}_m$ decreases, so that $N_{\rm back}/M$ increases. As a result,
the island barrier decreases faster than the bubble barrier for the same decrease in redshift.
We cut all the curves in the figure at $\xi\,M_{\rm min}$, the mass scale
that a halo of mass $M_{\rm min}$ can ionize, and this sets the lower limit of
the bubble scale.
In this toy model, we also cut the island scale at $\xi\,M_{\rm min}$, because
at smaller scales non-linear effects become important, and the collapse fraction
computed from the extended Press-Schechter model (Eq.(\ref{Eq.fcoll})), which is valid for a Gaussian
density field, is no longer accurate. The exact value of the cutoff mass is not critical
for the illustrative purpose here. Note that this mass-cut of islands is not necessary
for the more realistic island model presented in \S\ref{results}, in which the lower limit of
an island scale is naturally set by the survival limit of islands in the presence of an ionizing background
(see the text in \S\ref{results}).
Below this scale, the neutral hydrogen exists only in minihalos or galaxies.
The first-down-crossing distribution for the islands in the
island-permeating photon model
is plotted for three redshifts in Fig.~\ref{Fig.fI_V}; $S$ and the corresponding mass scale $M$
are shown on the bottom and top axes respectively. As expected, at small $S$
the down-crossing probability is vanishingly small, because in this region the barrier
is very negative, and the average displacement of the random trajectories is still very
small. As $S$ increases, the trajectories wander over a wider range, and
in this model the barriers also rise with increasing $S$, so the crossing probability
increases rapidly.
For $z=8.2$, the probability peaks at $S_{\rm I} \approx 5.8$
with $f_{\rm I} \approx 0.07$,
then begins to decrease, because many trajectories have already crossed the barrier at smaller $S$.
As the redshift decreases,
the island barrier moves downward rapidly, and it becomes harder and harder to down-cross it at large scales,
so most of the first down-crossings happen at smaller scales.
As a result, the first-down-crossing probability decreases very rapidly at large scales,
and increases at small scales.
\begin{figure}[t]
\centering{
\subfigure{\includegraphics[scale=0.4]{MIdnI_dMI_V_AnaAppro.eps}}
\subfigure{\includegraphics[scale=0.4]{Vdn_dlnR_V_linear.eps}}
\caption{{\it Left panel}: The number distribution functions of neutral islands in the model with
a uniform island-permeating ionizing background. The numerical solutions are shown as thick curves
for redshifts 8.2, 8.0 and 7.8 from top to bottom on the right respectively;
the corresponding volume filling factors of islands are $Q^{\rm I}_{\rm V} =$ $0.70\; (z = 8.2)$, $0.59\; (z = 8.0)$,
and $0.46\; (z = 7.8)$, respectively. The thin curves show the distribution function given by the
analytical form in the linear approximation.
{\it Right panel}: The size distributions of islands at the
same redshifts as in the left panel, normalized by the total neutral fraction $Q^{\rm I}_{\rm V}$.}
\label{Fig.MF_Rdistr_V}
}
\end{figure}
The mass functions of islands at three redshifts are plotted in the left panel
of Fig.~\ref{Fig.MF_Rdistr_V}. The volume filling factors of the neutral islands are
$Q^{\rm I}_{\rm V} =0.70\; (z = 8.2)$, $0.59\; (z = 8.0)$,
and $0.46\; (z = 7.8)$, respectively, and the corresponding ionizing background
can be expressed as an HI photoionization rate of $\Gamma_{\rm HI} = n_\gamma (1+z)^3 \,c\,\sigma_i
\approx 1.6\times10^{-11} \ensuremath{\, {\rm s}^{-1}}$.
Here $\sigma_i$ is the frequency-averaged photoionization cross-section
of hydrogen.
This level of the ionizing background is unreasonably high, because in this toy
model, we have neglected the effects of dense clumps, minihalos,
and any other possible absorbing systems that could limit
the mean free path of the ionizing photons.
To facilitate comparisons with the bubble
distribution function in \citet{2004ApJ...613....1F}, we also plot in the right
panel the volume weighted distribution of the effective radii of the islands
computed assuming that the islands
are uniform spheres, normalized by the total neutral fraction as in the bubble model.
Note that
\begin{equation}\label{Eq.R}
V \frac{{\rm d}n}{{\rm d}\ln R} \propto 3M^2 \frac{ {\rm d}n}{{\rm d}M} \propto M\frac{ {\rm d}n}{{\rm d}\ln M},
\end{equation}
so this also reflects how masses are distributed in islands of different sizes.
Unsurprisingly, within a given volume, small islands
are much more numerous than larger ones, as shown in the left panel.
Similar to the general shape of the volume-weighted bubble size distribution in the bubble model,
there is a peak in the island size distribution at each redshift in this model. This means that
in the photon-permeating model, the neutral mass is dominated by islands at the characteristic
scale where the distribution peaks.
As redshift decreases, the left panel of Fig.~\ref{Fig.MF_Rdistr_V}
shows that the number of large islands decreases rapidly, while the number of the
smallest ones even increases a little. This evolutionary behavior is also seen in the right panel
of Fig.~\ref{Fig.MF_Rdistr_V},
in which the large islands gradually disappear, resulting in a rising curve at the
small-$R$ end.
In fact, for this toy model the barrier shape is very close to a straight line, for which
a simple and very accurate analytical solution exists. If we expand the barrier as a linear function of
$S$, we have
\begin{equation}
\delta_{\rm I} (M,z) = \delta_{\rm I,0}+\delta_{\rm I,1}\,S,
\end{equation}
where the intercept is
\begin{equation}
\delta_{\rm I,0} \equiv \delta_c(z) - \sqrt{2\, S_{\rm max}}\, {\rm erfc}^{-1} \left[K(z) \right],
\label{eq:Ib0}
\end{equation}
and the slope is
\begin{equation}
\delta_{\rm I,1} \equiv \frac{{\rm erfc}^{-1} \left[K(z) \right]}{\sqrt{2\,S_{\rm max}}}.
\end{equation}
Then the mass function of the host islands can be expressed analytically:
\begin{equation}\label{Eq.AnalyticMF}
M_{\rm I} \frac{{\rm d}n}{{\rm d}M_{\rm I}} = \frac{1}{\sqrt{2\,\pi}} \
\bar{\rho}_{\rm m,0} \ \left|\frac{{\rm d} S}{{\rm d} M_{\rm I}} \right| \
\frac{|\delta_{\rm I,0}|}{S^{3/2}(M_{\rm I})} \exp
\left[ - \frac{\delta_{\rm I}^2(M_{\rm I},z)}{2\, S(M_{\rm I})} \right].
\end{equation}
These are plotted as thin lines in the left panel of Fig.~\ref{Fig.MF_Rdistr_V};
they almost coincide with the results of the numerical solutions (thick lines).
The model of this subsection is only for demonstrating the formalism of calculation with
additional (background) ionizing photons, and for simplicity we assumed that the
consumed photons are proportional to the island volume.
This is not realistic, because the ionization
caused by a background is more plausibly proportional to the surface area $\Sigma$ of the island.
In the next sections we shall consider more realistic models.
\subsection{The Bubbles In Islands}
\label{bubbles_in_island}
Before moving to more realistic models, let us address the problem
of ``bubbles-in-island'' first.
In the above we have assumed that the neutral islands are simple spherical regions, but
in fact there might also be self-ionized regions inside an island.
This ``bubbles-in-island'' problem is similar to, but in the opposite sense of,
the ``voids-in-cloud'' problem in the void
model \citep{2004MNRAS.350..517S,2012MNRAS.420.1648P}.
We identify the bubbles inside neutral islands in the excursion set framework
by considering the trajectories which first
down-crossed the island barrier $\delta_{\rm I}$ at $S_{\rm I}$, then at a larger $S_{\rm B}$ up-crossed over
the bubble barrier $\delta_{\rm B}$. The bubble barrier is the barrier
defined without considering the ionizing background, since this
background should be absent inside large neutral regions. Note that in the
toy model discussed above, the ionizing background permeates through the neutral
islands, so it does not make sense to distinguish between the island barrier outside and
the bubble barrier inside, and
the bubbles-in-island problem cannot be posed there.
In the following, we denote the {\it host} island scale (including the bubbles inside)
and the bubble scale by $S_{\rm I}$ and $S_{\rm B}$ respectively, the first
down-crossing distribution by $f_{\rm I}(S_{\rm I},\delta_{\rm I})$, and the
conditional probability for a bubble to form inside by
$f_{\rm B}(S_{\rm B},\delta_{\rm B}|S_{\rm I},\delta_{\rm I})$.
The probability distribution of finding a bubble of size $S_{\rm B}$ in a host island of
size $S_{\rm I}$ is then given by
\begin{equation}
\mathcal{F}(S_{\rm B},S_{\rm I})=f_{\rm I}(S_{\rm I},\delta_{\rm I})~
\cdot ~f_{\rm B}(S_{\rm B},\delta_{\rm B}|S_{\rm I},\delta_{\rm I}).
\end{equation}
The neutral mass of an island is given by the total mass of the host
island minus the masses of bubbles of various sizes embedded in the host island, i.e.
\begin{equation}
M = M_{\rm I}(S_{\rm I}) - \sum_i M_{\rm B}^{i} (S_{\rm B}^{i}).
\end{equation}
The conditional probability distribution
$f_{\rm B}(S_{\rm B},\delta_{\rm B}|S_{\rm I},\delta_{\rm I})$ characterizes the size
distribution of bubbles inside an island of scale $S_{\rm I}$ and
overdensity $\delta_{\rm I}$, and
$f_{\rm B}(S_{\rm B},\delta_{\rm B}|S_{\rm I},\delta_{\rm I})\, {\rm d}S_{\rm B}$ is the
conditional probability that a random walk starting from the point
$(S_{\rm I},\delta_{\rm I})$ first up-crosses $\delta_{\rm B}$
between $S_{\rm B}$ and $S_{\rm B}+{\rm d}S_{\rm B}$.
In order to compute $f_{\rm B}$, we can shift the origin
of coordinates to the point $(S_{\rm I},\delta_{\rm I})$; the
method developed by \citet{2006ApJ...641..641Z} is then still
applicable. The effective bubble barrier becomes:
\begin{equation}
\delta_{\rm B}^{\prime} = \delta_{\rm B}(S+S_{\rm I}) - \delta_{\rm I}(S_{\rm I}),
\end{equation}
where $S = S_{\rm B} - S_{\rm I}$.
Given an island $(S_{\rm I},\delta_{\rm I})$, on average, the fraction of volume (or mass)
of the island occupied by bubbles of different sizes is
\begin{equation}
q_{\rm B}(S_{\rm I},\delta_{\rm I};z) = \int_{S_{\rm I}}^{S_{\rm max}(\xi\cdot M_{\rm min})}
\left[1+\delta_{\rm I}\,D(z)\right]~f_{\rm B}(S_{\rm B},
\delta_{\rm B}|S_{\rm I},\delta_{\rm I})~ {\rm d}S_{\rm B}.
\end{equation}
The factor $[1+\delta_{\rm I}\,D(z)]$ enters because these bubbles
reside in an underdense environment with linear overdensity $\delta_{\rm I}\,D(z)$, where $D(z)$ is
the linear growth factor.
Then the net neutral mass of the host island can be written as
$M=M_{\rm I}(S_{\rm I}) \,[1-q_{\rm B}(S_{\rm I},\delta_{\rm I};z)]$.
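When both barriers can be approximated as linear in $S$ near the island scale, $f_{\rm B}$ takes the standard closed form for a linear barrier after the shift of origin, and $q_{\rm B}$ reduces to a single quadrature. The sketch below uses made-up barrier coefficients purely to illustrate the bookkeeping (here `ds_max` stands for $S_{\rm max}(\xi\,M_{\rm min})-S_{\rm I}$); it is not the full numerical solution used in this paper.

```python
import math

def f_up_linear(s, b0, b1):
    """First up-crossing density of the linear barrier b0 + b1*s (b0 > 0)
    for a Gaussian random walk starting at the origin."""
    b = b0 + b1 * s
    return b0 / (math.sqrt(2.0 * math.pi) * s**1.5) * math.exp(-b * b / (2.0 * s))

def q_bubbles(delta_i, delta_b_at_si, slope_b, ds_max, growth, n=4000):
    """Mean fraction of an island (S_I, delta_I) occupied by interior bubbles,
    for a locally linear bubble barrier; s is measured from S_I."""
    b0 = delta_b_at_si - delta_i     # effective barrier height above the walk
    s_lo = 1e-4
    dln = math.log(ds_max / s_lo) / n
    total = 0.0
    for k in range(n):               # log-grid midpoint integration
        s = s_lo * math.exp((k + 0.5) * dln)
        total += f_up_linear(s, b0, slope_b) * s * dln
    return (1.0 + delta_i * growth) * total

# made-up numbers: an underdense island with the bubble barrier 1.5 above it
qb = q_bubbles(delta_i=-1.0, delta_b_at_si=0.5, slope_b=0.1,
               ds_max=20.0, growth=0.8)
```

The prefactor $(1+\delta_{\rm I}\,D)$ is the same environmental correction as in the equation for $q_{\rm B}$ above.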
Taking into account the effect of bubbles-in-island, the neutral mass function
of the islands at redshift $z$ is
\begin{equation}
\frac{{\rm d}n}{{\rm d}M}(M,z)
= \frac{{\rm d}n}{{\rm d}M_{\rm I}} \frac{{\rm d}M_{\rm I}}{{\rm d}M}
= \frac{\bar{\rho}_{\rm m,0}}{M_{\rm I}} f_{\rm I}(S_{\rm I},z)
\left|\frac{{\rm d}S_{\rm I}}{{\rm d}M_{\rm I}}\right| \frac{{\rm d}M_{\rm I}}{{\rm d}M}.
\end{equation}
\section{The ionizing background}\label{ion_back}
The intensity of the ionizing background is very important in the late reionization epoch.
However, it has only been constrained after reionization from the mean transmitted flux
in the Ly-$\alpha$ forest (e.g. \citealt{2011MNRAS.412.1926W,2011MNRAS.412.2543C}), and in
any case it evolves with redshift and depends on the detailed history of the reionization.
Conversely, the evolution of the ionizing background also affects the reionization process.
In the toy model presented in \S\ref{toy_model}, we considered an island-permeating ionizing
background, for which the absorptions from dense clumps are neglected, and the resulting
intensity of the ionizing background is unreasonably high.
Here we give a more realistic model for the ionizing background. Due to the existence of dense
clumps that have high recombination rate and limit the mean free path of the ionizing
background photons, an island does not see all the ionizing photons emitted by all the sources,
but only out to a distance of roughly the mean free path of the ionizing photons.
The comoving number density of background ionizing photons at redshift $z$ can be modeled as
the integral over the escaped ionizing photons that are emitted from newly collapsed objects
and survive propagation from the sources to the position under consideration:
\begin{equation}\label{n_gamma}
n_\gamma(z) \;=\; \int_z\, \bar{n}_{\rm H}\, \left|\frac{{\rm d}f_{\rm coll}(z')}{{\rm d}z'}\right|\, f_\star\, N_{\rm \gamma/H}\, f_{\rm esc}\, \exp \left[\,-\, \frac{l(z,z')}{\lambda_{\rm mfp}(z)}\right]\, {\rm d}z',
\end{equation}
where $l(z,z')$ is the physical distance between the source at redshift $z'$ and the redshift $z$ under
consideration, and $\lambda_{\rm mfp}$ is the physical mean free path of the background
ionizing photons.
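Eq.~(\ref{n_gamma}) reduces to a single quadrature once $f_{\rm coll}(z')$, $H(z)$ and $\lambda_{\rm mfp}$ are specified. The sketch below uses a purely hypothetical collapse-fraction history and a fixed mean free path (all numerical values here are assumptions for demonstration, not the fitted values of this paper), with the distance $l(z,z')$ evaluated in the matter-dominated approximation.

```python
import math

# assumed parameters, for demonstration only
H0 = 67.0 / 3.086e19           # Hubble constant in s^-1 (67 km/s/Mpc)
OMEGA_M = 0.32
C_CM = 2.998e10                # speed of light, cm/s
NH_BAR = 2.0e-7                # comoving hydrogen number density, cm^-3
F_STAR, N_GAMMA_H, F_ESC = 0.1, 4000.0, 0.2

def f_coll(z):
    """Hypothetical collapse-fraction history (illustrative only)."""
    return 0.03 * math.exp(-(z - 6.0) / 2.0)

def l_phys(z, zp):
    """Physical distance between z' and z, matter-dominated approximation."""
    comoving = (2.0 * C_CM / (H0 * math.sqrt(OMEGA_M))) \
        * ((1.0 + z)**-0.5 - (1.0 + zp)**-0.5)
    return comoving / (1.0 + z)

def n_gamma(z, lam_mfp, z_max=12.0, n_steps=2000):
    """Comoving density of background ionizing photons, Eq. (n_gamma)."""
    dz = (z_max - z) / n_steps
    total = 0.0
    for k in range(n_steps):
        zp = z + (k + 0.5) * dz
        dfdz = abs(f_coll(zp + 0.5 * dz) - f_coll(zp - 0.5 * dz)) / dz
        total += (NH_BAR * dfdz * F_STAR * N_GAMMA_H * F_ESC
                  * math.exp(-l_phys(z, zp) / lam_mfp) * dz)
    return total

n_small = n_gamma(7.0, lam_mfp=1.0e25)   # ~3 Mpc physical mean free path
n_large = n_gamma(7.0, lam_mfp=1.0e26)
```

A longer mean free path lets photons from a wider redshift shell contribute, so $n_\gamma$ grows with $\lambda_{\rm mfp}$ but is always bounded by the unattenuated photon budget.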
Various absorption systems could limit the mean free path of the background ionizing photons.
The most frequently discussed absorbers are Lyman limit systems, which have HI column densities large enough to remain self-shielded (e.g. \citealt{2000ApJ...530....1M,2005MNRAS.363.1031F,2013MNRAS.429.1695B}).
Minihalos are also self-shielding systems that could block ionizing photons. \citet{2005MNRAS.363.1031F}
developed a simple model for the mean free path of ionizing photons in a Universe where minihalos
dominate the recombination rate. However, as also mentioned in \citet{2005MNRAS.363.1031F}, the formation and the abundance of minihalos are highly uncertain \citep{2003MNRAS.346..456O}, and minihalos would be probably evaporated during the late epoch of reionization \citep{1999ApJ...523...54B,2004MNRAS.348..753S}, although they may consume substantial ionizing photons before they are totally evaporated \citep{2005MNRAS.361..405I}.
In addition to Lyman limit systems and minihalos, the cumulative absorption by low column density systems can not be neglected \citep{2005MNRAS.363.1031F}, but the quantitative contribution from these systems is quite uncertain, and needs to be calibrated by high resolution simulations or observations.
Here we focus on the effect of Lyman limit systems on the mean free path of ionizing photons, and use a simple model for the IGM density distribution developed by \citet{2000ApJ...530....1M} (hereafter MHR00).
In the MHR00 model, the volume-weighted density distribution of the IGM measured from
numerical simulations can be fitted by the formula
\begin{equation}
P_{\rm V}(\Delta)\, {\rm d}\Delta \;=\; A_0\, \exp \left[\, -\, \frac{(\Delta^{-2/3} - C_0)^2}{2\,(2\delta_0/3)^2}\, \right]\, \Delta^{-\beta}\, {\rm d}\Delta
\end{equation}
for $z\sim 2 - 6$, where $\Delta = \rho/\bar{\rho}$. Here $\delta_0$ and $\beta$ are parameters fitted
to simulations. The value of $\delta_0$ can be extrapolated to higher redshifts by the function
$\delta_0 \,=\, 7.61/(1+z)$ \citep{2000ApJ...530....1M}, and we take $\beta = 2.5$ for the redshifts
of interest. The parameters $A_0$ and $C_0$ are set by requiring that the integrals of $P_{\rm V}(\Delta)$ and
$\Delta P_{\rm V}(\Delta)$ are normalized to unity.
Using the density distribution of the IGM, the mean free path of ionizing photons can be determined
by the mean distance between self-shielding systems with relative densities above a critical value $\Delta_{\rm crit}$, and can be written as \citep{2000ApJ...530....1M,2005MNRAS.361..577C}
\begin{equation}\label{Eq.mfp}
\lambda_{\rm mfp} \;=\; \frac{\lambda_0}{[1\,-\, F_{\rm V}(\Delta_{\rm crit})]^{2/3}},
\end{equation}
where $F_{\rm V}(\Delta_{\rm crit})$ is the volume fraction of the IGM occupied by regions
with the relative density lower than $\Delta_{\rm crit}$, given by
\begin{equation}
F_{\rm V}(\Delta_{\rm crit}) \;=\; \int_0^{\Delta_{\rm crit}} P_{\rm V}(\Delta) \,{\rm d}\Delta.
\end{equation}
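For concreteness, the MHR00 normalization and the resulting mean-free-path scaling can be evaluated with a short script. The sketch below (log-grid midpoint integration and bisection on $C_0$; the grid limits, iteration counts and the example value of $\Delta_{\rm crit}$ are arbitrary choices for illustration) solves the two normalization conditions at $z=6$ and then evaluates $F_{\rm V}$ and $\lambda_{\rm mfp}/\lambda_0$ from Eq.~(\ref{Eq.mfp}).

```python
import math

BETA = 2.5
Z = 6.0
DELTA_0 = 7.61 / (1.0 + Z)         # MHR00 extrapolation
SIGMA = 2.0 * DELTA_0 / 3.0

def pv_shape(d, c0):
    """Unnormalized MHR00 density distribution."""
    return math.exp(-(d**(-2.0 / 3.0) - c0)**2 / (2.0 * SIGMA**2)) * d**(-BETA)

def moments(c0, d_lo=1e-3, d_hi=1e4, n=2000):
    """Volume (i0) and mass (i1) integrals of the unnormalized distribution."""
    dln = math.log(d_hi / d_lo) / n
    i0 = i1 = 0.0
    for k in range(n):
        d = d_lo * math.exp((k + 0.5) * dln)
        w = pv_shape(d, c0) * d * dln    # dDelta = Delta dlnDelta
        i0 += w
        i1 += w * d
    return i0, i1

# bisection: choose C0 so that the mean density <Delta> = i1/i0 equals 1
lo, hi = 0.1, 2.0
for _ in range(50):
    c0 = 0.5 * (lo + hi)
    i0, i1 = moments(c0)
    if i1 > i0:            # mean too high -> larger C0 shifts power to low Delta
        lo = c0
    else:
        hi = c0
A0 = 1.0 / i0              # A0 then normalizes the volume integral

def f_v(d_crit, c0, a0, d_lo=1e-3, n=2000):
    """Volume fraction below d_crit, i.e. F_V(Delta_crit)."""
    dln = math.log(d_crit / d_lo) / n
    total = 0.0
    for k in range(n):
        d = d_lo * math.exp((k + 0.5) * dln)
        total += a0 * pv_shape(d, c0) * d * dln
    return total

fv = f_v(20.0, c0, A0)                 # e.g. Delta_crit ~ 20
mfp_ratio = (1.0 - fv)**(-2.0 / 3.0)   # lambda_mfp / lambda_0
```

The recovered $C_0$ lands near the MHR00 value for $z=6$, and $\lambda_{\rm mfp}/\lambda_0 > 1$ as expected, since only the densest clumps self-shield.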
Following \citet{2001ApJ...559..507S}, and assuming photoionization equilibrium and case A recombination rate, the critical relative density for a clump to self-shield can be approximately
written as (see also \citealt{2000ApJ...530....1M,2005MNRAS.363.1031F,2013MNRAS.429.1695B}):
\begin{equation}\label{critical_density}
\Delta_{\rm crit} \;=\; 36\, \Gamma_{-12}^{2/3}\, T_4^{2/15}\, \left(\frac{\mu}{0.61}\right)^{1/3}\,
\left(\frac{f_e}{1.08}\right)^{-2/3}\, \left(\frac{1+z}{8}\right)^{-3},
\end{equation}
where $\Gamma_{-12}\,=\, \Gamma_{\rm HI}/10^{-12} \ensuremath{\, {\rm s}^{-1}}$ is the hydrogen photoionization rate
in units of $10^{-12} \ensuremath{\, {\rm s}^{-1}}$, $T_4 \,=\, T/10^4 \ensuremath{\, {\rm K}}$ is the gas temperature in units of $10^4 \ensuremath{\, {\rm K}}$,
$\mu$ is the mean molecular weight, and $f_e \,=\, n_e/n_{\rm H}$ is the free electron fraction with respect to hydrogen. For the mostly ionized IGM during the late stage of reionization, we assume $T_4 = 2$.
The HI photoionization rate $\Gamma_{\rm HI}$ in Eq.(\ref{critical_density}) is related to the total
number density of ionizing photons $n_\gamma$ in Eq.(\ref{n_gamma}) by
\begin{equation}
\Gamma_{\rm HI} \;=\; \int \, \frac{{\rm d}n_\gamma}{{\rm d}\nu}\, (1+z)^3\, c\, \sigma_\nu\, {\rm d} \nu,
\end{equation}
where ${\rm d}n_\gamma/{\rm d}\nu$ is the spectral distribution of the background ionizing photons,
$c$ is the speed of light, and $\sigma_\nu = \sigma_0\, (\nu/\nu_0)^{-3}$ with
$\sigma_0 = 6.3 \times 10^{-18} \ensuremath{\,{\rm cm}}^2$ and $\nu_0$ being the frequency of hydrogen ionization threshold.
Assuming a power law spectral distribution of the form
${\rm d}n_\gamma/{\rm d}\nu = (n_\gamma^0/\nu_0) (\nu/\nu_0)^{-\eta-1}$,
in which $n_\gamma^0$ is related to the total photon number density $n_\gamma$ by
$n_\gamma = n_\gamma^0/\eta$, then the HI photoionization rate can be written as
\begin{equation}\label{Gamma_HI}
\Gamma_{\rm HI} \;=\; \frac{\eta}{\eta+3}\, n_\gamma\, (1+z)^3\, c\, \sigma_0.
\end{equation}
In the following we assume $\eta = 3/2$ to approximate the spectra of starburst galaxies
\citep{2005MNRAS.363.1031F}.
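The $\eta/(\eta+3)$ prefactor in Eq.~(\ref{Gamma_HI}) follows from the ratio $\int_1^\infty x^{-\eta-4}\,{\rm d}x \,/\, \int_1^\infty x^{-\eta-1}\,{\rm d}x$ with $x=\nu/\nu_0$. A quick numerical check (truncating the integrals at an arbitrary large $x_{\max}$):

```python
import math

def gamma_prefactor(eta, x_max=200.0, n=50000):
    """Ratio of the cross-section-weighted photon integral to the total,
    for dn/dnu ~ nu^(-eta-1) and sigma_nu ~ nu^-3; equals eta/(eta+3)."""
    dln = math.log(x_max) / n
    num = den = 0.0
    for k in range(n):
        x = math.exp((k + 0.5) * dln)
        w = x**(-eta - 1.0) * x * dln      # dx = x dlnx on the log grid
        num += w * x**-3.0                 # weighted by sigma_nu / sigma_0
        den += w
    return num / den

ratio = gamma_prefactor(1.5)               # starburst-like eta = 3/2
```

For $\eta=3/2$ the prefactor is $1/3$, which is the value used in the fiducial model.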
\begin{figure}[t]
\centering{
\includegraphics[scale=0.4]{Gamma_12_z.eps}
\caption{The redshift evolution of the hydrogen ionization rate $\Gamma_{\rm -12}$.}
\label{Fig.Gamma_12}
}
\end{figure}
It has been suggested that the characteristic length $\lambda_0$ in Eq.(\ref{Eq.mfp})
is related to the Jeans length and can be fixed by comparing
with low redshift observations \citep{2005MNRAS.361..577C,2013ApJ...772...93K}.
We take $\lambda_0 = A_{\rm mfp}\, r_{\rm J}$, where $r_{\rm J}$ is the physical Jeans length.
Taking the proportionality constant $A_{\rm mfp}$ as a free parameter, the comoving number density
of background ionizing photons $n_\gamma$, or equivalently the HI photoionization rate
$\Gamma_{\rm HI}$, can be solved by combining Eq.(\ref{n_gamma}) - (\ref{critical_density})
and Eq.(\ref{Gamma_HI}). We
scale the hydrogen photoionization rate to be $\Gamma_{\rm HI} = 10^{-12.8} \ensuremath{\, {\rm s}^{-1}}$ at redshift 6, as
suggested by recent measurements from the Ly-$\alpha$ forest
\citep{2011MNRAS.412.1926W,2011MNRAS.412.2543C}. Then the parameter $A_{\rm mfp}$ is
constrained to be $A_{\rm mfp} = 0.482$.
The redshift evolution of the hydrogen photoionization rate due to the ionizing background
is shown in Fig.~\ref{Fig.Gamma_12}.
Note that by scaling the background photoionization rate of hydrogen
to the observed value, we implicitly take into account the possible absorptions
due to minihalos and low column density systems.
In the above treatment of the ionizing background, the derived intensity is effectively
the averaged value over the whole Universe. Due to the clustering of the ionizing sources, however,
the ionizing background should fluctuate significantly from place to place at the end of reionization.
The detailed spatial fluctuations of the ionizing background would be challenging to incorporate, so
for the purpose of illustrating the island model and predicting the statistical results in the next section,
we use a uniform ionizing background with the averaged intensity.
\section{The Island model of Reionization}\label{results}
\subsection{Ionization at the surfaces of neutral islands}
We now use the excursion set model developed above to study the neutral islands
during the reionization process. In \S\ref{toy_model}, we used a simple toy model
to illustrate the basic formalism, but it is based on
the unrealistic assumption that the ionizing photons permeate through the neutral islands.
Here we consider more physically motivated model assumptions.
We assume that a spatially homogeneous
ionizing background flux is established throughout all of the ionized
regions at redshift $z_{\rm back}$.
These ionizing photons can not penetrate the neutral islands, but are consumed
near the surfaces of the islands.
We may then assume that the number of photons consumed by an island at any instant
is proportional to its surface area, or in terms of mass, to $M^{2/3}$.
The number of background ionizing photons consumed is then given by
\begin{equation}\label{Nph}
N_{\rm back} = \int F(z)\,\Sigma_{\rm I} (t) \,{\rm d}t,
\end{equation}
where $\Sigma_{\rm I}$ is the physical surface area of the neutral island,
while $F(z)$ is the physical number flux of background ionizing photons which is related to the
comoving photon number density by $F(z)=n_\gamma(z)\, (1+z)^3\,c/4$.
For spherical islands, the surface area is
related to the scale radius by $\Sigma_{\rm I}=4\pi R^2 / (1+z)^2$, in which $R$ is in comoving coordinates.
For non-spherical islands,
one could still introduce a characteristic scale $R$ and
the area would be related to $R^2$. In fact,
under the action of the ionizing background, non-spherical
neutral regions tend to evolve toward
spherical ones, because a sphere has the minimum surface area for a given volume.
\begin{figure}[t]
\centering{
\subfigure{\includegraphics[scale=0.4]{BarriervS.eps}}
\subfigure{\includegraphics[scale=0.4]{fI_vS.eps}}
\caption{{\it Left panel}: The island barriers for our fiducial model.
The solid, dashed and dot-dashed curves are for redshifts 6.9, 6.7 and 6.5
from top to bottom respectively, and the corresponding neutral fractions of the Universe
(excluding the bubbles in islands)
are $Q_{\rm V}^{\rm HI} = $ 0.17, 0.11, and 0.05, respectively.
{\it Right panel}: The corresponding first down-crossing distributions at the
same redshifts as the left panel.}
\label{Fig.barrier_fI_vS}
}
\end{figure}
The usual excursion set approach does not contain time or history,
and everything is determined from the information at a given redshift.
However, we see from Eq.~(\ref{Nph}) that the consumption of the
ionizing background photons by an island depends on its history.
Below we try to solve this problem by making some simplifying assumptions.
We assume that the neutral islands shrink with time,
and the hydrogen number density around an island is nearly a constant, which is approximately
true when we are considering large scales.
For simplicity, let us consider a spherical island.
When the island shrinks, counting the required number of ionizations gives
\begin{equation}
n_{\rm H}(R)(1+\bar{n}_{\rm rec}) \,4\pi R^2\, (-{\rm d}R)\, =
F(z)\,\frac{4\pi R^2}{(1+z)^2}\, {\rm d}t,
\end{equation}
where the hydrogen number density $n_{\rm H}$ is in comoving coordinates, so that
\begin{equation}
\frac{{\rm d}R}{{\rm d}t} = -\frac{F(z)/(1+z)^2}{n_{\rm H}(R) (1+\bar{n}_{\rm rec})}
\approx -\frac{F(z)/(1+z)^2}{\bar{n}_{\rm H} (1+\bar{n}_{\rm rec})}.
\end{equation}
Integrating from the background onset redshift $z_{\rm back}$ to redshift $z$, we have
\begin{equation}\label{eq.dR}
\Delta R \equiv R_i - R_f = \int_z^{z_{\rm back}} \frac{F(z')}
{\bar{n}_{\rm H} (1+\bar{n}_{\rm rec})} \, \frac{{\rm d}z'}{H(z')(1+z')^3},
\end{equation}
where $R_i$ and $R_f$ denote the initial and final scale of the island respectively.
This shows that the change in $R$ is independent of the mass of the island, but
depends solely on the elapsed time.
The total number of background ionizing photons consumed is given by
\begin{equation}\label{Eq.DeltaN}
N_{\rm back} = \frac{4\pi}{3} \left(R_i^3-R_f^3\right) \bar{n}_{\rm H} (1+\bar{n}_{\rm rec}).
\end{equation}
\subsection{Island Size Distribution}
With this model for the consumption of the background ionizing photons, and
taking the fiducial set of parameters, we plot the island barriers of
the inequality (\ref{Eq.islandBarrier})
in the left panel of Fig.~\ref{Fig.barrier_fI_vS} for several redshifts.
The corresponding first down-crossing distributions as a function of the host island scale
$S_{\rm I}$ (i.e. including ionizing bubbles inside
the island) are plotted in the right panel of Fig.~\ref{Fig.barrier_fI_vS}.
Unlike the toy model with permeating ionizing photons, in this model the shape of the
island barriers is drastically different from the bubble barriers, hence a
different shape of the first down-crossing distribution curves.
The island and bubble barriers have the same intercept at $S \sim 0$, because
on very large scales, the contribution of the ionizing background which is proportional
to the surface area would become unimportant when compared with the self-ionization which is
proportional to the volume. However, the island barriers bend downward at $S>0$, because
of the contribution of the ionizing background. As the barrier curves become gradually steeper
toward larger $S$,
it is increasingly harder for the random walks to first down-cross them at smaller scales,
even though on smaller scales the dispersion of the random trajectories grows larger.
As a result, the first down-crossing distribution rapidly increases to a peak value and drops
down on small scales, and there
is a mass-cut on the host island scale, $M_{\rm I,min}$, at each redshift in
order to ensure $K(M,z)\ge 0$. This lower cut on the island mass scale
assures $\Delta R \le R_i$, i.e. the whole island is not completely ionized during
this time by the ionizing background, and $M_{\rm I,min}$ is the minimum mass of the host
island at $z_{\rm back}$ that can survive till the redshift $z$ under consideration.
\begin{figure}[t]
\centering{
\includegraphics[scale=0.4]{MIdnI_dMI_vS.eps}
\caption{The mass function of the host islands in terms of the mass at redshift $z$ (thick lines)
and the initial mass at redshift $z_{\rm back}$ (thin lines) for our fiducial model.
The solid, dashed, and dot-dashed lines are for $z=$ 6.9, 6.7,
and 6.5, from top to bottom respectively.}
\label{Fig.MF_host_vS}
}
\end{figure}
The mass distribution function of the host islands can be obtained directly from
Eq.~(\ref{eq.hostMF}), from which we
can see clearly the shrinking process of these islands.
What we are interested in is the mass of the host island at
redshift $z$, but the mass scale $M$ in
Eqs.~(\ref{Eq.IslandCondition}-\ref{Eq.K}) is the initial island
mass at redshift $z_{\rm back}$. We may convert the two masses
using Eq.~(\ref{eq.dR}):
\begin{equation}
\frac{M_f}{M_i}=\left(1-\frac{\Delta R}{R_i}\right)^3.
\label{eq.Mf}
\end{equation}
Islands with initial radius $R_i<\Delta R $ would not survive, and islands with larger
radius would also evolve into smaller ones.
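Equation (\ref{eq.Mf}) is a simple cubic scaling of mass with the shrunken radius. A minimal sketch (the function name is ours):

```python
def final_mass(M_i, R_i, dR):
    """Final island mass after the radius shrinks by dR, following
    M_f/M_i = (1 - dR/R_i)^3; islands with R_i <= dR do not survive
    and are assigned zero final mass."""
    if R_i <= dR:
        return 0.0
    return M_i * (1.0 - dR / R_i) ** 3

# an island losing half its radius keeps one eighth of its mass
final_mass(8.0, 2.0, 1.0)  # -> 1.0
```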
The distributions of the host island mass (including ionized bubbles inside)
are plotted for $z = $ 6.9, 6.7, and 6.5 in Fig.~\ref{Fig.MF_host_vS} as thick
lines. The distributions of the corresponding progenitors at redshift $z_{\rm back}$ are plotted as thin lines.
Using our fiducial model parameters, the volume filling factors of these progenitors
at $z_{\rm back}$ are $Q^{\rm host}_{\rm V,i} = $ 0.51, 0.31, and 0.14, for the host islands
that survive to $z = $ 6.9, 6.7, and 6.5 respectively.
The initial mass distributions of these progenitors all have
a very steep lower mass cutoff, because below that minimal mass the whole island
would be completely ionized by the background photons by redshift $z$. Due to the mapping
of Eq.~(\ref{eq.Mf}), the cutoff in the final mass distribution is not as sharp as in the
initial mass distribution, and the whole distribution curve begins to bend down at lower masses.
\subsection{Bubbles-in-Islands}
However, the total mass function of the host islands does not give a full
picture of the reionization process, since there could be ionized bubbles inside these
islands. Even though the outside ionizing background is shielded from
the center of the neutral islands, galaxies may form inside the neutral
islands, and the photons emitted by these galaxies ionize part of
the islands. The neutral islands are located in underdense regions, so fewer
galaxies form there; nevertheless, by the end of the epoch of reionization,
galaxy formation inside them cannot be neglected.
\begin{figure}[t]
\centering{
\subfigure{\includegraphics[scale=0.4]{dnb_dlnMb_vS_z690.eps}}
\subfigure{\includegraphics[scale=0.4]{qB_vS.eps}}
\caption{{\it Left panel}: The mass function of bubbles in an island
of scale $S_{\rm I} = $ 0.01, 0.05, and 0.1, from bottom to top respectively.
The redshift shown here is 6.9. {\it Right panel}: The average mass fraction
of bubbles in an island as a function of the island scale at redshifts $z = 6.9$,
6.7, and 6.5, from top to bottom respectively. The percolation threshold $p_c = 0.16$
is also shown as the horizontal line.}
\label{Fig.qB_vS}
}
\end{figure}
As discussed in \S \ref{bubbles_in_island}, the distribution of bubbles in an island can be
calculated from the conditional probability of up-crossing the bubble barrier after down-crossing
the island barrier. We plot the resulting mass function of inside bubbles for three
different host islands at redshift $z=6.9$ in the left panel of Fig.~\ref{Fig.qB_vS}.
The masses of the host islands are $M\approx 2\times 10^{17} M_\odot \;(S_{\rm I}=0.01)$,
$2\times 10^{16} M_\odot \;(S_{\rm I}=0.05)$, and $8\times 10^{15} M_\odot \;(S_{\rm I}=0.1)$,
from bottom to top respectively. We see that the bubbles in islands follow a power-law
distribution, with small bubbles being more numerous. The upward trend at the large-scale end
on each mass distribution curve is due to the numerical error in the up-crossing probability when the
inside bubble scale approaches the host island scale.
To assess the total amount of bubbles in islands, we plot in the right panel of
Fig.~\ref{Fig.qB_vS} the average mass fraction
of bubbles-in-island as a function of the host island mass.
We see that there could be a sizable fraction of the host island which is ionized from within,
especially for the larger islands.
At $z=6.9$ and for $M>10^{12} M_\odot$, this fraction
is higher than 35\%, and it is higher than 60\% for $M>10^{14} M_\odot$ host islands
at the same redshift, so smaller ionized bubbles flourish within these large neutral islands.
From the excursion set point of view, it is not unusual for the random trajectory
to turn upward and cross the bubble barrier just after it has down-crossed the island barrier,
especially at large scales where the separation between the island barrier
and the bubble barrier is small. Therefore, even though the whole region is underdense, a large
fraction of it could be sufficiently dense for galaxies to form and create
ionized regions around them. The bubble fraction drops sharply for smaller
islands, because the island barrier departs from the bubble barrier rapidly at small scales, and
it is less likely to form galaxies inside small islands with very low densities.
Interestingly, this fraction drops as the redshift decreases.
For $z=6.5$, it is about 7\%
for $M\sim10^{12} M_\odot$ host islands,
and about 42\% for $M\sim10^{14} M_\odot$ host islands.
This is because what are left at later times are relatively deep underdense regions,
and the probability of forming galaxies in such underdense environments is lower.
\begin{figure}[t]
\centering{
\subfigure{\includegraphics[scale=0.4]{Mdn_dM_vS.eps}}
\subfigure{\includegraphics[scale=0.4]{Vdn_dlnR_vS_linear.eps}}
\caption{{\it Left panel}: The mass function of neutral islands at redshift $z = $
6.9, 6.7, and 6.5, from top to bottom respectively. The corresponding volume
filling factor of the neutral islands at these redshifts are $Q_{\rm V}^{\rm HI} = $ 0.17,
0.11, and 0.05, respectively. {\it Right panel}: The size distribution of neutral
islands, with the scale $R$ converted from their volume, at redshifts $z = 6.9$,
6.7, and 6.5, from bottom to top at the center respectively.}
\label{Fig.MF_Rdis_vS_nopc}
}
\end{figure}
Excluding the bubbles in islands, we plot the mass function and the size
distribution of the {\it net}
neutral islands in the left and right panel of Fig.~\ref{Fig.MF_Rdis_vS_nopc} respectively.
The solid, dashed, and dot-dashed lines are for $z = $ 6.9, 6.7, and 6.5, with a
volume filling factor of the net neutral islands of
$Q_{\rm V}^{\rm HI} = 0.17 (z = 6.9), 0.11 (z = 6.7)$, and 0.05 ($z = 6.5$), respectively.
Similar to the host island mass function shown in Fig.~\ref{Fig.MF_host_vS},
there is also a small scale cutoff on the neutral island mass due to the existence of
an ionizing background. Because of the high bubbles-in-island fraction
in large host islands, excluding the
bubbles in islands results in much fewer large islands. As seen from the size
distribution in the right panel, in which the scale $R$ is converted from the
neutral island volume assuming a spherical shape, the mass fractions of both large
and small islands decrease with time, and the distribution curve becomes progressively
sharper, but the characteristic scale of the neutral islands remains almost unchanged.
Fig.~\ref{Fig.MF_Rdis_vS_nopc} shows basically the number and mass distribution
of the neutral components of
the host islands. However, the results of bubbles-in-island fraction in the
right panel of Fig.~\ref{Fig.qB_vS}
show that within large host islands, a large fraction
of the island volume could be ionized by the photons from newly formed galaxies within.
A naive application of the host island mass function may greatly overestimate the mean
neutral fraction of the Universe, while the application of the neutral island size distribution,
as shown in the right panel of Fig.~\ref{Fig.MF_Rdis_vS_nopc},
would not reveal the true morphology of the ionization field.
Indeed, if there are so many ionized bubbles inside large neutral islands,
it may be difficult to visually identify the host islands. In light of this, we need
to consider the condition under which the isolated island picture is still applicable.
Especially, if the bubbles inside an island are so numerous and large as to overlap with
each other, they may form a network which percolates through the whole island,
and break the island into pieces, or form a sponge-like topology of neutral and ionized regions.
\subsection{Percolation Model}
Within the spherical model, it is difficult to deal with the sponge-like
topology, but we may limit ourselves to the case where the treatment is still valid.
According to the theory of percolation, in a binary phase system, percolation of one phase
occurs when the filling factor of it exceeds
a threshold fraction $p_c$ (see e.g. \citealt{1991fds.book.....B}).
In the context of cosmology, \citet{1993ApJ...413...48K} obtained the percolation
threshold $p_c$ for the clustered large scale structures from cosmological simulations.
However, the spatial distribution of ionized bubbles and neutral islands are much less
filamentary than the gravitationally clustered dark matter or galaxies. As the
ionization field follows the density field \citep{2012arXiv1211.2832B}, which is
almost Gaussian on large scales \citep{2013arXiv1303.5084P}, here we use the
percolation threshold for a Gaussian random field, $p_c = 0.16$
\citep{1993ApJ...413...48K}, below
which we may assume that the bubbles in an island do not
percolate through the whole island.
The problem of percolation appears in several stages of reionization.
At the early stage of reionization, the filling factor of ionized bubbles increases
as the bubble model predicted. Once the bubble filling factor becomes larger than
the percolation threshold $p_c$, the ionized bubbles are no longer isolated,
and the predictions made from the bubble model are not accurate anymore.
Therefore, the threshold $p_c$ sets a critical redshift $z_{\rm Bp}$, below which
the bubble model may not be reliable. Similarly, the model of neutral islands can make
accurate predictions only below a certain redshift $z_{\rm Ip}$, when the
island filling factor is below $p_c$. The ionizing background was set up after
the ionized bubbles percolated but before the islands were all
isolated, so $z_{\rm Bp} > z_{\rm back}>z_{\rm Ip}$.
Finally, the percolation threshold may also be applied to the bubbles-in-island fraction.
An island with a high value of $q_{\rm B}$ may not qualify
as a whole neutral island, and the bubbles inside it are probably not isolated regions.
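As an illustration, the percolation criterion is simply a cut on the bubbles-in-island fraction; the island list below is hypothetical, with fractions loosely inspired by the numbers quoted above.

```python
P_C = 0.16  # percolation threshold for a Gaussian random field

# hypothetical (island mass [M_sun], bubbles-in-island fraction) pairs
islands = [(1e12, 0.07), (1e14, 0.42), (1e16, 0.65)]

# only islands whose interior bubbles do not percolate count as
# bona fide neutral islands; the rest are treated as broken into pieces
bona_fide = [(m, q) for (m, q) in islands if q < P_C]
```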
\begin{figure}[t]
\centering{
\includegraphics[scale=0.4]{BarriervS_pc.eps}
\caption{The basic island barriers (green curves), the percolation threshold
induced barriers (red curves), and the effective island barriers (black curves)
for our fiducial model. The solid, dashed and dot-dashed curves
are for redshifts 6.9, 6.7 and 6.5 from top to bottom respectively.}
\label{Fig.barriervS}
}
\end{figure}
It may be desirable to consider also the distribution of
those {\it bona fide} neutral islands, for which the bubble fraction is below
the percolation threshold, i.e. after excluding those islands
with $q_{\rm B} > p_c$. This percolation criterion of $q_{\rm B} < p_c$ acts
as an additional barrier for finding islands: islands with high
bubbles-in-island fractions are excluded,
but the neutral regions in them contribute to the number of smaller islands.
This additional barrier is obtained by solving $q_{\rm B}(S_{\rm I},\delta_{\rm I};z) < p_c$,
and is plotted in Fig.~\ref{Fig.barriervS} with red lines for redshifts $z = $
6.9, 6.7, and 6.5, from top to bottom respectively. The basic island barriers are also plotted
in the same figure with green lines. The combined effective island barriers
are shown as black lines.
The barrier resulting from the percolation criterion takes effect on large scales,
since larger islands could have higher bubbles-in-island fractions, and
larger-scale islands need to be more underdense to keep the whole region mostly neutral.
The basic island
barrier (\ref{Eq.islandBarrier}) is effective on small scales, because small
islands are more easily swallowed by the ionizing background.
According to the percolation criterion, the island model can be reasonably
applied at redshifts below $z_{\rm Ip} \sim 6.9$ in our fiducial model, though for
other parameter sets the value would be different.
With the combined island barrier taking into account the bubbles-in-island effect,
we find host islands by computing the first down-crossing distribution, and find
bubbles in them by computing the conditional first up-crossing distribution with
respect to the bubble barrier. Subtracting the bubbles in islands, the
mass distribution of the neutral islands and the volume filling factor of the neutral
components $Q_{\rm V}^{\rm HI}$ are obtained.
The resulting size distribution of the neutral islands in terms of the effective
radii is plotted in Fig.~\ref{Fig.Rdistr_vS} for redshifts $z = 6.9$, 6.7, and 6.5.
The distribution curve is normalized by the total neutral fraction at each redshift,
which is
$Q_{\rm V}^{\rm HI} = 0.16 (z = 6.9), 0.09 (z = 6.7)$,
and 0.04 $(z = 6.5)$, respectively.
\begin{figure}[t]
\centering{\includegraphics[scale=0.4]{Vdn_dlnR_vS_linear_percolation.eps}
\caption{The size distribution of neutral islands in our fiducial model taking into account the bubbles-in-island effect and the $p_c$ cutoff on bubbles-in-island fraction. The solid, dashed, and dot-dashed curves are for redshifts $z = 6.9$, 6.7, and 6.5, respectively, and the corresponding volume filling factors of neutral islands are
$Q_{\rm V}^{\rm HI} =$ $0.16\; (z = 6.9)$, $0.09\; (z = 6.7)$, and $0.04\; (z = 6.5)$, respectively. }
\label{Fig.Rdistr_vS}
}
\end{figure}
We note that after applying the $p_c$ cutoff, the resulting neutral fraction at
a specific redshift differs a little from the model without the $p_c$ cutoff.
Intuitively, the percolation threshold acts only as a
different definition of islands, and should not change the ionization state of the IGM. This is true because
those islands excluded by the percolation threshold will be considered as pieces of smaller islands that
still contribute to the total neutral fraction. However, two competing effects are at work in our
island-finding procedure, which could make the results differ.
First, we have assumed that the
bubbles in islands are all ionized, but neglected those small islands that
could possibly exist in these relatively large bubbles.
When applying the $p_c$ cutoff, some large islands with large bubbles are excluded, and the random walk
would continue to enter the scales smaller than the bubbles, and could possibly find smaller islands that
are embedded in large bubbles. Therefore, the model with $p_c$ cutoff could find more small islands that
are not accounted for in the model without $p_c$ cutoff, and tends to predict a higher neutral fraction.
On the other hand, one large island with high bubbles-in-island fraction is taken as several smaller islands
in the model with $p_c$ cutoff, and small islands are more significantly influenced by the ionizing background.
This fact would result in a lower neutral fraction for the model with $p_c$ cutoff.
As the redshift decreases, more and more small islands are swallowed by the ionizing background, so
the second effect gradually dominates over the first one.
With the fiducial parameters used here, the second effect dominates for the redshifts of interest,
and the neutral fractions predicted in the model with $p_c$ cutoff are slightly lower than in the model without
$p_c$ cutoff.
As shown in Fig.~\ref{Fig.Rdistr_vS}, in this model, the island size distribution after $z_{\rm Ip}$ also has a peak.
For this set of model parameters, the characteristic size of neutral
islands at $z=6.9$ is about 1.6 Mpc,
but the distribution extends over a range, from values as small as 0.2 Mpc
up to values as large as 10 Mpc.
As the redshift decreases, small islands disappear rapidly because of the ionizing background.
This is qualitatively consistent with simulation results \citep{2008ApJ...681..756S}
in which small islands are much rarer during the late reionization as compared to
those small ionized bubbles in the early stage.
As the reionization proceeds, the large islands shrink and the small islands are being swallowed
by the ionizing background, with the small ones disappearing more rapidly, and
the peak position of the distribution curve shifts slightly towards larger scales but does not change much.
Due to the rapidly decreasing number of
small islands, the distribution curve becomes narrower.
The distribution also becomes taller with decreasing redshift because it is normalized against the
volume neutral fraction $Q_{\rm V}^{\rm HI}$ at each redshift.
With $Q_{\rm V}^{\rm HI}$ decreasing, the normalized distribution has narrower
and higher peaks, but the absolute number of
neutral islands per comoving volume is decreasing.
\section{Conclusion}\label{Discuss}
This paper is devoted to the understanding of the late stage of the epoch of
reionization. According to the bubble model \citep{2004ApJ...613....1F}
and radiative transfer simulations, reionization started with the ionization of
regions with higher-than-average densities, as stars and galaxies formed earlier in
such regions, while the regions with lower average densities remained neutral
for longer time. Inspired by the bubble model, here we try to understand
the evolution of the remaining large neutral regions
which we call ``islands'' during the late stage of reionization.
We developed a model of their mass distribution and evolution
based on the excursion set theory. The excursion set theory is appropriate
for constructing the ionized bubble model
and the neutral island model because the reionization field
follows the density field on large scales \citep{2012arXiv1211.2821B}.
With the inclusion of an ionizing background,
which should exist after the percolation of ionized regions,
we set an island barrier on the density contrast in the excursion set theory for
the islands to remain neutral, and an island was identified
when the random walk first-{\it down}-crosses
the island barrier. We presented algorithms
for computing the first-down-crossing distribution, obtained the mass function of the islands,
and also provided a semi-empirical way to determine the intensity
of the ionizing background during the late reionization era.
We first illustrated the formalism of computation with
a simple toy model, where the number of
consumed ionizing background photons per unit time is proportional to the volume of the
island, i.e. the ionizing background is uniformly distributed within the island. While this
is not realistic, it is relatively simple to derive the analytical expression of the
neutral island mass function. The model predicts a large number
of small islands. We then considered a more realistic model, where the ionizing background
only causes the ionization at the surface of the island, so that the consumption rate of
the ionizing background is proportional to the surface area of the island.
Under the action of such ionizing photons, an island would shrink with time.
The larger islands shrink, while smaller ones disappear. As a result of this,
there is a minimal initial mass at the ``background onset redshift'' for the islands.
We obtained the distribution function of the initial and final mass of the islands at
different redshifts.
However, ionized bubbles also formed within the large neutral islands, and these
bubbles-in-islands must be taken into account. For this we considered two barriers,
the island barrier and the bubble barrier, at the same time. The former includes the effect
of the ionizing background at the surface of the island, while the latter does not.
The bubbles embedded in an island were found by computing the first-{\it up}-crossings
over the bubble barrier after the random walks have {\it down}-crossed
the island barrier at the host island scale, and the volume fraction of
bubbles-in-island is obtained. We find that for a large island, a large portion of
its interior could be ionized.
The bubbles-in-island problem limits the applicability of this model, because in
non-symmetrical cases, the presence of bubbles may break the island into small
pieces, which would increase the exposed surface of the island.
To address this problem, we applied a percolation
criterion as an additional island barrier on large scales. Islands with large
bubbles-in-island fraction are excluded, because in the real world where the
bubbles are not spherical and concentric, these bubbles would have percolated through
the island and broken it into smaller islands.
Using the combined island barrier and excluding the ionized bubbles in the islands,
the volume filling factor of neutral islands in the Universe
and the size distribution of the neutral islands were derived. Our island model applies
to the large scale structure of neutral regions in the linear regime, but
it may be possible to account for the small scale physics,
such as the minihalo absorptions, by introducing a consuming term in
the formula (e.g. \citealt{2005MNRAS.363.1031F,2012ApJ...747..127Y}).
At a given instant shortly after the isolation of islands, our model predicts
that the size distribution of the islands has a peak of a few Mpc,
depending on the model parameters.
As the redshift decreases, the small islands disappear rapidly
while the large ones shrink, but the characteristic
scale of the islands does not change much.
Eventually, all these large scale
neutral islands are swamped by ionization; only compact neutral regions such as galaxies
or minihalos remain.
In our semi-empirical model of the ionizing background, the main absorbers of the ionizing photons
are self-shielded Lyman limit systems. However, one needs to check to what extent the lower density
neutral islands regulate the mean free path of the ionizing photons. The mean free path due to the
existence of islands can be estimated with $\lambda_{\rm mfp}^{\rm I}(z) \sim 1/[\int \pi R_{\rm f}^2\,
({\rm d}n_{\rm f}/{\rm d}M_{\rm f})\, {\rm d}M_{\rm f}]$, where $R_{\rm f}$ and
${\rm d}n_{\rm f}/{\rm d}M_{\rm f}$ are the size and mass function of final host islands respectively at
redshift $z$. We found that at $z=6.9$, the mean free path of ionizing photons due to islands
$\lambda_{\rm mfp}^{\rm I} \sim 1.12$ physical Mpc as compared with that due to Lyman limit system
$\lambda_{\rm mfp} \sim 0.30$ physical Mpc. At $z=6.7$, $\lambda_{\rm mfp}^{\rm I} \sim 2.68$
physical Mpc as compared with $\lambda_{\rm mfp} \sim 0.38$ physical Mpc, while at $z=6.5$,
$\lambda_{\rm mfp}^{\rm I} \sim 7.93$ physical Mpc as compared with $\lambda_{\rm mfp} \sim 0.48$
physical Mpc. Therefore, the mean free path of ionizing photons due to islands is always much larger than
the mean free path due to Lyman limit systems, and the effect of islands on the ionizing background is
negligible as compared to the effect of small scale dense clumps. As the redshift decreases, the large
scale islands become less and less important in regulating the mean free path of ionizing photons.
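Assuming the two absorber populations are independent, their inverse mean free paths (i.e. opacities) add, so the comparison above can be turned into a combined estimate. This back-of-the-envelope sketch uses the $z=6.9$ numbers quoted in the text; the function name is ours.

```python
def combined_mfp(lam_islands, lam_lls):
    """Combined mean free path for two independent absorber populations:
    inverse mean free paths add."""
    return 1.0 / (1.0 / lam_islands + 1.0 / lam_lls)

# z = 6.9 values from the text, in physical Mpc
lam_tot = combined_mfp(1.12, 0.30)
# the combined value stays close to the Lyman-limit-system value alone,
# consistent with the islands barely affecting the ionizing background
```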
Considering the dominant contribution of the Lyman limit systems to the IGM opacity, would they also
contribute significantly to the neutral volume during the late era of reionization?
The volume fraction of these Lyman limit systems can be estimated with
$1\,-\, F_{\rm V}(\Delta_{\rm crit})$, and it is about 0.0062, 0.0046, and 0.0036, respectively for $z = 6.9$,
$6.7$, and $6.5$, much lower than the volume filling factor of islands.
Because of the much lower number density and larger size of islands, the mean free path due to islands is
much larger than that due to Lyman limit systems, even though the volume filling fraction of islands is larger.
Therefore, the majority of neutral volume of the IGM is occupied by the islands, which is consistent with
our model assumption, but the opacity of the IGM is dominated by the dense Lyman limit systems.
The results shown here are primarily qualitative; the quantitative
predictions depend on our model assumptions and model parameters. Current
observations have not yet been able to constrain such
parameters effectively, and they can be redshift-dependent.
Our model assumptions may also be too simplistic;
for example, we may over-predict the number of large islands because
they are more likely to be non-spherical,
and the ionizing background should have a stronger effect on them as they
have a larger surface area for the same volume.
These uncertainties could be constrained in the future if the model
predictions are compared with 21cm and/or other observations, and
as the properties of ionizing sources, the evolution
of neutral islands, and the intensity of the ionizing background become
better known. We shall investigate the late reionization epoch by
numerical simulations and compare it with the analytical models
in subsequent works.
\acknowledgments
We deeply appreciate the insight of the referee and the constructive comments.
We thank Jun Zhang, Jie Zhou, Hy Trac and Renyue Cen
for many helpful discussions. This work is supported by
the Ministry of Science and Technology 863 project grant 2012AA121701, the
NSFC grant 11073024, and the John Templeton foundation.
Y.X. is supported by China Postdoctoral Science Foundation and
by the Young Researcher Grant of National Astronomical Observatories,
Chinese Academy of Sciences.
Support for the work of M.S. was provided by NASA through Einstein Postdoctoral
Fellowship grant number PF2-130102 awarded by the Chandra X-ray Center, which is
operated by the Smithsonian Astrophysical Observatory for NASA under contract NAS8-03060.
Zuihi Fan is supported by NSFC under grant 11173001 and 11033005.
\section{Introduction}
\label{sec:intro}
\subsection{Cluster algebras}
In \cite{fomin2002cluster}, Fomin and Zelevinsky invented cluster
algebras as a combinatorial approach to dual canonical bases of quantum
groups (discovered by Lusztig \cite{Lusztig90} and Kashiwara \cite{Kashiwara90}
independently). The quantum cluster algebras were later introduced
in \cite{BerensteinZelevinsky05}. These algebras possess many seeds,
which are constructed recursively by an algorithm called mutation.
Every seed consists of some skew-symmetrizable matrix and a collection
of generators called (quantum) cluster variables. We might view these
seeds as analogs of local charts of algebraic varieties\footnote{In fact, we have a family of varieties called cluster varieties, whose
local charts are tori, local coordinate functions are cluster variables,
and transition maps are determined by the matrices in the seeds, cf.
\cite{FockGoncharov03}.}.
There have been many attempts to construct ``good'' bases of cluster algebras, cf.
\cite{GeissLeclercSchroeer10,GeissLeclercSchroeer10b,GeissLeclercSchroeer11}
\cite{musiker2013bases,thurston2014positive} \cite{HernandezLeclerc09}
\cite{Nakajima09,KimuraQin14,Qin12} \cite{lee2014greedy,lee2014greedyPNAS}
\cite{gross2014canonical} \cite{Qin15} \cite{KKKO15}. In view of
the original motivation of Fomin and Zelevinsky, a good basis should
contain all the quantum cluster monomials (monomials of quantum cluster
variables belonging to the same seed).
\subsection{Berenstein-Zelevinsky's triangular basis approach}
In \cite{BerensteinZelevinsky2012}, Berenstein and Zelevinsky proposed
the following new approach to good bases of quantum cluster algebras:
\begin{itemize}
\item Inspired by the Kazhdan-Lusztig theory, construct a triangular
basis $C^{t}$ in each seed $t$ such that it contains all the quantum
cluster monomials in that seed. More precisely, first construct a
basis consisting of some ordered products of quantum cluster variables,
then Lusztig's lemma \cite[Theorem 1.1]{BerensteinZelevinsky2012}
guarantees a unique new basis whose transition matrix from the old
one is unitriangular, whence the name triangular basis.
\item Prove that these triangular bases give rise to a common basis
for all seeds.
\end{itemize}
If this approach works, then we have a common triangular basis containing
the quantum cluster monomials in all seeds. However, Berenstein-Zelevinsky's
construction only works for those special seeds of acyclic type, cf.
Section \ref{sub:BZ-basis} for the definition. They arrived at a
common basis for the acyclic seeds, which we call the $BZ$-basis
and denote by $C$.
On the other hand, it is known that the quantum cluster algebras associated
with an acyclic quiver and $z$-coefficient pattern are isomorphic to
some quantum unipotent subgroups and, consequently, inherit the dual
canonical bases, cf. \cite{GeissLeclercSchroeer11}\cite{KimuraQin14}.
In \cite{KimuraQin14}, Kimura and the author showed that, for such
quantum cluster algebras, the dual canonical bases contain all the
quantum cluster monomials. It is natural to propose the following
conjecture.
\begin{Conj}\label{conj:BZ_basis_good}
For a quantum cluster algebra associated with an acyclic quiver and
$z$-coefficient pattern, its dual canonical basis agrees with Berenstein-Zelevinsky's
triangular basis $C$.
\end{Conj}
The verification of this conjecture would imply the desired property
that Berenstein-Zelevinsky's triangular basis contains all quantum
cluster monomials.
\subsection{Different triangular bases in monoidal categorification}
Inspired by this new approach of Berenstein-Zelevinsky, in \cite{Qin15},
in order to prove monoidal categorification conjectures of quantum
cluster algebras, the author introduced very different triangular
bases for injective-reachable quantum cluster algebras. For every
seeds $t$, we can define a such triangular bases $\can^{t}$, cf.
Section \ref{sub:Triangular-basis}.
There are two crucial differences between the common triangular basis $\can$
in \cite{Qin15} and the basis $C$ of Berenstein-Zelevinsky:
\begin{enumerate}
\item The basis is unique but its existence cannot be guaranteed,
because Lusztig's lemma does not apply.
\item The expectation from Fock-Goncharov basis conjecture is included
in the definition and plays an important role.
\end{enumerate}
\subsection{Results}
We have two very different constructions of triangular bases. It is
desirable to compare these bases, which are both defined for acyclic
seeds. The main result of this paper claims that they are the same
for quantum cluster algebras arising from acyclic skew-symmetric matrices
(or, equivalently, from acyclic quivers).
\begin{Thm}[Main result]\label{thm:acyclic}
Let ${\cA}$ be a quantum cluster algebra that has a seed $t$ with
an acyclic skew-symmetric matrix $B(t)$. Then in this seed, its triangular
basis $\can^{t}$ in \cite{Qin15} agrees with Berenstein-Zelevinsky's
triangular basis $C$.
\end{Thm}
Notice that, for the quantum cluster algebra arising from an acyclic
quiver and $z$-coefficient pattern, its common triangular basis in
\cite{Qin15} is the dual canonical basis. Therefore, our main result
Theorem \ref{thm:acyclic} implies Conjecture \ref{conj:BZ_basis_good}.
Our proof is based on ideas and techniques developed by the author
in \cite{Qin15}, in particular, the maximal degree tracking and the
composition of unitriangular transitions. The triangular bases treated
in this paper are much easier than those in \cite{Qin15} and our
paper does not depend on the long proof there. In particular, we give
a self-contained proof that the triangular bases $\can^{t}$ in different
acyclic seeds $t$ are the same, cf. Theorem \ref{thm:common_triangular_basis}.
We could further propose the following natural conjecture.
\begin{Conj}\label{conj:symmetrizable}
The triangular basis $\can^{t}$ agrees with Berenstein-Zelevinsky's
triangular basis $C$ in seeds associated with acyclic skew-symmetrizable
matrices.
\end{Conj}
In a previous private communication with Zelevinsky, the author pointed
out that for bipartite orientation, this conjecture is true. The details
will be given in the appendix, cf. Theorem \ref{thm:bipartite}.
\section*{Acknowledgments}
The author thanks Andrei Zelevinsky and Kyungyong Lee for conversations
on acyclic cluster algebras. He thanks Yoshiyuki Kimura, Qiaoling
Wei and Changjian Fu for remarks.
\section{Preliminaries}
\subsection{Quantum cluster algebras}
We recall the definition of quantum cluster algebras by \cite{BerensteinZelevinsky05}
and follow the convention in \cite{Qin15}. Let $[x]_{+}$ denote
$\max(x,0)$. Let $\tilde{B}$ be an $m\times n$ integer matrix with
$n\leq m$. Its $n\times n$ upper submatrix $B$ is called the principal
part. Assume that $\tilde{B}$ is of rank $n$ and that $B$ is skew-symmetrizable
(namely, there exists a diagonal matrix with strictly positive integer
diagonal entries such that its product with $B$ is skew-symmetric).
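Although the paper is purely algebraic, skew-symmetrizability is easy to test mechanically. The following sketch is only illustrative (the function name, search bound, and example matrices are ours, not from the text), and it takes the product in the order $DB$: one searches for positive integers $d_{i}$ with $d_{i}b_{ij}=-d_{j}b_{ji}$ for all $i,j$.

```python
from itertools import product

def find_symmetrizer(B, max_entry=6):
    """Search for a diagonal matrix D = diag(d_1, ..., d_n) with strictly
    positive integer entries such that D*B is skew-symmetric, i.e.
    d_i * b_ij == -d_j * b_ji for all i, j.  Returns the diagonal as a
    tuple, or None if no symmetrizer exists within the search box."""
    n = len(B)
    for d in product(range(1, max_entry + 1), repeat=n):
        if all(d[i] * B[i][j] == -d[j] * B[j][i]
               for i in range(n) for j in range(n)):
            return d
    return None

# A type B_2 exchange matrix: skew-symmetrizable but not skew-symmetric.
print(find_symmetrizer([[0, 1], [-2, 0]]))   # (2, 1)
# A symmetric nonzero matrix admits no symmetrizer.
print(find_symmetrizer([[0, 1], [1, 0]]))    # None
```

The bounded search is of course not a decision procedure in general; it merely verifies the definition on small examples.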
We can choose $\Lambda$ an $m\times m$ skew-symmetric integer matrix
such that $\tilde{B}^{T}\Lambda=(\begin{array}{cc}
D & 0\end{array})$ for some diagonal matrix $D$ with strictly positive integer diagonal
entries. Such a pair $({\widetilde{B}},\Lambda)$ is called a compatible pair.
A quantum seed $t$ (or seed for simplicity) consists of a compatible
pair $({\widetilde{B}}(t),\Lambda(t))$ and a collection of indeterminates $X_{i}(t)$,
$1\leq i\leq m$, called $X$-variables. Let $\{e_{i}\}$ denote the
natural basis of $\mathbb{Z}^{m}$ and $X(t)^{e_{i}}=X_{i}(t)$. We define
the corresponding quantum torus ${\mathcal T}(t)$ to be the Laurent polynomial
ring $\mathbb{Z}[q^{\pm{\frac{1}{2}}}][X(t)^{g}]_{g\in\mathbb{Z}^{m}}$ with the usual addition
$+$, the usual multiplication $\cdot$, and the twisted product
\begin{align*}
X(t)^{g}*X(t)^{h} & =q^{{\frac{1}{2}}\Lambda(t)(g,h)}X(t)^{g+h},
\end{align*}
where $\Lambda(t)(\ ,\ )$ denotes the bilinear form on $\mathbb{Z}^{m}$ such
that
\begin{align*}
\Lambda(t)(e_{i},e_{j}) & =\Lambda(t)_{ij}.
\end{align*}
${\mathcal T}(t)$ admits a bar-involution $\overline{(\ )}$ which is $\mathbb{Z}$-linear
such that
\begin{align*}
\overline{q^{s}X(t)^{g}} & =q^{-s}X(t)^{g}.
\end{align*}
Notice that all Laurent monomials in ${\mathcal T}(t)$ commute with each other
up to a $q$-power; we say they $q$-commute.
Let $b_{ij}$ denote the $(i,j)$-entry of ${\widetilde{B}}(t)$. We define the
$Y$-variables to be the following Laurent monomials:
\begin{align*}
Y_{k}(t) & =X(t)^{\sum_{1\leq i\leq m}[b_{ik}]_{+}e_{i}-\sum_{1\leq j\leq m}[-b_{jk}]_{+}e_{j}}.
\end{align*}
For any direction $1\leq k\leq n$, the following operation (called
the mutation $\mu_{k}$) gives us a new seed $t'=\mu_{k}t=((X_{i}(t'))_{1\leq i\leq m},{\widetilde{B}}(t'),\Lambda(t'))$:
\begin{itemize}
\item $X_{i}(t')=X_{i}(t)$ if $i\neq k$,
\item $X_{k}(t')=X(t)^{-e_{k}+\sum_{i}[b_{ik}]_{+}e_{i}}+X(t)^{-e_{k}+\sum_{j}[-b_{jk}]_{+}e_{j}}$,
\item ${\widetilde{B}}(t')=(b_{ij}')$ is determined by ${\widetilde{B}}(t)=(b_{ij})$:
$\begin{cases}
b'_{ij} & =-b_{ij}\qquad\mathrm{if}\ i=k\ \mathrm{or}\ j=k\\
b_{ij}' & =b_{ij}+[b_{ik}]_{+}[b_{kj}]_{+}-[-b_{ik}]_{+}[-b_{kj}]_{+}\qquad\mathrm{otherwise}
\end{cases}$
\item $\Lambda(t')$ is skew-symmetric and satisfies
\begin{align*}
\begin{cases}
\Lambda(t')_{ij} & =\Lambda(t)_{ij}\qquad i,j\neq k\\
\Lambda(t')_{ik} & =\Lambda(t)(e_{i},-e_{k}+\sum_{j}[-b_{jk}]_{+}e_{j})\qquad i\neq k
\end{cases}
\end{align*}
\end{itemize}
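The matrix part of the mutation rule is a finite combinatorial recipe and can be machine-checked. The following is a minimal sketch (the function name and the example matrix are ours): entries in row $k$ or column $k$ change sign, and the remaining entries follow the $[\ ]_{+}$ formula above.

```python
def mutate(B, k):
    """Mutation mu_k of an m x n exchange matrix B at direction k (0-indexed):
    entries in row k or column k change sign; the remaining entries follow
    b'_ij = b_ij + [b_ik]_+ [b_kj]_+ - [-b_ik]_+ [-b_kj]_+, with [x]_+ = max(x, 0)."""
    pos = lambda x: max(x, 0)
    m, n = len(B), len(B[0])
    return [[-B[i][j] if (i == k or j == k)
             else B[i][j] + pos(B[i][k]) * pos(B[k][j]) - pos(-B[i][k]) * pos(-B[k][j])
             for j in range(n)]
            for i in range(m)]

# An A_2 quiver with one frozen row; mutation at a fixed direction is an involution.
B = [[0, -1], [1, 0], [-1, 1]]
print(mutate(B, 0))             # [[0, 1], [-1, 0], [1, 0]]
print(mutate(mutate(B, 0), 0))  # back to B
```

The involution check `mutate(mutate(B, k), k) == B` is a convenient sanity test of the sign conventions.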
The quantum torus ${\mathcal T}(t')$ for the new seed $t'$ is defined similarly.
Notice that, by \cite[Proposition 6.2]{BerensteinZelevinsky05}, any
$Z\in{\mathcal T}(t)\cap{\mathcal T}(t')$ is bar-invariant in ${\mathcal T}(t)$ if and only
if it is bar-invariant in ${\mathcal T}(t')$.
We define a quantum cluster algebra ${\cA}$ as follows:
\begin{itemize}
\item Choose an initial seed $t_{0}=((X_{1},\cdots,X_{m}),{\widetilde{B}},\Lambda)$.
\item All the seeds $t$ are obtained from $t_{0}$ by iterated mutations
at directions $1\leq k\leq n$.
\item ${\cA}=\mathbb{Z}[q^{\pm{\frac{1}{2}}}][X_{n+1}^{-1},\cdots,X_{m}^{-1}][X_{i}(t)]_{t,1\leq i\leq m}.$
\end{itemize}
The $X$-variables $X_{i}(t)$ in the seeds are called the quantum
cluster variables. We call $X_{n+1},\ldots,X_{m}$ the frozen variables
or the coefficients.
The correction technique developed in \cite[Section 9]{Qin12} provides
a convenient tool for studying the bases of ${\cA}$, cf. \cite[Section 5]{Qin15}
for a summary. It tells us that most phenomena and properties of
bases remain unchanged when we change the coefficient part of the seed
$t$, namely the lower $(m-n)\times n$ submatrix $B^{c}(t)$ of ${\widetilde{B}}(t)$,
or when we change $\Lambda(t)$.
Finally, notice that to each rank $n$ quiver $Q$, we can associate
an $n\times n$ skew-symmetric matrix $B$ such that its entry $b_{ij}$
is given by the difference of the number of arrows from $i$ to $j$
with that of $j$ to $i$. All skew-symmetric matrices arise in this
way. So, if the matrix $B(t)$ of a seed $t$ is skew-symmetric, we
say $t$ is skew-symmetric or $t$ arises from a quiver; if $B(t)$
is skew-symmetrizable, we say $t$ is skew-symmetrizable.
\subsection{Triangular basis\label{sub:Triangular-basis}}
Choose any seed $t$. We recall the following notions introduced in
\cite[Section 3.1]{Qin15}.
\begin{Def}[Pointed elements and normalization]A Laurent polynomial
$Z$ in the quantum torus ${\mathcal T}(t)$ is said to be pointed if it takes
the form
\begin{align}
Z & =X(t)^{g}\cdot(1+\sum_{0\neq v\in\mathbb{N}^{n}}c_{v}Y(t)^{v}),\label{eq:pointed}
\end{align}
for some coefficients $c_{v}\in\mathbb{Z}[q^{\pm{\frac{1}{2}}}]$.
In this case, $Z$ is said to be pointed at degree $g$, and we denote
$\deg^{t}Z=g$.
If $Z=q^{s}X(t)^{g}(1+\sum_{0\neq v\in\mathbb{N}^{n}}c_{v}Y(t)^{v})$ for
some $s\in\frac{\mathbb{Z}}{2}$, we use $[Z]^{t}$ to denote the pointed
element $q^{-s}Z$ and call it the normalization of $Z$ in ${\mathcal T}(t)$.
\end{Def}
Notice that all the quantum cluster variables are pointed.
In order to say that a pointed element has a unique maximal degree,
we need to introduce the following partial order.
\begin{Def}[Degree lattice and dominance order]
We call $\mathbb{Z}^{m}$ the degree lattice and denote it by $\mathrm{D}(t)$.
Its dominance order $\prec_{t}$ is defined to be the partial order
such that $g'\prec_{t}g$ if and only if $g'=g+\deg^{t}Y(t)^{v}$
for some $0\neq v\in\mathbb{N}^{n}$.
\end{Def}
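Since $[b_{ik}]_{+}-[-b_{ik}]_{+}=b_{ik}$, the degree $\deg^{t}Y_{k}(t)$ is the $k$-th column of ${\widetilde{B}}(t)$, so $g'\prec_{t}g$ exactly when $g'=g+{\widetilde{B}}(t)v$ for some $0\neq v\in\mathbb{N}^{n}$. A naive bounded search (a sketch; the names and the search bound are ours) can test the dominance order on small examples:

```python
from itertools import product

def y_degree(B, v):
    """deg Y^v = B~ . v, since deg Y_k is the k-th column of the exchange matrix."""
    m, n = len(B), len(B[0])
    return [sum(B[i][k] * v[k] for k in range(n)) for i in range(m)]

def dominates(B, g, g_prime, bound=4):
    """Test g' <_t g by searching for 0 != v in N^n with g' = g + B~ v,
    restricting v to a finite box (sufficient for small examples only)."""
    n = len(B[0])
    for v in product(range(bound + 1), repeat=n):
        if any(v) and g_prime == [g[i] + d for i, d in enumerate(y_degree(B, list(v)))]:
            return True
    return False

B = [[0, -1], [1, 0], [-1, 1]]
print(dominates(B, [0, 0, 0], [0, 1, -1]))  # True: take v = e_1
print(dominates(B, [0, 0, 0], [1, 0, 0]))   # False
```

The finiteness statement of the next lemma is what makes such recursive comparisons of degrees meaningful.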
We may omit the symbol $t$ in $X_{i}(t)$, $I_{k}(t)$, $\prec_{t}$,
$\deg^{t}$ or $[\ ]^{t}$ for simplicity.
\begin{Lem}[{\cite[Lemma 3.1.2]{Qin15}}]\label{lem:finite_interval}
For any $g'\preceq_{t}g$ in $\mathbb{Z}^{m}$, there exist finitely many
$g''\in\mathbb{Z}^{m}$ such that $g'\preceq_{t}g''\preceq_{t}g$.
\end{Lem}
Assume that, in ${\mathcal T}(t)$, we have (possibly infinitely many) elements
$\mathbb{L}_{j}$ pointed in different degrees. Let us write $\mathbb{L}_{j}=\sum_{g\in\mathbb{Z}^{m}}c_{g;j}X^{g}$
where $c_{g;j}\in\mathbb{Z}[q^{\pm{\frac{1}{2}}}]$. A linear combination $\sum_{j}a_{j}\mathbb{L}_{j}$
with $a_{j}\in\mathbb{Z}[q^{\pm{\frac{1}{2}}}]$ is well defined and contained in ${\mathcal T}(t)$
if $\sum_{j}a_{j}c_{g;j}$ is a finite sum for every $g\in\mathbb{Z}^{m}$ and
vanishes for all but finitely many $g$.
Assume that $Z$ is a Laurent polynomial in ${\mathcal T}(t)$ which
is a well-defined linear combination of the $\mathbb{L}_{j}$:
\begin{align}
Z & =\sum_{j}a_{j}\mathbb{L}_{j},\qquad a_{j}\in\mathbb{Z}[q^{\pm{\frac{1}{2}}}].\label{eq:decomposition}
\end{align}
We say that this decomposition is $\prec_{t}$-triangular if there exists
a unique $\prec_{t}$-maximal element $\deg^{t}\mathbb{L}_{0}$ in $\{\deg^{t}\mathbb{L}_{j}\}$.
It is further called $\prec_{t}$-unitriangular if $a_{0}=1$, or
$(\prec_{t},{\mathbf m})$-triangular if $a_{j}\in{\mathbf m}=q^{-{\frac{1}{2}}}\mathbb{Z}[q^{-{\frac{1}{2}}}]$
for $j\neq0$. A set $\{Z\}$ is said to be $(\prec_{t},{\mathbf m})$-unitriangular
to $\{\mathbb{L}_{j}\}$ if all its elements $Z$ have this property.
\begin{Lem}[{\cite[Lemma 3.1.9]{Qin15}}]
If the decomposition \eqref{eq:decomposition} is $\prec_{t}$-triangular,
then it is the unique $\prec_{t}$-triangular decomposition of $Z$
in $\{\mathbb{L}_{j}\}$.
\end{Lem}
\begin{proof}
Thanks to Lemma \ref{lem:finite_interval}, we can recursively determine
all the coefficients $a_{j}$ of $\mathbb{L}_{j}$ in \eqref{eq:decomposition},
starting from the higher $\prec_{t}$-order Laurent degrees, cf. \cite[Remark 3.1.8]{Qin15}.
\end{proof}
The following lemma will be useful. It allows us to switch to the
desired dominance order.
\begin{Lem}[{\cite[Lemma 3.1.9]{Qin15}}]\label{lem:has_triangular_order}
(i) If \eqref{eq:decomposition} is a finite decomposition of a pointed
element $Z$, then it is $\prec_{t}$-unitriangular.
(ii) If, further, all but one of the coefficients in \eqref{eq:decomposition}
belong to ${\mathbf m}$, then \eqref{eq:decomposition} is $(\prec_{t},{\mathbf m})$-unitriangular.
\end{Lem}
\begin{proof}
(i) We recall the proof in \cite[Lemma 3.1.9]{Qin15}. Comparing the maximal
degrees of both sides of the finite decomposition, we obtain that
the finite set $\{\deg\mathbb{L}_{j}\}$ contains a unique maximal element
$\deg\mathbb{L}_{0}$ for some $\mathbb{L}_{0}$ such that $\deg\mathbb{L}_{0}=\deg Z$.
So this decomposition is $\prec_{t}$-triangular. Finally, $a_{0}=1$
because $Z$ has coefficient $1$ in its leading degree.
(ii) By (i), $Z$ admits a $\prec_{t}$-unitriangular decomposition.
The hypothesis in (ii) simply tells us that the coefficients other
than the leading coefficient (which equals $1$) belong to ${\mathbf m}$.
\end{proof}
For any $1\leq k\leq n$, let $I_{k}(t)$ denote\footnote{We use the notation $I_{k}$ because this cluster variable corresponds
to the $k$-th indecomposable injective module of a quiver with potential
\cite{DerksenWeymanZelevinsky08,DerksenWeymanZelevinsky09}.} the unique quantum cluster variable (if it exists) such that $\opname{pr}_{n}\deg^{t}I_{k}(t)=-e_{k}$,
where $\opname{pr}_{n}$ is the projection of $\mathbb{Z}^{m}$ onto the first $n$ components.
The quantum cluster algebra ${\cA}$ is said to be \textit{injective
reachable} if $I_{k}(t)$ exists for any $1\leq k\leq n$. This property
is independent of the choice of the seed $t$ by \cite{Plamondon10a}\cite{gross2014canonical}.
In this case, the quantum cluster variables $I_{k}(t)$, $1\leq k\leq n$,
$q$-commute with each other because they belong to the same seed
(denoted by $t[1]$ in \cite{Qin15}).
\begin{Rem}\label{rem:acyclic_injectives}
In the convention of Section \ref{sub:BZ-basis}, if $B(t)$ is acyclic,
we can obtain the quantum cluster variables $I_{k}$, $\forall1\leq k\leq n$,
by applying mutations at the vertices $1,\cdots,n$ in increasing order
with respect to $\triangleleft$.
In particular, the corresponding cluster algebra is injective reachable.
See Example \ref{eg:Kronecker} for an explicit calculation.
\end{Rem}
\begin{Def}[Triangular basis {\cite[Definition 6.1.1]{Qin15}}]\label{def:triangular_basis}
The triangular basis $\can^{t}$ for the seed $t$ is defined to be
the basis of the quantum cluster algebra ${\cA}$ such that
\begin{itemize}
\item The quantum cluster monomials $[\prod_{1\leq i\leq m}X_i(t)^{u_i}]^t$,$[\prod_{1\leq k\leq n}I_{k}(t)^{v_k}]^t$ belong to $\can^{t}$, $\forall u_i,v_k\in\mathbb{N}$.
\item (bar-invariance) The basis elements are invariant under the
bar involution in ${\mathcal T}(t)$.
\item (parametrization) The basis elements are pointed, and we have
the bijection
\begin{align*}
\deg^{t}:\can^{t} & \simeq\mathrm{D}(t)=\mathbb{Z}^{m}.
\end{align*}
\item (triangularity) For any $X_{i}(t)$ and $S\in\can^{t}$, we
have\\
\begin{align*}
[X_{i}(t)*S]^{t} & =b+\sum c_{b'}\cdot b',
\end{align*}
where $\deg^{t}b'\prec_{t}\deg^{t}b=\deg^{t}X_{i}(t)+\deg^{t}S$ and
the coefficients $c_{b'}\in{\mathbf m}=q^{-{\frac{1}{2}}}\mathbb{Z}[q^{-{\frac{1}{2}}}]$.
\end{itemize}
\end{Def}
It is easy to show that if $\can^{t}$ exists, then it is unique by
the triangularity and bar-invariance, cf. \cite[Lemma 6.2.6(i)]{Qin15}.
In order to study $\can^{t}$, \cite{Qin15} introduced the injective
pointed set $\opname{inj}^{t}$ in the seed $t$:
\begin{align*}
\opname{inj}^{t} & =\{\opname{inj}^{t}(f,u,v)|f\in\mathbb{Z}^{[n+1,m]},\ u,v\in\mathbb{N}^{[1,n]},\ u_{k}v_{k}=0\ \forall k\in[1,n]\}\\
\opname{inj}^{t}(f,u,v) & =[\prod_{n+1\leq i\leq m}X_{i}(t)^{f_{i}}*\prod_{1\leq k\leq n}X_{k}(t)^{u_{k}}*\prod_{1\leq k\leq n}I_{k}(t)^{v_{k}}]^{t}
\end{align*}
This is a linearly independent family of pointed elements contained
in ${\cA}$. By the triangularity of $\can^{t}$, the set of pointed
elements $\opname{inj}^{t}$ is $(\prec_{t},{\mathbf m})$-unitriangular to $\can^{t}$.
It follows that $\can^{t}$ is also $(\prec_{t},{\mathbf m})$-unitriangular
to $\opname{inj}^{t}$, cf. \cite[Lemma(inverse transition)]{Qin15}.
\begin{Eg}[Type $A_3$]\label{eg:A_3_triangular}
Consider the matrix ${\widetilde{B}}=\left(\begin{array}{ccc}
0 & -1 & 0\\
1 & 0 & -1\\
0 & 1 & 0\\
-1 & 1 & 0\\
0 & -1 & 1\\
0 & 0 & -1
\end{array}\right)$, which is the matrix of the ice quiver in Figure \ref{fig:A3Quiver}.
In the convention of \cite{KimuraQin14}, its principal part is an
acyclic type $A_{3}$ quiver and its coefficient part is the $z$-pattern.
There is a natural matrix $\Lambda$ such that $({\widetilde{B}},\Lambda)$ is
compatible. The corresponding quantum cluster algebra ${\cA}$ is
isomorphic to the quantum unipotent subgroup $A_{q}(\mathfrak{n}(c^{2}))$
localized at the coefficients $X_{4},X_{5},X_{6}$, where the Coxeter
word is $c=s_{3}s_{2}s_{1}$ (read from right to left).
The quantum cluster variables $I_{1},I_{2},I_{3}$ are obtained from
consecutive mutations at $1,2,3$. Our pointed element $\opname{inj}(f,u,v)$
\begin{align*}
\opname{inj}(f,u,v) & =[X_{4}^{f_{4}}*X_{5}^{f_{5}}*X_{6}^{f_{6}}*X_{1}^{u_{1}}*X_{2}^{u_{2}}*X_{3}^{u_{3}}*I_{1}^{v_{1}}*I_{2}^{v_{2}}*I_{3}^{v_{3}}]
\end{align*}
is a localized dual PBW basis element (rescaled by a $q$-power),
and the triangular basis is the localized (rescaled) dual canonical
basis, cf. \cite{KimuraQin14}.
\end{Eg}
\begin{figure}[htb!] \centering \beginpgfgraphicnamed{fig:A3Quiver} \begin{tikzpicture} \node [shape=circle, draw] (v1) at (1,-3) {3}; \node [shape=circle, draw] (v2) at (2,-1.5) {2}; \node [shape=circle, draw] (v3) at (3,0) {1};
\node [shape=diamond, draw] (v4) at (-4,-3) {6}; \node [shape=diamond, draw] (v5) at (-3,-1.5) {5}; \node [shape=diamond, draw] (v6) at (-2,0) {4};
\draw[-triangle 60] (v1) edge (v2); \draw[-triangle 60] (v2) edge (v3); \draw[-triangle 60] (v5) edge (v1); \draw[-triangle 60] (v6) edge (v2); \draw[-triangle 60] (v1) edge (v4); \draw[-triangle 60] (v2) edge (v5); \draw[-triangle 60] (v3) edge (v6); \end{tikzpicture} \endpgfgraphicnamed \caption{Acyclic A3 quiver with $z$-pattern} \label{fig:A3Quiver} \end{figure}
\begin{Lem}[{Substitution, \cite[Lemma 6.4.4]{Qin15}}]\label{lem:substitution}
If a pointed element $Z$ is $(\prec_{t},{\mathbf m})$-unitriangular to $\can^{t}$,
then so is $[\prod_{n+1\leq i\leq m}X^{f_{i}}*X^{u}*Z*I^{v}]$ for any
$f\in\mathbb{Z}^{[n+1,m]}$, $u,v\in\mathbb{N}^{n}$.
\end{Lem}
\begin{proof}
$Z$ is $(\prec_{t},{\mathbf m})$-unitriangular to $\opname{inj}^{t}$ and admits
a $(\prec_{t},{\mathbf m})$-unitriangular decomposition
\begin{align*}
Z & =\sum_{s}a_{s}\opname{inj}^{t}(f^{(s)},u^{(s)},v{}^{(s)}).
\end{align*}
Replacing $Z$ by this decomposition in $[\prod_{n+1\leq i\leq m}X^{f_{i}}*X^{u}*Z*I^{v}]$,
we see that the result is $(\prec_{t},{\mathbf m})$-unitriangular to $\can^{t}$, by
the triangularity of $\can^{t}$ and comparison of $q$-powers (cf.
\cite[Lemma 6.2.4]{Qin15}).
\end{proof}
\subsection{Berenstein-Zelevinsky's triangular basis\label{sub:BZ-basis}}
Work in some chosen seed $t$, whose symbol we often omit. Assume
that its principal part $B=B(t)$ is acyclic, namely, there exists
an order $\triangleleft$ on the vertex $\{1,\ldots,n\}$ such that
$b_{ij}\leq0$ whenever $i\triangleleft j$. In this case, $t$ is
called an acyclic seed. If $i\triangleleft j$, we say $i$ is $\triangleleft$-inferior
to $j$, and we also write $j\triangleright i$.
A vertex $j\in[1,n]$ is said to be a source point in $t$ if $j$
is $\triangleleft$-maximal, namely, $j\triangleright k$ for all
$1\leq k\leq n$. Similarly, it is called a sink point in $t$ if
$j$ is $\triangleleft$-minimal, namely, $j\triangleleft k$ for
all $1\leq k\leq n$.
For any $1\leq k\leq n$, let $b_{k}=\tilde{B}e_{k}$ denote the $k$-th
column of $\tilde{B}$. Let $S_{k}=S_{k}(t)$ denote\footnote{We use the symbol $S_{k}$ because this cluster variable corresponds
to the $k$-th simple $S_{k}$ in an associated quiver with potential.} the quantum cluster variable $X_{k}(\mu_{k}t)$. Notice that $S_{k}=X^{-e_{k}+[-b_{k}]_{+}}\cdot(1+Y_{k})$
and we have $\deg S_{k}=-e_{k}+[-b_{k}]_{+}$, where $[-b_{k}]_{+}$
denotes $([-b_{jk}]_{+})_{1\leq j\leq m}$.
For any $a\in\mathbb{Z}^{m}$, Berenstein and Zelevinsky defined the standard
monomials
$E_{a}=[\prod_{n<j\leq m}X^{a_{j}}*\prod_{1\leq k\leq n}X_{k}^{[a_{k}]_{+}}*\prod_{1\leq k\leq n}^{\triangleleft}S_{k}^{[-a_{k}]_{+}}]$,
where the last factor is the product taken in increasing $\triangleleft$
order, cf. \cite[(1.17), (1.22), Remark 1.3]{BerensteinZelevinsky12}.
Define $r(a)=\sum_{1\leq k\leq n}[-a_{k}]_{+}$ and define the partial order
$\prec_{BZ}$ by $a\prec_{BZ}a'$ if and only if $r(a)<r(a')$.
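For concreteness, the weight $r(a)$ counts the total exponent of the variables $S_{k}$ in the standard monomial $E_{a}$, and $\prec_{BZ}$ only compares these weights. A trivial sketch (names are ours) on explicit vectors:

```python
def r(a, n):
    """r(a) = sum over the mutable directions k of [-a_k]_+, i.e. the total
    exponent of the variables S_k appearing in the standard monomial E_a."""
    return sum(max(-a[k], 0) for k in range(n))

def prec_BZ(a, a2, n):
    """a <_BZ a' iff r(a) < r(a'); note this order ignores the frozen part."""
    return r(a, n) < r(a2, n)

print(r([1, -2, 0, 3], n=3))                 # 2
print(prec_BZ([1, 0, 0], [-1, -1, 0], n=3))  # True
```

In particular, $\prec_{BZ}$ is much coarser than the dominance order $\prec_{t}$ used for $\can^{t}$, which is why Lemma \ref{lem:change_BZ_order} below is needed.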
\begin{Def}
The Berenstein-Zelevinsky's acyclic triangular basis for the seed
$t$ is defined to be the basis $C^{t}=\{C_{a}\}$ of ${\cA}$ such
that each $C_{a}$ is bar-invariant and $(\prec_{BZ},q^{{\frac{1}{2}}}\mathbb{Z}[q^{{\frac{1}{2}}}])$-triangular
to the basis $\{E_{a}\}$.
\end{Def}
We call $C^{t}$ the BZ-basis for simplicity. Applying the bar involution,
we obtain that $C^{t}$ is $(\prec_{BZ},{\mathbf m})$-triangular to $\{\overline{E}_{a}\}$,
where
\begin{align*}
\overline{E}_{a} & =[\prod_{1\leq k\leq n}^{\triangleright}S_{k}^{[-a_{k}]_{+}}*\prod_{1\leq k\leq n}X_{k}^{[a_{k}]_{+}}*\prod_{n<j\leq m}X^{a_{j}}]
\end{align*}
where the first factor is the product with decreasing $\triangleleft$
order.
\begin{Eg}
Let us continue Example \ref{eg:A_3_triangular}. The standard monomials,
after the bar involution, give us
\begin{align*}
\overline{E}_{a} & =[S_{3}^{[-a_{3}]_{+}}*S_{2}^{[-a_{2}]_{+}}*S_{1}^{[-a_{1}]_{+}}*X_{1}^{[a_{1}]_{+}}*X_{2}^{[a_{2}]_{+}}*X_{3}^{[a_{3}]_{+}}*X_{4}^{a_{4}}*X_{5}^{a_{5}}*X_{6}^{a_{6}}].
\end{align*}
Notice that $X_{4},X_{5},X_{6}$ $q$-commute with all the factors.
\end{Eg}
\begin{Thm}[{\cite[Theorem 1.4]{BerensteinZelevinsky12}}]
The Berenstein-Zelevinsky's triangular basis $C^{t}$ is independent
of the acyclic seed $t$ chosen, which we denote by $C$.
\end{Thm}
\section{Compare triangular bases}
\subsection{Basic results}
Let us choose and work with any seed $t$ whose matrix $B(t)$ is
acyclic.
\begin{Lem}\label{lem:change_BZ_order}
For any acyclic seed $t$, each $C_{a}$ is $(\prec_{t},{\mathbf m})$-unitriangular
to $\{\overline{E}_{a}\}$.
\end{Lem}
\begin{proof}
Each $C_{a}$ is a finite linear combination of $\{\overline{E}_{a}\}$
with one term of coefficient $1$ and others of coefficients in ${\mathbf m}$.
This decomposition is $\prec_{t}$-triangular by Lemma \ref{lem:has_triangular_order}.
\end{proof}
\begin{Lem}\label{lem:keep_pointed}
If $n$ is a source point, then $\overline{E}_{a}$ remains pointed
in $t'=\mu_{n}t$.
\end{Lem}
\begin{proof}
It might be possible to deduce this result from the existence of common
Berenstein-Zelevinsky triangular bases in $t$ and $t'$. Let us give
an alternative elementary verification.
In order to show that the $q$-normalization factor produced by the
factors of $\overline{E}_{a}$ remains unchanged in ${\mathcal T}(t')$, it
suffices to show that, for any $1\leq i,j\leq m$, $1\leq l<k\leq n$,
$i\neq k$, we have
\begin{eqnarray}
\Lambda(t)(\deg^{t}X_{i},\deg^{t}X_{j}) & = & \Lambda(t')(\deg^{t'}X_{i},\deg^{t'}X_{j})\label{eq:lambda_XX}\\
\Lambda(t)(\deg^{t}X_{i},\deg^{t}S_{k}) & = & \Lambda(t')(\deg^{t'}X_{i},\deg^{t'}S_{k})\label{eq:lambda_XS}\\
\Lambda(t)(\deg^{t}S_{l},\deg^{t}S_{k}) & = & \Lambda(t')(\deg^{t'}S_{l},\deg^{t'}S_{k}).\label{eq:lambda_SS}
\end{eqnarray}
Notice that we have $\deg^{t}S_{l}=-e_{l}+\sum_{s}[-b_{sl}]_{+}e_{s}$,
where all $e_{s}$ appearing have $s\neq n$. Therefore, we deduce
that $\deg^{t'}S_{l}=\deg^{t}S_{l}$, $\forall l<n$, by the tropical
transformation of $g$-vectors, cf. \cite[Section 3.2]{Qin15}\cite{FockGoncharov03}\cite[(7.18)]{FominZelevinsky07}.
The first two equations simply follow from the mutation rule from
$\Lambda(t)$ to $\Lambda(t')$. It remains to check \eqref{eq:lambda_SS}.
By using \eqref{eq:lambda_XS}, we obtain
\begin{eqnarray*}
& & \Lambda(t)(\deg^{t}S_{l},\deg^{t}S_{k})\\
& = & \Lambda(t)(-\deg^{t}X_{l}+\sum_{s}[-b_{sl}]_{+}\deg^{t}X_{s},\deg^{t}S_{k})\\
& = & -\Lambda(t)(\deg^{t}X_{l},\deg^{t}S_{k})+\sum_{s}[-b_{sl}]_{+}\Lambda(t)(\deg^{t}X_{s},\deg^{t}S_{k})\\
& = & -\Lambda(t')(\deg^{t'}X_{l},\deg^{t'}S_{k})+\sum_{s}[-b_{sl}]_{+}\Lambda(t')(\deg^{t'}X_{s},\deg^{t'}S_{k})\\
& = & \Lambda(t')(\deg^{t'}S_{l},\deg^{t'}S_{k}).
\end{eqnarray*}
\end{proof}
The following statement is the main result of \cite{KimuraQin14},
combined with the coefficient correction technique in \cite{Qin12}.
\begin{Thm}[\cite{KimuraQin14}\cite{Qin12}]\label{thm:ayclic_triangular_basis}
If the principal part $B(t)$ of a seed $t$ is acyclic and skew-symmetric,
then the triangular basis $\can^{t}$ for $t$ exists. Moreover, it
contains all the quantum cluster monomials.
\end{Thm}
\begin{proof}
When we choose the special coefficient pattern $B^{c}(t)$ to be the $z$-pattern
as in \cite{KimuraQin14}, the quantum cluster algebra is isomorphic
to a subalgebra of a quantized enveloping algebra \cite{GeissLeclercSchroeer11}.
Under this identification, $X_{i}(t)$, $I_{k}(t)$ are the factors
of the dual PBW basis element, and the triangular basis $\can^{t}$
is just the restriction of the dual canonical basis on this subalgebra
(and localized at the coefficients $(X_{n+1},\cdots,X_{m})$). By
\cite{KimuraQin14}, this basis contains all the quantum cluster monomials.
By the correction technique in \cite{Qin12}, we can change the coefficient
pattern $B^{c}(t)$ and $\Lambda(t)$ while keeping the claim true.
\end{proof}
The following statement is implied by the general result in \cite[Theorem 9.4.1]{Qin15}.
We sketch a much easier proof for this special case.
\begin{Thm}\label{thm:common_triangular_basis}
Let $t$ and $t'$ be two seeds such that $t'=\mu_{k}t$ for some
$1\leq k\leq n$ and $B(t)$, $B(t')$ are acyclic and skew-symmetric.
Then the quantum cluster algebra has a basis $\can$ which is the
triangular basis for both $t$ and $t'$.
\end{Thm}
\begin{proof}
Because $t$ and $t'$ are acyclic, by Theorem \ref{thm:ayclic_triangular_basis},
we know that the triangular bases $\can^{t}$ and $\can^{t'}$ for
$t$ and $t'$ exist. Moreover, the quantum cluster monomials $X_{k}'^{d}=X_{k}(t')^{d}$,
$I_{k}'^{d}=I_{k}(t')^{d}$ belong to $\can^{t}$, where $d\in\mathbb{N}$.
Therefore, $X_{k}'^{d}$ and $I_{k}'^{d}$ have $(\prec_{t},{\mathbf m})$-unitriangular
decomposition in the injective pointed set $\opname{inj}^{t}$. These are
the only new factors of elements in $\opname{inj}^{t'}$ which are not factors
of elements in $\opname{inj}^{t}$.
An easy calculation shows that elements in $\opname{inj}^{t'}$ remain pointed
in ${\mathcal T}(t)$, cf. \cite[Lemma 5.3.2]{Qin15}. Substituting their new
factors $X_{k}'^{d}$ and $I_{k}'^{d}$ by the decomposition in $\opname{inj}^{t}$,
we deduce that $\opname{inj}^{t'}$ is $(\prec_{t},{\mathbf m})$-unitriangular to
$\opname{inj}^{t}$ by Lemma \ref{lem:substitution}.
Also, notice that $\can^{t'}$ is $(\prec_{t'},{\mathbf m})$-unitriangular
to $\opname{inj}^{t'}$ and $\opname{inj}^{t}$ is $(\prec_{t},{\mathbf m})$-unitriangular
to $\can^{t}$. Composing these three transitions, we obtain that
any $S'\in\can^{t'}$ is a finite combination of elements $S$, $S_{i}$
in $\can^{t}$:
\begin{align*}
S' & =S+\sum_{i}a_{i}S_{i},
\end{align*}
with coefficient $a_{i}\in{\mathbf m}$.
Now by the bar-invariance of $\can^{t}$ and $\can^{t'}$, we must
have $a_{i}=0$ and $S'=S$. It follows that the two triangular bases
$\can^{t}$ and $\can^{t'}$ are the same.
\end{proof}
\subsection{Proof of the main result}
For any chosen $1\leq j\leq n$, let $t[j^{-1}]$ denote the seed
obtained from $t$ by deleting the $j$-th column in the matrix ${\widetilde{B}}(t)$.
This operation is called freezing the vertex $j$. We have the corresponding
quantum cluster algebra ${\cA}(t[j^{-1}])$. Observe that the normalization
$[\ ]^{t[j^{-1}]}=[\ ]^{t}$ because $\Lambda(t[j^{-1}])=\Lambda(t)$
by construction. Moreover, the partial order $\prec_{t[j^{-1}]}$
implies $\prec_{t}$ by definition. We define similarly, for $f\in\mathbb{Z}^{\{j\}\cup[n+1,m]},u,v\in\mathbb{N}^{[1,n]-\{j\}}$,
where $u_{k}v_{k}=0$ for any $k$:
\begin{align*}
& \opname{inj}^{t[j^{-1}]}(f,u,v)\\
& =[\prod_{n+1\leq i\leq m}X_{i}^{f_{i}}*X_{j}^{f_{j}}*\prod_{1\leq k\leq n,k\neq j}X_{k}^{u_{k}}*\prod_{1\leq k\leq n,k\neq j}I_{k}(t[j^{-1}])^{v_{k}}]^{t[j^{-1}]}.
\end{align*}
We want to compare this new injective pointed set $\opname{inj}^{t[j^{-1}]}$
with the old one $\opname{inj}^{t}$. One has to pay attention to the possible
localization at $X_{j}$ in the seed $t[j^{-1}]$.
Assume that the vertex $n$ is $\triangleleft$-maximal, namely, a source point.
Then $I_{k}(t[n^{-1}])=I_{k}(t)$ for all $1\leq k< n$,
cf. Remark \ref{rem:acyclic_injectives}, and, moreover, $(\deg Y_{i})_{n}=b_{ni}\geq 0$
$\forall 1\leq i\leq n$. It follows that the Laurent monomials of $I_{k}(t)$, $\forall k\neq n$, have non-negative degrees in $X_{n}$.
Notice that, for a source point $n$, if $f_{n}\geq0$,
then $\opname{inj}^{t[n^{-1}]}(f,u,v)\in\opname{inj}^{t}$.
\begin{Lem}\label{lem:restricted_basis}
Assume that $n$ is a source point and a pointed element $Z\in{\cA}(t[n^{-1}])$
admits a finite decomposition
\begin{align*}
Z & =\sum_{s}a_{s}\opname{inj}^{t[n^{-1}]}(f^{(s)},u^{(s)},v^{(s)}).
\end{align*}
If $(\deg Z)_{n}\geq0$, then we have $f_{n}^{(s)}\geq0$ whenever
$a_{s}\neq0$. Consequently, all $\opname{inj}^{t[n^{-1}]}(f^{(s)},u^{(s)},v^{(s)})$
appearing in the combination are contained in $\opname{inj}^{t}$.
\end{Lem}
\begin{proof}
Recall that $\opname{inj}^{t[n^{-1}]}$ is a linearly independent family of
pointed elements with distinct leading degrees. By Lemma \ref{lem:has_triangular_order}(i), the given decomposition of $Z$ is $\prec_t$-unitriangular with a unique leading term $\opname{inj}^{t[n^{-1}]}(f^{(0)},u^{(0)},v^{(0)})$ whose leading degree equals $\deg Z$. So the leading degrees of all the $\opname{inj}^{t[n^{-1}]}(f^{(s)},u^{(s)},v^{(s)})$ appearing are $\prec_t$-inferior to or equal to $\deg Z$. Since $(\deg Z)_n\geq 0$ and $(\deg Y_i)_n\geq 0$, $\forall 1\leq i\leq n$, their $n$-th components are all non-negative.
Notice that $\opname{pr}_n \deg I_{k}(t)=-e_k$ by definition and, in particular, the leading degree $\deg I_{k}(t)$, $\forall k<n$, vanishes in the $n$-th component.
It follows that $\deg \opname{inj}^{t[n^{-1}]}(f^{(s)},u^{(s)},v^{(s)})$ has non-negative $n$-th component if and only if $f^{(s)}_n\geq 0$. The claim follows.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:acyclic}]
We prove the claim by induction on the rank $n$ of ${\widetilde{B}}(t)$. The
case $n=0$ is trivial.
Up to relabeling vertices, let us assume that $n$ is a source point
in $t$. Denote $t'=\mu_{n}t$.
It suffices to show that every $\overline{E}_{a}$, $a\in\mathbb{Z}^{m}$,
is $(\prec_{t},{\mathbf m})$-triangular to $\can^{t}$. If so, combined with
Lemma \ref{lem:change_BZ_order}, we obtain that every bar-invariant
element $C_{a}$ is $(\prec_{t},{\mathbf m})$-triangular to $\can^{t}$ and,
consequently, must belong to $\can^{t}$. It follows that the two
bases $\can^{t}$ and $C$ must agree.
(i) Assume $a_{n}\geq0$. Consider the seed $t[n^{-1}]$ obtained
by freezing the vertex $n$ in $t$. It is acyclic and its matrix ${\widetilde{B}}(t[n^{-1}])$
has rank $n-1$. By the induction hypothesis, its triangular basis $\can^{t[n^{-1}]}$
agrees with its BZ-basis $C^{t[n^{-1}]}$. Notice that the corresponding
standard monomial $E_{a}$ is also a standard monomial for the seed $t[n^{-1}]$.
Therefore, $\overline{E}_{a}$ admits a finite decomposition in $C^{t[n^{-1}]}=\can^{t[n^{-1}]}$
with one term of coefficient $1$ and other terms of coefficient in
${\mathbf m}$. Recall that $\can^{t[n^{-1}]}$ is $\prec_{t[n^{-1}]}$-unitriangular
to $\opname{inj}^{t[n^{-1}]}$. Composing these two transitions, we see that
$\overline{E}_{a}$ has a finite decomposition in $\opname{inj}^{t[n^{-1}]}$
with one term of coefficient $1$ and others of coefficient in ${\mathbf m}$.
Further, notice that $(\deg\overline{E}_{a})_{n}\geq0$; by Lemma \ref{lem:restricted_basis},
all the decomposition terms appearing belong to $\opname{inj}^{t}$. By Lemma
\ref{lem:has_triangular_order}, $\overline{E}_{a}$ is $(\prec_{t},{\mathbf m})$-unitriangular
to $\opname{inj}^{t}$, and consequently $(\prec_{t},{\mathbf m})$-unitriangular
to $\can^{t}$.
(ii) When $a_{n}<0$, let us rewrite $\overline{E}_{a}$ as $[S_{n}^{[-a_{n}]_{+}}*\overline{E}_{a_{\hat{n}}}]^{t}$,
where $a_{\hat{n}}$ denotes the vector obtained from $a$ by setting
the $n$-th component to $0$. Notice that $\overline{E}_{a}$ is
also pointed in $t'$ by Lemma \ref{lem:keep_pointed}, namely, $\overline{E}_{a}=[S_{n}^{[-a_{n}]_{+}}*\overline{E}_{a_{\hat{n}}}]^{t'}$.
For the seed $t'$, we freeze the vertex $n$ and repeat the argument
in (i); it follows that $\overline{E}_{a_{\hat{n}}}$ is $(\prec_{t'},{\mathbf m})$-unitriangular
to the triangular basis $\can^{t'}$ of the seed $t'$. Notice that
$S_{n}$ is the $n$-th cluster variable in the seed $t'$. By Lemma
\ref{lem:substitution}, we obtain that $\overline{E}_{a}$ is $(\prec_{t'},{\mathbf m})$-unitriangular
to the triangular basis $\can^{t'}$ of the seed $t'$. Because $\can^{t}=\can^{t'}$
by Theorem \ref{thm:common_triangular_basis}, $\overline{E}_{a}$
is $(\prec_{t},{\mathbf m})$-unitriangular to $\can^{t}$ by Lemma \ref{lem:has_triangular_order}.
\end{proof}
\subsection{Bipartite skew-symmetrizable case}
We say the seed $t$ has a bipartite orientation (we say $t$ is bipartite
for short), if we have $\{1,\cdots,n\}=V_{0}\sqcup V_{1}$, such that
all the vertices in $V_{0}$ are source points and those in $V_{1}$
are sink points.
Assume that $t$ is bipartite. Let us denote by $t'$ the seed obtained
from $t$ by mutating at all the vertices in $V_{1}$, namely,
\begin{align*}
\mu_{V_{1}} & =\prod_{k\in V_{1}}\mu_{k}\\
t' & =\mu_{V_{1}}t.
\end{align*}
Notice that the mutations $\mu_{k}$, $k\in V_{1}$, commute with
each other.
The following lemma follows from the definitions of the corresponding
cluster variables, cf. Figure \ref{fig:knitting} for identification
of cluster variables, where $i\in V_{0}$, $j\in V_{1}$, and the graphs
are constructed via the knitting algorithm, cf. \cite{Keller08Note}.
\begin{Lem}
\label{lem:comparison} We have, for any $1\leq i,j\leq n$,
\begin{align}
X_{i}(t')=X_{i}(t),\ i\in V_{0},\\
X_{j}(t')=I_{j}(t),\ j\in V_{1},\\
S_{i}(t')=I_{i}(t),\ i\in V_{0},\\
S_{j}(t')=X_{j}(t),\ j\in V_{1}.
\end{align}
\end{Lem}
\begin{figure}[htb!] \centering \beginpgfgraphicnamed{fig:knitting} \begin{tikzpicture} \node [ draw] (v1) at (1,-3) {$X_i(t)$}; \node [ draw] (v2) at (2,-1.5) {$X_j(t)$}; \node [draw] (v3) at (0,-1.5) {$I_j(t)$}; \node [draw] (v4) at (-1,-3) {$I_i(t)$};
\draw[-triangle 60] (v1) edge (v2); \draw[-triangle 60] (v3) edge (v1); \draw[-triangle 60] (v4) edge (v3);
\node [ draw] (v11) at (6,-3) {$X_i(t')$}; \node [ draw] (v12) at (7,-1.5) {$S_j(t')$}; \node [draw] (v13) at (5,-1.5) {$X_j(t')$}; \node [ draw] (v14) at (4,-3) {$S_i(t')$};
\draw[-triangle 60] (v11) edge (v12); \draw[-triangle 60] (v13) edge (v11); \draw[-triangle 60] (v14) edge (v13);
\end{tikzpicture} \endpgfgraphicnamed \caption{Part of knitting graphs for the seeds $t$ and $t'$.} \label{fig:knitting} \end{figure}
It follows from Lemma \ref{lem:comparison} that the $S_{i}(t')$, $i\in V_{0}$,
$q$-commute with each other, and the $S_{j}(t')$, $j\in V_{1}$, $q$-commute
with each other.
Notice that $t'$ is still bipartite with the vertices in $V_{0}$
being sink points and the vertices in $V_{1}$ being source points.
\begin{Lem}\label{lem:variable_commute}
(1) For any $1\leq k\neq j\leq n$, such that $j\in V_{1}$, $X_{k}(t)$
and $I_{j}(t)$ $q$-commute.
(2) For any $1\leq i\neq k\leq n$, such that $i\in V_{0}$, $X_{i}(t)$
and $I_{k}(t)$ $q$-commute.
\end{Lem}
\begin{proof}
(1) $X_{k}(t)$ and $I_{j}(t)$ are quantum cluster variables in the
same seed $\mu_{j}t$.
(2) By (1), it remains to check the case $i,k\in V_{0}$. Notice that
$V_{0}$ consists of sink points in $t'=\mu_{V_{1}}t$. $X_{i}(t)$
and $I_{k}(t)$ are quantum cluster variables in the same seed $\mu_{k}t'$.
\end{proof}
\begin{Lem}\label{lem:keep_pointed_bipartite}
The pointed element $\overline{E}_{a}$ defined in $t'$ remains pointed
in $t=\mu_{V_{1}}t'$.
\end{Lem}
\begin{proof}
The vertices in $V_{1}$ are source points in $t'$ which are not
connected by arrows. We simply repeat the proof of Lemma \ref{lem:keep_pointed}.
\end{proof}
\begin{Thm}\label{thm:bipartite}
For bipartite $t$, Berenstein-Zelevinsky's triangular basis $C$
is also the triangular basis $\can^{t}$.
\end{Thm}
\begin{proof}
Notice that, in the seed $t'$, the vertices in $V_{1}$ are source
points and $\triangleleft$-superior to those in $V_{0}$. Using
Lemma \ref{lem:variable_commute}(2), we have, for any $a\in\mathbb{Z}^{m}$,
\begin{align}
\overline{E}_{a} & =[\prod_{j\in V_{1}}S_{j}(t')^{[-a_{j}]_{+}}*\prod_{i\in V_{0}}S_{i}(t')^{[-a_{i}]_{+}}*\prod_{j\in V_{1}}X_{j}(t')^{[a_{j}]_{+}}*\prod_{i\in V_{0}}X_{i}(t')^{[a_{i}]_{+}}*\prod_{n+1\leq j\leq m}X_{j}(t')^{a_{j}}]^{t'}\nonumber \\
& =[\prod_{j\in V_{1}}X_{j}(t)^{[-a_{j}]_{+}}*\prod_{i\in V_{0}}I_{i}(t)^{[-a_{i}]_{+}}*\prod_{j\in V_{1}}I_{j}(t)^{[a_{j}]_{+}}*\prod_{i\in V_{0}}X_{i}(t)^{[a_{i}]_{+}}*\prod_{n+1\leq j\leq m}X_{j}(t')^{a_{j}}]^{t'}\nonumber \\
& =[\prod_{j\in V_{1}}X_{j}(t)^{[-a_{j}]_{+}}*\prod_{i\in V_{0}}X_{i}(t)^{[a_{i}]_{+}}*\prod_{i\in V_{0}}I_{i}(t)^{[-a_{i}]_{+}}*\prod_{j\in V_{1}}I_{j}(t)^{[a_{j}]_{+}}*\prod_{n+1\leq j\leq m}X_{j}(t')^{a_{j}}]^{t'}.\label{eq:rewrite_monomial}
\end{align}
By Lemma \ref{lem:keep_pointed_bipartite}, $\overline{E}_{a}$ remains
pointed in $t$. Then \eqref{eq:rewrite_monomial} tells us
that it belongs to the injective pointed set $\opname{inj}^{t}$. All elements
of $\opname{inj}^{t}$ take this form. So we see that the BZ-basis $C$ verifies
the conditions (i), (ii), and (iv) in Definition \ref{def:triangular_basis}.
A closer examination tells us that the condition (iii) in Definition
\ref{def:triangular_basis} is also verified by the basis $C$, cf.
\cite{BerensteinZelevinsky2012}. So $C$ is the triangular basis
$\can^{t}$ for the seed $t$.
\end{proof}
\begin{Eg}[Kronecker quiver type]\label{eg:Kronecker}
Let us look at the quantum cluster algebra with the seed $t$ given
by ${\widetilde{B}}=\left(\begin{array}{cc}
0 & 2\\
-2 & 0
\end{array}\right)$ and $\Lambda=\left(\begin{array}{cc}
0 & 1\\
-1 & 0
\end{array}\right)$. We have the set of source points $V_{0}=\{1\}$ and the set of sink
points $V_{1}=\{2\}$.
Its seed $t'=\mu_{V_{1}}t$ has the matrices ${\widetilde{B}}=\left(\begin{array}{cc}
0 & -2\\
2 & 0
\end{array}\right)$ and $\Lambda=\left(\begin{array}{cc}
0 & -1\\
1 & 0
\end{array}\right)$. The vertex $2$ is the source point in $t'$. It is easy to compute
that
\begin{eqnarray*}
S_{1}(t') & = & X(t')^{-e_{1}}+X(t')^{-e_{1}+2e_{2}}\\
S_{2}(t') & = & X(t')^{-e_{2}+2e_{1}}+X(t')^{-e_{2}}\\
Y_{1}(t') & = & X(t')^{2e_{2}}\\
Y_{2}(t') & = & X(t')^{-2e_{1}}.
\end{eqnarray*}
By \cite[(6.4)]{BerensteinZelevinsky2012} and \cite{ding2012bases}, we
have the following bar-invariant pointed element $X_{\delta}$ in
the BZ-basis $C$, given by
\begin{align*}
X_{\delta} & =q^{{\frac{1}{2}}}S_{1}(t')*S_{2}(t')-q^{\frac{3}{2}}X_{2}(t')*X_{1}(t')\\
& =X(t')^{e_{1}-e_{2}}\cdot(1+Y_{2}(t')+Y_{1}(t')Y_{2}(t'))\\
& =X(t')^{e_{1}-e_{2}}+X(t')^{-e_{1}-e_{2}}+X(t')^{e_{2}-e_{1}}.
\end{align*}
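As a sanity check, the classical limit $q=1$ of the computation above can be verified with a short computer-algebra script; this is only an illustrative sketch in which commuting symbols $x_1,x_2$ stand in for $X(t')^{e_1},X(t')^{e_2}$, since all $q$-twists disappear in this specialization:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', positive=True)

# Classical (q = 1) limit of the quantum cluster variables computed above:
# S_1(t') = X^{-e_1} + X^{-e_1 + 2 e_2},  S_2(t') = X^{2 e_1 - e_2} + X^{-e_2}.
S1 = x1**-1 * (1 + x2**2)
S2 = x2**-1 * (x1**2 + 1)

# X_delta = q^{1/2} S_1(t') * S_2(t') - q^{3/2} X_2(t') * X_1(t')  at  q = 1.
X_delta = sp.expand(S1 * S2 - x2 * x1)

# Expected Laurent expansion: X^{e_1 - e_2} + X^{-e_1 - e_2} + X^{e_2 - e_1}.
expected = x1/x2 + 1/(x1*x2) + x2/x1
assert sp.simplify(X_delta - expected) == 0
```

The assertion confirms that the three-term Laurent expansion of $X_\delta$ agrees with the displayed formula in the commutative specialization.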
Taking the bar-involution, we obtain
\begin{eqnarray*}
X_{\delta} & = & q^{-{\frac{1}{2}}}S_{2}(t')*S_{1}(t')-q^{-\frac{3}{2}}X_{1}(t')*X_{2}(t')\\
& = & [S_{2}(t')*S_{1}(t')]^{t'}-q^{-2}[X_{1}(t')*X_{2}(t')]^{t'}.
\end{eqnarray*}
We have
\begin{eqnarray*}
S_{2}(t') & = & X_{2}(t)\\
S_{1}(t') & = & I_{1}(t)\\
& = & X(t)^{-e_{1}}(1+Y_{1}(t)+(q+q^{-1})Y_{1}(t)Y_{2}(t)+Y_{1}(t)Y_{2}(t)^{2})\\
X_{2}(t') & = & I_{2}(t)\\
& = & X(t)^{-e_{2}}(1+Y_{2}(t))\\
X_{1}(t') & = & X_{1}(t).
\end{eqnarray*}
Then $X_{\delta}$ can be rewritten as
\begin{align*}
X_{\delta} & =[X_{2}(t)*I_{1}(t)]^{t}-q^{-2}[X_{1}(t)*I_{2}(t)]^{t}\\
& =X(t)^{-e_{1}+e_{2}}(1+Y_{1}(t)+(1+q^{-2})Y_{1}(t)Y_{2}(t)+q^{-2}Y_{1}(t)Y_{2}(t)^{2})\\
& \qquad-q^{-2}X^{e_{1}-e_{2}}(1+Y_{2}(t))\\
& =X(t)^{-e_{1}+e_{2}}(1+Y_{1}(t)+Y_{1}(t)Y_{2}(t))\\
& =X(t)^{-e_{1}+e_{2}}+X(t)^{-e_{1}-e_{2}}+X(t)^{e_{1}-e_{2}}.
\end{align*}
Notice that the normalization factors do not change:
\begin{align*}
\Lambda(t)(\deg^{t}X_{2}(t),\deg^{t}I_{1}(t)) & =\Lambda(t)(e_{2},-e_{1})= & 1 & =\Lambda(t')(\deg^{t'}S_{2}(t'),\deg^{t'}S_{1}(t'))\\
\Lambda(t)(\deg^{t}X_{1}(t),\deg^{t}I_{2}(t)) & =\Lambda(t)(e_{1},-e_{2})= & -1 & =\Lambda(t')(\deg^{t'}X_{1}(t'),\deg^{t'}X_{2}(t')).
\end{align*}
Therefore, the pointed element $X_{\delta}$ is $(\prec_{t},{\mathbf m})$-unitriangular
to the injective pointed set $\opname{inj}^{t}$, and consequently $(\prec_{t},{\mathbf m})$-unitriangular
to the triangular basis $\can^{t}$. It follows from its bar-invariance
that $X_{\delta}$ belongs to the triangular basis $\can^{t}$.
\end{Eg}
\bibliographystyle{halpha}
\section{Introduction}
\label{sec:intro}
Fluid-structure interaction (FSI) describes a multi-physics phenomenon that involves the highly non-linear coupling between a deformable or moving structure and a surrounding or internal fluid. There has been intensive interest in solving FSI problems due to their wide applications in the biomedical, engineering and architecture fields, such as the simulation of blood-cell interactions, the study of wing fluttering in aerodynamics and the design of dams with reservoirs. However, it is generally difficult to obtain analytical solutions to FSI problems due to their nonlinear and multi-physics nature. Instead, there have been extensive studies of numerical solutions and an increasing demand for more efficient and accurate numerical schemes
\cite{bungartz2006fluid,chakrabarti2005numerical,dowell2001modeling,hou2012numerical,richter2017fluid}.
Numerical methodologies for solving FSI problems can be roughly categorized into partitioned and monolithic schemes.
Distinct mechanisms in the fluid and structure domains naturally suggest solvers using partitioned schemes \cite{farhat2006provably,NobileVergara08}. This numerical procedure treats each physical phenomenon separately and allows the use of existing software frameworks that are well-established for each subproblem. However, the design of efficient partitioned schemes that produce stable and accurate results remains a challenge, especially when the density of the fluid is comparable to that of the structure, due to numerical instabilities known as the added-mass effect \cite{causin2005added}. The design and analysis of partitioned schemes that circumvent such problems have been an active research area in the past decade \cite{causin2005added,fernandez2013fully,lukavcova2013kinematic,banks2016added,bukac2020refactorization}.
An alternative to the partitioned strategy is the monolithic approach, which solves the fluid flow and structure dynamics simultaneously using one unified fully-coupled formulation \cite{hubner2004monolithic,tezduyar2007modelling,rugonyi2001finite}. The boundary conditions on the fluid-structure interface are automatically satisfied in this procedure. Monolithic schemes are usually more robust than partitioned schemes and allow more rigorous analysis of discretization and solution techniques \cite{kloppel2011fluid,richter2017fluid}. However, monolithic schemes have been criticized for requiring well-designed preconditioners \cite{gee2011truly,Nobile01,badia2008modular}, as well as more memory and computation time, since the whole system is solved in one formulation.
In this paper, we present a novel monolithic divergence-conforming HDG scheme for
a linear FSI problem
with a thick structure.
The fluid Stokes problem is discretized using the divergence-free HDG
scheme of Lehrenfeld and Sch\"oberl \cite{Lehrenfeld10,
LehrenfeldSchoberl16}, and the structure linear elasticity problem is discretized using
the divergence-conforming HDG scheme of Fu et al. \cite{FuLehrenfeld20}.
We approximate the fluid and structure velocities together using a single
$H(\mathrm{div})$-conforming finite element space, and we also introduce
a global (hybrid) unknown that approximates the tangential component of the
velocities on the mesh skeleton for the purpose of efficient
implementation.
A pressure-robust optimal energy-norm estimate
is obtained for the resulting semidiscrete scheme.
We then use a Crank-Nicolson time discretization, and the fluid-structure interface conditions are naturally treated
monolithically. Our fully discrete scheme produces an exactly
divergence-free fluid velocity approximation and is energy stable.
When polynomials of degree $k\ge 1$ are used in the scheme,
the global linear system, which is symmetric and indefinite,
consists of degrees of freedom (DOFs) for
the normal component of velocity (of polynomial degree $k$) on the mesh
skeleton (facets), the tangential hybrid velocity (of polynomial degree $k-1$)
on the mesh skeleton, and one pressure DOF per element on the
mesh. The linear system is then solved via a preconditioned MinRes
method \cite{Saad03} with a block-diagonal preconditioner of a similar form to the uniform
preconditioner studied in Olshanskii et al. \cite{Olshanskii06}
for a generalized Stokes interface problem. We further use an auxiliary
space preconditioner of Xu \cite{Xu96}
with algebraic multigrid (AMG) for the velocity block and a hypre AMG preconditioner
for the pressure block to arrive at the final block AMG preconditioner.
This preconditioner is numerically verified to be robust with respect to
mesh size, time step size, and material parameters.
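To make the solver structure concrete, the following is a minimal sketch of MinRes with a block-diagonal SPD preconditioner applied to a generic saddle-point system of the same $2\times2$ block form; all matrices here are random stand-ins (not the actual FSI discretization), and exact block inverses stand in for the AMG approximations:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy saddle-point system  [K  B^T; B  0] [u; p] = b  (K SPD, B full rank),
# mimicking the velocity/pressure block structure of the global linear system.
rng = np.random.default_rng(1)
n, m = 30, 10
R = rng.standard_normal((n, n))
K = R @ R.T + n * np.eye(n)              # SPD "velocity" block
B = rng.standard_normal((m, n))          # "divergence" constraint block
S = B @ np.linalg.solve(K, B.T)          # pressure Schur complement

A = sp.bmat([[K, B.T], [B, None]], format='csr')
b = rng.standard_normal(n + m)

# Block-diagonal SPD preconditioner diag(K, S)^{-1}; in practice the two
# exact inverses would be replaced by (auxiliary-space) AMG V-cycles.
Kinv, Sinv = np.linalg.inv(K), np.linalg.inv(S)
M = spla.LinearOperator(
    (n + m, n + m),
    matvec=lambda r: np.concatenate([Kinv @ r[:n], Sinv @ r[n:]]))

x, info = spla.minres(A, b, M=M)
assert info == 0
assert np.linalg.norm(A @ x - b) < 1e-3 * np.linalg.norm(b)
```

With the exact Schur complement, the preconditioned spectrum clusters into a few values and MinRes converges in a handful of iterations; AMG surrogates trade this sharpness for mesh- and parameter-robust cost.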
The rest of the paper is organized as follows.
In Section \ref{sec:hdg}, we introduce the spatial and temporal
discretization of the divergence-conforming HDG scheme for
a linear FSI problem with a thick structure.
We then present the block AMG preconditioner in
Section \ref{sec:prec}.
The a priori error analysis of the semidiscrete scheme is performed in
Section \ref{sec:err}.
Numerical results are presented in Section \ref{sec:num}.
We conclude in Section \ref{sec:conclude}.
\section{The monolithic divergence-conforming
HDG scheme for a linear FSI model}
\label{sec:hdg}
\subsection{The model FSI problem}
We consider the interaction between an incompressible, viscous fluid and
an elastic structure. We denote by $\Omega^f(t)\subset \mathbb{R}^d$ the domain occupied by the
fluid and $\Omega^s(t)\subset \mathbb{R}^d$, $d=2,3$ by the solid at the time
$t\in[0,T]$. Let $\Gamma(t) = \overline{\Omega^f(t)}
\cap \overline{\Omega^s(t)}$ be the part of the boundary where the elastic
solid interacts with the fluid; see Fig. \ref{fig:fsi}.
\begin{figure}[ht!]
\begin{center}
\begin{tikzpicture}
\draw[thick,pattern=dots,draw=none]
(5,3.75)
to[out=190,in=-90] (2,6.5)
to[out=90,in=180] (5,8)
to[out=-70,in=100] (5,3.75) --cycle;
\draw[thick,fill=blue!30,draw=none]
(5,3.75)
to[out=10,in=-90] (8,6)
to[out=90,in=0] (5,8)
to[out=-70,in=100] (5,3.75) --cycle;
\draw[ultra thick,red] (5,8)
to[out=-70,in=100] (5,3.75);
\draw[ultra thick,draw=blue!50!black]
(5,3.75)
to[out=10,in=-90] (8,6)
to[out=90,in=0] (5,8);
\draw[ultra thick,draw=green!50!black]
(5,3.75)
to[out=190,in=-90] (2,6.5)
to[out=90,in=180] (5,8);
\node at (3.6,6.5)[fill=white,below,scale=1.2] {\color{black} \Large
$\Omega^f$};
\node at (6.6,6.5)[below,scale=1.2] {\color{black} \Large $\Omega^s$};
\node at (5.6,5)[below,scale=1.2] {\color{red!50!black} \Large
$\Gamma$};
\end{tikzpicture}
\end{center}
\caption{\it Sketch of a domain for FSI.}
\label{fig:fsi}
\end{figure}
For the purpose of this paper, we assume that the
nonlinear convection term in the fluid is negligible, the solid is
linearly elastic, and the deformation is small.
Hence, the domain $\Omega^{f/s}$ does not change over time, and
the fluid flow is modeled using the time dependent Stokes equations
while the structure is modeled using the linear elastodynamics equations:
\begin{subequations}
\label{model}
\begin{alignat}{2}
\label{f-eq}
\left.
\begin{tabular}{rl}
$\rho^f \partial_t \bld u^f -\nabla \cdot \bld \sigma^f(\bld u^f, p^f) =$&
$\bld f^f$\quad\\
$\nabla\cdot \bld u^f =$ &$0$
\end{tabular}\right\} &\;\; \text{in
} \Omega^f\times [0,T],\\
\label{s-eq}
\left.
\begin{tabular}{rl}
$\rho^s \partial_t \bld u^s -\nabla \cdot \bld \sigma^s(\bld \eta^s) =$&
$\bld f^s$\quad\\
$\partial_t\bld \eta^s-\bld u^s =$ &$0$
\end{tabular}\right\} &\;\; \text{in
} \Omega^s\times [0,T],
\end{alignat}
where $\rho^f$ is the fluid density,
$\bld u^f$ is the fluid velocity, $p^f$ is the fluid pressure,
$\bld f^f$ is the fluid source term, and
$\bld \sigma^f$ is the fluid stress
tensor given as follows:
\begin{align*}
\bld \sigma^f(\bld u^f, p^f): =
-p^f\bld I + 2\mu^f\bld D(\bld u^f),
\end{align*}
where $\bld I$ is the identity tensor,
$\mu^f$ is the fluid viscosity, and
$\bld D(\bld u^f):=\frac12(\nabla\bld u^f+(\nabla \bld u^f)^T)$
is the fluid strain rate tensor,
while $\rho^s$ is the structure density,
$\bld \eta^s$ is the structure displacement,
$\bld u^s$ is the structure velocity,
$\bld f^s$ is the structure source term, and
$\bld \sigma^s$ is the structure Cauchy stress
tensor given as follows:
\begin{align*}
\bld \sigma^s(\bld \eta^s): =
\lambda^s(\nabla \cdot\bld\eta^s)\bld I + 2\mu^s\bld D(\bld \eta^s),
\end{align*}
where $\mu^s$ and $\lambda^s$ are the Lam\'e constants.
The fluid and structure sub-problems are coupled with the following {\it kinematic}
and
{\it dynamic} coupling conditions \cite{richter2017fluid} on the interface $\Gamma$:
\begin{alignat}{2}
\label{interface}
\left.
\begin{tabular}{r}
$ \bld u^f = \; \bld u^s$\\
$\bld \sigma^f\bld n^f+\bld \sigma^s\bld n^s =\;0$
\end{tabular}\right\} \text{ on }\Gamma\times [0,T],
\end{alignat}
where
$\bld n^f$ and $\bld n^s$ are the normal
directions on the fluid-structure interface $\Gamma$
pointing from the fluid and structure domains,
respectively.
To close the system, we need proper initial and boundary conditions. For
simplicity, in our analysis we consider homogeneous Dirichlet boundary conditions
on the exterior boundaries:
\begin{align}
\label{bcbc}
\bld u^f =&\; 0 \;\;\text{ on
} \Gamma^f:=\partial\Omega^f\backslash\Gamma,\quad\quad
\bld \eta^s =\;0 \;\;\text{ on } \Gamma^s:=\partial\Omega^s\backslash\Gamma.
\end{align}
We mention that other standard boundary conditions on the exterior boundaries
can also be used, see e.g. the numerical results in Section \ref{sec:num}.
Finally, the initial condition is given as follows:
\begin{align}
\label{init}
\bld u^f(x, 0) =&\; \bld u^f_0(x) \;\;\text{ on
} \Omega^f,\quad\quad
\bld u^s(x,0) =\;\bld u^s_0(x), \,
\bld \eta^s(x,0) =\;\bld \eta^s_0(x), \,
\;\;\text{ on } \Omega^s,
\end{align}
\end{subequations}
where $\bld u^f_0$, $\bld u^s_0$, and $\bld \eta^s_0$ are the
initial fluid velocity, the initial structure velocity, and the initial structure
displacement, respectively.
\subsection{Preliminaries and finite element spaces}
We assume the domains $\Omega^f$, $\Omega^s$, as well as the interface
$\Gamma$ are polytopal.
Let $\Omega$ be the union of the fluid and structure domains, i.e.,
$\overline{\Omega} = \overline{\Omega^f}\cup \overline{\Omega^s}$.
Let $\mathcal{T}_h$ be an interface-fitted conforming simplicial triangulation of the domain
$\Omega$ such that the interface $\Gamma$ is the union of element facets.
For any element $K\in \mathcal{T}_h$, we denote by $h_K$ its diameter and we denote
by
$h$ the maximum diameter over all mesh elements.
Denote by $\mathcal{T}_h^f$ the set of mesh elements that belong to $\Omega^f$ and by
$\mathcal{T}_h^s$ those that belong to $\Omega^s$. Denote by $\mathcal{E}_h$ the set of facets of
$\mathcal{T}_h$, by $\mathcal{E}_h^f$ the set of facets
that are interior to $\overline{\Omega^f}$, and by
$\mathcal{E}_h^s$ the set of facets that are interior to $\overline{\Omega^s}$.
We also denote by $\Gamma_h$, $\Gamma_h^f$, $\Gamma_h^s$ the set of facets that lie on the interface
$\Gamma$, the fluid exterior boundary $\Gamma^f$, and the
solid exterior boundary $\Gamma^s$, respectively. We have $\Gamma_h = \mathcal{E}_h^f\cap \mathcal{E}_h^s$.
Given a simplex $S\subset \mathbb{R}^d$, $d=1,2,3$, we denote by
$\mathcal{P}^m(S)$, $m\ge 0$, the space of polynomials of degree at most
$m$ on $S$.
Given a facet $F\in \mathcal{E}_h$ with normal direction $\bld n$, we denote by
$\mathsf{tang}(\bld w):=\bld w-(\bld w\cdot\bld n)\bld n$ the {\it
tangential component} of a vector field $\bld w$.
The following finite element spaces will be used in our scheme:
\begin{subequations}
\label{spaces}
\begin{align}
\bld V_h^{r}:=&\;\{\bld v\in H(\mathrm{div};\Omega): \;\;\bld v|_K\in
[\mathcal{P}^r(K)]^d, \;\forall K\in \mathcal{T}_h\}, \\
\label{vh0}
\bld V_{h,0}^{r}:=&\;\{\bld v\in \bld V_h^{r}: \;\;\bld v\cdot
\bld n|_F = 0,\; \forall F\in \Gamma_h^f\cup\Gamma_h^s\}, \\
\widehat{\bld V}_h^{r}:=&\;\{\widehat{\bld v}
\in [L^2(\mathcal{E}_h)]^d: \;\;\widehat{\bld v}|_F\in
[\mathcal{P}^r(F)]^d,\;\;\widehat{\bld v}\cdot \bld n|_F=0,
\;\forall F\in \mathcal{E}_h\}, \\
\label{vhath0}
\widehat{\bld V}_{h,0}^{r}:=&\;\{\widehat{\bld v}\in
\widehat{\bld V}_h^{r}: \;\;\mathsf{tang}(\widehat{\bld
v})|_F = 0,\; \forall F\in \Gamma_h^f\cup \Gamma_h^s\}, \\
Q_h^{r}:=&\;\{q\in L^2(\Omega): \;\;q|_K\in \mathcal{P}^r(K),
\;\forall K\in \mathcal{T}_h\},
\end{align}
where $r\ge 0$ is the polynomial degree.
\end{subequations}
We further use a superscript $f/s$ to indicate the restriction of these
spaces on the fluid/structure domain, that is,
\begin{align*}
\bld V_{h,0}^{r,f}:=\{\bld v|_{\mathcal{T}_h^f}:\;\bld v\in\bld V_{h,0}^{r}\}, \;\;
\bld V_{h,0}^{r,s}:=&\{\bld v|_{\mathcal{T}_h^s}:\;\bld v\in\bld V_{h,0}^{r}\},\\
\widehat{\bld V}_{h,0}^{r,f}:=\{\widehat{\bld v}|_{\mathcal{E}_h^f}:\;\widehat{\bld
v}\in\widehat{\bld V}_{h,0}^{r}\}, \;\;
\widehat{\bld V}_{h,0}^{r,s}:=&\{\widehat{\bld v}|_{\mathcal{E}_h^s}:\;\widehat{\bld v}\in
\widehat{\bld V}_{h,0}^{r}\},\\
Q_h^{r,f}:=\{q|_{\mathcal{T}_h^f}:\;q\in Q_h^{r}\}, \;\;
Q_h^{r,s}:=&\{q|_{\mathcal{T}_h^s}:\;q\in Q_h^{r}\}.
\end{align*}
\subsection{Semi-discrete divergence-conforming HDG scheme}
In this subsection, we present the divergence-conforming HDG spatial
discretization
\cite{Lehrenfeld10,
LehrenfeldSchoberl16, FuLehrenfeld20}
of the linear FSI system \eqref{model}.
We use the globally divergence-conforming finite element space
$\bld V_{h,0}^{r}$ in
\eqref{vh0} to approximate the global velocity
\begin{align}
\label{vel}
\bld u=\left\{\begin{tabular}{ll}
$\bld u^f$ & on $\Omega^f$,\\
$\bld u^s$ & on $\Omega^s$
\end{tabular}\right.,
\end{align}
and
the global tangential facet finite element space $\widehat{\bld V}_{h,0}^{r}$
in \eqref{vhath0} to approximate the tangential component of
the global velocity $\bld u$ on the mesh skeleton.
The weak formulation of the divergence-conforming HDG scheme with
polynomial degree $k\ge 1$ for
\eqref{model} is given as follows:
Find $(\bld u_h, \widehat{\bld u}_h, p_h^f, \bld \eta_h^s, \widehat{\bld
\eta}_h^s)\in \bld V_{h,0}^k\times \widehat{\bld V}_{h,0}^{k-1}\times
Q_h^{k-1,f}\times\bld V_{h,0}^{k,s}\times \widehat{\bld V}_{h,0}^{k-1,s}
$ such that
\begin{subequations}
\label{semi}
\begin{alignat}{2}
\label{semi-1}
(\rho \partial_t\bld u_h, \bld v_h)+
2\mu^f A_h^f(
(\bld u_h, \widehat{\bld u}_h),(\bld v_h, \widehat{\bld v}_h) )
-(p_h^f, \nabla\cdot \bld v_h)_f
-(\nabla\cdot\bld u_h, q_h^f)_f\;\;
&\\
+ 2\mu^s A_h^s(
(\bld \eta^s_h, \widehat{\bld \eta}^s_h),(\bld v_h, \widehat{\bld v}_h) )
+\lambda^s(\nabla \cdot \bld \eta_h^s, \nabla \cdot\bld v_h)_s
&\;
=\; (\bld f, \bld v_h), \nonumber\\
\label{semi-2}
(\partial_t \bld \eta_h^s-\bld u_h, \bld \xi_h^s)_s &\;= \;0,\\
\label{semi-3}
\langle\partial_t \widehat{\bld \eta}_h^s-\widehat{\bld u}_h,
\widehat{\bld \xi}^s_h\rangle_s&\;=\;0,
\end{alignat}
for all
$(\bld v_h, \widehat{\bld v}_h, q_h^f, \bld \xi_h^s, \widehat{\bld
\xi}_h^s)\in \bld V_{h,0}^k\times \widehat{\bld V}_{h,0}^{k-1}\times
Q_h^{k-1,f}\times\bld V_{h,0}^{k,s}\times \widehat{\bld V}_{h,0}^{k-1,s}
$, where
$(\cdot, \cdot)$ denotes the $L^2$-inner product on the domain $\Omega$,
$(\cdot, \cdot)_f$ denotes the $L^2$-inner product on the fluid domain
$\Omega^f$,
$(\cdot, \cdot)_s$ denotes the $L^2$-inner product on the structure domain
$\Omega^s$, and
$\langle\cdot,\cdot\rangle_s$ denotes the $L^2(\mathcal{E}_h^s)$-inner product on the
structure mesh skeleton $\mathcal{E}_h^s$,
moreover, $\rho= \left\{\begin{tabular}{ll}
$\rho^f$ & on $\Omega^f$\\
$\rho^s$ & on $\Omega^s$
\end{tabular}\right.
$ is the global density and
$\bld f= \left\{\begin{tabular}{ll}
$\bld f^f$ & on $\Omega^f$\\
$\bld f^s$ & on $\Omega^s$
\end{tabular}\right.
$ is the global source term on $\Omega$.
Here the operators $A_h^f$ and $A_h^s$ are the following
symmetric interior penalty HDG diffusion operators with
a {\it projected
jumps} formulation:
for $i\in\{f, s\}$,
\end{subequations}
\begin{align}
\label{hdg-diff}
A_h^i((\bld v_h, \widehat{\bld v}_h), (\bld w_h, \widehat{\bld
w}_h)):=\sum_{K\in\mathcal{T}_h^i}&\;\int_{K}\bld D(\bld v_h):
\bld D(\bld w_h)\,\mathrm{dx}
-\int_{\partial K}
\bld D(\bld v_h)\bld n\cdot \mathsf{tang}(\bld w_h-\widehat{\bld w}_h)
\,
\mathrm{ds}\\
&\hspace{-26ex}
-\int_{\partial K}
\bld D(\bld w_h)\bld n\cdot \mathsf{tang}(\bld v_h-\widehat{\bld v}_h)
\,
\mathrm{ds}
+
\int_{\partial K}
\frac{\alpha k^2}{h}
\Pi_h(\mathsf{tang}(\bld v_h-\widehat{\bld v}_h))
\cdot \Pi_h(\mathsf{tang}(\bld w_h-\widehat{\bld w}_h))
\,
\mathrm{ds}\nonumber,
\end{align}
where $\Pi_h$ denotes the $L^2(\mathcal{E}_h)$-projection onto the tangential
facet finite element space $\widehat{\bld V}_h^{k-1}$.
Efficient implementation of this local projector $\Pi_h$ was discussed in
\cite[Section 2.2.2]{LehrenfeldSchoberl16}.
Here $\alpha>0$ is a sufficiently large stabilization parameter that
ensures the following coercivity result:
\begin{align}
\label{coercivity}
A_h^{i}((\bld v_h, \widehat{\bld v}_h),
(\bld v_h, \widehat{\bld v}_h))
\ge \frac12
\sum_{K\in\mathcal{T}_h^i}
\left(\|\bld D(\bld v_h)\|_K^2
+\frac{\alpha k^2}{h}
\|\Pi_h(\mathsf{tang}(\bld v_h-\widehat{\bld v}_h))\|_{\partial K}^2
\right),
\end{align}
where $\|\cdot\|_S$ indicates the $L^2$-norm on the domain $S$.
A sufficient condition on $\alpha$ that guarantees the above coercivity result was
presented in \cite[Lemma 1]{AinsworthFu18}.
We take $\alpha = 8$ in our numerical experiments in Section \ref{sec:num}.
The following two results show
consistency and stability of the semi-discrete scheme \eqref{semi}.
\begin{lemma}[Galerkin-orthogonality for the semi-discrete scheme]
Let $(\bld u, p^f, \bld \eta^s)\in H^2(\Omega)\times
H^1(\Omega^f)\times H^2(\Omega^s)$ be the solution to the
model problem \eqref{model}.
Then, the equations \eqref{semi} hold true with
$(\bld u_h, \widehat{\bld u}_h,
p^f_h, \bld \eta^s_h, \widehat{\bld \eta}^s_h)$
replaced by
$(\bld u, \bld u|_{\mathcal{E}_h},
p^f, \bld \eta^s, \bld \eta^s|_{\mathcal{E}_h^s})$. That is, we have
\begin{subequations}
\label{semiX}
\begin{alignat}{2}
\label{semiX-1}
(\rho \partial_t\bld u, \bld v_h)+
2\mu^f A_h^f(
(\bld u, \widehat{\bld u}),(\bld v_h, \widehat{\bld v}_h) )
-(p^f, \nabla\cdot \bld v_h)_f
-(\nabla\cdot\bld u, q_h^f)_f\;\;
&\\
+ 2\mu^s A_h^s(
(\bld \eta^s, \widehat{\bld \eta}^s),(\bld v_h, \widehat{\bld v}_h) )
+\lambda^s(\nabla \cdot \bld \eta^s, \nabla \cdot\bld v_h)_s
&\;
=\; (\bld f, \bld v_h), \nonumber\\
\label{semiX-2}
(\partial_t \bld \eta^s-\bld u, \bld \xi_h^s)_s &\;= \;0,\\
\label{semiX-3}
\langle\partial_t \widehat{\bld \eta}^s-\widehat{\bld u},
\widehat{\bld \xi}^s_h\rangle_s&\;=\;0,
\end{alignat}
for all
$(\bld v_h, \widehat{\bld v}_h, q_h^f, \bld \xi_h^s, \widehat{\bld
\xi}_h^s)\in \bld V_{h,0}^k\times \widehat{\bld V}_{h,0}^{k-1}\times
Q_h^{k-1,f}\times\bld V_{h,0}^{k,s}\times \widehat{\bld V}_{h,0}^{k-1,s}
$, where $\widehat{\bld u}=\bld u|_{\mathcal{E}_h}$ and
$\widehat{\bld \eta}^s = \bld \eta^s|_{\mathcal{E}_h^s}$.
\end{subequations}
\end{lemma}
\begin{proof}
The equations \eqref{semiX-2} and \eqref{semiX-3}
follow from the second equation in \eqref{s-eq}.
We are left to prove the equation \eqref{semiX-1}.
Since $\mathsf{tang}(\bld u-\widehat{\bld u}) = 0$, we have,
for any function $(\bld v_h, \widehat{\bld v}_h)\in
\bld V_{h,0}^{k}\times \widehat{\bld V}_{h,0}^{k-1}$,
\begin{align*}
A_h^f(
(\bld u, \widehat{\bld u}),(\bld v_h, \widehat{\bld v}_h) )
= &\;
\sum_{K\in\mathcal{T}_h^f}\;\int_{K}\bld D(\bld u):
\bld D(\bld v_h)\,\mathrm{dx}
-\int_{\partial K}
\bld D(\bld u)\bld n\cdot \mathsf{tang}(\bld v_h-\widehat{\bld v}_h)\,
\mathrm{ds}\\
= &\;
-(\nabla\cdot\bld D(\bld u), \bld v_h)_f
+\sum_{K\in\mathcal{T}_h^f}
\int_{\partial K}
\bld D(\bld u)\bld n\cdot ((\bld v_h\cdot \bld n) \bld
n+\mathsf{tang}(\widehat{\bld v}_h))\,
\mathrm{ds}\\
= &\;
-(\nabla\cdot\bld D(\bld u), \bld v_h)_f
+
\int_{\Gamma_h}
\bld D(\bld u)\bld n^f\cdot ((\bld v_h\cdot \bld n) \bld
n+\mathsf{tang}(\widehat{\bld v}_h))\,
\mathrm{ds}.
\end{align*}
Similarly, we have, for any function $(\bld v_h, \widehat{\bld v}_h)\in
\bld V_{h,0}^{k}\times \widehat{\bld V}_{h,0}^{k-1}$,
\begin{align*}
A_h^s(
(\bld \eta^s, \widehat{\bld \eta^s}),(\bld v_h, \widehat{\bld v}_h) )
= &\;
-(\nabla\cdot\bld D(\bld \eta^s), \bld v_h)_s
+ \int_{\Gamma_h}
\bld D(\bld \eta^s)\bld n^s\cdot ((\bld v_h\cdot \bld n) \bld
n+\mathsf{tang}(\widehat{\bld v}_h))\,
\mathrm{ds},\\
(p^f, \nabla\cdot \bld v_h)_f =
&\;-(\nabla p^f, \bld v_h)_f +
\int_{\Gamma_h} p^f (\bld v_h\cdot \bld n^f)\,\mathrm{ds},\\
(\nabla\cdot\bld \eta^s, \nabla\cdot \bld v_h)_s =
&\;-(\nabla (\nabla\cdot \bld \eta^s), \bld v_h)_s +
\int_{\Gamma_h} (\nabla\cdot \bld \eta^s)
(\bld v_h\cdot \bld
n^s)\,\mathrm{ds}.
\end{align*}
Combining these equations, we get
\begin{align*}
&(\rho \partial_t\bld u, \bld v_h)+
2\mu^f A_h^f(
(\bld u, \widehat{\bld u}),(\bld v_h, \widehat{\bld v}_h) )
-(p^f, \nabla\cdot \bld v_h)_f
-(\nabla\cdot\bld u, q_h^f)_f\;\;
\\
& + 2\mu^s A_h^s(
(\bld \eta^s, \widehat{\bld \eta}^s),(\bld v_h, \widehat{\bld v}_h) )
+\lambda^s(\nabla \cdot \bld \eta^s, \nabla \cdot\bld v_h)_s
-(\bld f, \bld v_h)
\;\\
& =\; (\rho^f\partial_t \bld u^f-\nabla\cdot \bld \sigma^f-\bld f^f, \bld v_h)_f
+(\rho^s\partial_t\bld u^s-\nabla\cdot \bld \sigma^s-\bld f^s, \bld v_h)_s
\\
&
\;\;\;\; +\int_{\Gamma_h}(\bld \sigma^f\bld n^f+\bld \sigma^s\bld n^s)
\cdot( (\bld v_h\cdot \bld
n)\bld n+\mathsf{tang}(\widehat{\bld v}_h))\,\mathrm{ds}
\\
&=0,
\end{align*}
where
we used the PDE \eqref{f-eq}, \eqref{s-eq}, and the
dynamic interface condition in \eqref{interface}.
This completes the proof of the equation $\eqref{semiX-1}$.
\end{proof}
\begin{lemma}[Stability for the semi-discrete scheme]
\label{lemma:stab-semi}
Let $(\bld u_h, \widehat{\bld u}_h, p_h^f, \bld \eta_h^s,
\widehat{\bld \eta}_h^s)\in
\bld V_{h,0}^k\times \widehat{\bld V}_{h,0}^{k-1}\times
Q_h^{k-1,f}\times\bld V_{h,0}^{k,s}\times \widehat{\bld V}_{h,0}^{k-1,s}
$ be the numerical solution to the semi-discrete scheme \eqref{semi}.
Then, the velocity approximation on the fluid domain is exactly divergence free:
\begin{align}
\label{div1}
\nabla\cdot \bld u_h|_{\mathcal{T}_h^f} = 0,
\end{align}
and the following energy identity holds:
\begin{align}
\label{ener1}
\frac12\frac{d}{dt} E_h = -2\mu^f
A_h^f((\bld u_h, \widehat{\bld u}_h),(\bld u_h, \widehat{\bld u}_h))
+(\bld f, \bld u_h),
\end{align} where
$
E_h := (\rho\bld u_h, \bld u_h)+
\lambda^s(\nabla \cdot \bld \eta_h^s, \nabla \cdot \bld \eta^s_h)_s
+2\mu^s
A_h^s((\bld \eta_h^s, \widehat{\bld \eta}_h^s),(\bld \eta_h^s, \widehat{\bld
\eta}_h^s))
$
is the total energy.
\end{lemma}
\begin{proof}
Let us first prove the divergence-free property \eqref{div1}.
By the choice of the velocity finite element space $\bld V_h^{k}$ and
fluid pressure finite element space $Q_h^{k-1, f}$, we have
$\nabla\cdot\bld u_h|_{\mathcal{T}_h^f}\in Q_h^{k-1, f}$. Now taking
$q_h^f=\nabla\cdot\bld u_h|_{\mathcal{T}_h^f}$ in equation \eqref{semi-1}, we get
$$
(\nabla\cdot\bld u_h, \nabla\cdot\bld u_h)_f=0.
$$ Hence the divergence-free property \eqref{div1} holds true.
Next, let us prove the energy identity \eqref{ener1}.
Taking test function
$(\bld v_h, \widehat{\bld v}_h) = (\bld u_h, \widehat{\bld u}_h)$
in equation \eqref{semi-1}, and using the divergence-free property
\eqref{div1}, we get
\begin{align*}
(\rho \partial_t\bld u_h, \bld u_h)+
2\mu^f A_h^f(
(\bld u_h, \widehat{\bld u}_h),(\bld u_h, \widehat{\bld u}_h) )
+ 2\mu^s A_h^s(
(\bld \eta^s_h, \widehat{\bld \eta}^s_h),(\bld u_h, \widehat{\bld u}_h) )
+\lambda^s(\nabla \cdot \bld \eta_h^s, \nabla \cdot\bld u_h)_s
&\;
=\; (\bld f, \bld u_h).
\end{align*}
Since $\bld u_h|_{\mathcal{T}_h^s}\in \bld V_h^{k,s}$ and
$\widehat{\bld u}_h|_{\mathcal{E}_h^s}\in \widehat{\bld V}_h^{k-1,s}$,
equations \eqref{semi-2}--\eqref{semi-3} imply that
$$
\bld u_h|_{\mathcal{T}_h^s} = \partial_t \bld \eta_h^s, \text{ and }
\widehat{\bld u}_h|_{\mathcal{E}_h^s} = \partial_t {\widehat{\bld \eta}}_h^s.
$$
Hence,
$$
2\mu^s A_h^s(
(\bld \eta^s_h, \widehat{\bld \eta}^s_h),(\bld u_h, \widehat{\bld u}_h) )
=
2\mu^s
A_h^s(
(\bld \eta^s_h, \widehat{\bld \eta}^s_h),(\partial_t\bld \eta^s_h,
\partial_t\widehat{\bld \eta}_h^s) )
=\frac12\frac{d}{dt}(2\mu^s
A_h^s(
(\bld \eta^s_h, \widehat{\bld \eta}^s_h),(\bld \eta^s_h,
\widehat{\bld \eta}_h^s))
),
$$
and
$$
\lambda^s(\nabla \cdot \bld \eta_h^s, \nabla \cdot\bld u_h)_s
=
\frac12\frac{d}{dt}\left(\lambda^s(\nabla \cdot \bld \eta_h^s, \nabla
\cdot\bld \eta_h^s)_s\right).
$$
Combining the above equations, we arrive at the energy identity
\eqref{ener1}. This completes the proof.
\end{proof}
\subsection{Monolithic fully discrete divergence-conforming HDG scheme}
In this subsection, we consider the temporal discretization of the
semi-discrete scheme \eqref{semi}.
We propose to use the second-order Crank-Nicolson
scheme.
For any positive integer $j\in \mathbb{Z}_+$, let
$(\bld u_h^{j-1},
\bld \eta_h^{s,j-1}, \widehat{\bld
\eta}_h^{s,j-1})\in \bld V_{h,0}^k \times\bld V_{h,0}^{k,s}\times
\widehat{\bld V}_{h,0}^{k-1,s}
$ be the numerical solution at time $t_{j-1}$.
Given the time step size $\delta t_{j-1}>0$,
we proceed to find the solution
$(\bld u_h^{j},
\bld \eta_h^{s,j}, \widehat{\bld
\eta}_h^{s,j})\in \bld V_{h,0}^k \times\bld V_{h,0}^{k,s}\times
\widehat{\bld V}_{h,0}^{k-1,s}
$ at time $t_{j}=t_{j-1}+\delta t_{j-1}$ along with the solution
$(\widehat{\bld u}_h^{j-1/2},
p_h^{f,j-1/2})\in \widehat{\bld V}_{h,0}^{k-1} \times
Q_{h}^{k-1,f}
$ at time $t_{j-1/2}=t_{j-1}+\frac12\delta t_{j-1}$ such that the following
equations hold:
\begin{subequations}
\label{full}
\begin{alignat}{2}
\label{full-1}
(\rho \frac{\bld u_h^{j}-\bld u_h^{j-1}}{\delta t_{j-1}}, \bld v_h)+
2\mu^f A_h^f(
(\bld u_h^{j-1/2}, \widehat{\bld u}_h^{j-1/2}),(\bld v_h, \widehat{\bld v}_h) )
-(p_h^{f,j-1/2}, \nabla\cdot \bld v_h)_f
&\\
\;\; -(\nabla\cdot\bld u_h^{j-1/2}, q_h^f)_f\;\;
+ 2\mu^s A_h^s(
(\bld \eta^{s,j-1/2}_h, \widehat{\bld \eta}^{s,j-1/2}_h),(\bld v_h, \widehat{\bld v}_h) )
+\lambda^s(\nabla \cdot \bld \eta_h^{s,j-1/2},
\nabla \cdot\bld v_h)_s
&\;
=\; (\bld f^{j-1/2}, \bld v_h), \nonumber\\
\label{full-2}
(\frac{\bld \eta_h^{s,j}-\bld \eta_h^{s,j-1}}{\delta t_{j-1}}-\bld
u_h^{j-1/2}, \bld \xi_h^s)_s &\;= \;0,\\
\label{full-3}
\langle
\frac{\widehat{\bld \eta}_h^{s,j}-\widehat{\bld \eta}_h^{s,j-1}}{\delta t_{j-1}}-
\widehat{\bld u}_h^{j-1/2},
\widehat{\bld \xi}^s_h\rangle_s&\;=\;0,
\end{alignat}
for all
$(\bld v_h, \widehat{\bld v}_h, q_h^f, \bld \xi_h^s, \widehat{\bld
\xi}_h^s)\in \bld V_{h,0}^k\times \widehat{\bld V}_{h,0}^{k-1}\times
Q_h^{k-1,f}\times\bld V_{h,0}^{k,s}\times \widehat{\bld V}_{h,0}^{k-1,s}
$,
where
$$
\bld u_h^{j-1/2}:=\frac12(\bld u_h^j+\bld u_h^{j-1}),\;\;
\bld \eta_h^{s,j-1/2}:=\frac12(\bld \eta_h^{s,j}+\bld \eta_h^{s,j-1}), \;\;
\widehat{\bld \eta}_h^{s,j-1/2}:=\frac12(
\widehat{\bld \eta}_h^{s,j}+\widehat{\bld \eta}_h^{s,j-1}).
$$
\end{subequations}
We have the following result on the energy stability of the
fully discrete scheme \eqref{full}.
\begin{lemma}[Stability for the fully discrete scheme]
\label{lemma:stab-full}
Let $(\bld u_h^{0},
\bld \eta_h^{s,0}, \widehat{\bld
\eta}_h^{s,0})\in \bld V_{h,0}^k \times\bld V_{h,0}^{k,s}\times
\widehat{\bld V}_{h,0}^{k-1,s}
$ be a proper projection of the initial data
in \eqref{init} such that
$\nabla\cdot \bld u_h^0|_{\mathcal{T}_h^f} = 0$.
For any positive integer $j\in \mathbb{Z}_+$, let $(\bld u_h^{j},
\widehat{\bld u}_h^{j-1/2}, p_h^{f,j-1/2}, \bld \eta_h^{s,j},
\widehat{\bld \eta}_h^{s,j})\in
\bld V_{h,0}^k\times \widehat{\bld V}_{h,0}^{k-1}\times
Q_h^{k-1,f}\times\bld V_{h,0}^{k,s}\times \widehat{\bld V}_{h,0}^{k-1,s}
$ be the numerical solution to the fully discrete scheme \eqref{full}.
Then, the velocity approximation on the fluid domain is exactly divergence free:
\begin{align}
\label{div2}
\nabla\cdot \bld u_h^j|_{\mathcal{T}_h^f} = 0,
\end{align}
and the following energy identity holds true:
\begin{align}
\label{ener2}
\frac12\frac{E_h^j-E_h^{j-1}}{\delta t_{j-1}} = -2\mu^f
A_h^f((\bld u_h^{j-1/2}, \widehat{\bld u}_h^{j-1/2}),(\bld u_h^{j-1/2},
\widehat{\bld u}_h^{j-1/2}))
+(\bld f^{j-1/2}, \bld u_h^{j-1/2}),
\end{align} where
$
E_h^j := (\rho\bld u_h^j, \bld u_h^j)+
\lambda^s(\nabla \cdot \bld \eta_h^{s,j}, \nabla \cdot \bld \eta^{s,j}_h)_s
+2\mu^s
A_h^s((\bld \eta^{s,j}_h, \widehat{\bld \eta}^{s,j}_h),(\bld \eta^{s,j}_h, \widehat{\bld
\eta}^{s,j}_h))
$
is the total energy at time $t_j$.
\end{lemma}
\begin{proof}
The proof follows the same lines as that for the semi-discrete case in
Lemma \ref{lemma:stab-semi}; we omit the details for brevity.
\end{proof}
\subsubsection{Efficient implementation of the fully discrete
scheme \eqref{full}}
\label{sub:imp}
We now present an efficient implementation of the scheme
\eqref{full} whose globally coupled linear system consists of
DOFs for
the normal component of the velocity approximation,
the tangential component of the hybrid velocity approximation
on the mesh skeleton, and one pressure DOF per element
on the mesh.
We need the following result on the characterization of the
fully discrete solution.
\begin{lemma}[Characterization of the fully discrete solution]
\label{lemma:char}
\begin{subequations}
For any positive integer $j\in \mathbb{Z}_+$, let $(\bld u_h^{j},
\widehat{\bld u}_h^{j-1/2}, p_h^{f,j-1/2}, \bld \eta_h^{s,j},
\widehat{\bld \eta}_h^{s,j})\in
\bld V_{h,0}^k\times \widehat{\bld V}_{h,0}^{k-1}\times
Q_h^{k-1,f}\times\bld V_{h,0}^{k,s}\times \widehat{\bld V}_{h,0}^{k-1,s}
$ be the numerical solution to the fully discrete scheme \eqref{full}.
Then,
$(\bld u_h^{j-1/2}, \widehat{\bld u}_h^{j-1/2},p_h^{f,j-1/2})
\in
\bld V_{h,0}^k\times \widehat{\bld V}_{h,0}^{k-1}\times
Q_h^{k-1,f}
$ is the unique solution to the following equations:
\begin{alignat}{2}
\label{fullX}
&(2\rho \frac{\bld u_h^{j-1/2}}{\delta t_{j-1}}, \bld v_h)+
2\mu^f A_h^f(
(\bld u_h^{j-1/2}, \widehat{\bld u}_h^{j-1/2}),(\bld v_h, \widehat{\bld
v}_h) )\\
& -(p_h^{f,j-1/2}, \nabla\cdot \bld v_h)_f
-(\nabla\cdot\bld u_h^{j-1/2}, q_h^f)_f
\;\;\nonumber\\
&
+\frac12\delta t_{j-1}\left( 2\mu^s A_h^s(
(\bld u^{j-1/2}_h, \widehat{\bld u}^{j-1/2}_h),(\bld v_h, \widehat{\bld v}_h) )
+\lambda^s(\nabla \cdot \bld u_h^{j-1/2},
\nabla \cdot\bld v_h)_s\right)
\;
=\;
F^{j-1/2}((\bld v_h, \widehat{\bld v}_h))\nonumber
\end{alignat}
for all
$(\bld v_h, \widehat{\bld v}_h, q_h^f)\in \bld V_{h,0}^k\times \widehat{\bld V}_{h,0}^{k-1}\times
Q_h^{k-1,f}$, where the right hand side
\begin{align}
\label{source}
F^{j-1/2}((\bld v_h, \widehat{\bld v}_h))
=&\;
(2\rho \frac{\bld u_h^{j-1}}{\delta t_{j-1}}, \bld v_h)
+(\bld f^{j-1/2}, \bld v_h)\\
&\;
- \left( 2\mu^s A_h^s(
(\bld \eta^{s,j-1}_h, \widehat{\bld \eta}^{s,j-1}_h),(\bld v_h, \widehat{\bld v}_h) )
+\lambda^s(\nabla \cdot \bld \eta_h^{s,j-1},
\nabla \cdot\bld v_h)_s\right),\nonumber
\end{align}
where $\bld f^{j-1/2}$ is the source term evaluated at time
$t_{j-1/2}$.
Moreover, the velocity and displacement approximations at time
$t_j$ satisfy the following relations:
\begin{align}
\label{u-recov}
\bld u_h^{j} = &\; 2\bld u_h^{j-1/2}-\bld u_h^{j-1},
\;\;
\bld \eta_h^{s,j} = \;
\bld \eta_h^{s,j-1}+\delta t_{j-1}\bld u_h^{j-1/2}|_{\mathcal{T}_h^s},\;\;
\widehat{\bld \eta}_h^{s,j} = \;
\widehat{ \bld \eta}_h^{s,j-1}+\delta t_{j-1}\widehat{\bld
u}_h^{j-1/2}|_{\mathcal{T}_h^s}.
\end{align}
\end{subequations}
\end{lemma}
\begin{proof}
The relations \eqref{u-recov} are direct consequences
of the definition of $\bld u_h^{j-1/2}$, the same choice of the finite
element spaces for velocity and displacement, and
the equations \eqref{full-2}--\eqref{full-3}.
Plugging these relations back into equation \eqref{full-1} and
reordering terms, we recover the equations \eqref{fullX}.
This completes the proof.
\end{proof}
\begin{remark}[Connection with the coupled momentum method]
The idea of using the same finite element space
for displacement and velocity approximations
to
eliminate the displacement unknowns in the global linear system
originated in the coupled momentum method of
Figueroa et al. \cite{figueroa2006coupled},
where they considered an FSI problem with thin structure.
See also related work in \cite{NobileVergara08}.
\end{remark}
With the help of Lemma \ref{lemma:char}, we proceed to implement
the fully discrete scheme \eqref{full} as follows:
Let $(\bld u_h^{0},
\bld \eta_h^{s,0}, \widehat{\bld
\eta}_h^{s,0})\in \bld V_{h,0}^k \times\bld V_{h,0}^{k,s}\times
\widehat{\bld V}_{h,0}^{k-1,s}
$ be a proper projection of the initial data
in \eqref{init}.
For $j = 1,2,\cdots$, we perform the following three steps to
advance the solution from time level $t_{j-1}$ to time level $t_{j}=t_{j-1}+\delta
t_{j-1}$:
\begin{itemize}
\item [(1)] Determine the time step size
$\delta t_{j-1}$ and compute the right hand side
$F^{j-1/2}$ in \eqref{source}.
\item [(2)]
Solve
for
$(\bld u_h^{j-1/2}, \widehat{\bld u}_h^{j-1/2},p_h^{f,j-1/2})
\in
\bld V_{h,0}^k\times \widehat{\bld V}_{h,0}^{k-1}\times
Q_h^{k-1,f}
$ using equations \eqref{fullX}.
\item [(3)] Recover velocity and displacement approximations
at time $t_{j}$ using the relations \eqref{u-recov}.
\end{itemize}
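To make the three-step advance concrete, the following Python sketch applies steps (1)--(3) to a scalar surrogate of the coupled problem, $\rho u' = f - k\eta$, $\eta' = u$; the scalar model and all names are illustrative, not part of the scheme above. The midpoint solve plays the role of \eqref{fullX} and the updates the role of \eqref{u-recov}; with $f=0$ the quadratic energy is conserved exactly, mirroring Lemma \ref{lemma:stab-full}.

```python
# Crank-Nicolson advance mirroring steps (1)-(3) for a scalar surrogate:
# rho*u' = f - k*eta (momentum), eta' = u (kinematics). Illustrative only.

def advance(u, eta, dt, rho, k, f_mid):
    # Step (1): assemble the right-hand side F^{j-1/2}
    F = 2.0 * rho / dt * u + f_mid - k * eta
    # Step (2): midpoint solve, the scalar analogue of the global system
    u_mid = F / (2.0 * rho / dt + 0.5 * dt * k)
    # Step (3): recovery relations for u^j and eta^j
    return 2.0 * u_mid - u, eta + dt * u_mid

rho, k, dt = 2.0, 3.0, 0.05
u, eta = 1.0, 0.5
E0 = rho * u**2 + k * eta**2          # discrete energy rho*u^2 + k*eta^2
for _ in range(200):
    u, eta = advance(u, eta, dt, rho, k, f_mid=0.0)
# with f = 0, Crank-Nicolson conserves the quadratic energy exactly
assert abs(rho * u**2 + k * eta**2 - E0) < 1e-10
```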
The major computational cost of the above implementation lies in
the global linear system solver in step (2).
To make the linear system problem easier to solve, we
introduce an equivalent
characterization of the solution to the equations \eqref{fullX}
in Lemma \ref{lemma:mod} below. In the actual implementation,
we solve the equivalent linear system problem \eqref{fullY} in step (2)
instead of \eqref{fullX}.
In the next section, we will design
an efficient block AMG preconditioner for this system.
\begin{lemma}[A modified implementation of the scheme \eqref{fullX}]
\label{lemma:mod}
For any positive integer $j\in \mathbb{Z}_+$,
let
$(\bld u_h^{j-1/2}, \widehat{\bld u}_h^{j-1/2},p_h^{j-1/2})
\in
\bld V_{h,0}^k\times \widehat{\bld V}_{h,0}^{k-1}\times
Q_h^{k-1}$ be the unique solution to the following
equations:
\begin{alignat}{2}
\label{fullY}
&(2\rho \frac{\bld u_h^{j-1/2}}{\delta t_{j-1}}, \bld v_h)+
2\mu^f A_h^f(
(\bld u_h^{j-1/2}, \widehat{\bld u}_h^{j-1/2}),(\bld v_h, \widehat{\bld
v}_h) )
+\delta t_{j-1}\mu^s A_h^s(
(\bld u^{j-1/2}_h, \widehat{\bld u}^{j-1/2}_h),(\bld v_h, \widehat{\bld v}_h) )
\\
& -(p_h^{j-1/2}, \nabla\cdot \bld v_h)
-(\nabla\cdot\bld u_h^{j-1/2}, q_h)
-(\frac{2}{\delta t_{j-1}\lambda^s}p_h^{j-1/2}, q_h)_s
\;\;=\;
F^{j-1/2}((\bld v_h, \widehat{\bld v}_h))\nonumber
\end{alignat}
for all
$(\bld v_h, \widehat{\bld v}_h, q_h)\in \bld V_{h,0}^k\times \widehat{\bld V}_{h,0}^{k-1}\times
Q_h^{k-1}$, where the right hand side $F^{j-1/2}$
is defined in \eqref{source}.
Then
$(\bld u_h^{j-1/2}, \widehat{\bld u}_h^{j-1/2},p_h^{j-1/2}|_{\mathcal{T}_h^f})
\in
\bld V_{h,0}^k\times \widehat{\bld V}_{h,0}^{k-1}\times
Q_h^{k-1,f}$ is the unique
solution to the equations \eqref{fullX}.
\end{lemma}
\begin{proof}
Taking a test function $q_h\in Q_h^{k-1,s}$ in equations \eqref{fullY}, we get
$$
(\nabla \cdot \bld u_h^{j-1/2}+\frac{2}{\delta t_{j-1}\lambda^s}
p_h^{j-1/2},
q_h)_s = 0 \quad \forall q_h\in Q_h^{k-1,s}.
$$
Since $\nabla\cdot \bld u_h^{j-1/2}|_{\mathcal{T}_h^s} \in Q_h^{k-1,s}$,
the above equation implies that
$$
p_h^{j-1/2}|_{\mathcal{T}_h^s} = -\frac{\delta t_{j-1}\lambda^s}{2}\nabla\cdot \bld
u_h^{j-1/2}|_{\mathcal{T}_h^s}.
$$
Hence,
$$
-(p_h^{j-1/2}, \nabla \cdot \bld v_h)
=
-(p_h^{j-1/2}, \nabla \cdot \bld v_h)_f
+\frac{\delta t_{j-1}}{2}\lambda^s(\nabla\cdot\bld u_h^{j-1/2}, \nabla
\cdot \bld v_h)_s.$$
Plugging this expression back into the equations \eqref{fullY}, we recover
the equations in \eqref{fullX} for all
$(\bld v_h, \widehat{\bld v}_h, q_h)\in \bld V_{h,0}^k\times \widehat{\bld V}_{h,0}^{k-1}\times
Q_h^{k-1,f}$. This completes the proof.
\end{proof}
\begin{remark}[Other high-order
implicit time stepping strategies]
We concentrated on the discretization and implementation of the
Crank-Nicolson time stepping \eqref{full} in this subsection.
Alternatively, one can use any other high-order implicit time stepping
strategy, such as the backward differentiation formulas (BDF) or the
diagonally implicit Runge-Kutta methods \cite{HairerWanner10}.
The third-order BDF3 scheme reads as follows (assuming uniform time step
size $\delta t>0$):
For $j\ge 3$,
given approximations
$(\bld u_h^{j-m},
\bld \eta_h^{s,j-m}, \widehat{\bld
\eta}_h^{s,j-m})\in \bld V_{h,0}^k \times\bld V_{h,0}^{k,s}\times
\widehat{\bld V}_{h,0}^{k-1,s}
$ at time $t_{j-m}=(j-m)\,\delta t$
for $m=1,2,3$,
we proceed to find the solution
$(\bld u_h^{j},\widehat{\bld u}_h^j, p_h^{f,j},
\bld \eta_h^{s,j}, \widehat{\bld
\eta}_h^{s,j})\in \bld V_{h,0}^k \times\widehat{\bld V}_{h,0}^{k-1}\times Q_h^{k-1,f}\times
\bld V_{h,0}^{k,s}\times
\widehat{\bld V}_{h,0}^{k-1,s}
$ at time $t_{j}=j\,\delta t$
such that the following
equations hold:
\begin{subequations}
\label{fullB}
\begin{alignat}{2}
\label{fullB-1}
(\rho \mathtt{D}_t\bld u_h^j, \bld v_h)+
2\mu^f A_h^f(
(\bld u_h^{j}, \widehat{\bld u}_h^{j}),(\bld v_h, \widehat{\bld v}_h) )
-(p_h^{f,j}, \nabla\cdot \bld v_h)_f
&\\
\;\; -(\nabla\cdot\bld u_h^{j}, q_h^f)_f\;\;
+ 2\mu^s A_h^s(
(\bld \eta^{s,j}_h, \widehat{\bld \eta}^{s,j}_h),(\bld v_h, \widehat{\bld v}_h) )
+\lambda^s(\nabla \cdot \bld \eta_h^{s,j},
\nabla \cdot\bld v_h)_s
&\;
=\; (\bld f^{j}, \bld v_h), \nonumber\\
\label{fullB-2}
(\mathtt{D}_t\bld \eta_h^{s,j}
-\bld
u_h^{j}, \bld \xi_h^s)_s &\;= \;0,\\
\label{fullB-3}
\langle
\mathtt{D}_t\widehat{\bld \eta}_h^{s,j}
-\widehat{\bld u}_h^{j},
\widehat{\bld \xi}^s_h\rangle_s&\;=\;0,
\end{alignat}
for all
$(\bld v_h, \widehat{\bld v}_h, q_h^f, \bld \xi_h^s, \widehat{\bld
\xi}_h^s)\in \bld V_{h,0}^k\times \widehat{\bld V}_{h,0}^{k-1}\times
Q_h^{k-1,f}\times\bld V_{h,0}^{k,s}\times \widehat{\bld V}_{h,0}^{k-1,s}
$,
where
$$
\mathtt{D}_t\bld \phi^{j}:=\frac{1}{\delta t}
\left(\frac{11}{6}\bld \phi^j-3\bld \phi^{j-1}+\frac32 \bld \phi^{j-2}
-\frac13\bld \phi^{j-3}\right)
$$
is the third order BDF
discretization of the time derivative $\partial_t \phi$.
We can proceed along the same lines as in Subsection
\ref{sub:imp} to implement the scheme \eqref{fullB} such that
we only need to solve a global linear system
of the form \eqref{fullY} in each time step.
\end{subequations}
\end{remark}
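As a sanity check of the BDF3 stencil in the remark above, the following Python snippet (illustrative, using exact rational arithmetic) verifies that $\mathtt{D}_t$ reproduces the time derivative exactly for polynomials of degree at most three, which is the source of the third-order accuracy.

```python
# Verify the BDF3 stencil (11/6, -3, 3/2, -1/3)/dt is exact on cubics.
from fractions import Fraction

def bdf3(phi, dt):
    """phi = [phi^{j-3}, phi^{j-2}, phi^{j-1}, phi^j]."""
    c = [Fraction(-1, 3), Fraction(3, 2), Fraction(-3), Fraction(11, 6)]
    return sum(ci * pi for ci, pi in zip(c, phi)) / dt

dt = Fraction(1, 10)
tj = Fraction(2)
times = [tj - m * dt for m in (3, 2, 1, 0)]
for p in range(4):                       # phi(t) = t^p, phi'(t) = p t^{p-1}
    approx = bdf3([t**p for t in times], dt)
    exact = p * tj**(p - 1) if p > 0 else Fraction(0)
    assert approx == exact               # exact in rational arithmetic
```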
\section{Preconditioning}
\label{sec:prec}
\subsection{Preliminaries}
In this section, we focus on an efficient solver for the
linear system problem \eqref{fullY}.
The same technique can be used to solve the related linear system
for the scheme based on the BDF3 time stepping \eqref{fullB}.
To simplify notation, we remove all temporal indices in this section.
Hence, the linear system problem of interest has the following
form: Find
$(\bld u_h, \widehat{\bld u}_h,p_h)
\in
\bld V_{h,0}^k\times \widehat{\bld V}_{h,0}^{k-1}\times
Q_h^{k-1}$ such that
\begin{alignat}{2}
\label{fullZ}
&(2\rho \frac{\bld u_h}{\delta t}, \bld v_h)+
2\mu^f A_h^f(
(\bld u_h, \widehat{\bld u}_h),(\bld v_h, \widehat{\bld
v}_h) )
+\delta t\mu^s A_h^s(
(\bld u_h, \widehat{\bld u}_h),(\bld v_h, \widehat{\bld v}_h) )
\\
& -(p_h, \nabla\cdot \bld v_h)
-(\nabla\cdot\bld u_h, q_h)
-(\frac{2}{\delta t\,\lambda^s}p_h, q_h)_s
\;\;=\;
F((\bld v_h, \widehat{\bld v}_h))\nonumber
\end{alignat}
for all
$(\bld v_h, \widehat{\bld v}_h, q_h)\in \bld V_{h,0}^k\times \widehat{\bld V}_{h,0}^{k-1}\times
Q_h^{k-1}$. Note that all the
finite element spaces are defined on the whole domain $\Omega$.
To further simplify notation, we denote
$$\mu:= \left\{\begin{tabular}{ll}
$\mu^f$ & on $\Omega^f$\\
$0.5 \delta t\,\mu^s$ & on $\Omega^s$
\end{tabular}\right.,
\text{ and }
\gamma:= \left\{\begin{tabular}{ll}
$0$ & on $\Omega^f$\\
$\frac{2}{\delta t\,\lambda^s}$ & on $\Omega^s$
\end{tabular}\right..
$$
We also denote
$$
A_h^{\mu}((\bld u_h, \widehat{\bld u}_h),(\bld v_h, \widehat{\bld
v}_h) ):=
2\mu^f A_h^f(
(\bld u_h, \widehat{\bld u}_h),(\bld v_h, \widehat{\bld
v}_h) )
+\delta t\mu^s A_h^s(
(\bld u_h, \widehat{\bld u}_h),(\bld v_h, \widehat{\bld v}_h)),
$$
which is an HDG discretization of the variable coefficient diffusion
operator $-\nabla\cdot(\mu\bld D(\bld u))$ on the whole domain $\Omega$.
Hence, the formulation \eqref{fullZ} simplifies to
\begin{align}
\label{fullF}
&\frac{2}{\delta t}(\rho \bld u_h, \bld v_h)+
A_h^{\mu}(
(\bld u_h, \widehat{\bld u}_h),(\bld v_h, \widehat{\bld
v}_h) )
-(p_h, \nabla\cdot \bld v_h)
-(\nabla\cdot\bld u_h, q_h)
-(\gamma p_h, q_h)
= F((\bld v_h, \widehat{\bld v}_h)).
\end{align}
The problem \eqref{fullF} can be rewritten in a matrix-vector formulation:
Find $[\underline{\mathsf{u}}_h; \mathsf{p}_h]\in
\mathbb{R}^{N_u+N_p}$ such that
\begin{align}
\label{matrix}
\left[\begin{tabular}{cc}
$\mathsf{A}_{h}^\mu+\frac{2}{\delta t}
\mathsf{M}^{\rho}_h$ & $\mathsf{B}_h$ \\[.8ex]
$\mathsf{B}_h^T$ & $-\mathsf{M}^{\gamma}_h$
\end{tabular}
\right]
\left[
\begin{tabular}{l}
$\underline{\mathsf u}_h$\\
$\mathsf p_h$
\end{tabular}
\right]
=\left[
\begin{tabular}{l}
$\mathsf{F}$\\
$0$
\end{tabular}
\right],
\end{align}
where $\underline{\mathsf u}_h\in \mathbb{R}^{N_u}
$ is the coefficient vector for the
compound velocity approximation
$ \underline{\bld u}_h:= (\bld u_h, \widehat{\bld u}_h)
\in \bld V_{h,0}^k\times
\widehat{\bld V}_{h,0}^{k-1}$ with
$N_u$ being the dimension of the compound finite element space
$ \bld V_{h,0}^k\times
\widehat{\bld V}_{h,0}^{k-1}$,
${\mathsf p}_h\in \mathbb{R}^{N_p}
$ is the coefficient vector for the
pressure approximation
${p}_h\in {Q}_{h}^{k-1}$ with
$N_p$ being the dimension of the finite element space
${Q}_{h}^{k-1}$. Moreover,
the matrix $\mathsf{A}_h^\mu\in \mathbb{R}^{N_u\times N_u}$
is associated with the bilinear form
$A_h^\mu(\underline{\bld u}_h, \underline{\bld v}_h)$,
the matrix $\mathsf{M}_h^\rho\in \mathbb{R}^{N_u\times N_u}$
is associated with the bilinear form
$(\rho \bld u_h, \bld v_h)$,
the matrix $\mathsf{B}_h\in \mathbb{R}^{N_p\times N_u}$
is associated with the bilinear form
$-(p_h, \nabla\cdot \bld v_h)$,
the matrix $\mathsf{M}_h^{\gamma}\in \mathbb{R}^{N_p\times N_p}$
is associated with the bilinear form
$(\gamma p_h, q_h)$, and
the vector $\mathsf{F}\in \mathbb{R}^{N_u}$ is associated
with the linear form $F(\underline{\bld v}_h)$.
The coefficient matrix in the linear system \eqref{matrix} has a block structure and is
symmetric and indefinite, with the 1-1 block
$\mathsf{A}_{h}^\mu+\frac{2}{\delta t}
\mathsf{M}^{\rho}_h$
being symmetric positive definite (SPD),
and the 2-2 block $-\mathsf{M}^{\gamma}_h$ being symmetric and negative semi-definite.
A popular method to solve the symmetric saddle point problem
\eqref{matrix}, which we adopt in this work,
is to use a preconditioned MinRes solver \cite{Saad03}
with the following block diagonal preconditioner
\cite{Murphy00,Ipsen01}:
\begin{align}
\label{prec}
\mathsf{P} =
\left[\begin{tabular}{cc}
$\widehat{\mathsf {iA}}
$
& $0$ \\[.8ex]
$0$ & $\widehat{\mathsf{iS}}$
\end{tabular}
\right]
\end{align}
where $\widehat{\mathsf{iA}}$ is an appropriate preconditioner of the SPD matrix
$\mathsf A:=\mathsf{A}_{h}^\mu+\frac{2}{\delta t}
\mathsf{M}^{\rho}_h,$
and
$\widehat{\mathsf{iS}}$ is an appropriate preconditioner of the
(dense) Schur complement SPD matrix
$\mathsf S :=\mathsf{B}_h^T(\mathsf{A}_{h}^\mu+\frac{2}{\delta t}
\mathsf{M}^{\rho}_h)^{-1}\mathsf{B}_h
+\mathsf{M}_h^{\gamma}
.$
The detailed construction of the
preconditioner for the Schur complement (pressure) matrix $\mathsf{S}$ is discussed in Subsection \ref{sec:s-block}, where
we borrow ideas in the literature on preconditioning
the closely related, generalized Stokes problem \cite{Cahouet88,Elman01,Olshanskii06}.
The detailed construction of the
preconditioner for the SPD velocity matrix $\mathsf{A}$
is discussed in Subsection \ref{sec:a-block}, where
we use
an auxiliary space preconditioner \cite{Xu96}
along with algebraic multigrid.
We mention that for polynomial degree $k\ge 2$, the preconditioned
MinRes solver is applied to the statically condensed subsystem of
\eqref{matrix}, see the discussion in Remark \ref{rk:condense}.
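As an illustration of this block-diagonal preconditioning strategy, the following SciPy sketch solves a small synthetic saddle-point system with the structure of \eqref{matrix} using MinRes; here exact block inverses stand in for the AMG-based preconditioners $\widehat{\mathsf{iA}}$ and $\widehat{\mathsf{iS}}$ constructed in the next subsections, and all sizes and matrices are illustrative.

```python
# Block-diagonal preconditioned MinRes on a synthetic symmetric
# indefinite saddle-point system [[A, B], [B^T, -Mg]].
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(0)
nu, npr = 30, 10
A = sp.random(nu, nu, density=0.3, random_state=0)
A = (A @ A.T + 10 * sp.identity(nu)).tocsc()        # SPD velocity block
B = sp.random(nu, npr, density=0.3, random_state=1).tocsc()
Mg = 0.1 * sp.identity(npr)                         # pressure (gamma) block

K = sp.bmat([[A, B], [B.T, -Mg]]).tocsc()           # symmetric indefinite
rhs = np.concatenate([rng.standard_normal(nu), np.zeros(npr)])

iA = spla.factorized(A)                             # stand-in for AMG on A
S = (B.T @ spla.inv(A) @ B + Mg).toarray()          # Schur complement
iS = np.linalg.inv(S)

def apply_P(r):                                     # block-diagonal preconditioner
    return np.concatenate([iA(r[:nu]), iS @ r[nu:]])

P = spla.LinearOperator(K.shape, matvec=apply_P, dtype=np.float64)
x, info = spla.minres(K, rhs, M=P)
assert info == 0
assert np.linalg.norm(K @ x - rhs) < 1e-3 * np.linalg.norm(rhs)
```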
\begin{remark}[Connection with a generalized Stokes interface problem]
The discretization \eqref{fullF}, or the form \eqref{matrix},
is closely related to
a divergence-conforming HDG discretization of a
generalized Stokes interface problem (with a fixed interface)
with variable density $\rho$ and variable viscosity
$\mu$, cf.
\cite{Fu20}. The only difference between the divergence-conforming HDG linear system for
the generalized Stokes interface problem and the
current FSI problem is that the pressure block is {\it zero} for the
former, while it is $-\mathsf{M}_{h}^{\gamma}$
for the latter, which is a symmetric negative semi-definite
matrix and
represents the compressibility of the structure.
A non-zero pressure block also appears in the
finite element discretization of the Stokes problem
using pressure-stabilized methods, or the linear elasticity problem with
a displacement-pressure formulation.
\end{remark}
\begin{remark}[Static condensation for $k\ge 2$]
\label{rk:condense}
When polynomial degree $k\ge 2$, we shall
solve the linear system problem
\eqref{matrix} using static condensation to locally eliminate
interior velocity DOFs and high-order pressure DOFs
\cite{LehrenfeldSchoberl16}.
The resulting global linear system after static condensation
consists of DOFs for the normal component of velocity
approximation (of degree $k$)
in $\bld V_{h,0}^k$
and the tangential (hybrid) velocity approximation (of degree $k-1$)
in $\widehat{\bld V}_{h,0}^{k-1}$
on the mesh skeleton (facets), and
cell-average of pressure approximations (of degree 0) on the mesh.
We denote the compound velocity space corresponding to
the DOFs on mesh skeleton by $\underline{\bld V}_{h,0}^{k,
\mathsf{gl}}$ which is a subset of the compound space
$\underline{\bld V}_{h,0}^k$. The global pressure space is
simply the space of piecewise constants $Q_h^0$.
The linear system after condensation has a similar structure as that in
\eqref{matrix} but with a reduced size. We shall apply the preconditioned
MinRes solver for the condensed system in this case.
For this case, the matrices $\mathsf{A}$
and $\mathsf{S}$ shall be understood to be defined on the
reduced spaces $\underline{\bld V}_{h,0}^{k,\mathsf{gl}}$ and $Q_h^{0}$, respectively.
\end{remark}
\subsection{Preconditioning the Schur complement pressure matrix
$\mathsf S$}
\label{sec:s-block}
The preconditioner $\widehat{\mathsf{iS}}$ acts on the
piecewise constant global pressure space $Q_h^0$, and
is given as follows:
\begin{align}
\label{s-pre}
\widehat{\mathsf{iS}}
=(\mathsf{M}_h^{\mu,\gamma})^{-1}
+(\mathsf{N}_h^{\rho,\gamma})^{-1},
\end{align}
where
$\mathsf{M}_h^{\mu,\gamma}$ is the
weighted mass matrix associated with the bilinear form
$((\mu^{-1}+\gamma) p_h, q_h)$ on the piecewise constant
global pressure space $Q_h^0$,
and $\mathsf{N}_h^{\rho,\gamma}$ is the matrix
associated with the bilinear form
\begin{align}
\label{neumann}
(\gamma p_h, q_h)+\frac{\delta t}{2}
\sum_{F\in\mathcal{E}_h\backslash{\partial \Omega}}\int_{F}\frac{\{\rho^{-1}\}}{h}\jmp{p_h}
\jmp{q_h}\,\mathrm{ds},\quad \forall p_h, q_h\in Q_h^0,
\end{align}
where
$\{\rho^{-1}\}|_{F}:=\frac{\rho^++\rho^-}{2\rho^+\rho^-}$ is the
average of $\rho^{-1}$ (the inverse of the harmonic mean of $\rho$), and
$\jmp{\phi}=\phi^+-\phi^-$ is the jump of $\phi$ on an interior facet $F$.
Note that the mass matrix $\mathsf{M}_h^{\mu,\gamma}$ is diagonal and
its inversion is trivial.
Also, note that the bilinear form \eqref{neumann}
corresponds to the interior penalty discretization of the operator
$\gamma p - \frac{\delta t}{2}\nabla \cdot (\rho^{-1}\nabla p)$
with a homogeneous Neumann boundary condition using the piecewise constant space $Q_h^0$.
The jump term
in \eqref{neumann} was shown in \cite{Rusten96} to be
spectrally equivalent to the operator
$\frac{\delta t}{2}\mathsf{B}_h^T(\mathsf{M}_h^{\rho})^{-1}\mathsf{B}_h$ when
the density $\rho$ is uniformly bounded from above and below.
Hence, $(\mathsf{N}_h^{\rho,\gamma})^{-1}$ serves as a robust
preconditioner for the (dense) Schur complement matrix
$\frac{\delta
t}{2}\mathsf{B}_h^T(\mathsf{M}_h^{\rho})^{-1}\mathsf{B}_h+\mathsf{M}_h^{\gamma}$.
In the actual numerical realization
of $(\mathsf{N}_h^{\rho,\gamma})^{-1}$, we use hypre's BoomerAMG
preconditioner \cite{hypre, Henson02}
for the matrix $\mathsf{N}_h^{\rho,\gamma}$.
We note that the
pressure Schur complement preconditioner \eqref{s-pre}
was initially introduced for the generalized Stokes problem (constant
density, constant viscosity, and $\gamma=0$) by
Cahouet and Chabard \cite{Cahouet88}.
Robustness of this
Cahouet-Chabard preconditioner
for the generalized Stokes problem
with respect to variations in the mesh size
$h$ and time step size $\delta t$ was proven in \cite{Bramble97,Mardal04,
Olshanskii06}.
It was then generalized by Olshanskii et al. \cite{Olshanskii06}
to the generalized Stokes interface problem
(variable density, variable viscosity, and $\gamma =0$).
While a theoretical proof of the robustness of the preconditioner in
\cite{Olshanskii06} for the variable density and viscosity
case is still lacking, due to the absence of regularity results for the stationary
Stokes interface problem, the numerical results reported in
\cite{Olshanskii06} indicate that the preconditioner is
robust also with respect to jumps in viscosity and density
over large parameter ranges.
Hence, our preconditioner \eqref{s-pre} can be considered as a
generalization of the one in \cite{Olshanskii06} to take into account the
structure compressibility ($\gamma>0$ on $\Omega_s$) in the pressure block.
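The action of \eqref{s-pre} is simply the sum of a diagonal mass solve and one Poisson-type solve. The following Python sketch applies it on a 1D piecewise-constant toy discretization; the stencil, sizes, and coefficients are illustrative, and in practice the second solve is replaced by one BoomerAMG cycle rather than an exact factorization.

```python
# Applying the Cahouet-Chabard-type preconditioner:
# iS r = (M^{mu,gamma})^{-1} r + (N^{rho,gamma})^{-1} r.
import numpy as np

def make_N(n, gamma, coef):
    """1D piecewise-constant IP stencil for gamma*p - d/dx(coef*dp/dx),
    homogeneous Neumann boundary conditions."""
    N = gamma * np.eye(n)
    for i in range(n - 1):               # one interior facet per cell pair
        N[i, i] += coef; N[i + 1, i + 1] += coef
        N[i, i + 1] -= coef; N[i + 1, i] -= coef
    return N

n = 16
M_diag = np.full(n, 2.0)                 # diagonal weighted mass matrix
N = make_N(n, gamma=0.5, coef=3.0)       # SPD: gamma*I + graph Laplacian

def apply_iS(r):
    # diagonal mass solve plus one Poisson-type solve (exact here)
    return r / M_diag + np.linalg.solve(N, r)

r = np.sin(np.arange(n, dtype=float))
z = apply_iS(r)
assert z.shape == (n,) and np.isfinite(z).all()
```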
\subsection{Preconditioning the velocity stiffness matrix $\mathsf A$}
\label{sec:a-block}
The matrix $\mathsf{A}$ corresponds to the divergence-conforming
HDG discretization of the elliptic operator $\frac{2}{\delta t}\rho\bld u
-2\nabla\cdot (\mu\bld{D}(\bld u))$. Here we propose to use
the auxiliary space preconditioner \cite{Xu96} developed in \cite{Fu20a}.
The auxiliary space is the continuous linear Lagrange
finite elements:
$$
\bld V_{h,0}^{cg} := \{\bld v\in \bld H_0^{1}(\Omega):\;\;
\bld v|_K\in [\mathcal{P}^1(K)]^d, \;\forall K\in \mathcal{T}_h\}.
$$
The auxiliary space preconditioner for $\mathsf{A}$ is of the following
form:
\begin{align}
\label{a-pre}
\widehat{ \mathsf{iA}} = \mathsf{R} + \mathsf{P}\mathcal{A}^{-1}\mathsf{P}^T,
\end{align}
where $\mathsf{R}\in \mathbb{R}^{N_{u}^{\mathsf{gl}}\times
N_{u}^{\mathsf{gl}}}$ is the (point) Gauss-Seidel smoother for the matrix
$\mathsf{A}$, with $N_{u}^\mathsf{gl}$
being the dimension of the reduced compound space
$\underline{\bld V}_{h,0}^{k,\mathsf{gl}}$,
the matrix $\mathcal{A}\in \mathbb{R}^{N_c\times N_c}
$ is the matrix associated with the
following bilinear form on the auxiliary space $\bld V_{h,0}^{cg}$:
$$
(\frac{2}{\delta t}\rho\,\mathtt u_h, \mathtt v_h)
+ 2(\mu\bld D(\mathtt u_h),
\bld D(\mathtt v_h)),\quad \forall \mathtt u_h,\mathtt v_h\in \bld
V_{h,0}^{cg},
$$
where $N_c$ is the dimension of $\bld V_{h,0}^{cg}$,
and the matrix
$\mathsf{P}\in \mathbb{R}^{N_u^{\mathsf{gl}}\times N_c}$
is associated with the projector
$\underline{\Pi}:
\bld V_{h,0}^{cg}
\rightarrow \underline{\bld V}_{h,0}^{k,\mathsf{gl}}
$, which is defined as follows:
for any function
$\mathtt u_h\in \bld V_{h,0}^{cg}$, find
$\underline{\Pi}\mathtt u_h
= (\Pi\mathtt u_h, \widehat{\Pi}\mathtt u_h)\in
\underline{\bld V}_{h,0}^{k,\mathsf{gl}}$ such that
\begin{subequations}
\label{proj}
\begin{align}
\sum_{F\in\mathcal{E}_h}
\int_F
(\Pi \mathtt u_h\cdot \bld n)(\bld v_h \cdot\bld n)
\mathrm{ds}
=&\;
\sum_{F\in\mathcal{E}_h}
\int_F
(\mathtt u_h\cdot \bld n)(\bld v_h\cdot\bld n)
\mathrm{ds},\\
\sum_{F\in\mathcal{E}_h}
\int_F
\mathsf{tang}(\widehat{\Pi} \mathtt{{u}}_h)\cdot
\mathsf{tang}(\widehat{\bld v}_h)
\mathrm{ds}
=&\;
\sum_{F\in\mathcal{E}_h}
\int_F
\mathsf{tang}(\mathtt u_h)\cdot
\mathsf{tang}(\widehat{\bld v}_h)
\mathrm{ds},
\end{align}
\end{subequations}
for all $(\bld v_h, \widehat{\bld v}_h)\in
\underline{\bld V}_{h,0}^{k,\mathsf{gl}}.
$
Note that the projector is defined locally, facet by facet,
and the transformation matrix $\mathsf{P}$ is sparse.
For the numerical realization of $\mathcal{A}^{-1}$, we
again use hypre's BoomerAMG.
\section{Semidiscrete a priori error analysis}
\label{sec:err}
In this section, we present an a priori error analysis for the semidiscrete
scheme \eqref{semi}. To simplify notation, we write
\begin{align*}
A\lesssim B
\end{align*}
to indicate that there exists a constant $C$, independent of mesh size $h$,
material parameters $\rho^{f/s}$, $\mu^{f/s}$, $\lambda^s$, and the numerical solution, such that $A\le CB$.
We denote the following (semi)norms:
\begin{subequations}
\label{norms}
\begin{align}
\|
(\bld v, \widehat{\bld v})
\|_{i,h}
&:=\;
\left(\sum_{K\in\mathcal{T}_h^i}
\left(\|\bld D(\bld v)\|_K^2
+\frac{\alpha k^2}{h}
\|\Pi_h(\mathsf{tang}(\bld v-\widehat{\bld v}))\|_{\partial K}^2
\right)\right)^{1/2},\\
\|
(\bld v, \widehat{\bld v})
\|_{i,*,h}
&:=\;
\left(\|
(\bld v, \widehat{\bld v})
\|_{i,h}^2+\sum_{K\in\mathcal{T}_h^i}h\|\bld D(\bld{v})\|_{\partial K}^2\right)^{1/2},\\
|\!|\!|\{\bld{v},
\bld \xi^s, \widehat{\bld \xi^s}\}
|\!|\!|_h
&:=\;
\left(\|\rho^{1/2}\bld{v}\|^2+2\mu^s\|
(\bld \xi^s, \widehat{\bld \xi^s})
\|_{s,h}^2+\lambda^s\|\nabla\cdot\bld{\xi}^s\|^2_s\right)^{1/2},
\end{align}
\end{subequations}
for $i\in\{f,s\}$
and $(\bld v,\widehat{\bld v}, \bld \xi^s,
\widehat{\bld \xi^s})\in
\bld V_{h,0}^k\times \widehat{\bld V}_{h,0}^{k-1}
\times\bld V_{h,0}^{k,s}\times \widehat{\bld V}_{h,0}^{k-1,s}
$, where $\|\cdot\|$ denotes the $L^2$-norm on $\Omega$ and $\|\cdot\|_i$ the $L^2$-norm on $\Omega^i$.
The inequality \eqref{coercivity} implies the
coercivity of the bilinear form $A_h^{i}$ with respect to
the norm $\|\cdot\|_{i,h}$.
We also have the following boundedness of the operator $A_h^{i}$:
\begin{align}
\label{boundNorm}
A_h^{i}(
(\bld v, \widehat{\bld v}),
(\bld w_h, \widehat{\bld w}_h)
)
&\lesssim
\|
(\bld v, \widehat{\bld v})
\|_{i,*,h}
\|
(\bld w_h, \widehat{\bld w}_h)
\|_{i,h},
\end{align}
for all
$ (\bld v, \widehat {\bld v})\in
\underline{\bld V}^i+\left(
\bld V_{h,0}^{k,i}\times \widehat{\bld V}_{h,0}^{k-1, i}\right)$ and
$ (\bld w_h, \widehat {\bld w}_h)\in
\bld V_{h,0}^{k,i}\times \widehat{\bld V}_{h,0}^{k-1, i}$,
where
\[
\underline{\bld V}^i:=\{(\bld v, \bld v|_{\mathcal{E}_h^i}):\;\;
\bld v|_K\in H^2(K), \;\;\forall K\in\mathcal{T}_h^i\}.
\]
We use the classical Brezzi-Douglas-Marini (BDM) interpolator
$\Pi_{BDM}$\cite[Proposition 2.3.2]{boffi2013mixed} to project $\bld{u}$ and
$\bld{\eta}^s$ onto the finite element spaces $\bld{V}^k_{h,0}$ and
$\bld{V}^{k,s}_{h,0}$. We denote $\Pi_Q$ as the $L^2$-projection onto the finite
element space $Q_h^{k-1}$. Note that due to the commuting projection property, we have:
\begin{subequations}
\label{commut}
\begin{alignat}{3}
(\nabla\cdot\Pi_{BDM}\bld{\eta^s},q_h^s)_s
&=
(\nabla\cdot\bld{\eta^s},q_h^s)_s, \quad&&\forall q_h^s\in Q_h^{k-1,s},
\\
\label{divFreeProj}
\nabla\cdot\Pi_{BDM}\bld{u}^f|_{\Omega^f}
&=
\Pi_{Q}(\nabla\cdot\bld{u}^f)|_{\Omega^f}=0.
\end{alignat}
\end{subequations}
The following standard approximation property of the BDM projector
$\Pi_{BDM}$ and the $L^2$-projector $\Pi_h$ onto
$\widehat{\bld V}_{h,0}^{k-1}$ is well-known; see \cite[Proposition 2.3.8]{Lehrenfeld10}.
\begin{lemma}
\label{lemma:approx}
Let $\bld u\in [H^1(\Omega)]^d\cap [H^{k+1}(\mathcal{T}_h)]^d$.
Then the following estimates hold:
\begin{align}
\label{approxErr}
\|(\bld{u}-\Pi_{BDM}\bld u,\bld{u}|_{\mathcal{E}_h^i}-\Pi_h\bld{u})\|_{i,*,h}^2
&\lesssim
h^{2k}\sum_{K\in\mathcal{T}_h^i}\|\bld{ u}\|_{H^{k+1}(K)}^2,
\end{align}
for $i\in\{f,s\}$.
\end{lemma}
To further simplify notation, we denote:
\begin{alignat*}{3}
&\underline{\bld{\delta}_{\bld{u}}}
&&:=
(\bld{\delta}_{\bld{u}},\bld{\delta}_{\widehat{\bld{u}}})
&&:=
(\bld{u}-\Pi_{BDM}\bld{u},\bld{u}|_{\mathcal{E}_h}-\Pi_h\bld{u}),
\\
&\underline{\bld{\delta}_{\bld{\eta}^s}}
&&:=
(\bld{\delta}_{\bld{\eta}^s},\bld{\delta}_{\widehat{\bld{\eta}}^s})
&&:=
(\bld{\eta}^s-\Pi_{BDM}\bld{\eta}^s,\bld{\eta}^s|_{\mathcal{E}_h^s}-\Pi_h\bld{\eta}^s),
\\
&\delta_{p^f}&&:=p^f-\Pi_Q p^f&&,
\\
&\underline{\bld{\varepsilon}_{\bld{u}}}
&&:=
(\bld{\varepsilon}_{\bld{u}},\bld{\varepsilon}_{\widehat{\bld{u}}})
&&:=
(\bld{u}_h-\Pi_{BDM}\bld{u},\widehat{\bld{u}}_h-\Pi_h\bld{u}),
\\
&\underline{\bld{\varepsilon}_{\bld{\eta}^s}}
&&:=
(\bld{\varepsilon}_{\bld{\eta}^s},\bld{\varepsilon}_{\widehat{\bld{\eta}}^s})
&&:=
(\bld{\eta}^s_h-\Pi_{BDM}\bld{\eta}^s,\widehat{\bld{\eta}}^s_h-\Pi_h\bld{\eta}^s),
\\
&\varepsilon_{p^f}&&:=p^f_h-\Pi_Q p^f&&.
\end{alignat*}
Here
$(\underline{\bld{u}_h},p^f_h,\underline{\bld{\eta}^s_h})
\in
\underline{\bld{V}_{h,0}^k}\times Q_{h,0}^{k-1,f}\times\underline{\bld{V}_{h,0}^{k,s}}$
is the solution to the semi-discrete scheme \eqref{semi}, with
the compound spaces denoted as
\[
\underline{\bld{V}_{h,0}^k}:=\bld {V}_{h,0}^k\times \widehat{\bld
V}_{h,0}^{k-1},\;\;
\underline{\bld{V}_{h,0}^{k,s}}:=\bld {V}_{h,0}^{k,s}\times \widehat{\bld
V}_{h,0}^{k-1,s}.
\]
\begin{lemma}[Error equations of the semi-discrete scheme \eqref{semi}]
We have
the following error equations for the semi-discrete scheme \eqref{semi}:
\begin{subequations}
\label{semiErrEqu}
\begin{align}
\label{semiErrEqu-1}
(\rho\partial_t\bld{\varepsilon}_{\bld{u}},\bld{v}_h)
+2\mu^f A_h^f\left(\underline{\bld{\varepsilon}_{\bld{u}}},\underline{\bld{v}_h}\right)
&-(\varepsilon_{p^f},\nabla\cdot\bld{v}_h)_f
+2\mu^s A_h^s\left(\underline{\bld{\varepsilon}_{\bld{\eta}^s}},\underline{\bld{v}_h}\right)
+\lambda^s(\nabla\cdot\bld{\varepsilon}_{\bld{\eta}^s},\nabla\cdot\bld{v}_h)_s
\\
&=\;(\rho\partial_t\bld{\delta}_{\bld{u}},\bld{v}_h)
+2\mu^f A_h^f\left(\underline{\bld{\delta}_{\bld{u}}},\underline{\bld{v}_h}\right)
+2\mu^s A_h^s\left(\underline{\bld{\delta}_{\bld{\eta}^s}},\underline{\bld{v}_h}\right),\nonumber
\\
\label{semiErrEqu-2}
(\partial_t\bld{\varepsilon}_{\bld{\eta}^s},\bld{\xi}_h^s)_s
&=\;(\bld{\varepsilon}_{\bld{u}^s},\bld{\xi}_h^s)_s,\\
\label{semiErrEqu-3}
\langle\partial_t\bld{\varepsilon}_{\widehat{\bld{\eta}}^s},\widehat{\bld{\xi}}_h^s\rangle_s
&=\;\langle\bld{\varepsilon}_{\widehat{\bld{u}}^s},\widehat{\bld{\xi}}_h^s\rangle_s,
\end{align}
\end{subequations}
for all $(\underline{\bld{v}_h},q_h^f,\underline{\bld{\xi}_h^s})\in\underline{\bld{V}_{h,0}^k}\times Q_{h,0}^{k-1,f}\times\underline{\bld{V}_{h,0}^{k,s}}$.
\end{lemma}
\begin{proof}
By subtracting the semi-discrete scheme \eqref{semiX-1} from the consistency result \eqref{semi-1}, and then adding and subtracting the above projections, we obtain the error equation \eqref{semiErrEqu-1}, where the commutativity property of the BDM interpolation \eqref{commut} is used. Equations \eqref{semiErrEqu-2} and \eqref{semiErrEqu-3} then follow directly, since $\Pi_{BDM}\bld{u}^s=\partial_t\Pi_{BDM}\bld{\eta}^s$ and $\Pi_h\bld{u}^s=\partial_t\Pi_h\bld{\eta}^s$.
\end{proof}
Note that, since the velocity and the displacement are approximated by the
same finite element spaces in $\Omega^s$, the error equations \eqref{semiErrEqu-2} and
\eqref{semiErrEqu-3} actually imply that
$\bld{\varepsilon}_{\bld{u}^s}=\partial_t\bld{\varepsilon}_{\bld{\eta}^s}$ and
$\bld{\varepsilon}_{\widehat{\bld{u}}^s}=\partial_t\bld{\varepsilon}_{\widehat{\bld{\eta}}^s}$.
Now we are ready to present the main result in this section.
\begin{theorem}
\label{theorem:semiErr}
Let $(\underline{\bld{u}_h},p_h^f,\underline{\bld{\eta}_h^s})$ be the solution
to the semi-discrete scheme \eqref{semi} with initial data such that
$\left(\underline{\bld{\varepsilon}_{\bld{u}}}(0),\varepsilon_{p^f}(0),\underline{\bld{\varepsilon}_{\bld{\eta}^s}}(0)\right)=\left(\underline{\bld{0}},0,\underline{\bld{0}}\right)$.
Assume the solution $(\bld u, \bld \eta^s)$ to the model problem \eqref{model}
is smooth.
Then the following estimate holds for all $T>0$:
\begin{equation}
\label{semiErr}
|\!|\!|
\{\bld{\varepsilon}_{\bld{u}}(T),\underline{\bld{\varepsilon}_{\bld{\eta}^s}}(T)\}
|\!|\!|_h^2
+
\mu^f\int_{0}^{T}
\|\underline{\bld{\varepsilon}_{\bld{u}}}\|_{f,h}^2\mathrm{dt}
\lesssim\;
h^{2k}\left(\text{\Large$\Xi$}_1+\text{\Large$\Xi$}_2+\text{\Large$\Xi$}_3\right),
\end{equation}
where
\begin{align*}
\text{\Large$\Xi$}_1
&:=
T\int_{0}^{T}\left(\|\rho^{1/2}\partial_t\bld{u}\|_{H^{k}(\Omega)}^2
+
\mu^s
\|\partial_t\bld{\eta}^s\|_{H^{k+1}(\Omega^s)}^2
\right)\mathrm{dt},
\\
\text{\Large$\Xi$}_2
&:=
\mu^f\int_{0}^{T}
\|\bld{u}\|_{H^{k+1}(\Omega^f)}^2\mathrm{dt},
\\
\text{\Large$\Xi$}_3
&:=
\mu^s
\|\bld{\eta}^s\|_{L^{\infty}\left(H^{k+1}(\Omega^s)\right)}^2.
\end{align*}
\end{theorem}
\begin{proof}
Here we use a standard energy argument. Taking $(\underline{\bld{v}_h},q_h^f)=(\underline{\bld{\varepsilon}_{\bld{u}}},\varepsilon_{p^f})$ in the error equation \eqref{semiErrEqu-1} and plugging in $\bld{\varepsilon}_{\bld{u}^s}=\partial_t\bld{\varepsilon}_{\bld{\eta}^s}$ and $\bld{\varepsilon}_{\widehat{\bld{u}}^s}=\partial_t\bld{\varepsilon}_{\widehat{\bld{\eta}}^s}$, we obtain:
\begin{align*}
\frac{1}{2}\frac{d}{dt}&\underbrace{\left(\|\rho^{1/2}\bld{\varepsilon}_{\bld{u}}\|^2
+
2\mu^s A_h^s\left(\underline{\bld{\varepsilon}_{\bld{\eta}^s}},\underline{\bld{\varepsilon}_{\bld{\eta}^s}}\right)
+
\lambda^s\|\nabla\cdot\bld{\varepsilon}_{\bld{\eta}^s}\|_s^2\right)}_{:=\mathcal{H}(t)}
+2\mu^f
A_h^f\left(\underline{\bld{\varepsilon}_{\bld{u}}},\underline{\bld{\varepsilon}_{\bld{u}}}\right)
\\
&=\;
(\rho\partial_t\bld{\delta}_{\bld{u}},\bld{\varepsilon}_{\bld{u}})
+
2\mu^f A_h^f\left(\underline{\bld{\delta}_{\bld{u}}},\underline{\bld{\varepsilon}_{\bld{u}}}\right)
+
2\mu^s
A_h^s\left(\underline{\bld{\delta}_{\bld{\eta}^s}},\partial_t\underline{\bld{\varepsilon}_{\bld{\eta}^s}}\right),\nonumber
\end{align*}
where we used the exact divergence-free property of $\bld{u}^f$ and
$\Pi_{BDM}\bld{u}^f$. Substituting into the right-hand side the product rule
for the time derivative
\begin{align*}
\frac{d}{d t}
A_h^s\left(\underline{\bld{\delta}_{\bld{\eta}^s}},\underline{\bld{\varepsilon}_{\bld{\eta}^s}}\right)
=
A_h^s\left(\partial_t\underline{\bld{\delta}_{\bld{\eta}^s}},\underline{\bld{\varepsilon}_{\bld{\eta}^s}}\right)
+
A_h^s\left(\underline{\bld{\delta}_{\bld{\eta}^s}},\partial_t\underline{\bld{\varepsilon}_{\bld{\eta}^s}}\right),
\end{align*}
and then applying the Cauchy-Schwarz inequality and boundedness of $A_h^i$ \eqref{boundNorm}, we get:
\begin{align*}
\frac{1}{2}\frac{d}{dt}\mathcal{H}(t)
+
2\mu^f
A_h^f\left(\underline{\bld{\varepsilon}_{\bld{u}}},\underline{\bld{\varepsilon}_{\bld{u}}}\right)
\lesssim&
\|\rho^{1/2}\partial_t\bld{\delta}_{\bld{u}}\|\,
\|\rho^{1/2}\bld{\varepsilon}_{\bld{u}}\|
+
2\mu^f\|\underline{\bld{\delta}_{\bld{u}}}\|_{f,*,h}
\|\underline{\bld{\varepsilon}_{\bld{u}}}\|_{f,h}
\\
&+
2\mu^s
\|\partial_t\underline{\bld{\delta}_{\bld{\eta}^s}}\|_{s,*,h}
\|\underline{\bld{\varepsilon}_{\bld{\eta}^s}}\|_{s,h}
+
2\mu^s
\frac{d}{dt}
A_h^s\left(\underline{\bld{\delta}_{\bld{\eta}^s}},\underline{\bld{\varepsilon}_{\bld{\eta}^s}}\right)
\\
\lesssim&
\Theta^{1/2}|\!|\!|\{\bld{\varepsilon}_{\bld{u}},\underline{\bld{\varepsilon}_{\bld{\eta}^s}}\}|\!|\!|_h
+
2\mu^f
\|\underline{\bld{\delta}_{\bld{u}}}\|_{f,*,h}
\|\underline{\bld{\varepsilon}_{\bld{u}}}\|_{f,h}
+
2\mu^s
\frac{d}{dt}
A_h^s\left(\underline{\bld{\delta}_{\bld{\eta}^s}},\underline{\bld{\varepsilon}_{\bld{\eta}^s}}\right),
\end{align*}
where $\Theta:=\left(\|\rho^{1/2}\partial_t\bld{\delta}_{\bld{u}}\|^2+ 2\mu^s
\|\partial_t\underline{\bld{\delta}_{\bld{\eta}^s}}\|_{s,*,h}^2\right)$. Integrating both sides in time from $t=0$ to $t=T$ and using the initial conditions $\left(\underline{\bld{\varepsilon}_{\bld{u}}}(0),\varepsilon_{p^f}(0),\underline{\bld{\varepsilon}_{\bld{\eta}^s}}(0)\right)=\left(\underline{\bld{0}},0,\underline{\bld{0}}\right)$, we obtain:
\begin{align*}
\mathcal{H}(T)
+
\mu^f
\int_{0}^{T}A_h^f\left(\underline{\bld{\varepsilon}_{\bld{u}}},\underline{\bld{\varepsilon}_{\bld{u}}}\right)\mathrm{dt}
\lesssim&
\int_{0}^{T}
\Theta^{1/2}|\!|\!|\{\bld{\varepsilon}_{\bld{u}},\underline{\bld{\varepsilon}_{\bld{\eta}^s}}\}|\!|\!|_h\mathrm{dt}
+
\mu^f\int_{0}^{T}
\|\underline{\bld{\delta}_{\bld{u}}}\|_{f,*,h}
\|\underline{\bld{\varepsilon}_{\bld{u}}}\|_{f,h}\mathrm{dt}
\\
&+
\mu^s
A_h^s\left(\underline{\bld{\delta}_{\bld{\eta}^s}}(T),\underline{\bld{\varepsilon}_{\bld{\eta}^s}}(T)\right).
\end{align*}
Applying the coercivity and boundedness of $A_h^i$ together with Young's inequality, we obtain:
\begin{align*}
|\!|\!|\{\bld{\varepsilon}_{\bld{u}}(T),\underline{\bld{\varepsilon}_{\bld{\eta}^s}}(T)\}|\!|\!|_h^2
+
\mu^f\int_{0}^{T}
\|\underline{\bld{\varepsilon}_{\bld{u}}}\|_{f,h}^2\mathrm{dt}
\lesssim&
\int_{0}^{T}
\Theta^{1/2}|\!|\!|\{\bld{\varepsilon}_{\bld{u}},\underline{\bld{\varepsilon}_{\bld{\eta}^s}}\}|\!|\!|_h\mathrm{dt}
\\
&+
\mu^f\gamma_1\int_{0}^{T}
\|\underline{\bld{\delta}_{\bld{u}}}\|_{f,*,h}^2\mathrm{dt}
+
\mu^s\gamma_2
\|\underline{\bld{\delta}_{\bld{\eta}^s}}(T)\|_{s,*,h}^2
\\
&+
\frac{\mu^f}{\gamma_1}\int_{0}^{T}
\|\underline{\bld{\varepsilon}_{\bld{u}}}\|_{f,h}^2\mathrm{dt}
+
\frac{\mu^s}{\gamma_2}
\|\underline{\bld{\varepsilon}_{\bld{\eta}^s}}(T)\|_{s,h}^2,
\end{align*}
for all $\gamma_1,\gamma_2 > 0$. The last two terms can be absorbed into the left-hand side when $\gamma_1$ and $\gamma_2$ are sufficiently large. Then we have:
\begin{align*}
|\!|\!|\{\bld{\varepsilon}_{\bld{u}}(T),\underline{\bld{\varepsilon}_{\bld{\eta}^s}}(T)\}|\!|\!|_h^2
+
\mu^f\int_{0}^{T}
\|\underline{\bld{\varepsilon}_{\bld{u}}}\|_{f,h}^2\mathrm{dt}
\lesssim&
\int_{0}^{T}
\Theta^{1/2}|\!|\!|\{\bld{\varepsilon}_{\bld{u}},\underline{\bld{\varepsilon}_{\bld{\eta}^s}}\}|\!|\!|_h\mathrm{dt}
\\
&+
\mu^f\int_{0}^{T}
\|\underline{\bld{\delta}_{\bld{u}}}\|_{f,*,h}^2\mathrm{dt}
+
\mu^s
\|\underline{\bld{\delta}_{\bld{\eta}^s}}(T)\|_{s,*,h}^2.
\end{align*}
By applying the Gronwall-type inequality \cite[Proposition 3.1]{chabaud2012uniform} and the Cauchy-Schwarz inequality, we obtain:
\begin{align*}
|\!|\!|\{\bld{\varepsilon}_{\bld{u}}(T),\underline{\bld{\varepsilon}_{\bld{\eta}^s}}(T)\}|\!|\!|_h^2
&+
\mu^f\int_{0}^{T}
\|\underline{\bld{\varepsilon}_{\bld{u}}}\|_{f,h}^2\mathrm{dt}
\\
&\le
\left(\frac{1}{2}\int_{0}^{T}\Theta^{1/2}\mathrm{dt}
+
\left(\max_{0\le t\le T}
\left(
\mu^f\int_{0}^{t}
\|\underline{\bld{\delta}_{\bld{u}}}\|_{f,*,h}^2\mathrm{dt}
+
\mu^s
\|\underline{\bld{\delta}_{\bld{\eta}^s}}\|_{s,*,h}^2
\right)
\right)^{1/2}
\right)^2
\\
&\lesssim
T\int_{0}^{T}\Theta\mathrm{dt}
+
\mu^f\int_{0}^{T}
\|\underline{\bld{\delta}_{\bld{u}}}\|_{f,*,h}^2\mathrm{dt}
+
\mu^s
\max_{0\le t\le T}
\|\underline{\bld{\delta}_{\bld{\eta}^s}}\|_{s,*,h}^2.
\end{align*}
Finally, the estimate \eqref{semiErr} is obtained by the above inequality
and the approximation properties of the projectors in Lemma
\ref{lemma:approx}.
\end{proof}
\begin{remark}[Robust velocity/displacement estimates]
It is clear that the velocity and displacement error estimate \eqref{semiErr} is independent of the pressure approximation $p_h^f$ and
the Lam\'e parameter $\lambda^s$.
Moreover, the error estimate \eqref{semiErr} is optimal in the energy norm
$|\!|\!|\cdot|\!|\!|_h$, which contains a discrete $H^1$-norm on $\Omega^s$.
On the other hand, \eqref{semiErr} only yields a suboptimal convergence rate of order $\mathcal{O}(h^k)$ for
the $L^2$-norm of the velocity error.
However, our numerical results in the next section indicate that the
$L^2$-convergence of the velocity is in fact optimal. A proof of this
optimality is left for future work.
\end{remark}
\section{Numerical results}
\label{sec:num}
In this section, we
present three numerical examples for the
model problem \eqref{model} in two and three dimensions.
The first example
uses a manufactured solution to verify the accuracy of the proposed
monolithic divergence-conforming HDG
schemes \eqref{full} and \eqref{fullB}
and the robustness of the preconditioner \eqref{prec}
with respect to mesh size, time step
size, and material parameters.
The second example is a classical benchmark problem typically used to
validate FSI solvers \cite{Nobile01,Bukac14x}.
The third example is a 3D test case simulating the propagation of a pressure pulse through a straight cylindrical pipe.
The NGSolve software \cite{Schoberl16} is used for the simulations.
\subsection{Example 1: The method of manufactured solutions}
We consider a rectangular fluid
domain, $\Omega^f = (0, 1)\times (-1, 0)$, and a rectangular solid domain,
$\Omega^s = (0, 1) \times (0, 0.5)$, connected by an
interface, $\Gamma = \{(x, y): \; x \in (0, 1), y = 0\}$.
We choose the volume and interface source terms such that the exact solutions
are given as follows:
\begin{align*}
\bld u^f=\bld u^s =&\;\left( \sin(2\pi x)^2\sin(\frac{8}{3}\pi(y+1))\sin(2t),
-1.5\sin(4\pi x)\sin(\frac43\pi (y+1))^2\sin(2t)\right),\\
p^f=&\;
\sin(2\pi x)\sin(2\pi y)\sin(t),\\
\bld \eta^s =&\;\left( \sin(2\pi x)^2\sin(\frac{8}{3}\pi(y+1))\sin(t)^2,
-1.5\sin(4\pi x)\sin(\frac43\pi (y+1))^2\sin(t)^2\right).
\end{align*}
We use homogeneous Dirichlet boundary conditions \eqref{bcbc} on the
exterior boundaries.
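Since the scheme is divergence-conforming, the manufactured velocity above is chosen to be exactly divergence-free. This is easy to verify numerically; the following short script (a sanity check written for this presentation, not part of the simulation code) approximates $\nabla\cdot\bld u$ by central differences at a few interior points:

```python
import math

# Manufactured fluid velocity of Example 1 at a fixed time
# (the common factor sin(2t) is dropped, as it does not affect div u):
def u1(x, y):
    return math.sin(2 * math.pi * x) ** 2 * math.sin(8 * math.pi * (y + 1) / 3)

def u2(x, y):
    return -1.5 * math.sin(4 * math.pi * x) * math.sin(4 * math.pi * (y + 1) / 3) ** 2

def divergence(x, y, eps=1e-6):
    """Central-difference approximation of du1/dx + du2/dy."""
    d1 = (u1(x + eps, y) - u1(x - eps, y)) / (2 * eps)
    d2 = (u2(x, y + eps) - u2(x, y - eps)) / (2 * eps)
    return d1 + d2

# Spot-check a few interior points of the fluid domain (0,1) x (-1,0):
divs = [divergence(x, y) for x, y in [(0.3, -0.7), (0.55, -0.2), (0.81, -0.45)]]
print(max(abs(d) for d in divs))  # near zero, up to finite-difference error
```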
For the material parameters,
we take the fluid density and viscosity to be one ($\rho^f=\mu^f=1$),
and vary the structure density and Lam\'e parameters in large parameter ranges:
$$
\rho^s \in\{10^{-3}, 1, 10^{3}\},
\mu^s = \delta_1\, \rho^s, \text{with } \delta_1\in\{0.1, 1, 10\},
\text{and }
\lambda^s = \delta_2\, \mu^s, \text{with } \delta_2\in\{1, 10^4\}.
$$
Here $\delta_2=1$ corresponds to a compressible structure, while
$\delta_2=10^4$ corresponds to a nearly incompressible structure.
We run simulations on a sequence of uniform unstructured
triangular meshes with mesh size $h=\frac{1}{10\times 2^j}
$ for $j=0,1,2,3$.
We take the polynomial degree to be either $k=1$ or $k=2$.
We use the (second-order) Crank-Nicolson temporal
discretization \eqref{full} for $k=1$ and
the (third-order) BDF3 temporal discretization \eqref{fullB} for $k=2$, and
take a uniform time step size $\delta t = h$.
To start the BDF3 scheme, we compute
$(\bld u_h^m, \bld \eta_h^{s,m}, \widehat{\bld \eta}_h^{s,m})$
by interpolating the exact solution at time $t_m=m\,\delta t$, $m=0,1,2$.
The MinRes solver, preconditioned by \eqref{prec}
with the AMG blocks \eqref{s-pre} and \eqref{a-pre},
is used to solve the linear system in each
time step; we start with a {\it zero} initial
guess and stop once the residual norm has decreased by a factor of
$10^{-8}$.
The $L^2$-errors in the velocity approximation
$\|\bld u-\bld u_h\|_{\Omega}$ at the final time $T=0.3$ are documented in
Tables \ref{table:m1}--\ref{table:m2} for various parameter choices.
We clearly observe that our fully discrete scheme provides an optimal
velocity approximation of order 2 for polynomial degree $k=1$ with Crank-Nicolson time
stepping, and of order 3 for $k=2$ with BDF3 time stepping.
Moreover, we observe that our fully discrete scheme is robust with respect to
large density variations and large Lam\'e parameter variations, since the errors for the different parameter choices in each row of Tables
\ref{table:m1}--\ref{table:m2} are similar.
The average numbers of iterations needed for the convergence of the
preconditioned MinRes solver are recorded in
Tables \ref{table:m3}--\ref{table:m4}.
For polynomial degree $k=1$, roughly $150$
iterations are needed
to converge for the compressible structure case in Table \ref{table:m3},
and about $116$ iterations for the nearly incompressible structure case in
Table \ref{table:m4}. Moreover, the preconditioner is fairly robust with
respect to the mesh size (and time step size) and to parameter variations in
$\rho^s$ and $\mu^s$.
Similar results are observed for the $k=2$ case, which requires
roughly $285$ iterations to converge for the compressible case in Table
\ref{table:m3} and about
$210$ iterations for the nearly incompressible case. However, it is also clear that
the preconditioner is not robust with respect to the polynomial degree $k$.
We finally point out that the $k$-dependence of the iteration counts is due to the
auxiliary space velocity preconditioner \eqref{a-pre}, since if we replace
$\widehat{\mathsf{iA}}$ by the exact inverse $\mathsf{A}^{-1}$, the
iteration counts are then observed
to be quite insensitive to the polynomial degree:
about 30--40 iterations are needed in the compressible cases, and about
20--30 iterations in the nearly incompressible cases
for polynomial degree $k=1,2,3,4$.
This is expected as the polynomial degree in the pressure block
is kept to be 0 regardless of the velocity polynomial degree $k$ in the
global linear system due to static condensation; see Remark \ref{rk:condense}.
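The convergence rates reported in the tables can be reproduced directly from the tabulated errors: since the mesh size is halved at each refinement, the observed order between two consecutive meshes is the base-2 logarithm of the error ratio. A minimal sketch, using the first error column of Table \ref{table:m1} ($k=1$, $\rho^s=10^{-3}$, $\delta_1=0.1$):

```python
import math

# L2 velocity errors on meshes with h = 1/10, 1/20, 1/40, 1/80
# (first column of Table 1, k = 1):
errors = [3.492e-02, 8.409e-03, 2.052e-03, 5.063e-04]

# h is halved at each refinement, so the observed order between two
# consecutive meshes is log2 of the error ratio.
rates = [math.log2(errors[i] / errors[i + 1]) for i in range(len(errors) - 1)]
print([round(r, 2) for r in rates])  # each close to the expected order 2
```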
\begin{table}[ht]
\centering
\resizebox{0.9\columnwidth}{!}
{%
\begin{tabular}{cc| ccc|ccc|ccc}
\toprule
& & \multicolumn{3}{c}{$\rho^s=10^{-3}$}
& \multicolumn{3}{|c}{$\rho^s=1$}
& \multicolumn{3}{|c}{$\rho^s=10^3$}
\\
&
& $\delta_1=0.1$ &$\delta_1=1$ &$\delta_1=10$
& $\delta_1=0.1$ &$\delta_1=1$ &$\delta_1=10$
& $\delta_1=0.1$ &$\delta_1=1$ &$\delta_1=10$
\tabularnewline
\midrule
$k$& $1/h$
& error & error& error
& error & error& error
& error & error& error
\tabularnewline
\midrule
\multirow{4}{2mm}{1}
& 10& 3.492e-02& 3.420e-02& 5.624e-02& 3.489e-02& 3.408e-02& 5.350e-02&3.460e-02& 3.496e-02& 4.566e-02\\
& 20& 8.409e-03& 8.362e-03& 1.145e-02& 8.400e-03& 8.345e-03&1.085e-02&8.312e-03& 8.531e-03& 1.454e-02 \\
& 40& 2.052e-03& 2.074e-03& 3.021e-03& 2.051e-03& 2.068e-03&2.777e-03&2.033e-03& 2.102e-03& 3.247e-03 \\
& 80& 5.063e-04& 5.125e-04& 9.015e-04& 5.059e-04& 5.113e-04&8.126e-04&4.974e-04& 5.260e-04& 9.448e-04 \\
\midrule
\multicolumn{2}{c}{rate}
&
2.04 & 2.02 & 1.98 & 2.04 & 2.02 & 2.01 & 2.04 & 2.02 & 1.89
\\ [1.5ex]
\midrule
\multirow{4}{2mm}{2}
& 10&4.124e-03& 4.273e-03& 4.331e-03&4.120e-03& 4.260e-03& 4.288e-03&4.116e-03& 4.279e-03& 4.339e-03 \\
& 20&5.151e-04& 5.298e-04& 5.262e-04&5.148e-04& 5.283e-04& 5.239e-04&5.136e-04& 5.442e-04& 5.259e-04 \\
& 40&6.267e-05& 6.549e-05& 6.476e-05&6.265e-05& 6.548e-05& 6.564e-05&6.269e-05& 7.180e-05& 6.390e-05 \\
& 80&7.733e-06& 8.028e-06& 7.712e-06&7.732e-06& 8.039e-06& 7.819e-06&7.738e-06& 9.032e-06& 8.915e-06 \\
\midrule
\multicolumn{2}{c}{rate}
& 3.02& 3.02& 3.04
& 3.02& 3.02& 3.03
& 3.02& 2.96& 2.98\\
\bottomrule
\end{tabular}}
\vspace{2ex}
\caption{\it \textbf{Example 1:} History of convergence of the $L^2$-velocity
errors. Compressible structure ($\delta_2=1$).}
\label{table:m1}
\end{table}
\begin{table}[ht]
\centering
\resizebox{0.9\columnwidth}{!}
{%
\begin{tabular}{cc| ccc|ccc|ccc}
\toprule
& & \multicolumn{3}{c}{$\rho^s=10^{-3}$}
& \multicolumn{3}{|c}{$\rho^s=1$}
& \multicolumn{3}{|c}{$\rho^s=10^3$}
\\
&
& $\delta_1=0.1$ &$\delta_1=1$ &$\delta_1=10$
& $\delta_1=0.1$ &$\delta_1=1$ &$\delta_1=10$
& $\delta_1=0.1$ &$\delta_1=1$ &$\delta_1=10$
\tabularnewline
\midrule
$k$& $1/h$
& error & error& error
& error & error& error
& error & error& error
\tabularnewline
\midrule
\multirow{4}{2mm}{1}
& 10& 3.388e-02& 3.304e-02& 5.068e-02 &3.388e-02 &3.351e-02 &4.935e-02&3.382e-02& 3.478e-02& 4.694e-02 \\
& 20& 8.211e-03& 8.094e-03& 1.006e-02 &8.201e-03 &8.227e-03 &9.373e-03&8.088e-03& 8.400e-03& 1.248e-02 \\
& 40& 2.004e-03& 1.998e-03& 2.136e-03 &2.002e-03 &2.022e-03 &2.053e-03&1.980e-03& 2.072e-03& 3.486e-03 \\
& 80& 4.949e-04& 4.942e-04& 8.038e-04 &4.943e-04 &4.999e-04 &7.259e-04&4.861e-04& 5.180e-04& 9.316e-04 \\
\midrule
\multicolumn{2}{c}{rate}
&
2.03 & 2.02 & 2.02 &2.03 & 2.02 & 2.05 & 2.04 & 2.02 & 1.88
\\ [1.5ex]
\midrule
\multirow{4}{2mm}{2}
& 10&4.195e-03& 4.354e-03& 4.406e-03 &4.164e-03& 4.298e-03& 4.307e-03&4.133e-03& 4.311e-03& 4.374e-03 \\
& 20&5.200e-04& 5.296e-04& 5.221e-04 &5.181e-04& 5.237e-04& 5.272e-04&5.155e-04& 5.457e-04& 5.253e-04 \\
& 40&6.258e-05& 6.421e-05& 6.430e-05 &6.245e-05& 6.419e-05& 6.548e-05&6.267e-05& 7.149e-05& 6.446e-05 \\
& 80&7.697e-06& 7.845e-06& 7.708e-06 &7.691e-06& 7.886e-06& 7.860e-06&7.727e-06& 8.962e-06& 8.890e-06 \\
\midrule
\multicolumn{2}{c}{rate}
&3.03& 3.04& 3.05
&3.03& 3.03& 3.03 &
3.02& 2.97& 2.99
\\
\bottomrule
\end{tabular}}
\vspace{2ex}
\caption{\it \textbf{Example 1:} History of convergence of the $L^2$-velocity errors.
Nearly incompressible structure ($\delta_2=10^4$).}
\label{table:m2}
\end{table}
\begin{table}[ht]
\centering
\resizebox{0.9\columnwidth}{!}
{%
\begin{tabular}{cc| ccc|ccc|ccc}
\toprule
& & \multicolumn{3}{c}{$\rho^s=10^{-3}$}
& \multicolumn{3}{|c}{$\rho^s=1$}
& \multicolumn{3}{|c}{$\rho^s=10^3$}
\\
&
& $\delta_1=0.1$ &$\delta_1=1$ &$\delta_1=10$
& $\delta_1=0.1$ &$\delta_1=1$ &$\delta_1=10$
& $\delta_1=0.1$ &$\delta_1=1$ &$\delta_1=10$
\tabularnewline
\midrule
$k$& $1/h$
& iter & iter & iter
& iter & iter & iter
& iter & iter & iter
\tabularnewline
\midrule
\multirow{4}{2mm}{1}
& 10& 136& 142& 122&137& 141& 122&154& 151& 128\\
& 20& 135& 146& 131&136& 148& 132&150& 157& 140 \\
& 40& 148& 158& 152&145& 160& 153&149& 155& 154 \\
& 80& 161& 174& 177&159& 180& 181&158& 169& 175 \\
\midrule
\multirow{4}{2mm}{2}
& 10&281& 290& 250 &283& 291& 243& 289& 288& 238 \\
& 20&283& 302& 269 &284& 302& 263& 285& 288& 255 \\
& 40&294& 313& 297 &293& 313& 291& 281& 287& 274 \\
& 80&291& 310& 307 &288& 307& 303& 272& 279& 264 \\
\bottomrule
\end{tabular}}
\vspace{2ex}
\caption{\it \textbf{Example 1:}
Average iteration counts for the preconditioned MinRes solver.
Compressible structure ($\delta_2=1$).}
\label{table:m3}
\end{table}
\begin{table}[ht]
\centering
\resizebox{0.9\columnwidth}{!}
{%
\begin{tabular}{cc| ccc|ccc|ccc}
\toprule
& & \multicolumn{3}{c}{$\rho^s=10^{-3}$}
& \multicolumn{3}{|c}{$\rho^s=1$}
& \multicolumn{3}{|c}{$\rho^s=10^3$}
\\
&
& $\delta_1=0.1$ &$\delta_1=1$ &$\delta_1=10$
& $\delta_1=0.1$ &$\delta_1=1$ &$\delta_1=10$
& $\delta_1=0.1$ &$\delta_1=1$ &$\delta_1=10$
\tabularnewline
\midrule
$k$& $1/h$
& iter & iter & iter
& iter & iter & iter
& iter & iter & iter
\tabularnewline
\midrule
\multirow{4}{2mm}{1}
& 10&115& 103& 108&115& 105& 111&134& 106& 109 \\
& 20&115& 101& 106&116& 103& 108&130& 108& 112 \\
& 40&125& 108& 114&124& 110& 111&130& 105& 107 \\
& 80&138& 117& 123&138& 123& 127&134& 113& 121 \\
\midrule
\multirow{4}{2mm}{2}
& 10&231& 200& 199&231& 199& 188&239& 195& 210 \\
& 20&228& 198& 200&228& 199& 188&239& 186& 205 \\
& 40&232& 206& 213&232& 205& 202&237& 171& 200 \\
& 80&229& 208& 216&226& 206& 208&231& 176& 186 \\
\bottomrule
\end{tabular}}
\vspace{2ex}
\caption{\it\textbf{Example 1:}
Average iteration counts for the preconditioned MinRes solver.
Nearly incompressible structure ($\delta_2=10^4$).}
\label{table:m4}
\end{table}
\subsection{Example 2: a linear two-dimensional test case}
We consider a simplified linear version of the numerical experiment reported in
\cite{Nobile01,Bukac14x}. We use a set-up similar to that in \cite{Bukac14x}.
We consider a fluid domain, $\Omega^f=(0,6)\times (0,0.5)[\mathrm{cm}]^2$, and
a structure domain, $\Omega^s=(0,6)\times (0.5,0.6) [\mathrm{cm}]^2$, connected by an
interface $\Gamma =\{ (x,y):\;x\in(0,6), y = 0.5\}$.
We consider the FSI problem \eqref{f-eq}--\eqref{interface} with
$\bld f^f=\bld f^s=0$, where we add
a linear {\it spring} term, $\beta^s \bld \eta^s$, to the
first equation in \eqref{s-eq}:
$$
\rho^s \partial_t\bld u^s+\beta^s\bld \eta^s
-\nabla\cdot \bld \sigma^s(\bld \eta^s) = 0.
$$
The material parameters are given as follows:
$
\rho^s = 1.1[\mathrm{g/cm^3}],$ $
\mu^s = 0.575\times 10^6[\mathrm{dyne/cm^2}],$ $
\beta^s = 4\times 10^6[\mathrm{dyne/cm^4}],$ $
\lambda^s = 1.7\times 10^6[\mathrm{dyne/cm^2}],$ $
\rho^f = 1[\mathrm{g/cm^3}],$ $
\mu^f = 0.035[\mathrm{g/(cm\cdot s)}]
$,
which are within physiologically realistic
values of blood flow in compliant arteries.
The flow is initially at rest, and we take the following boundary conditions
which model a pressure driven flow:
\begin{alignat*}{3}
(\bld \sigma^f\bld n)\cdot\bld n=&\; -p_{in}(t), \;\;\;&&
\mathsf{tang}(\bld u^f)=\; 0 \quad &&\text{on
} \Gamma^f_{in}:=\{(x,y): x=0, y\in(0,0.5)\},\\
(\bld \sigma^f\bld n)\cdot \bld n=&\; 0,\;&&
\mathsf{tang}(\bld u^f)=\; 0\quad &&\text{on
} \Gamma^f_{out}:=\{(x,y): x=6, y\in(0,0.5)\},\\
\mathsf{tang}(\bld \sigma^f\bld n)=&\;0,\;&&
\bld u^f\cdot\bld n=\; 0 \quad&& \text{on } \Gamma^f_{bot}:=\{(x,y):
x\in(0,6), y = 0\},\\
\bld \eta^s\cdot\bld n=&\; 0, \;\;\;&&
\mathsf{tang}(\bld \eta^s)=\; 0 \quad &&\text{on
} \Gamma^s_{in/out}:=\{(x,y): x\in\{0,6\}, y\in(0.5,0.6)\},\\
(\bld \sigma^s\bld n)\cdot\bld n=&\;0,\;&&
\mathsf{tang}(\bld \eta^s)=\; 0
\quad&& \text{on } \Gamma^s_{top}:=\{(x,y):
x\in(0,6), y = 0.6\},
\end{alignat*}
where the time-dependent
pressure boundary source term at the inlet $\Gamma_{in}^f$ is given as
follows:
\begin{align*}
p_{in}(t)=\left\{
\begin{tabular}{ll}
$\frac{p_{\max}}{2}\left(1-\cos(\frac{2\pi t}{t_{\max}})\right)$,
& if $t \le t_{\max}$, \\
$0$, & if $t>t_{\max}$,
\end{tabular}
\right.
\end{align*}
where $t_{\max} = 0.03 [\mathrm{s}]$ and
$p_{\max} = 1.333\times 10^4 [\mathrm{dyne/cm^2}]$.
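For illustration, the inlet pulse can be coded directly from the formula above; this is a minimal sketch using the constants just stated, not the actual simulation driver:

```python
import math

T_MAX = 0.03      # pulse duration [s]
P_MAX = 1.333e4   # peak pressure [dyne/cm^2]

def p_in(t):
    """Raised-cosine pressure pulse at the inlet: one full cosine period
    over [0, T_MAX], zero afterwards."""
    if t <= T_MAX:
        return 0.5 * P_MAX * (1.0 - math.cos(2.0 * math.pi * t / T_MAX))
    return 0.0

# The pulse starts and ends at zero and peaks at t = T_MAX / 2.
print(p_in(0.0), p_in(T_MAX / 2), p_in(0.05))
```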
The final time of the simulation is $T=1.2\times 10^{-2} [\mathrm{s}]$.
In this example, we use the divergence-conforming HDG scheme with
Crank-Nicolson time stepping \eqref{full}.
The additional spring term $\beta^s \bld \eta^s$
in the structure equation does not alter the
form of the resulting global linear system.
Hence we still apply the preconditioned
MinRes solver using the preconditioner \eqref{prec}
with AMG blocks \eqref{a-pre} and \eqref{s-pre}.
Due to different boundary conditions, we shall add the boundary
contribution
$$
\sum_{F\in \Gamma_{in}^f\cup \Gamma_{out}^f\cup \Gamma_{top}^s}
\int_F \frac{1}{\rho\,h}p_hq_h\,\mathrm{ds}
$$
to the bilinear form \eqref{neumann} associated with the matrix
$\mathsf{N}_h^{\rho,\gamma}$ in the pressure block \eqref{s-pre},
and take the following continuous linear velocity auxiliary finite element space
with the modified boundary conditions
{\small
$$
\bld V_h^{cg}:= \{\bld v\in \bld H^{1}(\Omega):\;\;
\bld v|_K\in [\mathcal{P}^1(K)]^d, \;\forall K\in \mathcal{T}_h,
\;\bld v|_{\Gamma_{in/out}^s}=0,
\;\bld v\cdot \bld n|_{\Gamma_{bot}^f}=0,
\;\mathsf{tang}(\bld v)|_{\Gamma_{in/out}^f\cup \Gamma^s_{top}}=0
\}
$$ }
in the velocity block \eqref{a-pre}.
For the discretization parameters, we consider
polynomial degree $k\in\{1,2,4\}$,
a uniform unstructured triangular mesh with mesh size $h\in \{0.1,
0.05,0.025\}$,
and a uniform time step size $\delta t \in\{10^{-4}, 0.25\times 10^{-4}\}$.
For all the numerical simulations,
we stop the MinRes iteration when the residual norm has decreased by a factor of
$tol=10^{-6}$. The average numbers of MinRes iterations for the different
discretization parameters are documented in Table \ref{table:mm}.
From Table \ref{table:mm}, we observe that
\begin{itemize}
\item [(a)]
for the same polynomial degree $k$ and mesh size $h$, a smaller time step size $\delta t$ leads to a smaller number of MinRes iterations.
\item [(b)]
for the same mesh size $h$ and time step size $\delta t$, a larger
polynomial degree $k$ leads to a larger number of MinRes iterations, with
the number of iterations roughly doubled from $k=1$ to $k=4$.
\item [(c)]
for the same time step size $\delta t$ and
polynomial degree $k$, the number of MinRes iterations roughly stays in
the same level as mesh size $h$ decreases.
\end{itemize}
We also mention that the MinRes iteration counts in Table \ref{table:mm} are
smaller than those in Tables \ref{table:m3}--\ref{table:m4} in Example 1,
which is partially due to the larger stopping tolerance
$tol = 10^{-6}$ used here.
\begin{table}[ht]
\centering
{%
\begin{tabular}{cc| ccc|cccccc}
\toprule
& & \multicolumn{3}{c}{$\delta t=10^{-4}$}
& \multicolumn{3}{|c}{$\delta t=0.25\times 10^{-4}$}
\\
&
& $k=1$ &$k=2$ &$k=4$
& $k=1$ &$k=2$ &$k=4$
\tabularnewline
\midrule
& $1/h$
& iter & iter & iter
& iter & iter & iter
\tabularnewline
\midrule
& 10& 76& 133& 213& 59& 79&143 \\
& 20& 89& 106& 158& 60& 83&134 \\
& 40& 89& 115& 167& 72& 98&150 \\
\bottomrule
\end{tabular}}
\vspace{2ex}
\caption{\it \textbf{Example 2:}
Average iteration counts for the preconditioned MinRes solver.}
\label{table:mm}
\end{table}
Finally, we plot in Figure \ref{fig:ex2} the flow rate, which is calculated
as two thirds of the horizontal velocity,
and pressure at the bottom boundary $\Gamma^f_{bot}$, and the vertical
displacement on the interface $\Gamma$ at final time $t=1.2\times
10^{-2}$ for
$k=1$ with mesh size $h\in\{0.05,0.025\}$ and time step size $\delta t=10^{-4}$,
$k=2$ with mesh size $h\in\{0.1,0.05\}$ and time step size
$\delta t=10^{-4}$, along with reference data for
$k=4$ with mesh size $h = 0.025$
and time step size $\delta t=0.25\times 10^{-4}$.
We observe that the results for both $k=1$ and $k=2$ agree well with the
reference data. We also observe that the result for
$k=2$ on the coarse mesh with mesh size $h=0.1$ is more accurate than that
for $k=1$ on the medium
mesh with mesh size $h=0.05$, which indicates the benefit of using
a higher-order spatial discretization.
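A brief remark on the factor two thirds in the flow rate: our reading (an assumption on our part; the text does not spell it out) is that the horizontal velocity is taken to have a parabolic profile symmetric about the bottom line $\Gamma_{bot}^f$, so the cross-sectional average equals two thirds of the value on that symmetry line. A one-line midpoint-rule quadrature confirms the factor:

```python
# Midpoint-rule average of the normalized parabolic profile
# u(s) = 1 - s^2 on s in (0, 1), where s measures the scaled distance
# from the symmetry line (the bottom boundary Gamma_bot^f):
n = 100000
mean = sum(1.0 - ((i + 0.5) / n) ** 2 for i in range(n)) / n
print(round(mean, 4))  # 0.6667: the mean is 2/3 of the centerline value
```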
\begin{figure}
\begin{tikzpicture}
\tikzstyle{every node}=[font=\footnotesize]
\begin{axis}[
width=\textwidth,
height=0.26\textheight,
xtick={0,2,4,6},
ytick={-4,0,4, 8},
xmin=-0.01,
xmax=6.01,
ymin=-4,
ymax=8,
yticklabel style={/pgf/number format/fixed,/pgf/number format/precision=1},
every axis plot/.append style={line width=0.8pt, smooth},
ylabel={Flow Rate},
xlabel={$x$-coordinates of the bottom line $\Gamma_{bot}^f$},
legend style={at={(0.03,0.95)},anchor=north west}
]
\addplot[blue, solid] table[x index=0, y index=1]{U3.txt};
\addlegendentry{$k=1, h = 0.05$}
\addplot[green!50!blue] table[x index=0, y index=2]{U3.txt};
\addlegendentry{$k=1, h = 0.025$}
\addplot[red, densely dotted] table[x index=0, y index=3]{U3.txt};
\addlegendentry{$k=2, h = 0.1$}
\addplot[red!30!yellow, densely dotted] table[x
index=0, y index=4]{U3.txt};
\addlegendentry{$k=2, h = 0.05$}
\addplot[black] table[x
index=0, y index=5]{U3.txt};
\addlegendentry{reference data}
\end{axis}
\end{tikzpicture}
\begin{tikzpicture}
\tikzstyle{every node}=[font=\footnotesize]
\begin{axis}[
width=\textwidth,
height=0.26\textheight,
xtick={0,2,4,6},
ytick={-2000, 0,2500, 5000},
xmin=-0.01,
xmax=6.01,
ymin=-2000,
ymax=5000,
yticklabel style={/pgf/number format/fixed,/pgf/number format/precision=1},
every axis plot/.append style={line width=0.8pt, smooth},
legend style={at={(0.03,0.95)},anchor=north west},
ylabel={Pressure},
xlabel={$x$-coordinates of the bottom line $\Gamma_{bot}^f$},
]
\addplot[blue, solid] table[x index=0, y index=1]{P3.txt};
\addlegendentry{$k=1, h = 0.05$}
\addplot[green!50!blue] table[x index=0, y index=2]{P3.txt};
\addlegendentry{$k=1, h = 0.025$}
\addplot[red, densely dotted] table[x index=0, y index=3]{P3.txt};
\addlegendentry{$k=2, h = 0.1$}
\addplot[red!30!yellow, densely dotted] table[x
index=0, y index=4]{P3.txt};
\addlegendentry{$k=2, h = 0.05$}
\addplot[black] table[x
index=0, y index=5]{P3.txt};
\addlegendentry{reference data}
\end{axis}
\end{tikzpicture}
\begin{tikzpicture}
\tikzstyle{every node}=[font=\footnotesize]
\begin{axis}[
width=\textwidth,
height=0.26\textheight,
xtick={0,2,4,6},
ytick={-0.01,0,0.01, 0.02},
xmin=-0.01,
xmax=6.01,
ymin=-.01,
ymax=0.02,
yticklabel style={/pgf/number format/fixed,/pgf/number format/precision=1},
every axis plot/.append style={line width=0.8pt, smooth},
ylabel={Vertical Displacement},
xlabel={$x$-coordinates of the interface $\Gamma$},
legend style={at={(0.03,0.95)},anchor=north west}
]
\addplot[blue, solid] table[x index=0, y index=1]{D3.txt};
\addlegendentry{$k=1, h = 0.05$}
\addplot[green!50!blue] table[x index=0, y index=2]{D3.txt};
\addlegendentry{$k=1, h = 0.025$}
\addplot[red, densely dotted] table[x index=0, y index=3]{D3.txt};
\addlegendentry{$k=2, h = 0.1$}
\addplot[red!30!yellow, densely dotted] table[x
index=0, y index=4]{D3.txt};
\addlegendentry{$k=2, h = 0.05$}
\addplot[black] table[x
index=0, y index=5]{D3.txt};
\addlegendentry{reference data}
\node[above,font=\large\bfseries] at (current bounding box.north) {Vertical displacement along
the interface $y=0.5$};
\end{axis}
\end{tikzpicture}
\caption{\it \textbf{Example 2:} \it
Numerical solutions of the scheme \eqref{full} with different
discretization parameters at final time $t=1.2\times 10^{-2}[\mathsf{s}]$.
Top: flow rate $\frac23 \bld v_h[0]$ along
bottom line $\Gamma_{bot}^f$;
Middle: pressure along bottom line $\Gamma_{bot}^f$;
Bottom: vertical displacement $\bld\eta^s_h[1]$ along the interface
$\Gamma$.
Reference data is obtained with the HDG scheme \eqref{full} using polynomial degree $k=4$, mesh size $h=0.025$,
and time step size $\delta t=0.25\times 10^{-4}$.
All the other methods use the time step size $\delta t= 10^{-4}$.
}
\label{fig:ex2}
\end{figure}
\subsection{Example 3: a linear three-dimensional test case on a straight
cylindrical pipe}
Now we consider a 3D example that simulates the propagation of
a pressure pulse through a straight cylinder (see \cite{Deparis06}).
The fluid domain is a straight cylinder of radius $0.5 [\mathsf{cm}]$ and length
$5 [\mathsf{cm}]$,
$\Omega^f = \{(x,y,z): x\in(0,5),\, y^2+z^2<(0.5)^2\}$,
the structure domain has a thickness of $0.1 [\mathsf{cm}]$,
$\Omega^s = \{(x,y,z): x\in(0,5),\, (0.5)^2<y^2+z^2<(0.6)^2\}$,
and the interface
$\Gamma = \{(x,y,z): x\in(0,5),\, y^2+z^2=(0.5)^2\}$.
We use the same material parameters as in Example 2.
The flow is initially at rest, and we take the same boundary conditions as
in Example 2 with the exception that a pure Neumann boundary condition
$\bld \sigma^s\bld n=0$ is applied on the exterior structure boundary
$\Gamma^s_{ext}:=\{(x,y,z): x\in(0,5), y^2+z^2=0.6^2\}$.
We apply the scheme \eqref{full} with
time step size $\delta t = 10^{-4}$.
For the spatial discretization parameters, we consider two cases:
$k=1$ on a fine mesh with mesh size $h=0.05$ (264,288 tetrahedra), and
$k=2$ on a coarse mesh with mesh size $h=0.1$ (33,036 tetrahedra).
The fine mesh is illustrated in Figure \ref{fig:ex3}.
\begin{figure}
\centering
\includegraphics[width=0.95\textwidth]{ex3X.png}
\caption{\it \textbf{Example 3:} the fine mesh with mesh size $h=0.05$.
The red region is the fluid domain, and the gray region is the structure
domain.}
\label{fig:ex3}
\end{figure}
For the preconditioned MinRes linear system solver,
we replace the point Gauss-Seidel smoother $\mathsf{R}$
in the velocity preconditioner \eqref{a-pre} by a block Gauss-Seidel
smoother
$\mathsf{R}^e$
based on edge blocks to further improve its efficiency.
We stop the MinRes iteration when the residual norm is decreased by a factor of
$10^{-6}$.
The average number of iterations for convergence for $k=1$ with $h=0.05$ is
$60$ and that for $k=2$ with $h=0.1$ is $52$ when the edge-block
Gauss-Seidel smoother $\mathsf{R}^e$ is used in the velocity preconditioner
\eqref{a-pre}. If we instead use the
point Gauss-Seidel smoother, the numbers would be
$360$ for $k=1$ and $246$ for $k=2$.
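The effect of switching from a point to a block smoother can be illustrated with a small self-contained sketch. The $8\times 8$ test matrix, the $2\times 2$ block partition, and the sweep counts below are illustrative assumptions only; they stand in for the actual discrete velocity operator and its edge blocks, which are not reproduced here.

```python
import math

def laplacian(n):
    # toy SPD test matrix (1D Laplacian), a stand-in for the velocity block
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        A[i][i] = 2.0
        if i > 0:
            A[i][i - 1] = -1.0
        if i < n - 1:
            A[i][i + 1] = -1.0
    return A

def point_gs(A, b, x, sweeps):
    # classical point Gauss-Seidel: relax one unknown at a time
    n = len(b)
    for _ in range(sweeps):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

def block_gs(A, b, x, blocks, sweeps):
    # block Gauss-Seidel: relax each block of unknowns together by
    # solving the corresponding 2x2 diagonal sub-system exactly
    n = len(b)
    for _ in range(sweeps):
        for i, j in blocks:
            ri = b[i] - sum(A[i][k] * x[k] for k in range(n) if k not in (i, j))
            rj = b[j] - sum(A[j][k] * x[k] for k in range(n) if k not in (i, j))
            det = A[i][i] * A[j][j] - A[i][j] * A[j][i]
            x[i] = (A[j][j] * ri - A[i][j] * rj) / det
            x[j] = (A[i][i] * rj - A[j][i] * ri) / det
    return x

def residual(A, b, x):
    n = len(b)
    return math.sqrt(sum((b[i] - sum(A[i][j] * x[j] for j in range(n))) ** 2
                         for i in range(n)))

n = 8
A, b = laplacian(n), [1.0] * n
blocks = [(0, 1), (2, 3), (4, 5), (6, 7)]
r_point = residual(A, b, point_gs(A, b, [0.0] * n, 10))
r_block = residual(A, b, block_gs(A, b, [0.0] * n, blocks, 10))
print(r_point, r_block)
```

On this toy problem the block sweep damps the error faster per iteration, mirroring the trend of the iteration counts reported above, although the quantitative gain depends on the operator and on the block choice.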
Similar to Example 2, we
plot in Figure \ref{fig:ex3Y} the flow rate, computed
as two thirds of the horizontal velocity,
and the pressure along the center line $\{(x,0,0):\,x\in(0,5)\}$,
together with the y-component of the displacement on the interface line
$\{(x,0.5,0):\,x\in(0,5)\}$, at final time $t=1.2\times
10^{-2}$.
We find that the results for $k=1$ and $k=2$ agree well with each
other.
\begin{figure}
\begin{tikzpicture}
\tikzstyle{every node}=[font=\footnotesize]
\begin{axis}[
width=\textwidth,
height=0.26\textheight,
xtick={0,1,2,3,4,5},
ytick={-4,0,4, 8,12},
xmin=-0.01,
xmax=5.01,
ymin=-4.6,
ymax=13,
yticklabel style={/pgf/number format/fixed,/pgf/number format/precision=1},
every axis plot/.append style={line width=0.8pt, smooth},
ylabel={Flow Rate},
xlabel={$x$-coordinates of the bottom line $\Gamma_{bot}^f$},
legend style={at={(0.03,0.95)},anchor=north west}
]
\addplot[blue, solid] table[x index=0, y index=1]{U3X.txt};
\addlegendentry{$k=1, h = 0.05$}
\addplot[red, densely dotted] table[x index=0, y index=2]{U3X.txt};
\addlegendentry{$k=2, h = 0.1$}
\end{axis}
\end{tikzpicture}
\begin{tikzpicture}
\tikzstyle{every node}=[font=\footnotesize]
\begin{axis}[
width=\textwidth,
height=0.26\textheight,
xtick={0,1,2,3,4,5},
ytick={-3000, 0,3000},
xmin=-0.01,
xmax=5.01,
ymin=-3500,
ymax=3500,
yticklabel style={/pgf/number format/fixed,/pgf/number format/precision=1},
every axis plot/.append style={line width=0.8pt, smooth},
legend style={at={(0.03,0.95)},anchor=north west},
ylabel={Pressure},
xlabel={$x$-coordinates of the bottom line $\Gamma_{bot}^f$},
]
\addplot[blue, solid] table[x index=0, y index=1]{P3X.txt};
\addlegendentry{$k=1, h = 0.05$}
\addplot[red, densely dotted] table[x index=0, y index=2]{P3X.txt};
\addlegendentry{$k=2, h = 0.1$}
\end{axis}
\end{tikzpicture}
\begin{tikzpicture}
\tikzstyle{every node}=[font=\footnotesize]
\begin{axis}[
width=\textwidth,
height=0.26\textheight,
xtick={0,2,4,6},
ytick={-0.008,0,0.008},
xmin=-0.01,
xmax=5.01,
ymin=-.008,
ymax=0.008,
yticklabel style={/pgf/number format/fixed,/pgf/number format/precision=1},
every axis plot/.append style={line width=0.8pt, smooth},
ylabel={Vertical Displacement},
xlabel={$x$-coordinates of the interface $\Gamma$},
legend style={at={(0.03,0.95)},anchor=north west}
]
\addplot[blue, solid] table[x index=0, y index=1]{D3X.txt};
\addlegendentry{$k=1, h = 0.05$}
\addplot[red, densely dotted] table[x index=0, y index=2]{D3X.txt};
\addlegendentry{$k=2, h = 0.1$}
\end{axis}
\end{tikzpicture}
\caption{\it \textbf{Example 3:}
Numerical solutions of the scheme \eqref{full} with different
discretization parameters
along cut lines
at final time $t=1.2\times 10^{-2}[\mathsf{s}]$.
Top: flow rate $\frac23 \bld v_h[0]$ along
center line $\{(x,0,0):\,x\in(0,5)\}$;
Middle: pressure along center line
$\{(x,0,0):\,x\in(0,5)\}$;
Bottom: y-component of displacement
$\bld\eta^s_h[1]$ along the interface line
$\{(x,0.5,0):\,x\in(0,5)\}$.}
\label{fig:ex3Y}
\end{figure}
Finally, we plot
the structure deformation along with the fluid pressure
for $k=2$ with $h=0.1$ in Figure \ref{fig:ex3X} for
$t\in\{4,8,12\}\times 10^{-3}$.
We clearly observe the propagation of a pressure pulse as time evolves.
\begin{figure}
\centering
\includegraphics[width=0.95\textwidth]{cyl2X.png}
\includegraphics[width=0.95\textwidth]{cyl4X.png}
\includegraphics[width=0.95\textwidth]{cyl6X.png}
\caption{\it \textbf{Example 3:} The structure deformation and pressure
approximation at different times.
The structure deformation is enlarged by a factor of 8 and
is only shown on half of the structure domain with $y<0$.
The pressure approximation is only shown on half of the fluid domain with
$z<0$.
From top to bottom, $t=0.004,
0.008, 0.012$.
($k=2$, $h=0.1$).
}
\label{fig:ex3X}
\end{figure}
\section{Conclusion}
\label{sec:conclude}
We have presented a novel monolithic divergence-conforming HDG scheme for
a linear FSI problem with a thick structure. The fully discrete scheme
produces an exactly divergence-free fluid velocity approximation and
is energy-stable.
Furthermore, we design an efficient block AMG
preconditioner and use it with a preconditioned MinRes solver for the resulting symmetric and
indefinite global linear system. This preconditioner is numerically observed
to be robust with respect to the mesh size, time step size and material
parameters in large parameter ranges. A theoretical analysis of this
preconditioner is our future work.
The extension of our scheme to other FSI models, including thin structures
and/or moving interfaces, is part of our ongoing work.
\label{sec:ale}
\bibliographystyle{siam}
\section{Motivation : Historical Background}
It is well known that the collective motions in excited nuclei are governed by the potential landscape and dissipation. When the system has to cross a barrier, the fluctuation, associated with
the dissipation according to the dissipation-fluctuation theorem, plays an essential role.
For multi-dimensional problems, the Langevin equation appears to be easier to solve numerically than the equivalent Fokker-Planck equation, which has also been used for the analysis of heavy-ion
collisions, e.g., of the fast-fission process\cite{HD}. The Langevin equation was first used in nuclear physics for a dynamical description of the fission process.\cite{ABE1} Combined with particle emission,
it was applied to analyse the anomalous multiplicities of pre-scission neutrons,\cite{WADA}
which, together with the total kinetic energy of fission fragments, supports a strong friction
of the one-body type, (Wall-and-Window\cite{BLO}) rather than the two-body viscosity.\cite{DAV}
More recently, this Langevin formalism has been used to study the fusion of heavy nuclei in order to propose an explanation of the fusion hindrance\cite{ABE2}
which has been experimentally known to exist in fusion of massive systems\cite{SAH}
without theoretical explanation. The DNC
(Di-Nucleus Configuration) formed by the contact of two ions of the incident channels has
an extremely large deformation and then is located outside of the conditional saddle point
in LDM (Liquid Drop Model), as is shown in Fig. 1. The system then has to cross two barriers to fuse. In order to calculate the probability of the hindered fusion, we use a Two-Step Model
and apply it to the synthesis of the superheavy elements.\cite{SHEN,BOU}
\begin{figure}[th]
\centerline{\psfig{file=fig1.eps,width=8cm}}
\vspace*{5pt}
\caption{A schematic illustration of LDM energy surface for heavy systems. The x-axis is the distance between two centers and the y-axis is the mass-asymmetry. Two examples of incident combinations of ions are indicated for the hot and cold fusion paths, respectively.}
\end{figure}
The over-passing probability of the inner barrier is dominated by the diffusion of the distance parameter between the two ions.\cite{ABE3,ABE4,SWI,BOIL} This is briefly recapitulated in Section 2 with a simplified model. In Section 3, the time development of the neck motion during the fusion process is analysed with the Smoluchowski equation.
\section{Brief Reminder of One-Dimensional Parabolic Barrier}
Reducing the inner barrier to an inverted parabola and assuming that the transport coefficients are constant near the barrier, the problem is amenable to a simple analytic
explanation of the origin of the fusion hindrance. Here, we recapitulate the one-dimensional case and refer to Ref.~\cite{ABE3} for the N-dimensional case with a coupled Langevin equation.
The equation to be solved is given as follows,
\begin{equation}
\frac{d}{dt} \left[\begin{array}{c}q \\ p \end{array}\right] = \left[\begin{array}{cc}0&1/\mu\\ \mu\omega^2 & -\beta\end{array}\right]
\left[\begin{array}{c}q \\ p \end{array}\right] + \left[\begin{array}{c}0 \\ R\end{array} \right],
\end{equation}
where $ \mu$ and $\omega $ denote the inertia mass and the frequency of the parabola, respectively.
The reduced friction $ \beta = \gamma / \mu $ is defined with the friction coefficient $ \gamma $.
R denotes a Langevin force associated with the friction $\gamma$, which is assumed to satisfy
the dissipation-fluctuation theorem and to be Markovian with a Gaussian distribution.
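A direct Euler--Maruyama simulation of the equation above makes the hindrance visible. This is only a sketch: the parameters $\mu=\omega=1$, $\beta=3$, $B=1$ are illustrative and correspond to no physical system, the start point sits a barrier height $B$ below the top at $q=0$, and the noise strength is fixed by the standard Einstein relation $\langle R(t)R(t')\rangle=2\gamma T\,\delta(t-t')$.

```python
import math
import random

def p_over(B, T, mu=1.0, omega=1.0, beta=3.0, n_traj=800, dt=0.02, t_end=25.0):
    # Euler-Maruyama for dq = (p/mu) dt, dp = (mu w^2 q - beta p) dt + noise;
    # the inverted parabola peaks at q = 0, so "formation" means ending at q > 0
    random.seed(0)
    gamma = beta * mu
    q0 = -math.sqrt(2.0 * B / (mu * omega ** 2))   # barrier height B above start
    kick = math.sqrt(2.0 * gamma * T * dt)
    over = 0
    for _ in range(n_traj):
        q, p = q0, random.gauss(0.0, math.sqrt(mu * T))  # thermal initial momentum
        for _ in range(int(t_end / dt)):
            q += p / mu * dt
            p += (mu * omega ** 2 * q - beta * p) * dt + kick * random.gauss(0.0, 1.0)
        over += q > 0.0
    return over / n_traj

p_cold = p_over(B=1.0, T=0.2)   # T = B/5: crossing is strongly suppressed
p_hot = p_over(B=1.0, T=2.0)    # T = 2B: compare with the erfc limit below
print(p_cold, p_hot, 0.5 * math.erfc(math.sqrt(1.0 / 2.0)))
```

For $T=2B$ the Monte Carlo estimate should land near the thermally averaged limit $\frac12\,{\rm erfc}(\sqrt{B/T})\approx 0.159$ derived below, while for $T=B/5$ the crossing is essentially suppressed.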
The probability of passing over the barrier, which we call the formation probability of the compound nucleus, is simply given by an error function whose
argument is expressed in terms of the average trajectory $ <q(t)>$ and its variance,
\begin{eqnarray}
P_{form}(q_0,p_0,t)&=&\int_0^{\infty}\frac{dq}{\sqrt{2\pi}} \frac1{\sigma_q(t)}\exp\left(
-\frac{(q-<q(t)>)^2}{2\sigma^2_q(t)}\right) \\
&=& \frac12 {\rm erfc}\left(-\frac{<q(t)>}{\sqrt2\sigma_q(t)}\right).
\end{eqnarray}
For sufficiently long times, the probability converges to a finite value,
\begin{equation}
\lim_{t\to\infty}P_{form}= \frac12 {\rm erfc} \left(\sqrt{\frac{x+ \sqrt{x^2+ 1}}{2x}} \sqrt{\frac{B}T} - \frac1{\sqrt{2x(x+\sqrt{x^2+1})}} \sqrt{\frac{K}T}\right),
\end{equation}
where $ B = \mu\omega^{2} q_{0}^{2}/2 $ is the saddle point height measured from the initial point
$q_{0}$, while $ K= p_{0}^2/(2 \mu)$ is the initial kinetic energy. $ x $ denotes the critical parameter $ \beta /(2\omega) $. The probability becomes $1/2$ when the initial kinetic energy reaches $K= (x + \sqrt{x^2+ 1})^2 B $, which we call the effective barrier $B_{eff}$ for the case of dissipative dynamics. This simply explains the origin of the hindrance. As discussed elsewhere,\cite{KOS,SHEN} the distribution of $ p_0 $ is expected to be a Boltzmann distribution; averaging over the initial momentum $p_0$, thermally distributed with an average value equal to zero, then gives an extremely simple expression for the formation probability,
\begin{equation}
\label{Pform}
P_{form}(E_{c.m.})=\frac12 {\rm erfc}\left(\sqrt{\frac{B}T}\right).
\end{equation}
As is clearly seen in Eq. (\ref{Pform}), even if we give a larger incident kinetic energy, the formation and hence the fusion probability increase only very slowly, through the increase of the temperature of the system, which appears to be in agreement with experiment.\cite{SAH}
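Equation (\ref{Pform}) can be evaluated directly; the barrier height $B=10$ MeV used below is an illustrative assumption, not a value from the paper.

```python
import math

def p_form(B, T):
    # Eq. (5): thermally averaged formation probability over the inner barrier
    return 0.5 * math.erfc(math.sqrt(B / T))

# illustrative barrier B = 10 MeV: even an eightfold increase of the
# temperature leaves the formation probability at the percent level
for T in (1.0, 2.0, 4.0, 8.0):
    print(T, p_form(10.0, T))
```

This makes the hindrance quantitative: the probability grows monotonically with $T$, but remains far below unity as long as $T$ stays below the barrier height $B$.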
Before proceeding to a discussion of the motion of the neck degree of freedom, it is worthwhile to have a close look at the time evolution of the fusion. We analyse the time development of the trajectory, the formation probability, and the current over the saddle by use of the analytic solution. The results are shown in the first, the second, and the bottom rows of Fig. 2, respectively.\cite{BOIL}
\begin{figure}[bth]
\centerline{\psfig{file=fig2.eps,height=5.4cm}}
\caption{Average trajectory, over-passing probability and current at the top of the barrier as a function of time for four regimes, $K=0$ (first column), $K= B_{eff}/2$ (second column), $K= B_{eff} $ (third column) and $K=2 B_{eff}$ (last column). For each case, two temperatures were chosen, $T=B/5$ (solid line) and $T=B/2$ (dashed line). Note that each column has a different time scale.}
\end{figure}
Our case corresponds to the leftmost column, with incident kinetic energy $K=0$. Firstly, the average trajectory never passes over the saddle, but turns back. Secondly, the formation probability saturates as time goes on. Finally, the current has a peak structure around several $\hbar$/MeV, which shows that the probability current at the saddle starts slowly and then terminates gradually. This means that the formation of the compound nucleus is not due to a dynamical motion, but due to the fluctuation, i.e., due to the tail part of the Gaussian distribution around the average trajectory. The time scale of the radial fusion is important for the discussion of the neck degree of freedom in the next section.
\section{From Di-Nucleus to Mono-Nucleus : Filling-in of the Neck Cleft}
For a description of fusion processes between heavy ions, there are at least three parameters or variables.
In the two-center parameterization,\cite{TCP} they are the distance between the two mass centers, the mass-asymmetry, and the neck correction. Since the neck degree of freedom is weakly coupled to the others, it is meaningful to analyse its time evolution separately. The LDM potential for symmetric incident systems turns out to be approximately linear in the neck parameter. To analyse the time evolution of the neck parameter, starting at $\epsilon = 1.0$ or thereabouts, a Langevin equation is solved.
It appears that the average value of the neck parameter changes very quickly, far more quickly than the radial fusion for most systems, including very heavy ones.\cite{NECK} This is due to the action of the linear driving force in the neck parameter $\epsilon$, while the radial fusion is governed by diffusion.
Thus, it is inferred that the neck degree of freedom is in thermal equilibrium during the fusion.
Next, in order to know how the distribution reaches equilibrium, we try to obtain a time-dependent distribution function of the neck, starting from a delta-function at $ \epsilon_0 = 1.0 $, i.e., at the initial DNC. The Smoluchowski equation is solved, since we know that the momentum space can be approximated to be in
equilibrium, due to a very small inertia mass.\cite{AARS} Then, with a linear potential, the equation to be solved is as follows,
\begin{equation}
\frac{\partial N}{\partial t}= D \frac{\partial^2 N}{\partial \epsilon^2}+ C \frac{\partial N}{\partial\epsilon},
\end{equation}
where the diffusion coefficient is $ D = T/ \gamma $ and the drift coefficient is $C = f/ \gamma $, $ f $ being the slope parameter calculated with LDM\cite{TCP}: $ V( \epsilon) = f \epsilon $. The friction coefficient $ \gamma $ is calculated with the usual one-body model. For simplicity, we take the range of the variable $ \epsilon $ to be $ [0.0, \infty) $ instead of the realistic $ [0.0, 1.0] $ (in that case a somewhat more complicated expression is obtained, but the results are essentially the same as in the present case\cite{BSAG}). The boundary condition at $ \epsilon =0.0 $ is {\it reflective}. With the initial and the above boundary conditions, the solution is obtained as follows,\cite{SML}
\begin{eqnarray}
N(\epsilon,t)&=& \frac1{\sqrt{4\pi Dt}}\left[\exp\left(-\frac{(\epsilon-\epsilon_0)^2}{4Dt}\right)+
\exp\left(-\frac{(\epsilon+\epsilon_0)^2}{4Dt}\right)\right]\\
&& \times \exp\left(-\frac{C}{2D}(\epsilon-\epsilon_0)-\frac{C^2t}{4D}\right)\\
&& + \frac{C}{2D} \exp\left(-\frac{C \epsilon}D\right)\cdot {\rm erfc}
\left(\frac{\epsilon+\epsilon_0-Ct}{2\sqrt{Dt}}\right).
\end{eqnarray}
For long times, this expression becomes a Boltzmann distribution. In Fig. 3, the time dependence is illustrated by distributions at various times after the contact, for the $^{100}$Mo+$^{100}$Mo system.
\begin{figure}[th]
\centerline{\psfig{file=fig3.eps,width=5.4cm}}
\caption{Time evolution of the neck distribution function, shown at various times for the $^{100}$Mo+$^{100}$Mo system, for which typical values of the parameters are $D=T/\gamma$=1/8 and $C=f/\gamma$=20/8 in the unit of MeV/$\hbar$. The time unit is $\hbar$/MeV.}
\end{figure}
Apparently, the Boltzmann distribution in the coordinate space is established within several tenths of $\hbar$/MeV. It is worth noticing that this time scale is far shorter than that of the one-dimensional radial fusion discussed in the previous section. This means that before the radial motion for fusion starts, the neck cleft is filled in, i.e., the initial DNC becomes a superdeformed mono-nucleus. During the fusing motion, we can approximately take $ \epsilon $ to be 0.0, because it is the most probable value of the Boltzmann distribution obtained above.
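As a numerical cross-check (a sketch, not from the paper), the closed-form solution above can be evaluated with the parameter values quoted in the caption of Fig. 3 ($D=1/8$, $C=20/8$, $\epsilon_0=1$): the distribution should stay normalized, and at a few $\hbar$/MeV it should collapse onto the Boltzmann form $(C/D)e^{-C\epsilon/D}$.

```python
import math

def neck_dist(eps, t, D=0.125, C=2.5, eps0=1.0):
    # time-dependent neck distribution with a reflecting boundary at eps = 0
    damp = math.exp(-C * (eps - eps0) / (2.0 * D) - C * C * t / (4.0 * D))
    gauss = (math.exp(-(eps - eps0) ** 2 / (4.0 * D * t))
             + math.exp(-(eps + eps0) ** 2 / (4.0 * D * t))) / math.sqrt(4.0 * math.pi * D * t)
    tail = (C / (2.0 * D)) * math.exp(-C * eps / D) \
        * math.erfc((eps + eps0 - C * t) / (2.0 * math.sqrt(D * t)))
    return gauss * damp + tail

t = 5.0                                   # a few hbar/MeV after contact
xs = [0.001 * i for i in range(3001)]     # grid on [0, 3]
ns = [neck_dist(x, t) for x in xs]
boltz = [(2.5 / 0.125) * math.exp(-2.5 * x / 0.125) for x in xs]
norm = sum(0.001 * (ns[i] + ns[i + 1]) / 2.0 for i in range(3000))  # trapezoid
err = max(abs(a - b) for a, b in zip(ns, boltz))
print(norm, err)
```

The trapezoid integral stays close to unity, and the pointwise distance to the Boltzmann profile is already negligible at $t=5\,\hbar$/MeV, consistent with the fast neck equilibration described above.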
A similar analysis has been made for the mass-asymmetry,\cite{BSAG} which showed that its time scale is of the same order as that of the radial fusion; thus, the two degrees of freedom have to be solved in a coupled way.\cite{2DL}
\section*{Acknowledgements}
The present work has been supported by JSPS grant No.18540268.
One of the authors (C.S.) thanks the NSF of China and the NSF
of Zhejiang Province for support under the grant Nos. 10675046 and Y605476,
respectively. The authors also acknowledge support from RCNP, Osaka Univ.,
GANIL, Huzhou Teachers College, and IPT, CEA-Saclay, which enabled us to continue the collaboration.
\section{Introduction}
The first option pricing formula dates back to the classic papers of \citeA{Black73} and \citeA{Merton73}. They implicitly introduced a risk-neutral valuation method for arbitrage pricing, but it was not fully developed and appreciated until the works of \citeA{Harrison79} and \citeA{Harrison81}. The basic idea of the risk-neutral valuation method is that the discounted price process of an underlying asset is a martingale under some risk-neutral probability measure. The option price is then equal to the expected value, with respect to the risk-neutral probability measure, of the discounted option payoff. In this paper, to price rainbow options and lookback options, we use the risk-neutral valuation method in the presence of economic variables.
Sudden and dramatic changes in the financial market and economy are caused by events such as wars, market panics, or significant changes in government policies. To model such events, some authors have used regime--switching models. The regime--switching model was introduced by the seminal works of \citeA{Hamilton89,Hamilton90,Hamilton93} (see also the books of \citeA{Hamilton94} and \citeA{Krolzig97}), and the model is a hidden Markov model with dependencies, see \citeA{Zucchini16}. Markov regime--switching models had been introduced before Hamilton (1989), see, for example, \citeA{Goldfeld73}, \citeA{Quandt58}, and \citeA{Tong83}. The regime--switching model assumes that a discrete unobservable Markov process randomly generates switches among a finite set of regimes and that each regime is defined by a particular parameter set. The model is a good fit for some financial data and has become popular in financial modeling, including equity options, bond prices, and others.
Economic variables play an important role in any economic model. In some existing option pricing models, the underlying asset price is governed by some stochastic process that does not take into account economic variables such as GDP, inflation, the unemployment rate, and so on. For example, the classical Black-Scholes option pricing model uses a geometric Brownian motion to capture underlying asset prices. However, an underlying asset price modeled by geometric Brownian motion is not a realistic assumption when it comes to option pricing. In reality, for the Black-Scholes model, the price process of the asset should depend on some economic variables.
The classic Vector Autoregressive (VAR) process was proposed by \citeA{Sims80}, who criticized large--scale macroeconometric models, which are designed to model inter--dependencies of economic variables. Besides \citeA{Sims80}, there are some other important works on multiple time series modeling, see, e.g., \citeA{Tiao81}, where a class of vector autoregressive moving average models was studied. In a VAR process, each variable is modeled by its own past values and the past values of the other variables in the process. After the work of \citeA{Sims80}, VARs have been used for macroeconomic forecasting and policy analysis. However, if the number of variables in the system increases or the time lag is chosen high, then too many parameters need to be estimated. This reduces the degrees of freedom of the model and entails a risk of over-parametrization.
Therefore, to reduce the number of parameters in a high-dimensional VAR process, \citeA{Doan84} introduced probability distributions for the coefficients that are centered at the desired restrictions but that have a small and nonzero variance. Those probability distributions are known as the Minnesota prior in the Bayesian VAR (BVAR) literature and are widely used in practice. Due to over-parametrization, the generally accepted result is that forecasts of the BVAR model are better than those of the VAR model estimated by frequentist techniques. Research has shown that BVAR is an appropriate tool for modeling large data sets, see, for example, \citeA{Banbura10}.
In this paper, to partially fill the gaps mentioned above, we introduce a Bayesian Markov--Switching VAR (MS--VAR) model to value and hedge the options. Our model offers the following advantages: (i) it tries to mitigate the valuation complexity of previous rainbow option models with regime--switching; (ii) it considers economic variables, so the model is more consistent with future economic uncertainty; (iii) it introduces regime--switching, so that the model takes into account sudden and dramatic changes in the economy and financial market; and (iv) it adopts a Bayesian procedure to deal with over--parametrization. The novelty of the paper is that we apply the Bayesian MS--VAR process, which is widely used to model economic variables, to rainbow options and lookback options.
The rest of the paper is structured as follows. In Section 2, we will consider some results, which include a Theorem used to price and hedge the rainbow options and lookback options and a log--normal system of economic and financial variables in \citeA{Battulga22b}. The author obtained pricing formulas for some frequently used options under Bayesian MS--VAR process. Section 3 is devoted to pricing the rainbow options and lookback options. Section 4 provides hedging formulas which are based on the locally risk--minimizing strategy for the options. Finally, Section 5 concludes the study.
\section{Review}
In this section, we will consider some results in \citeA{Battulga22b}. Let $(\Omega,\mathcal{H}_T,\mathbb{P})$ be a complete probability space, where $\mathbb{P}$ is a given physical or real--world probability measure and $\mathcal{H}_T$ will be defined below. To introduce a regime--switching process, we assume that $\{s_t\}_{t=1}^T$ is a homogeneous Markov chain with $N$ states and $\mathsf{P}:=\{p_{ij}\}_{i,j=1}^N$ is a random transition probability matrix. We consider a Bayesian Markov--Switching Vector Autoregressive (MS--VAR($p$)) process of order $p$, which is given by the following equation
\begin{equation}\label{B01}
y_t=A_0(s_t)\psi_t+A_1(s_t)y_{t-1}+\dots +A_p(s_t)y_{t-p}+\xi_t,~t=1,\dots,T,
\end{equation}
where $y_t=(y_{1,t},\dots,y_{n,t})^T$ is an $(n\times 1)$ random vector, $\psi_t=(\psi_{1,t},\dots,\psi_{k,t})^T$ is a $(k\times 1)$ random vector of exogenous variables, $\xi_t=(\xi_{1,t},\dots,\xi_{n,t})^T$ is an $(n\times 1)$ Gaussian white noise process with zero mean vector and positive definite random covariance matrix $\Sigma(s_t)$, $A_0(s_t)$ is an $(n\times k)$ random coefficient matrix at regime $s_t$ that corresponds to the vector of exogenous variables, and for $i=1,\dots,p$, $A_i(s_t)$ are random $(n\times n)$ coefficient matrices at regime $s_t$ that correspond to the vectors $y_{t-1},\dots,y_{t-p}$. In this paper, we focus on the Bayesian homogeneous MS--VAR process; for the Bayesian heteroscedastic MS--VAR process, we refer to \citeA{Battulga22b}. Equation \eqref{B01} can be compactly written as
\begin{equation}\label{B02}
y_t=\Pi(s_t)\mathsf{Y}_{t-1}+\xi_t,~t=1,\dots,T,
\end{equation}
where $\Pi(s_t):=[A_0(s_t): A_1(s_t):\dots:A_p(s_t)]$ is a random coefficient matrix at regime $s_t$, which consists of all the coefficient matrices, and $\mathsf{Y}_{t-1}:=(\psi_t,y_{t-1}^T,\dots,y_{t-p}^T)^T$ is a vector consisting of the exogenous variables $\psi_t$ and the last $p$ lagged values of the process $y_t$. In the paper, this form of the Bayesian MS--VAR process $y_t$ will play a greater role than the form given by equation \eqref{B01}.
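For concreteness, a toy simulation of the model with $n=2$, $p=1$, $k=1$ (a constant) and two regimes can be sketched as follows; every parameter value is invented for the illustration, the noise covariance is taken diagonal, and the chain is simply started in regime 0 rather than drawn from an initial distribution.

```python
import random

def mat_vec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def simulate_ms_var(T, y0, A0, A1, sig, P, seed=1):
    # y_t = A0[s_t] + A1[s_t] y_{t-1} + xi_t with a 2-state Markov chain s_t
    random.seed(seed)
    s, y, states = 0, [list(y0)], [0]
    for _ in range(T):
        s = 0 if random.random() < P[s][0] else 1          # regime switch
        drift = [a + b for a, b in zip(A0[s], mat_vec(A1[s], y[-1]))]
        y.append([m + random.gauss(0.0, sd) for m, sd in zip(drift, sig[s])])
        states.append(s)
    return y, states

A0 = [[0.1, 0.0], [-0.2, 0.1]]            # intercepts per regime
A1 = [[[0.5, 0.1], [0.0, 0.4]],           # regime 0: mild persistence
      [[0.85, 0.1], [0.05, 0.8]]]         # regime 1: strong persistence
sig = [[0.05, 0.05], [0.2, 0.2]]          # diagonal noise std. deviations
P = [[0.95, 0.05], [0.10, 0.90]]          # transition probability matrix
y, states = simulate_ms_var(200, [0.0, 0.0], A0, A1, sig, P)
print(len(y), set(states))
```

Each regime carries its own intercept, lag matrix, and noise scale, so the simulated path alternates between calm and persistent episodes as the hidden chain switches.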
Let $y:=(y_1^T,\dots,y_T^T)^T$, $s:=(s_1,\dots,s_T)^T$, $\Pi:=[\Pi(1):\dots :\Pi(N)]$ and $\Sigma:=[\Sigma(1):\dots:\Sigma(N)]$. We also assume that the white noise process $\{\xi_t\}_{t=1}^T$ is independent of the random coefficient matrices $\Pi$, covariance matrices $\Sigma$, random transition matrix $\mathsf{P}$ and Markov chain process $\{s_t\}_{t=1}^T$ conditional on initial information $\mathcal{F}_0:=\sigma(y_{1-p},\dots,y_0,$ $\psi_{1},\dots,\psi_T)$. Here for a generic random vector $X$, $\sigma(X)$ denotes $\sigma$-field generated by $X$ random vector. We further suppose that the transition matrix $\mathsf{P}$ and Markov chain process $\{s_t\}_{t=1}^T$ is independent of the random coefficient matrices $\Pi$ and covariance matrices $\Sigma$ given $\mathcal{F}_0$.
For ease of notation, for a generic vector $x=(x_1^T,\dots,x_T^T)^T\in\mathbb{R}^{nT}$, we denote its first $t$ and last $T-t$ component vectors by $\bar{x}_t$ and $\bar{x}_t^c$, respectively, that is, $\bar{x}_t:=(x_1^T,\dots,x_t^T)^T$ and $\bar{x}_t^c:=(x_{t+1}^T,\dots,x_T^T)^T$. We define the $\sigma$--fields: for $t=0,\dots,T$, $\mathcal{F}_{t}:=\mathcal{F}_0\vee\sigma(\bar{y}_{t})$, $\mathcal{G}_{t}:=\mathcal{F}_t\vee \sigma(\Pi)\vee\sigma(\Sigma)\vee \sigma(\mathsf{P})\vee \sigma(\bar{s}_t)$ and $\mathcal{H}_{t}:=\mathcal{F}_t\vee \sigma(\Pi)\vee \sigma(\Sigma)\vee \sigma(\mathsf{P})\vee\sigma(s)$, and for $t=0,\dots,T-1$, $\mathcal{I}_t:=\mathcal{F}_t\vee \sigma(\Pi)\vee \sigma(\Sigma)\vee \sigma(\mathsf{P})\vee \sigma(\bar{s}_{t+1})$, where for generic sigma fields $\mathcal{M}_1,\dots,\mathcal{M}_k$, $\vee_{i=1}^k \mathcal{M}_i $ is the minimal $\sigma$--field containing the $\sigma$--fields $\mathcal{M}_i$, $i=1,\dots,k$. Observe that $\mathcal{F}_{t}\subset \mathcal{G}_{t}\subset \mathcal{H}_{t}$ for $t=0,\dots,T$. These $\sigma$--fields play major roles in the paper.
\subsection{Risk Neutral Measure}
We assume that for $t=1,\dots,T$, $\mathcal{I}_{t-1}$ measurable random vector $\theta_{t-1}(s_t)\in \mathbb{R}^n$ (Girsanov kernel, see \citeA{Bjork09}) has the following representation
\begin{equation}\label{B03}
\theta_{t-1}(s_t)=\Delta_0(s_t)\psi_t+\Delta_1(s_t)y_{t-1}+\dots+\Delta_p(s_t)y_{t-p},~~~t=1,\dots,T,
\end{equation}
where $\Delta_0(s_t)$ is an $(n\times k)$ random coefficient matrix and $\Delta_i(s_t)$, $i = 1,\dots,p$ are $(n \times n)$ random coefficient matrices. In order to change from the real probability measure $\mathbb{P}$ to some risk--neutral probability measure $\tilde{\mathbb{P}}$, for the random vectors $\theta_{t-1}(s_t)$, we define the following state price density process:
$$L_t~|~\mathcal{F}_0 :=\prod_{m=1}^t\exp\bigg\{\theta_{m-1}^T(s_m)\Sigma^{-1}(s_m)\big(y_m-\Pi(s_m) \mathsf{Y}_{m-1}\big)-\frac{1}{2}\theta_{m-1}^T(s_m)\Sigma^{-1}(s_m)\theta_{m-1}(s_m)\bigg\}$$
for $t=1,\dots,T.$ Then it can be shown that $\{L_t\}_{t=1}^T$ is a martingale with respect to the filtration $\{\mathcal{H}_t\}_{t=1}^T$ and the real probability measure $\mathbb{P}$. So $\mathbb{E}[L_T|\mathcal{H}_{0}]=\mathbb{E}[L_1|\mathcal{H}_0]=1$.
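The martingale property can be spot-checked by Monte Carlo in the simplest scalar, one-period case, where $y_1-\Pi(s_1)\mathsf{Y}_0=\xi_1\sim\mathcal{N}(0,\sigma^2)$; the values of $\theta$ and $\sigma$ below are arbitrary.

```python
import math
import random

def mc_state_price_mean(theta, sigma, n=200000, seed=3):
    # one-period scalar L_1 = exp(theta xi / sigma^2 - theta^2 / (2 sigma^2)),
    # with xi ~ N(0, sigma^2); its mean should equal 1 exactly
    random.seed(seed)
    acc = 0.0
    for _ in range(n):
        xi = random.gauss(0.0, sigma)
        acc += math.exp(theta * xi / sigma ** 2 - theta ** 2 / (2.0 * sigma ** 2))
    return acc / n

m = mc_state_price_mean(theta=0.3, sigma=0.5)
print(m)
```

The sample mean is close to one up to Monte Carlo error, in line with the moment-generating function of a Gaussian: $\mathbb{E}[e^{a\xi}]=e^{a^2\sigma^2/2}$ with $a=\theta/\sigma^2$ exactly cancels the compensator $e^{-\theta^2/(2\sigma^2)}$.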
In order to formulate the following theorem, which is the trigger of option pricing with the Bayesian MS--VAR process and will be used in the rest of the paper, we define the following matrices and vector:
$$\Psi(s):=\begin{bmatrix}
I_n & 0 & \dots & 0 & \dots & 0 & 0\\
-A_1(s_2)-\Delta_1(s_2) & I_n & \dots & 0 & \dots & 0 & 0\\
\vdots & \vdots & \dots & \vdots & \dots & \vdots & \vdots\\
0 & 0 & \dots & -A_{p-1}(s_{T-1})-\Delta_{p-1}(s_{T-1}) & \dots & I_n & 0\\
0 & 0 & \dots & -A_p(s_T)-\Delta_p(s_T) & \dots & -A_1(s_T)-\Delta_1(s_T) & I_n
\end{bmatrix}$$
$\bar{\Sigma}(s):=\text{diag}\{\Sigma(s_1),\dots,\Sigma(s_T)\}$ and
$$\delta(s):=\begin{bmatrix}
\big(A_0(s_1)+\Delta_0(s_1)\big)\psi_1+\big(A_1(s_1)+\Delta_1(s_1)\big)y_{0}+\dots+\big(A_p(s_1)+\Delta_p(s_1)\big)y_{1-p}\\
\big(A_0(s_2)+\Delta_0(s_2)\big)\psi_2+\big(A_2(s_2)+\Delta_2(s_2)\big)y_{0}+\dots+\big(A_p(s_2)+\Delta_p(s_2)\big)y_{2-p}\\
\vdots\\
\big(A_0(s_{T-1})+\Delta_0(s_{T-1})\big)\psi_{T-1}\\
\big(A_0(s_T)+\Delta_0(s_T)\big)\psi_T
\end{bmatrix},$$
\begin{thm}\label{thm01}
Let a Bayesian MS--VAR process $y_t$ be given by equation \eqref{B01} or \eqref{B02}, and for $t=1,\dots,T$, let the representation of the random vector $\theta_{t-1}(s_t)$, which is measurable with respect to the $\sigma$--field $\mathcal{I}_{t-1}$, be given by equation \eqref{B03}. We define the following new (risk--neutral) probability measure
$$\mathbb{\tilde{P}}[A|\mathcal{F}_0]:=\int_A L_T(\omega |\mathcal{F}_0)d\mathbb{P}[\omega |\mathcal{F}_0]~~~\text{for all}~ A\in\mathcal{H}_T.$$
Let
$$\delta(s)=\begin{bmatrix}
\bar{\delta}_1(\bar{s}_t)\\ \bar{\delta}_2(\bar{s}_t^c)
\end{bmatrix}, ~~~
\Psi(s)=\begin{bmatrix}
\Psi_{11}(\bar{s}_t) & 0\\
\Psi_{21}(\bar{s}_t^c) & \Psi_{22}(\bar{s}_t^c)
\end{bmatrix}~~~\text{and}~~~\bar{\Sigma}(s)=\begin{bmatrix}
\bar{\Sigma}_{11}(\bar{s}_t) & 0\\
0 & \bar{\Sigma}_{22}(\bar{s}_t^c)
\end{bmatrix}$$
be partitions corresponding to the random sub-vectors $\bar{y}_t$ and $\bar{y}_t^c$ of the random vector $y=(y_1^T,\dots,y_T^T)^T$. Then the following probability laws hold:
\begin{eqnarray}
y~|~\mathcal{H}_0 &\sim& \mathcal{N}\Big(\Psi(s)^{-1}\delta(s),\Psi(s)^{-1}\bar{\Sigma}(s)(\Psi(s)^{-1})^T\Big), \label{B04}\\
\bar{y}_t^c~|~\mathcal{H}_t &\sim& \mathcal{N}\Big(\Psi_{22}^{-1}(\bar{s}_t^c)\big(\bar{\delta}_2(\bar{s}_t^c)-\Psi_{21}(\bar{s}_t^c)\bar{y}_t\big),\Psi_{22}^{-1}(\bar{s}_t^c)\bar{\Sigma}_{22}(\bar{s}_t^c)(\Psi_{22}^{-1}(\bar{s}_t^c))^T\Big), \label{B05}
\end{eqnarray}
under the risk--neutral probability measure $\tilde{\mathbb{P}}$. Also, conditional on $\mathcal{F}_0$, the joint distribution of the random vector $S_*:=\text{vec}(\Pi,\Sigma,s,\mathsf{P})$ is the same under the probability measures $\mathbb{\tilde{P}}$ and $\mathbb{P}$.
\end{thm}
\textbf{Proof:} See \citeA{Battulga22b}.\hfill{$\Box$}
It follows from the Theorem that $\text{vec}(\Pi,\Sigma)$ and $\text{vec}(\mathsf{P},s)$ are independent given $\mathcal{F}_0$ under the risk--neutral probability measure $\mathbb{\tilde{P}}$, and the joint distributions of the random vectors $\text{vec}(\Pi,\Sigma)$ and $\text{vec}(\mathsf{P},s)$ are the same under the probability measures $\mathbb{\tilde{P}}$ and $\mathbb{P}$. In particular, it holds that
$$\mathbb{\tilde{P}}\big(s=s ~\big|~\mathsf{P},\mathcal{F}_0\big)=p_{s_1}\prod_{t=2}^Tp_{s_{t-1}s_t},$$
where $p_{s_1}:=\mathbb{P}(s_1=s_1|\mathsf{P},\mathcal{F}_0)$ and for $t=2,\dots,T$, $p_{s_{t-1}s_t}:=\mathbb{P}(s_t=s_t|s_{t-1}=s_{t-1},\mathsf{P},\mathcal{F}_0)$.
\subsection{Log--normal System}
Under the Bayesian MS--VAR framework, \citeA{Battulga22b} introduced a foreign--domestic market and obtained pricing formulas for frequently used options. Because the idea for the domestic market carries over to the domestic--foreign market, to simplify the calculation we will focus here on a domestic market. We assume that the financial variables, which are composed of a domestic log spot rate and domestic assets, are placed together with the economic variables in the Bayesian MS--VAR process $y_t$. To extract the financial variables from the process $y_t$, we introduce the following vectors and matrices:
$e_i:=(0,\dots,0,1,0,\dots,0)^T\in \mathbb{R}^n$ is a unit vector, that is, its $i$-th component is 1 and others are zero, $M_1:=\big[I_{n_z}:0_{n_z\times n_x}\big]$, and $M_2:=\big[0_{n_x\times n_z}:I_{n_x}\big]$.
Let $r_t$ be the domestic spot interest rate. We define $\tilde{r}_{t}:=\ln(1+r_t)$. Then $\tilde{r}_{t}$ represents the total log return at time $t$, and we will refer to it as the log spot rate. Since the spot interest rate at time $t$ is known at time $(t-1)$, we can assume that the log spot rate is placed in the first component of the process $y_{t-1}$. In this case, $\tilde{r}_{t}=e_1^Ty_{t-1}$. Let $n_z\geq 1$ and let $z_t:=M_1y_t$ be an $(n_z\times 1)$ vector at time $t$ that includes the domestic log spot rate. Since the first component of the process $z_t$ corresponds to the domestic log spot rate, we assume that the other components of the process $z_t$ correspond to economic variables that affect the financial variables. So the log spot rate is not constant and is explained by its own and other variables' lagged values in the VAR system $y_t$.
We further suppose that $\tilde{x}_t:=\ln(x_t)=M_2y_t$ is an $(n_x\times 1)$ log price process of the domestic assets, where $x_t$ is an $(n_x\times 1)$ price process of the domestic assets. This means that the log prices of the domestic assets are placed on the $(n_z+1)$--th to $(n_z+n_x)$--th components of the Bayesian MS--VAR process $y_t$. As a result, the domestic market is given by the following system:
\begin{equation}\label{B08}
\begin{cases}
z_t=\Pi_1(s_t)\mathsf{Y}_{t-1}+\zeta_t\\
\tilde{x}_t=\Pi_2(s_t)\mathsf{Y}_{t-1}+\eta_t\\
D_{t}=\exp\{-\tilde{r}_{1}-\tilde{r}_{2}-\dots-\tilde{r}_{t}\}=\frac{1}{\prod_{m=1}^t(1+r_m)}\\
\tilde{r}_{t}=e_1^Ty_{t-1}
\end{cases},~~~t=1,\dots,T,
\end{equation}
where $D_t$ is a domestic discount process, $\zeta_t:=M_1\xi_t$ and $\eta_t:=M_2\xi_t$ are residual processes of the processes $z_t$ and $\tilde{x}_t$, respectively, and $\Pi_1(s_t):=M_1\Pi(s_t)$ and $\Pi_2(s_t):=M_2\Pi(s_t)$ are random coefficient matrices. For the system, $D_tx_t$ represents the discounted price process of the domestic assets. If we define a random vector $\theta_{t-1}^{*}(s_t):=M_2(y_{t-1}-\Pi(s_t)\mathsf{Y}_{t-1})+i_{n_x}e_1^Ty_{t-1}$, then it can be shown that
\begin{equation}\label{B09}
D_tx_t=\big(D_{t-1}x_{t-1}\big)\odot\exp\big(\eta_t-\theta_{t-1}^*(s_t)\big),
\end{equation}
where $\odot$ means the Hadamard product. The random vector $\theta_{t-1}^*(s_t)$, which is measurable with respect to the $\sigma$-field $\mathcal{I}_{t-1}$, can be represented by
$$\theta_{t-1}^*(s_t)=\Delta_0^*(s_t)\psi_t+\Delta_1^*(s_t)y_{t-1}+\dots+\Delta_p^*(s_t)y_{t-p},$$
where $\Delta_0^*(s_t):=-M_2A_0(s_t)$, $\Delta_1^*(s_t):=M_2\big(I_n-A_1(s_t)\big)+i_{n_x}e_1^T$ and, for $m=2,\dots,p$, $\Delta_m^*(s_t):=-M_2A_m(s_t)$. According to equation \eqref{B09}, as $D_{t-1}x_{t-1}$ is $\mathcal{H}_{t-1}$ measurable, in order for the discounted process $D_tx_t$ to be a martingale with respect to the filtration $\mathcal{H}_t$ and some risk-neutral probability measure $\tilde{\mathbb{P}}$, we must require that
$$\tilde{\mathbb{E}}\big[\exp\big\{\eta_t-\theta_{t-1}^*(s_t)\big\}|\mathcal{H}_{t-1}\big]=i_{n_x},$$
where $\mathbb{\tilde{E}}$ denotes the expectation under the risk-neutral probability measure $\tilde{\mathbb{P}}$. We also require that
$$\tilde{\mathbb{E}}[\exp\{\zeta_t\}|\mathcal{H}_{t-1}]=i_{n_z}.$$
Combining the two requirements, we can write
$$\tilde{\mathbb{E}}\big[\exp\big\{\xi_t-\bar{\theta}_{t-1}(s_t)\big\}|\mathcal{H}_{t-1}\big]=i_n,$$
where $\xi_t=(\zeta_t^T,\eta_t^T)^T=y_t-\Pi(s_t)\mathsf{Y}_{t-1}$ and $\bar{\theta}_{t-1}(s_t):=\big(0,\theta_{t-1}^{*T}(s_t)\big)^T$. If we denote the vector consisting of the diagonal elements of a generic square matrix $A$ by $\mathcal{D}[A]$, then $\theta_{t-1}(s_t)$ in Theorem \ref{thm01} must have the following form
$$\theta_{t-1}(s_t):=\bar{\theta}_{t-1}(s_t)-\alpha_t(s_t),$$
where $\alpha_t(s_t):=\frac{1}{2}\mathcal{D}\big[\Sigma_t(s_t)\big]$.
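With mean-zero Gaussian residuals, the compensator per regime amounts to subtracting half the diagonal of the residual covariance matrix, and the recursion \eqref{B09} then produces a martingale. A small simulation sketch, using a hypothetical residual covariance for two assets:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical residual covariance for n_x = 2 discounted assets
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])
theta = 0.5 * np.diag(Sigma)          # Gaussian compensator (1/2) * D[Sigma]

eta = rng.multivariate_normal(np.zeros(2), Sigma, size=200_000)
growth = np.exp(eta - theta)          # growth factor exp(eta_t - theta*) in (B09)

# Martingale condition: E[exp(eta - theta)] = 1 componentwise
mean_growth = growth.mean(axis=0)     # both entries close to 1

Dx_prev = np.array([1.0, 1.0])
Dx_next = Dx_prev * growth[0]         # one step of D_t x_t = (D_{t-1} x_{t-1}) * exp(...)
```

Without the compensator the expectation of each growth factor would exceed 1, so the discounted prices would drift upward rather than form a martingale.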
Therefore, it follows from Theorem \ref{thm01} that, conditional on $\mathcal{H}_t$, the distribution of the random vector $\bar{y}_t^c$ is given by
$$\bar{y}_t^c=(y_{t+1}^T,\dots,y_T^T)^T~|~\mathcal{H}_t \sim \mathcal{N}\big(\mu_{2.1}^\alpha(\bar{y}_t,\bar{s}_t^c),\Sigma_{2.1}(\bar{s}_t^c)\big)$$
under the risk-neutral probability measure $\tilde{\mathbb{P}}$, where $\mu_{2.1}^\alpha(\bar{y}_t,\bar{s}_t^c):=\Psi_{22}^{-1}(\bar{s}_t^c)\big(\bar{\delta}_2(\bar{s}_t^c)-\bar{\alpha}_t^c(\bar{s}_t^c)-\Psi_{21}(\bar{s}_t^c)\bar{y}_t\big)$ and $\Sigma_{2.1}(\bar{s}_t^c):=\Psi_{22}^{-1}(\bar{s}_t^c)\bar{\Sigma}_{22}(\bar{s}_t^c)(\Psi_{22}^{-1}(\bar{s}_t^c))^T$ are the mean vector and covariance matrix of the random vector $\bar{y}_t^c$ given $\mathcal{H}_t$.
Let $\tilde{x}:=(\tilde{x}_1^T,\dots,\tilde{x}_T^T)^T$ be the log of the price vector $x:=(x_1^T,\dots,x_T^T)^T$. Then, in terms of $y$, the log price process is represented by $\tilde{x}=(I_T\otimes M_2)y$. Now we introduce a vector that deals with the domestic risk-free spot interest rate: the vector $\gamma_{u,v}$ is defined, for $v>u$, by $\gamma_{u,v}^T:=\big[0_{1\times [(u-t)n]}:i_{v-u-1}^T\otimes e_1^T:0_{1\times [(T-v+1)n]}\big]$ and, for $v=u$, by $\gamma_{u,v}:=0\in \mathbb{R}^{[T-t]n}$. Then observe that for $t\leq u\leq v$,
$$\sum_{m=u+1}^v\tilde{r}_{m}=e_1^Ty_u1_{\{u<v\}}+\gamma_{u,v}^T\bar{y}_t^c.$$
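The selection-vector identity above is pure index bookkeeping; a small Python check in a hypothetical configuration ($n=3$, $t=1$, $T=5$, $u=2$, $v=4$), using $\tilde{r}_m=e_1^Ty_{m-1}$:

```python
import numpy as np

n, t, T = 3, 1, 5
u, v = 2, 4                                      # t <= u <= v

y = np.arange(1.0, 1.0 + n * T).reshape(T, n)    # hypothetical rows y_1,...,y_T
y_bar_c = y[t:].reshape(-1)                      # stacked (y_{t+1},...,y_T)

# gamma picks the first component of y_{u+1},...,y_{v-1} inside y_bar_c,
# since r~_m = e_1^T y_{m-1} for m = u+2,...,v
gamma = np.zeros((T - t) * n)
for k in range(u + 1, v):
    gamma[(k - t - 1) * n] = 1.0

total = y[u - 1, 0] * (u < v) + gamma @ y_bar_c  # e_1^T y_u 1_{u<v} + gamma^T y_bar_c
direct = sum(y[m - 2, 0] for m in range(u + 1, v + 1))  # sum of r~_m = e_1^T y_{m-1}
```

Both sides accumulate the same first components of $y_u,\dots,y_{v-1}$, the first one explicitly splitting off the term $e_1^Ty_u$.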
According to \citeA{Geman95}, a clever change of probability measure leads to a significant reduction in the computational burden of derivative pricing. Therefore, we will consider some probability measures derived from the risk-neutral probability measure $\mathbb{\tilde{P}}$. In this and the following sections, we will assume that $0\leq t\leq u\leq T$. We define the following map on the $\sigma$-field $\mathcal{H}_T$:
$$\mathbb{\tilde{P}}_{t,u}^{i}\big[A|\mathcal{H}_t\big]:=\frac{1}{D_tx_{i,t}}\int_AD_ux_{i,u}d\mathbb{\tilde{P}}\big[\omega|\mathcal{H}_t\big],~~~\text{for all}~A\in \mathcal{H}_T.$$
Because the discounted process $D_tx_t$ takes positive values and for $0\leq t\leq u\leq T$, $\tilde{\mathbb{E}}[D_ux_u|\mathcal{H}_t]=D_tx_t$ (as it is a martingale with respect to the filtration $\{\mathcal{H}_t\}_{t=1}^T$ and the risk-neutral probability measure $\tilde{\mathbb{P}}$), the map becomes a probability measure. If we define $\beta_{t,u}^{i}:=(i_{u-t}^T,0_{1\times (T-u)})^T\otimes e_{n_z+i}$, where $\otimes$ is the Kronecker product, then it can be shown that the conditional distribution of the random vector $\bar{y}_t^c$ is given by
$$\bar{y}_t^c=(y_{t+1}^T,\dots,y_T^T)^T~|~\mathcal{H}_t\sim \mathcal{N}\Big(\mu_{t,u}^{i}(\bar{y}_t,\bar{s}_t^c),\Sigma_{2.1}(\bar{s}_t^c)\Big),$$
under the measure $\mathbb{\tilde{P}}_{t,u}^{i}$, where $\mu_{t,u}^{i}(\bar{y}_t,\bar{s}_t^c):=\mu_{2.1}^\alpha(\bar{y}_t,\bar{s}_t^c)+\Psi_{22}^{-1}\bar{\Sigma}_2(\bar{s}_t^c)\beta_{t,u}^{i}$ and $\Sigma_{2.1}(\bar{s}_t^c)$ are the mean vector and covariance matrix of the random vector $\bar{y}_t^c$ given $\mathcal{H}_t$.
If we denote the normal distribution function with mean $\mu$ and covariance matrix $\Omega$ at an event $A$ by $\mathcal{N}(A,\mu,\Omega)$, then it is clear that for all $A\in \mathcal{H}_T$,
$$\mathbb{\tilde{P}}_{t,u}^{i}[A|\mathcal{G}_t]=\sum_{s_{t+1}=1}^N\dots \sum_{s_T=1}^N\mathcal{N}\big(A,\mu_{t,u}^{i}(\bar{y}_t,\bar{s}_t^c),\Sigma_{2.1}(\bar{s}_t^c)\big)\prod_{m=t+1}^Tp_{s_{m-1}s_m}.$$
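The mixture formula above weights Gaussian probabilities by regime-path probabilities; a one-step, one-dimensional sketch with hypothetical regime parameters, using the scalar normal CDF in place of $\mathcal{N}(A,\mu,\Sigma)$:

```python
import math

def norm_cdf(x, mu, sigma):
    """Scalar normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Hypothetical two-regime, one-period illustration: conditional on the regime,
# y is normal; the probability of A = {y <= b} is the transition-probability
# weighted mixture of regime-conditional normal probabilities.
P = [[0.9, 0.1], [0.2, 0.8]]   # transition matrix
mu = [0.0, 0.05]               # regime-dependent means
sig = [0.1, 0.3]               # regime-dependent standard deviations
s_t = 0                        # current regime
b = 0.1                        # event A = {y <= b}

prob = sum(P[s_t][s] * norm_cdf(b, mu[s], sig[s]) for s in (0, 1))
```

In the paper's multi-period setting the single sum over next-period regimes becomes nested sums over the full regime path, each term weighted by the product of transition probabilities.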
To price the rainbow options and lookback options that will appear in the following sections, we will use the following two Lemmas.
\begin{lem}\label{lem07}
For $t=0,\dots,T-1$, the following relation holds
$$\mathbb{\tilde{P}}\big(\bar{s}_t^c=\bar{s}_t^c~|~\mathcal{G}_t\big)=\prod_{m=t+1}^Tp_{s_{m-1}s_m},$$
where $p_{s_0s_1}:=\mathbb{P}(s_1=s_1|\mathsf{P},\mathcal{F}_0)$ and $p_{s_{m-1}s_m}:=\mathbb{P}(s_m=s_m|s_{m-1}=s_{m-1},\mathsf{P},\mathcal{F}_0)$ for $m=t+1,\dots,T$.
\end{lem}
For a generic random vector $X$ and a generic $\sigma$-field $\mathcal{F}$, let us denote the joint density function of $X$ conditional on $\mathcal{F}$ under $\mathbb{\tilde{P}}$ by $\tilde{f}(X|\mathcal{F})$. Then the following Lemma is true.
\begin{lem}\label{lem08}
For $t=0,\dots,T$, the following relation holds
$$\tilde{f}\big(\Pi,\Gamma,\mathsf{P},\bar{s}_t~|~\mathcal{F}_t\big)=\frac{f(\Pi,\Gamma|\mathcal{F}_0)p_{s_1}\prod_{m=2}^tp_{s_{m-1}s_m}f(\mathsf{P}|\mathcal{F}_0)\tilde{f}(\bar{y}_t|\mathcal{G}_t)}{\sum_{s_1=1}^N\dots \sum_{s_t=1}^N\int_{\Pi,\Gamma,\mathsf{P}} f(\Pi,\Gamma|\mathcal{F}_0)p_{s_1}\prod_{m=2}^tp_{s_{m-1}s_m}f(\mathsf{P}|\mathcal{F}_0)\tilde{f}(\bar{y}_t|\mathcal{G}_t)d\Pi d\Gamma d\mathsf{P}},$$
where $\tilde{f}\big(\Pi,\Gamma,\mathsf{P},\bar{s}_0~|~\mathcal{F}_0\big)=f(\Pi,\Gamma|\mathcal{F}_0)f(\mathsf{P}|\mathcal{F}_0)$ and
$$\tilde{f}(\bar{y}_t|\mathcal{G}_t)=\frac{1}{(2\pi)^{nt/2}\prod_{m=1}^t|\Sigma_m(s_m)|^{1/2}}\exp\Big\{-\frac{1}{2}\big(\bar{y}_t-\mu_1(\bar{s}_t)\big)^T\Sigma_{11}^{-1}(\bar{s}_t)\big(\bar{y}_t-\mu_1(\bar{s}_t)\big)\Big\}$$
with $\mu_1(\bar{s}_t):=\Psi_{11}^{-1}(\bar{s}_t)\bar{\delta}_1(\bar{s}_t)$ and $\Sigma_{11}(\bar{s}_t):=\Psi_{11}^{-1}(\bar{s}_t)\bar{\Sigma}_{11}(\bar{s}_t)(\Psi_{11}^{-1}(\bar{s}_t))^T$.
\end{lem}
Now we present a Lemma, which is used to calculate the expectation of a random variable $(D_v/D_u)1_A$ with respect to a generic probability measure.
\begin{lem}\label{lem01}
Let $\bar{y}_t^c~|~\mathcal{H}_t\sim \mathcal{N}\big(\mu^G(\bar{y}_t,\bar{s}_t^c),\Sigma_{2.1}(\bar{s}_t^c)\big)$ under a generic probability measure $\mathbb{\tilde{P}}^G$. Then, for $A\in \mathcal{H}_T$ and $t\vee u\leq v$, it holds
$$\mathbb{\tilde{E}}^G\bigg[\frac{D_v}{D_u} 1_A\bigg|\mathcal{H}_t\bigg]=
\frac{D_{t\vee u}}{D_u}\exp\big\{\big[a^G\big]_{t\vee u}^v(\bar{y}_t,\bar{s}_t^c)\big\}\mathcal{N}\big(A,\big[\mu^G\big]_{t\vee u}^v(\bar{y}_t,\bar{s}_t^c),\Sigma_{2.1}(\bar{s}_t^c)\big)$$
where for $t\leq u\leq v$, $\big[a^G\big]_u^v(\bar{y}_t,\bar{s}_t^c)=-e_{1}^Ty_t1_{\{u=t,v>u\}}-j_{u,v}^T\bar{R}_{t}^c\mu^G(\bar{y}_t,\bar{s}_t^c)+\frac{1}{2}j_{u,v}^T\bar{R}_{t}^c\Sigma_{2.1}(\bar{s}_t^c)\bar{R}_{t}^{cT}j_{u,v}$ and $\big[\mu^G\big]_u^v(\bar{y}_t,\bar{s}_t^c)=\mu^G(\bar{y}_t,\bar{s}_t^c)-\Sigma_{2.1}(\bar{s}_t^c)\bar{R}_{t}^{cT}j_{u,v}$.
\end{lem}
\section{Rainbow Options}
Rainbow options are usually calls or puts on the maximum or minimum of several underlying assets. The number of assets is called the number of colors of the rainbow, and each asset is referred to as a color of the rainbow. \citeA{Stulz82} introduced rainbow options with two assets. An extension to rainbow options with more than two assets, using multidimensional normal cumulative distribution functions, is given by \citeA{Johnson87a}. In this section, we will present pricing formulas for call and put options and lookback options on the maximum and minimum of several asset prices without default risk; however, following the idea in \citeA{Battulga22b}, one can develop pricing formulas for the options with default risk. Here we impose weights on all underlying assets at all time periods. Therefore, the options depart from existing rainbow and lookback options. To price the rainbow options and lookback options, we reconsider the domestic market, which is given by equation \eqref{B08}. We define the maximum and minimum of the prices of the domestic assets:
$$\overline{M}_t:=\max_{1\leq u\leq t}\{M_u\}~~~\text{and}~~~\overline{m}_t:=\min_{1\leq u\leq t}\{m_u\}$$
for $t=1,\dots,T$, where
\begin{equation}\label{B10}
M_u:=\max_{1\leq i\leq n_x}\{w_{i,u}x_{i,u}\}~~~\text{and}~~~m_u:=\min_{1\leq i\leq n_x}\{w_{i,u}x_{i,u}\}
\end{equation}
where $w_{i,u}$ is the weight at time $u$ of the $i$-th asset. One choice of the weight vector corresponds to the reciprocals of the asset prices at time 0. In this case, $w_{i,t}x_{i,t}=x_{i,t}/x_{i,0}$ represents the total return at time $t$ of the $i$-th domestic asset. To price the rainbow options and lookback options, it will be sufficient to consider the following call option on maximum
$$C_{t,w}^{\overline{M}_T}(K):=\frac{1}{D_t}\mathbb{\tilde{E}}\Big[D_T\big(\overline{M}_T-K\big)^+\Big|\mathcal{I}_t\Big],$$
where $T$ is the expiration time of the option and $K$ is the strike price of the option. Let us denote the discounted contingent claim of the option by $\overline{H}_T^1$, that is,
$$\overline{H}_T^1:=D_T\big(\overline{M}_T-K\big)^+.$$
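The weighted running maximum $\overline{M}_t$, the running minimum $\overline{m}_t$, and the discounted claim $\overline{H}_T^1$ are straightforward to compute; a sketch with hypothetical prices, weights, and discount factor:

```python
import numpy as np

# Hypothetical weighted prices: rows are times u = 1..T, columns are assets i
w = np.array([[1.0, 0.5],
              [1.0, 0.5],
              [1.0, 0.5]])
x = np.array([[100.0, 210.0],
              [ 95.0, 230.0],
              [110.0, 190.0]])

Z = w * x                           # Z_{i,u} = w_{i,u} x_{i,u}
M_u = Z.max(axis=1)                 # cross-sectional maximum M_u at each time
m_u = Z.min(axis=1)                 # cross-sectional minimum m_u
M_bar = np.maximum.accumulate(M_u)  # running maximum M_bar_t
m_bar = np.minimum.accumulate(m_u)  # running minimum m_bar_t

K, D_T = 100.0, 0.95                # strike and terminal discount factor
payoff = D_T * max(M_bar[-1] - K, 0.0)  # discounted claim of the call on maximum
```

The pricing formulas below replace this pathwise payoff by its conditional expectation under the risk-neutral measure.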
To simplify notations, we define the following random variables: $Z_{i,u}:=w_{i,u}x_{i,u}$ is the price at time $u$ of $w_{i,u}$ units of the $i$-th asset. Then, for all $i=1,\dots,n_x$ and $u=1,\dots,T$, the event $\{\overline{M}_T=Z_{i,u}\}\cap \{\overline{M}_T\geq K\}$ (which means that $Z_{i,u}$ is the maximum and the option on maximum expires in the money) holds if and only if the event $A_{i,u}\cap B_{i,u}$ holds, where $B_{i,u}:=\big\{Z_{i,u}\geq K\big\}$ and $A_{i,u}:=A_{i,u,1}\cap A_{i,u,2}$ with
$$A_{i,u,1}:=\Big\{Z_{i,u} \geq Z_{1,1},\dots, Z_{i,u} \geq Z_{n_x,1},\dots,Z_{i,u} \geq Z_{1,t},\dots, Z_{i,u} \geq Z_{n_x,t}\Big\}$$
and
$$A_{i,u,2}:=\Big\{Z_{i,u} \geq Z_{1,t+1},\dots,Z_{i,u} \geq Z_{n_x,t+1},\dots,Z_{i,u} \geq Z_{1,T},\dots, Z_{i,u} \geq Z_{n_x,T}\Big\}.$$
It is clear that the discounted contingent claim of the call option on maximum can be represented by
\begin{equation}\label{B11}
\overline{H}_T^1=\sum_{i=1}^{n_x}\sum_{u=1}^TD_T\big(Z_{i,u}-K\big)1_{E_{i,u}},
\end{equation}
where $E_{i,u}:=A_{i,u}\cap B_{i,u}$. Since for $1\leq u\leq t$ the random variables $Z_{i,u}$ are known at time $t$, the sets $A_{i,u,1}$ and $B_{i,u}$ satisfy $A_{i,u,1},B_{i,u}\in\{\O,\Omega\}$. Therefore, we deduce that
\begin{equation}\label{B12}
E_{i,u}=A_{i,u}\cap B_{i,u}~\begin{cases}
\in\big\{\O, A_{i,u,2}\big\}, & \text{if}~~~1\leq u\leq t\\
=A_{i,u,2} \cap \big\{Z_{i,u}\geq \gamma\big\}, & \text{if}~~~t<u\leq T,
\end{cases}
\end{equation}
where $\gamma:=\max\{\overline{M}_t,K\}$. Because for $1\leq u\leq t$, $Z_{i,u}$ is known at time $t$ and for $i=1,\dots,n_x$, $\mu_{2.1}^\alpha(\bar{y}_t,\bar{s}_t^c)=\mu_{t,t}^{i}(\bar{y}_t,\bar{s}_t^c)$, due to Lemma \ref{lem01}, one obtains that, conditional on $\mathcal{H}_t$, the price at time $t$ of the option on maximum is given by
\begin{eqnarray}\label{B13}
C_{t,w}^{\overline{M}_T}(\mathcal{H}_t,K)&=&\sum_{i=1}^{n_x}\sum_{u=1}^Tw_{i,u}x_{i,u\wedge t}\exp\big\{[a_{t,u\vee t}^i]_{u\vee t}^T(\bar{y}_t,\bar{s}_t^c)\big\}\mathcal{N}\big(E_{i,u},[\mu_{t,u\vee t}^i]_{u\vee t}^T(\bar{y}_t,\bar{s}_t^c),\Sigma_{2.1}(\bar{s}_t^c)\big)\nonumber\\
&-&K\sum_{i=1}^{n_x}\sum_{u=1}^T\exp\big\{[a_{2.1}^\alpha]_t^T(\bar{y}_t,\bar{s}_t^c)\big\}\mathcal{N}\big(E_{i,u},[\mu_{2.1}^\alpha]_t^T(\bar{y}_t,\bar{s}_t^c),\Sigma_{2.1}(\bar{s}_t^c)\big),
\end{eqnarray}
where for any real numbers $a,b\in \mathbb{R}$, $a\vee b=\max\{a,b\}$ and $a\wedge b=\min\{a,b\}$. In terms of the random log price vector $\bar{\tilde{x}}_t^c$ the set $A_{i,u,2}$ is expressed by
\begin{equation}\label{B14}
A_{i,u,2}=\big\{\tilde{L}_{i,u}\bar{\tilde{x}}_t^c\leq \tilde{b}_{i,u}\big\}
\end{equation}
for $1\leq u\leq t$ and $i=1,\dots,n_x$, where $\tilde{b}_{i,u}:=\big(\ln(Z_{i,u}/w_{1,t+1}),\dots,\ln(Z_{i,u}/w_{n_x,T})\big)^T$ and $\tilde{L}_{i,u}:=I_{[T-t]n_x}$. Now we consider the second line of equation \eqref{B12}. To represent the set $A_{i,u,2}\cap \big\{Z_{i,u}\geq \gamma\big\}$ in terms of the log price vector $\bar{\tilde{x}}_t^c$, we define the following matrix and vector:
$$L_{i,u}:=\begin{bmatrix}
I_{[u-t-1]n_x+i-1} & -i_{[u-t-1]n_x+i-1} & 0\\
0 & -i_{[T-u+1]n_x-i} & I_{[T-u+1]n_x-i}\\
0 & -1 & 0\\
\end{bmatrix},$$
and $b_{i,u}^{\gamma}:=\big(\ln(w_{i,u}/w_{1,t+1}),\dots,\ln(w_{i,u}/w_{i-1,u}),\ln(w_{i,u}/w_{i+1,u}),\dots,\ln(w_{i,u}/w_{n_x,T}),\ln(w_{i,u}/\gamma)\big)^T$. The last row of the matrix $L_{i,u}$ corresponds to the event $\big\{Z_{i,u}\geq \gamma\big\}$, and the other rows correspond to the event $A_{i,u,2}$. In this case, we can deduce that
\begin{equation}\label{B15}
A_{i,u,2}\cap \big\{Z_{i,u}\geq \gamma\big\}=\big\{L_{i,u}\bar{\tilde{x}}_t^c\leq b_{i,u}^\gamma\big\}
\end{equation}
for $t<u\leq T$ and $i=1,\dots,n_x$. Let us introduce a simple Lemma, which will be used to price the call option on maximum.
\begin{lem}\label{lem02}
Let $\bar{y}_t^c~|~\mathcal{H}_t \sim \mathcal{N}\big(\mu^G(\bar{y}_t,\bar{s}_t^c),\Sigma_{2.1}(\bar{s}_t^c)\big)$ under a generic probability measure $\mathbb{\tilde{P}}^G$. Then for all matrices $\mathsf{A}\in \mathbb{R}^{k\times [(T-t)n_x]}$, it holds
$$\mathsf{A}\bar{\tilde{x}}_t^c~|~\mathcal{H}_t\sim \mathcal{N}\Big(\mu^G(\bar{y}_t,\bar{s}_t^c,\mathsf{A}),\Sigma_{2.1}(\bar{s}_t^c,\mathsf{A})\Big)$$
under the generic probability measure $\mathbb{\tilde{P}}^G$, where $\mu^G(\bar{y}_t,\bar{s}_t^c,\mathsf{A}):=\mathsf{A}(I_{T-t}\otimes M_2)\mu^G(\bar{y}_t,\bar{s}_t^c)$ and $\Sigma_{2.1}(\bar{s}_t^c,\mathsf{A}):=\mathsf{A}(I_{T-t}\otimes M_2)\Sigma_{2.1}(\bar{s}_t^c)(I_{T-t}\otimes M_2^T)\mathsf{A}^T$.
\end{lem}
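Lemma \ref{lem02} is the affine-transformation property of the multivariate normal distribution; a quick simulation check with hypothetical mean $\mu$, covariance $S$ and matrix $\mathsf{A}$:

```python
import numpy as np

rng = np.random.default_rng(2)

# If y ~ N(mu, S), then A y ~ N(A mu, A S A^T)
mu = np.array([0.1, -0.2, 0.05])
S = np.array([[0.04, 0.01, 0.00],
              [0.01, 0.09, 0.02],
              [0.00, 0.02, 0.16]])
A = np.array([[1.0, -1.0,  0.0],
              [0.0,  1.0, -1.0]])

mu_A = A @ mu              # theoretical mean of A y
S_A = A @ S @ A.T          # theoretical covariance of A y

y = rng.multivariate_normal(mu, S, size=200_000)
emp_mean = (y @ A.T).mean(axis=0)
emp_cov = np.cov(y @ A.T, rowvar=False)
```

In the Lemma, $\mathsf{A}(I_{T-t}\otimes M_2)$ plays the role of the matrix $A$ above, extracting the log prices before applying the event matrix.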
Due to equations \eqref{B12}, \eqref{B14} and \eqref{B15}, we have
$$\mathbb{\tilde{P}}^G\big[E_{i,u}\big|\mathcal{H}_t\big]=\begin{cases}
\mathbb{\tilde{P}}^G\big[\tilde{L}_{i,u}\bar{\tilde{x}}_t^c\leq \tilde{b}_{i,u}\big|\mathcal{H}_t\big]1_{A_{i,u,1}\cap B_{i,u}}, & \text{if}~~~1\leq u\leq t,\\
\mathbb{\tilde{P}}^G\big[L_{i,u} \bar{\tilde{x}}_t^c\leq b_{i,u}^\gamma\big|\mathcal{H}_t\big], & \text{if}~~~t< u \leq T
\end{cases}$$
under a generic probability measure $\mathbb{\tilde{P}}^G$. We assume that the weighted price at time $u_*$ of the $i_*$-th asset is the maximum value in the history of the weighted prices of all assets up to and including time $t$, that is, $\overline{M}_t=Z_{i_*,u_*}$. Let us denote the normal distribution function with mean $\mu$ and covariance matrix $\Sigma$ at a point $x$ by $\mathcal{N}(x,\mu,\Sigma)$. Then, according to equation \eqref{B13} and Lemma \ref{lem02}, we obtain that, given the information $\mathcal{G}_t$, the price at time $t$ of the call option on maximum is given by
\begin{eqnarray}\label{B16}
\hat{C}_{t,w}^{\overline{M}_T}(\mathcal{G}_t,K,\gamma\star)&:=&\sum_{s_{t+1}=1}^N\dots\sum_{s_T=1}^N\Bigg[\sum_{i=1}^{n_x}\sum_{u=t+1}^Tw_{i,u}x_{i,t}\exp\big\{[a_{t,u}^i]_u^T(\bar{y}_t,\bar{s}_t^c)\big\}\nonumber\\
&\times&\mathcal{N}\big(b_{i,u}^{\gamma\star},[\mu_{t,u}^{i}]_u^T(\bar{y}_t,\bar{s}_t^c,L_{i,u}^\star),\Sigma_{2.1}(\bar{s}_t^c,L_{i,u}^\star)\big)-K\sum_{i=1}^{n_x}\sum_{u=t+1}^T\exp\big\{[a_{2.1}^\alpha]_t^T(\bar{y}_t,\bar{s}_t^c)\big\}\nonumber\\
&\times&\mathcal{N}\big(b_{i,u}^{\gamma\star},[\mu_{2.1}^\alpha]_t^T(\bar{y}_t,\bar{s}_t^c,L_{i,u}^\star),\Sigma_{2.1}(\bar{s}_t^c,L_{i,u}^\star)\big)\Bigg]\prod_{k=t+1}^Tp_{s_{k-1}s_k}+W_{i_*,u_*},
\end{eqnarray}
where $L_{i,u}^\star=L_{i,u}$, $b_{i,u}^{\gamma\star}=b_{i,u}^\gamma$ and
\begin{eqnarray*}
W_{i_*,u_*}&:=&1_{B_{i_*,u_*}}\sum_{s_{t+1}=1}^N\dots\sum_{s_T=1}^N\bigg[\Big(w_{i_*,u_*}x_{i_*,u_*}-K\Big)\exp\big\{[a_{2.1}^\alpha]_t^T(\bar{y}_t,\bar{s}_t^c)\big\}\\
&\times& \mathcal{N}\big(\tilde{b}_{i_*,u_*}^\star,[\mu_{2.1}^\alpha]_t^T(\bar{y}_t,\bar{s}_t^c,\tilde{L}_{i_*,u_*}^\star),\Sigma_{2.1}(\bar{s}_t^c,\tilde{L}_{i_*,u_*}^\star)\big)\bigg]\prod_{k=t+1}^Tp_{s_{k-1}s_k}
\end{eqnarray*}
with $B_{i_*,u_*}=\{Z_{i_*,u_*}\geq K\}$, $\tilde{L}_{i_*,u_*}^\star=\tilde{L}_{i_*,u_*}$, and $\tilde{b}_{i_*,u_*}^{\star}=\tilde{b}_{i_*,u_*}$. We refer to the term $W_{i_*,u_*}$ as the tail term of the call option on maximum. Therefore, due to Lemmas 1 and 2 and the tower property of conditional expectation, the price at time $t$ of the call option on maximum with maturity $T$ and strike price $K$ is obtained by
\begin{eqnarray*}
C_{t,w}^{\overline{M}_T}(K)&=&\frac{1}{D_t}\mathbb{\tilde{E}}\Big[D_T\big(\overline{M}_T-K\big)^+\Big|\mathcal{F}_t\Big]=\mathbb{\tilde{E}}\big[\hat{C}_{t,w}^{\overline{M}_T}(\mathcal{G}_t,K,\gamma)\big|\mathcal{F}_t\big]\\
&=&\sum_{s_1=1}^N\dots\sum_{s_t=1}^N\int_{\Pi,\Sigma,\mathsf{P}}\hat{C}_{t,w}^{\overline{M}_T}(\mathcal{G}_t,K,\gamma)\tilde{f}(\Pi,\Sigma,\mathsf{P},\bar{s}_t|\mathcal{F}_t)d\Pi d\Sigma d\mathsf{P}.
\end{eqnarray*}
Because the other options can be priced in a similar manner using Lemmas 1 and 2, it is sufficient to price the options given the information $\mathcal{G}_t$. Now we list some option pricing formulas given $\mathcal{G}_t$, which originate from formula \eqref{B16} above, corresponding to the call option on maximum of the domestic asset prices.
\begin{itemize}
\item[1.] Let the weighted price at time $u_*$ of the $i_*$-th asset be the maximum value in the history of the weighted prices of all assets up to and including time $t$. Then, conditional on the information $\mathcal{G}_t$, the price at time $t$ of the call option on maximum with strike price $K$ and expiration time $T$ is given by
$$C_{t,w}^{\overline{M}_T}(\mathcal{G}_t,K):=\frac{1}{D_t}\mathbb{\tilde{E}}\Big[D_T\big(\overline{M}_{T}-K\big)^+~\Big|~\mathcal{G}_t\Big]=\hat{C}_{t,w}^{\overline{M}_T}(\mathcal{G}_t,K,\gamma),$$
where input parameters of equation \eqref{B16} are $B_{i_*,u_*}=\{Z_{i_*,u_*}\geq K\}$, $\tilde{L}_{i_*,u_*}^\star=\tilde{L}_{i_*,u_*}$, $\tilde{b}_{i_*,u_*}^\star=\tilde{b}_{i_*,u_*}$, $L_{i,u}^\star=L_{i,u}$ and $b_{i,u}^{\gamma\star}=b_{i,u}^\gamma$ with $\gamma=\overline{M}_t\vee K$.
\item[2.] Let the weighted price at time $u_*$ of the $i_*$-th asset be the maximum value in the history of the weighted prices of all assets up to and including time $t$. Then, conditional on the information $\mathcal{G}_t$, the price at time $t$ of a put option on maximum with strike price $K$ and expiration time $T$ is given by
\begin{eqnarray*}
P_{t,w}^{\overline{M}_T}(\mathcal{G}_t,K)&:=&\frac{1}{D_t}\mathbb{\tilde{E}}\Big[D_T\big(K-\overline{M}_{T}\big)^+~\Big|~\mathcal{G}_t\Big]\\
&=&
\begin{cases} \hat{C}_{t,w}^{\overline{M}_T}(\mathcal{G}_t,K,K)-\hat{C}_{t,w}^{\overline{M}_T}(\mathcal{G}_t,K,\overline{M}_t)+W_{i_*,u_*} & \text{if}~~~\overline{M}_t\leq K,\\
0 & \text{if}~~~\overline{M}_t> K,
\end{cases}
\end{eqnarray*}
where input parameters of equation \eqref{B16} are $B_{i_*,u_*}=\{Z_{i_*,u_*}\leq K\}$, $\tilde{L}_{i_*,u_*}^\star=\tilde{L}_{i_*,u_*}$, $\tilde{b}_{i_*,u_*}^\star=\tilde{b}_{i_*,u_*}$, $L_{i,u}^\star=L_{i,u}$, $b_{i,u}^{K\star}=b_{i,u}^K$ and $b_{i,u}^{\overline{M}_t\star}=b_{i,u}^{\overline{M}_t}$.
\item[3.] Let the weighted price at time $u_*$ of the $i_*$-th asset be the minimum value in the history of the weighted prices of all assets up to and including time $t$. Then, conditional on the information $\mathcal{G}_t$, the price at time $t$ of a call option on minimum with strike price $K$ and expiration time $T$ is given by
\begin{eqnarray*}
C_{t,w}^{\overline{m}_T}(\mathcal{G}_t,K)&:=&\frac{1}{D_t}\mathbb{\tilde{E}}\Big[D_T\big(\overline{m}_{T}-K\big)^+~\Big|~\mathcal{G}_t\Big]\\
&=&\begin{cases} \hat{C}_{t,w}^{\overline{M}_T}(\mathcal{G}_t,K,\overline{m}_t)-\hat{C}_{t,w}^{\overline{M}_T}(\mathcal{G}_t,K,K)+W_{i_*,u_*} & \text{if}~~~ \overline{m}_t\geq K,\\
0 & \text{if}~~~ \overline{m}_t< K,
\end{cases}
\end{eqnarray*}
where input parameters of equation \eqref{B16} are $B_{i_*,u_*}=\{Z_{i_*,u_*}\geq K\}$, $\tilde{L}_{i_*,u_*}^\star=-\tilde{L}_{i_*,u_*}$, $\tilde{b}_{i_*,u_*}^\star=-\tilde{b}_{i_*,u_*}$, $L_{i,u}^\star=-L_{i,u}$, $b_{i,u}^{K\star}=-b_{i,u}^K$ and $b_{i,u}^{\overline{m}_t\star}=-b_{i,u}^{\overline{m}_t}$.
\item[4.] Let the weighted price at time $u_*$ of the $i_*$-th asset be the minimum value in the history of the weighted prices of all assets up to and including time $t$. Then, conditional on the information $\mathcal{G}_t$, the price at time $t$ of a put option on minimum with strike price $K$ and expiration time $T$ is given by
$$P_{t,w}^{\overline{m}_T}(\mathcal{G}_t,K):=\frac{1}{D_t}\mathbb{\tilde{E}}\Big[D_T\big(K-\overline{m}_{T}\big)^+~\Big|~\mathcal{G}_t\Big]=\hat{C}_{t,w}^{\overline{M}_T}(\mathcal{G}_t,K,\gamma),$$
where input parameters of equation \eqref{B16} are $B_{i_*,u_*}=\{Z_{i_*,u_*}\leq K\}$, $\tilde{L}_{i_*,u_*}^\star=-\tilde{L}_{i_*,u_*}$, $\tilde{b}_{i_*,u_*}^\star=-\tilde{b}_{i_*,u_*}$, $L_{i,u}^\star=-L_{i,u}$, and $b_{i,u}^{\gamma\star}=-b_{i,u}^\gamma$ with $\gamma=\overline{m}_t\wedge K$.
\item[5.] According to the above formula for the call option on maximum, conditional on the information $\mathcal{G}_t$, the price at time $t$ of a lookback call option with expiration time $T$ is given by
$$L_{t,w}^C(\mathcal{G}_t):=\frac{1}{D_t}\mathbb{\tilde{E}}\Big[D_T\big(\overline{M}_{T}-M_T\big)~\Big|~\mathcal{G}_t\Big]=C_{t,w}^{\overline{M}_T}(\mathcal{G}_t,0)-C_{t,\bar{w}}^{\overline{M}_T}(\mathcal{G}_t,0),$$
where for $i=1,\dots,n_x$, $\bar{w}_{i,T}:=w_{i,T}$ and the rest of the components of the vector $\bar{w}$ are zero.
\item[6.] According to the above formula for the call option on minimum, conditional on the information $\mathcal{G}_t$, the price at time $t$ of a lookback put option with expiration time $T$ is given by
$$L_{t,w}^P(\mathcal{G}_t):=\frac{1}{D_t}\mathbb{\tilde{E}}\Big[D_T\big(m_T-\overline{m}_{T}\big)~\Big|~\mathcal{G}_t\Big]=C_{t,\bar{w}}^{\overline{m}_T}(\mathcal{G}_t,0)-C_{t,w}^{\overline{m}_T}(\mathcal{G}_t,0),$$
where for $i=1,\dots,n_x$, $\bar{w}_{i,T}:=w_{i,T}$ and the rest of the components of the vector $\bar{w}$ are zero.
\end{itemize}
It should be noted that if we know the distribution of the random vector $\text{vec}(\Pi,\Gamma,\mathsf{P},\bar{s}_t)$ conditional on $\mathcal{F}_t$, then one can price the options by Monte--Carlo simulation methods. Let us illustrate the method for the call option on maximum. To price the option by Monte--Carlo methods, first, we generate a sufficiently large number of random realizations $V_{t*}:=(\Pi_*,\Sigma_*,\mathsf{P}_*,\bar{s}_{t*})$ from $f(\Pi,\Sigma,\mathsf{P},\bar{s}_{t}|\mathcal{F}_t)$. Then we substitute them into the price formula of the call option on maximum, $C_{t,w}^{\overline{M}_T}(\mathcal{G}_t,K)$, to obtain a large number of values $C_{t,w}^{\overline{M}_T}(V_{t*})$. Finally, we average the values $C_{t,w}^{\overline{M}_T}(V_{t*})$. By the law of large numbers, the average converges to the theoretical option price $C_{t,w}^{\overline{M}_T}(K)$. This simulation method is better than one based on realizations from $f(\bar{y}_t^c,\Pi,\Gamma,\mathsf{P},\bar{s}_t|\mathcal{F}_t)$, because the former has lower variance than the latter. Monte--Carlo methods using the Gibbs sampling algorithm for Bayesian MS--VAR processes have been proposed by several authors. In particular, a Monte--Carlo method for the Bayesian MS--AR($p$) process is provided by \citeA{Albert93} and its multidimensional extension is given by \citeA{Krolzig97}.
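The Monte--Carlo scheme just described, drawing parameters and averaging analytic conditional prices, can be sketched as follows; the parameter sampler and the conditional price function below are hypothetical stand-ins for the Gibbs sampler and formula \eqref{B16}:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_parameters():
    # Placeholder for one Gibbs draw of (Pi, Sigma, P, s_bar) from the posterior
    return {"mu": rng.normal(0.0, 0.02), "sigma": abs(rng.normal(0.2, 0.05))}

def conditional_price(params, K=1.0, S0=1.0):
    # Placeholder for the analytic conditional price C(G_t, K): here a toy
    # one-period lognormal call price, approximated by inner simulation
    z = rng.standard_normal(50)
    S_T = S0 * np.exp(params["mu"] - 0.5 * params["sigma"] ** 2
                      + params["sigma"] * z)
    return float(np.maximum(S_T - K, 0.0).mean())

# Average the conditional prices over many parameter draws; by the law of
# large numbers the average converges to the option price.
draws = [conditional_price(sample_parameters()) for _ in range(2000)]
price = float(np.mean(draws))
```

Averaging the analytic conditional prices, rather than simulating the path $\bar{y}_t^c$ as well, is exactly the variance-reduction point made above: the inner expectation is done in closed form.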
Note that, using the idea in \citeA{Battulga22b}, one can obtain similar pricing formulas that correspond to rainbow options and lookback options on foreign asset prices and foreign currencies.
\section{Locally Risk-Minimizing Strategy}
\citeA{Follmer86} introduced the concept of mean--self--financing and extended the concept of a complete market to incomplete markets. If a discounted cumulative cost process is a martingale, then a portfolio plan is called mean--self--financing. In the discrete time case, \citeA{Follmer89} developed a locally risk-minimizing strategy and obtained a recurrence formula for the optimal strategy. According to \citeA{Schal94} (see also \citeA{Follmer04}), under a martingale probability measure the locally risk-minimizing strategy and the remaining conditional risk-minimizing strategy are the same. Therefore, in this section we will consider the locally risk-minimizing strategy which corresponds to the call option on maximum given in Section 4 and the life insurance products given in Section 5. In the insurance industry, for continuous time unit--linked term life and pure endowment insurances with guarantee, locally risk-minimizing strategies are obtained by \citeA{Moller98}.
To simplify notations we define: for $t=1,\dots,T$, $\overline{X}_t:=(\overline{X}_{1,t},\dots,\overline{X}_{n_x,t})^T$ is a discounted price vector at time $t$ and $\Delta \overline{X}_t:=\overline{X}_t-\overline{X}_{t-1}$ is a difference vector at time $t$ of the price vectors, where $\overline{X}_{i,u}:=D_ux_{i,u}$ is the discounted price at time $u$ of the $i$-th asset. Note that $\Delta \overline{X}_t$ is a martingale difference with respect to the filtration $\{\mathcal{H}_t\}_{t=1}^T$ and the risk-neutral measure $\mathbb{\tilde{P}}$. Following the ideas in \citeA{Follmer04} and \citeA{Follmer89}, one can obtain that, for the filtration $\{\mathcal{F}_t\}_{t=1}^T$ and a generic discounted contingent claim $\overline{H}_T$, under the risk-neutral measure $\mathbb{\tilde{P}}$ the locally risk-minimizing strategy $(h^0, h)$ is given by the following equations:
\begin{equation}\label{B23}
h_{t+1}=\Omega_{t+1}^{-1}\Lambda_{t+1}~~~\text{and}~~~h_{t+1}^0=\overline{V}_{t+1}-h_{t+1}^T\overline{X}_{t+1}
\end{equation}
for $t=0,\dots,T-1$, where $\Omega_{t+1}:=\mathbb{\tilde{E}}\big[\Delta\overline{X}_{t+1}\Delta\overline{X}_{t+1}^T\big|\mathcal{F}_t\big]$, $\Lambda_{t+1}:=\widetilde{\text{Cov}}\big[\Delta\overline{X}_{t+1},\overline{H}_T\big|\mathcal{F}_t\big]$ and $\overline{V}_{t+1}:=\mathbb{\tilde{E}}[\overline{H}_T|\mathcal{F}_{t+1}]$ for a square integrable random variable $\overline{H}_T$.
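Equation \eqref{B23} reduces to a linear solve at each rebalancing date; a numerical sketch with hypothetical conditional moments:

```python
import numpy as np

# Hypothetical conditional moments at one rebalancing date
Omega = np.array([[0.040, 0.012],    # E[dX dX^T | F_t]
                  [0.012, 0.090]])
Lam = np.array([0.020, 0.036])       # Cov[dX, H_T | F_t]

h = np.linalg.solve(Omega, Lam)      # risky positions: h_{t+1} = Omega^{-1} Lambda

V = 5.0                              # discounted claim value at t+1 (hypothetical)
X = np.array([1.00, 2.00])           # discounted asset prices at t+1 (hypothetical)
h0 = V - h @ X                       # cash position: h^0_{t+1} = V - h^T X
```

The risky positions are the coefficients of a conditional regression of the claim on the price increments, and the cash position makes the portfolio value match the claim value.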
It should be noted that, since all the options originate from the call option on maximum of several asset prices, it will be sufficient to consider the locally risk--minimizing strategy that corresponds to the call option on maximum. Because the difference of the discounted price process, $\Delta\overline{X}_t$, is a martingale difference with respect to the risk-neutral probability measure $\mathbb{\tilde{P}}$ and the filtration $\{\mathcal{H}_t\}_{t=1}^T$, it follows that
\begin{equation}\label{B25}
\Lambda_{t+1}=\mathbb{\tilde{E}}\big[\overline{H}_T\overline{X}_{t+1}\big|\mathcal{F}_t\big]-\overline{V}_t\overline{X}_t.
\end{equation}
For the product of the discounted price at time $u$ of the $i$-th asset and the discounted price at time $v$ of the $j$-th asset, it can be shown that for $i,j=1,\dots,n_x$ and $t\leq u,v$,
\begin{equation}\label{B26}
\mathbb{\tilde{E}}\big[\overline{X}_{i,u}\overline{X}_{j,v}|\mathcal{H}_t\big]=\overline{X}_{i,t}\overline{X}_{j,t}\exp\big\{\beta_{t,u}^{i T}\bar{\Sigma}_2(\bar{s}_t^c)\beta_{t,v}^{j}\big\}=\overline{X}_{i,t}\overline{X}_{j,t}\exp\bigg\{\sum_{m=t+1}^{u\wedge v}\sigma_{ij,m}(s_{m})\bigg\},
\end{equation}
where $\sigma_{ij,m}(s_{m})$ is the $(i,j)$-th element of the random covariance matrix $\Sigma_{m}(s_{m})$ at regime $s_m$. Therefore, as $\overline{X}_t$ is a martingale with respect to the filtration $\{\mathcal{H}_t\}_{t=1}^T$ and the risk-neutral measure $\mathbb{\tilde{P}}$, equation \eqref{B26} allows us to conclude that for $i,j=1,\dots,n_x$, the $(i,j)$-th element of the random matrix $\Omega_{t+1}$ is given by
\begin{equation}\label{B27}
\omega_{ij,t+1}=\mathbb{\tilde{E}}\big[\Delta \overline{X}_{i,t+1}\Delta \overline{X}_{j,t+1}|\mathcal{F}_t\big]=\overline{X}_{i,t}\overline{X}_{j,t}\bigg(\mathbb{\tilde{E}}\bigg[\sum_{s_{t+1}=1}^N\exp\big\{\sigma_{ij,t+1}(s_{t+1})\big\}p_{s_{t+1}}\bigg|\mathcal{F}_t\bigg]-1\bigg).
\end{equation}
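The one-step moment in equation \eqref{B27} comes from the lognormal martingale step in equation \eqref{B26}; a single-regime simulation check with a hypothetical covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(3)

# Single-regime check: E[dX_i dX_j] = X_i X_j (exp(sigma_ij) - 1)
Sigma = np.array([[0.04, 0.015],
                  [0.015, 0.09]])
X_t = np.array([1.0, 2.0])

eta = rng.multivariate_normal(np.zeros(2), Sigma, size=400_000)
X_next = X_t * np.exp(eta - 0.5 * np.diag(Sigma))   # martingale step
dX = X_next - X_t

emp = (dX[:, 0] * dX[:, 1]).mean()                  # empirical cross-moment
theo = X_t[0] * X_t[1] * (np.exp(Sigma[0, 1]) - 1.0)
```

In the regime-switching case this single factor is replaced by the regime-probability-weighted average inside equation \eqref{B27}.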
Due to equation \eqref{B26}, as $\overline{X}_{i,t},\overline{X}_{j,t}>0$, one can define the following new probability measure:
$$\tilde{\mathbb{P}}_{t,u,v}^{i,j}[A|\mathcal{H}_t]:=\frac{\exp\big\{-\beta_{t,u}^{iT}\bar{\Sigma}_2(\bar{s}_t^c)\beta_{t,v}^{j}\big\}}{\overline{X}_{i,t}\overline{X}_{j,t}}\int_A\overline{X}_{i,u}\overline{X}_{j,v}d\mathbb{\tilde{P}}[\omega |\mathcal{H}_t], ~~~\text{for all}~A\in \mathcal{H}_T.$$
It can be shown that the conditional distribution of the random vector $\bar{y}_t^c$ given $\mathcal{H}_t$ is
$$\bar{y}_t^c~|~\mathcal{H}_t\sim \mathcal{N}\Big(\mu_{t,u,v}^{i,j}(\bar{y}_t,\bar{s}_t^c),\Sigma_{2.1}(\bar{s}_t^c)\Big)$$
under the probability measure $\tilde{\mathbb{P}}_{t,u,v}^{i,j}$, where $\mu_{t,u,v}^{i,j}(\bar{y}_t,\bar{s}_t^c):=\mu_{2.1}^\alpha(\bar{y}_t,\bar{s}_t^c)+\Psi_{22}^{-1}\bar{\Sigma}_2(\bar{s}_t^c)(\beta_{t,u}^{i}+\beta_{t,v}^{j})$. In order to obtain the locally risk-minimizing strategy that corresponds to the call option on maximum, we need to calculate conditional expectations of the forms $\mathbb{\tilde{E}}[D_T\overline{X}_{j,v} 1_A|\mathcal{H}_t]$, $\mathbb{\tilde{E}}[\overline{X}_{i,u}\overline{X}_{j,v} 1_A|\mathcal{H}_t]$ and $\mathbb{\tilde{E}}[D_T/D_u\overline{X}_{i,u}\overline{X}_{j,v} 1_A|\mathcal{H}_t]$ for a generic set $A\in \mathcal{H}_T$. It follows from the probability measures introduced above and Lemma \ref{lem01} that for $t\leq u,v$,
\begin{equation}\label{B28}
\mathbb{\tilde{E}}[D_u\overline{X}_{j,v} 1_A|\mathcal{H}_t]=D_t\overline{X}_{j,t}\exp\big\{[a_{t,v}^{j}]_t^u(\bar{y}_t,\bar{s}_t^c)\big\}\mathcal{N}\big(A,[\mu_{t,v}^{j}]_t^u(\bar{y}_t,\bar{s}_t^c),\Sigma_{2.1}(\bar{s}_t^c)\big),
\end{equation}
and
\begin{eqnarray}\label{B30}
\mathbb{\tilde{E}}[D_T/D_u\overline{X}_{i,u}\overline{X}_{j,v} 1_A|\mathcal{H}_t]&=&\overline{X}_{i,t}\overline{X}_{j,t}\exp\bigg\{\sum_{m=t+1}^{u\wedge v}\sigma_{ij,m}(s_{m})+[a_{t,u,v}^{i,j}]_u^T(\bar{y}_t,\bar{s}_t^c)\bigg\}\nonumber\\
&\times&\mathcal{N}\big(A,[\mu_{t,u,v}^{i,j}]_u^T(\bar{y}_t,\bar{s}_t^c),\Sigma_{2.1}(\bar{s}_t^c)\big).
\end{eqnarray}
In terms of the discounted price process $\overline{X}_{i,t}$, the discounted contingent claim of the call option on maximum, $\overline{H}_T^1$, which is given by equation \eqref{B11}, can be represented by
$$\overline{H}_T^1=\sum_{i=1}^{n_x}\sum_{u=1}^tD_T\big(w_{i,u}x_{i,u}-K\big) 1_{E_{i,u}}+\sum_{i=1}^{n_x}\sum_{u=t+1}^TD_T/D_uw_{i,u}\overline{X}_{i,u} 1_{E_{i,u}}-K\sum_{i=1}^{n_x}\sum_{u=t+1}^TD_T1_{E_{i,u}}.$$
To obtain $\Lambda_{t+1}$ corresponding to the call option on maximum, we define $R_{j,t+1}(\mathcal{G}_t):=\mathbb{\tilde{E}}\big[\overline{H}_T\overline{X}_{j,t+1}|\mathcal{G}_t\big]$. Then, equations \eqref{B28}-\eqref{B30} and Lemma \ref{lem07} allow us to conclude that this expectation is given by the following equation:
\begin{eqnarray}\label{B31}
R_{j,t+1}(\mathcal{G}_t)&=&\sum_{s_{t+1}=1}^N\dots \sum_{s_T=1}^N\bigg[\big(w_{i_*,u_*}x_{i_*,u_*}-K\big)D_t \overline{X}_{j,t}\exp\big\{[a_{t,t+1}^j]_t^T(\bar{y}_t,\bar{s}_t^c)\big\}\nonumber\\
&\times&\mathcal{N}\big(\tilde{b}_{i_*,u_*},[\mu_{t,t+1}^j]_t^T(\bar{y}_t,\bar{s}_t^c,\tilde{L}_{i_*,u_*}),\Sigma_{2.1}(\bar{s}_t^c,\tilde{L}_{i_*,u_*})\big)1_{B_{i_*,u_*}}\nonumber\\
&+&\sum_{i=1}^{n_x}\sum_{u=t+1}^Tw_{i,u}\overline{X}_{i,t}\overline{X}_{j,t}\exp\big\{\sigma_{ij,t+1}(s_{t+1})+[a_{t,u,t+1}^{i,j}]_u^T(\bar{y}_t,\bar{s}_t^c)\big\}\nonumber\\
&\times&\mathcal{N}\big(b_{i,u}^\gamma,[\mu_{t,u,t+1}^{i,j}]_u^T(\bar{y}_t,\bar{s}_t^c,L_{i,u}),\Sigma_{2.1}(\bar{s}_t^c,L_{i,u})\big)\nonumber\\
&-&K\sum_{i=1}^{n_x}\sum_{u=t+1}^TD_{t}\overline{X}_{j,t}\exp\big\{[a_{t,t+1}^j]_t^T(\bar{y}_t,\bar{s}_t^c)\big\}\\
&\times&\mathcal{N}\big(b_{i,u}^\gamma,[\mu_{t,t+1}^j]_t^T(\bar{y}_t,\bar{s}_t^c,L_{i,u}),\Sigma_{2.1}(\bar{s}_t^c,L_{i,u})\big)\bigg]\prod_{m=t+1}^Tp_{s_{m-1}s_m}.\nonumber
\end{eqnarray}
To simplify notation, let us introduce the vector $R_{t+1}(\mathcal{G}_t):=(R_{1,t+1}(\mathcal{G}_t),\dots,R_{n_x,t+1}(\mathcal{G}_t))^T$. Therefore, by equations \eqref{B25} and \eqref{B31}, for the call option on maximum we obtain
\begin{equation}\label{B34}
\Lambda_{t+1}=\mathbb{\tilde{E}}\big[\overline{H}_T\overline{X}_{t+1}|\mathcal{F}_t\big]-\mathbb{\tilde{E}}\big[\overline{H}_T|\mathcal{F}_t\big]\overline{X}_{t}=\mathbb{\tilde{E}}\big[R_{t+1}(\mathcal{G}_t)|\mathcal{F}_t\big]-\overline{C}_{t,w}^{\overline{M}_T}(K)\overline{X}_t,
\end{equation}
where $\overline{C}_{t,w}^{\overline{M}_T}(K):=D_tC_{t,w}^{\overline{M}_T}(K)$. As a result, if we substitute equations \eqref{B27} and \eqref{B34} into equation \eqref{B23}, we can obtain the locally risk--minimizing strategy for the call option on maximum of several asset prices.
\section{Conclusion}
Economic variables play an important role in any economic model, and sudden and dramatic changes occur in financial markets and the economy. Therefore, in this paper, we introduced the Bayesian MS--VAR process and obtained pricing and hedging formulas for the rainbow options and lookback options on the maximum and minimum of several asset prices, using the risk--neutral valuation method and the locally risk--minimizing strategy.
It should be noted that the Bayesian MS--VAR process encompasses the simple VAR process, the vector error correction model (VECM), the BVAR process, and the MS--VAR process. To estimate our model, which is based on the Bayesian MS--VAR process, one can, as mentioned before, use Monte--Carlo methods, see \citeA{Krolzig97}. For the simple MS--VAR process, maximum likelihood methods are provided by \citeA{Hamilton89,Hamilton90,Hamilton93,Hamilton94} and \citeA{Krolzig97}, and for large BVAR processes, we refer to \citeA{Banbura10}. To summarize, the main advantages of the paper are:
\begin{itemize}
\item because we consider a VAR process, the spot rate is not constant and is explained by its own lagged values and those of the other variables,
\item it introduces economic variables, regime--switching, and heteroscedasticity into option valuation,
\item it introduces a Bayesian method for the valuation of the options, so the model overcomes over--parametrization,
\item valuation and hedging of the options are not complicated,
\item and the model contains the simple VAR, VECM, BVAR, and MS--VAR processes as special cases.
\end{itemize}
\bibliographystyle{apacite}
\section{Introduction}
H.E.S.S. observations of the inner Galactic plane in the [$270^{\circ}$, $30^{\circ}$]
longitude range have revealed more than two dozen new VHE
sources, consisting of shell-type SNRs, pulsar wind nebulae,
X-ray binary systems, a putative young star cluster, etc., and as-yet
unidentified objects (see e.g. \cite{HESSScanII} and \cite{HESSSurveyICRC07} in
these proceedings for a summary).
The extended H.E.S.S. survey in the
[$30^{\circ}$-$60^{\circ}$] longitude range performed between 2005 and
2007 overlaps with regions covered by the MILAGRO sky survey at longitudes
greater than $30^{\circ}$.
The latter experiment has recently reported \cite{MILAGRO} three
low-latitude sources including, MGRO~J1908+06,
detected after seven years of operation (2358 days of data) at
8.3$\sigma$ (pre-trials) confidence level.
MGRO~J1908+06, whose extension remains unknown but is bounded by a
maximum diameter of 2.6$^{\circ}$, is located near the
galactic longitude $\sim 40^{\circ}$ and hence is covered by the
H.E.S.S. galactic plane survey.
A new H.E.S.S. source, HESS~J1908+063, which coincides
with MGRO~J1908+06, is presented here. Its position, size and spectrum
are measured and compared to the MILAGRO source.
Possible counterparts at other wavelengths are discussed in the light
of the H.E.S.S. measurements.
\section{Observations, Analysis \& Results}
\label{results}
Results presented in this section should be
considered as preliminary.
Observations around HESS~J1908+063 were first performed during June 2005
and then from May to September 2006
as part of the extension of the Galactic plane survey in the range of
galactic longitude and latitude of 30$^{\circ}$ $<$~l~$<$60$^{\circ}$ and
$-3^{\circ}$~$<$~b~$<$3$^{\circ}$, respectively. Followup observations
were made during May and June 2007.
In the available data-set the source is offset from the field of
view center, at different angular distances with an
average offset of 1.4$^{\circ}$. Observations for which the
source is offset by more than 2.5$^{\circ}$ were not considered for the
analysis. The total dead-time corrected and quality selected data-set
amounts to 14.9~hours with the zenith angle ranging from 30
to 46$^{\circ}$ and with a mean energy threshold of $\sim$300~GeV.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.48\textwidth,angle=0,clip]{skymap_MGRO1908NexNsigV2.eps}
\end{center}
\caption{ Smoothed excess map
($\sigma=0.5^{\circ}$) of the 1.5$^{\circ}\times1.5^{\circ}$ field of
view around the position of HESS~J1908+063. The contours show
the pre-trials significance levels for 5, 6 and 7$\sigma$, while the white
circle shows the $0.5^{\circ}$ integration radius used for the
spectrum derivation. }
\label{skymap}
\end{figure}
After calibration, the standard H.E.S.S. event reconstruction scheme
was applied to the data \cite{HESSCrab}. In order to
reject the background of cosmic-ray showers, $\gamma$-ray like
events were selected using cuts on image shape scaled with their
expected values obtained from Monte Carlo simulations. As described in
\cite{HESSKooka}, two different sets of cuts, depending on the image
size, were applied. Cuts optimized for a hard spectrum and a weak
source with a rather tight cut on the image size of 200 p.e. (photoelectrons),
which achieve a maximum signal-to-noise ratio, were applied to
study the morphology of the source, while for the spectral analysis,
the image size cut is loosened to 80 p.e. in order to cover the
maximum energy range. The background estimation
(described in \cite{HESSBack}) for each position in the
two-dimensional sky map is computed from a ring with an (a priori)
increased radius of $1.0^{\circ}$, as compared to the standard radius
of $0.5^{\circ}$, in order to deal with the large source diameter.
This radius yields an area for the background estimation four times larger
than the considered on-source region. Events coming from known sources were also
excluded to avoid contamination of the background. For the spectrum analysis, the
background is evaluated from positions in the field of view with the
same radius and same offset from the pointing direction as the source region.
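The significance of an on/off counting measurement of this kind is conventionally evaluated with the Li \& Ma (1983) formula, where $\alpha$ is the ratio of on-source to background exposure (here $\alpha=1/4$, since the ring area is four times the on-source area). A sketch with purely illustrative counts, not the actual H.E.S.S. event numbers:

```python
import math

def li_ma_significance(n_on, n_off, alpha):
    """Li & Ma (1983, Eq. 17) significance of an on/off measurement;
    alpha is the on/off exposure (area) ratio."""
    term_on = n_on * math.log((1 + alpha) / alpha * n_on / (n_on + n_off))
    term_off = n_off * math.log((1 + alpha) * n_off / (n_on + n_off))
    # sign follows the sign of the excess n_on - alpha * n_off
    return math.copysign(math.sqrt(2.0 * (term_on + term_off)),
                         n_on - alpha * n_off)

# ring four times the on-source area -> alpha = 0.25 (illustrative counts)
sigma = li_ma_significance(n_on=130, n_off=400, alpha=0.25)
```
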
Fig.~\ref{skymap} shows the Gaussian-smoothed excess map for a size
cut on the images above 200 p.e. The
colored contours indicate the H.E.S.S. pre-trials significance contour
levels for 5, 6 and 7$\sigma$. HESS~J1908+063 was first discovered as a
hot-spot within the standard survey analysis scheme \cite{HESSScanII} and was
subsequently confirmed at 7.7 $\sigma$ (pre-trials).
A conservative estimate of the trials yields a post-trials
significance of 5.7 $\sigma$.
To evaluate the extension and the position of the source,
the sky-map was fitted to a simple symmetrical two-dimensional Gaussian
function, convolved with the instrument PSF (point spread function).
The best-fit position lies at
$l=40.45^{\circ}\pm0.06_{\rm stat}^{\circ}\pm0.06_{\rm sys}^{\circ}$ and
$b=-0.80^{\circ}\pm0.05_{\rm stat}^{\circ}\pm0.06_{\rm sys}^{\circ}$, while the derived intrinsic
extension is $\sigma_{src}=(0.21^{+0.07}_{-0.05}{}_{\rm stat})^{\circ}$. As the shape of the source
seems to depart from a symmetrical Gaussian, these values should be taken
as first approximations.
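Since the convolution of two Gaussians adds widths in quadrature, the intrinsic size follows from the fitted width and the instrument PSF as $\sigma_{src}=\sqrt{\sigma_{fit}^2-\sigma_{PSF}^2}$. A sketch, with an assumed PSF width of $0.08^{\circ}$ (an illustrative value, not the exact PSF used in the analysis):

```python
import math

def intrinsic_sigma(sigma_fit, sigma_psf):
    """Intrinsic Gaussian width after removing a Gaussian PSF in
    quadrature (the convolution of two Gaussians adds widths
    quadratically)."""
    if sigma_fit <= sigma_psf:
        return 0.0  # consistent with a point-like source
    return math.sqrt(sigma_fit**2 - sigma_psf**2)

# illustrative numbers: fitted width 0.22 deg, assumed PSF width 0.08 deg
src = intrinsic_sigma(0.22, 0.08)   # ~0.20 deg, close to the quoted 0.21 deg
```
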
\begin{figure}[!b]
\begin{center}
\includegraphics[width=0.51\textwidth]{spectrum_MGRO1908_20TeV.eps}
\end{center}
\caption{Differential energy spectrum measured above 300~GeV for
HESS~J1908+063. The shaded area shows the 1 $\sigma$
confidence region for the fit parameters.
The differential flux of MGRO~J1908+06 at 20 TeV is shown in red.
Fit residuals are given in the bottom panel.}
\label{spectrum}
\end{figure}
The differential energy spectrum was computed within an integration
radius of $0.5^{\circ}$ (corresponding to the FWHM of the source size
and shown as a white circle in Fig.~\ref{skymap})
centred on the best-fit position
by means of a forward-folding maximum
likelihood fit \cite{CATSpectrum}. The spectrum is well fitted with a simple
power-law function (Fig.~\ref{spectrum}) with a hard photon index of
$2.08\pm0.10_{\rm stat}\pm 0.2_{\rm sys}$
and a differential flux at 1~TeV of ($3.23 \pm
0.45_{\rm stat} \pm 0.65_{\rm sys})\times
10^{-12}$ cm$^{-2}$~s$^{-1}$.
The integrated flux above 1~TeV corresponds to 14$\%$
of the Crab Nebula flux above that energy.
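For a pure power law $dN/dE=\Phi_0(E/1\,{\rm TeV})^{-\Gamma}$, the integral flux above $E_0$ is $\Phi_0\,E_0^{1-\Gamma}/(\Gamma-1)$ in TeV units. The quoted Crab fraction can be checked with the fitted parameters, taking a Crab integral flux above 1~TeV of $\sim2.26\times10^{-11}\,{\rm cm}^{-2}{\rm s}^{-1}$ as an assumed reference value:

```python
def integral_flux(phi0, gamma, e_min=1.0, e_ref=1.0):
    """Integral flux above e_min (TeV) for dN/dE = phi0*(E/e_ref)^-gamma,
    valid for gamma > 1."""
    return phi0 * e_ref / (gamma - 1.0) * (e_min / e_ref) ** (1.0 - gamma)

phi0 = 3.23e-12   # cm^-2 s^-1 TeV^-1 at 1 TeV (measured)
gamma = 2.08      # measured photon index
f_hess = integral_flux(phi0, gamma)   # ~3.0e-12 cm^-2 s^-1 above 1 TeV
crab = 2.26e-11   # assumed Crab integral flux above 1 TeV
frac = f_hess / crab                  # ~0.13, consistent with the quoted 14%
```
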
\section{Comparison with MGRO1908+06 \& Search for Counterparts}
\label{comparison}
Fig.~\ref{skymapmwl} shows the $1.5^{\circ}\times1.5^{\circ}$ field of
view around the position of HESS~J1908+063 together with
sources at other wavelengths including MGRO~J1908+06. The latter
source was discovered by the MILAGRO collaboration \cite{MILAGRO}
after seven years of operation (2358 days of data) at the galactic
longitude and latitude of
$l=(40.4^{\circ}~\pm~0.1^{\circ}_{\rm stat}~\pm~0.3^{\circ}_{\rm sys}$) and
$b=(-1.0^{\circ}~\pm~0.1^{\circ}_{\rm stat}~\pm 0.3^{\circ}_{\rm sys}$),
respectively. The differential flux, at the median energy of 20~TeV, and
assuming a spectral index of -2.3, is at a level of
(8.8$\pm$2.4$_{\rm stat}\pm$2.6$_{\rm sys})\times
10^{-15}~{\rm TeV}^{-1}{\rm cm}^{-2}{\rm s}^{-1}$. MGRO~J1908+06 is reported to be
compatible with both a point source and an extended source up to a diameter of
2.6$^{\circ}$.
As clearly seen in Fig.~\ref{skymapmwl}, the positions of the two VHE sources
are fully compatible within errors. There is also quite good
agreement between the differential flux at
20~TeV of MGRO~J1908+06 and the spectrum measured by H.E.S.S., as shown in
Fig.~\ref{spectrum}. Given the larger integration radius of
$1.3^{\circ}$ for the MILAGRO source as compared to the $0.5^{\circ}$ radius
for HESS~J1908+063, the flux agreement implies the absence of any other
significant contribution to the MILAGRO flux: the two sources can
consequently be identified with each other.
The better determination of the position of HESS~J1908+063 and the
measurement of its size and spectrum allow one to search for counterparts
with stronger constraints.
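The flux agreement can be checked directly by extrapolating the H.E.S.S. power-law fit to the MILAGRO median energy of 20~TeV; the numbers below use only the values quoted in this text:

```python
phi0, gamma = 3.23e-12, 2.08          # H.E.S.S. fit at 1 TeV (TeV^-1 cm^-2 s^-1)
hess_20tev = phi0 * 20.0 ** (-gamma)  # extrapolated differential flux at 20 TeV
milagro, err = 8.8e-15, 2.4e-15       # MILAGRO value and statistical error
n_sigma = abs(hess_20tev - milagro) / err   # ~1 sigma: compatible
```

The extrapolation gives roughly $6.4\times10^{-15}\,{\rm TeV}^{-1}{\rm cm}^{-2}{\rm s}^{-1}$, within about one standard deviation of the MILAGRO point.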
\begin{figure}[ht]
\begin{center}
\includegraphics*[width=0.5\textwidth,angle=0,clip]{skymap_MGRO1908MultiV2.eps}
\end{center}
\caption{ Multi-wavelength view of the 1.5$^{\circ}\times1.5^{\circ}$ field of
view around the position of HESS~J1908+063. The
dotted black line shows the MILAGRO significance contours for 5 (inner) and
8$\sigma$ (outer contour). The position of the EGRET GeV source
GRO~J1908+0556 is marked with a green cross as well as the
1$\sigma$ error in the position. The 3EG 1903+0550 contours
corresponding to 99, 95, 68 and 50$\%$ confidence levels are shown in
green. The red circle marks the size and position of the radio-bright SNR
G040.5-00.5. Contours in blue show the $^{13}$CO molecular cloud in the
velocity range between (45,65) km/s.}
\label{skymapmwl}
\end{figure}
At radio wavelengths, SNR~G40.5-0.5 \cite{Green} at
an estimated distance of 5.3~kpc overlaps with HESS~J1908+063.
At EGRET energies, 3EG~J1903+0550, shown in green contours, lies close
to the SNR and has been suggested as possibly associated with it \cite{Sturner95}.
However, G40.5-0.5 is not in exact coincidence with the HESS~J1908+063
position, and 3EG~J1903+0550 only marginally overlaps with the
latter. HEGRA observations of this region of
the sky \cite{HEGRAul} yielded an upper limit at 0.7~TeV at the SNR
position of 4.8$\%$ of the Crab Nebula flux.
As this limit only applies to a point-like source, it is not in
contradiction with the measurements reported here.
If the SNR is associated with the VHE source, the fact that the 22
arc-min size of the shell is smaller than the FWHM of HESS~J1908+063
would contrast with previously discovered HESS sources identified
with shell-type VHE emitters, such as RX~J1713.7-3946
\cite{HESSG347} or RCW 86 reported at this conference
\cite{RCW86ICRC07}. The contribution of nearby unresolved sources or
interactions of accelerated cosmic rays with molecular matter in the vicinity of
the source could explain a larger size. However, for the latter
case, the position of the nearby $^{12}\rm CO$ cloud \cite{co} and
the $^{13}\rm CO$ contours (shown in blue in Fig.~\ref{skymapmwl}) do not favour
this scenario.
An analysis of the highest energy photons ($>$1~GeV) observed by EGRET
\cite{olaf97,lamb97} from this region shows a nearby, as-yet unidentified
source, GRO~J1908+0556/GEV J1907+0557. The positions of the two GeV
determinations are compatible within errors. GRO~J1908+0556, shown as a
green circle on Fig.~\ref{skymapmwl}, lies within a distance of less than
two times the EGRET 68\% position measurement error to
HESS~J1908+063. A simple extrapolation of the H.E.S.S. spectrum to
lower energies leads to a lower flux than that reported
for the EGRET source ($6.33\times10^{-8}~{\rm cm}^{-2}{\rm s}^{-1}$). However,
given the large PSF of EGRET even at GeV energies, other unresolved
sources can contribute to the flux measurement of GRO~J1908+0556. The
association of the HESS and MILAGRO sources with the GeV source is then
likely, although a chance coincidence is not
excluded.
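The statement that the extrapolated flux falls below the EGRET measurement can be verified by integrating the fitted power law above 1~GeV (a naive extrapolation over four decades in energy, so indicative only):

```python
phi0, gamma = 3.23e-12, 2.08   # H.E.S.S. fit at 1 TeV (TeV^-1 cm^-2 s^-1)
egret = 6.33e-8                # GRO J1908+0556 integral flux >1 GeV (cm^-2 s^-1)

# integral flux above 1 GeV (= 1e-3 TeV) for the extrapolated power law
f_gt_1gev = phi0 / (gamma - 1.0) * 1e-3 ** (1.0 - gamma)
ratio = f_gt_1gev / egret      # ~0.08: extrapolation falls well below EGRET
```
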
\section*{Summary}
In summary, a new source, HESS J1908+063 is reported above 300 GeV
at the level of 14$\%$ of the Crab Nebula flux and a post-trials
significance of 5.7~$\sigma$. The H.E.S.S.
source is extended, with a FWHM size of
0.5$^{\circ}$, and shows a hard spectrum with an index of 2.08$\pm$0.10.
This detection confirms for the first time one of the low-latitude sources
reported by the MILAGRO collaboration, MGRO~J1908+06.
A connection to the EGRET GeV source GRO~J1908+0556/GEV J1907+0557 at
lower energies remains possible. The association with SNR~G40.5-0.5 is not
excluded but the larger size of the TeV emission should then find an
explanation in terms of either contribution of unresolved sources or
interactions of ultra-relativistic particles with molecular matter in
the vicinity of the SNR. Deeper observations of this region with
Cherenkov telescopes and GLAST data would help the interpretation
of the detected VHE emission.
\section*{Acknowledgments}
The support of the Namibia authorities and of the University of Namibia
in facilitating the construction and operation of H.E.S.S. is gratefully
acknowledged, as is the support by the German Ministry for Education and
Research (BMBF), the Max Planck Society, the French Ministry for Research,
the CNRS-IN2P3 and the Astroparticle Interdisciplinary Programme of the
CNRS, the U.K. Particle Physics and Astronomy Research Council (PPARC),
the IPNP of the Charles University, the Polish Ministry of Science and
Higher Education, the South African Department of
Science and Technology and National Research Foundation, and by the
University of Namibia. We appreciate the excellent work of the technical
support staff in Berlin, Durham, Hamburg, Heidelberg, Palaiseau, Paris,
Saclay, and in Namibia in the construction and operation of the
equipment.
\section{Introduction}
The recent neutrino oscillation experiments have been revealing the
detailed structure of leptonic flavors~\cite{review,analysis}. The
neutrino properties, in particular the tiny mass scale, are among the
most important experimental clues to the new physics beyond the
standard model~(SM)\@. The seesaw mechanism naturally leads to small
neutrino masses by the integration of new heavy particles which
interact with the ordinary neutrinos. The introduction of heavy
right-handed neutrinos~\cite{seesaw} implies the intermediate mass
scale of such states to have light Majorana masses of order eV, and
these heavy states are almost decoupled in the low-energy effective
theory. Alternatively, TeV-scale right-handed neutrinos could also be
possible, which in turn implies tiny couplings to the SM
sector, so that their signals cannot be observed in near-future TeV-scale
particle experiments such as the Large Hadron Collider (LHC).
The SM neutrinos have tiny masses due to a slight violation of the
lepton number. This fact implies that the events with same-sign
di-lepton final states~\cite{dileptons} are too rare to be
observed. In this letter, we focus on the lepton number preserving
processes, in particular, the tri-lepton signals with large missing
transverse energy, $pp\to\ell^\pm\ell^\mp\ell^\pm\nu(\bar\nu)$. These
processes should be detected rather effectively at the LHC because only
a small fraction of SM processes contributes to the background against
the signals.
As a simple example of observable seesaw theory, we consider a
five-dimensional extension of the SM with right-handed neutrinos,
where all SM fields are confined in the four-dimensional world, while
right-handed neutrinos propagate in the whole extra-dimensional
space~\cite{DDG}-\cite{neuExD2}. We will discuss an explicit framework
which provides the situation that TeV-scale right-handed neutrinos
generate tiny scale of seesaw-induced neutrino masses and
simultaneously have sizable interactions to the SM leptons and gauge
bosons. The scenario does not rely on any particular generation
structure of mass matrices and is available for one-generation
case. For such TeV-scale particles with large couplings to the SM
sector, the LHC experiment generally has the potential to find the
signals of extra dimensions and the origin of small neutrino masses.
\bigskip
\section{Observable Seesaw}
Let us consider a five-dimensional theory where the extra space is
compactified on the $S^1/Z_2$ orbifold with the radius $R$. The
SM fields are confined on the four-dimensional boundary
at $x^5=0$. Besides gravity, only SM gauge singlets can propagate
in the bulk, so as not to violate charge
conservation~\cite{DDG,neuExD}. The gauge-singlet Dirac
fermions ${\cal N}_i$ ($i=1,2,3$) are introduced in the bulk which
contain the right-handed neutrinos and their chiral partners. The
Lagrangian up to the quadratic order of spinor fields is given by
\begin{eqnarray}
{\cal L} \;=\; i\overline{\cal N}D\hspace{-2.5mm}/\,{\cal N}
-\frac{1}{2}\big[\,\overline{{\cal N}^c}
(M_v+M_a\gamma_5){\cal N}+\text{h.c.}\big].
\end{eqnarray}
The conjugated spinor is defined
as ${\cal N}^c=\gamma_3\gamma_1\overline{\cal N}{}^{\rm t}$ such that
it is Lorentz covariant in five dimensions. It is straightforward to
write a bulk Dirac mass for ${\cal N}_i$ if one introduces a $Z_2$-odd
function which originates from some field expectation value. The bulk
mass parameters $M_v$ and $M_a$ are $Z_2$ parity even and could depend
on the extra dimensional coordinate $x^5$ which comes from the
delta-function dependence (resulting in localized mass terms) and/or
the background geometry such as the warp factor in AdS$_5$. We also
introduce the mass terms between bulk and boundary fields:
\begin{eqnarray}
{\cal L}_m \;=\; -\big(\overline{\cal N} mL+
\overline{{\cal N}^c}m^cL\big)\delta(x^5) +{\rm h.c.},
\label{boundary}
\end{eqnarray}
where $m$ and $m^c$ denote the mass parameters after the electroweak
symmetry breaking. The boundary spinors $L_i$ ($i=1,2,3$) contain the
left-handed neutrinos $\nu_i$, namely, given in the 4-component
notation $L_i=\genfrac{(}{)}{0pt}{1}{0}{\,\nu_i\,}$. The $Z_2$ parity
implies that either component in a Dirac fermion ${\cal N}$ vanishes at
the boundary ($x^5=0$) and therefore either of $m$ and $m^c$ becomes
irrelevant.\footnote{The exception is the generation-dependent $Z_2$
parity assignment on bulk fermions~\cite{HWY}. We do not consider such
a possibility in this paper.} In the following we assign the
even $Z_2$ parity to the upper (right-handed) component of bulk
fermions, i.e.\ ${\cal N}(-x^5)=\gamma_5{\cal N}(x^5)$, and will
drop the $m^c$ term.
With a set of boundary conditions, the bulk fermions ${\cal N}_i$ are
expanded by Kaluza-Klein (KK) modes with their kinetic terms being
properly normalized
\begin{eqnarray}
{\cal N}(x,x^5) \;=\; \Bigg(\begin{array}{l}
\sum\limits_n \chi^n_R(x^5)N_R^n(x) \\[1mm]
\sum\limits_n \chi^n_L(x^5)N_L^n(x)
\end{array} \Bigg).
\end{eqnarray}
The wavefunctions $\chi_{R,L}^n$ are generally matrix-valued in the
generation space and we have omitted the generation indices for
notational simplicity. After integrating over the fifth dimension, we
obtain the neutrino mass matrix in four-dimensional effective
theory. Neutrinos are composed of the boundary ones and the KK
modes $(\nu,\epsilon N_R^{\,0\,*},\epsilon N_R^{\,1\,*},
N_L^{\,1},\cdots)\equiv(\nu,N)$. The zero modes of the left-handed
components have been extracted according to the boundary
condition. The neutrino mass matrix for $(\nu,N)$ is given by
\begin{eqnarray}
\renewcommand{\arraystretch}{1.15}
\qquad \left(%
\begin{array}{c|cccc}
& \,m_0^{\rm t} & \,m_1^{\rm t} & 0 & \cdots \\ \hline
m_0 & M_{R_{00}}^* & M_{R_{01}}^* & M_{K_{01}}^{}
& \cdots \\[1mm]
m_1 & M_{R_{10}}^* & M_{R_{11}}^* & M_{K_{11}}^{} & \cdots \\[1mm]
0 & M_{K_{10}}^{\rm t} & M_{K_{11}}^{\rm t}
& M_{L_{11}}^{} & \cdots \\[1mm]
\vdots & \vdots & \vdots & \vdots & \ddots
\end{array}\right)
\;\;\equiv\;\; -\!\left(
\begin{array}{c|ccc}
& & M_D^{\rm t} & \\ \hline
& & & \\
\!\!M_D & ~ & M_N & \\
& & &
\end{array}\right),
\end{eqnarray}
where the boundary Dirac masses $m_n$, the KK masses $M_K$, and the
Majorana masses $M_{R,L}$ are
\begin{alignat}{2}
m_n \;&= \, \chi^n_R\!{}^\dagger(0)m\,,& \qquad
M_{R_{mn}} &= \int_{-\pi R}^{\pi R}\!\!dx^5\,
(\chi^m_R)^{\rm t} (M_a+M_v) \chi^n_R\,, \nonumber \\
M_{K_{mn}} &= \int_{-\pi R}^{\pi R}\!\!dx^5\,
(\chi^m_R)^\dagger\partial_5\chi^n_L\,,& \qquad
M_{L_{mn}} &= \int_{-\pi R}^{\pi R}\!\!dx^5\,
(\chi^m_L)^{\rm t} (M_a-M_v) \chi^n_L\,.
\end{alignat}
It is noticed that $M_{K_{mn}}$ becomes proportional
to $\delta_{mn}$ if $\chi_{R,L}^n$ are the eigenfunctions of the bulk
equations of motion, and $M_{R,L_{mn}}$ also becomes proportional
to $\delta_{mn}$ if the bulk mass parameters $M_a$, $M_v$ are
independent of the coordinate $x^5$.
We further implement the seesaw operation
assuming ${\cal O}(m_n)\ll{\cal O}(M_K)$ or ${\cal O}(M_{L,R})$ and
find the induced Majorana mass matrix for three-generations light
neutrinos
\begin{eqnarray}
M_\nu \;=\; M_D^{\text{t}}M_N^{-1}M_D^{}.
\end{eqnarray}
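As a numerical illustration of the seesaw formula, a toy two-generation example with hypothetical mass scales (all entries in eV): Dirac entries of a few hundred keV against TeV-scale heavy masses yield sub-eV light eigenvalues.

```python
import numpy as np

# toy two-generation seesaw check (hypothetical values, all masses in eV)
M_D = np.array([[3.0e5, 1.0e5],
                [1.0e5, 2.0e5]])      # Dirac block ~ few 100 keV
M_N = np.diag([1.0e12, 3.0e12])       # heavy Majorana block ~ TeV

# seesaw-induced light mass matrix M_nu = M_D^t M_N^{-1} M_D
M_nu = M_D.T @ np.linalg.inv(M_N) @ M_D
light = np.linalg.eigvalsh(M_nu)      # eigenvalues at the sub-eV scale
```
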
It is useful for later discussion of collider phenomenology to write
down the electroweak Lagrangian in the basis where all the mass
matrices are generation diagonalized. The interactions to the
electroweak gauge bosons are given in this mass eigenstate
basis $(\nu_d,N_d)$ as follows:
\begin{eqnarray}
{\cal L}_g = \frac{g}{\sqrt{2}}\Big[W_\mu^\dagger e^\dagger\sigma^\mu
U_{\rm MNS} \big(\nu_d+VN_d\big) +\text{h.c.}\Big]
+\frac{g}{2\cos\theta_W}Z_\mu\big(\nu_d^\dagger+
N_d^\dagger V^\dagger\big)\sigma^\mu\big(\nu_d+VN_d\big),\;
\end{eqnarray}
where $W_\mu$ and $Z_\mu$ are the electroweak gauge bosons and $g$ is
the $SU(2)_{\rm weak}$ gauge coupling constant. The 2-component
spinors $\nu_d$ are three light neutrinos for which the seesaw-induced
mass matrix $M_\nu$ is diagonalized
\begin{eqnarray}
M_\nu \;=\; U_\nu^*\,M_\nu^d\,U_\nu^\dagger, \qquad
U_\nu\,\nu_d \;=\; \nu-M_D^\dagger M_N^{-1\,*}N,
\end{eqnarray}
and $N_d$ denote the infinite number of neutrino KK modes for which
the bulk mass matrix $M_N$ is diagonalized in the generation and KK
spaces by a unitary matrix $U_N\,$:
\begin{eqnarray}
M_N \,=\; U_N^*\,M_N^d\,U_N^\dagger, \qquad
U_N N_d\ \,=\; N+M_N^{-1}M_D^{}\,\nu.
\end{eqnarray}
The lepton mixing matrix measured in the neutrino oscillation
experiments is given by $U_{\rm MNS}=U_e^\dagger U_\nu$ where $U_e$ is
the left-handed rotation matrix for diagonalizing the charged-lepton
Dirac masses. It is interesting to find that the model-dependent parts
of electroweak gauge vertices are governed by a single
matrix $V$ which is defined as
\begin{eqnarray}
V \;=\; U_\nu^\dagger M_D^\dagger M_N^{-1\,*}U_N.
\end{eqnarray}
When one works in the basis where the charged-lepton sector is flavor
diagonalized, $U_\nu$ is fixed by the neutrino oscillation matrix.
The neutrinos also have the interaction to the electroweak doublet
Higgs $H$ in four dimensions. The boundary Dirac
mass~\eqref{boundary} comes from the Yukawa coupling
\begin{eqnarray}
{\cal L}_h \;=\; -\big(y\overline{\cal N}LH^\dagger
+\text{h.c.}\big)\delta(x^5).
\end{eqnarray}
The doublet Higgs $H$ has a non-vanishing expectation value $v$ and
its fluctuation $h(x)$. After integrating out the fifth dimension and
diagonalizing mass matrices, we have
\begin{eqnarray}
{\cal L}_h \;=\; \frac{-1}{v}\sum_n
\big[(N_d^{\rm t}-\nu_d^{\rm t}V^*)U_N^{\rm t}\big]_{R_n}\!\!
m_nU_\nu\,\epsilon(\nu_d+VN_d)h^* +\text{h.c.},
\end{eqnarray}
where $[\cdots]_{R_n}$ means the $n$-th mode of the right-handed
component.
\medskip
The heavy neutrino interactions to the SM fields are determined by the
mixing matrix $V$ both in the gauge and Higgs
vertices. The $3\times\infty$ matrix $V$ is determined by the matrix
forms of neutrino masses in the original
Lagrangian ${\cal L}+{\cal L}_b$. The matrix elements in $V$ have
the experimental upper bounds from electroweak physics, as will be
seen later. Another important constraint on $V$ comes from the
low-energy neutrino experiments, namely, the seesaw-induced masses
should be of the order of eV scale, which in turn specifies the scale
of heavy neutrino masses $M_N$. This can be seen from the definition
of $V$ by rewriting it with the light and heavy neutrino mass
eigenvalues
\begin{eqnarray}
V \;=\; (M_\nu^d)^{\frac{1}{2}}P(M_N^d)^{-\frac{1}{2}},
\label{V}
\end{eqnarray}
where $P$ is an arbitrary $3\times\infty$ matrix
with $PP^{\rm t}=1$. Therefore one naively expects that, with a fixed
order of $M_\nu^d\sim10^{-1}\,\text{eV}$ and $V\gtrsim10^{-2}$ for the
discovery of experimental signatures of heavy neutrinos, their masses
should be very light and satisfy $M_N^d\lesssim$~keV (this does not
necessarily mean the seesaw operation is not justified as $M_\nu^d$ is
fixed). Previous collider studies on TeV-scale right-handed
neutrinos~\cite{TeVRH} did not impose the seesaw relation~\eqref{V}
and had to rely on some assumptions for suppressing the necessarily
induced masses $M_\nu$: for example, the neutrino mass matrix must have
a singular generation structure, since otherwise the seesaw neutrinos
decouple from collider physics.
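The naive scaling $|V|\sim\sqrt{m_\nu/M_N}$ implied by Eq.~\eqref{V} with $|P|\sim1$ quantifies this decoupling:

```python
import math

def naive_mixing(m_nu_ev, m_heavy_ev):
    """Naive seesaw mixing |V| ~ sqrt(m_nu / M_N), taking |P| ~ 1."""
    return math.sqrt(m_nu_ev / m_heavy_ev)

# 0.1 eV light mass against a 1 TeV heavy mass (illustrative scales)
v = naive_mixing(1.0e-1, 1.0e12)
# v ~ 3e-7, far below the V >~ 1e-2 needed for collider signatures,
# which is why a cancellation mechanism is required
```
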
\medskip
A possible scenario for observable heavy neutrinos is to take a
specific value of bulk Majorana masses. Here we assume that bulk Dirac
masses vanish but it is easy to include them by attaching wavefunction
factors in the following formulas. The equations of motion without
bulk Majorana masses are solved by simple oscillators and the mass
matrices in four-dimensional effective theory are found
\begin{alignat}{2}
m_n \,\;&=\, \frac{m}{\sqrt{2^{\delta_{n0}}\pi R}}\,,& \qquad\;\;
M_{R_{mn}} &=\; \delta_{mn}(M_a+M_v), \nonumber \\[.5mm]
M_{K_{mn}} &=\; \frac{n}{R}\delta_{mn}\,,& \qquad\;\;
M_{L_{mn}} &=\; \delta_{mn}(M_a-M_v).
\end{alignat}
From these,
we find the seesaw-induced mass matrix and the mixing with heavy modes:
\begin{eqnarray}
M_\nu &=& \frac{1}{2\pi R}\,m^{\rm t}\frac{\pi RX}{\tan(\pi RX)}\,
\frac{1}{(M_a+M_v)^*}\,m, \\[1mm]
\nu &=& U_\nu\nu_d \,+\frac{1}{\sqrt{2\pi R}}\,m^\dagger\bigg[\,
\,\frac{1}{M_a+M_v}\,\epsilon N_R^{\,0\,*}
\nonumber \\
&& \qquad\qquad
+\sum_{n=1}\frac{\sqrt{2}}{X^{2*}\!-(n/R)^2}\,
\Big[(M_a-M_v)^*\epsilon N_R^{\,n\,*}+
\frac{n}{R}\,N_L^{\,n}\Big]\,\bigg],
\end{eqnarray}
where $X^2=(M_a+M_v)^*(M_a-M_v)$. The effect of the infinite tower of
KK neutrinos appears through the factor $\tan(\pi RX)$. An
interesting case is that (the eigenvalue(s) of) $X$ takes a specific
value $X\simeq\alpha/R$ where $\alpha$ contains half
integers~\cite{DDG}: the seesaw-induced mass matrix $M_\nu$ is
suppressed by the tangent factor (not only by a large Majorana mass
scale), on the other hand, the heavy mode interaction $V$ is
un-suppressed. This fact realizes the situation that right-handed
neutrinos in the seesaw mechanism are observable at sizable rates in
future collider experiments.
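The suppression by the tangent factor can be made explicit numerically: for $X=(1-\delta)/(2R)$, the factor $\pi RX/\tan(\pi RX)$ approaches $(\pi^2/4)\,\delta$ as $\delta\to0$, so the seesaw mass can be made arbitrarily small by tuning $\delta$ while the mixing stays un-suppressed.

```python
import math

def suppression(delta):
    """pi R X / tan(pi R X) for X = (1 - delta)/(2R): the factor that
    suppresses the seesaw-induced mass when the bulk Majorana mass sits
    near 1/(2R)."""
    x = math.pi * (1.0 - delta) / 2.0
    return x / math.tan(x)

# ~ (pi^2/4) * delta for small delta
factors = [suppression(d) for d in (1e-2, 1e-4, 1e-6)]
```
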
\bigskip
\section{Collider Signatures}
One of the most exciting signals of higher-dimensional theory at
collider experiments is the production of KK excited states. The
signals could be observed at the LHC if new physics, which is
responsible for the generation of neutrino masses, lies around the TeV
scale, and large Yukawa couplings are allowed that lead to a sizable
order of mixing between the left- and right-handed neutrinos. An
immediate question is what processes we should pay attention to find
out the signals. One important possibility is the like-sign di-leptons
signal, $pp\to\ell^+N\to\ell^\pm\ell^\pm W^\mp\to\ell^\pm\ell^\pm jj$,
because the SM background against the signal is small
enough. Unfortunately, this process violates the lepton number, which
should be proportional to the tiny Majorana neutrino masses, and is
therefore difficult to observe at the LHC\@. In this letter we
thus focus on lepton number preserving processes. While there are
various types of such processes related to heavy neutrino production,
most of these would not be observable due to huge SM backgrounds. As
we will see in the following, an exception suitable for the present
purpose is the tri-lepton signal with large missing transverse
energy; $pp\to\ell^\pm N\to\ell^\pm\ell^\mp W^\pm\to\ell^\pm\ell^\mp
\ell^\pm\nu(\bar{\nu})$ and $pp\to\ell^\pm N\to
\ell^\pm\nu(\bar{\nu})Z\to\ell^\pm\nu(\bar{\nu})
\ell^\pm\ell^\mp$ (Fig.~\ref{fig:3leptons}). They are possibly
captured at the LHC since only a small fraction of SM processes
contributes to the background against the signal.
\begin{figure}[t]
\begin{center}
\includegraphics[width=5.2cm]{3leptonW.eps}\hspace{15mm}
\includegraphics[width=5.2cm]{3leptonZ.eps}
\caption{Lepton number preserving tri-lepton processes at the LHC.}
\label{fig:3leptons}
\end{center}
\end{figure}
To investigate the signal quantitatively, we consider the
five-dimensional seesaw theory as a simple example for providing
realistic seesaw neutrino masses and observable collider
signatures. The right-handed Majorana masses
are $M_a=M$ and $M_v=0$ and diagonalized in the generation space. In
this letter it is assumed that these masses are also generation
independent. As mentioned before, the effective neutrino Majorana
masses become tiny for $M\simeq1/2R$, and thus, the right-handed
neutrino masses can be $M\sim1/R\sim{\cal O}(\text{TeV})$, while
keeping a non-negligible order of Yukawa couplings and sizable
electroweak gauge vertices for the heavy KK neutrinos. We
parametrically introduce a small quantity $\delta$ as
\begin{eqnarray}
M \;=\; \frac{1-\delta}{2R}.
\end{eqnarray}
Summing up the effects of heavy neutrinos,$\!$\footnote{In theories with
more than one extra dimension, the infinite sums over KK modes
generally diverge without regularization~\cite{BF}.} we obtain the
seesaw-induced mass $M_\nu=\frac{\delta\pi^2}{8}
\frac{m^{\rm t}m}{M}$. A vanishing value of $\delta$ makes the light
neutrinos exactly massless, where the complete cancellation occurs in
the effects of heavy neutrinos which exhibit the Dirac nature in this
case. The $n$-th excited KK mode spectrum is $M_n=(2n-1)/(2R)$.
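For orientation, these relations can be evaluated numerically. The sketch below uses $1/R=250$ GeV as an illustrative input together with the benchmark $\delta/R=10$ eV used later in the text; the numbers are examples only:

```python
# Illustrative scales of the 5D seesaw: M = (1 - delta)/(2R) and the
# KK spectrum M_n = (2n - 1)/(2R).  Input values are examples only.
inv_R_GeV = 250.0                      # compactification scale 1/R (assumed)
delta_over_R_eV = 10.0                 # benchmark delta/R in eV

inv_R_eV = inv_R_GeV * 1e9             # 1/R converted to eV
delta = delta_over_R_eV / inv_R_eV     # dimensionless lepton number violation
M1 = (1.0 - delta) / 2.0 * inv_R_GeV   # lightest right-handed mass in GeV

# First few KK masses M_n = (2n - 1)/(2R) in GeV:
kk_masses = [(2 * n - 1) / 2.0 * inv_R_GeV for n in range(1, 4)]
```

With these inputs $\delta\sim4\times10^{-11}$, i.e.\ the lepton number violation is tiny while the lightest KK neutrino sits at $125$ GeV.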
The electroweak gauge and Higgs vertices are also evaluated from the
Lagrangian given in the previous section. For example, the neutrino
Yukawa matrix $y$ in the model is expressed as
\begin{eqnarray}
\frac{y}{\sqrt{2\pi R}} \;=\;
\frac{2}{\pi v}\frac{1}{\sqrt{\delta R}}\,O^\dagger
(M_\nu^d)^\frac{1}{2}U^\dagger_{\rm MNS},
\end{eqnarray}
where $O$ is a $3\times3$ orthogonal matrix, which generally arises
in reconstructing high-energy quantities from the observable
ones~\cite{CI}; it corresponds to the matrix $P$ in \eqref{V}. The
model therefore contains the parameters $R$, $\delta$, $M_\nu^d$,
$U_{\rm MNS}$, and $O$. The neutrino mass differences and the
generation mixing parameters have been measured and we take their
typical experimental values~\cite{analysis}:
$\Delta m_{21}^2=8\times10^{-5}\>\text{eV}^2$,
$\,\Delta m_{32}^2=2.5\times 10^{-3}\>\text{eV}^2$,
$\,\sin\theta_{12}=0.56$, $\,\sin\theta_{23}=0.71$, and
$|\sin\theta_{13}|\leq0.22$. In this letter we consider the neutrino
mass spectrum with the normal hierarchy. The other cases of the
inverted and degenerate mass patterns can be analyzed in a similar
fashion. The Majorana phases in $U_{\rm MNS}$ have no physical
relevance in the present work and are set to zero. The remaining
quantities are subject to experimental constraints from low-energy
physics. In particular, the dominant constraint is found to come from
the experimental search for lepton flavor-changing
processes~\cite{LFV,lowene}. For a real orthogonal matrix $O$, the
limits imposed by lepton flavor conservation are summarized as
\begin{eqnarray}
\frac{2R}{\delta}\,U_{\rm MNS}\,M_\nu\,U_{\rm MNS}^\dagger
\;\,\leq\; \left(\begin{array}{ccc}
10^{-2} & 7\times 10^{-5} & 1.6\times 10^{-2} \\
7\times 10^{-5} & 10^{-2} & 10^{-2} \\
1.6\times 10^{-2} & 10^{-2} & 10^{-2}
\end{array}\right),
\label{LFVexp}
\end{eqnarray}
which shows that the most severe limit is given by the 1-2 component,
i.e.\ the $\mu\to e\gamma$ search. We fix $\sin\theta_{13}=0.07$ as a
typical example, and accordingly the Dirac CP phase
in $U_{\rm MNS}$ is $\phi_D=\pi$ such that the effect of lepton flavor
violation is minimized. It then turns out from \eqref{LFVexp} that all
the constraints are satisfied
for $\delta/R\geq6.6\,\text{eV}$\@. Finally, the SM Higgs mass is taken
to be $m_h=120$ GeV in evaluating the decay widths of heavy KK
neutrinos ($N\to h+\nu$).
\begin{figure}[t]
\begin{center}
\includegraphics[height=7cm]{XsectionN.eps}
\caption{Total cross sections of tri-lepton signals as functions
of the compactification scale $R$ with a fixed value $\delta/R=10$ eV.}
\label{fig:Xsection}
\end{center}
\end{figure}
\medskip
Now we are at the stage of investigating the tri-lepton signal of
heavy neutrino productions at the LHC\@. Since the tau lepton is
hardly detected compared to the others, we consider the signal event
including only electrons and muons. There are four kinds of tri-lepton
signals: $eee$, $ee\mu$, $e\mu\mu$, and $\mu\mu\mu$. In this work, we
use two combined signals which are composed
of $eee+ee\mu$ (the $2e$ signal) and $e\mu\mu+
\mu\mu\mu$ (the $2\mu$ signal). Figure~\ref{fig:Xsection} shows the
total cross sections for these signals from the 1st KK mode
productions at the LHC\@. They are shown as functions of the
compactification scale $R$ with $\delta/R$ fixed at 10 eV\@. It is found
from the figure that the cross section for the $2\mu$ signal is about
one order of magnitude larger than the $2e$ signal.\footnote{For the
inverted hierarchy spectrum of light neutrinos, the $2e$ signal cross
section becomes larger than the $2\mu$ one.} We have also evaluated
the cross sections of tri-lepton signals from heavier KK neutrinos and
found that they are more than one order of magnitude smaller than the
above and are out of reach of the LHC machine. A high-luminosity
collider with a clean environment, such as the International Linear
Collider (ILC), would clearly resolve the signatures of KK mode
resonances.
\begin{figure}[t]
\begin{center}
\includegraphics[height=7cm]{SignalsN.eps}
\caption{Expected event numbers of the $2e$ and $2\mu$ signals after
implementing the kinematical cuts. The event numbers are depicted as
functions of the compactification scale $R$ with a fixed
value $\delta/R=10$ eV\@. The integrated luminosity is taken to
be 30 fb$^{-1}$.}
\label{fig:signal}
\end{center}
\end{figure}
To clarify whether the tri-lepton signal is captured at the LHC, it is
important to estimate SM backgrounds against the signal. The SM
backgrounds which produce or mimic the tri-lepton final state have
been studied~\cite{cut,cut2}, and for the present purpose a useful set
of kinematical cuts has been discussed to reduce these SM
processes~\cite{cut2}. Following that work, we adopt the following
kinematical cuts:
\begin{itemize}
\setlength{\parskip}{0pt}
\item The existence of two like-sign charged
leptons $\ell_1^\pm$, $\ell_2^\pm$, and an additional one with the
opposite charge $\ell_3^\mp$.
\item Both energies of the like-sign leptons are larger than 30 GeV.
\item Both invariant masses
from $\ell_1$ and $\ell_3$ and from $\ell_2$ and $\ell_3$ are
larger than $m_Z+10$ GeV or smaller than $m_Z-10$ GeV.
\end{itemize}
The last condition is imposed to reduce large backgrounds from the
leptonic decays of $Z$ bosons in the SM
processes. Figure~\ref{fig:signal} shows the expected numbers of
signal events after imposing the cuts stated above. The results are
depicted by assuming the integrated luminosity 30 fb$^{-1}$. In order
to estimate the efficiency for signal events due to the cuts, we have
performed a Monte Carlo simulation with the CalcHep
code~\cite{CalcHep}. Since the event numbers of SM backgrounds after
the cut are about 260 for the $2e$ signal and 110 for
the $2\mu$ signal~\cite{cut2}, the $2\mu$ events are expected to be
observed if $1/R$ is less than a few hundred GeV.
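The three cuts listed above can be encoded in a small filter. The event representation (charges, energies of the like-sign pair, and the two invariant masses formed with the opposite-sign lepton) is a simplified stand-in for illustration, not the actual analysis code:

```python
M_Z = 91.2  # Z boson mass in GeV

def passes_cuts(leptons, m13, m23):
    """Apply the tri-lepton cuts from the text.

    leptons: three (charge, energy_GeV) tuples, ordered so that the
    first two form the like-sign pair and the third carries the
    opposite charge.  m13, m23: invariant masses (GeV) of the third
    lepton paired with each like-sign lepton.  This event format is a
    simplified illustration.
    """
    (q1, e1), (q2, e2), (q3, e3) = leptons
    if not (q1 == q2 and q3 == -q1):       # like-sign pair plus opposite charge
        return False
    if not (e1 > 30.0 and e2 > 30.0):      # energies of the like-sign leptons
        return False
    for m in (m13, m23):                   # veto the window m_Z +- 10 GeV
        if M_Z - 10.0 <= m <= M_Z + 10.0:
            return False
    return True
```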
\begin{figure}[t]
\begin{center}
\includegraphics[height=7cm]{Reach2mN.eps}
\caption{Luminosity for the 3$\sigma$ reach on
the $(1/R,\delta/R)$ plane (10, 30, and 300 fb$^{-1}$ contours).}
\label{fig:Reach2mN}
\end{center}
\end{figure}
The luminosity which is required to find the 2$\mu$ signal at the LHC
is shown in Fig.~\ref{fig:Reach2mN} as a contour plot on
the $(1/R,\delta/R)$ plane. The contour is obtained by computing the
significance for the signal discovery,
\begin{eqnarray}
S_{ig} \;\equiv\; \frac{S}{\sqrt{S+B}},
\end{eqnarray}
where $S$ and $B$ are the numbers of the 2$\mu$ events and the
corresponding SM backgrounds after the kinematical cuts. Since
both $S$ and $B$ are proportional to the luminosity, it is possible to
estimate the luminosity required for, e.g., $S_{ig}=3$, which is
plotted in Fig.~\ref{fig:Reach2mN}. The luminosity for signal
confirmation ($S_{ig}=5$) is also obtained by scaling this
result. Contours for luminosities of 10, 30, and 300 fb$^{-1}$ are
depicted in the figure. It is found that if $1/R$ is less than about 250 GeV, the
signals will be observed at the early run of the LHC, while a larger
luminosity is needed for a smaller size of extra dimension to find its
signals.
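Since both $S$ and $B$ grow linearly with the integrated luminosity, the luminosity required for a given significance follows from inverting the formula above, $L=S_{ig}^2(s+b)/s^2$ for rates $s$ and $b$ per unit luminosity. In the sketch below the signal rate is a made-up example; the background rate uses the $\sim$110 events at 30 fb$^{-1}$ quoted above for the $2\mu$ channel:

```python
def required_luminosity(sig_target, s_rate, b_rate):
    """Integrated luminosity (fb^-1) needed for S/sqrt(S+B) = sig_target,
    given signal and background rates s, b in events per fb^-1.
    From S = s*L and B = b*L:  sig = s*L/sqrt((s+b)*L),
    hence L = sig^2 * (s + b) / s^2.
    """
    return sig_target**2 * (s_rate + b_rate) / s_rate**2

# Hypothetical signal rate; the background rate uses the ~110 events
# at 30 fb^-1 quoted in the text for the 2mu channel:
s_rate, b_rate = 2.0, 110.0 / 30.0
L3 = required_luminosity(3.0, s_rate, b_rate)   # for a 3 sigma reach
L5 = required_luminosity(5.0, s_rate, b_rate)   # for a 5 sigma confirmation
```

The scaling of the contours in the figure follows directly: the $5\sigma$ luminosity is $(5/3)^2$ times the $3\sigma$ one.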
\bigskip
\section{Summary and Discussion}
We have discussed a seesaw scenario where right-handed neutrinos are
around TeV scale, accessible in near future particle experiments. The
seesaw-induced mass scale is of the order of eV, while the
right-handed neutrinos have sizable gauge and Yukawa couplings to the
SM sector. The scenario is a five-dimensional extension of the SM
with right-handed neutrinos, where the ordinary SM particles locally
live in four dimensions and the right-handed neutrinos exist in the
bulk. The light neutrinos obtain tiny Majorana masses due to the small
lepton number violation, and therefore the same-sign di-lepton
processes cannot be observed. We have analyzed, as the most effective
LHC signal, the lepton number preserving processes with tri-lepton
final states, $pp\to\ell^\pm\ell^\pm\ell^\mp\nu(\bar\nu)$. It is found
that the scenario gives a sufficient excess of tri-lepton events over the
SM backgrounds in wide regions of parameter space, and the LHC would
discover the signs of tiny neutrino mass generation and extra
dimensions.
The possible experimental detection of neutrino mass generation has
been discussed for other seesaw
scenarios~\cite{dileptons,TeVRH,others}. In the present analysis, only
the 1st excited mode contributes to the signals. The observation of
higher KK modes is expected to be within the reach of future collider
experiments such as the ILC, which would substantially confirm the
scenario. Further analysis of such collider signatures, together with
the inclusion of bulk Dirac masses and curved gravitational
backgrounds, is left for future study.
\bigskip
\subsection*{Acknowledgments}
\noindent
The authors thank Takahiro Nishinaka for collaboration during the
early stage of this work. This work is supported in part by the
scientific grant from the ministry of education, science, sports, and
culture of Japan (Nos. 20540272, 20039006, 20244028, 18204024,
20025004, and 20740135), and also by the grant-in-aid for the global
COE program ``The next generation of physics, spun from universality
and emergence'' and the grant-in-aid for the scientific research on
priority area (\#441) ``Progress in elementary particle physics of the
21st century through discoveries of Higgs boson and supersymmetry''
(No.~16081209).
\newpage
\section{Introduction}
The purpose of this thesis is the computation of observables for heavy-light mesons, mesons with a light and a heavy valence quark, in QCD. There are various methods available to accomplish this task; hence, the first question is which method to choose for the computations. The degrees of freedom to be employed give a first constraint on the candidate methods: here, quarks and gluons, the elementary degrees of freedom of QCD, should be used. This choice reduces the number of applicable methods. In particular, QCD Sum Rules (QSR) are chosen for the task.\\
Those few sentences raise several questions, two of which are of special interest. How are the properties of resonances calculated in QCD, the theory of strong interactions? What are QCD Sum Rules? The answers to these questions directly show how the calculations performed in this thesis work, and based on those calculations conclusions can be drawn.\\
The classical computations based on the QCD Lagrangian have been perturbative ones. They were based on a perturbative solution of the QCD Lagrangian, in which the solution is expanded in the coupling constant $\alpha_{s}$ and every order can be calculated via the Feynman diagram technique. Characteristic computations based on perturbation theory have been calculations of running coupling constants, running masses or charges, and of deep inelastic scattering. Perturbation theory was also used to check whether QCD is the theory of strong interactions or not. \\
Unfortunately, the properties of mesons cannot be calculated via perturbation theory. The perturbative solution of the QCD Lagrangian is useful only when it can be truncated after a certain order: there are infinitely many orders in $\alpha_{s}$, and a calculation of every order would be impossible. Therefore the coupling constant $\alpha_{s}$ has to be small if the truncation is to be valid. However, the coupling constant is momentum dependent: it is small for momenta larger than $\Lambda_{QCD}$ and large for momenta smaller than $\Lambda_{QCD}$. Thus, the perturbative solution is expected to be valid for large momenta and invalid for small momenta. This behavior is called asymptotic freedom, because the interaction tends to zero as the energy goes to infinity. Asymptotic freedom is the property that allows for all perturbative calculations in QCD.\\
In the time when the tests and measurements based on perturbation theory were performed, a new phenomenon was recognized: free quarks or gluons have never been found. This is called the confinement of quarks. To our knowledge it is impossible to produce free quarks or gluons; they are always confined in a bound state. It is, however, widely believed that at high energies QCD matter melts and reveals quasi-free quarks and gluons, something like a liquid whose constituents are quarks and gluons. Confinement has not been explained to date and is possibly not explainable by perturbative physics, but it is most probably closely connected to hadron physics.\\
The physics around us takes place in the domain of small momenta, whereas the physics at high momenta occurred naturally only in the early universe; there are artificial sources, such as accelerators, where physics at high momentum scales can be studied. Hence, if the QCD physics at the energy scales at which we live is to be described, a non-perturbative solution of the QCD Lagrangian is needed. Unfortunately, there is only one method which is supposed to be capable of solving the theory in every domain, lattice QCD, but this method needs a lot of computer power and is not very instructive about the physics behind the calculations. Therefore, a less direct method is used here, but one with considerably more physical insight than lattice QCD.\\
The dream of everybody who works in this field would be to find a formula like the one for the hydrogen atom, where every bound state has a main quantum number $n$ which characterizes the mass of the bound state. Something like that is what modern researchers aim for. The fine structure of the problem is completely neglected, because the gross structure is already difficult enough. One way to represent the states are spectral functions, where every bound state is identified with a peak in a function of the energy. The properties of the bound states characterize the peaks; thus, if the properties are known the peaks can be calculated, and vice versa. The mathematical objects which contain the spectral functions of the bound states are correlators, as expressed in the K\"all\'en-Lehmann spectral representation. In the domain of time-like momentum the correlator is given by the spectral function of the system. In a perturbative calculation bound states are not generated.\\
However, there is a possibility to estimate the spectral function taking non-perturbative effects of the QCD Lagrangian into account. One exploits the fact that the correlator in the space-like domain, where the momentum squared is negative, can be calculated by the so-called operator product expansion (OPE). Via a dispersion relation, the time-like and the space-like domains can be connected. A phenomenological ansatz for the spectral function is made, and the properties of this ansatz are fitted in order to reproduce the OPE; in this way the spectral function is determined by the scheme. QCD enters the scheme through the OPE, which is the quantity where theoretical calculations enter the game: the so-called Wilson coefficients of the OPE can be calculated from the QCD Lagrangian. Hence, this scheme has two parts, a phenomenological part, the spectral function, and a theoretical part, the OPE. Such a scheme is called a QCD Sum Rule. It has two features that provide the non-perturbative information: through the OPE non-perturbative information enters the calculation, and the dispersion relation connects the spectral function with this information. \\
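Anticipating the discussion in section \ref{dispersionrelations}, the connection exploited by this scheme can be written schematically: modulo subtraction terms, the correlator $\Pi(q^2)$ and the spectral function $\rho(s)$ are related by

```latex
\begin{eqnarray}
\Pi(q^2) \;=\; \frac{1}{\pi}\int_{0}^{\infty} ds\,
\frac{\rho(s)}{s-q^2-i\epsilon},
\end{eqnarray}
```

where the left-hand side at space-like $q^2$ is computed from the OPE, while the right-hand side carries the phenomenological ansatz for $\rho(s)$; in practice the integration starts at the threshold of the channel under consideration rather than at $0$.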
The thesis is structured in four parts:
\begin{itemize}
\item In the first part, sections \ref{reguandreno}, \ref{OPE}, \ref{wilsoncoefficient} and \ref{condensates}, the OPE and its calculation are introduced, with special attention to some subtle but important features of the OPE.
\item In the second part, section \ref{dispersionrelations}, dispersion relations are discussed.
\item The third part, sections \ref{sumrules}, \ref{charmonium} and \ref{borel}, introduces correlators, spectral functions, QCD Sum Rules and the Borel Sum Rules.
\item In the fourth part, section \ref{hl_systems}, QCD Sum Rule calculations of heavy-light systems are presented. In this part novel calculations are discussed, and in addition previous QCD Sum Rule calculations are reviewed. In these examples the Wilson coefficients are calculated in QCD and in the heavy quark effective theory (HQET).
\end{itemize}
\section{Regularization and renormalization \label{reguandreno}}
In this section, important terminology and techniques in the field of regularization and renormalization are introduced. First the terminology is discussed, and then examples of some common techniques are presented. The results of the examples will be used in subsequent sections.
\subsection{The terminology of superficial divergence}
Let $D$ be the superficial degree of divergence: an amplitude with degree $D$ diverges like
\begin{eqnarray}
\int^{\infty}_{0} k^{D-1} dk.
\end{eqnarray}
Some examples are given here:
\begin{eqnarray}
D=0~~~~~~~~~~~\int^{\infty}_{0} k^{-1} dk=\int^{\infty}_{0} \frac{1}{k}dk=\left(\ln[k]\right)_{0}^{\infty}
\end{eqnarray}
\begin{eqnarray}
D=1~~~~~~~~~~~\int^{\infty}_{0} k^{0} dk=\int^{\infty}_{0} 1 dk=\left(k\right)_{0}^{\infty}
\end{eqnarray}
\begin{eqnarray}
D=2~~~~~~~~~~~\int^{\infty}_{0} k^{1} dk=\int^{\infty}_{0} k dk=\left(\frac{k^2}{2}\right)_{0}^{\infty}.
\end{eqnarray}
This means that all amplitudes with $D\geq0$ are divergent; it is important to notice that the amplitude with $D=0$ is also divergent. Such amplitudes are logarithmically divergent. Examples of divergent diagrams are loop diagrams. A simple example occurs in the real scalar $\phi^4$ theory, where the tadpole diagram is divergent:
\begin{eqnarray}
\parbox[c]{3cm}{
\begin{fmffile}{phi4tadpole01}
\begin{fmfgraph*}(23,20)
\fmfleft{i}
\fmfright{o}
\fmf{plain_arrow,label=$q$}{i,v}
\fmf{plain_arrow,label=$k$,tension=0.8}{v,v}
\fmf{plain_arrow,label=$q$}{v,o}
\fmfdot{v}
\end{fmfgraph*}
\end{fmffile}}
\sim\int d^{4}k\frac{i}{k^2-m^2} \label{tadpole01}.
\end{eqnarray}
In this case $D=2$. The external legs have been amputated in the explicit expression. The number of loop integrals in each theory is infinite, but fortunately there is a class of quantum field theories in which the number of elementary divergent diagrams is limited. In this class the treatment of these divergences is simpler than in a class of theories with infinitely many divergent diagrams. An example of a diagram which is finite although it contains a loop is\\
\begin{eqnarray}
\parbox[c]{4cm}{
\begin{fmffile}{phi46}
\begin{fmfgraph*}(20,15)
\fmfleft{i1,i2,i3,i4}
\fmfright{o1,o2,o3,o4}
\fmf{plain}{i1,v2}
\fmf{plain}{i2,v1}
\fmf{plain}{i3,v1}
\fmf{plain}{i4,v3}
\fmf{plain}{o1,v2}
\fmf{plain}{o2,v4}
\fmf{plain}{o3,v4}
\fmf{plain}{o4,v3}
\fmf{plain}{v1,v2,v4,v3,v1}
\fmffixedx{0}{v2,v3}
\fmffixedy{20}{v2,v3}
\fmfdot{v1,v2,v3,v4}
\fmflabel{$q_{1}$}{i4}
\fmflabel{$q_{2}$}{i3}
\fmflabel{$q_{3}$}{i2}
\fmflabel{$q_{4}$}{i1}
\fmflabel{$q'_{1}$}{o4}
\fmflabel{$q'_{2}$}{o3}
\fmflabel{$q'_{3}$}{o2}
\fmflabel{$q'_{4}$}{o1}
\end{fmfgraph*}
\end{fmffile}}\sim\int d^{4}k\left(\frac{i}{k^2-m^2}\right)^4.
\end{eqnarray}\\
Here $D=-4$, so the integral is not divergent. The external legs are again amputated and the external momenta are set to 0.
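The two power-counting results above, $D=2$ for the tadpole and $D=-4$ for the box, follow from the standard formula for scalar $\phi^4$ graphs in four dimensions, $D=4L-2I$, with $L=I-V+1$ loops for $I$ internal lines and $V$ vertices. The helper below merely automates this counting:

```python
def superficial_degree(internal_lines, vertices):
    """Superficial degree of divergence of a phi^4 graph in four
    dimensions: each loop contributes d^4k and each internal
    propagator a factor 1/k^2, so D = 4L - 2I with L = I - V + 1."""
    loops = internal_lines - vertices + 1
    return 4 * loops - 2 * internal_lines

D_tadpole = superficial_degree(internal_lines=1, vertices=1)  # D = 2
D_box = superficial_degree(internal_lines=4, vertices=4)      # D = -4
```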
\subsection{Regularization and renormalization of the tadpole diagram in $\lambda\phi^4$-theory}
The Lagrangian of the theory is given by
\begin{eqnarray}
\mathscr{L}=\frac{1}{2}\left(\partial_{\mu}\phi\right)^2-\frac{1}{2}m^2\phi^2-\frac{\lambda}{4!}\phi^4.
\end{eqnarray}
There is only one diagram at order $\lambda^1$ (see equation (\ref{tadpole01})); it is the simplest divergent diagram that exists. Divergent diagrams can be regularized: the regularized expression parameterizes the divergence of the diagram. After the regularization it is possible to renormalize the diagram. Renormalization is a process in which the infinities are removed from observable quantities by absorbing them into the bare parameters of the theory. To achieve that, a reference momentum is chosen at which the observable quantity is measured; this point is called the renormalization point. Taking the difference of the quantity given by the diagram at the renormalization point and at another point results in an expression given by the value of the observable at the renormalization point and some momentum-dependent function. At this stage the infinities cancel, and the theory is defined at an arbitrary renormalization point.\\
The full analytic expression for the diagram (\ref{tadpole01}) is
\begin{eqnarray}
\frac{i}{q^2-m^2}\left(-i\lambda\right)\int\frac{d^4k}{(2\pi)^4}\frac{i}{k^2-m^2}\frac{i}{q^2-m^2}=\frac{\left(-\lambda\right)}{(q^2-m^2)^2}\int\frac{d^4k}{(2\pi)^4}\frac{1}{k^2-m^2}\label{tadpolefull}.
\end{eqnarray}
The momentum conservation at the vertex in (\ref{tadpole01}) shows that the external momenta do not contribute to the internal momentum (see figure \ref{tadpolemomentum}).
\begin{figure}[htbp]
\begin{center}
\begin{picture}(0,0)%
\includegraphics{vertextadpole.eps}%
\end{picture}%
\setlength{\unitlength}{4144sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\begin{picture}(2384,1805)(879,-1844)
\put(1261,-1771){$p_{total}=k-k+q-q=0$}%
\put(1351,-331){$k$}%
\put(1126,-1321){$q$}%
\put(2611,-1321){$q$}%
\put(2971,-421){$k$}%
\end{picture}%
\caption{Momentum conservation at the vertex.\label{tadpolemomentum}}
\end{center}
\end{figure}
The diagram is factorized into the contribution of two free propagators and the loop. In the following only the loop integral is kept. This integral, which represents the loop, is divergent; its superficial degree of divergence is $D=2$. There exist various methods to regularize and renormalize divergent diagrams. Every method has advantages and disadvantages; therefore the three most common ones are applied to the tadpole diagram for a comparison and an overview.\\
An explicit integration of the loop integral is needed. One method to accomplish the integration is the Wick rotation, in which the integration in four-dimensional Minkowski space is transformed into an integration in a four-dimensional Euclidean space. The only tool employed is the residue theorem.\\
\begin{figure}[htbp]
\begin{center}
\begin{picture}(0,0)%
\includegraphics{coordinate02.eps}%
\end{picture}%
\setlength{\unitlength}{4144sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\begin{picture}(3044,3044)(2199,-4283)
\put(5011,-2971){$Re[k_{0}]$}%
\put(3841,-1441){$Im[k_{0}]$}%
\put(2941,-2941){$k_{0}=-m$}%
\put(4051,-2671){$k_{0}=m$}%
\put(3361,-2131){$R$}%
\end{picture}%
\caption{Contour of integration in the zero component of (\ref{tadpolefull}). The red dots are mass poles shifted by $\epsilon$ away from the real axis. If the limit $R\rightarrow \infty$ is taken, the integrations over the arcs vanish.\label{wickrotation}}
\end{center}
\end{figure}
The integration over the contour in figure \ref{wickrotation} is zero. In the limit $R\rightarrow \infty$ only the integrations along the real and the imaginary axis contribute. Therefore the integrations must be equal except for their signs. This enables the replacement of the integral along the real axis, $I_{1}$, with the integral along the imaginary axis, $I_{2}$. The integral $I_{2}$ is not affected by the poles sketched in figure (\ref{wickrotation}), and furthermore the replacement transforms the integral from Minkowski to Euclidean space. This transformation is further investigated in section \ref{wilsoncoefficient}. The whole procedure can also be summarized in a graphical picture: the integration contour of $I_{1}$ can be rotated into the integration contour of $I_{2}$ without crossing the mass poles. Hence the name Wick rotation is explained as a simple rule for the procedure just outlined.\\
After the Wick rotation the integral is given by the following expression:
\begin{eqnarray}
\int\frac{d^4k}{(2\pi)^4}\frac{1}{k^2-m^2}=\frac{-i}{8\pi^2}\int_{0}^{\infty}
dk_{E}\frac{k_{E}^3}{k_{E}^2+m^2}=\frac{-i}{16\pi^2}\left(k_{E}^2-m^2\ln[k_{E}^2+m^2]\right)_{0}^{\infty}.
\end{eqnarray}
The pole in the integrand has vanished and the integration runs in Euclidean space; this is symbolized by the index E.
The transformation law is simply $k^0=ik^{0}_{E}$, $\vec{k}=\vec{k}_{E}$; the minus sign in front of the integration was explained above.\\
Thus the integral is computed, and the comparison of the renormalization methods can be started.
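The antiderivative used in the last step can be cross-checked numerically; the sketch below compares a midpoint-rule evaluation of the Euclidean radial integral (with the prefactor $-i/8\pi^2$ stripped off) against the closed form, for arbitrary illustrative values of $m$ and the upper limit:

```python
import math

def radial_integral(m, lam, steps=200000):
    """Midpoint-rule evaluation of int_0^lam k^3/(k^2 + m^2) dk, the
    Euclidean radial integral left over after the Wick rotation."""
    h = lam / steps
    total = 0.0
    for i in range(steps):
        k = (i + 0.5) * h
        total += k**3 / (k**2 + m**2) * h
    return total

def closed_form(m, lam):
    """Antiderivative (k^2 - m^2 ln(k^2 + m^2))/2 evaluated from 0 to lam."""
    return 0.5 * (lam**2 - m**2 * math.log((lam**2 + m**2) / m**2))

num = radial_integral(1.0, 50.0)
exact = closed_form(1.0, 50.0)
```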
\begin{enumerate}
\item The cutoff method:\\ \\
This method is the simplest one. Application of the Feynman rules results in an integration from 0 to
$\infty$. This integral is divergent and therefore has to be regularized. Regularization means a parameterization of the divergence, or in other words a method to make the integral finite together with a rule that tells how the original integral can be recovered from the regularized one. One way to achieve that is to replace the upper limit by $\Lambda$. The result is:
\begin{eqnarray}
\int\frac{d^4k}{(2\pi)^4}\frac{1}{k^2-m^2}=\frac{-i}{16\pi^2}\left(k_{E}^2-m^2\ln[k_{E}^2+m^2]\right)_{0}^{\Lambda}\nonumber\\
=\frac{-i}{16\pi^2}\left(\Lambda^2-m^2\ln\left[\frac{\Lambda^2}{m^2}+1\right]\right)\underbrace{\approx}_{\Lambda\gg m}\frac{-i}{16\pi^2}\left(\Lambda^2-m^2\ln\left[\frac{\Lambda^2}{m^2}\right]\right)=-if\left(\Lambda\right) \label{power}.
\end{eqnarray}
The original divergent result is recovered in the limit $\Lambda\rightarrow\infty$; the cutoff parameterizes the divergence of the tadpole, i.e.\ the integral is regularized. Without the cutoff the integral is infinite.\\
Renormalization is a method to get rid of the parameter $\Lambda$, which has to be sent to infinity and is therefore unphysical. The question is what effect this divergence has. As we know from measurements, the propagation of a particle from a to b is
not infinitely likely. This gives us the strong hint that the divergence should be absorbed into something inside the
amplitude. If we neglect all contributions to the propagator except the tadpole, we can form a geometric
series:
\begin{eqnarray}
D(p^2)=i\int d^4x~e^{ip^{\mu}x_{\mu}}\bra{0}T\{\phi(0)\phi(x)\}\ket{0}=\nonumber\\
\frac{i}{p^2-m^2}\left[1-if\left(\Lambda\right)\frac{1}{p^2-m^2}+\left(-if\left(\Lambda\right)\frac{1}{p^2-m^2}\right)^2+...
\right]=\frac{i}{p^2-m^2}\frac{1}{1-\frac{-if\left(\Lambda\right)}{p^2-m^2}}=\nonumber\\=\frac{i}{p^2-m^2+if\left(\Lambda
\right)}=\frac{i}{p^2-m^2_{R}} \label{selfenergy}.
\end{eqnarray}
This shows that the mass can absorb the divergence which we parameterized by the cutoff. The combination
$m^2-if\left(\Lambda\right)$ is called the renormalized mass squared $m^2_{R}$. The mass parameter that we called $m$ is the bare mass and has to be divergent in order to absorb the divergent $-if\left(\Lambda\right)$.
If the bare mass is assumed to be divergent, this subtraction of divergences eliminates the problem of infinities. Thus renormalization of the diagram is achieved.\\
The renormalization just performed has to be treated with care. The loop integral is independent of the external
momentum $p^{\mu}$. This means that the renormalization of the mass by means of the tadpole is a constant shift of the mass; it does not
produce a momentum-dependent mass, the renormalized mass stays constant. The first momentum-dependent loop emerges at second order
in $\lambda$. Therefore no physically measurable effect exists that proves whether the mass is shifted by the tadpole or not.\\
The consequence of this analysis is a simplification of our calculations: all loop diagrams in which the loops do not
depend on the external momentum $p$ can be dropped from the perturbation expansion of the two-point function. They produce no
momentum dependence of the mass, but the mass is a parameter which enters the theory from measurements. This means it is fixed if all contributions beyond first order in $\lambda$ can be neglected.
\item The Pauli-Villars method:\\ \\
Another method to regularize the tadpole diagram is the introduction of regulators, which are just copies of the original particle's propagator with a very large ``mass''. The starting point is the loop integral
\begin{eqnarray}
\int\frac{d^4k}{(2\pi)^4}\frac{1}{k^2-m^2}-D_{1}\int\frac{d^4k}{(2\pi)^4}\frac{1}{k^2-M_{1}^2}-D_{2}\int\frac{d^4k}{(2\pi)^4}\frac{1}{k^2-M_{2}^2}\label{paulivillars}
\end{eqnarray}
with the conditions
\begin{eqnarray}
D_{1}+D_{2}=1 \label{cond01}
\end{eqnarray}
and
\begin{eqnarray}
D_{1}M_{1}^2+D_{2}M_{2}^2=m^2\label{cond02}.
\end{eqnarray}
The additional propagators are the regulators with heavy masses $M_{i}$. Performing the integration with the cutoff as before gives
\begin{eqnarray}
\frac{-i}{16\pi^2}\left(\Lambda^2-m^2\ln\left[\frac{\Lambda^2}{m^2}+1\right]-D_{1}\Lambda^2+D_{1}M_{1}^{2}\ln\left[\frac{\Lambda^2}{M_{1}^2}\right]-D_{2}\Lambda^2+D_{2}M_{2}^{2}\ln\left[\frac{\Lambda^2}{M_{2}^2}\right]\right)\nonumber\\
=\frac{-i}{16\pi^2}\left(\Lambda^2-D_{1}\Lambda^2-D_{2}\Lambda^2-m^2\ln\left[\frac{\Lambda^2}{m^2}\right]+D_{1}M_{1}^{2}\ln\left[\frac{\Lambda^2}{M_{1}^2}\right]+D_{2}M_{2}^{2}\ln\left[\frac{\Lambda^2}{M_{2}^2}\right]\right)
\end{eqnarray}
Together with (\ref{cond01}), which gives $\Lambda^2-D_{1}\Lambda^2-D_{2}\Lambda^2=0$, and with (\ref{cond02}),
\begin{eqnarray} D_{1}M_{1}^{2}\ln\left[\frac{\Lambda^2}{M_{1}^2}\right]+D_{2}M_{2}^{2}\ln\left[\frac{\Lambda^2}{M_{2}^2}\right]\approx(D_{1}M_{1}^{2}+D_{2}M_{2}^{2})\ln\left[\frac{\Lambda^2}{M^2}\right]=m^{2}\ln\left[\frac{\Lambda^2}{M^2}\right]
\end{eqnarray}
for $M_{1}\approx M_{2}\approx M$, the result for the regularized loop integral is
\begin{eqnarray}
\frac{-i}{16\pi^2}\left(-m^2\ln\left[\frac{M^2}{m^2}\right]\right).
\end{eqnarray}
In comparison with (\ref{power}), the power term $\Lambda^2$ does not appear; this term is therefore regularization dependent.\\
However, the renormalization procedure shown in the paragraph about the cutoff method holds here too. The physical quantity, the mass, does not change, because the shift due to the loop is unobservable.\\
The phenomenon just observed is of general nature and can be summarized as follows: different regularization procedures can result in different expressions for the same diagram, but the renormalized observable quantities are not allowed to change if the regularization procedure is changed. Unless this condition is satisfied, the theory used would make no sense at all.
\item The method of dimensional regularization:\\ \\
Another method to regularize the divergence in (\ref{tadpolefull}) is to extend the integral in 4 dimensions to an integral in $n$ dimensions. The extended integral is divergent for even $n$ with $n\geq4$. The loop integral in $n$ dimensions is given by
\begin{eqnarray}
\int\frac{dk^n}{(2\pi)^n}\mu^{4-n}\frac{i}{k^2-m^2}=\frac{1}{(4\pi)^{\frac{n}{2}}}\mu^{4-n}\frac{\Gamma(1-\frac{n}{2})}{\left( m^2\right)^{1-\frac{n}{2}}}=\nonumber\\\frac{m^2}{16\pi^2}\left[-\frac{2}{4-n}-\gamma-1-\ln(4\pi)+\ln\left( \frac{m^2}{\mu^2}\right) +\mathcal{O}(4-n)\right] \label{dimensional}
\end{eqnarray}
where $\gamma$ is the Euler-Mascheroni constant and the factor $\mu^{4-n}$ is introduced to keep the dimension of the expression fixed for all $n$. The integration can again be performed with a Wick rotation. Thus the third regularization method yields yet another expression, different from the results of the first two methods; the expressions are regularization dependent.\\
The dimensionally regularized loop diagram has to be renormalized, i.e. the divergence has to be subtracted. Two subtraction schemes are very common: the $MS$ and the $\overline{MS}$ scheme.
In the $MS$ scheme just the pole term $-\frac{2}{4-n}$ is subtracted, while in the $\overline{MS}$ scheme the pole term together with the Euler-Mascheroni constant and the $\ln(4\pi)$ term is subtracted. Therefore the renormalized integrals are scheme dependent. This example shows that not only the regularization but also the renormalization can be scheme dependent.\\
Again, the physical quantities are not affected by such scheme dependences. The tadpole diagram in dimensional regularization (\ref{dimensional}) has to be summed up as shown for the cut-off method before it is subtracted. The scheme dependence does not show up in the renormalized mass; the mass is just shifted from the bare to the physical value.
\end{enumerate}
The methods shown here are convenient for the calculation of one-loop diagrams. A criterion which constrains the choice of the renormalization technique is whether the symmetries of the theory in which the calculations are performed are still valid after renormalization.
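As a cross-check, the $\varepsilon$-expansion in (\ref{dimensional}) can be reproduced symbolically. The sketch below (using sympy, with $n=4-2\varepsilon$) relies only on the standard expansion $\Gamma(-1+\varepsilon)=-\frac{1}{\varepsilon}+\gamma-1+\mathcal{O}(\varepsilon)$:

```python
import sympy as sp

eps, m, mu = sp.symbols('epsilon m mu', positive=True)
n = 4 - 2*eps  # continue the dimension away from n = 4

# n-dimensional tadpole after Wick rotation (the closed form before expansion)
expr = mu**(4 - n) * sp.gamma(1 - n/2) * (m**2)**(n/2 - 1) / (4*sp.pi)**(n/2)

series = sp.series(expr, eps, 0, 1).removeO().expand()
print(series)  # pole proportional to m^2/eps plus a finite, mu-dependent part
```

The pole term $-\frac{2}{4-n}$ corresponds to $-1/\varepsilon$ here, and the finite part contains the $\gamma$, $\ln(4\pi)$ and $\ln(m^2/\mu^2)$ pieces that the subtraction schemes treat differently.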
\section{The operator product expansion (OPE) \label{OPE}}
This section provides a basic introduction to the operator product expansion. The expansion is derived in two ways, first in a physical and then in a mathematical manner. After that, issues which are important in the application of the OPE are considered. Finally, the questions of how the OPE should be truncated and how it can be simplified are addressed.
\subsection{Features and their Derivation}
The purpose of this section is to establish a first overview of the OPE and topics connected to it. The most important features of this expansion are explained and derived in the following sequence.
\begin{enumerate}
\item {The Operator Identity}\\ \\
In 1969 Wilson proposed a Non-Lagrangian model of current algebra \cite{Wilson:1969zs}. The basis of his model is the
assumption that the singular part as $x\rightarrow y$ of the product A(x)B(y) of two operators is given by a sum
over local operators
\begin{eqnarray}
A(x)B(y)\rightarrow\sum_{n}C^{AB}_{n}(x-y)O_{n}(y) \label{ope}
\end{eqnarray}
where the $C^{AB}_{n}(x-y)$ are singular c-number functions and the $O_{n}$ are local operators. This relation is derived below using the
path-integral formalism. This approach gives valuable physical insight into the OPE, in particular into the separation of
long- and short-distance fluctuations. The remarkable thing about the OPE is that it is an operator relation; this means that, in
applying it to any matrix element
$\bra{\beta}A(x)B(y)\ket{\alpha}$, the same functions $C^{AB}_{n}(x-y)$ are obtained for all states $\ket{\alpha}$ and
$\ket{\beta}$.
\item{Derivation in the Operator Formalism}\\ \\
The OPE can also be derived in the operator formalism of quantum field theory. This derivation deals with the
singularities of the operator product $ A(x)B(y)$ in the limit $x\rightarrow y$. These singularities have to be separated
from each other. This operation produces a series in which the summands are sorted by the strength of their
singularities which coincides with their importance. The series derived in this scheme is identical with the OPE derived
by using the path-integral formalism.
\item{Important and Unimportant Summands}\\ \\
An expansion with infinitely many summands that cannot be resummed is useless if it cannot be truncated. Using scale arguments it can be shown
that the OPE can be truncated.\\
Dimensional analysis suggests that $C^{AB}_{n}(x-y)$ behaves for $x\rightarrow y$ like the power $d_{O}-d_{A}-d_{B}$ of
$x-y$, where $d_{i}$ is the dimension of the operator $i$ in powers of mass or momentum. Since $d_{O}$ increases as we
add more fields or derivatives to an operator $O$, the strength of the singularity of $C^{AB}_{n}(x-y)$ decreases for
operators $O$ of increasing complexity.\\
The decrease of the singularity in equation (\ref{ope}) with operators $O(y)$ of increasing complexity is the reason that
justifies the truncation of the OPE. Generally speaking the OPE is useful in drawing conclusions about the behavior of
the product A(x)B(y) for $x\rightarrow y$.\\
\item{Translation to Perturbation Theory}\\ \\
For application purposes the OPE is translated into the language of Feynman diagrams. The Wilson coefficients are
displayed as a series of graphs that contain the usual symbols as well as symbols representing the connection to the matrix elements, while the matrix elements are simply multiplied by the expression for the corresponding Wilson coefficient without any instruction on how they should be calculated. Special attention is paid to the fact that high and low frequencies have to be separated. This leads to the introduction of a renormalization point $\mu$.
\end{enumerate}
\subsection{The Operator Identity \label{pathderivation}}
In this section a generalized version of the Wilson expansion is derived; the derivation follows \cite{Weinberg:1996kr} closely. For this purpose, let us consider a Green's function for local operators $A_{1}(x_{1}),A_{2}(x_{2})$, etc. whose arguments approach a point $x$, as well as other local operators $B_{1}(y_{1}),B_{2}(y_{2})$, etc. with fixed arguments:
\begin{eqnarray}
\bra{0}T\{A_{1}(x_{1}),A_{2}(x_{2}),...B_{1}(y_{1}),B_{2}(y_{2}),...\}\ket{0}\nonumber\\=\int\left[\Pi_{l,z} d\phi_{l}(z)\right]a_{1}(x_{1})a_{2}(x_{2})...b_{1}(y_{1})b_{2}(y_{2})...exp(iI[\phi]) \label{opepath}
\end{eqnarray}
where the lower-case letters a and b indicate replacement of the field operators, the A's and B's, with c-number fields $\phi$. The operators $A_{1}(x_{1}),A_{2}(x_{2})$, etc. form the operator product which should be expanded, while the operators $B_{1}(y_{1}),B_{2}(y_{2})$, etc. represent external states. The c-number fields $\phi$ are the field operators of the considered theory.\\
Now surround the point x with a sphere B(R) of radius R which is much larger than the separations among the $x,x_{1},x_{2}$, etc. but much smaller than the separations between $x,y_{1},y_{2}$, etc. (see figure \ref{partitioningR2}). Since the action is local, it may be written as
\begin{eqnarray}
I=\int_{z\in B(R)} d^4z~\mathscr{L}(z)+\int_{z\not\in B(R)} d^4z~\mathscr{L}(z).
\end{eqnarray}
Equation (\ref{opepath}) can be put in the form
\begin{eqnarray}
\bra{0}T\{A_{1}(x_{1}),A_{2}(x_{2}),...B_{1}(y_{1}),B_{2}(y_{2}),...\}\ket{0}=\nonumber\\\int\left[\Pi_{l,z\not\in B(R)} d\phi_{l}(z)\right]b_{1}(y_{1})b_{2}(y_{2})...exp\left(i\int_{z\not\in B(R)}d^4 z\mathscr{L}(z)\right)\nonumber\\ \times\int\left[\Pi_{l,z\in B(R)} d\phi_{l}(z)\right]a_{1}(x_{1})a_{2}(x_{2})...exp\left(i\int_{z\in B(R)}d^4 z\mathscr{L}(z)\right) \label{opesep}
\end{eqnarray}
in which the only influence that the fields inside and outside the sphere have on each other is given by the boundary condition at the surface of the sphere.
\begin{figure}[htbp]
\begin{center}
\begin{picture}(0,0)%
\includegraphics{ball01.eps}%
\end{picture}%
\setlength{\unitlength}{4144sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\begin{picture}(2777,2870)(2571,-4058)
\put(3923,-1296){$\vec{e}_{2}$}%
\put(4348,-2413){$R$}%
\put(4109,-2440){$x$}%
\put(5333,-2786){$\vec{e}_{1}$}%
\end{picture}%
\caption{Partitioning of $\mathds{R}^2$ into a ball of radius $R$ around the point $x$ and the remainder of the space. This picture has to be treated with care because in two-dimensional Minkowski space the set of points at fixed invariant distance is not a Euclidean sphere. This point is elaborated in section (\ref{wilsoncoefficient}).\label{partitioningR2}}
\end{center}
\end{figure}
The two fields have to be connected continuously to each other. Thus, the boundary conditions are that the field and its derivatives are continuous on the surface of the sphere; the fields inside and outside the sphere have the same value at each point $u$ on the sphere's surface. Apart from this, the dynamics of the fields are completely independent from each other. They could even be described by two different theories. Hence the amplitude has been split into two domains.\\
This fact is going to be exploited to simplify the separation that was just performed. The integral over the fields inside the sphere is fully determined by the values and derivatives of the fields on the surface of the sphere, which in turn may be expressed in terms of the fields and their derivatives extrapolated from outside the sphere to the interior point $x$. Hence, after this conversion the integral over the sphere will not depend on specific coordinates of the sphere's surface but just on its radius $R$.\\
\begin{figure}[htbp]
\begin{center}
\begin{picture}(0,0)%
\includegraphics{ball02.eps}%
\end{picture}%
\setlength{\unitlength}{4144sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\begin{picture}(3070,3142)(1268,-3339)
\put(2777,-313){$\vec{e}_{2}$}%
\put(4323,-1946){$\vec{e}_{1}$}%
\end{picture}%
\caption{A simple example of a field that can be split up as shown in figure (\ref{partitioningR2}). The field is constant outside the sphere but it has a gradient inside the sphere.\label{boundarycondition01}}
\end{center}
\end{figure}
In the simple example shown in figure (\ref{boundarycondition01}) the extrapolation is quite easy. The field is constant outside the sphere, so the extrapolation to the point $x$ is the same constant. This constant is precisely the boundary condition on the sphere's surface; in conclusion, the boundary conditions stay the same.\\
The integral over the fields inside the sphere contains the part of the amplitude where short distances are involved while the integral over the fields outside the sphere contains the part of the amplitude where long distances are involved. In addition to this separation the two domains do not share any variables.\\
If we express this integral as a series in products of c-number fields $\phi$ and their derivatives extrapolated to $x$, with one local combination $o_{n}(x)$ per term, the coefficients can only be functions $C^{A_{1},A_{2},...}_{n}(x_{1}-x,x_{2}-x,...)$ of the coordinate differences and $R$. Since the points $y_{1}$, $y_{2}$, etc. are all far outside the sphere, the sphere has no effect in the limit $R\ll \abs{x-y_{i}}$, so in this limit equation (\ref{opesep}) becomes
\begin{eqnarray}
\bra{0}T\{A_{1}(x_{1}),A_{2}(x_{2}),...B_{1}(y_{1}),B_{2}(y_{2}),...\}\ket{0}\rightarrow\nonumber\\
\int\left[\Pi_{l,z} d\phi_{l}(z)\right]b_{1}(y_{1})b_{2}(y_{2})...exp\left(i\int d^4 z\mathscr{L}(z)\right)\nonumber\\ \times
\sum_{n}C^{A_{1},A_{2},...}_{n}(x_{1}-x,x_{2}-x,...)o_{n}(x)\nonumber\\
=\sum_{n}C^{A_{1},A_{2},...}_{n}(x_{1}-x,x_{2}-x,...)\mele{T\{O_{n}(x),B_{1}(y_{1}),B_{2}(y_{2}),...\}}
\label{opelim}
\end{eqnarray}
for $x_{1}$, $x_{2}$, etc. all approaching $x$, where $O_{n}(x)$ is the quantum-mechanical Heisenberg-picture operator corresponding to $o_{n}(x)$. The operators that occur in this expression depend solely on $x$. Therefore they are called local operators; a non-local operator would depend on $x$ and further coordinates.\\
In comparison with (\ref{opesep}) the integral over the fields inside the sphere has changed drastically. It has separated into summands, each of which factorizes into a coefficient function and a term containing the c-number fields and their derivatives, while the integral over the fields outside the sphere stays nearly the same. The operator $O_{n}$ that stems from the integral over the fields inside the sphere connects the long- and short-distance parts of the amplitude.\\
Fourier transforming (\ref{opelim}) with respect to the $y$ variables and multiplying by appropriate coefficient functions yields
\begin{eqnarray}
\bra{\beta}T\{A_{1}(x_{1}),A_{2}(x_{2}),...\}\ket{\alpha}\rightarrow\sum_{n}C^{A_{1},A_{2},...}_{n}(x_{1}-x,x_{2}-x,...)\bra{\beta}O_{n}(x)\ket{\alpha}
\end{eqnarray}
for arbitrary states $\ket{\alpha}$ and $\bra{\beta}$; the states of course correspond to the $B$ operators. Because this holds for arbitrary states, it is the operator product expansion in a generalized version:
\begin{eqnarray}
T\{A_{1}(x_{1}),A_{2}(x_{2}),...\}\rightarrow\sum_{n}C^{A_{1},A_{2},...}_{n}(x_{1}-x,x_{2}-x,...)O_{n}(x).
\end{eqnarray}
It is very important to understand that the OPE arises only in the limit $R\ll \abs{x-y_{i}}$. This is a special and not the general case. That means the applications of the OPE are limited to matrix elements with a distinct separation between the distances $\abs{x-x_{i}}$ and $\abs{x-y_{i}}$ of the form
\begin{eqnarray}
\abs{x-x_{i}}\ll\abs{x-y_{i}}.
\end{eqnarray}
\subsection{Derivation in the operator formalism \label{operatorderivation}}
The derivation in section \ref{pathderivation} provides a very intuitive introduction to the OPE. Many features of the expansion can simply be read off the proof, and some of them were already stated. Still missing is the scheme that orders the sum appearing in the OPE. Also, the analytic properties of the coefficient functions have not been analyzed yet. This is done in the present section via an alternative derivation of the OPE, which follows \cite{brandeis} closely.\\
A product of two operators is considered
\begin{eqnarray}
P(x_{\mu},\xi_{\mu})=T\{A_{1}(x_{\mu}+\xi_{\mu}),A_{2}(x_{\mu}-\xi_{\mu})\} \label{productdef}.
\end{eqnarray}
The vector $\xi_{\mu}$ is split into spherical coordinates:
\begin{eqnarray}
\rho=\sqrt{\xi_{\mu}\xi^{\mu}},~~~\eta_{\mu}=\frac{\xi_{\mu}}{\rho}.
\end{eqnarray}
Henceforth $P$ is considered as a function of $x_{\mu},\eta_{\mu}$ and $\rho$ :
\begin{eqnarray}
P(x_{\mu},\xi_{\mu})=P(x_{\mu},\eta_{\mu},\rho).
\end{eqnarray}
If $P$ diverges for $\rho\rightarrow 0$ we define an operator
\begin{eqnarray}
O_{1}(x_{\mu},\eta_{\mu})=\lim_{\rho\rightarrow 0}\frac{P(x_{\mu},\eta_{\mu},\rho)}{C_{1}(\rho)} \label{operatordef}
\end{eqnarray}
in the weak limit, i.e. the limit is performed for the matrix elements and not for the norm of the operator. This limit should be finite. Hence, $C_{1}(\rho)$ has to compensate the divergence of
$P(x_{\mu},\eta_{\mu},\rho)$ at $\rho=0$ and therefore has to be singular at $\rho=0$
itself. $C_{1}(\rho)$ is a suitably chosen function whose singularity is restricted by the
condition that the limit be finite and different from zero. Under the assumption that there are some matrix elements
of $P$
\begin{eqnarray}
\bra{\Phi'}P(x'_{\mu},\eta'_{\mu},\rho)\ket{\Psi'} \label{planewavederivative}
\end{eqnarray}
which near $\rho=0$ are as singular as or more singular than all other matrix elements of $P$, a
suitable $C_{1}$ can be found. The mathematical formulation of this requirement is that
\begin{eqnarray}
\lim_{\rho\longrightarrow
0}\frac{\bra{\Phi}P(x_{\mu},\eta_{\mu},\rho)\ket{\Psi}}{\bra{\Phi'}P(x'_{\mu},\eta'_{\mu},\rho)\ket{\Psi'}}
\end{eqnarray}
exists for any $\Phi,\Psi,x_{\mu}$ and $\eta_{\mu}$. Choosing any of these most singular matrix elements,
equation (\ref{planewavederivative}), as the function $C_{1}$ the operator (\ref{operatordef}) should have
finite and non-vanishing matrix elements.\\
By (\ref{productdef}) and (\ref{operatordef}) the operator $O_{1}$ is local in the sense that
\begin{eqnarray}
\left[ A_{j}(x),O_{1}(y,\eta)\right]_{\pm}=0 \nonumber\\
\left[ O_{1}(x,\eta),O_{1}(y,\eta')\right]_{\pm}=0
\label{localcond}
\end{eqnarray}
for $(x-y)^2<0$ with commutators or anticommutators taken appropriately.\\
In many cases $O_{1}$ turns out to be a multiple of the identity. In perturbation theory it seems to
be a general rule that the most singular part of a matrix element is given by\footnote{There may be several equally singular matrix elements. This possibility is not considered here.}
\begin{eqnarray}
\bra{\Phi}P(x,\eta,\rho)\ket{\Psi}\approx\bra{0}P(x,\eta,\rho)\ket{0}\mele{\Phi|\Psi}
\end{eqnarray}
provided the vacuum expectation value does not vanish
\begin{eqnarray}
\bra{0}P(x,\eta,\rho)\ket{0}\not = 0.
\end{eqnarray}
This leads to the determination of $O_{1}$
\begin{eqnarray}
\lim_{\rho\longrightarrow
0}\frac{\bra{\Phi}P(x,\eta,\rho)\ket{\Psi}}{\bra{0}P(x',\eta',\rho)\ket{0}}=\mele{\Phi|\Psi}
\end{eqnarray}
or
\begin{eqnarray}
O_{1}=\mathds{1}.
\end{eqnarray}
If the vacuum expectation value vanishes
\begin{eqnarray}
\mele{P(x,\eta,\rho)}=0
\end{eqnarray}
(\ref{operatordef}) may lead to a suitable definition, but we will see that in general there are more
local operators which should be associated with $P$.\\
We now turn to the second problem of analyzing the singularities of the operator product. To this end
the remainder is introduced
\begin{eqnarray}
r_{1}(x,\eta,\rho)=\frac{P(x,\eta,\rho)}{C_{1}(\rho)}-O_{1}(x,\eta) \label{remainder01}
\end{eqnarray}
which vanishes in the limit
\begin{eqnarray}
\lim_{\rho\rightarrow 0}r_{1}(x,\eta,\rho)=0
\end{eqnarray}
because of (\ref{operatordef}). In order to get some information about the singularities of $P$ near
$\xi=0$ we multiply (\ref{remainder01}) by $C_{1}(\rho)$ and solve for $P$
\begin{eqnarray}
P(x,\eta,\rho)=C_{1}(\rho)O_{1}(x,\eta)+P_{2}(x,\eta,\rho)\label{a}
\end{eqnarray}
with
\begin{eqnarray}
P_{2}(x,\eta,\rho)=C_{1}(\rho)r_{1}(x,\eta,\rho)\label{P2}.
\end{eqnarray}
Here $C_{1}\rightarrow\infty$ and $r_{1}\rightarrow 0$ for $\rho\rightarrow 0$. Hence nothing can be
said in general about the limit of $P_{2}$. If
\begin{eqnarray}
\lim_{\rho\rightarrow 0}P_{2}=0
\end{eqnarray}
then (\ref{a}) already gives complete information on the singularities of $P$ near $\xi=0$:
\begin{eqnarray}
P(x,\xi)=C_{1}\left(\rho\right)O_{1}\left(x,\eta\right) +P_{2}(x,\eta,\rho)
\end{eqnarray}
with $\lim_{\rho\rightarrow 0}P_{2}\left(x,\eta,\rho\right)=0$ for $\eta$ constant. Hence the term
$C_{1}O_{1}$ carries the total singularity of $P$.\\ If $P_{2}$ diverges the product $P$ has
additional singularities near $\xi=0$. In that case it is possible to repeat the procedure for $P_{2}$ that was just applied to $P$, with
\begin{eqnarray}
P_{2}(x,\eta,\rho)= P(x,\eta,\rho)-C_{1}(\rho)O_{1}(x,\eta).
\end{eqnarray}
This time the most singular matrix element $C_{2}$ of $P_{2}$ is chosen to form
\begin{eqnarray}
O_{2}(x,\eta)=\lim_{\rho\rightarrow 0}\frac{P_{2}(x,\eta,\rho)}{C_{2}(\rho)}.
\end{eqnarray}
Hence, more information on the singularities of $P$ is acquired. Introducing the remainder
\begin{eqnarray}
r_{2}(x,\eta,\rho)=\frac{P_{2}(x,\eta,\rho)}{C_{2}(\rho)}-O_{2}(x,\eta)
\end{eqnarray}
with
\begin{eqnarray}
\lim_{\rho\rightarrow 0}r_{2}(x,\eta,\rho)=0.
\end{eqnarray}
An operator analogous to $P_{2}$ can be introduced,
\begin{eqnarray}
P_{3}=r_{2}C_{2}.
\end{eqnarray}
Altogether one obtains
\begin{eqnarray}
P_{2}=C_{2}O_{2}+P_{3} \label{P2new}.
\end{eqnarray}
Inserting (\ref{P2new}) into (\ref{a}) yields
\begin{eqnarray}
P(x,\eta,\rho)=C_{1}(\rho)O_{1}(x,\eta)+C_{2}(\rho)O_{2}(x,\eta)+P_{3}(x,\eta,\rho).
\end{eqnarray}
$C_{2}$ is less singular than $C_{1}$
\begin{eqnarray}
\lim_{\rho\rightarrow 0}\frac{C_{2}(\rho)}{C_{1}(\rho)} =0 \label{singord}.
\end{eqnarray}
This procedure has refined our analysis of the singularities of $P$. Equation (\ref{singord}) follows from
\begin{eqnarray}
\lim_{\rho\rightarrow 0}\frac{C_{2}(\rho)}{C_{1}(\rho)}=\lim_{\rho\rightarrow 0}\frac{\bra{\phi}P_{2}(x,\eta,\rho)\ket{\psi}/C_{1}(\rho)}{\bra{\phi}P_{2}(x,\eta,\rho)\ket{\psi}/C_{2}(\rho)}=\lim_{\rho\rightarrow 0}\frac{\bra{\phi}r_{1}(x,\eta,\rho)\ket{\psi}}{\bra{\phi}P_{2}(x,\eta,\rho)\ket{\psi}/C_{2}(\rho)}=0
\end{eqnarray}
because $\lim_{\rho\rightarrow 0}r_{1}(x,\eta,\rho)=0$ by (\ref{P2}) and $\lim_{\rho\rightarrow 0}\frac{P_{2}(x,\eta,\rho)}{C_{2}(\rho)}=O_{2}(x,\eta)\not =0$ for at least one matrix element.\\
This process can be iterated until a $P_{n}$ is derived which vanishes at $\xi=0$. It is an assumption that the process terminates after a finite number of steps. The result of the iteration is the expansion
\begin{eqnarray}
P(x,\eta,\rho)=C_{1}(\rho)O_{1}(x,\eta)+...+C_{n}(\rho)O_{n}(x,\eta)+R(x,\eta,\rho)
\end{eqnarray}
with the following properties
\begin{eqnarray}
\lim_{\rho\rightarrow 0}\frac{C_{i+1}(\rho)}{C_{i}(\rho)}=0 \label{feature01}
\end{eqnarray}
\begin{eqnarray}
\lim_{\rho\rightarrow 0}C_{n}(\rho)=\infty~or~\lim_{\rho\rightarrow 0}C_{n}(\rho)=c\not =0
\end{eqnarray}
\begin{eqnarray}
\lim_{\rho\rightarrow 0}R(x,\eta,\rho)=0\label{feature03}.
\end{eqnarray}
The operators $O_{j}$ are given by
\begin{eqnarray}
O_{j}=\lim_{\rho\rightarrow 0}\frac{P-\sum_{\alpha=1}^{j-1}C_{\alpha}O_{\alpha}}{C_{j}}
\end{eqnarray}
and satisfy the locality conditions (\ref{localcond}).\\
The choice of the matrix elements that are used as the $C_{i}$ is not unique; in turn, the operators $O_{i}$ are not unique. This arbitrariness can be reduced by requiring that the operators $O_{j}$ be linearly independent. The transformation that has to be performed to achieve this can always be arranged without changing the general features of the functions $C_{j}$. An expansion with linearly independent operators $O_{i}$ and coefficient functions that have the features listed in (\ref{feature01})-(\ref{feature03}) is called a standard expansion.\\
This alternative derivation of the OPE gives insight into the analytic properties of the coefficient functions. Obviously, the analysis of the singularities of the operator product $P(x,\eta,\rho)$ at $\rho=0$ provides an alternative approach to the OPE, in addition to the separation of the amplitude into short- and long-distance fluctuations.
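The iterative construction above can be mimicked in a toy model in which the "matrix elements" are ordinary functions of $\rho$. The sketch below (sympy, with an invented test function playing the role of the operator product) peels off the leading small-$\rho$ singularities one by one, producing the $C_{i}$, the "operators" (here just numbers), and a remainder that vanishes at $\rho=0$:

```python
import sympy as sp

rho = sp.symbols('rho', positive=True)

def peel(P, n):
    """Peel off the n leading small-rho behaviours of P, mimicking the
    construction P = C1*O1 + C2*O2 + ... + R of a standard expansion."""
    pieces = []
    for _ in range(n):
        lead = P.as_leading_term(rho)          # C_i(rho) * O_i in one piece
        O_i, C_i = lead.as_independent(rho)    # split coefficient / rho-dependence
        pieces.append((C_i, O_i))
        P = sp.expand(P - lead)                # the remainder P_{i+1}
    return pieces, P

P = 1/rho**2 + 2/rho + 3 + 5*rho               # invented toy "operator product"
pieces, R = peel(P, 3)
print(pieces, R)  # C_i: rho**-2, 1/rho, 1 with O_i: 1, 2, 3; remainder 5*rho
```

Each successive $C_{i}$ is less singular than the previous one, and the remainder vanishes as $\rho\rightarrow 0$, exactly the features (\ref{feature01})-(\ref{feature03}).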
\subsection{Important and unimportant summands \label{importantunimportant}}
The operator products $P$ under investigation have a certain dimension $d$. Wilson's assumption about the OPE leads to the statement that only operators $O_{n}$ of dimension $\leq d$ are relevant in the OPE. This statement is derived by scale arguments; the chain of arguments, following \cite{brandeis}, is shown below. The starting point is the assumption that a scale invariant theory is investigated. This means the theory is invariant under the transformation:
\begin{eqnarray}
x\longrightarrow sx \label{invariance}.
\end{eqnarray}
In the quantum mechanical sense this means that there is a family of unitary transformations $U(s)$ with the property:
\begin{eqnarray}
U^{-1}(s)\phi(x)U(s)=s^{d(\phi)}\phi(sx) \label{invariance2}
\end{eqnarray}
for any local field operator of the theory. The real number $d(\phi)$ is called the dimension of the operator $\phi$. Using this transformation law Wilson determined the singularities of the coefficients in the following way. Applying the transformation $U(s)$ to the expansion
\begin{eqnarray}
A_{1}(x+\xi)A_{2}(x-\xi)=\sum_{j=1}^{\infty}C_{j}(\xi)O_{j}(x)
\end{eqnarray}
we get
\begin{eqnarray}
U^{-1}(s)A_{1}(x+\xi)A_{2}(x-\xi)U(s)=\sum_{j=1}^{\infty}U^{-1}(s)C_{j}(\xi)O_{j}(x)U(s) \nonumber\\
U^{-1}(s)A_{1}(x+\xi)A_{2}(x-\xi)U(s)=\sum_{j=1}^{\infty}C_{j}(\xi)U^{-1}(s)O_{j}(x)U(s) \nonumber\\
s^{d(A_{1})+d(A_{2})}A_{1}(sx+s\xi)A_{2}(sx-s\xi)=\sum_{j=1}^{\infty}C_{j}(\xi)s^{d(O_{j})}O_{j}(sx).
\end{eqnarray}
Expanding $A_{1}A_{2}$ on the left hand side yields
\begin{eqnarray}
s^{d(A_{1})+d(A_{2})}\sum_{j}C_{j}(s\xi)O_{j}(sx)=\sum_{j=1}^{\infty}C_{j}(\xi)s^{d(O_{j})}O_{j}(sx).
\end{eqnarray}
Since the $O_{j}$ are linearly independent the following equations are valid
\begin{eqnarray}
C_{j}(\xi)=s^{d(A_{1})+d(A_{2})-d(O_{j})}C_{j}(s\xi)
\end{eqnarray}
as the scaling law of the coefficients. The exponent of $s$ is called the dimension of $C_{j}$ and we have the result that in each term of the expansion the dimension of $C_{j}$ and the dimension of $O_{j}$ must add up to the total dimension of the left hand side
\begin{eqnarray}
d(C_{j})+d(O_{j})=d(A_{1})+d(A_{2})=d(A_{1}A_{2}).
\end{eqnarray}
The dimension of $C_{j}$ indicates the behavior for $\xi\rightarrow 0$:
\begin{eqnarray}
C_{j}(s\xi)=s^{-d(C_{j})}C_{j}(\xi)
\end{eqnarray}
or
\begin{eqnarray}
C_{j}(x)=\abs{x}^{-d(C_{j})}C_{j}(\frac{x}{\abs{x}}).
\end{eqnarray}
As a consequence the $C_{j}(x)$ are singular or $\not =0$ in the limit $x\rightarrow 0$ only if $d(O_{j})\leq d(A_{1}A_{2})$. Hence the singular and non-vanishing terms are provided by operators $O_{j}$ of dimension less or equal to the dimension of the product $A_{1}A_{2}$.\\
All the terms belonging to operators with $d(O_{j})> d(A_{1}A_{2})$ vanish in the limit $x\rightarrow 0$. The conclusion is that all operators with $d(O_{j})> d(A_{1}A_{2})$ are not relevant for the OPE.\\
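The truncation rule amounts to simple bookkeeping. The sketch below (Python; the operator list and the assignment $d(\phi)=1$ in four dimensions are chosen purely for illustration) filters out the operators that may come with singular or non-vanishing coefficients in the expansion of the product $\phi(x)\phi(0)$:

```python
# In d = 4 a scalar field phi has mass dimension 1 and each derivative
# adds one unit, so d(phi*phi) = 2 and only operators of dimension <= 2
# carry singular or non-vanishing Wilson coefficients.
def dimension(n_fields, n_derivs, d_phi=1):
    return n_fields * d_phi + n_derivs

product_dim = dimension(2, 0)          # d(phi*phi) = 2

candidates = {                          # operator -> (fields, derivatives)
    "1":           (0, 0),
    "phi^2":       (2, 0),
    "phi d^2 phi": (2, 2),
    "phi^4":       (4, 0),
}

relevant = [op for op, (f, d) in candidates.items()
            if dimension(f, d) <= product_dim]
print(relevant)   # ['1', 'phi^2']
```

The surviving set is exactly the one used for the $\phi\phi$ expansion in the scalar theory later on; in a renormalized theory the power-law counting is modified only by logarithms, as discussed next.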
The simple power-counting argument above is modified by renormalization effects; the expansion (\ref{ope}) must be
formulated in terms of operators renormalized at some scale $\mu$, and then $\mu$ appears along with $x-y$ in the
coefficient function $C^{AB}_{n}(x-y)$. An important example is the class of asymptotically free field theories, where $C^{AB}_{n}(x-y)$ behaves like the power $d_{O}-d_{A}-d_{B}$ of $x-y$ suggested by dimensional analysis only up to powers of $\ln(x-y)^2$. \\
The corresponding statement in momentum space is that for $k\rightarrow\infty$,
\begin{eqnarray}
\int d^4 xe^{-ik^{\mu}x_{\mu}}A(x)B(0)\rightarrow\sum_{n}V^{AB}_{n}(k)O_{n}(0)
\end{eqnarray}
and correspondingly
\begin{eqnarray}
\int d^4 xe^{-ik^{\mu}x_{\mu}}T\{A(x)B(0)\}\rightarrow\sum_{n}C^{AB}_{n}(k)O_{n}(0)
\end{eqnarray}
where $V^{AB}_{n}(k)$ and $C^{AB}_{n}(k)$ are functions of $k^{\mu}$ that for large $k^2$ decrease more and more rapidly for more and more complicated terms in the series.\\
A short remark concerning the validity of the arguments given above has to be made. The basis of all arguments has been equations (\ref{invariance}) and (\ref{invariance2}). Unfortunately, dimensional arguments do not work in every theory. In all realistic theories scale invariance is broken; for example, any mass term breaks it. Furthermore, it can happen that the dimensionality $d$ cannot be defined in a useful way. In such theories scale arguments cannot be used to determine whether the OPE can be truncated.\\
However, dimensional arguments work very well in perturbation theory. For the exact solution the situation may be quite different. This can be demonstrated in the Thirring model: there Wilson's hypothesis holds in perturbation theory with the conventional dimensions $d(A_{1}A_{2})=d(A_{1})+d(A_{2})$, but this is no longer true for the exact solution \cite{Wilson:1970pq}. Since only perturbation theory is used in the remainder of this thesis, the arguments above can be applied.\\
\subsection{Translation to perturbation theory \label{transpert}}
The preceding sections about the OPE have been very general. Calculations in quantum field theory are mainly done via perturbative techniques. Therefore, this section introduces the perturbative technique to calculate the OPE of a Green's function.\\
The first step is to define the operators in the OPE. The operators are given by combinations of the field operators appearing in the Lagrangian of the theory. In standard expansions (see section \ref{operatorderivation}) the operators are linearly independent. That means the operators in standard expansions are given by all linearly independent combinations of the field operators of the theory under consideration. In this and the following sections only standard expansions are used. These arguments fix the operators.\\
The second step is the calculation of the coefficients in Wilson's expansion, the so called Wilson coefficients. The recipe was already given in section \ref{operatorderivation}. There the Wilson coefficients have been identified with the coefficients of matrix elements in the operator product expansion. The singularities of these coefficient functions determined the operator they belonged to. This method has two weaknesses. The Wilson coefficients have not been determined uniquely. There can be several different Wilson coefficients which have singularities of the same strength. In perturbation theory the strength of the singularities of these Wilson coefficients is often not determinable. An alternative method is urgently needed.\\
The key observation that provides this method is that the OPE is an operator relation and the operators $O_{n}$ are already known, because a standard expansion is used. If an OPE is sandwiched between two external states
\begin{eqnarray}
\bra{\alpha}P\ket{\beta}=\bra{\alpha}\sum_{n}C_{n}O_{n}\ket{\beta}=\sum_{n}\bra{\alpha}C_{n}O_{n}\ket{\beta}= \sum_{n}C_{n}\bra{\alpha}O_{n}\ket{\beta}
\end{eqnarray}
the Wilson coefficient $C_{n}$ is unaffected by the external states. This is the feature that is exploited in the calculation of the $C_{n}$. There are states that filter out certain coefficients when the OPE is sandwiched between them: the matrix elements of the operators $O_{n}$ with these states are all zero except for one or a few of them. In this section the ideal case in which just one matrix element is non-zero is considered. Suppose the matrix element of $O_{j}$ is non-zero:
\begin{eqnarray}
\bra{\alpha}P\ket{\beta}=C_{j}\bra{\alpha}O_{j}\ket{\beta} \Leftrightarrow C_{j}=\frac{\bra{\alpha}P\ket{\beta}}{\bra{\alpha}O_{j}\ket{\beta}}.
\end{eqnarray}
This is a perturbatively calculable expression, but there is a restriction that requires a modification of the calculations usually made in perturbation theory. In the first derivation of the OPE, section \ref{pathderivation}, it was shown that the coefficients $C_{n}$ stem from a path-integral that is restricted to a ball of radius $R$ in four-dimensional spacetime. Therefore, only short-distance fluctuations play a role in the calculation of the coefficients. This is the point that allows the Wilson coefficients in many theories to be calculated via perturbative techniques. The dynamics which takes place at short distances happens at large momenta; in asymptotically free theories this is the domain where perturbative calculations are valid. The consequence of this observation is that no small-momenta contributions may occur in the Wilson coefficients. The scale $\mu$ that separates high and low momenta is defined by additional conditions.\\
The concrete rules that follow are
\begin{enumerate}
\item The external momenta $q^{\mu}_{i}$ must be larger than the scale $\mu$, i.e. $q_{i}^2>\mu^2$.
\item The virtual momenta in the loops of the involved diagrams must be larger than $\mu$.
\end{enumerate}
Otherwise the OPE is not applicable to the problem. The method just introduced is called the plane wave method. With these rules all Wilson coefficients are calculable, although the calculations are sometimes very difficult. There exist alternative methods, but the plane wave method is very valuable because it provides a lot of physical insight.
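Schematically, the plane-wave method amounts to solving a linear system: sandwiching the truncated OPE between suitably chosen states gives one linear equation per state for the unknown coefficients, and states in which only one $\bra{\alpha}O_{j}\ket{\alpha}$ is non-zero project out the corresponding $C_{j}$ directly. All numbers in the sketch below are invented placeholders, not the result of a field-theory computation:

```python
import sympy as sp

C1, C2 = sp.symbols('C1 C2')   # unknown Wilson coefficients

# Rows: external states; columns: <alpha_i|O_1|alpha_i>, <alpha_i|O_2|alpha_i>.
op_elements = sp.Matrix([[1, 0],    # "vacuum-like" state: only <1> survives
                         [1, 4]])   # state with a non-zero <O_2> as well
lhs = sp.Matrix([3, 11])            # placeholder values of <alpha_i|P|alpha_i>

# The same C_n multiply the operator matrix elements for every state,
# so the coefficients follow from the linear system.
sol = sp.solve(list(op_elements * sp.Matrix([C1, C2]) - lhs), [C1, C2])
print(sol)   # C1 = 3, C2 = 2
```

The first row illustrates the filtering: a state in which only the identity has a non-vanishing matrix element fixes $C_{1}$ on its own, after which the second state determines $C_{2}$.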
\subsection{OPE in the real scalar $\phi^4$ theory \label{unbroken}}
In the last sections the groundwork concerning the OPE was done. This work is now substantiated in quantitative calculations. Additionally, there appear complications like the question of the application in theories with broken symmetry. These points are treated in this section. The calculations should be as easy as possible; for this reason the simplest theory is used, the real scalar $\phi^4$ theory. The Lagrangian density is given by
\begin{eqnarray}
\mathscr{L}=\frac{1}{2}\left(\partial_{\mu}\phi\right)^2-\frac{1}{2}m_{0}^2\phi^2-\frac{\lambda_{0}}{4!}\phi^4
\label{phi4lagrange}
\end{eqnarray}
where the dimension of $\phi$ is energy. This theory has an intrinsic symmetry, a simple reflection symmetry:
\begin{eqnarray}
\mathscr{L}\left(\phi\right) =\mathscr{L}\left(-\phi\right).
\end{eqnarray}
The Feynman rules are given in table \ref{erstetabelle}.
\begin{table}[htbp]
\begin{center}
\begin{tabular}{||c||}
\hline
vertex : $-i\lambda_{0}$\\
propagator : $\frac{i}{p^2-m_{0}^2+i\epsilon}$\\
\hline
\end{tabular}
\end{center}
\caption{Feynman rules of the $\phi^4$ theory in the phase where the symmetry is not broken.
The vertex is a four-point vertex.\label{erstetabelle}}
\end{table}
As an example the operator product $\phi\phi$ is expanded in a standard expansion. The set of linearly independent operators is given by
\begin{eqnarray}
\left\{\phi\phi,\phi\phi\phi\phi,\phi\phi\phi\phi\phi\phi,...\right\}.
\end{eqnarray}
Operators with an odd number of $\phi$'s can be excluded because they do not respect the reflection symmetry. The important terms in the OPE are those with $d\leq 2$ (see section \ref{importantunimportant}). With this information the OPE is determined to be
\begin{eqnarray}
T\left\{\phi(q),\phi(q)\right\}=C_{1}\mathds{1}+C_{2}\phi(0)\phi(0) \label{phi2ope}.
\end{eqnarray}
This is the OPE for the propagator in the interacting $\phi^4$ theory. The OPE is derived to first order in the coupling constant $\lambda_{0}$. It is convenient to start with the left-hand side of the OPE (\ref{phi2ope}). The vacuum expectation value is given by
\begin{eqnarray}
D(q)=\bra{0}T\left\{\phi(q),\phi(q)\right\}\ket{0}=-i\int d^4xe^{iq^{\mu}x_{\mu}}\bra{0}T\{\phi(x)\phi(0)\}\ket{0}\nonumber\\=
\parbox[c]{2cm}{
\begin{fmffile}{phifreeprop}
\begin{fmfgraph*}(20,20)
\fmfleft{i}
\fmfright{o}
\fmf{plain}{i,o}
\end{fmfgraph*}
\end{fmffile}}+
\parbox[c]{2cm}{
\begin{fmffile}{phi4tadpole02}
\begin{fmfgraph*}(20,20)
\fmfleft{i}
\fmfright{o}
\fmf{plain}{i,v,v,o}
\fmfdot{v}
\end{fmfgraph*}
\end{fmffile}}
=\frac{1}{q^2-m^2_{0}}\left[1+\frac{\lambda_{0}}{32\pi^2}\frac{-m^2_{0}\ln\left(\frac{M^2}{m^2_{0}}\right)}{q^2-m^2_{0}}\right] \label{zz08}
\end{eqnarray}
(see section \ref{reguandreno}). This Green's function is the sum of the two terms in the OPE (\ref{phi2ope}). The OPE can be obtained with the rules given in section \ref{transpert}. First, the requirement that $q$ is large is exploited. The propagators are expanded in $\frac{m_{0}}{q}$:
\begin{eqnarray}
D(q^2)=\frac{i}{q^2-m_{0}^2}=\frac{i}{q^2}\cdot\frac{1}{1-\left(\frac{m_{0}}{q}\right)^2}=\underbrace{i\left(\frac{1}{q^2}+\frac{m_{0}^2}{q^4}+\frac{m_{0}^4}{q^6}+\dots\right)}_{\text{geometric series}}.
\end{eqnarray}
In the propagator, terms up to order $\frac{1}{q^4}$ are kept in order to be consistent with the second term in (\ref{zz08}):
\begin{eqnarray}
D(q^2)=\frac{1}{q^2}+\frac{m_{0}^2}{q^4}+\frac{\lambda_{0}}{32\pi^2}\frac{-m^2_{0}\ln\left(\frac{M^2}{m^2_{0}}\right)}{q^4}=\frac{1}{q^2}+\frac{m_{0}^2}{q^4}\left[ 1-\frac{\lambda_{0}}{32\pi^2}\ln\left(\frac{M^2}{m^2_{0}}\right)\right] \label{propagator}.
\end{eqnarray}
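The large-$q$ expansion of the free propagator used above can be cross-checked with a short symbolic computation. The following sketch is illustrative only and not part of the derivation; it uses the computer algebra package sympy, the variable names are ad hoc, and the overall factor $i$ is dropped since it plays no role in the expansion:

```python
import sympy as sp

q, m0 = sp.symbols('q m_0', positive=True)

# Free propagator 1/(q^2 - m_0^2) (overall factor i dropped).
D = 1 / (q**2 - m0**2)

# Expand for large q, i.e. in powers of m_0, keeping terms through m_0^4.
expansion = sp.series(D, m0, 0, 6).removeO()

# First three terms of the geometric series 1/q^2 + m_0^2/q^4 + m_0^4/q^6.
geometric = 1/q**2 + m0**2/q**4 + m0**4/q**6

assert sp.simplify(expansion - geometric) == 0
```

Truncating at order $1/q^4$, as done in the text, simply drops the $m_0^4/q^6$ term and beyond.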
Secondly, the loop integration has to be separated into a high- and a low-frequency part. This corresponds to a separation into short and long distances. Hence a scale $\mu$ is introduced which separates high and low frequencies, $\text{high frequencies}>\mu>\text{low frequencies}$:
\begin{eqnarray}
D(q^2)=\frac{1}{q^2}+\frac{m_{0}^2}{q^4}\left[ 1-\frac{\lambda_{0}}{32\pi^2}\ln\left(\frac{M^2}{\mu^2}\right)-\frac{\lambda_{0}}{32\pi^2}\ln\left(\frac{\mu^2}{m^2_{0}}\right)\right] \nonumber\\=\frac{1}{q^2}+\frac{m_{0}^2}{q^4}\left[ 1-\frac{\lambda_{0}}{32\pi^2}\ln\left(\frac{M^2}{\mu^2}\right)\right]-\frac{m_{0}^2}{q^4}\frac{\lambda_{0}}{32\pi^2}\ln\left(\frac{\mu^2}{m^2_{0}}\right) \label{opeuntidy}.
\end{eqnarray}
This is the OPE of the propagator, although the components, Wilson coefficients and operators, remain to be identified. The expansion could also have been obtained by using the plane-wave method, discussed in section \ref{transpert}, to calculate the Wilson coefficients $C_{1}$ and $C_{2}$. The plane-wave method exploits the operator nature of the OPE. If the OPE is sandwiched between appropriate states, a certain summand in the expansion is singled out. On this basis the Wilson coefficient of that summand can be calculated; the calculations can also be organized diagrammatically. In that case the Wilson coefficients are expressed as diagrams, and the analytic results corresponding to the diagrams are the Wilson coefficients. The states for $C_{1}$ would be $\bra{0}$ and $\ket{0}$, and for $C_{2}$ $\bra{p}$ and $\ket{p}$. The calculation of $C_{2}$ is simple, the corresponding diagram is given by
\begin{eqnarray}
\begin{fmffile}{operatorphiphi01}
\begin{fmfgraph*}(40,40)
\fmfpen{thick}
\fmfleft{i}
\fmfright{o}
\fmf{plain}{i,v,o}
\fmf{phantom,tag=1}{v,v}
\fmfposition
\fmfipath{p[]}
\fmfiset{p1}{vpath1(__v,__v)}
\fmfi{plain}{subpath (0,length(p1)/4) of p1}
\fmfiv{d.sh=cross,d.ang=0,d.siz=5thick}{point length(p1)/4 of p1}
\fmfi{plain}{subpath (3length(p1)/4,length(p1)) of p1}
\fmfiv{d.sh=cross,d.ang=0,d.siz=5thick}{point 3length(p1)/4 of p1}
\fmfdot{v}
\end{fmfgraph*}
\end{fmffile}
\end{eqnarray}
where the cross denotes the contact with the operator. The calculation is simple, it is just the product of two propagators and the four-point vertex. This product is taken in the limit where the external momentum is large. Hence, a Taylor expansion in $\frac{1}{q^2}$ is performed and the lowest-order terms are kept. Finally, a symmetry factor of 2 has to be taken into account. The result turns out to be $C_{2}=\frac{\lambda_{0}}{2q^4}$.\\
The matrix element $\bra{0}\phi(0)\phi(0)\ket{0}$ can be calculated via known techniques. It is just the Green's function $\bra{q}\phi(0)\phi(0)\ket{q}$ in leading order in $\lambda_{0}$ with the external propagators and the four-point vertex amputated. The amputated part is just the Wilson coefficient $C_{2}$. An example for the contractions is given by
\begin{eqnarray}
\frac{-i\lambda_{0}}{4!}\int d^4 p\wick{2-5,7-8}{1-6,3-4}{\bra{q},\phi(0),\phi(0),\phi(p),\phi(p),\phi(p),\phi(p),\ket{q}}.
\end{eqnarray}
The corresponding diagram is given by
\begin{eqnarray}
\begin{fmffile}{operatorphiphi02}
\begin{fmfgraph*}(40,20)
\fmfpen{thick}
\fmfleft{i}
\fmfright{o}
\fmf{plain}{i,v1}
\fmf{plain}{v1,o}
\fmf{plain,left,tag=1}{v1,v2}
\fmf{plain,left}{v2,v1}
\fmfv{decor.shape=circle,decor.fill=empty,decor.size=.1w,l=$\bigotimes$,label.dist=0}{v2}
\fmfdot{v1}
\fmffixedx{0}{v1,v2}
\fmffixedy{1cm}{v1,v2}
\end{fmfgraph*}
\end{fmffile}
\end{eqnarray}
where the Wilson coefficient $C_{2}$ has to be amputated. The cross symbol denotes a special vertex, the operator $\phi(0)\phi(0)$. The result is just $\frac{-m^2_{0}\ln\frac{\mu^2}{m^2_{0}}}{16\pi^2}$, the expression for the operator in lowest order. An alternative way to derive this result would be to calculate the tadpole diagram in $\phi^4$ theory with the modification that the frequencies larger than $\mu$ are cut off. Hence the third summand in (\ref{opeuntidy}) is $C_{2}\bra{0}\phi\phi\ket{0}$, and with respect to (\ref{phi2ope}) the remainder has to be $C_{1}\mathds{1}$. Thus the following identifications can now be made:
\begin{eqnarray}
m^2(\mu)=m^2_{0}\left[1-\frac{\lambda_{0}}{16\pi^2}\ln\frac{M}{\mu}\right] \label{comp1}
\end{eqnarray}
\begin{eqnarray}
C_{1}=\frac{1}{q^2}+\frac{m^2(\mu)}{q^4}
\end{eqnarray}
\begin{eqnarray}
C_{2}=\frac{\lambda_{0}}{2q^4}
\end{eqnarray}
\begin{eqnarray}
\mele{\phi^2}=\frac{-m^2_{0}\ln\frac{\mu^2}{m^2_{0}}}{16\pi^2} \label{phicut}.
\end{eqnarray}
A remarkable point is that a running mass arises in the OPE in first order in $\lambda_{0}$. The mass runs with respect to the scale $\mu$ which separates high and low frequencies. In the literature $\mu$ is referred to as the renormalization point (see \cite{Novikov:1984rf}). Without this separation into high and low frequencies a running mass does not appear to first order in $\lambda_{0}$; the mass would simply be constant. The ultraviolet cutoff $M$ and the bare mass $m_{0}$ are unobservable quantities and should not appear in the definition of the running mass. Therefore, the running mass is renormalized in order to eliminate the unobservable quantities.\\
The Wilson coefficient $C_{1}$ was calculated to first order in $\lambda_{0}$. The first-order result can be used to calculate a part of the higher-order corrections to $C_{1}$. In higher orders, diagrams appear which consist of $n$ first-order results chained one after another
\begin{eqnarray}
\parbox[c]{2cm}{
\begin{fmffile}{phifreeprop}
\begin{fmfgraph*}(20,20)
\fmfleft{i}
\fmfright{o}
\fmf{plain}{i,o}
\end{fmfgraph*}
\end{fmffile}}
+
\parbox[c]{2cm}{
\begin{fmffile}{phi4tadpole02}
\begin{fmfgraph*}(20,20)
\fmfleft{i}
\fmfright{o}
\fmf{plain}{i,v,v,o}
\fmfdot{v}
\end{fmfgraph*}
\end{fmffile}}
+
\parbox[c]{2cm}{
\begin{fmffile}{phi4tadpole03}
\begin{fmfgraph*}(20,20)
\fmfleft{i}
\fmfright{o}
\fmf{plain}{i,v1,v1,v2,v2,o}
\fmffixedx{0.8cm}{v1,v2}
\fmfdot{v1,v2}
\end{fmfgraph*}
\end{fmffile}}+~~~...~
.
\end{eqnarray}
Hence, a series analogous to the geometric series is obtained
\begin{eqnarray}
\frac{1}{q^2}+\frac{1}{q^2}m_{0}^2\left[1-\frac{\lambda_{0}}{32\pi^2}\ln\left(\frac{M^2}{\mu^2}\right)\right]\frac{1}{q^2}\nonumber\\+\frac{1}{q^2}m_{0}^2\left[1-\frac{\lambda_{0}}{32\pi^2}\ln\left(\frac{M^2}{\mu^2}\right)\right]\frac{1}{q^2}m_{0}^2\left[1-\frac{\lambda_{0}}{32\pi^2}\ln\left(\frac{M^2}{\mu^2}\right)\right]\frac{1}{q^2}+...~.
\end{eqnarray}
This series can be summed up
\begin{eqnarray}
\frac{1}{q^2}\frac{1}{1-\frac{1}{q^2}m_{0}^2\left[1-\frac{\lambda_{0}}{32\pi^2}\ln\left(\frac{M^2}{\mu^2}\right)\right]}=\frac{1}{q^2-m_{0}^2\left[1-\frac{\lambda_{0}}{32\pi^2}\ln\left(\frac{M^2}{\mu^2}\right)\right]}.
\end{eqnarray}
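The resummation of the chain of first-order insertions is just a geometric series. As a quick sanity check (an illustrative sympy sketch with ad hoc names, where the bracketed effective mass is treated as a single symbol), one can verify that the partial sums approach the closed form with a remainder of order $(m_{eff}^2/q^2)^n$:

```python
import sympy as sp

q, m_eff = sp.symbols('q m_eff', positive=True)

# m_eff^2 stands for m_0^2 [1 - lambda_0/(32 pi^2) ln(M^2/mu^2)].
ratio = m_eff**2 / q**2

# Partial sum of the chain with n_terms insertions: 1/q^2 * sum_k ratio^k
n_terms = 8
partial = sum(ratio**k / q**2 for k in range(n_terms))

# Closed form of the full geometric series: 1/(q^2 - m_eff^2)
closed = 1 / (q**2 - m_eff**2)

# The difference is exactly -ratio^n_terms/(q^2 - m_eff^2), i.e. the series
# converges to the closed form for q^2 > m_eff^2.
assert sp.simplify(partial - closed + ratio**n_terms / (q**2 - m_eff**2)) == 0
```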
The term
\begin{eqnarray}
m^2(\mu)=m_{0}^2\left[1-\frac{\lambda_{0}}{32\pi^2}\ln\left(\frac{M^2}{\mu^2}\right)\right]
\end{eqnarray}
is identified as the mass of the particle. Obviously, the unobservable parameters $m_{0}$ and $M$ still appear in the expression for the mass. These parameters have to be eliminated in order to make the theory well defined. A new scale $\mu_{0}$ is introduced:
\begin{eqnarray}
m^2(\mu)=m_{0}^2\left[1-\frac{\lambda_{0}}{32\pi^2}\ln\left(\frac{M^2}{\mu^2_{0}}\right)-\frac{\lambda_{0}}{32\pi^2}\ln\left(\frac{\mu_{0}^2}{\mu^2}\right)\right].
\end{eqnarray}
The finite and the divergent parts have to be separated:
\begin{eqnarray}
m^2(\mu)=m_{0}^2\left[1-\frac{\frac{\lambda_{0}}{32\pi^2}\ln\left(\frac{M^2}{\mu^2_{0}}\right)}{1-\frac{\lambda_{0}}{32\pi^2}\ln\left(\frac{\mu_{0}^2}{\mu^2}\right)}\right]\left[1-\frac{\lambda_{0}}{32\pi^2}\ln\left(\frac{\mu_{0}^2}{\mu^2}\right)\right] .
\end{eqnarray}
If $M^2\gg\mu^2_{0}$ and $\mu^2_{0}\approx\mu^2$, this becomes
\begin{eqnarray}
m^2(\mu)\approx m_{0}^2\left[1-\frac{\lambda_{0}}{32\pi^2}\ln\left(\frac{M^2}{\mu^2_{0}}\right)\right]\left[1-\frac{\lambda_{0}}{32\pi^2}\ln\left(\frac{\mu_{0}^2}{\mu^2}\right)\right] .
\end{eqnarray}
Setting $\mu=\mu_{0}$ gives
\begin{eqnarray}
m^2(\mu_{0})= m_{0}^2\left[1-\frac{\lambda_{0}}{32\pi^2}\ln\left(\frac{M^2}{\mu^2_{0}}\right)\right].
\end{eqnarray}
Hence
\begin{eqnarray}
m^2(\mu)= m^2(\mu_{0})\left[1-\frac{\lambda_{0}}{32\pi^2}\ln\left(\frac{\mu_{0}^2}{\mu^2}\right)\right] .
\end{eqnarray}
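The elimination of $M$ and $m_{0}$ works only up to higher orders in $\lambda_{0}$. This can be made explicit with a small symbolic cross-check (again an illustrative sympy sketch with ad hoc names): the renormalized form $m^2(\mu_{0})[1-\frac{\lambda_{0}}{32\pi^2}\ln(\mu_{0}^2/\mu^2)]$ agrees with the bare-parameter form of $m^2(\mu)$ up to terms of order $\lambda_{0}^2$:

```python
import sympy as sp

lam, m0, M, mu, mu0 = sp.symbols('lambda_0 m_0 M mu mu_0', positive=True)
c = lam / (32 * sp.pi**2)

m2_mu0 = m0**2 * (1 - c * sp.log(M**2 / mu0**2))       # m^2(mu_0)
m2_renorm = m2_mu0 * (1 - c * sp.log(mu0**2 / mu**2))  # renormalized running mass
m2_bare = m0**2 * (1 - c * sp.log(M**2 / mu**2))       # bare-parameter form

# Expand all logarithms (allowed since all symbols are positive) and subtract.
diff = sp.expand(sp.expand_log(m2_renorm - m2_bare))

# The orders lambda_0^0 and lambda_0^1 cancel exactly; the leftover is O(lambda_0^2).
assert diff.coeff(lam, 0) == 0
assert sp.simplify(diff.coeff(lam, 1)) == 0
assert sp.simplify(diff.coeff(lam, 2)) != 0
```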
Thus all unobservable parameters have been eliminated. The mass is renormalized and everything is well defined. Hence, (\ref{comp1}-\ref{phicut}) is the OPE of the propagator in the $\phi^4$ theory to leading order in the coupling constant $\lambda_{0}$. \\
This simple example tells a lot about the OPE. The matrix elements are not purely non-perturbative; they receive contributions from perturbation theory. The OPE reproduces the perturbative result to a certain order in the coupling constant $\lambda_{0}$ and in $\frac{1}{q^2}$.
\subsubsection{OPE in the phase with broken symmetry \label{broken}}
A second look at the Lagrangian (\ref{phi4lagrange}) of the $\phi^4$ theory exhibits an interesting feature if the mass term is replaced by a mass parameter with arbitrary sign:
\begin{eqnarray}
\mathscr{L}=\frac{1}{2}\left(\partial_{\mu}\phi\right)^2\pm\frac{1}{2}\eta^2\phi^2-\frac{\lambda_{0}}{4!}\phi^4 \label{phi4eta}.
\end{eqnarray}
The Hamilton operator is given by
\begin{eqnarray}
H=\int d^3x\left[\frac{1}{2}\pi^2+\frac{1}{2}(\nabla\phi)^2\pm\frac{1}{2}\eta^2\phi^2+\frac{\lambda_{0}}{4!}\phi^4\right].
\end{eqnarray}
The potential is given by
\begin{eqnarray}
V(\phi)=\pm\frac{1}{2}\eta^2\phi^2+\frac{\lambda_{0}}{4!}\phi^4.
\end{eqnarray}
Depending on the sign of the $\eta^2$-term in (\ref{phi4eta}), the potential has the shape of figure \ref{phipotential}.
\begin{figure}[htbp]
\begin{center}
\begin{picture}(0,0)%
\includegraphics{potential.eps}%
\end{picture}%
\setlength{\unitlength}{4144sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\begin{picture}(5550,2426)(6303,-2894)
\put(11640,-1872){$\phi$}%
\put(10711,-633){$V(\phi)$}%
\put(8447,-1872){$\phi$}%
\put(7612,-633){$V(\phi)$}%
\end{picture}%
\caption{The potential of the Hamilton operator in $\phi^4$ theory, depending on the sign
of $\eta^2$. The left-hand side shows the case of a positive sign, the right-hand
side the case of a negative sign.\label{phipotential}}
\end{center}
\end{figure}
In the case of a negative sign in front of the $\eta^2$-term in the potential, the symmetry of the system is spontaneously broken. The order parameter which determines this is $\bra{0}\phi\ket{0}$: if $\bra{0}\phi\ket{0}\not = 0$ the symmetry is broken. In the case of broken symmetry it is reasonable to expand the theory around the minima of the potential
\begin{eqnarray}
\phi=\phi_{0}+\sigma(x),~~~\phi_{0}=\pm v=\pm\sqrt{\frac{6}{\lambda_{0}}}\eta ,
\end{eqnarray}
where $\phi_{0}$ is given by the position of the minima in $\phi$ space. After this transformation the Lagrangian $\mathscr{L}$ is given by
\begin{eqnarray}
\mathscr{L}=\frac{1}{2}\left(\partial_{\mu}\sigma\right)^2-\frac{1}{2}(2\eta^2)\sigma^2-\sqrt{\frac{\lambda_{0}}{6}}\eta\sigma^3-\frac{\lambda_{0}}{4!}\sigma^4 \label{shiftlagrange}.
\end{eqnarray}
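The shifted Lagrangian can be cross-checked by expanding the broken-phase potential $V(\phi)=-\frac{1}{2}\eta^2\phi^2+\frac{\lambda_{0}}{4!}\phi^4$ around its minimum. The following sympy sketch (illustrative only, ad hoc names; the irrelevant constant term is ignored) confirms the mass term $\frac{1}{2}(2\eta^2)\sigma^2$ and the cubic coupling $\sqrt{\lambda_{0}/6}\,\eta\,\sigma^3$ of (\ref{shiftlagrange}):

```python
import sympy as sp

eta, lam, sigma, phi = sp.symbols('eta lambda_0 sigma phi', positive=True)

# Broken-phase potential and its minimum phi_0 = v = sqrt(6/lambda_0) * eta
V = -sp.Rational(1, 2) * eta**2 * phi**2 + lam / sp.factorial(4) * phi**4
v = sp.sqrt(6 / lam) * eta
assert sp.simplify(sp.diff(V, phi).subs(phi, v)) == 0   # v is a stationary point

# Shift phi = v + sigma and collect powers of sigma
V_shifted = sp.expand(V.subs(phi, v + sigma))

# sigma^2 coefficient: (1/2)*(2 eta^2)  ->  mass squared 2 eta^2
assert sp.simplify(V_shifted.coeff(sigma, 2) - eta**2) == 0
# sigma^3 coefficient: sqrt(lambda_0/6) * eta
assert sp.simplify(V_shifted.coeff(sigma, 3) - sp.sqrt(lam / 6) * eta) == 0
# sigma^4 coefficient: lambda_0/4!
assert sp.simplify(V_shifted.coeff(sigma, 4) - lam / 24) == 0
```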
The Feynman rules in this version of the theory differ from those in the theory where the symmetry is not broken. A comparison is given in table \ref{zweitetabelle}.
\begin{table}
\begin{center}
\begin{tabular}{||c|l||}
\hline
unshifted theory& vertex : $-i\lambda_{0}$\\
& propagator : $\frac{i}{p^2-m^2}$\\
\hline
shifted theory& vertices : $-i\lambda_{0}$,$-i\sqrt{\frac{\lambda_{0}}{3!}}\eta$ \\
& propagator: $\frac{i}{p^2-2\eta^2}$\\
\hline
\end{tabular}
\caption{Comparison of the Feynman rules of the $\phi^4$ theory in the phase where the symmetry is not broken with the rules in the phase where the symmetry is broken. An additional vertex arises in the phase with broken symmetry.\label{zweitetabelle}}
\end{center}
\end{table}
Here $-i\lambda_{0}$ is a four-point vertex and $-i\sqrt{\frac{\lambda_{0}}{3!}}\eta$ is a three-point vertex of order $\sqrt{\lambda_{0}}$. If the OPE is used instead of the usual techniques to calculate the amplitudes in the phase with broken symmetry, the use of the new Feynman rules can be avoided. There are two possibilities to incorporate the nature of the states in the calculation of the amplitudes. The first is to shift the theory to the minima in $\phi$ space, which results in new Feynman rules. The second is to use the OPE and incorporate the nature of the state not in the Feynman rules but in the matrix elements. This is especially helpful if for some reason the shift which leads to new Feynman rules is not possible. In the following it is shown that the OPE reproduces the amplitudes in the phase of the theory with broken symmetry.\\
The OPE can directly be taken from section \ref{unbroken}. Only $m_{0}^2$ has to be replaced by $-\eta^2$, and the matrix elements change. Again, it is emphasized that the theory is not shifted. Hence, the Wilson coefficients do not change in comparison with section \ref{unbroken} except for the mass parameter; only the matrix elements change.\\
In the following a first naive test is performed to check whether the OPE reproduces the amplitudes in the phase where the symmetry is broken. This test will fail, but in the next section the reason for this failure is clarified.\\
The method to calculate the matrix elements in the OPE in $\phi^4$ theory is to use the classical value for $\phi$, which is given by $\phi_{0}$. In this case
\begin{eqnarray}
\bra{0}\phi(0)\phi(0)\ket{0}=\left( \pm\sqrt{\frac{6}{\lambda_{0}}}\eta\right)^2=\frac{6}{\lambda_{0}}\eta^2 \label{firsttry}.
\end{eqnarray}
This is the value for $\bra{0}\phi(0)\phi(0)\ket{0}$ at the scale $\mu$ at which $m^2_{physical}(\mu)=2\eta^2$, as can be read off from the second summand of (\ref{shiftlagrange}).\\
The treatise just performed leads to the following OPE in the phase with broken symmetry
\begin{eqnarray}
i\int dx e^{-ix_{\mu}q^{\mu}}\bra{0}\phi(x)\phi(0)\ket{0}=\frac{1}{q^2}+\frac{2\eta^2}{q^4}\left(1-\frac{\lambda_{0}}{32\pi^2}\ln\left(\frac{\mu }{\mu_{0}}\right)\right)\nonumber\\=\frac{1}{q^2}+\frac{m^2_{physical}}{q^4}\left(1-\frac{\lambda_{0}}{32\pi^2}\ln\left(\frac{\mu }{\mu_{0}}\right)\right).
\end{eqnarray}
This expression should reproduce perturbation theory, but how is the propagator in the phase with broken symmetry calculated? In order to keep the calculations as simple as possible the shifted theory is used. The diagrams that contribute to first order in $\lambda_{0}$ are
\begin{eqnarray}
\parbox{20mm}{
\begin{fmffile}{phi4shifted00}
\begin{fmfgraph*}(20,40)
\fmfpen{thick}
\fmfleft{i}
\fmfright{o}
\fmf{plain}{i,o}
\end{fmfgraph*}
\end{fmffile}}
~~~~
\parbox{20mm}{
\begin{fmffile}{phi4shifted01}
\begin{fmfgraph*}(20,40)
\fmfpen{thick}
\fmfleft{i}
\fmfright{o}
\fmf{plain}{i,v,v,o}
\fmfdot{v}
\end{fmfgraph*}
\end{fmffile}}
~~~~
\parbox{20mm}{
\begin{fmffile}{phi4shifted02}
\begin{fmfgraph*}(20,40)
\fmfpen{thick}
\fmfleft{i}
\fmfright{o}
\fmf{plain}{i,v1,v2,v2,v1,o}
\fmfdot{v1,v2}
\fmffixedx{0}{v1,v2}
\fmffixedy{25}{v1,v2}
\end{fmfgraph*}
\end{fmffile}}
~~~~
\parbox{20mm}{
\begin{fmffile}{phi4shifted03}
\begin{fmfgraph*}(20,40)
\fmfpen{thick}
\fmfleft{i}
\fmfright{o}
\fmf{plain}{i,v1}
\fmf{plain,left,tension=0.4}{v1,v2}
\fmf{plain,right,tension=0.4}{v1,v2}
\fmf{plain}{v2,o}
\fmfdot{v1,v2}
\end{fmfgraph*}
\end{fmffile}}~.
\end{eqnarray}
Only the last diagram contributes to the propagator; the other ones vanish after renormalization (see section \ref{reguandreno}). In the limit of large momenta $q$ the expression for the propagator is
\begin{eqnarray}
D(q)=\frac{1}{q^2}+\frac{m^2_{phys}}{q^4}\left(1+\frac{3\lambda_{0}}{16\pi^2}\ln\left(\frac{\sqrt{q^2}}{m_{phys}}\right) \right) \label{propbroken}
\end{eqnarray}
and it does not coincide with the OPE result built from (\ref{firsttry}), even after setting $\mu_{0}=q$ and $\mu=m_{phys}$. This result is surprising at first. The question arises whether everything that has been stated about the OPE in the phase with broken symmetry is wrong. Fortunately it is not wrong, but something is missing in the derivation of the OPE. In the next section the riddle about the missing component is solved.
\subsubsection{Comparison of OPE with perturbation theory in the phase with broken symmetry \label{compara}}
The question that has to be answered is whether the OPE reproduces the amplitudes in the phase with broken symmetry to a given order in $\lambda_{0}$. For this purpose the propagator is expanded via perturbation theory in the phase with broken symmetry and via the OPE. The results should coincide to a given order in $\lambda_{0}$, but as shown in the preceding section they seem not to. The missing parts of the OPE are derived in the following.\\
As shown in section \ref{unbroken} the normalization point is crucial in the definition of the OPE. The missing agreement between the OPE and perturbation theory in the phase with broken symmetry could thus stem from an incomplete implementation of the renormalization group flow. In the version of the theory with reflection symmetry the only important scale is the mass in the Lagrangian (\ref{phi4lagrange}), which naturally defines the scale $\mu$ inherent to the theory, although it is not necessary to use the physical mass of the particle as the renormalization point. Another point is that the physical mass in the theory with broken symmetry is not given by the mass parameter $\eta$ in the Lagrangian (\ref{phi4eta}) but by $m^2_{phys}=2\eta^2$, which is the natural scale in the phase with broken symmetry.\\
These complications did not show up in the theory with unbroken symmetry, where there is no difference between the mass scale at which the Wilson coefficients are calculated and the mass scale set by the physical mass. Thus the flow is less important in the theory with reflection symmetry than in the theory where this symmetry is broken.\\
In simple words, the Wilson coefficients are calculated in a version of the theory that is defined at a different scale than the version of the theory in which the propagator
(\ref{propbroken}) is calculated. Thus the point at which the Wilson coefficients are defined
has to be moved to the point where the propagator (\ref{propbroken}) is defined. Unless
this is done the effective theory (\ref{shiftlagrange}) and the OPE will not coincide.\\
In order to make the OPE independent of the normalization point $\mu$, the flow of the Wilson coefficients and of all parameters has to be constructed. The method used here to achieve this is renormalization group improvement: the OPE is improved by summing up the leading logarithms using the recipe that renormalization group theory proposes.\\
The first step is the renormalization of the coupling constant $\lambda_{0}$. These computations are standard and can be taken from textbooks like \cite{Peskin:1995ev}. The renormalization group equations are given by
\begin{eqnarray}
\mu\frac{d}{d\mu}C_{2}=\frac{\lambda(\mu)}{16\pi^2}C_{2} \label{problemfall}
\end{eqnarray}
\begin{eqnarray}
\mu\frac{d}{d\mu}C_{1}=\frac{m^2(\mu)}{8\pi^2}C_{2} \label{opmix}
\end{eqnarray}
\begin{eqnarray}
\mu\frac{d}{d\mu}m^2(\mu)=\frac{\lambda(\mu)}{16\pi^2}m^2(\mu)
\end{eqnarray}
\begin{eqnarray}
\mu\frac{d}{d\mu}\lambda(\mu)=3\frac{\lambda^2(\mu)}{16\pi^2} \label{letztegleich}.
\end{eqnarray}
All of these equations except (\ref{problemfall}) are derivatives of the first-order expressions. The derivation of (\ref{problemfall}) leads to new insight concerning the renormalization group flow of the OPE summands. It can be found in \cite{Collins:1984xc} and is not shown here. The solutions of the equations (\ref{problemfall}-\ref{letztegleich}) are
\begin{eqnarray}
\lambda(\mu)=\frac{\lambda(\mu_{0})}{1-3\frac{\lambda(\mu_{0})}{16\pi^2}\ln\left( \frac{\mu}{\mu_{0}}\right) }
\end{eqnarray}
\begin{eqnarray}
m^2(\mu)=\left[\frac{\lambda(\mu)}{\lambda(\mu_{0})}\right]^{\frac{1}{3}}m^2(\mu_{0})
\end{eqnarray}
\begin{eqnarray}
C_{2}\left(\mu\right)=\left[\frac{\lambda(\mu)}{\lambda(\mu_{0})}\right]^{\frac{1}{3}}C_{2}\left(\mu_{0}\right)
\end{eqnarray}
\begin{eqnarray}
C_{1}(\mu)=C_{1}(\mu_{0})-2\frac{m^2(\mu_{0})C_{2}(\mu_{0})}{\lambda(\mu_{0})}\left(\left[\frac{\lambda(\mu)}{\lambda(\mu_{0})}\right]^{-\frac{1}{3}}-1\right).
\end{eqnarray}
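That the quoted expressions indeed solve the renormalization group equations can be verified directly. The sketch below (sympy, illustrative only, ad hoc names) checks the equation for $\lambda(\mu)$ symbolically and the one for $m^2(\mu)$ numerically at a sample point:

```python
import sympy as sp

mu, mu0, lam0, m2_0 = sp.symbols('mu mu_0 lambda_mu0 m2_mu0', positive=True)

# One-loop running coupling as quoted in the text
lam = lam0 / (1 - 3 * lam0 / (16 * sp.pi**2) * sp.log(mu / mu0))

# Check: mu d(lambda)/d(mu) = 3 lambda^2 / (16 pi^2)
assert sp.simplify(mu * sp.diff(lam, mu) - 3 * lam**2 / (16 * sp.pi**2)) == 0

# Running mass m^2(mu) = [lambda(mu)/lambda(mu_0)]^(1/3) m^2(mu_0)
m2 = (lam / lam0)**sp.Rational(1, 3) * m2_0

# Check mu d(m^2)/d(mu) = lambda(mu)/(16 pi^2) m^2(mu) at a sample point
residual = mu * sp.diff(m2, mu) - lam / (16 * sp.pi**2) * m2
value = residual.subs({lam0: sp.Rational(1, 2), mu0: 1, m2_0: 1, mu: 2})
assert abs(float(value)) < 1e-12
```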
This is the improved OPE in the theory with reflection symmetry, corresponding to the theory with a positive value of the squared mass parameter in the Lagrangian (\ref{phi4eta}). It has the remarkable property of reproducing both versions of the propagator in the $\phi^4$ theory: for $\mu^2=m^2$ it reproduces the propagator in the theory with reflection symmetry, and for $\mu^2=-2m^2$ the propagator in the theory where this symmetry is broken. This assertion is proved in the following.\\
The starting conditions are chosen to be
\begin{eqnarray}
C_{1}(\mu_{0}=q)=\frac{1}{q^2}+\frac{m^2(q)}{q^4}
\end{eqnarray}
\begin{eqnarray}
C_{2}(\mu_{0}=q)=\frac{\lambda(q)}{2q^4}
\end{eqnarray}
then the coefficients $C_{1},C_{2}$ contain no logarithms. The explicit expression for the OPE with those conditions is
\begin{eqnarray}
D(q)=\bra{0}C_{1}(\mu)+C_{2}(\mu)\phi^2\ket{0}=\frac{1}{q^2}\nonumber\\+\frac{1}{q^4}\left(\left[\frac{\lambda(q)}{\lambda(\mu)}\right]^{\frac{1}{3}}2m^2(\mu)+\left[\frac{\lambda(q)}{\lambda(\mu)}\right]^{\frac{2}{3}}\left( \frac{1}{2}\lambda(\mu)\mele{\phi^2(\mu)}-m^2(\mu)\right) \right) .
\end{eqnarray}
With these results the validity of the OPE in $\phi^4$ theory can be tested.
\begin{enumerate}
\item{Coincidence of the improved OPE with the propagator in the theory with reflection
symmetry: $\mu^2=m^2$ }\\
At this scale the matrix element given in (\ref{phicut}) vanishes, which leaves the OPE as
\begin{eqnarray}
D(q)=\frac{1}{q^2}+\frac{1}{q^4}m^2(m)\left(2
\left[\frac{\lambda(q)}{\lambda(m)}\right]^{\frac{1}{3}}-\left[\frac{\lambda(q)}{
\lambda(m)}\right]^{\frac{2}{3}}\right) \nonumber\\
=\frac{1}{q^2}+\frac{1}{q^4}m^2(m)+\mathcal{O}(\lambda^2),
\end{eqnarray}
which coincides with the propagator in the theory with higher symmetry to first order
in $\lambda$.
\item{Coincidence of the improved OPE with the propagator in the theory with broken
symmetry: $\mu^2=-2m^2=m^2_{phys}$ }\\
At this scale the matrix element is given by (\ref{firsttry}) which leaves the OPE as
\begin{eqnarray}
D(q)=\frac{1}{q^2}+\frac{1}{q^4}\left(-2m^2(m_{phys})\right) \left(2
\left[\frac{\lambda(q)}{\lambda(m_{phys})}\right]^{\frac{2}{3}}-\left[\frac{\lambda(q)}
{\lambda(m_{phys})}\right]^{\frac{1}{3}}\right) \nonumber\\
=\frac{1}{q^2}+\frac{1}{q^4}m^2_{phys}\left(2
\left[\frac{\lambda(q)}{\lambda(m_{phys})}\right]^{\frac{2}{3}}-\left[\frac{\lambda(q)}
{\lambda(m_{phys})}\right]^{\frac{1}{3}}\right) \nonumber\\
=\frac{1}{q^2}+\frac{m^2_{phys}}{q^4}\left(1+\frac{3\lambda}{16\pi^2}\ln\left(\frac{q}{m_{phys}}\right) \right)+\mathcal{O}(\lambda^2),
\end{eqnarray}
which reveals the coincidence with the theory in the phase with broken symmetry.
\end{enumerate}
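The first-order expansions used in these two checks can themselves be verified symbolically. With $x=\lambda(q)/\lambda(m)=1/(1-\frac{3\lambda}{16\pi^2}\ln(q/m))$, the combination $2x^{1/3}-x^{2/3}$ has no term linear in $\lambda$, while $2x^{2/3}-x^{1/3}$ produces exactly the logarithm of (\ref{propbroken}). A sympy sketch (illustrative only, ad hoc names):

```python
import sympy as sp

lam, L = sp.symbols('lambda L', positive=True)   # L stands for ln(q/m)

a = 3 * lam / (16 * sp.pi**2)
x = 1 / (1 - a * L)            # lambda(q)/lambda(m) at one loop

# Case 1 (mu^2 = m^2): the O(lambda) terms cancel
combo1 = 2 * x**sp.Rational(1, 3) - x**sp.Rational(2, 3)
assert sp.simplify(sp.series(combo1, lam, 0, 2).removeO() - 1) == 0

# Case 2 (mu^2 = m_phys^2): the leading logarithm survives
combo2 = 2 * x**sp.Rational(2, 3) - x**sp.Rational(1, 3)
expected = 1 + 3 * lam / (16 * sp.pi**2) * L
assert sp.simplify(sp.series(combo2, lam, 0, 2).removeO() - expected) == 0
```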
\begin{figure}[htbp]
\begin{center}
\begin{picture}(0,0)%
\includegraphics{scala.eps}%
\end{picture}%
\setlength{\unitlength}{4144sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\begin{picture}(4566,997)(2218,-1979)
\put(6346,-1141){$\mu$}%
\put(4501,-1906){$\mu^2=m^2_{phys}$}%
\put(2836,-1906){$\mu^2=m^2$}%
\end{picture}%
\caption{An illustration of the location of the two normalization points.\label{boundarycondition02}}
\end{center}
\end{figure}
The conclusion of this section is that the change of matrix elements, described in section \ref{broken}, is not sufficient to switch between the phases of the theory. In addition, the normalization point also has to be changed.\\
The ability of the OPE to describe processes in the phase with broken symmetry simply by changing the matrix elements and the normalization point, relative to the OPE in the phase where the symmetry is not broken, is the reason why it is employed in modern applications.\\
In the early days of the OPE the question arose whether the OPE is an approximation to the theory in the phase with broken symmetry or whether it reproduces the amplitudes exactly \cite{Novikov:1984rf}. The source of this problem was an approximation made by Shifman, Vainshtein and Zakharov in their original publications on QCD Sum Rules \cite{Shifman:1978bx}. This approximation is discussed in the next section.
\subsection{OPE in QCD - Wilsonian OPE versus Practical OPE \label{opeinqcd}}
The OPE has been explained in general and by means of an example in the real scalar $\phi^4$ theory, but every theory has its own peculiarities concerning the OPE. In the remainder of this thesis QCD is the theory in which calculations are performed.\\
In QCD the OPE exhibits big advantages. QCD also has a broken symmetry, the chiral symmetry. Unfortunately, the theory is much more complicated than the $\phi^4$ theory. In the phase with broken symmetry an additional phenomenon arises, the confinement of quarks. This means that free quarks do not exist in that phase; only bound states of quarks and gluons have been measured, the so-called hadrons. It is not known how the transition between the phases with unbroken and broken symmetry has to be described, so a shift as performed in section \ref{broken} does not seem possible. Therefore, no Feynman rules for amplitudes on the basis of quarks and gluons in the phase with broken symmetry are known. There exist effective field theories which describe the physics in that phase with hadrons as degrees of freedom. Moreover, there exist other methods like lattice QCD, which seeks a numerical solution of the QCD Lagrangian. An additional method to describe the physics of QCD in the broken phase is given by the OPE. \\
The OPE is, for example, an alternative to chiral perturbation theory. As stated before, the Wilson coefficients can be calculated in the phase with and without broken symmetry. Hence, one can use the QCD Lagrangian, where the quark masses are current quark masses, to compute the Wilson coefficients. A repetition of the calculations performed in the $\phi^4$ theory leads to amplitudes valid in the phase with broken symmetry. In this expansion quarks and gluons are the basic degrees of freedom. This is the way the OPE is used in actual applications.\\
There are two approximations that simplify the calculation of the OPE in QCD considerably:
\begin{enumerate}
\item The perturbative parts of the matrix elements can be absorbed in the Wilson coefficients of the unit operator $\mathds{1}$. Thus, in this approximation the matrix elements are purely non-perturbative (see \cite{Shifman:1999mk}).
\item The renormalization group flow of the Wilson coefficients can be neglected.
\end{enumerate}
In the propagator example of the $\phi^4$ theory the perturbative part of the $\phi^2(0)$ matrix element was $\frac{-m^2_{0}}{16\pi^2}\ln\frac{\mu^2}{m^2_{0}}$. Thus, in the first approximation, this term is included in $C_{1}$ and must be subtracted from the matrix element.\\
These two approximations are applied to all OPE computations in QCD. They date back to the founders of the QCD Sum Rule method \cite{Shifman:1978bx} and have led to misunderstandings \cite{Novikov:1984rf}.\\
The exact OPE is called the Wilsonian OPE and the approximate one the practical OPE or SVZ expansion/OPE. In the remainder of this thesis the practical OPE is used. \\
One can generally describe the OPE as a method to approximate the physics in the non-perturbative regime, the phase with broken symmetry, with the field theory that is valid in the perturbative regime. A big advantage in this case is that the OPE deals with current quarks and not with constituent quarks.\\
Every advantage has its price. In the OPE case it is the introduction of the matrix elements, which in QCD are called condensates. To date there exists no first-principles method to calculate these matrix elements. Chiral perturbation theory and QCD Sum Rules can be used to get an approximation for the condensates, but they are based on data from experiments and not on the QCD Lagrangian alone. Hence, the condensates are determined through measurements.
Nevertheless the OPE is heavily used in nonperturbative physics. In the next section the calculations of the Wilson coefficients are shown for several examples.
\section{Calculation of Wilson coefficients in QCD \label{wilsoncoefficient}}
In the last section the OPE was introduced and it was shown that the Wilson coefficients are the part of the OPE that has to be calculated when QCD amplitudes are concerned. The plane-wave method was introduced as the technique for the computations, but the details of the calculations have not been explained. Therefore, this section is devoted to the calculation of Wilson coefficients via the plane-wave method. This method is the simplest one can think of; it directly employs the operator nature of the OPE. Moreover, the first calculations of Wilson coefficients were done with this method \cite{Shifman:1978bx}. As stated above, the practical OPE is used, thus the renormalization group flow of the coefficients is neglected.
\subsection{Equal mass case : the flavor conserving scalar current correlator \label{Wilson_Berechnung}}
Here and in further investigations products of currents are considered. A current is used as the interpolating field for a hadron. Unfortunately, this choice is not unique: as the approximate current for a hadron, all currents with the same quantum numbers as the hadron have to be considered. Such ambiguities can have different implications. For example, the ambiguity between a four-quark and a two-quark current has a physical implication, since the quarks in the current are the valence quarks; such an ambiguity is directly linked to questions concerning the quark model. Non-physical ambiguities are given, for example, by the number of derivatives occurring in the current: it is possible to write down $n$-quark currents with and without derivatives which have the same quantum numbers. In this thesis mesons are analyzed, so the currents have to contain an even number of quark fields, but here only two-quark currents are addressed. Moreover, currents with the simplest structure are used, for example those with the smallest number of derivatives. The simplest current of this type is the scalar current:
\begin{eqnarray}
j(x)=\bar{q}(x)q(x)~~j^{\dagger}(x)=\bar{q}(x)q(x).
\end{eqnarray}
Hence the OPE is:
\begin{eqnarray}
j(x)j^{\dagger}(0)=\sum_{n=1}^{\infty}C_{n}O_{n}.
\end{eqnarray}
The current product has energy dimension 6; hence, section \ref{importantunimportant} states that only operators with dimension $\leq 6$ appear in the OPE. Those operators are given by \cite{Shifman:1978bx}
\begin{eqnarray}
\mathds{1}~~dimension=0\nonumber\\
m\overline{q}q~~dimension=4\nonumber\\
G^{c}_{\mu\nu}G^{c}_{\mu\nu}~~dimension=4\nonumber\\
m\overline{q}\sigma_{\mu\nu}\lambda^{c}qG^{c}_{\mu\nu}~~dimension=6\nonumber\\
\overline{q}\Gamma_{1}q\overline{q}\Gamma_{2}q~~dimension=6\nonumber\\
f^{abc}G^{a}_{\mu\nu}G^{b}_{\nu\gamma}G^{c}_{\gamma\mu}~~dimension=6
\label{operators}.
\end{eqnarray}
To be precise, there are more operators with dimension $\leq 6$ than those just quoted, but they can be reduced to the quoted ones. Examples are operators containing a derivative. Derivatives raise the dimension of an operator; hence, a derivative of $m\overline{q}q$ still appears in the OPE, but it can be reduced to the operator $m\overline{q}q$ itself. The calculation of Wilson coefficients gets more complex for more complex operator products, and for a given operator product it gets more complex the more complex the operator to which the Wilson coefficient belongs. Therefore, a simple example is the Wilson coefficient that belongs to the operator $m\overline{q}q$, the quark condensate. According to the plane-wave method this coefficient can be filtered out by sandwiching the OPE between quark states:
\begin{eqnarray}
\bra{p}j(x)j^{\dagger}(0)\ket{p}=\bra{p}\sum_{n=1}^{\infty}C_{n}O_{n}\ket{p}=\sum_{n=1}^{\infty}\bra{p}C_{n}O_{n}\ket{p}=\sum_{n=1}^{\infty}C_{n}\bra{p}O_{n}\ket{p}=\nonumber\\C_{\bar{q}q}\bra{p}O_{\bar{q}q}\ket{p}=C_{\bar{q}q}\bra{p}\bar{q}q\ket{p}=C_{\bar{q}q}\bar{u}(p)u(p)
\label{opequarkcond}.
\end{eqnarray}
The current correlator is transformed to momentum space
\begin{eqnarray}
i\int d^4xe^{iqx}\bra{0}T(j(x)j^{\dagger}(0))\ket{0}=i\int d^4xe^{iqx}\bra{0}T(\bar{q}(x)q(x)\bar{q}(0)q(0))\ket{0}.
\end{eqnarray}
After that the plane-wave method is applied for the calculation of the $\bar{q}q$ Wilson coefficient
\begin{eqnarray}
i\int d^4xe^{iqx}\bra{p}T(j(x)j^{\dagger}(0))\ket{p}=i\int d^4xe^{iqx}\bra{p}T(\bar{q}(x)q(x)\bar{q}(0)q(0))\ket{p}.
\end{eqnarray}
The coefficient is derived to lowest order in $\alpha_{S}$. Therefore, Wick's theorem is applied:
\begin{eqnarray}
i\int d^4xe^{iqx}\wick{1-2,5-6}{3-4}{\bra{p},\bar{q}(x),q(x),\bar{q}(0),q(0),\ket{p}}+i\int d^4xe^{iqx}\wick{1-4,3-6}{2-5}{\bra{p},\bar{q}(x),q(x),\bar{q}(0),q(0),\ket{p}}\nonumber\\=
i\parbox[c]{3cm}{
\begin{fmffile}{diagram01}
\begin{fmfgraph*}(40,35)
\fmfleft{i1,o1} \fmfright{i2,o2}
\fmffixedx{2cm}{v1,v2}
\fmf{dots_arrow,label.side=left,label=$q$}{i1,v1}
\fmf{fermion,label.side=down,label=$q+p$}{v1,v2}
\fmf{dots_arrow,label.side=left,label=$q$}{v2,i2}
\fmf{fermion,label.side=right,label=$p$}{o1,v1}
\fmf{fermion,label.side=right,label=$p$}{v2,o2}
\end{fmfgraph*}
\end{fmffile}}
~~~~~~~~~+
i\parbox[c]{3cm}{
\begin{fmffile}{diagram02}
\begin{fmfgraph*}(40,35)
\fmfleft{i1,o1} \fmfright{i2,o2}
\fmffixedx{2cm}{v1,v2}
\fmf{dots_arrow,label.side=left,label=$q$}{v2,i2}
\fmf{dots_arrow,label.side=left,label=$q$}{i1,v1}
\fmf{fermion,label.side=right,label=$p$}{o1,v2}
\fmf{fermion,label.side=down,label=$q-p$}{v2,v1}
\fmf{fermion,label.side=right,label=$p$}{v1,o2}
\end{fmfgraph*}
\end{fmffile}}
\nonumber\\=i\bar{u}(p,s)\frac{i(\slashed{p}+\slashed{q}+m)}{(p+q)^2-m^2}u(p,s)+i\bar{u}(p,s)\frac{i(\slashed{p}-\slashed{q}+m)}{(p-q)^2-m^2}u(p,s)\label{scalarlorentzscalar}.
\end{eqnarray}
In the diagrams dashed lines represent currents and solid lines represent fermions. Hence, an expression for the left-hand side of (\ref{opequarkcond}) is
\begin{eqnarray}
-\bar{u}(p)\frac{\slashed{p}+\slashed{q}+m}{(p+q)^2-m^2}u(p)-\bar{u}(p)\frac{\slashed{p}-\slashed{q}+m}{(p-q)^2-m^2}u(p).
\end{eqnarray}
The goal is to reduce these expressions to a form where one can identify the quark term $\bar{u}u$. The external particles are free particles, $p^2=m^2$. Hence, the Dirac equation $\bar{u}(p)(\slashed{p}-m)=0$ or $(\slashed{p}-m)u(p)=0$ can be applied
\begin{eqnarray}
-\bar{u}(p)\frac{m+\slashed{q}+m}{(p+q)^2-m^2}u(p)-\bar{u}(p)\frac{m-\slashed{q}+m}{(p-q)^2-m^2}u(p)=\nonumber\\
-\bar{u}(p)\frac{2m+\slashed{q}}{(p+q)^2-m^2}u(p)-\bar{u}(p)\frac{2m-\slashed{q}}{(p-q)^2-m^2}u(p)\label{scalar01}.
\end{eqnarray}
The $\slashed{q}$-term requires a bit more care:
\begin{eqnarray}
\bar{u}(p)\slashed{q}u(p)=\bar{u}(p)\frac{m}{m}\slashed{q}u(p)=\frac{1}{2m}\bar{u}(p)\left\{m\slashed{q}+\slashed{q}m\right\} u(p)=\frac{1}{2m}\bar{u}(p)\left\{\slashed{p}\slashed{q}+\slashed{q}\slashed{p}\right\} u(p)\nonumber\\
=\frac{1}{2m}p_{\mu}q_{\nu}\bar{u}(p)\left\{\gamma^{\mu}\gamma^{\nu}+\gamma^{\nu}\gamma^{\mu}\right\} u(p)=\frac{1}{2m}p_{\mu}q_{\nu}\bar{u}(p)2g^{\mu\nu}u(p)=\frac{p_{\mu}q^{\mu}}{m}\bar{u}(p)u(p).
\end{eqnarray}
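The manipulation above rests on the anticommutator $\left\{\gamma^{\mu},\gamma^{\nu}\right\}=2g^{\mu\nu}$, so that $\slashed{p}\slashed{q}+\slashed{q}\slashed{p}=2p_{\mu}q^{\mu}$. As a minimal numerical cross-check (not part of the derivation; the momenta are arbitrary illustrative numbers), this identity can be verified with explicit Dirac matrices in Python:

```python
import numpy as np

# Dirac representation of the gamma matrices
I2, Z2 = np.eye(2), np.zeros((2, 2))
pauli = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
gamma = [np.block([[I2, Z2], [Z2, -I2]]).astype(complex)]
gamma += [np.block([[Z2, s], [-s, Z2]]) for s in pauli]
g = np.diag([1.0, -1.0, -1.0, -1.0])       # Minkowski metric

def slash(v):
    """v_slash = gamma^mu v_mu for a contravariant four-vector v."""
    return sum(g[mu, mu] * v[mu] * gamma[mu] for mu in range(4))

# arbitrary illustrative four-momenta
p = np.array([1.3, 0.2, -0.7, 0.5])
q = np.array([0.9, -0.4, 0.1, 1.2])

anticomm = slash(p) @ slash(q) + slash(q) @ slash(p)
pq = p @ g @ q                              # p_mu q^mu
```

The matrix `anticomm` equals $2\,p_{\mu}q^{\mu}$ times the unit matrix, which is exactly the step used above.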
Inserting this result for $\bar{u}(p)\slashed{q}u(p)$ into (\ref{scalar01}) yields, after a trivial rearrangement,
\begin{eqnarray}
-\frac{1}{m^2}\frac{2m^2+p^{\mu}q_{\mu}}{(p+q)^2-m^2}\left[m\bar{u}(p,s)u(p,s)\right]-\frac{1}{m^2}\frac{2m^2-p^{\mu}q_{\mu}}{(p-q)^2-m^2}\left[m\bar{u}(p,s)u(p,s)\right]\label{scalardetermined}.
\end{eqnarray}
The Wilson coefficient is determined to be:
\begin{eqnarray}
-\frac{1}{m^2}\left(\frac{2m^2+p^{\mu}q_{\mu}}{(p+q)^2-m^2}+\frac{2m^2-p^{\mu}q_{\mu}}{(p-q)^2-m^2}\right)=\nonumber\\-
\frac{1}{m^2}\left(\frac{2m^2+p^{\mu}q_{\mu}}{p^2+q^2+2p_{\mu}q^{\mu}-m^2}+\frac{2m^2-p^{\mu}q_{\mu}}{p^2+q^2-2p_{\mu}q^{\mu}-m^2}\right).
\end{eqnarray}
From the derivation of the OPE we know that the Wilson coefficients can only depend on $q^2$. The dependence of this Wilson coefficient is now analyzed. The coefficient takes the form:
\begin{eqnarray}
-\frac{1}{m^2}\left(\frac{2m^2+p^{\mu}q_{\mu}}{(p+q)^2-m^2}+\frac{2m^2-p^{\mu}q_{\mu}}{(p-q)^2-m^2}\right)=\nonumber\\
-\frac{1}{m^2}\left(\frac{2m^2+p^{\mu}q_{\mu}}{q^2+2p_{\mu}q^{\mu}}+\frac{2m^2-p^{\mu}q_{\mu}}{q^2-2p_{\mu}q^{\mu}}\right) \label{unaveraged}.
\end{eqnarray}
The only $p$ dependence enters the coefficient through the scalar products. From all we know, this dependence should not occur in the coefficient. It stems from the fact that only the effect of a single plane wave is taken into account. This wave has a fixed four-momentum, but there is no physical circumstance that singles out a special momentum vector. This means all possible vectors have to be taken into account. This is achieved by transforming the scalar product to a space with a Euclidean metric. In this space the scalar product is averaged over the Euclidean angle, leaving the coefficient without any $p$ dependence.
\subsubsection{Transformation to 4-dimensional space with Euclidean metric}
The most convenient way to perform the upcoming integration is to transform the problem from Minkowski space to a space with a Euclidean metric. This is done by decomposing the $-1$ in the metric tensor of Minkowski space
\begin{eqnarray}
g_{\mu\nu}=
\left[
\begin{array}{cccc}
1&0&0&0\\
0&-1&0&0\\
0&0&-1&0\\
0&0&0&-1\\
\end{array}
\right]
=
\left[
\begin{array}{cccc}
-i^2&0&0&0\\
0&-1&0&0\\
0&0&-1&0\\
0&0&0&-1\\
\end{array}
\right].
\end{eqnarray}
The $-i^2$ is ascribed to the 0-component of the four-vectors between which the tensor is sandwiched. The effect of this procedure is that the component which was the 0-component becomes imaginary and will from now on be called the 4-component. The advantage of the effort is that in the space into which the problem has been transformed the metric is Euclidean. This enables us to perform the integration mentioned above.\\
The transformation laws are:
\begin{eqnarray}
g_{\mu\nu}\longrightarrow -\mathds{1}_{4}~~~~~~\nonumber\\
p^2\longrightarrow p_{E}^2=-p^2 \nonumber\\
p_{0}\longrightarrow p_{E,4}=ip_{0} \label{trans}.
\end{eqnarray}
The action of the transformation can be illustrated in two dimensions. The metric transforms as:
\begin{eqnarray}
g_{\mu\nu}=
\left[
\begin{array}{cc}
1&0\\
0&-1\\
\end{array}
\right]
\longrightarrow
-\left[
\begin{array}{cc}
1&0\\
0&1\\
\end{array}
\right].
\end{eqnarray}
while the vectors transform as:
\begin{eqnarray}
p_{\mu}
\left[
\begin{array}{c}
p_{0}\\
p_{1}\\
\end{array}
\right]
\longrightarrow
p_{E,\mu}=
\left[
\begin{array}{c}
p_{1}\\
ip_{0}\\
\end{array}
\right].
\end{eqnarray}
Following the common conventions there is a zero component in Minkowski space, but not in Euclidean space; the zero component of Minkowski space becomes the last component in Euclidean space.\\
An interesting aspect of this transformation is the transformation of the sphere $p^2=const$. The sphere in Minkowski space is a hyperbola, while the sphere in Euclidean space is a ball. The transformation (\ref{trans}) maps these two objects into each other (see figure \ref{hyperbolasphere}).
\begin{figure}[htbp]
\begin{center}
\begin{picture}(0,0)%
\includegraphics{hyperbolaball.eps}%
\end{picture}%
\setlength{\unitlength}{4144sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\begin{picture}(6402,2823)(1312,-3250)
\put(3894,-2044){$p_{1}$}%
\put(2820,-641){$p_{0}$}%
\put(7499,-2017){$p_{1}$}%
\put(6426,-614){$ip_{0}$}%
\end{picture}%
\caption{The sphere in two-dimensional Minkowski space and the sphere after applying the transformation (\ref{trans}).\label{hyperbolasphere}}
\end{center}
\end{figure}
In four dimensions the transformation acts in exactly the same way. Thus, from this example it is plausible that the transformation enables the use of common techniques for the averaging process.
\subsubsection{Averaging over the 4-dimensional Euclidean angle}
In the preceding section, the method used to average over all possible orientations of $p_{\mu}$ and $q_{\mu}$ was explained. In this section an example and the Wilson coefficient are computed.\\
The four dimensional Euclidean measure is:
\begin{eqnarray}
d^4p_{E}=\abs{p_{E}}^3\sin^2(\theta_{2})\sin(\theta_{1})\,d\theta_{2}\,d\theta_{1}\,d\phi\,d\abs{p_{E}}.
\end{eqnarray}
The solid angle in 4 dimensions is:
\begin{eqnarray}
d\Omega_{4}=\sin^2(\theta_{2})\sin(\theta_{1})\,d\theta_{2}\,d\theta_{1}\,d\phi.
\end{eqnarray}
The surface of the unit sphere is given by
\begin{eqnarray}
\int_{\theta_{2}=0}^{\pi}\int_{\theta_{1}=0}^{\pi}\int_{\phi=0}^{2\pi}d\Omega_{4}=2\pi^2.
\end{eqnarray}
To calculate the average over the scalar product of two vectors $p_{E,\mu}$, $q_{E,\mu}$ in four-dimensional Euclidean space, it is convenient to define the 4-axis in the direction of the vector $q_{E,\mu}$. Then the scalar product reads:
\begin{eqnarray}
p^{E,\mu}q_{E,\mu}=\cos(\theta_{2})\abs{p_{E}}\abs{q_{E}}.
\end{eqnarray}
Therefore averaging over arbitrary orientations of these two vectors means to perform the following integration:
\begin{eqnarray}
\frac{\int_{\theta_{2}=0}^{\pi}\int_{\theta_{1}=0}^{\pi}\int_{\phi=0}^{2\pi}p^{E,\mu}q_{E,\mu}d\Omega_{4}}{\int_{\theta_{2}=0}^{\pi}\int_{\theta_{1}=0}^{\pi}\int_{\phi=0}^{2\pi}d\Omega_{4}}\nonumber\\=\frac{\abs{p_{E}}\abs{q_{E}}}{2\pi^2}\int_{\theta_{2}=0}^{\pi}\int_{\theta_{1}=0}^{\pi}\int_{\phi=0}^{2\pi}\cos(\theta_{2})\sin^2(\theta_{2})\sin(\theta_{1})d\theta_{2}d\theta_{1}d\phi=0
\label{averaging}.
\end{eqnarray}
This trivial example leads to the formula for averaging $\left(p_{E,\mu}q^{\mu}_{E}\right)^n$:
\begin{eqnarray}
<\left(p_{\mu}q^{\mu}\right)^n >=\frac{\left(\abs{p_{E}}\abs{q_{E}}\right)^n}{2\pi^2}\int_{\theta_{2}=0}^{\pi}\int_{\theta_{1}=0}^{\pi}\int_{\phi=0}^{2\pi}\cos^n(\theta_{2})\sin^2(\theta_{2})\sin(\theta_{1})d\theta_{2}d\theta_{1}d\phi .
\end{eqnarray}
For odd n the result vanishes. For even n some results are shown in table \ref{average}.
\begin{table}
\begin{tabular}[htbp]{||c|c||}
\hline
n& $\frac{<\left(p_{\mu}q^{\mu}\right)^n >}{\left(\abs{p_{E}}\abs{q_{E}}\right)^n}$\\
\hline
2&$\frac{1}{4}$\\
\hline
4&$\frac{1}{8}$\\
\hline
6&$\frac{5}{64}$\\
\hline
\end{tabular}
\caption{Averaging the scalar product.\label{average}}
\end{table}
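The entries of table \ref{average} can be reproduced with a short SymPy script implementing the angular average defined above (a cross-check, not part of the derivation; the normalization $2\pi^2$ is the surface of the unit sphere quoted before):

```python
import sympy as sp

t2, t1, phi = sp.symbols('theta_2 theta_1 phi')

def average(n):
    """<(p.q)^n>/(|p_E||q_E|)^n: average of cos^n(theta_2) over the 4D solid angle."""
    num = sp.integrate(sp.cos(t2)**n * sp.sin(t2)**2 * sp.sin(t1),
                       (t2, 0, sp.pi), (t1, 0, sp.pi), (phi, 0, 2*sp.pi))
    return sp.simplify(num / (2 * sp.pi**2))   # divide by the surface 2 pi^2
```

For odd $n$ the average vanishes; for $n=2,4,6$ the script reproduces $\frac{1}{4}$, $\frac{1}{8}$ and $\frac{5}{64}$.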
The averaging formula just derived shows directly how (\ref{unaveraged}) can be averaged over all possible orientations of $p_{\mu}$ and $q_{\mu}$. After the transformation to Euclidean space the procedure just outlined is repeated; the only difference is that the integrand in (\ref{averaging}) is replaced by (\ref{unaveraged}). This integration is not trivial but possible; computer algebra tools perform it easily. The result is
\begin{eqnarray}
C_{\overline{q}q}=-\frac{1}{m^2}\frac{(1-v)(1+2v)}{1+v}
\end{eqnarray}
where $v=\sqrt{1-\frac{4m^2}{q^2}}$. Some authors have an additional power of $m$ in the coefficient \cite{Bagan:1985zp} because they take $\bar{q}q$ and not $m\bar{q}q$ as the condensate.\\
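This closed form can be checked numerically. The sketch below is a cross-check under the conventions used here: the expression (\ref{unaveraged}) is averaged with the weight $\frac{2}{\pi}\sin^2(\theta_{2})$, formally setting $p_{\mu}q^{\mu}=m\sqrt{q^2}\,\cos(\theta_{2})$, for the illustrative values $m=1$, $q^2=10$:

```python
import math

def angular_average(f, n=4000):
    """Average f(cos(theta)) with weight (2/pi) sin^2(theta), Simpson's rule."""
    h = math.pi / n
    acc = 0.0
    for i in range(n + 1):
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        acc += w * f(math.cos(i * h)) * math.sin(i * h)**2
    return (2 / math.pi) * acc * h / 3

m, q2 = 1.0, 10.0                        # illustrative values with q^2 > 4 m^2
Q = math.sqrt(q2)

def integrand(c):                        # (unaveraged) with p.q -> m*Q*c
    pq = m * Q * c
    return -(1/m**2) * ((2*m**2 + pq)/(q2 + 2*pq) + (2*m**2 - pq)/(q2 - 2*pq))

v = math.sqrt(1 - 4*m**2 / q2)
closed_form = -(1/m**2) * (1 - v) * (1 + 2*v) / (1 + v)
```

The numerical average agrees with the closed form to high accuracy, confirming the resummed result for these parameter values.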
An important remark still has to be made. The integration performed during the averaging process could be done here in closed form. In general this is not possible. In such cases the integrand has to be expanded in the scalar product. The terms of the expansion can all be averaged as shown in (\ref{averaging}); therefore, an expansion of the Wilson coefficient is calculable. At this point the problems begin. The summands are in principle given by powers of $\frac{m^2}{q^2}$; if $m^2$ is small compared to $q^2$ the expansion can be truncated, and the first summand is often a satisfying approximation. In the case of large masses $m$ this cannot be done; each summand is important. Hence, the series has to be resummed. The resummation is a difficult process. During the investigations made for this thesis no systematic method was found to achieve the resummation. However, the authors of \cite{Shifman:1978bx} somehow solved this problem.\\
The renormalization group flow of the coefficient is neglected, as explained in section \ref{opeinqcd}. Thus the calculation is finished and can be summarized as the sequence:
\begin{enumerate}
\item Apply the plane-wave method with appropriate external states.
\item Bring all free spinors to the right of the expression and identify them as placeholders for the
condensates.
\item Eliminate the external momenta, either by setting their square equal to the corresponding masses or by using an appropriate average method.
\end{enumerate}
\subsection{Equal mass case : the flavor conserving pseudoscalar current correlator \label{pseudoscalar-quark}}
In this case everything is analogous to the scalar current correlator, except for a minor point concerning the $\gamma_{5}$ matrices. The current is given by
\begin{eqnarray}
j(x)=\bar{q}(x)\left(i\gamma_{5}\right)q(x)~~j^{\dagger}(x)=\bar{q}(x)\left(i\gamma_{5}\right)q(x)
\end{eqnarray}
where the $i$ is necessary in order to make the current self-adjoint. Again the coefficient is calculated to lowest order $\alpha_{S}^0$. The application of Wick's theorem results in
\begin{eqnarray}
i\int d^4xe^{iqx}\wick{1-2,7-8}{4-5}{\bra{p},\bar{q}(x),\left(i\gamma_{5}\right),q(x),\bar{q}(0),\left(i\gamma_{5}\right),q(0),\ket{p}}+i\int d^4xe^{iqx}\wick{1-5,4-8}{2-7}{\bra{p},\bar{q}(x),\left(i\gamma_{5}\right),q(x),\bar{q}(0),\left(i\gamma_{5}\right),q(0),\ket{p}}\nonumber\\=
\parbox[c]{3cm}{
\begin{fmffile}{diagram01}
\begin{fmfgraph*}(40,35)
\fmfleft{i1,o1} \fmfright{i2,o2}
\fmffixedx{2cm}{v1,v2}
\fmf{dots_arrow,label.side=left,label=$q$}{i1,v1}
\fmf{fermion,label.side=down,label=$q+p$}{v1,v2}
\fmf{dots_arrow,label.side=left,label=$q$}{v2,i2}
\fmf{fermion,label.side=right,label=$p$}{o1,v1}
\fmf{fermion,label.side=right,label=$p$}{v2,o2}
\end{fmfgraph*}
\end{fmffile}}
~~~~~~~~~+
\parbox[c]{3cm}{
\begin{fmffile}{diagram02}
\begin{fmfgraph*}(40,35)
\fmfleft{i1,o1} \fmfright{i2,o2}
\fmffixedx{2cm}{v1,v2}
\fmf{dots_arrow,label.side=left,label=$q$}{v2,i2}
\fmf{dots_arrow,label.side=left,label=$q$}{i1,v1}
\fmf{fermion,label.side=right,label=$p$}{o1,v2}
\fmf{fermion,label.side=down,label=$q-p$}{v2,v1}
\fmf{fermion,label.side=right,label=$p$}{v1,o2}
\end{fmfgraph*}
\end{fmffile}}
\nonumber\\=-\bar{u}(p)\left( i\gamma_{5}\right) \frac{\slashed{p}+\slashed{q}+m}{(p+q)^2-m^2}\left( i\gamma_{5}\right) u(p)-\bar{u}(p)\left( i\gamma_{5}\right) \frac{\slashed{p}-\slashed{q}+m}{(p-q)^2-m^2}\left( i\gamma_{5}\right) u(p)\label{pseudoscalarquark}.
\end{eqnarray}
This time step 1 is a bit longer because the $\gamma_{5}$ matrices have to be accounted for. Their action is a change of the sign in front of the slashed quantities. The expression which determines the Wilson coefficient is altered into
\begin{eqnarray}
\bar{u}(p)\frac{-\slashed{p}-\slashed{q}+m}{(p+q)^2-m^2}u(p)+\bar{u}(p)\frac{-\slashed{p}+
\slashed{q}+m}{(p-q)^2-m^2}u(p).
\end{eqnarray}
Now, using the same manipulations as for the scalar current the spinors can be transferred to the right
\begin{eqnarray}
\frac{-m-\frac{p_{\mu}q^{\mu}}{m}+m}{(p+q)^2-m^2}\bar{u}(p)u(p)+\frac{-m+
\frac{p_{\mu}q^{\mu}}{m}+m}{(p-q)^2-m^2}\bar{u}(p)u(p)=\nonumber\\
\frac{1}{m^2}\frac{-p_{\mu}q^{\mu}}{q^2+2p_{\mu}q^{\mu}}m\bar{u}(p)u(p)+\frac{1}{m^2}\frac{
p_{\mu}q^{\mu}}{q^2-2p_{\mu}q^{\mu}}m\bar{u}(p)u(p)\label{pseudodetermined}.
\end{eqnarray}
In comparison with (\ref{scalardetermined}) the numerator of (\ref{pseudodetermined}) has obviously changed. Hence, the expression to be averaged is given by
\begin{eqnarray}
\frac{1}{m^2}\frac{-p_{\mu}q^{\mu}}{q^2+2p_{\mu}q^{\mu}}+\frac{1}{m^2}\frac{
p_{\mu}q^{\mu}}{q^2-2p_{\mu}q^{\mu}}.
\end{eqnarray}
The relevant integral is of the same type as discussed in section \ref{Wilson_Berechnung} and can be obtained in closed form
\begin{eqnarray}
C_{\overline{q}q}=\frac{1}{m^2}\frac{1-v}{1+v}.
\end{eqnarray}
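As for the scalar case, this closed form can be cross-checked numerically by averaging the expression above with the weight $\frac{2}{\pi}\sin^2(\theta_{2})$ and the formal substitution $p_{\mu}q^{\mu}=m\sqrt{q^2}\,\cos(\theta_{2})$ (illustrative values $m=1$, $q^2=10$; a sketch, not part of the derivation):

```python
import math

def angular_average(f, n=4000):
    """Average f(cos(theta)) with weight (2/pi) sin^2(theta), Simpson's rule."""
    h = math.pi / n
    acc = 0.0
    for i in range(n + 1):
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        acc += w * f(math.cos(i * h)) * math.sin(i * h)**2
    return (2 / math.pi) * acc * h / 3

m, q2 = 1.0, 10.0
Q = math.sqrt(q2)

def integrand(c):                        # pseudoscalar expression with p.q -> m*Q*c
    pq = m * Q * c
    return (1/m**2) * (-pq / (q2 + 2*pq) + pq / (q2 - 2*pq))

v = math.sqrt(1 - 4*m**2 / q2)
closed_form = (1/m**2) * (1 - v) / (1 + v)
```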
\subsection{Unequal mass case : the flavor changing scalar current correlator \label{hl-scalar-quark}}
From the OPE point of view mesonic systems consisting of two quarks with unequal masses are more complicated. In the case of scalar mesons such systems are interpolated by the current
\begin{eqnarray}
j(x)=\bar{q}_{1}(x)q_{2}(x)~~j^{\dagger}(x)=\bar{q}_{2}(x)q_{1}(x).
\end{eqnarray}
In order to calculate a Wilson coefficient of such systems, the sequence just quoted has to be passed through, as was done for the equal-mass systems. The principle of the calculation remains unchanged, but the steps are more complicated. The vacuum polarization transformed to momentum space is:
\begin{eqnarray}
i\int d^4xe^{iqx}\bra{0}T(j(x)j^{\dagger}(0))\ket{0}=i\int d^4xe^{iqx}\bra{0}T(\bar{q}_{1}(x)q_{2}(x)\bar{q}_{2}(0)q_{1}(0))\ket{0}.
\end{eqnarray}
For the calculation of the $m\bar{q}q$ Wilson coefficient the plane-wave method is applied
\begin{eqnarray}
i\int d^4xe^{iqx}\bra{p_{1}}T(j(x)j^{\dagger}(0))\ket{p_{1}}=i\int d^4xe^{iqx}\bra{p_{1}}T(\bar{q}_{1}(x)q_{2}(x)\bar{q}_{2}(0)q_{1}(0))\ket{p_{1}}.
\end{eqnarray}
The plane-wave method discussed in section \ref{Wilson_Berechnung} must be altered a bit to account for the two different types of quark fields. Thus, two quark condensates enter the calculation, one for each field. Correspondingly, the external states of the amplitude are either of type 1 or type 2; this is symbolized by the 1 at the external states. Again the Wilson coefficient is derived to lowest order
\begin{eqnarray}
i\int d^4xe^{iqx}\wick{1-2,5-6}{3-4}{\bra{p,1},\bar{q}_{1}(x),q_{2}(x),\bar{q}_{2}(0),q_{1}(0),\ket{p,1}}=
\parbox[c]{3cm}{
\begin{fmffile}{unequal01}
\begin{fmfgraph*}(40,35)
\fmfleft{i1,o1} \fmfright{i2,o2}
\fmffixedx{2cm}{v1,v2}
\fmf{dots_arrow,label.side=left,label=$q$}{i1,v1}
\fmf{fermion,label.side=down,label=$q+p$,width=2}{v1,v2}
\fmf{dots_arrow,label.side=left,label=$q$}{v2,i2}
\fmf{fermion,label.side=right,label=$p$}{o1,v1}
\fmf{fermion,label.side=right,label=$p$}{v2,o2}
\end{fmfgraph*}
\end{fmffile}}\nonumber\\
=-\bar{u}_{1}(p)\frac{\slashed{p}+\slashed{q}+m_{2}}{(p+q)^2-m_{2}^2}u_{1}(p).
\end{eqnarray}
Note that the exchange term does not contribute to this amplitude. Using the Dirac equation $\bar{u}_{1}(p)(\slashed{p}-m_{1})=0$ leads to
\begin{eqnarray}
-\bar{u}_{1}(p)\frac{m_{1}+\slashed{q}+m_{2}}{(p+q)^2-m_{2}^2}u_{1}(p).
\end{eqnarray}
The $\slashed{q}$-term is treated as in section \ref{Wilson_Berechnung}. The amplitude is given by
\begin{eqnarray}
-\frac{1}{m_{1}}\frac{m^2_{1}+p_{\mu}q^{\mu}+m_{1}m_{2}}{(p+q)^2-m_{2}^2}\bar{u}_{1}(p)u_{1}(p)\label{unequalscalar}.
\end{eqnarray}
In subsequent applications the limit $m_{1}\rightarrow 0$ is of interest. Therefore it will be calculated here
\begin{eqnarray}
-\frac{1}{m_{1}}\frac{m^2_{1}+m_{1}m_{2}+p_{\mu}q^{\mu}}{q^2-m_{2}^2+m_{1}^2+2q_{\mu}p^{\mu}}\bar{u}_{1}(p)u_{1}(p).
\end{eqnarray}
The expression for the Wilson coefficient is expanded in a series
\begin{eqnarray}
-\frac{1}{m_{1}(q^2-m_{2}^2)}\frac{m^2_{1}+m_{1}m_{2}+p_{\mu}q^{\mu}}{1+\frac{m_{1}^2+2qp}{q^2-m_{2}^2}}=-\frac{m^2_{1}+m_{1}m_{2}+p_{\mu}q^{\mu}}{m_{1}(q^2-m_{2}^2)}\left[1- \frac{m_{1}^2+2qp}{q^2-m_{2}^2}+\dots\right].
\end{eqnarray}
In the limit $m_{1}\rightarrow 0$ only the leading term survives. Remember that after the transformation the scalar product becomes $p_{\mu}q^{\mu}=m_{1}\abs{q}\cos(\theta)$, since the modulus of the external momentum is fixed by $p^2=m_{1}^2$
\begin{eqnarray}
-\frac{m_{2}+\abs{q}\cos(\theta)}{q^2-m_{2}^2}.
\end{eqnarray}
After averaging over the four dimensional Euclidean angle $\theta$ the term is given by
\begin{eqnarray}
-\frac{m_{2}}{q^2-m_{2}^2}.
\end{eqnarray}
Hence the Wilson coefficient in this approximation is given by
\begin{eqnarray}
C_{\bar{q}q}=\frac{-m_{2}}{q^2-m_{2}^2}=\frac{m_{2}}{m_{2}^2-q^2}\label{unequalscalarresult}.
\end{eqnarray}
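The limit just taken can be checked numerically: for a small but finite $m_{1}$, the angular average of the full expression should approach (\ref{unequalscalarresult}). A minimal sketch with illustrative values $m_{2}=1.5$, $q^2=10$ and $m_{1}=10^{-3}$:

```python
import math

def angular_average(f, n=4000):
    """Average f(cos(theta)) with weight (2/pi) sin^2(theta), Simpson's rule."""
    h = math.pi / n
    acc = 0.0
    for i in range(n + 1):
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        acc += w * f(math.cos(i * h)) * math.sin(i * h)**2
    return (2 / math.pi) * acc * h / 3

m2, q2 = 1.5, 10.0        # illustrative heavy mass and momentum transfer
Q = math.sqrt(q2)
m1 = 1e-3                 # light-quark mass, sent towards zero

def integrand(c):         # full unexpanded expression with p.q -> m1*Q*c
    pq = m1 * Q * c
    return -(1/m1) * (m1**2 + m1*m2 + pq) / (q2 - m2**2 + m1**2 + 2*pq)

limit = -m2 / (q2 - m2**2)    # the result (unequalscalarresult)
```

For $m_{1}=10^{-3}$ the average already agrees with the limiting value to better than $10^{-3}$.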
\subsection{Unequal mass case : the flavor changing pseudoscalar current correlator}
The calculation goes along the same lines as in sections \ref{pseudoscalar-quark} and \ref{hl-scalar-quark}. The computations are illustrated here:
\begin{eqnarray}
j(x)=\bar{q}_{1}(x)\left(i\gamma_{5}\right)q_{2}(x)~~j^{\dagger}(x)=\bar{q}_{2}(x)\left(i\gamma_{5}\right)q_{1}(x).
\end{eqnarray}
Hence, the Wilson coefficient is determined by
\begin{eqnarray}
i\int d^4xe^{iqx}\wick{1-2,7-8}{4-5}{\bra{p}_{1},\bar{q}_{1}(x),\left(i\gamma_{5}\right),q_{2}(x),\bar{q}_{2}(0),\left(i\gamma_{5}\right),q_{1}(0),\ket{p}_{1}}=
\parbox[c]{3cm}{
\begin{fmffile}{unequal01}
\begin{fmfgraph*}(40,35)
\fmfleft{i1,o1} \fmfright{i2,o2}
\fmffixedx{2cm}{v1,v2}
\fmf{dots_arrow,label.side=left,label=$q$}{i1,v1}
\fmf{fermion,label.side=down,label=$q+p$,width=2}{v1,v2}
\fmf{dots_arrow,label.side=left,label=$q$}{v2,i2}
\fmf{fermion,label.side=right,label=$p$}{o1,v1}
\fmf{fermion,label.side=right,label=$p$}{v2,o2}
\end{fmfgraph*}
\end{fmffile}}\nonumber\\
=-\bar{u}_{1}(p)\left(i\gamma_{5}\right)\frac{\slashed{p}+\slashed{q}+m_{2}}{(p+q)^2-m_{2}^2}\left(i\gamma_{5}\right)u_{1}(p)=-\frac{1}{m_{1}}\frac{m^2_{1}-m_{1}m_{2}+p_{\mu}q^{\mu}}{q^2-m_{2}^2+m_{1}^2+2qp}\bar{u}_{1}(p)u_{1}(p).
\end{eqnarray}
In comparison with (\ref{unequalscalar}) only the sign of the $m_{1}m_{2}$-term in the numerator is changed. Thus, the result in the approximation $m_{1}=0$ is given by (\ref{unequalscalarresult}) with $m_{2}$ replaced by $-m_{2}$
\begin{eqnarray}
C_{\bar{q}q}=\frac{m_{2}}{q^2-m_{2}^2}=\frac{-m_{2}}{m_{2}^2-q^2}\label{unequalpseudoscalarresult}.
\end{eqnarray}
\subsection{Equal mass case : the flavor conserving vector current correlator}
Vector mesons whose valence quark and antiquark have the same flavor are interpolated by the current:
\begin{eqnarray}
j_{\mu}(x)=\bar{q}(x)\gamma_{\mu}q(x)~~j^{\dagger}_{\mu}(x)=\bar{q}(x)\gamma_{\mu}q(x).
\end{eqnarray}
The standard plane-wave analysis discussed above leads to the expression:
\begin{eqnarray}
\bra{p}j_{\mu}(x)j^{\dagger}_{\nu}(0)\ket{p}=\nonumber\\
i\int d^4xe^{iqx}\wick{1-2,7-8}{4-5}{\bra{p},\bar{q}(x),\gamma_{\mu},q(x),\bar{q}(0),\gamma_{\nu},q(0),\ket{p}}+i\int d^4xe^{iqx}\wick{1-7}{4-5,2-8}{\bra{p},\bar{q}(x),\gamma_{\mu},q(x),\bar{q}(0),\gamma_{\nu},q(0),\ket{p}}\nonumber\\
=\bar{u}(p,s)\gamma_{\mu}\frac{\slashed{p}+\slashed{q}+m}{(p+q)^2-m^2}\gamma_{\nu}u(p)+\bar{u}(p)\gamma_{\nu}\frac{\slashed{p}-\slashed{q}+m}{(p-q)^2-m^2}\gamma_{\mu}u(p) \label{vectorcoefficient}.
\end{eqnarray}
The occurrence of open Lorentz indices is new; such indices did not occur in the calculations for scalar particles. Fortunately, the expressions for vector particles can be factorized into Lorentz tensors and Lorentz scalar parts as:
\begin{eqnarray}
\Pi_{\mu\nu}\left(q^2\right) =\left(g_{\mu\nu}-\frac{q_{\mu}q_{\nu}}{q^2}\right)\Pi_{T}\left(q^2\right)-\frac{q_{\mu}q_{\nu}}{q^2}\Pi_{L}\left( q^2\right).
\end{eqnarray}
Due to current conservation only the transverse part contributes to the amplitudes; the
longitudinal part can be neglected and is therefore removed from the amplitudes. After this subtraction only the transverse part times a Lorentz tensor is left; the transverse part itself is a Lorentz scalar. In order to keep the calculations simple only the transverse part is calculated. In comparison with the scalar and pseudoscalar coefficients, the additional work consists solely of the procedure which extracts the transverse part of the coefficient from the full expression. Three steps are necessary in order to accomplish the extraction.
\begin{enumerate}
\item Contract the amplitude with $g_{\mu\nu}$, and average this expression over the four dimensional Euclidean angle. The result is $3C_{\bar{q}q,T}\left(q^2\right)-C_{\bar{q}q,L}\left( q^2\right)$.
\item Contract the amplitude with $\frac{q_{\mu}q_{\nu}}{q^2}$, and average this expression over the four dimensional Euclidean angle. The result is $-C_{\bar{q}q,L}\left( q^2\right)$.
\item Subtract those two expressions and divide by 3. The result is $C_{\bar{q}q,T}\left(q^2\right)=C_{\bar{q}q}\left(q^2\right)$.
\end{enumerate}
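The tensor algebra behind steps 1 and 2 can be verified symbolically. The SymPy sketch below (symbol names are illustrative) builds $\Pi^{\mu\nu}$ from the decomposition above and confirms that contraction with $g_{\mu\nu}$ yields $3\Pi_{T}-\Pi_{L}$, while contraction with $\frac{q_{\mu}q_{\nu}}{q^2}$ yields $-\Pi_{L}$:

```python
import sympy as sp

PT, PL = sp.symbols('Pi_T Pi_L')
q0, q1, q2, q3 = sp.symbols('q0 q1 q2 q3')
q = sp.Matrix([q0, q1, q2, q3])     # contravariant components q^mu
g = sp.diag(1, -1, -1, -1)          # metric (equal to its own inverse here)
qsq = (q.T * g * q)[0]              # q_mu q^mu

# Pi^{mu nu} = (g^{mu nu} - q^mu q^nu / q^2) Pi_T - (q^mu q^nu / q^2) Pi_L
Pi = (g - q * q.T / qsq) * PT - (q * q.T / qsq) * PL

q_low = g * q                                         # covariant q_mu
contract_g = sp.simplify((g * Pi).trace())            # g_{mu nu} Pi^{mu nu}
contract_qq = sp.simplify((q_low.T * Pi * q_low)[0] / qsq)
```

This confirms that the subtraction in step 3 indeed isolates $3\Pi_{T}$, so that division by 3 gives the transverse part.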
Some comments on the calculation of Wilson coefficients can also be found in \cite{Bagan:1985zp}. As an example, the Wilson coefficient for the quark condensate in the vector current correlator is calculated to lowest order in the quark mass.
\begin{enumerate}
\item Contraction of (\ref{vectorcoefficient}) with $g_{\mu\nu}$
\begin{eqnarray}
\bar{u}(p)\frac{-2\left( \slashed{p}+\slashed{q}\right) +4m}{(p+q)^2-m^2}u(p)+\bar{u}(p)\frac{-2\left( \slashed{p}-\slashed{q}\right) +4m}{(p-q)^2-m^2}u(p)=\nonumber\\\bar{u}(p)\frac{2}{m}\left[\frac{m^2-pq}{(p+q)^2-m^2}+\frac{m^2+pq}{(p-q)^2-m^2}\right]u(p)\label{contractcoef01}.
\end{eqnarray}
The expressions have to be averaged over the four-dimensional Euclidean angle. This can be done directly with the expression in (\ref{contractcoef01}) or with each term of its Taylor expansion. The expansion is done in the scalar product $pq$; the result to lowest order in the mass is $\frac{4m}{q^2}$.
\item Contraction of (\ref{vectorcoefficient}) with $\frac{q_{\mu}q_{\nu}}{q^2}$
\begin{eqnarray}
\frac{\bar{u}(p)}{q^2}\frac{\slashed{q}\left(\slashed{p}+\slashed{q} +m\right)\slashed{q}}{(p+q)^2-m^2}u(p)+\frac{\bar{u}(p)}{q^2}\frac{\slashed{q}\left(\slashed{p}-\slashed{q} +m\right)\slashed{q}}{(p-q)^2-m^2}u(p)
=\nonumber\\
\frac{\bar{u}(p)}{q^2}\frac{2qp\slashed{q}+q^2\left(-\slashed{p}+\slashed{q}+m\right)}{(p+q)^2-m^2}u(p)+\frac{\bar{u}(p)}{q^2}\frac{2qp\slashed{q}+q^2\left(-\slashed{p}-\slashed{q}+m\right)}{(p-q)^2-m^2}u(p)=\nonumber\\\bar{u}(p)
\frac{1}{mq^2}\left[\frac{2(pq)^2+q^2pq}{(p+q)^2-m^2}+\frac{2(pq)^2-q^2pq}{(p-q)^2-m^2}\right]u(p)\label{contractcoef02}.
\end{eqnarray}
Again the Taylor expansion is considered. To lowest order in the mass the longitudinal part of the Wilson coefficient is $\frac{-2m}{q^2}$.
\item The last step is simple and the result is
\begin{eqnarray}
C_{\bar{q}q}=\frac{2m}{q^2}+\mathcal{O}\left(\frac{1}{q^4}\right) \label{lightvector}.
\end{eqnarray}
\end{enumerate}
The solution quoted in (\ref{lightvector}) can be used for light-quark systems. In the case of heavy-quark systems all orders in the Taylor expansion have to be considered. Hence, the expansion has to be summed up, or the average has to be taken directly without doing a Taylor expansion. The result where all orders in the mass are included is:
\begin{eqnarray}
C_{\bar{q}q}(q^2)=-\frac{2}{3m}\frac{(1-v)(2+v)}{(1+v)}.
\end{eqnarray}
The examples just shown illustrate the plane-wave method for the computation of Wilson coefficients. They have been arranged from simple to difficult, but it does not end at the level of vector particles. Tensor particles of arbitrary rank can also be considered. With increasing rank the Dirac structure and the corresponding projector become more and more complex. Additional complications arise in loop diagrams and in diagrams involving gluons. Furthermore, there are ambiguities in the interpolating currents (see section \ref{gluonmix}).\\
The plane-wave method is the most rudimentary method for the computation of Wilson coefficients. Every method possesses advantages and disadvantages. The advantage of the plane-wave method is that it can be used for the calculation of every Wilson coefficient without having to learn a great amount of new formalism. The disadvantage is the complexity of the calculations which have to be performed. There exist methods where the calculations are significantly less complex. The most important one besides the plane-wave method is the fixed-point gauge technique, also called the background field method. Many examples concerning the application of this method can be found in \cite{Narison:1989aq}.
\subsection{Diagrammatic representation of an OPE}
An OPE of the two-point correlator in a meson channel can be represented graphically. This is illustrated below for the first two terms in the OPE
\begin{eqnarray}
\bra{p}j_{\mu}(x)j^{\dagger}_{\nu}(0)\ket{p}=
\parbox[c]{3cm}{
\begin{fmffile}{loop}
\begin{fmfgraph*}(30,30)
\fmfleft{i}
\fmfright{o}
\fmf{dots,label.side=down}{i,v1}
\fmf{dots,label.side=down}{v2,o}
\fmf{plain,label.side=left,left,tension=.3}{v1,v2}
\fmf{plain,label.side=left,left,tension=.3}{v2,v1}
\fmfdotn{v}{2}
\end{fmfgraph*}
\end{fmffile}}
\mathds{1}+
\parbox[c]{3cm}{
\begin{fmffile}{quark}
\begin{fmfgraph*}(30,30)
\fmfleft{i}
\fmfright{o}
\fmf{dots,label.side=down}{i,v1}
\fmf{dots,label.side=down}{v2,o}
\fmf{phantom,label.side=left,left,tension=.3,tag=1}{v1,v2}
\fmf{plain,label.side=left,left,tension=.3}{v2,v1}
\fmfdotn{v}{2}
\fmfposition
\fmfipath{p[]}
\fmfiset{p1}{vpath1(__v1,__v2)}
\fmfi{plain}{subpath (0,length(p1)/4) of p1}
\fmfiv{d.sh=cross,d.ang=0,d.siz=5thick}{point length(p1)/4 of p1}
\fmfi{plain}{subpath (3length(p1)/4,length(p1)) of p1}
\fmfiv{d.sh=cross,d.ang=0,d.siz=5thick}{point 3length(p1)/4 of p1}
\end{fmfgraph*}
\end{fmffile}}
\mele{m\bar{q}q}.
\end{eqnarray}
This includes all terms in the OPE to lowest order in $\alpha_{S}$. The term in front of the unit operator is referred to as the perturbative term. This terminology stems from the fact that the practical OPE (see section \ref{opeinqcd}) is used where the Wilson coefficient of the unit operator is given by perturbative QCD.\\
The term in front of the quark condensate is similar to the scattering diagram used in the calculation of the corresponding Wilson coefficient above. In fact the diagrammatic representation can be understood as stemming from the calculations done in the plane-wave method. The crosses, which do not appear in the scattering diagram, symbolize the contact of the Wilson coefficient with the quark condensate.\\
Note that the connection between the scattering diagrams and those representing the Wilson coefficients is non-trivial due to the angle average in Euclidean space discussed above.
\subsection{Quark mass effects \label{gluonmix}}
The last sections have been very general and mathematical, but for applications of the OPE physical boundary conditions also have to be taken into account. A very important boundary condition is given by the mass of the quarks that are chosen to build up the interpolating currents. These masses are important for Wilson coefficients involving the gluon condensate.\\
Suppose the Wilson coefficient for the gluon condensate is calculated for the case of a current which is built up of two quarks with different flavor. Then two masses $m_{1}$ and $m_{2}$ enter the Wilson coefficient. Depending on the meson which is to be approximated by the current, various limits for the masses are interesting. In the case of a heavy-light system the limit $m_{2}\rightarrow 0$ is interesting, where $m_{1}$ is the mass of the heavy and $m_{2}$ the mass of the light quark. Surprisingly this limit exhibits a serious problem: the Wilson coefficient diverges in this limit! According to physical and mathematical arguments this divergence should not occur. Hence, further considerations are necessary. Methods to handle such divergences were already being developed in the original publications on the OPE \cite{Shifman:1978bx}, became fully developed in \cite{Reinders:1984sr} and were completed by \cite{Bagan:1985zp}.\\
Here a short introduction is given following \cite{Reinders:1984sr}. Sandwiching the OPE between one-gluon states gives a surprising result:
\begin{eqnarray}
\sum_{n}C_{n}\bra{k}O_{n}\ket{k}=C_{m_{1}}\bra{k}m_{1}\bar{q}_{1}q_{1}\ket{k}+C_{m_{2}}\bra{k}m_{2}\bar{q}_{2}q_{2}\ket{k}+C_{G}\bra{k}G^2\ket{k}+... \label{ope1gluon}.
\end{eqnarray}
Calculating the matrix element $\bra{k}m\bar{q}q\ket{k}$, i.e. writing down its OPE, leads to
\begin{eqnarray}
\bra{k}m\bar{q}q\ket{k}=\frac{1}{12}\frac{\alpha_{s}}{\pi}\bra{k}G^2\ket{k}+...~\label{qqope}.
\end{eqnarray}
We see from this expression that the quark matrix element is of the same order in $\alpha_{s}$ as $C_{G}$, since to lowest order the coefficient $C_{m}$ is of zeroth order in $\alpha_{s}$. Inserting (\ref{qqope}) into (\ref{ope1gluon}), the following expression is obtained
\begin{eqnarray}
\sum_{n}C_{n}\bra{k}O_{n}\ket{k}=\left[C_{G}+\frac{1}{12}\frac{\alpha_{s}}{\pi}C_{m_{1}}+\frac{1}{12}\frac{\alpha_{s}}{\pi}C_{m_{2}}\right]\bra{k}G^2\ket{k}+...~.
\end{eqnarray}
Therefore the coefficient of the gluon condensate differs from what it was naively expected to be. The physical version is given by:
\begin{eqnarray}
C_{G,physical}=C_{G}+\frac{1}{12}\frac{\alpha_{s}}{\pi}\left(C_{m_{1}}+C_{m_{2}}\right).
\end{eqnarray}
The physical interpretation of this mixing is that quark condensation also has to be taken into account in the Wilson coefficient of the gluon condensate. The mass of the quarks determines whether their condensation has to be taken into account in the Wilson coefficient of the gluon condensate. A simple rule can be formulated. In the case of heavy quarks the quark condensation does not have to be taken into account in the Wilson coefficient of the gluon condensate; moreover, the quark condensation can be neglected entirely in the corresponding OPE. In the case of light quarks the situation is reversed: the quark condensation has to be taken into account both in the gluon condensate coefficient and in the corresponding OPE.\\
These rules can be summarized in the statement that heavy quarks condense mainly through gluons while light quarks condense mainly through quarks. The most drastic example is the $G^3$ condensate. Light quarks decouple totally from this condensate (see \cite{Bagan:1985zp} and references therein); decoupling means that the Wilson coefficient is zero.\\
The effects just described depend on the relative size of the quark mass compared to the renormalization point $\mu$, that was introduced to separate long and short distance fluctuations. If the quark mass is smaller than $\mu$ quark condensation is favored, if it is bigger than $\mu$ gluon condensation of quarks is favored.\\
One way to understand the effect mathematically is to analyze the gluon condensate coefficient. The loop integral in the coefficient at large momentum $q^2$ receives its main contribution from two regions of virtual momenta. The first is $p^2\approx q^2$ and the second is $p^2\approx m^2$. For small quark masses the latter region cannot be included in the coefficient $C_{G}$ since some of the quark propagators are soft. This piece must be subtracted and absorbed into the matrix element $\bra{0}m\bar{q}q\ket{0}$. For heavy quarks, in contrast, the propagators are not soft in that region.\\
In summary this observation leads to the following expressions.
\begin{enumerate}
\item{For pure light quark systems the nonperturbative corrections are}
\begin{eqnarray}
C_{m_{1}}\bra{0}m_{1}\bar{q}_{1}q_{1}\ket{0}+C_{m_{2}}\bra{0}m_{2}\bar{q}_{2}q_{2}\ket{0}+C_{G,physical}\bra{0}G^2\ket{0}.
\end{eqnarray}
\item{For heavy-light systems $(m_{2}\gg m_{1})$ the corrections are}
\begin{eqnarray}
C_{m_{1}}\bra{0}m_{1}\bar{q}_{1}q_{1}\ket{0}+C_{m_{2}}\bra{0}m_{2}\bar{q}_{2}q_{2}\ket{0}+C_{G,physical}\bra{0}G^2\ket{0}
=C_{m_{1}}\bra{0}m_{1}\bar{q}_{1}q_{1}\ket{0}\nonumber\\+\left(C_{G}+\frac{1}{12}\frac{\alpha_{s}}{\pi}C_{m_{1}}\right)\bra{0}G^2\ket{0}+ C_{m_{2}}\left( \bra{0}m_{2}\bar{q}_{2}q_{2}\ket{0}+\frac{1}{12}\frac{\alpha_{s}}{\pi}\bra{0}G^2\ket{0}\right).
\end{eqnarray}
The heavy quark mass expansion for the heavy quark condensate $\bra{0}\bar{h}h\ket{0}=-\frac{1}{m_{h}}\frac{\alpha_{s}}{12\pi}\bra{0}G^2\ket{0}+...$ is used to eliminate the $C_{m_{2}}$ term. The result for the nonperturbative corrections is finally
\begin{eqnarray}
C_{m_{1}}\bra{0}m_{1}\bar{q}_{1}q_{1}\ket{0}+\left(C_{G}+\frac{1}{12}\frac{\alpha_{s}}{\pi}C_{m_{1}}\right)\bra{0}G^2\ket{0}.
\end{eqnarray}
\item{For pure heavy quark systems the relevant expression is}
\begin{eqnarray}
C_{m_{1}}\bra{0}m_{1}\bar{q}_{1}q_{1}\ket{0}+C_{m_{2}}\bra{0}m_{2}\bar{q}_{2}q_{2}\ket{0}+C_{G,physical}\bra{0}G^2\ket{0}=
C_{G}\bra{0}G^2\ket{0}\nonumber\\+C_{m_{1}}\left(m_{1}\bra{0}\bar{q}_{1}q_{1}\ket{0}+\frac{1}{12}\frac{\alpha_{s}}{\pi}\bra{0}G^2\ket{0}\right)+C_{m_{2}}\left(\bra{0}m_{2}\bar{q}_{2}q_{2}\ket{0}+\frac{1}{12}\frac{\alpha_{s}}{\pi}\bra{0}G^2\ket{0}\right) .
\end{eqnarray}
Again the heavy quark mass expansion for the heavy quark condensate is used. The result is
\begin{eqnarray}
C_{G}\bra{0}G^2\ket{0}.
\end{eqnarray}
It is identical to the one calculated without taking into account the coefficients of the $m_{i}\bar{q}_{i}q_{i}$ terms. The conclusion is that the heavy quark condensate has practically no effect on the polarization functions. An example is the charmonium system, which in the OPE is determined exclusively by gluonic effects.
\end{enumerate}
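The elimination of the $C_{m_{2}}$ terms used in items 2 and 3 can be made explicit. Inserting the leading term of the heavy quark mass expansion $\bra{0}\bar{h}h\ket{0}=-\frac{1}{m_{h}}\frac{\alpha_{s}}{12\pi}\bra{0}G^2\ket{0}+...$ into the bracket multiplying $C_{m_{2}}$ gives
\begin{eqnarray}
C_{m_{2}}\left(\bra{0}m_{2}\bar{q}_{2}q_{2}\ket{0}+\frac{1}{12}\frac{\alpha_{s}}{\pi}\bra{0}G^2\ket{0}\right)=C_{m_{2}}\left(-\frac{1}{12}\frac{\alpha_{s}}{\pi}\bra{0}G^2\ket{0}+\frac{1}{12}\frac{\alpha_{s}}{\pi}\bra{0}G^2\ket{0}\right)=0,
\end{eqnarray}
so at this order the heavy quark condensate term drops out exactly.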
In retrospect this section improves the naive picture of the gluon Wilson coefficient: the condensation via quarks or via gluons is entangled and cannot be treated separately.
\section{The condensates \label{condensates}}
In the last sections the Wilson coefficients were in principle determined, but what about the condensates? There exists no method to calculate them from first principles. All values found in the literature have been extracted from experiments. Some condensates have been directly extracted from the data, others involve additional estimations. Hence it is logical to start with the ones that have been directly extracted from the data. These are the matrix elements of the light quark condensates, some of the four-quark condensates, the gluon condensate and the mixed gluon condensate. The triple gluon condensate has also been directly extracted from data, but is not treated here.
\begin{itemize}
\item $\bra{0}\overline{u}u\ket{0}=-(0.250~GeV)^3=-1.5625\cdot10^{-2}~GeV^3$
\item $\bra{0}\overline{d}d\ket{0}=-(0.250~GeV)^3=-1.5625\cdot10^{-2}~GeV^3$
\item $\bra{0}\overline{s}s\ket{0}=0.8\times\bra{0}\overline{u}u\ket{0}=-1.25\cdot10^{-2}~GeV^3$
\end{itemize}
These values have been computed in the formalism of Gell-Mann, Oakes and Renner \cite{Narison:1989aq}. Then there is a set of matrix elements which have been computed with the help of QSRs.
\begin{itemize}
\item $\bra{0}\frac{\alpha_{s}}{\pi}G_{\mu\nu}^{c}G_{\mu\nu}^{c}\ket{0}=0.012~GeV^4=1.2\cdot10^{-2}~GeV^4$
\item $g\mele{\overline{q}\sigma^{\mu\nu}\frac{\lambda_{a}}{2}qG^{a}_{\mu\nu}}=M_{0}^{2}\mele{\overline{q}q}=0.80GeV^2\cdot(-1.5625\cdot10^{-2}GeV^3)=-0.0125GeV^5$
\end{itemize}
The four quark condensate and the triple gluon condensate have also been computed by using QSRs. All other matrix elements have been extracted from the data with less direct methods. In the case of the heavy quark condensates the expansion of the condensate reads
\begin{eqnarray}
\bra{0}\overline{h}h\ket{0}=-\frac{1}{12m_{h}}\bra{0}\frac{\alpha_{s}}{\pi}G_{\mu\nu}^{c}G_{\mu\nu}^{c}\ket{0}+...~.
\end{eqnarray}
The next-to-leading terms are multiplied by higher powers of $\frac{1}{m_{h}}$. Hence, they are suppressed in the heavy quark case. The estimates for the heavy quark condensates in first order are:
\begin{itemize}
\item $m_{c}=1.30~GeV \rightarrow \bra{0}\overline{c}c\ket{0}=-7.693\cdot10^{-4}~GeV^3$
\item $m_{b}=4.50~GeV \rightarrow \bra{0}\overline{b}b\ket{0}=-2.2\cdot10^{-4}~GeV^3$
\item $m_{t}=176.20~GeV \rightarrow \bra{0}\overline{t}t\ket{0}=-5.675\cdot10^{-6}~GeV^3$.
\end{itemize}
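These numbers can be cross-checked directly from the leading term of the expansion; a minimal sketch in Python, using the gluon condensate value quoted above:

```python
# Leading-order estimate of the heavy quark condensates,
#   <0| hbar h |0> ~ -(1/(12 m_h)) <0| (alpha_s/pi) G^2 |0>.
G2 = 0.012  # <0|(alpha_s/pi) G G|0> in GeV^4, value quoted above

def heavy_condensate(m_h):
    """Leading term of the heavy quark mass expansion, in GeV^3."""
    return -G2 / (12.0 * m_h)

for name, mass in [("c", 1.30), ("b", 4.50), ("t", 176.20)]:
    print(f"<0|{name}bar {name}|0> ~ {heavy_condensate(mass):.3e} GeV^3")
```

Running this reproduces the three estimates listed above.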
Thus the heavy quark condensates are negligible compared to the light quark condensates. The estimates of higher-dimensional quark condensates proceed along different lines. The basic idea behind the procedure is the assumption of vacuum state dominance. This idea is illustrated by the factorization of a simplified four-quark condensate
\begin{eqnarray}
\bra{0}\bar{q}q\bar{q}q\ket{0}=\bra{0}\bar{q}q\mathds{1}\bar{q}q\ket{0}=\bra{0}\bar{q}q\sum_{n}\ket{n}\bra{n}\bar{q}q\ket{0}=\sum_{n}\bra{0}\bar{q}q\ket{n}\bra{n}\bar{q}q\ket{0}\approx\bra{0}\bar{q}q\ket{0}\bra{0}\bar{q}q\ket{0}.
\end{eqnarray}
After the insertion of a complete set of states it is assumed that everything except the term with the vacuum states can be neglected. Hence, the vacuum is assumed to give the numerically biggest contribution. The result is of course only an estimate, but a very handy one. The simplified four-quark condensate has been factorized into the product of two two-quark condensates. These condensates have been estimated as shown before.\\
The four-quark condensate as defined in (\ref{operators}) has an interior structure which changes the factorization procedure slightly, but the calculation gives insight into an interesting fact and will therefore be shown here.\\
In many calculations $\Gamma_{1}$ and $\Gamma_{2}$ turn out to be $\lambda_{a}\gamma_{\mu}$, where $\lambda_{a}=2t_{a}$ and the $t_{a}$ are the Gell-Mann flavor matrices. The condensate for this case is factorized below, where only quarks of one color are considered
\begin{eqnarray}
\bra{0}\bar{q}\lambda_{a}\gamma_{\mu}q\bar{q}\lambda_{a}\gamma_{\mu}q\ket{0}=
\bra{0}\bar{q}_{i,\alpha}\gamma^{\alpha\beta}_{\mu}\lambda_{a}^{ij}q_{j,\beta}\bar{q}_{k}^{\eta}\gamma^{\mu}_{\eta\nu}\lambda_{a}^{kl}q_{l}^{\nu}\ket{0}=\gamma^{\alpha\beta}_{\mu}\lambda_{a}^{ij}\gamma^{\mu}_{\eta\nu}\lambda_{a}^{kl}\bra{0}\bar{q}_{i,\alpha}q_{j,\beta}\bar{q}_{k}^{\eta}q_{l}^{\nu}\ket{0}
\label{4quarkfacto}.
\end{eqnarray}
The interior structure is already factored out, but in this example a non-trivial index structure of the quark fields is given and one step remains before the vacuum saturation assumption can be applied. Two kinds of indices have to be considered next: the Dirac indices $\alpha$, $\beta$, $\eta$ and $\nu$ and the flavor indices $i$, $j$, $k$ and $l$. Vacuum expectation values of operators carry no spin or flavor and of course no color. Therefore the flavors and the anti-flavors have to cancel and the operator must be a scalar. To achieve this the fields are grouped into two pairs, each pair containing a $q$ and a $\bar{q}$ field operator. The indices within each pair have to be equal in order to cancel the properties they correspond to. Hence two combinations are possible.
\begin{itemize}
\item $\bar{q}_{i,\alpha}q_{j,\beta}\bar{q}_{k}^{\eta}q_{l}^{\nu}$
\item $\bar{q}_{i,\alpha}q_{l}^{\nu}\bar{q}_{k}^{\eta}q_{j,\beta}$
\end{itemize}
Therefore (\ref{4quarkfacto}) can be written as
\begin{eqnarray}
\gamma^{\alpha\beta}_{\mu}\lambda_{a}^{ij}\gamma^{\mu}_{\eta\nu}\lambda_{a}^{kl}\bra{0}\bar{q}_{i,\alpha}q_{j,\beta}\bar{q}_{k}^{\eta}q_{l}^{\nu}-\bar{q}_{i,\alpha}q_{l}^{\nu}\bar{q}_{k}^{\eta}q_{j,\beta}\ket{0}\label{4quarkgeordnet}.
\end{eqnarray}
The double summation in the right summand is cancelled after application of the vacuum states, and the minus sign stems from the anti-commutation relation that has to be used in order to change the position of the quark fields. As the last step the summation has to be performed. It is achieved by employing the relation
\begin{eqnarray}
\bra{0}\bar{q}_{A}q_{B}\ket{0}=\frac{\delta_{AB}}{N}\bra{0}\bar{q}q\ket{0}\label{superindex}
\end{eqnarray}
where the quark fields carry a superindex which collects all possible indices, and the quark fields without indices are the summed fields in which the indices are always equal. The normalization factor counts the summands in this sum; in the present example it is $4\times3=12$. Application of the relation (\ref{superindex}) to (\ref{4quarkgeordnet}) results in:
\begin{eqnarray}
\frac{\gamma^{\alpha\alpha}_{\mu}\lambda_{a}^{ii}\gamma^{\mu}_{\eta\eta}\lambda_{a}^{kk}}{N^2}\bra{0}\bar{q}_{i,\alpha}q_{i,\alpha}\bar{q}_{k}^{\eta}q_{k}^{\eta}\ket{0}-\frac{\gamma^{\alpha\eta}_{\mu}\lambda_{a}^{ik}\gamma^{\mu}_{\eta\alpha}\lambda_{a}^{ki}}{N^2}\bra{0}\bar{q}_{i,\alpha}q_{i}^{\alpha}\bar{q}_{k}^{\eta}q_{k,\eta}\ket{0}.
\end{eqnarray}
The corresponding index free notation is
\begin{eqnarray}
\frac{1}{N^2}\left[tr\left(\gamma_{\mu}\lambda_{a}\right)tr\left(\gamma^{\mu}\lambda_{a}\right)-tr\left(\gamma_{\mu}\lambda_{a}\gamma^{\mu}\lambda_{a} \right)\right]
\bra{0}\bar{q}q\bar{q}q\ket{0}.
\end{eqnarray}
At this stage it is recommendable to apply the vacuum saturation hypothesis. The result is
\begin{eqnarray}
\frac{1}{N^2}\left[tr\left(\gamma_{\mu}\lambda_{a}\right)tr\left(\gamma^{\mu}\lambda_{a}\right)-tr\left(\gamma_{\mu}\lambda_{a}\gamma^{\mu}\lambda_{a} \right)\right]
\bra{0}\bar{q}q\ket{0}^2=-\frac{16}{9}\bra{0}\bar{q}q\ket{0}^2
\end{eqnarray}
where the numerical result is also displayed. The full result also has to incorporate the possibility of different colors, but the structure of the factorization formula stays the same
\begin{eqnarray}
\bra{0}\bar{\psi}\Gamma_{1}\psi\bar{\psi}\Gamma_{2}\psi\ket{0}=
\frac{1}{N^2}\left[ tr\Gamma_{1}tr\Gamma_{2}-
tr\Gamma_{1}\Gamma_{2}\right]
\bra{0}\bar{\psi}\psi\ket{0}^2
\end{eqnarray}
as shown in \cite{Shifman:1978bx}. Here the normalization factor is $4\times3\times3=36$ and $\bra{0}\bar{\psi}\psi\ket{0}=\bra{0}(\bar{u}u+\bar{d}d+\bar{s}s)\ket{0}$. The factorization can also be achieved by using Fierz transformations. An important example for a factorization of a four quark condensate is:
\begin{eqnarray}
\alpha_{s}\mele{\overline{q}\gamma_{\mu}\lambda_{a}q\sum_{n=u,d,s}(\overline{n}\gamma^{\mu}\lambda_{a}n)}=-\frac{16}{9}\alpha_{s}\mele{\overline{q}q}^2=-\frac{16}{9}\cdot 1.75\cdot 10^{-4}~GeV^6=-3.11\cdot 10^{-4}~GeV^6
\end{eqnarray}
(see \cite{Narison:1989aq}). Along these lines all quark condensates can be factorized in the approximation where the vacuum saturates the whole condensate, but whether this assumption is valid is another question. There are hadrons in QCD which are very sensitive to the four-quark condensate. If the Sum Rules are fitted to the data, it turns out that the vacuum saturation hypothesis fails. In order to incorporate the violation of the vacuum saturation hypothesis a factor $k$ is introduced. The factorized four-quark condensate is multiplied by this factor
\begin{eqnarray}
\bra{0}\bar{\psi}\Gamma_{1}\psi\bar{\psi}\Gamma_{2}\psi\ket{0}=
k\cdot\frac{1}{N^2}\left[ tr\Gamma_{1}tr\Gamma_{2}-
tr\Gamma_{1}\Gamma_{2}\right]
\bra{0}\bar{\psi}\psi\ket{0}^2.
\end{eqnarray}
This factor takes into account the violation of the vacuum saturation hypothesis. If $k=1$ the hypothesis holds; if it is not, the hypothesis is violated. The four-quark condensate is still subject to new estimates, and the value for $k$ seems to change steadily \cite{Narison:1990cy,Leupold:1997dg}. However, in this work the validity of the vacuum saturation hypothesis is assumed.\\
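The Dirac and flavor trace algebra behind the factor $-16/9$ in the one-color example above can be verified numerically; a minimal sketch in pure Python with explicit Dirac matrices and Gell-Mann matrices, using $N=4\times3=12$ as in that example:

```python
# Numerical check of the trace algebra giving the factor -16/9
# in the one-color factorization example (Dirac x flavor, N = 4*3 = 12).
# Matrices are plain lists of lists of complex numbers.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def tr(A):
    return sum(A[i][i] for i in range(len(A)))

# Pauli matrices and helpers for the Dirac representation.
s1 = [[0, 1], [1, 0]]
s2 = [[0, -1j], [1j, 0]]
s3 = [[1, 0], [0, -1]]
I2 = [[1, 0], [0, 1]]
Z2 = [[0, 0], [0, 0]]
neg = lambda A: [[-x for x in row] for row in A]

def block(A, B, C, D):            # assemble a 4x4 matrix from 2x2 blocks
    return [A[i] + B[i] for i in range(2)] + [C[i] + D[i] for i in range(2)]

gamma = [block(I2, Z2, Z2, neg(I2)),   # gamma^0
         block(Z2, s1, neg(s1), Z2),   # gamma^1
         block(Z2, s2, neg(s2), Z2),   # gamma^2
         block(Z2, s3, neg(s3), Z2)]   # gamma^3
g = [1, -1, -1, -1]                    # metric, mostly-minus convention

# The eight Gell-Mann matrices, here acting in flavor space.
r3 = 3 ** 0.5
lam = [
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]],
    [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]],
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]],
    [[0, 0, 1], [0, 0, 0], [1, 0, 0]],
    [[0, 0, -1j], [0, 0, 0], [1j, 0, 0]],
    [[0, 0, 0], [0, 0, 1], [0, 1, 0]],
    [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]],
    [[1 / r3, 0, 0], [0, 1 / r3, 0], [0, 0, -2 / r3]],
]

# term1 = tr(gamma_mu lambda_a) tr(gamma^mu lambda_a): vanishes (all traceless).
term1 = sum(g[m] * tr(gamma[m]) * tr(gamma[m]) * tr(la) * tr(la)
            for m in range(4) for la in lam)
# term2 = tr(gamma_mu lambda_a gamma^mu lambda_a)
#       = tr_D(gamma_mu gamma^mu) * tr_F(lambda_a lambda_a) = 16 * 16.
term2 = sum(g[m] * tr(matmul(gamma[m], gamma[m])) * tr(matmul(la, la))
            for m in range(4) for la in lam)

N = 12
result = (term1 - term2) / N ** 2
print(result)   # -16/9 up to rounding
```

The first trace product vanishes because both $\gamma_{\mu}$ and $\lambda_{a}$ are traceless, and the contracted double trace gives $16\times16=256$, so the result is $-256/144=-16/9$.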
Estimates of higher gluon condensates exist, but are not reliable. Therefore, they have been extracted from the data.\\
The last remaining condensate on the list is
$g\mele{\overline{q}\sigma^{\mu\nu}\frac{\lambda_{a}}{2}qG^{a}_{\mu\nu}}$. Here also no
satisfactory method for its estimation or calculation exists, and this condensate has again been extracted from data \cite{Narison:1990cy}.\\
In conclusion it can be said that, in contrast to the Wilson coefficients, the knowledge of the condensates is poor and relatively large errors are connected with their values. Clearly, the reason for this is that they involve non-perturbative physics, while the Wilson coefficients can be computed perturbatively.
\section{The dispersion relations\label{dispersionrelations}}
The central elements of QCD Sum Rules are the OPE and the dispersion relations. Above, the OPE has been introduced; here the dispersion relations will be reviewed. First the basic formulas are introduced and different regularization methods are discussed. It turns out that everything is based on only one dispersion relation. The derivation of this relation will be explained in the subsequent section.
\subsection{The formulae used \label{formulae}}
The relation on which all other relations are based is given by
\begin{eqnarray}
\Pi\left(q^2\right)=\frac{1}{\pi}\int_{0}^{\infty}\frac{Im\Pi(s)}{(s-q^2)} ds \label{dispersion}
\end{eqnarray}
and states that a correlator is known if its imaginary part along the positive real axis is known. In the derivation only causality has been assumed. In practical applications (\ref{dispersion}) has to be checked for convergence: $Im\Pi(s)$ can grow with $s$ and the integral can be divergent. In such a case the integral has to be subtracted. To this end, a Taylor expansion of the correlator is performed
\begin{eqnarray}
\Pi\left(q^2\right)=\Pi\left(0\right)+\left[\frac{d}{dq^2}\Pi\left(q^2\right)\right]_{q^2=0}q^2+...\nonumber\\+\frac{1}{(n-1)!}\left[\left( \frac{d}{dq^2}\right) ^{n-1}\Pi\left(q^2\right)\right]_{q^2=0}\left( q^2\right)^{n-1}+\frac{(q^2)^n}{\pi}\int\frac{Im\Pi(s)}{s^n(s-q^2)} ds \label{subtracted}.
\end{eqnarray}
The first $n$ terms are the Taylor polynomial while the last term is the remainder of the expansion. Thus, the integral is split into two parts: a polynomial in $q^2$ and a subtracted dispersion integral. Divergences of the correlator are eliminated by renormalizing the coefficients of the polynomial. For a logarithmic divergence one subtraction is sufficient
\begin{eqnarray}
\Pi\left(q^2\right)=\Pi\left(0\right)+\frac{q^2}{\pi}\int\frac{Im\Pi(s)}{s(s-q^2)} ds \label{onesubtraction}.
\end{eqnarray}
Renormalization of the amplitude ensures that $\Pi\left(0\right)$ is either finite or zero. If $\Pi\left(0\right)=0$,
\begin{eqnarray}
\Pi\left(q^2\right)=\frac{q^2}{\pi}\int\frac{Im\Pi(s)}{s(s-q^2)} ds.
\end{eqnarray}
The divergent integral in (\ref{dispersion}) is finite after one subtraction due to the $\frac{1}{s}$ factor. Another way of treating such integrals is the method of power moments $M_{n}\left(q^2\right)$, also called Hilbert moments. These moments are derivatives of the dispersion relation (\ref{dispersion}) with respect to $q^2$
\begin{eqnarray}
\frac{d}{dq^2}\Pi\left(q^2\right)=\frac{1}{\pi}\int\frac{Im\Pi(s)}{(s-q^2)^2} ds
\end{eqnarray}
\begin{eqnarray}
\left(\frac{d}{dq^2}\right)^2\Pi\left(q^2\right)=\frac{2}{\pi}\int\frac{Im\Pi(s)}{(s-q^2)^3} ds
\end{eqnarray}
\begin{eqnarray}
\left(\frac{d}{dq^2}\right)^3\Pi\left(q^2\right)=\frac{2\cdot 3}{\pi}\int\frac{Im\Pi(s)}{(s-q^2)^4} ds
\end{eqnarray}
\begin{eqnarray}
\left(\frac{d}{dq^2}\right)^4\Pi\left(q^2\right)=\frac{2\cdot 3\cdot 4}{\pi}\int\frac{Im\Pi(s)}{(s-q^2)^5} ds.
\end{eqnarray}
Or in compact form:
\begin{eqnarray}
M_{n}\left(q^2\right)=\frac{1}{n!}\left(\frac{d}{dq^2}\right)^n\Pi\left(q^2\right)=\frac{1}{\pi}\int\frac{Im\Pi(s)}{(s-q^2)^{(n+1)}} ds\qquad n\geq 1 .\label{diff}
\end{eqnarray}
At $q^2=0$ the $M_{n}$ coincide with the coefficients of the subtracted dispersion relation. This time not the correlator itself is computed but its derivatives; this approach will also prove useful in further considerations. Again the convergence of the integral improves with increasing $n$ until the $n$ is reached at which it becomes finite.\\
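Relation (\ref{diff}) can be checked on a toy model; a minimal sketch in Python, assuming the made-up spectral function $Im\Pi(s)=\pi e^{-s}$ (not a physical correlator), which compares a numerical derivative of $\Pi$ with the $n=1$ moment:

```python
# Toy check of the Hilbert-moment relation
#   M_n(q^2) = (1/n!) (d/dq^2)^n Pi(q^2) = (1/pi) int Im Pi(s)/(s-q^2)^(n+1) ds
# for the made-up spectral function Im Pi(s) = pi * exp(-s).
import math

def im_pi(s):
    return math.pi * math.exp(-s)

def moment(n, q2, smax=60.0, steps=200000):
    """(1/pi) * integral_0^smax of Im Pi(s)/(s-q2)^(n+1) ds, trapezoid rule."""
    h = smax / steps
    total = 0.5 * (im_pi(0.0) / (0.0 - q2) ** (n + 1)
                   + im_pi(smax) / (smax - q2) ** (n + 1))
    for i in range(1, steps):
        s = i * h
        total += im_pi(s) / (s - q2) ** (n + 1)
    return total * h / math.pi

def pi_corr(q2):
    # Pi(q^2) from the unsubtracted dispersion relation (n = 0 kernel)
    return moment(0, q2)

q2, eps = -2.0, 1e-4
deriv = (pi_corr(q2 + eps) - pi_corr(q2 - eps)) / (2 * eps)  # d Pi / d q^2
m1 = moment(1, q2)
print(deriv, m1)  # the two values agree
```

The agreement of the two printed numbers is exactly the statement that differentiating under the dispersion integral raises the power of the kernel by one.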
\begin{figure}[htbp]
\begin{center}
\begin{picture}(0,0)%
\includegraphics{cutpoles.eps}%
\end{picture}%
\setlength{\unitlength}{4144sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\begin{picture}(2488,1907)(2928,-2476)
\put(5223,-1697){$Re\left[s\right]$}%
\put(4366,-1995){$threshold~ value$}%
\put(4231,-691){$Im\left[s\right]$}%
\end{picture}%
\caption{The analytic structure of the correlators used in this work. On the positive real axis above the threshold begins a cut on which additional poles occur. The cut corresponds to physical processes.\label{cutpoles}}
\end{center}
\end{figure}
A further method to improve the convergence of the dispersion relation (\ref{dispersion}) is the Borel transformation. This method also uses derivatives, not a finite number of them but an infinite number. The transformation is accomplished by applying the Borel operator
\begin{eqnarray}
\widehat{\mathcal{B}}=\frac{1}{(n-1)!}(-q^2)^n\left(\frac{d}{dq^2}\right)^n,\qquad -q^2\rightarrow\infty,~n\rightarrow\infty,\qquad\frac{-q^2}{n}=M^2~\mathrm{fixed}.
\end{eqnarray}
The action of the operator is to change the kernel of the integral
\begin{eqnarray}
\widehat{\mathcal{B}}\Pi\left(q^2\right)=\widehat{\mathcal{B}}\frac{1}{\pi}\int_{0}^{\infty}\frac{Im\Pi(s)}{(s-q^2)}ds= \frac{1}{\pi}\int_{0}^{\infty}e^{-\frac{s}{M^2}}Im\Pi(s)ds.
\end{eqnarray}
Hence, after the Borel transformation high momentum components are exponentially suppressed for $q^2<0$. For many QSR applications, the Borel transformation is the optimal choice.
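The limit behind the Borel kernel can be checked numerically. Acting with $\widehat{\mathcal{B}}$ on $1/(s-q^2)$ and setting $-q^2=nM^2$ gives $\frac{1}{M^2}\left(1+\frac{s}{nM^2}\right)^{-(n+1)}$, which tends to $\frac{1}{M^2}e^{-s/M^2}$ for $n\rightarrow\infty$ (the overall $1/M^2$ is a matter of convention and is often absorbed into the definition of the transform). A minimal sketch:

```python
# The Borel operator applied to the dispersion kernel 1/(s - q^2),
# with -q^2 = n*M^2 held fixed, gives
#   (1/M2) * (1 + s/(n*M2))**(-(n+1))  ->  (1/M2) * exp(-s/M2)  as n -> infinity.
import math

def borel_kernel(s, M2, n):
    return (1.0 / M2) * (1.0 + s / (n * M2)) ** (-(n + 1))

s, M2 = 1.0, 1.0
limit = math.exp(-s / M2) / M2
for n in (10, 1000, 100000):
    # each line approaches the exponential limit as n grows
    print(n, borel_kernel(s, M2, n), limit)
```

The printout shows the finite-$n$ kernel converging to the exponential, which is the origin of the suppression of high momentum components.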
\subsection{Derivation of the dispersion relation}
The analytic properties of the correlators together with the residue theorem \cite{Bronstein} form the basis of the derivation. In a sense the analytic structure of the correlators used in this work is simple. They have a cut and poles on the positive real axis above some threshold. The remaining part of the complex plane over which they are defined is free of singularities (see figure \ref{cutpoles}).
Hence, according to the residue theorem an integration over a contour which excludes the shaded region in figure \ref{cutpoles} vanishes. However, a look at the integral in (\ref{dispersion}) reveals that the integrand has an additional pole at $s=q^2$. The residue of this pole is the reason why the dispersion integral equals $\Pi\left(q^2\right)$. Instead of a general derivation of the dispersion relation a simple illustrative example is discussed, which mimics the analytic structure of the correlators concerned here. The particular example chosen is the square root, which has a cut on the negative real axis. The analytic structure of the square root is given in figure \ref{rootcut}.
\begin{figure}[htbp]
\begin{center}
\begin{picture}(0,0)%
\includegraphics{rootcut.eps}%
\end{picture}%
\setlength{\unitlength}{4144sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\begin{picture}(1631,1589)(2689,-1823)
\put(3568,-349){$Im\left[z\right]$}%
\put(4187,-1136){$Re\left[z\right]$}%
\end{picture}%
\caption{The analytic structure of the square root. The cut of the square root is defined to be on the negative real axis. \label{rootcut}}
\end{center}
\end{figure}
The only difference from the correlators is that there are no poles on the cut of the square root. Everything that follows from here on is simply an application of the residue theorem, which is given by
\begin{eqnarray}
\sum_{k=1}^{n}Res(f,z_{k})=\frac{1}{2\pi i}\oint f(z)dz \label{residuetheorem}.
\end{eqnarray}
In this example the integral equation is given by
\begin{eqnarray}
\sum_{k=1}^{n}Res\left(\frac{\sqrt{z}}{z-z_{0}},z_{k}\right)=\frac{1}{2\pi i}\oint \frac{\sqrt{z}}{z-z_{0}}dz .
\end{eqnarray}
The choice of the integration contour is restricted by three requirements
\begin{enumerate}
\item Exclusion of the singularities along the negative real axis.
\item Inclusion of the pole given by $\frac{1}{z-z_{0}}$.
\item Reduction of the integral to the form given in (\ref{dispersion}).
\end{enumerate}
Thus, the integration contour can be chosen as in figure \ref{rawcontour}.
\begin{figure}[htbp]
\begin{center}
\begin{picture}(0,0)%
\includegraphics{rootrawcontour.eps}%
\end{picture}%
\setlength{\unitlength}{4144sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\begin{picture}(2073,2074)(2737,-1818)
\put(3946,-1006){$R$}%
\put(4411,-961){$Re\left[z\right]$}%
\put(3871,119){$Im\left[z\right]$}%
\end{picture}%
\caption{The integration contour for the dispersion relation before the limit $R\rightarrow\infty$ is taken.\label{rawcontour}}
\end{center}
\end{figure}
The next goal is to eliminate the integrations along the circles. Therefore, the limits $R\rightarrow\infty$ and $r\rightarrow 0$ are taken, where $R$ is the radius of the big circle and $r$ is the radius of the small circle in figure \ref{rawcontour}. In many cases the integrand vanishes along the circles in this limit. In this example the integrand does not vanish on the circle whose radius is sent to infinity, but a simple subtraction eliminates this problem (see equation (\ref{subtracted}))
\begin{eqnarray}
\sqrt{z_{0}}=\sqrt{0}+\frac{z_{0}}{\pi}\int\frac{\sqrt{z}}{z(z-z_{0})} dz=\frac{z_{0}}{\pi}\int\frac{\sqrt{z}}{z(z-z_{0})} dz.
\end{eqnarray}
On the circles the integral is given by
\begin{eqnarray}
i\frac{z_{0}}{\pi}\int\frac{\sqrt{R}}{Re^{i\phi}-z_{0}}e^{i\frac{\phi}{2}} d\phi.
\end{eqnarray}
Hence, the integrand vanishes for $R\rightarrow\infty$ as well as for the small circle in the limit $r\rightarrow 0$. Therefore only the contour along the negative axis survives after the limits are taken. This is a huge step forward, but the integration still runs over the real and imaginary part of the square root whereas the one in (\ref{dispersion}) runs only over the imaginary part.\\
The solution of this puzzle is given by another feature of the square root. On the integration contour that runs above the negative real axis the square root has the value $+i\sqrt{\abs{z}}$; on the one below, its value is $-i\sqrt{\abs{z}}$. Such discontinuities also occur in correlators, but in general their real part is not zero. The square root, however, is purely imaginary on both contours. Moreover, the integrations along the negative real axis can be combined into a single integration because the integration on the contour where the square root is $-i\sqrt{\abs{z}}$ runs in the opposite direction to the one where the root is $+i\sqrt{\abs{z}}$. Therefore the two integrations are equal and can be added. Thus the integration takes the form
\begin{eqnarray}
\sum_{k=1}^{n}Res(f,z_{k})=\frac{z_{0}}{\pi i}\int_{-\infty}^{0} \frac{\sqrt{z}}{z(z-z_{0})} dz =\frac{z_{0}}{\pi}\int_{-\infty}^{0} \frac{Im\left[ \sqrt{z}\right] }{z(z-z_{0})} dz\qquad\sqrt{z}=iIm\left[ \sqrt{z}\right]\label{rootdisp}.
\end{eqnarray}
Only the left-hand side has to be modified. It is given by the residues of the integrand in (\ref{rootdisp}), but the integrand has only one singularity inside the integration region. Hence, there is only one residue. The calculation of this residue is simple and can be performed as follows: the square root is expanded around $z_{0}$, and the residue is the coefficient of the $\frac{1}{z-z_{0}}$ term
\begin{eqnarray}
\sqrt{z}=\sqrt{z_{0}}+\frac{1}{2}\frac{1}{\sqrt{z_{0}}}(z-z_{0})-\frac{1}{8}\frac{1}{z_{0}^{3/2}}(z-z_{0})^2+...
\end{eqnarray}
\begin{eqnarray}
\frac{\sqrt{z}}{z-z_{0}}=\frac{\sqrt{z_{0}}}{z-z_{0}}+\frac{1}{2}\frac{1}{\sqrt{z_{0}}}-\frac{1}{8}\frac{1}{z_{0}^{3/2}}(z-z_{0})+...~.
\end{eqnarray}
The analysis of the Laurent expansion that has just been derived shows that the residue is given by $\sqrt{z_{0}}$, and the integration (\ref{rootdisp}) finally becomes
\begin{eqnarray}
\sqrt{z_{0}}=\frac{z_{0}}{\pi}\int_{-\infty}^{0}\frac{Im\left[\sqrt{z}\right] }{z(z-z_{0})} dz \label{finalroot}
\end{eqnarray}
and for the example of the square root the dispersion relation has been verified. Still, a few remarks are necessary in order to extend the derivation to correlators. The integration contour for correlators is mirrored with respect to the imaginary axis and then the zero point is shifted to the threshold value shown in figure \ref{cutpoles}. This is done because the positive real axis should be excluded from the inside of the integration contour. Hence, it has to be justified why the integration along two contours can be replaced by the integration over one contour and why the integration over real and imaginary part can be replaced by an integration over the imaginary part. \\
The solution was already given in the square-root example, but there is a property of correlators that has not been explained yet. Along the integrations considered here the following relations hold
\begin{eqnarray}
Im\left[\Pi\left(q^2+i\epsilon\right) \right]=-Im\left[\Pi\left(q^2-i\epsilon\right) \right]
\end{eqnarray}
\begin{eqnarray}
Re\left[\Pi\left(q^2+i\epsilon\right) \right]=Re\left[\Pi\left(q^2-i\epsilon\right) \right]
\label{realpart}
\end{eqnarray}
where $q^2$ is real and above the threshold. The case of the imaginary part is already known from the square-root example, where the result was that the integrations add up to one integration. From (\ref{realpart}) it follows that the real parts cancel. Thus only the integration over the imaginary part survives.\\
As the last step the residues of the correlators would have to be calculated, but this is a difficult task that is not going to be performed here. Instead of another theoretical derivation some practical applications are presented. The integral in (\ref{finalroot}) is now calculated.\\
There are two cases that have to be distinguished.
\begin{enumerate}
\item $z_{0}$ does not lie in the region that has been excluded from the integration.
\item $z_{0}$ lies in the region that has been excluded from the integration.
\end{enumerate}
The first case is the one which is interesting in applications. Therefore it is explained for the square-root example. In the first case the pole is located inside the integration contour indicated in figure \ref{rawcontour}. Then relation (\ref{finalroot}) can be used and only a real integration has to be performed
\begin{eqnarray}
\sqrt{z_{0}}=\frac{z_{0}}{\pi}\int_{-\infty}^{0}\frac{\sqrt{\abs{z}}}{z(z-z_{0})}dz=-i\frac{2\sqrt{z_{0}}}{\pi}\left(ArTanh\left(i\frac{\sqrt{\abs{z}}}{\sqrt{z_{0}}}\right) \right)_{-\infty}^{0}=-i\frac{2\sqrt{z_{0}}}{\pi}\left(i\frac{\pi}{2}\right)=\sqrt{z_{0}}.
\end{eqnarray}
Thus, the dispersion relation in subtracted form is verified in the case where $z_{0}$ does not lie on the cut of the square root. \\
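The integral just evaluated can also be checked numerically. A minimal sketch (Python; the values of $z_0$ are arbitrary test inputs) uses the substitution $z=-u^2$, under which the integrand becomes $2/(u^2+z_0)$ on the positive half-line:

```python
import math

def dispersion_sqrt(z0, u_max=2000.0, n=400_000):
    """Evaluate (z0/pi) * Int_{-inf}^0 sqrt(|z|)/(z (z - z0)) dz numerically.

    The substitution z = -u**2 turns the integrand into 2/(u**2 + z0);
    the midpoint rule on [0, u_max] approximates the half-line integral
    (the neglected tail is of order 2/u_max).
    """
    h = u_max / n
    acc = 0.0
    for i in range(n):
        u = (i + 0.5) * h
        acc += 2.0 / (u * u + z0)
    return z0 / math.pi * acc * h

# the dispersion relation reproduces sqrt(z0) up to the truncation error
print(dispersion_sqrt(4.0))
```

The result agrees with $\sqrt{z_0}$ to the accuracy of the truncated integration, which is the numerical counterpart of the verification above.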
The lesson that can be learned from this section is that a correlator is fully determined by its imaginary part above some threshold. In the following sections this part of the correlator will be identified with the non-perturbative resonance physics, which cannot be calculated by perturbative QCD. Hence, if the full imaginary part is known and the correlator is calculated at some point using a dispersion relation, the correlator at this point also includes non-perturbative information.\\
Dispersion relations have long been known and have been used in electrodynamics under the name Kramers-Kronig relations. A nice review from a modern perspective is given in \cite{Weinberg:1996kr}.
\section{The QCD Sum Rule (QSR) Method \label{sumrules}}
In the preceding sections the operator product expansion and the dispersion relations were discussed. By combining those tools one obtains the so called QCD Sum Rules. The QSR allow the calculation of properties which are connected to n-point correlators; in a sense, these correlators can be calculated by QSR. This thesis is restricted to correlators whose external states are vacuum states, but the QSR can deal with arbitrary external states, addressing e.g. the in-medium properties of hadrons.\\
The objects which are investigated are mainly 2-point correlators. Such correlators contain much physical information. The first example of a 2-point correlator that occurs in QFT is the propagator, for example $\bra{0}T\left\{q(x)\bar{q}(0)\right\}\ket{0}$. The propagator has a clear physical interpretation: it is the amplitude for the propagation of a particle from the space-time point $0$ to $x$. In the literature \cite{Peskin:1995ev} two cases are analyzed, one without interaction and one with interaction. Without interaction everything is simpler; the propagator is fully calculable and has just one singularity. In momentum space it is a pole connected with the mass of the particle, which occurs when the particle is on shell, $q^2=m^2$. With interactions the singularity structure of the propagator is much richer. In addition to the pole corresponding to the mass of the particle, poles corresponding to bound states and/or resonances occur.\\
With QCD Sum Rules one can calculate the properties of bound states by matching the correlation function at a scale between the perturbative and non-perturbative regimes. The poles themselves cannot be obtained in a perturbative approach, so non-perturbative methods are needed. There is a domain, at large space-like momenta, where the OPE agrees with the correlation function; in this domain the correlators have no poles. Thus, an ansatz for the imaginary part of the propagator can be made. Furthermore, the space-like region can be connected to the time-like region, where the physical singularities of the correlator are located, by means of a dispersion relation. Then the propagator on the real axis below threshold can be calculated from the ansatz using a dispersion relation, and the result can be matched with the OPE in the space-like region. This is how QCD Sum Rules work.\\
Such a procedure allows a comparison between phenomenology and theory because for some correlators the imaginary parts can be measured. Therefore, in such cases one side of the dispersion relation is given by the phenomenology and the other one by theory. These remarks are characteristic for the way in which QSRs are calculated and evaluated.
\subsection{The connection between the correlator and the spectral density}
There is a direct connection between a spectral function, sometimes called spectral density, and the correlator in the corresponding channel. The vector-current correlator is of particular interest. It factorizes into a Lorentz tensor and a Lorentz scalar part
\begin{eqnarray}
\Pi_{\mu\nu}(q_{\alpha})=\left(g_{\mu\nu}-\frac{q_{\mu}q_{\nu}}{q^2}\right)\Pi(q^2)=\left(g_{\mu\nu}q^2-q_{\mu}q_{\nu}\right)\rho(q^2).
\end{eqnarray}
The spectral function is proportional to the imaginary part of $\Pi(q^2)$. The spectral function in the vector channel can be extracted from data on $e^+e^-$- or $\mu^+\mu^-$- annihilation \cite{Peskin:1995ev}. In other channels no direct measurements exist.
\subsection{The model for the 2-point correlator}
As discussed in the previous section, a QCD Sum Rule is a dispersion relation connecting the phenomenological spectral density with the OPE
\begin{eqnarray}
\frac{1}{\pi}\int_{0}^{\infty}\frac{Im\left[\Pi_{pheno}(s)\right] }{(s-q^2)} ds =OPE\left(q^2\right).\label{dispersiontheory}
\end{eqnarray}
The momentum domain has to be restricted to large space-like momenta. Hence, the substitution $Q^2=-q^2$ is convenient to avoid the unimportant minus sign.\\
Obviously a QSR can be used in two ways. The first one is to take the right hand side as given and to calculate the properties of the left hand side, or vice versa. Both possibilities are used and have to be used, as will be discussed below. The evaluation of a QSR is a subject in itself and is best illustrated through examples. The classical way to use QSR is to make an ansatz for the phenomenological part given by a narrow resonance and a continuum, where the resonance is represented by a delta function and the continuum by a step function
\begin{eqnarray}
\Pi_{pheno}(s)=\bra{0}j\ket{\psi}^2\delta(s-M_{res}^2)+const.\cdot\theta(s-t_{c}).
\end{eqnarray}
If a tiny width is given to the $\delta$-function, the phenomenological part takes the form sketched in figure \ref{oneresonance}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.9]{phaenomenologisch01.eps}
\caption{Model for the phenomenological part of the sum rules. \label{oneresonance}}
\end{center}
\end{figure}
Obviously such a model is very simplified, since the particle that is represented by the $\delta$-resonance is usually a resonance with a non-zero width. Thus, a resonance with a finite lifetime is represented by a stable particle. Moreover, the radial excitations are all missing. The step function is an approximation for the continuum. The first step to make calculations handy is to use, instead of a step function, the imaginary part of the perturbative expression for the correlator in the domain where $q^2$ is positive. This is natural because the perturbative expression is a part of the OPE which belongs to the system and is therefore known. Other improvements will be shown later.\\
The right hand side is the theoretical side, which is given by the OPE. However, the OPE can not be calculated exactly; it has to be truncated. In section \ref{importantunimportant} the criteria for convergence of the OPE were discussed. Furthermore, the terms in the truncated OPE are not known exactly, but only to a certain order in $\alpha_{S}$ or to a certain accuracy of estimation, as discussed in sections \ref{wilsoncoefficient} and \ref{condensates}. Finally, the exact Wilsonian OPE is not employed but rather an approximation, the so called practical OPE or SVZ expansion (see section \ref{opeinqcd}). Hence, the theoretical side of the QSR also contains uncertainties and simplifications. Nevertheless, QSRs have been and are heavily used in non-perturbative physics with great success and with a satisfactory accuracy. In many cases QSR are the only tools available on the market. \\
Well known applications of QSRs are the $\rho$- and the $J/\psi$-meson. The analysis of these two systems is instructive and will be presented in the following section.
\section{The $J/\psi$ QSR as a classical QSR \label{charmonium}}
In this section QSRs for the $J/\psi$-meson are employed to determine the gluon condensate $\bra{0}\frac{\alpha_{S}}{\pi}G_{\mu\nu}^a G_{\mu\nu}^a\ket{0}$.
\subsection{The spectrum of $c\overline{c}$-mesons}
\begin{table}[htbp]
\begin{tabular}{||c|l|l|l|l|l|l||}
\hline
$I^G\left(J^{PC}\right)$& groundstate & 1.excitation& 2.excitation& 3.excitation& 4.excitation& 5.excitation\\
\hline
$0^+\left(0^{-+}\right)$&$\eta_{c}$&&&&&\\
\hline
$0^-\left(1^{--}\right)$& $J/\psi$ &$\psi(2S)$&$\psi(3770)$&$\psi(4040)$&$\psi(4160)$&$\psi(4415)$\\
\hline
$0^+\left(0^{++}\right)$&$\chi_{c0}$&&&&&\\
\hline
$0^+\left(1^{++}\right)$&$\chi_{c1}$&&&&&\\
\hline
$0^+\left(2^{++}\right)$&$\chi_{c2}$&&&&&\\
\hline
\end{tabular}
\caption{The known charmonium spectrum. The excited states correspond to radial excitations.}\label{ccradial}
\end{table}
In table \ref{ccradial} the known spectrum of $c\overline{c}$-mesons (charmonium) is shown. In the vector channel $(J^{PC}=1^{--})$ the ground state and several radial excitations are known, while in the other channels only the ground states are known. Such a situation is characteristic for hadron spectroscopy: in many cases only the ground state corresponding to given quantum numbers is known. Therefore the calculation of the ground state properties would already be a remarkable success, and for this system QSR do the job very well. SVZ realized during the work on their paper \cite{Novikov:1977dq} that $c\overline{c}$-mesons are bound exclusively by gluons. Thus, it is reasonable to attempt to calculate the mass of a $c\overline{c}$-meson with a QSR whose OPE contains only the perturbative term and the lowest order gluon condensate term, although the rules of section \ref{importantunimportant} say that all terms up to dimension six should be included. It turns out that the attempt is successful, but to achieve the goal a sacrifice has to be made. Section \ref{condensates} quotes the value of the gluon condensate $\bra{0}\frac{\alpha_{s}}{\pi}G_{\mu\nu}^{c}G_{\mu\nu}^{c}\ket{0}$ and names the $J/\Psi$ sum rule as its source. The gluon condensate has not been calculated from first principles up to now and therefore it has to be measured. The $c\overline{c}$-mesons are the best systems for this purpose because of the structure of the OPE and because the spectrum of the $J/\Psi$-meson can be and has been measured to a high accuracy.\\
\subsubsection{The phenomenological part of the sum rules}
The spectral function of the $J/\Psi$-meson and its radial excitations has been measured in $\mu^+ \mu^-$-annihilation (see for example \cite{Peskin:1995ev}). The resonances can be approximated by $\delta$-functions, and the spectral function then takes the form shown in figure \ref{manyresonance}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.9]{phaenomenologisch02.eps}
\caption{Sketch of the spectral function of the $J/\Psi$-meson as it is known today. \label{manyresonance}}
\end{center}
\end{figure}
The parameterization of the spectral function is very intuitive:
\begin{eqnarray}
Im\Pi(s)=\sum_{resonances}\bra{0}j_{\mu}\ket{n_{res}}^2\delta\left(s-m^2_{R}\right)+\frac{1}{4\pi}\left(1+\frac{\alpha_{S}(s)}{\pi}\right)\Theta(s-t_{c}) \label{fullspectrum}.
\end{eqnarray}
In the sum rule for the ground state of the states with $J^P=1^-$ an approximation to the spectrum is used where all excited states are neglected and the ground state is approximated by a $\delta$-function. Thus, the effective spectral function is given by figure \ref{jpsiapproxspec}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.9]{phaenomenologisch03.eps}
\caption{Approximation for the spectral function of the $J/\Psi$-meson used in the sum rules for the calculation of the gluon condensate. \label{jpsiapproxspec}}
\end{center}
\end{figure}
The evaluation of the sum rule will show that this approximation is reasonable: in the evaluation process the contributions of the neglected parts of the spectrum are suppressed.
\subsubsection{The theoretical (QCD) part of the sum rules}
The current which approximates the $J/\Psi$-meson is given by
\begin{eqnarray}
j_{\mu}(x)=\overline{c}(x)\gamma_{\mu}c(x)
\end{eqnarray}
and has exactly the quark structure and quantum numbers of the $J/\Psi$. In the OPE for the 2-point correlator of this current only two terms are kept
\begin{eqnarray}
i\int d^4x e^{-iq^{\mu}x_{\mu}}\mele{j_{\mu}(x)j_{\nu}(0)}=\left( g_{\mu\nu}q^2-q_{\mu}q_{\nu}\right) \left[ C_{pert}\cdot I+C_{G^2}\cdot\bra{0}\frac{\alpha_{s}}{\pi}G_{\alpha\beta}^{c}G_{\alpha\beta}^{c}\ket{0}\right] \label{opejpsi},
\end{eqnarray}
where the coefficients are given by:
\begin{enumerate}
\item{perturbative contribution $C_{pert}$}\\ \\
The approximation to the perturbative coefficient used here contains the bare loop and
$\alpha_{S}$ contributions
\begin{eqnarray}
Im\left[
\parbox[c]{3cm}{
\begin{fmffile}{QCDMesonpert01}
\begin{fmfgraph*}(30,30)
\fmfleft{i}
\fmfright{o}
\fmf{dots}{i,v1}
\fmf{dots}{v2,o}
\fmf{plain,left,tension=.2}{v1,v2}
\fmf{plain,left,tension=.2}{v2,v1}
\fmfdotn{v}{2}
\end{fmfgraph*}
\end{fmffile}}+
\parbox[c]{3cm}{
\begin{fmffile}{QCDMesonpert02}
\begin{fmfgraph*}(30,30)
\fmfleft{i}
\fmfright{o}
\fmf{dots}{i,v1}
\fmf{dots}{v2,o}
\fmf{plain,left,tension=0.2,tag=1}{v1,v2}
\fmf{plain,left,tension=0.2,tag=2}{v2,v1}
\fmfdot{v1,v2}
\fmfposition
\fmfipath{p[]}
\fmfiset{p1}{vpath1(__v1,__v2)}
\fmfiset{p2}{vpath2(__v2,__v1)}
\fmfi{gluon,left,tension=0.2}{point length(p1)/2 of p1 -- point length(p2)/2 of p2}
\end{fmfgraph*}
\end{fmffile}}+
\parbox[c]{3cm}{
\begin{fmffile}{QCDMesonpert03}
\begin{fmfgraph*}(30,30)
\fmfleft{i}
\fmfright{o}
\fmf{dots}{i,v1}
\fmf{dots}{v2,o}
\fmf{plain,left,tension=0.2,tag=1}{v1,v2}
\fmf{plain,left,tension=0.2,tag=2}{v2,v1}
\fmfdot{v1,v2}
\fmfposition
\fmfipath{p[]}
\fmfiset{p1}{vpath1(__v1,__v2)}
\fmfiset{p2}{vpath2(__v2,__v1)}
\fmfi{gluon,right,tension=1.5}{point length(p1)/5 of p1 -- point 4length(p1)/5 of p1}
\end{fmfgraph*}
\end{fmffile}}
\right]
\nonumber\\
=Im\left[C_{pert}(s)\right]=\Theta(s-4m^2)\frac{1}{8\pi}v(s)(3-v^2(s))~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
\nonumber\\\times\left\{1+\frac{4}{3}\alpha_{s}\left[
\frac{\pi}{2v(s)}-\frac{v(s)+3}{4}\left(
\frac{\pi}{2}-\frac{3}{4\pi}\right)\right]\right\}~~~~~~~~~
v(s)=\sqrt{1-\frac{4m^2}{s}}
\label{vectorpert}.
\end{eqnarray}
Dispersion relations can be used to calculate the real part of the amplitude. It is instructive to plot the perturbative part of the OPE. The plot is given in figure \ref{vectorpertplot} and shows only the amplitude corresponding to the first diagram in (\ref{vectorpert}), the one loop diagram.
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.9]{vectorperturbative.eps}
\caption{The one loop contribution as a function of the total momentum squared. The dashed line is the real part and the full line the imaginary part.\label{vectorpertplot}}
\end{center}
\end{figure}
Higher order corrections do not change the overall structure of the plot. Thus, the lowest order contribution shows the important features of the perturbative part of the 2-point correlator. The imaginary part is going to be used as an approximation for the continuum in the model for the spectral function. The amplitude is real below the threshold; above threshold the imaginary part is non-zero.
\item{nonperturbative contribution $C_{G^2}$}\\ \\
The only nonperturbative contribution stems from the gluon condensate term. The coefficient is determined by the bare loop with two external gluon legs (see section \ref{wilsoncoefficient})
\begin{eqnarray}
C_{G^2}\left(q^2\right) =\left[
\parbox[c]{3cm}{
\begin{fmffile}{QCDMesongluon01}
\begin{fmfgraph*}(30,30)
\fmfleft{i}
\fmfright{o}
\fmf{dots}{i,v1}
\fmf{dots}{v4,o}
\fmf{plain,left,tension=0.0001,tag=1}{v1,v4}
\fmf{plain,left,tension=0.0001,tag=2}{v4,v1}
\fmf{phantom}{v1,v2}
\fmf{phantom,tag=3}{v2,v3}
\fmf{phantom}{v3,v4}
\fmffixedx{0}{v2,v3}
\fmffixedy{0.4cm}{v2,v3}
\fmffixedy{0}{v1,v4}
\fmfposition
\fmfipath{p[]}
\fmfiset{p1}{vpath1(__v1,__v4)}
\fmfiset{p2}{vpath2(__v4,__v1)}
\fmfiset{p3}{vpath3(__v2,__v3)}
\fmfi{gluon,right}{point 0 of p3 -- point length(p2)/2 of p2}
\fmfi{gluon,right}{point length(p3) of p3 -- point length(p1)/2 of p1}
\fmfiv{d.sh=cross,d.ang=0,d.siz=5thick}{point 0 of p3}
\fmfiv{d.sh=cross,d.ang=0,d.siz=5thick}{point length(p3) of p3}
\fmfdot{v1}
\fmfdot{v4}
\end{fmfgraph*}
\end{fmffile}}+
\parbox[c]{3cm}{
\begin{fmffile}{QCDMesongluon02}
\begin{fmfgraph*}(30,30)
\fmfleft{i}
\fmfright{o}
\fmf{dots}{i,v1}
\fmf{dots}{v4,o}
\fmf{plain,left,tension=0.01,tag=1}{v1,v4}
\fmf{plain,left,tension=0.01,tag=2}{v4,v1}
\fmf{phantom}{v1,v2}
\fmf{phantom,tag=3}{v2,v3}
\fmf{phantom}{v3,v4}
\fmffixedx{0.4cm}{v2,v3}
\fmffixedy{0cm}{v2,v3}
\fmffixedy{0}{v1,v4}
\fmfposition
\fmfipath{p[]}
\fmfiset{p1}{vpath1(__v1,__v4)}
\fmfiset{p2}{vpath2(__v4,__v1)}
\fmfiset{p3}{vpath3(__v2,__v3)}
\fmfi{gluon,right}{point length(p1)/5 of p1 -- point 0 of p3}
\fmfi{gluon,right}{point 4length(p1)/5 of p1 -- point length(p3) of p3}
\fmfdot{v1}
\fmfdot{v4}
\fmfiv{d.sh=cross,d.ang=0,d.siz=5thick}{point 0 of p3}
\fmfiv{d.sh=cross,d.ang=0,d.siz=5thick}{point length(p3) of p3}
\end{fmfgraph*}
\end{fmffile}}
\right]
\nonumber\\=\frac{\alpha_{S}}{48\pi q^4}\left\lbrace \frac{3(v^2+1)(v^2-1)^2}{v^4}\frac{1}{2v}\ln\left(\frac{v+1}{v-1}\right)-\frac{v^4-2v^2+3}{v^4} \right\rbrace\nonumber\\
v=\sqrt{1-\frac{4m^2}{q^2}}.~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
\label{jpgluon}
\end{eqnarray}
In contrast to the simple diagrammatic representation, the analytic expression for the coefficient is rather involved. Its plot is given in figure \ref{wilsongluonplot}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.9]{wilsongluon.eps}
\caption{Plot of $C_{G}\left(q^2\right)$, the dashed line is the real part and the full line the
imaginary part. The singularities above $q^2=0$ occur at $4m^2$.\label{wilsongluonplot}}
\end{center}
\end{figure}
The coefficient $C_{G}(q^2)$ in (\ref{jpgluon}) is valid for large $\abs{q^2}$. Hence, an unreasonable behavior of the coefficient is expected when $q^2$ approaches zero. In figure \ref{wilsongluonsingularitiesplot} the coefficient is plotted in the momentum region close to zero. There a highly oscillating behavior is found, which signals the invalidity of the coefficient in that momentum domain. This effect is not visible in figure \ref{wilsongluonplot}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.9]{wilsongluonsing.eps}
\caption{Plot of $C_{G}\left(q^2\right)$ showing highly oscillating singularities at $q^2=0$, the
dashed line is the real part and the full line the
imaginary part.\label{wilsongluonsingularitiesplot}}
\end{center}
\end{figure}
\end{enumerate}
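The threshold structure of the perturbative spectral function (\ref{vectorpert}) can be made concrete with a small numerical sketch (Python; the quark mass $m$ and the fixed value of $\alpha_S$ are illustrative inputs, and the running of $\alpha_S$ is ignored):

```python
import math

def im_c_pert(s, m=1.4, alpha_s=0.3):
    """Imaginary part of the perturbative coefficient, eq. (vectorpert).

    m is an illustrative c-quark mass in GeV and alpha_s is held fixed
    (no running).  The result vanishes below threshold s = 4 m**2.
    """
    if s <= 4.0 * m * m:
        return 0.0
    v = math.sqrt(1.0 - 4.0 * m * m / s)
    born = v * (3.0 - v * v) / (8.0 * math.pi)
    bracket = 1.0 + (4.0 / 3.0) * alpha_s * (
        math.pi / (2.0 * v)
        - (v + 3.0) / 4.0 * (math.pi / 2.0 - 3.0 / (4.0 * math.pi))
    )
    return born * bracket

# zero below threshold, non-zero above it
print(im_c_pert(4.0), im_c_pert(16.0), im_c_pert(100.0))
```

In the high-energy limit with the $\alpha_S$ correction switched off, the expression approaches $1/4\pi$, which is the continuum height used in (\ref{fullspectrum}).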
Thus, the sum rule has been constructed and can now be evaluated. Moment sum rules are employed for this task; see section \ref{dispersionrelations} for the basis needed for the further considerations.
\subsubsection{Evaluation of the Sum Rule}
If (\ref{fullspectrum}) is used in the dispersion relation, the following equation is obtained
\begin{eqnarray}
\frac{1}{n!}\left(-\frac{d}{dQ^2}\right)^n\Pi\left(Q^2\right)=\sum_{res}\frac{\bra{0}j_{\mu}\ket{n_{res}}^2}{(m_{R}^2+Q^2)^{(n+1)}}+\frac{1}{4\pi^2}\int\frac{1+\frac{\alpha_{S}(s)}{\pi}}{(s+Q^2)^{(n+1)}}\Theta(s-t_{c})ds~~n\geq 1 \label{disp}
\end{eqnarray}
where $n$ has to be bigger than 0 to make the integral convergent. This relation allows the calculation of the $J/\psi$ mass, the mass of the ground state with the quantum numbers $J^P=1^-$.
A simple rearrangement will show that this is true
\begin{eqnarray}
M_{n}(Q^2)=
\sum_{res}\frac{\bra{0}j_{\mu}\ket{n_{res}}^2}{(m_{R}^2+Q^2)^{(n+1)}}+\frac{1}{4\pi^2}\int\frac{1+\frac{\alpha_{S}(s)}{\pi}}{(s+Q^2)^{(n+1)}}\Theta(s-t_{c})ds.
\end{eqnarray}
The $M_{n}$'s are called the Moments. First of all the term corresponding to the $J/\psi$-resonance is factored out
\begin{eqnarray}
M_{n}(Q^2)=\frac{\bra{0}j_{\mu}\ket{n_{J/\psi}}^2}{(m_{J/\psi}^2+Q^2)^{(n+1)}}\nonumber\\
\times\left( 1+\frac{(m_{J/\psi}^2+Q^2)^{(n+1)}}{\bra{0}j_{\mu}\ket{n_{J/\psi}}^2}
\left\{\sum_{res>J/\psi}\frac{\bra{0}j_{\mu}\ket{n_{res}}^2}{(m_{res}^2+Q^2)^{(n+1)}}+\frac{1}{4\pi^2}\int\frac{1+\frac{\alpha_{S}(s)}{\pi}}{(s+Q^2)^{(n+1)}}\Theta(s-t_{c})ds\right\}\right).
\end{eqnarray}
The quantity
\begin{eqnarray}
\delta_{n}(Q^2)=\frac{(m_{J/\psi}^2+Q^2)^{(n+1)}}{\bra{0}j_{\mu}\ket{n_{J/\psi}}^2}\nonumber\\
\times \left\{\sum_{res>J/\psi}\frac{\bra{0}j_{\mu}\ket{n_{res}}^2}{(m_{res}^2+Q^2)^{(n+1)}}+\frac{1}{4\pi^2}\int\frac{1+\frac{\alpha_{S}(s)}{\pi}}{(s+Q^2)^{(n+1)}}\Theta(s-t_{c})ds\right\}
\end{eqnarray}
is convenient because $\delta_{n}$ is suppressed for large $n$
\begin{eqnarray}
\left(\frac{m_{J/\psi}^2+Q^2}{m_{res}^2+Q^2}\right)^{n+1}\longrightarrow 0,~ n\longrightarrow \infty~~~~~~~~~\nonumber\\
\left(\frac{m_{J/\psi}^2+Q^2}{s+Q^2}\right)^{n+1}\longrightarrow 0,~ n\longrightarrow \infty, s>t_{c}.
\end{eqnarray}
This means that:
\begin{eqnarray}
M_{n}(Q^2)=\frac{\bra{0}j_{\mu}\ket{n_{J/\psi}}^2}{(m_{J/\psi}^2+Q^2)^{(n+1)}}\left( 1+\delta_{n}(Q^2)\right).
\end{eqnarray}
Now the ratio
\begin{eqnarray}
r_{n}(Q^2)=\frac{M_{n}}{M_{n-1}}=\frac{1}{m_{J/\psi}^2+Q^2}\cdot\frac{1+\delta_{n}}{1+\delta_{n-1}}
\end{eqnarray}
is considered. Depending on the channel, the factor $\frac{1+\delta_{n}}{1+\delta_{n-1}}$ can sometimes be replaced by 1. The spectrum of the $J^P=1^-$ $c\overline{c}$-mesons is such a case, which can be tested because the spectrum has been measured. This is equivalent to replacing the spectrum in figure \ref{manyresonance} with the one of figure \ref{jpsiapproxspec}. If the $r_{n}$'s are also computed from the OPE, the right hand side of the dispersion relation, theoretical predictions are possible. \\
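The suppression of $\delta_{n}$ can be illustrated with a toy spectrum consisting of one $\delta$-resonance and a flat continuum (a sketch in Python; the resonance mass, coupling and continuum threshold are illustrative numbers, not fitted values):

```python
import math

def moment(n, q2=1.0, m_res=3.1, coupling=1.0, t_c=16.0,
           s_max=2000.0, steps=40_000):
    """Moment M_n for a toy spectrum: one delta resonance plus a flat continuum.

    All parameters (GeV units) are illustrative.  The continuum integral is
    done with the midpoint rule and truncated at s_max.
    """
    pole = coupling / (m_res**2 + q2) ** (n + 1)
    h = (s_max - t_c) / steps
    cont = 0.0
    for i in range(steps):
        s = t_c + (i + 0.5) * h
        cont += 1.0 / (4.0 * math.pi**2 * (s + q2) ** (n + 1))
    return pole + cont * h

# the ratio r_n = M_n / M_{n-1} approaches 1/(m_res^2 + Q^2) as n grows
limit = 1.0 / (3.1**2 + 1.0)
for n in (2, 5, 10):
    print(n, moment(n) / moment(n - 1), limit)
```

Already at moderate $n$ the ratio is dominated by the resonance term, which is exactly the mechanism exploited in the moment analysis.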
A simple example is the calculation of the $J/\psi$ mass. The c-quark mass is known sufficiently well to impose bounds on it, and therefore there is only one really free parameter remaining, the gluon condensate. Hence, the mass calculation is possible, but the c-quark mass and the gluon condensate have to be fitted. Thus, the gluon condensate can be determined by adjusting it to a value which reproduces the mass of the $J/\psi$. The result is shown in figure \ref{jpsimoments}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.9]{jpsimomentsumrule.eps}
\caption{Plot of the moments $M_{n}(-1~GeV^2)$ belonging to the QSR of the $J/\Psi$-meson. The full line is the measured mass of the $J/\Psi$, while the dots are the moments at a squared momentum of $-1~GeV^2$. The zeroth and the first moment do not converge due to the dispersion integral used to calculate the perturbative part of the Wilson coefficient. In the domain of the plateau, the contributions of the perturbative and non-perturbative terms in the OPE are optimized.\label{jpsimoments}}
\end{center}
\end{figure}
At small $n$ the continuum is not suppressed strongly enough and gives the dominant contribution to the phenomenological side. On the theoretical side the perturbative part of the OPE gives the main contribution to the Moments. Fortunately, with growing $n$ the continuum is suppressed and the resonance gives the dominant contribution to the phenomenological side. Unfortunately, with further increasing $n$ the contribution of the non-perturbative terms to the theoretical side becomes the dominant contribution. The plateau lies in the region where the resonance dominates the phenomenological part of the sum rules and the perturbative part of the OPE dominates the theoretical side.\\
The plateau determines the mass of the $J/\psi$. In the $n$ region of the plateau, the contribution of the perturbative term to the moments dominates and the non-perturbative term of the OPE (\ref{opejpsi}) is smaller by factors of 10. The plateau starts when the non-perturbative contribution to the moments is a hundred times smaller than the perturbative one and ends when the non-perturbative contribution is equal to the perturbative one. To the left of the plateau everything is perturbation theory and there are no bound states; hence, there is no plateau. To the right of the plateau the non-perturbative terms dominate, but they are not known accurately enough and therefore the sum rule does not converge any more. In the domain of the plateau a balance is struck between the convergence of perturbation theory and the lack of knowledge concerning the higher terms in the OPE. The c-quark mass and the gluon condensate are fitted in order to reproduce the mass of the system. The result is $m_{c}=1.4~GeV$, and the gluon condensate takes the value quoted in section \ref{condensates}. In this example the moments have been evaluated at $q^2=-1~GeV^2$; other values are of course also allowed.\\
Historically this was the starting point for the calculations of QSR \cite{Shifman:1978bx}. The gluon condensate has up to now never been calculated from first principles. It is multiplied by the strong coupling constant in order to form a renormalization group invariant.\\
Now this value of the gluon condensate can be used to calculate QSR for other systems. Reinders, Rubinstein and Yazaki performed a rigorous test of the gluon condensate on the $c\overline{c}$ spectrum. They repeated the calculation just performed for every quantum number $J^P$ for which the ground state was measured. These calculations confirmed the gluon condensate given in section \ref{condensates} (see \cite{Reinders:1984sr}). Moreover, many other systems were analyzed in this period.\\
Many systems containing two quarks with nearly equal masses have been analyzed, often using Borel transformed QSR (Borel Sum Rules) in order to improve the convergence of the sum rules. The mass of the $\rho$-meson given by QSR is remarkably close to the measured value and the plateau is very long (see \cite{Shifman:1978bx}). In the period from 1978 to 1988 hadrons seemed to be describable in the way just outlined, but with time problems occurred. They occurred in exotic systems like hybrids or glueballs and in heavy-light systems. In the case of the D-meson, the sum rules converge very badly and no broad plateau is found.
\section{Borel transformation \label{borel}}
Section \ref{charmonium} discussed the general pattern of how QSR are used. It is always a dispersion relation consisting of a phenomenological and a theoretical (QCD) part. These parts are the left and the right hand side of the dispersion relation (see for example (\ref{dispersiontheory})). The phenomenological part of the QSR can be determined by the intuition of the user, by input from alternative theoretical approaches or from measurements, while the theoretical part is given by the OPE. The input from both sources is afflicted with errors. The spectral functions are in most cases approximations and can not be directly measured, with the exception of the spectral functions of the vector-current correlators. Furthermore, the OPE is also an approximation because it is a truncated expansion. This is a truncation on several levels. First, it is an expansion in $\frac{1}{Q^2}$. Second, the Wilson coefficients are approximated. Finally, the condensates are approximate. It is difficult to judge the combined effect of these approximations. Nevertheless, the QSR are in many cases very successful.\\
The circumstance that ensures the reliability of the QSR is the existence of a working window. If the OPE were known completely for the given spectrum, the dots in figure \ref{jpsimoments} would not change much for small $n$, but they would change drastically for the $n$ to the right of the plateau: they would stay close to the line which marks the observed mass of the system and approach it. There exist exactly solvable models which substantiate this statement; one is given in \cite{Narison:1989aq}. In the ideal case just mentioned the OPE would not be truncated in any instance, and it is the truncation that is responsible for the break down of the plateau. To the right of the plateau the non-perturbative terms in the OPE are more important than the perturbative ones, but they are not known well enough for reliable calculations; therefore the results become unreliable and the plateau breaks down. The working window is thus defined by the criterion of the dominance of perturbation theory, in other words of the perturbative part of the OPE. This part is known to a sufficient accuracy, and the errors that stem from the non-perturbative part are negligible as long as the perturbative part dominates in the evaluation of the sum rule. A common shorthand for this requirement is that asymptotic freedom still holds in the domain where QSR are used, which can be very misleading. The existence of such a window is a hypothesis and has to be checked from case to case.\\
Many publications just use a guess for the spectral function. Hence, the situation can arise that the OPE is under control, but the form of the spectrum is incorrect. In this scenario the sum rules may be completely arbitrary. There may be a plateau, but the properties corresponding to it are incorrect or the plateau can simply be missing and no statement is made even if the OPE is correct. In this case there would be no working window and the OPE would be discarded.\\
These two cases always have to be considered in the evaluation of a sum rule. Hence, a possibility to reduce the sensitivity of the sum rules would be more than welcome, and is sometimes mandatory. Fortunately such a possibility exists: the Borel transformation, which was introduced in section \ref{dispersionrelations} in order to improve the convergence of a dispersion integral.\\
The Borel transformation improves the QSR in three ways.
\begin{enumerate}
\item The dispersion integral is regularized because the Borel transformed sum rule is exponentially suppressed at high energies.
\item The exponential suppression also minimizes the influence of excited states.
\item The unknown terms in the OPE are suppressed.
\end{enumerate}
Hence, the Borel transformation improves the convergence of the sum rule if the ansatz for the spectral function is a good approximation for the corresponding OPE. The Borel transformation is performed by applying the operator
\begin{eqnarray}
\widehat{\mathcal{B}}=\frac{1}{(n-1)!}(Q^2)^n\left(-\frac{d}{dQ^2}\right)^n,\qquad Q^2\rightarrow\infty,\qquad n\rightarrow\infty,\qquad\frac{Q^2}{n}=M^2~fixed
\end{eqnarray}
where $M^2$ is the so called Borel parameter.
The improving features of the Borel transformation can be illustrated by two simple calculations.
\begin{enumerate}
\item
Application of $\widehat{\mathcal{B}}$ to $\left(\frac{1}{s+Q^2}\right)$:
\begin{eqnarray}
\widehat{\mathcal{B}}\left(\frac{1}{s+Q^2}\right)=\frac{1}{(n-1)!}(Q^2)^n\left(-\frac{d}{dQ^2}\right)^n\frac{1}{s+Q^2}=\frac{1}{(n-1)!}(Q^2)^n\frac{n!}{(s+Q^2)^{n+1}}\nonumber\\=\frac{n}{Q^2}\frac{(Q^2)^{n+1}}{(s+Q^2)^{n+1}}
=\frac{n}{Q^2}\frac{1}{(1+\frac{s}{Q^2})^{n+1}}=\frac{n}{Q^2}\frac{1}{(1+\frac{s\frac{Q^2}{n}}{n})^{n+1}}\rightarrow\frac{1}{M^2}e^{-\frac{s}{M^2}}
\label{borel2}.
\end{eqnarray}
An exponential kernel in the dispersion integral suppresses the integrand for large $s$ and makes the integral finite. At the same time the spectral function is suppressed for large $s$.
\item
Application of $\widehat{\mathcal{B}}$ to $\left(\frac{1}{Q^2}\right)^m$:
\begin{eqnarray}
\widehat{\mathcal{B}}\left(\frac{1}{Q^2}\right)^m=\frac{1}{(n-1)!}(Q^2)^n\left(-\frac{d}{dQ^2}\right)^n\left(\frac{1}{Q^2}\right)^m\nonumber\\
=\frac{1}{(n-1)!}(Q^2)^n\cdot m(m+1)\cdot ...\cdot(m+n-1)\cdot\left(Q^2\right)^{-m-n}\nonumber\\=\frac{1}{(m-1)!}(Q^2)^{-m}\frac{(m+n-1)!}{(n-1)!}\nonumber\\
\underbrace{=}_{n-1\rightarrow n}\frac{1}{(m-1)!}(Q^2)^{-m}\frac{(m+n)!}{n!}\underbrace{=}_{Stirling}\frac{1}{(m-1)!}(Q^2)^{-m}\frac{\left(\frac{m+n}{e}\right)^{m+n}\sqrt{2\pi (m+n)}}{\left(\frac{n}{e}\right)^{n}\sqrt{2\pi n}}\nonumber\\
=\frac{1}{(m-1)!}(Q^2)^{-m}\frac{(m+n)^{m}}{e^{m}}\frac{(m+n)^{n}}{n^{n}}\sqrt{\frac{m+n}{n}}\nonumber\\=\frac{1}{(m-1)!}\left(\frac{m+n}{Q^2}\right)^{m}\frac{\left(1+\frac{m}{n}\right)^n}{e^{m}}\sqrt{1+\frac{m}{n}}
\rightarrow\frac{1}{(m-1)!}\left(\frac{1}{M^2}\right)^{m}.
\end{eqnarray}
Clearly the Borel transformation leads to an improved convergence of the OPE.
\end{enumerate}
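Both limits can be checked numerically by evaluating the transform at finite $n$ with $Q^2=nM^2$, i.e. $Q^2/n=M^2$ held fixed (a sketch in Python; the values of $s$, $M^2$ and $m$ are arbitrary test inputs):

```python
import math

def borel_pole(s, M2, n):
    """Finite-n Borel transform of 1/(s+Q^2) at Q^2 = n*M^2, cf. eq. (borel2)."""
    q2 = n * M2
    return (n / q2) * (1.0 + s / q2) ** (-(n + 1))

def borel_power(m, M2, n):
    """Finite-n Borel transform of (1/Q^2)**m at Q^2 = n*M^2."""
    q2 = n * M2
    rising = math.prod(range(n, n + m))  # equals (m+n-1)! / (n-1)!
    return rising / (math.factorial(m - 1) * q2**m)

s, M2, m = 3.0, 2.0, 3
for n in (10, 100, 10_000):
    print(n, borel_pole(s, M2, n), borel_power(m, M2, n))

# limiting values: exp(-s/M2)/M2 and 1/((m-1)! * M2**m)
print(math.exp(-s / M2) / M2, 1.0 / (math.factorial(m - 1) * M2**m))
```

With growing $n$ the finite-$n$ values approach the limits $e^{-s/M^2}/M^2$ and $1/((m-1)!\,M^{2m})$ derived above.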
For the perturbative part of the OPE the Borel transformation of the logarithm is needed.\\
Application of $\widehat{\mathcal{B}}$ to $\ln\left( Q^2\right)$ yields
\begin{eqnarray}
\widehat{\mathcal{B}}\cdot\ln\left( Q^2\right) =\frac{1}{(n-1)!}(Q^2)^n\left(-\frac{d}{dQ^2}\right)^n \ln\left( Q^2\right) =\frac{1}{(n-1)!}(Q^2)^n\left(-\frac{(n-1)!}{(Q^2)^n}\right)=-1.
\end{eqnarray}
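This result can also be verified numerically for small $n$, using symmetric finite differences for the $n$-th derivative (an independent cross-check; the value $Q^2=1$ is chosen arbitrarily):

```python
import math

def nth_deriv(f, x, n, h=1e-2):
    """Symmetric finite-difference approximation of the n-th derivative."""
    return sum((-1)**k * math.comb(n, k) * f(x + (n / 2 - k) * h)
               for k in range(n + 1)) / h**n

def borel_log(Q2, n):
    """B_n[ln Q2] = (Q2^n / (n-1)!) * (-d/dQ2)^n ln(Q2)."""
    return Q2**n / math.factorial(n - 1) * (-1)**n * nth_deriv(math.log, Q2, n)

for n in (1, 2, 3, 4):
    print(n, borel_log(1.0, n))   # close to -1 for every n
```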
From the point of view of mathematical consistency it would be preferable to evaluate a QSR with moment sum rules, where the convergence-improving features are much weaker. Generally a sum rule is more reliable if its moment version converges. However, there are sum rules where the moment sum rules do not converge but the Borel sum rules do. A nice pedagogical review of this topic is given in \cite{Shifman:1999mk}, where a sum rule is constructed which does not converge without being Borel transformed.
\section{Heavy-Light Systems \label{hl_systems}}
In the sector of heavy-light systems great progress has been made in recent times. A major breakthrough was a discovery made in 2004. In the sector of particles with charm and strangeness two new particles were found, the $D^*_{s}(2317)^{\pm}$ with quantum numbers $J^{P}=0^+$ and the $D_{s}(2460)^{\pm}$ with quantum numbers $J^{P}=1^+$. This discovery confirmed theoretical predictions already made in 1992 \cite{Nowak:1992um}. Together with the already known states $D^{\pm}_{s}$ with $J^P=0^-$ and $D^{*\pm}_{s}$ with $J^P=1^-$ a hypermultiplet is formed which consists of two doublets (see figure \ref{hypermulti}).
\begin{figure}[htbp]
\begin{center}
\begin{picture}(0,0)%
\includegraphics{hypermultiplett.eps}%
\end{picture}%
\setlength{\unitlength}{4144sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\begin{picture}(2979,1910)(616,-2563)
\put(958,-1132){$J$}%
\put(3580,-800){$D_{s}(2460)^{\pm},J^P=1^+$}%
\put(3580,-2493){$D^*_{s}(2317)^{\pm},J^P=0^+$}%
\put(1257,-1630){$P$}%
\put(631,-826){$D^{*\pm}_{s},J^P=1^-$}%
\put(631,-2491){$D^{\pm}_{s},J^P=0^-$}%
\end{picture}%
\caption{Hypermultiplet built up by the most prominent $D_{s}$-states. \label{hypermulti}}
\end{center}
\end{figure}
The doublets are given by two points connected by a dashed line; the coordinates of the dots are their quantum numbers $J^P$. The quantum numbers are determined by the total angular momentum, the orbital angular momentum and the coupling of the orbital angular momentum and the spins. The parity of a D-meson is given by $P=(-1)^{l+1}$, where $l$ is the quantum number of orbital angular momentum of the valence quarks. Hence negative parity corresponds to even $l$ and positive parity to odd $l$. Therefore the orbital angular momentum of the left doublet is even and that of the right one odd. Moreover, the particles of figure \ref{hypermulti} should lie as low in the spectrum as possible, so the smallest possible $l$ is assumed for them. This results in $l=0$ for the left doublet and $l=1$ for the right doublet. The spin configuration for the left doublet is anti-parallel for the $0^-$ state and parallel for the $1^-$ state, while for the right doublet the spins are parallel for the $0^+$ state and anti-parallel for the $1^+$ state.\\
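The quantum-number bookkeeping just described can be summarized in a few lines (the state labels are purely illustrative):

```python
def parity(l):
    """Parity of a quark-antiquark meson with orbital angular momentum l."""
    return (-1) ** (l + 1)

# the two doublets of the hypermultiplet: (label, l, J)
states = [("Ds   (0-)", 0, 0), ("Ds*  (1-)", 0, 1),
          ("Ds0* (0+)", 1, 0), ("Ds1  (1+)", 1, 1)]
for name, l, J in states:
    sign = "+" if parity(l) > 0 else "-"
    print(f"{name}: l={l}, J^P = {J}^{sign}")
```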
A descriptive reason for the existence of such particles is given by symmetry arguments. There are two symmetries that determine the structure of the hypermultiplet: chiral symmetry and heavy quark symmetry. In the limit where the symmetries are exact and not spontaneously broken, the four particles of figure \ref{hypermulti} are degenerate in mass. Both symmetries are explicitly broken and restored in two totally different limits: chiral symmetry in the limit of vanishing quark mass and heavy quark symmetry in the limit of infinitely large quark mass. If chiral symmetry is assumed to be exact but spontaneously broken for the light quarks and heavy quark symmetry to be exact for the heavy quarks, the main contributions to the mass degeneracy and therefore the overall structure of the masses of the hypermultiplet can be interpreted in terms of the light quark condensates, as discussed in \cite{Bardeen:2003kt}.\\
Historically the states with negative parity had been known for a long time before the states with positive parity were discovered. A theoretical link to the existence of the positive parity states is given by chiral symmetry arguments. The Lagrangian which describes only the negative parity states of a heavy-light system does not possess chiral symmetry in the limit of vanishing light quark masses. However, a Lagrangian which describes the hypermultiplet, where both the negative and the positive parity states are included, exhibits chiral symmetry in the chiral limit. Such symmetry arguments do not prove the existence of the positive parity states, but they are strong arguments and show that theory is on the right track.\\
Heavy quark symmetry links the $J=0$ states directly to the $J=1$ states. In the heavy quark effective theory, the theory in the limit of infinitely heavy quarks, the spin decouples from the dynamics of the system. In that limit a heavy-light system with parallel spins is completely degenerate with a heavy-light system with anti-parallel spins. Hence, the states in the doublets are linked to each other.\\
These considerations have raised a lot of interest in the spectroscopy of $D_{s}$-mesons. One expects similar arguments to hold also for other heavy-light systems. However, the spectrum cannot be reproduced exactly by such methods, because only the main contributions to the mass spectrum are included and the fine structure of the spectrum is missing. One method to calculate the mass spectrum more accurately is provided by the QSR. In the following, the QSR in the heavy quark effective theory (HQET) and the results for the masses of the particles in the hypermultiplet are shown. After that the full relativistic theory is reviewed as it has been used and accepted for a long time. The shortcomings of such approaches concerning the calculation of hadron properties will lead to the next topic.
\subsection{Heavy flavor sum rules for the D-meson \label{heavyflavorsr}}
All of the arguments just made can be checked by QSR calculations. This section concerns the question whether the doublets are degenerate in the heavy-quark limit or not. The masses of the $J^P=0^+$ and $J^P=1^+$ and those of the $J^P=0^-$ and $J^P=1^-$ states respectively are expected to be degenerate in the heavy quark limit.\\
In the remainder of the section a mini review of the HQET is given in order to illustrate the most important computational tools. The basis of all calculations is the Lagrangian of the system, which in the present case is split into two parts. The light quark dynamics is described by the ordinary QCD Lagrangian while the heavy quark dynamics is described by the heavy quark effective theory Lagrangian
\begin{eqnarray}
\mathscr{L}=\mathscr{L}_{QCD}+\mathscr{L}_{HQET}=\mathscr{L}_{light}+\mathscr{L}_{heavy}.
\end{eqnarray}
Those Lagrangians are given by
\begin{eqnarray}
\mathscr{L}_{QCD}=\bar{q}\left(i\slashed{D}-m_{q}\right)q-\frac{1}{4}G^a_{\mu\nu}G^{a\,\mu\nu}~~~~\mathscr{L}_{HQET}=\bar{h}i\slashed{D}h=\bar{h}iv_{\mu}D^{\mu}h \label{hqetLagrangian}.
\end{eqnarray}
All that is necessary for further considerations are the Feynman rules of the theory. The propagator for the heavy quark is given by
\begin{eqnarray}
\frac{i}{v_{\mu}p^{\mu}}\frac{\slashed{v}+1}{2}=\frac{i}{\omega}\left(\slashed{v}+1\right)
\end{eqnarray}
and the vertex for the heavy quark by
\begin{eqnarray}
ig_{s}v_{\mu}t_{a}.
\end{eqnarray}
The rules for the light quark are the ones of ordinary QCD. The heavy quark vertex does not contain any Dirac matrices; hence the interaction is independent of the spin. The momentum in the heavy quark theory is split up into two parts
\begin{eqnarray}
q_{\mu}=m_{heavy}v_{\mu}+p_{\mu}~~~v_{\mu}v^{\mu}=1,
\end{eqnarray}
where $m_{heavy}v_{\mu}$ is the on-shell part of the momentum $q_{\mu}$ and $p_{\mu}$ the off-shell part; $p_{\mu}$ is also called the residual momentum. The splitting of the heavy quark momentum is useful because the heavy quark is expected to be non-relativistic: its huge mass is much larger than the typical momentum in a hadron. Hence, higher order terms in $p^{\mu}$ can be neglected. The dependence on $v_{\mu}$ and $p_{\mu}$ can be subsumed in a dependence on $\omega=2v_{\mu}p^{\mu}$.\\
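A minimal numerical illustration of this splitting; the heavy-quark momentum and the mass value are invented for the example only:

```python
def mdot(a, b):
    """Minkowski product with metric (+,-,-,-)."""
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

m_heavy = 1.5                      # placeholder heavy-quark mass in GeV
v = (1.0, 0.0, 0.0, 0.0)           # heavy-quark four-velocity, v.v = 1
q = (1.9, 0.3, 0.0, 0.0)           # total heavy-quark momentum (illustrative)

# residual (off-shell) momentum p = q - m_heavy * v
p = tuple(qi - m_heavy * vi for qi, vi in zip(q, v))
omega = 2 * mdot(v, p)             # the variable the correlators depend on
print(p, omega)
```

With these numbers the residual momentum is small compared with $m_{heavy}v_{\mu}$, which is exactly the situation in which the expansion in $p^{\mu}$ is justified.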
The mass of the heavy quark does not appear in the Lagrangian of the HQET, but in the finite quark mass corrections. A reflection of the "missing" mass parameter is found on the hadron side of the sum rule. The spectral function runs over $\omega$ and the resonances occur at the points $2\bar{\Lambda}=M_{hl-system}-m_{heavy~quark}$, where $2\bar{\Lambda}$ is the value obtained when the heavy quark mass is sent to infinity; in this limit the heavy-light system contains an infinitely heavy quark.\\
In an effective theory the fields get modified, as reflected by the Lagrangian (\ref{hqetLagrangian}) where the heavy quark fields $h$ occur, but the current operators change as well. Unfortunately the situation here is more involved than in QCD: the currents are not unique in heavy flavor sum rules (see for example \cite{Dai:1996yw}). However, only the currents in table \ref{hqetcurrents} are used in the sections below.
\begin{table}[htbp]
\begin{tabular}{||c|l||}
\hline
$J^P$& current\\
\hline
$0^+$&$\bar{q}h$\\
\hline
$0^-$&$\bar{q}\gamma_{5}h$ \\
\hline
$1^+$&$\bar{q}\gamma_{5}\left(\gamma_{\mu}-\slashed{v}v_{\mu}\right)h $\\
\hline
$1^-$&$\bar{q}\left(\gamma_{\mu}-\slashed{v}v_{\mu}\right)h$\\
\hline
\end{tabular}
\caption{The currents used to interpolate the analyzed mesons in the HQET.}\label{hqetcurrents}
\end{table}
These currents are used to compute sum rules for the D-meson hypermultiplet which reproduce the expected mass degeneracy in the heavy quark limit. The calculations in the HQET are illustrated in the following.
\subsubsection{Calculation of the heavy-light quark loop}
The calculations for the scalar and pseudo-scalar current correlators are demonstrated. In the scalar case the current is
\begin{eqnarray}
j(x)=\bar{q}(x)h_{Q}(x),~j^{\dagger}(x)=\bar{h}_{Q}(x)q(x)\label{hqetscalar}.
\end{eqnarray}
The current correlator is given by
\begin{eqnarray}
\Pi_{pert}(q^2)=i\int d^4xe^{iqx}\bra{0}T\left(j(x)j^{\dagger}(0)\right)\ket{0}=i\int d^4xe^{iqx}\wick{2-5}{3-4}{\bra{0},\bar{q}(x),h_{Q}(x),\bar{h}_{Q}(0),q(0),\ket{0}}\nonumber\\=
\parbox[c]{5cm}{
\begin{fmffile}{heavylighthqet01}
\begin{fmfgraph*}(45,40)
\fmfleft{i}
\fmfright{o}
\fmf{dots_arrow,label.side=down,label=$q$}{i,v1}
\fmf{dots_arrow,label.side=down,label=$q$}{v2,o}
\fmf{fermion,label.side=left,label=$p$,left,tension=.3}{v1,v2}
\fmf{dbl_plain_arrow,label.side=left,label=$p+q$,left,tension=.3}{v2,v1}
\end{fmfgraph*}
\end{fmffile}}
=\int\frac{d^4p}{(2\pi)^4}\left( -tr\left[\frac{i(\slashed{p}+m)}{p^2-m^2}\frac{i}{v^{\mu}(q_{\mu}+p_{\mu})}\frac{\slashed{v}+1}{2}\right]\right) \nonumber\\
=\int\frac{d^4p}{(2\pi)^4}\frac{4v^{\mu}p_{\mu}+4m}{\left( p^2-m^2\right)2\left( v^{\mu}(q_{\mu}+p_{\mu})\right)}=4\int\frac{d^4p}{(2\pi)^4}\frac{v^{\mu}p_{\mu}+m}{\left( p^2-m^2\right)\left(\omega+2v^{\mu}p_{\mu}\right)}
\label{hqetpertscalar},
\end{eqnarray}
where the double line represents the propagator in the heavy quark effective theory. The expression for the correlator is renormalized by using the $\overline{MS}$-scheme; therefore it has to be regularized by dimensional regularization. To begin, a Feynman parameter is introduced and the integration is transformed to a spherically symmetric one
\begin{eqnarray}
4\int\frac{d^4p}{(2\pi)^4}\int_{0}^{1}dx\frac{v^{\mu}p_{\mu}+m}{\left[x\left( p^2-m^2\right)+(1-x)\left(\omega+2v^{\mu}p_{\mu}\right)\right]^2}=4\int\frac{d^{4}l}{(2\pi)^4}\int_{0}^{1}dx\frac{-\frac{1-x}{x}+m}{x^2\left[l^2-\Delta\right]^2 },
\end{eqnarray}
with $l_{\mu}=p_{\mu}+v_{\mu}\frac{1-x}{x}$ and $\Delta=\left(\frac{1-x}{x}\right)^2+m^2-\frac{1-x}{x}\omega$. The $d^4p$ integral is extended to $d$ dimensions and the integration is performed
\begin{eqnarray}
d\left(\mu^2\right)^{\left(2-\frac{d}{2}\right)} \int_{0}^{1}dx\frac{-\frac{1-x}{x}+m}{x^2}\int\frac{d^{d}l}{(2\pi)^d}\frac{1}{\left[l^2-\Delta\right]^2}\nonumber\\=id\int_{0}^{1}dx\frac{-\frac{1-x}{x}+m}{x^2}\frac{1}{\left(4\pi\right)^{d/2}}\frac{\Gamma(2-\frac{d}{2})}{\Gamma(2)}\left(\frac{\mu^2}{\Delta}\right)^{2-\frac{d}{2}}\nonumber\\\longrightarrow\frac{di}{(4\pi)^2}\int_{0}^{1}dx\frac{-\frac{1-x}{x}+m}{x^2}\left(\frac{2}{4-d}-\log\left( \frac{\Delta}{\mu^2}\right)-\gamma+\log(4\pi)+\mathcal{O}(4-d)\right).
\end{eqnarray}
After the subtraction and in the limit $d\rightarrow 4$ the following expression is derived
\begin{eqnarray}
\Pi_{pert}(q^2)=-\frac{i}{4\pi^2}\int_{0}^{1}dx\frac{-\frac{1-x}{x}+m}{x^2}\log\left( \frac{\Delta(q^2)}{\mu^2}\right).
\end{eqnarray}
This form of the amplitude is very inconvenient because of the Feynman parameter integral. One method to get rid of it is to consider only the imaginary part of the amplitude, which is sufficient for the derivation of the whole amplitude (see section \ref{dispersionrelations}). The logarithm determines the imaginary part of the correlator: it is imaginary if its argument is negative, $\frac{\Delta(q^2)}{\mu^2}<0$. This domain is located in the interval between the roots of $\Delta(q^2)$. Hence, the imaginary part of the correlator is
\begin{eqnarray}
-\frac{i}{4\pi^2}\int_{x_{1}}^{x_{2}}dx\frac{-\frac{1-x}{x}+m}{x^2}\log(\frac{\Delta(q^2)}{\mu^2}),
\end{eqnarray}
where $x_{1/2}=\frac{\omega\pm\sqrt{\omega^2-4m^2}+2}{2(m^2+\omega+1)}$ are the roots of $\Delta(q^2)$. In this interval the logarithm contributes an imaginary part of $\pi$, so that
\begin{eqnarray}
Im\Pi_{pert}(q^2)=\frac{1}{4\pi}\int_{x_{1}}^{x_{2}}dx\frac{-\frac{1-x}{x}+m}{x^2}=\frac{1}{8\pi}(\omega-2m)\sqrt{\omega^2-4m^2}.
\end{eqnarray}
Consequently the imaginary part of the correlator is determined, but the color structure of the problem has not yet been considered. A factor of 3 has to be added
\begin{eqnarray}
Im\Pi_{pert}(q^2)=\frac{3}{4\pi}\int_{x_{1}}^{x_{2}}dx\frac{-\frac{1-x}{x}+m}{x^2}=\frac{3}{8\pi}(\omega-2m)\sqrt{\omega^2-4m^2}.
\end{eqnarray}
The next step is to calculate the two-point correlator of the pseudo-scalar current, where the current is
\begin{eqnarray}
j(x)=\bar{q}(x)i\gamma_{5}h_{Q}(x),~j^{\dagger}(x)=-\bar{h}_{Q}(x)i\gamma_{5}q(x)
\end{eqnarray}
\begin{eqnarray}
\Pi_{pert,5}(q^2)=\parbox[c]{5cm}{
\begin{fmffile}{heavylighthqet02}
\begin{fmfgraph*}(45,40)
\fmfleft{i}
\fmfright{o}
\fmf{dots_arrow,label.side=down,label=$q$}{i,v1}
\fmf{dots_arrow,label.side=down,label=$q$}{v2,o}
\fmf{fermion,label.side=left,label=$p$,left,tension=.3}{v1,v2}
\fmf{dbl_plain_arrow,label.side=left,label=$p+q$,left,tension=.3}{v2,v1}
\end{fmfgraph*}
\end{fmffile}}
=4\int\frac{d^4p}{(2\pi)^4}\frac{-v^{\mu}p_{\mu}+m}{\left( p^2-m^2\right)\left(\omega+2v^{\mu}p_{\mu}\right)}
\label{hqetpertpseudoscalar}.
\end{eqnarray}
The only difference between (\ref{hqetpertscalar}) and (\ref{hqetpertpseudoscalar}) is the minus sign in front of $v^{\mu}p_{\mu}$. The result for $\Pi_{pert,5}(q^2)$ is
\begin{eqnarray}
Im\Pi_{pert,5}(q^2)=\frac{3}{8\pi}(\omega+2m)\sqrt{\omega^2-4m^2}.
\end{eqnarray}
This is an example of an interesting rule: the results for chiral partners are obtained from each other by exchanging $m$ with $-m$.
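The rule can be made explicit with the two imaginary parts just derived (a direct numerical check; the values of $\omega$ and $m$ are arbitrary):

```python
import math

def im_pi_scalar(w, m):
    """Im Pi for the scalar current (valid for w > 2|m|), color factor included."""
    return 3 / (8 * math.pi) * (w - 2 * m) * math.sqrt(w * w - 4 * m * m)

def im_pi_pseudoscalar(w, m):
    """Im Pi for the pseudo-scalar current, the chiral partner channel."""
    return 3 / (8 * math.pi) * (w + 2 * m) * math.sqrt(w * w - 4 * m * m)

# the chiral-partner rule: replacing m -> -m maps one channel onto the other
w, m = 2.0, 0.1
print(im_pi_scalar(w, -m), im_pi_pseudoscalar(w, m))   # identical values
```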
\subsubsection{Calculation of the quark condensate coefficient $C_{\bar{q}q}$}
Again the example of the pseudo-scalar current is considered (see table \ref{hqetcurrents})
\begin{eqnarray}
C_{\bar{q}q}\mele{\bar{q}q}=i\int d^4xe^{iqx}\wick{1-2,5-6}{3-4}{\bra{p,s},\bar{q}(x),h_{Q}(x),\bar{h}_{Q}(x),q(x),\ket{p,s}}=
\parbox[c]{3cm}{
\begin{fmffile}{heavylighthqet03}
\begin{fmfgraph*}(40,35)
\fmfleft{i1,o1} \fmfright{i2,o2}
\fmffixedx{2cm}{v1,v2}
\fmf{dots_arrow,label.side=left,label=$q$}{i1,v1}
\fmf{dbl_plain_arrow,label.side=down,label=$q+p$}{v1,v2}
\fmf{dots_arrow,label.side=left,label=$q$}{v2,i2}
\fmf{fermion,label.side=right,label=$p$}{o1,v1}
\fmf{fermion,label.side=right,label=$p$}{v2,o2}
\end{fmfgraph*}
\end{fmffile}}
\nonumber\\=i\bar{u}(p,s)\frac{i}{v^{\mu}(q_{\mu}+p_{\mu})}\frac{\slashed{v}+1}{2}u(p,s).
\end{eqnarray}
The results of section \ref{Wilson_Berechnung} are used to derive
\begin{eqnarray}
C_{\bar{q}q}\mele{\bar{q}q}=-\frac{1}{m}\left( \frac{1}{v^{\mu}(q_{\mu}+p_{\mu})}\frac{v^{\mu}p_{\mu}+m}{2}\right)\bar{u}(p,s)u(p,s)
=-\frac{1}{m}\frac{v^{\mu}p_{\mu}+m}{\omega+2v^{\mu}p_{\mu}}\bar{u}(p,s)u(p,s).
\end{eqnarray}
The expression
\begin{eqnarray}
-\frac{1}{m}\left(\frac{v^{\mu}p_{\mu}+m}{\omega+2v^{\mu}p_{\mu}}\right)
\end{eqnarray}
has to be averaged over the four-dimensional Euclidean angle as it is shown in section \ref{Wilson_Berechnung}. The result in the limit $m\rightarrow 0$ is
\begin{eqnarray}
C_{\bar{q}q}= -\frac{1}{\omega}\label{hqetquarkpseudo}.
\end{eqnarray}
The scalar current correlator is determined by the expression
\begin{eqnarray}
C_{\bar{q}q}\mele{\bar{q}q}=\parbox[c]{3cm}{
\begin{fmffile}{heavylighthqet03}
\begin{fmfgraph*}(40,35)
\fmfleft{i1,o1} \fmfright{i2,o2}
\fmffixedx{2cm}{v1,v2}
\fmf{dots_arrow,label.side=left,label=$q$}{i1,v1}
\fmf{dbl_plain_arrow,label.side=down,label=$q+p$}{v1,v2}
\fmf{dots_arrow,label.side=left,label=$q$}{v2,i2}
\fmf{fermion,label.side=right,label=$p$}{o1,v1}
\fmf{fermion,label.side=right,label=$p$}{v2,o2}
\end{fmfgraph*}
\end{fmffile}}~~~
=-\frac{1}{m}\left(\frac{v^{\mu}p_{\mu}-m}{\omega+2v^{\mu}p_{\mu}}\right)\bar{u}(p,s)u(p,s).
\end{eqnarray}
The change in the expression in comparison with the pseudo-scalar case is just the minus sign in front of the $m$-term. The result in the limit $m\rightarrow 0$ is
\begin{eqnarray}
C_{\bar{q}q}= \frac{1}{\omega}
\end{eqnarray}
where the difference to the pseudo-scalar case (\ref{hqetquarkpseudo}) is just a flip in the sign of the coefficient.
\subsubsection{The Borel transformed heavy flavor sum rule}
The Borel transformed heavy flavor sum rule for the $D$-mesons with isospin or strangeness reads, in the limit $m_{light}=0$, for the positive parity doublet
\begin{eqnarray}
\bra{0}j\ket{0}^2e^{-2\frac{\overline{\Lambda}}{T}}=\frac{3}{16\pi^2}\int_{0}^{\omega_{c}}\omega^2e^{-\frac{\omega}{T}}d\omega+\frac{\bra{0}\bar{q}q\ket{0}}{2}-\frac{1}{8T^2}M^2_{0}\bra{0}\bar{q}q\ket{0} \label{hqet_positive}
\end{eqnarray}
and for the negative parity doublet
\begin{eqnarray}
\bra{0}j\ket{0}^2e^{-2\frac{\overline{\Lambda}}{T}}=\frac{3}{16\pi^2}\int_{0}^{\omega_{c}}\omega^2e^{-\frac{\omega}{T}}d\omega-\frac{\bra{0}\bar{q}q\ket{0}}{2}+\frac{1}{8T^2}M^2_{0}\bra{0}\bar{q}q\ket{0}.
\end{eqnarray}
$T$ is the Borel parameter. The currents employed for the calculation of the sum rule have an additional factor $\frac{1}{\sqrt{2}}$ (see \cite{Dai:1996yw}). Since both members of a doublet obey the same sum rule, the masses within each doublet are degenerate in the heavy quark limit. \\
The result of the sum rule evaluation in the channel with isospin is $\overline{\Lambda}=1.00~GeV$ for the positive parity doublet and $\overline{\Lambda}=0.95~GeV$ for the negative parity doublet. Hence, the positive parity states are expected to be heavier than the negative parity states. Thus, the prediction of the mass for the $D$-mesons with positive parity is $m_{D}\approx\overline{\Lambda}+m_{c}=2.3~GeV$, while in the case of the negative parity states $m_{D}\approx\overline{\Lambda}+m_{c}=2.25~GeV$. The Borel curves are given in figure \ref{hqetborel}; the curves for the positive and negative parity states stay nearly equidistant for $T>1$.
\begin{figure}[htbp]
\begin{center}
\includegraphics{hqet_borel_curves.eps}
\caption{This figure shows the Borel curves for $\overline{\Lambda}$, the mass parameter in the HQET, for the states in the hypermultiplet of figure \ref{hypermulti}. The full line is the curve for the $P=+$ states while the dashed line is the curve for the $P=-$ states.\label{hqetborel}}
\end{center}
\end{figure}
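Borel curves of this type can be reproduced schematically from the two sum rules above: since the left-hand side is $\propto e^{-2\overline{\Lambda}/T}$, the mass parameter follows as $\overline{\Lambda}(T)=\frac{T^2}{2}\frac{d}{dT}\ln(\mathrm{RHS})$. In the sketch below the condensate values $\langle\bar{q}q\rangle=-(0.24~GeV)^3$ and $M_{0}^2=0.8~GeV^2$ are assumptions made for illustration; only the threshold $\omega_{c}=2.65~GeV$ is taken from the text (all units GeV):

```python
import math

def pert(T, wc):
    """int_0^wc w^2 exp(-w/T) dw, evaluated in closed form."""
    x = wc / T
    return T**3 * (2.0 - math.exp(-x) * (x * x + 2 * x + 2))

def rhs(T, sign, qq=-(0.24**3), M0sq=0.8, wc=2.65):
    """OPE side; sign = +1 for the positive-, -1 for the negative-parity doublet."""
    return (3 / (16 * math.pi**2) * pert(T, wc)
            + sign * qq / 2 - sign * M0sq * qq / (8 * T * T))

def lambda_bar(T, sign, eps=1e-5):
    """Lb(T) = (T^2/2) d/dT ln(RHS), via a central numerical derivative."""
    dlog = (math.log(rhs(T + eps, sign)) - math.log(rhs(T - eps, sign))) / (2 * eps)
    return 0.5 * T * T * dlog

for T in (0.8, 1.0, 1.2, 1.5):
    print(T, lambda_bar(T, +1), lambda_bar(T, -1))
```

With these assumed inputs the positive-parity curve lies above the negative-parity one, reproducing the ordering quoted in the text, though the absolute numbers depend on the condensate values.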
Dai et al. were the first to analyze such sum rules (\ref{hqet_positive}), but they analyzed only the positive parity states. However, their value for the threshold $\omega_{c}$ was used for the calculations shown above. In their publication they obtained $\overline{\Lambda}=1.05~GeV$ with $\omega_{c}=2.65~GeV$. The difference between the calculations above and theirs is probably caused by a difference in the condensates; unfortunately they do not give the values they used for the condensates.\\
However, in this calculation the values for the quark condensate, as given in section \ref{condensates}, have been used. Hence, in the negative parity channel the results can be compared with the masses of the $D^{0}(1869),D^{\pm}(1864),D^{*}(2007)^{0}$ and $D^{*}(2010)^{\pm}$ particles. In the positive parity channel the data of the $D_{1}(2420)^{0}$ and $D_{1}(2430)^{0}$ particles can be compared with the results. Obviously the positive parity states lie much closer to the results than the negative parity states: in the $P=+$ case the error is about $0.150~GeV$, while in the $P=-$ case it is about $0.400~GeV$. This may be due to the OPE; an improved OPE may give better results.
\subsubsection{The mass formula for heavy flavor sum rules \label{hqetcorrections}}
The intradoublet mass splitting is another topic: it stems from the breaking of the heavy quark symmetry. In the effective theory this violation is formulated as $\frac{1}{m_{heavy}}$ corrections. The Lagrangian with the first corrections is given by
\begin{eqnarray}
\mathscr{L}_{eff}=\bar{h}_{v}iv_{\mu}D^{\mu}h_{v}+\frac{\mathcal{K}}{2m_{Q}}+\frac{\mathcal{S}}{2m_{Q}}+\mathcal{O}\left(\frac{1}{m_{Q}^2}\right)
\end{eqnarray}
where $\mathcal{K}$ is the nonrelativistic kinetic energy operator, defined as
\begin{eqnarray}
\mathcal{K}=\bar{h}_{v}\left(iD_{\perp}\right)^2 h_{v}
\end{eqnarray}
with $D^2_{\perp}=D_{\mu}D^{\mu}-\left(v_{\mu}D^{\mu}\right)^2$, and $\mathcal{S}$ is the chromomagnetic interaction term
\begin{eqnarray}
\mathcal{S}=C_{mag}\left(\frac{m_{Q}}{\mu}\right)\bar{h}_{v}\frac{g_{s}}{2}\sigma_{\mu\nu}G^{\mu\nu}h_{v}
\end{eqnarray}
where $C_{mag}=\left[\frac{\alpha_{s}(m_{Q})}{\alpha_{s}(\mu)}\right]^{\frac{3}{\beta_{0}}}$ and $\beta_{0}=11-\frac{2}{3}n_{f}$ is the first coefficient of the $\beta$ function.\\
Taking into account the $\frac{1}{m_{Q}}$ corrections in the Lagrangian, the meson mass formula in HQET is expressed as
\begin{eqnarray}
M=m_{Q}+\bar{\Lambda}-\frac{1}{2m_{Q}}\left(\lambda_{1}+d_{M}\lambda_{2}\right)
\end{eqnarray}
where the two additional parameters $\lambda_{1}$ and $\lambda_{2}$ at the $\frac{1}{m_{Q}}$ order are defined by two matrix elements
\begin{eqnarray}
2M\lambda_{1}=\bra{M}\mathcal{K}\ket{M},~~2d_{M}C_{mag}M\lambda_{2}=\bra{M}\mathcal{S}\ket{M}
\end{eqnarray}
the constant $d_{M}$ is spin-related, $d_{M}=d_{j,j_{l}}$ with $d_{j_{l}-\frac{1}{2},j_{l}}=2j_{l}+2$ and $d_{j_{l}+\frac{1}{2},j_{l}}=-2j_{l}$.\\
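For concreteness, the spin coefficients and the resulting intradoublet splitting can be evaluated; the numerical values of $m_{Q}$, $\bar{\Lambda}$, $\lambda_{1}$, $\lambda_{2}$ below are illustrative placeholders, not fitted values:

```python
def d_M(j, j_l):
    """Spin coefficient d_{j, j_l} of the chromomagnetic term."""
    if abs(j - (j_l - 0.5)) < 1e-9:
        return 2 * j_l + 2
    if abs(j - (j_l + 0.5)) < 1e-9:
        return -2 * j_l
    raise ValueError("j must be j_l +/- 1/2")

def meson_mass(mQ, lam_bar, lam1, lam2, j, j_l):
    """HQET mass formula M = mQ + Lb - (lam1 + d_M * lam2) / (2 mQ)."""
    return mQ + lam_bar - (lam1 + d_M(j, j_l) * lam2) / (2 * mQ)

# ground-state doublet (j_l = 1/2): d = 3 for J=0 and d = -1 for J=1,
# so the hyperfine splitting is M(1-) - M(0-) = 2*lam2/mQ
mQ, lb, l1, l2 = 1.4, 0.95, -0.3, 0.12    # placeholder values in GeV units
m0 = meson_mass(mQ, lb, l1, l2, 0, 0.5)
m1 = meson_mass(mQ, lb, l1, l2, 1, 0.5)
print(m0, m1, m1 - m0)
```

The splitting $2\lambda_{2}/m_{Q}$ vanishes as $m_{Q}\rightarrow\infty$, which is exactly the restoration of the heavy quark symmetry discussed above.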
The evaluation of $\lambda_{1}$ and $\lambda_{2}$ needs the consideration of the three-point correlator
\begin{eqnarray}
\Sigma(\omega,\omega')=i^2\int d^4 xd^4 y\exp(ikx-ik'y)\bra{0}TJ^{\dagger}(x)O(0)J(y)\ket{0}
\end{eqnarray}
where the operator O can be $\mathcal{K}$ or $\mathcal{S}$ and J is still the generic interpolating current. The derivation of the formulas can be found in \cite{Ball:1993xv} and references therein.
\subsubsection{Shortcomings of the heavy flavor sum rules for the $D$-mesons}
The preceding sections have shown the characteristics of heavy flavor sum rules and their shortcomings. The mass formula shows that a lot of additional work has to be done in order to get the corrections to the heavy quark limit. Moreover, the operators in the HQET have to be renormalized in order to get the right results (see \cite{Bagan:1991sg}), which is not necessary in the full theory. In summary, the heavy flavor sum rules can be excluded as the right tool for the calculation of the properties of the spectral function, because apart from the additional work they supply neither anything new nor an improvement over the sum rules in the full theory.\\
Even if the first corrections are taken into account the effective theory does not produce a satisfactory accuracy (see for example \cite{Huang:2005ke}). This is a second argument against heavy flavor sum rules for the calculation of properties of spectral functions.
\subsection{The QSR for heavy light-mesons in full QCD}
The first step is to set the boundary conditions for the OPE. In section \ref{importantunimportant} a criterion was formulated which allows a classification of the terms in the OPE: important are all terms containing operators of dimension smaller than or equal to the dimension of the operator product under consideration, $d(O_{j})\leq d(jj)$; all remaining terms are less important. The currents for the heavy-light mesons which are analyzed here are given by
\begin{eqnarray}
j_{S}=\left(m_{2}+m_{1}\right)\bar{q}_{2}q_{1}=\partial_{\mu}j_{V}^{\mu}~~~~\nonumber\\
j_{P}=\left(m_{2}-m_{1}\right)\bar{q}_{2}i\gamma_{5}q_{1}=\partial_{\mu}j_{A}^{\mu}\nonumber\\
j_{V}=\bar{q}_{2}\gamma_{\mu}q_{1}~~~~~~~~~~~~~~~~~~~~~~~~~~ \nonumber\\
j_{A}=\bar{q}_{2}\gamma_{5}\gamma_{\mu}q_{1}.~~~~~~~~~~~~~~~~~~~~~~~~\label{hlcurrents}
\end{eqnarray}
The scalar and pseudo-scalar currents have dimension four while the vector and axial-vector currents have dimension three. Hence, all important operators in the scalar or pseudo-scalar case have dimension less than or equal to eight, while in the vector or axial-vector case the important operators have dimension less than or equal to six. In the ideal case the OPE for the particles approximated by these currents would contain all of these terms. However, the situation in reality is far from that goal.\\
Historically the pioneers in the area of mass spectroscopy of heavy-light systems with QSR were the authors Novikov, Voloshin, Shifman, Vainshtein and Zakharov (SVZ) in the late seventies \cite{Novikov:1978tn}. The authors Reinders, Rubinstein and Yazaki (RRY) continued their work in the early eighties. They computed the first QSR for this problem (see \cite{Reinders:1981ty} and references therein). They truncated the OPE on a very crude level, keeping only the quark and gluon condensates. However, their work showed that the sum rules for such particles work, and they could derive first estimates of properties of the B-meson family with moment sum rules. Properties of the D-meson family were first computed by Aliev in 1983. He used an OPE which contained more operators than RRY. This led to predictions of leptonic decay constants for D-mesons (see \cite{Aliev:1983ra}). He included, in addition to the quark and gluon condensate, the mixed and four quark condensates in the OPE. Narison used an OPE similar to the one of Aliev for further investigations of heavy-light systems in the late eighties \cite{Narison:1987qc}. His work showed that it is possible to treat B- and D-mesons with such sum rules, but for D-mesons the use of Borel sum rules was necessary: moment sum rules do not converge for the D-meson. Despite the differences in the approximations made by the groups they also have things in common. The continuum contribution cannot be neglected as it has been done for the charmonium system. This is nothing particular to heavy-light systems; the continuum is important in many cases, the $\rho$-meson sum rule being a classical example where the continuum has to be taken into account. During the nineties the heavy quark effective theory entered the realm of QSR, but as shown in section \ref{heavyflavorsr} this approach has not brought much progress to mass spectroscopy with QSRs.
Thus for many years Narison was the only author who pushed the field forward. He always used sum rules close to the one constructed in \cite{Aliev:1983ra}. Aliev's sum rule was for D-mesons with $J^P=0^-$; thus all analyses based on this sum rule were done for $0^-$ or $0^+$ states, the sum rules for the two particles being connected by a transformation. There have been many papers about QSR and heavy-light systems which simply used this sum rule in order to perform calculations, but they only applied Narison's work. The situation changed after the discovery of the positive parity doublet in the $D_{s}$-meson family. The interpretation of the nature of such states is still open and has been attacked in papers like \cite{Narison:2003td}, \cite{Hayashigaki:2004gq}, \cite{Kim:2005gt} and \cite{Nielsen:2005ia}. While Narison's result \cite{Narison:2003td} implies that the dominant configuration of the $D_{S}(2317)$ is a quark anti-quark state, Hayashigaki \cite{Hayashigaki:2004gq} finds, using an unconventional approach, that a more complicated state must dominate. In both publications two quark currents and a sum rule of the Aliev type are used for the $J=0$ particles. An important difference is that Hayashigaki uses a sum rule for the $J=1$ states, while Narison uses HQET arguments to derive the masses of the $J=1$ particles from the $J=0$ masses. Hence, Hayashigaki's calculation is superior in the $J=1$ channel. In the remaining publications a four quark current was used as the interpolating current for the $D_{S}(2317)$. However, there still remains much work to be done.\\
A short review of the sum rules used by Narison and Aliev exhibits the shortcomings of those sum rules. The OPE for the sum rules is of course truncated and the Wilson coefficients are calculated to a certain order in the strong coupling constant $\alpha_{S}$. Moreover, the limit of massless quarks is used. A curious point is that various authors do not agree on the form of the OPE. Some authors have even published several papers on heavy-light systems using the same sum rule but with different results. The difference enters the sum rule through the Wilson coefficients. During this work special attention was paid to the mixed condensate coefficient; a short review of the different results will now be given. In the first calculation of the $J=0$ channel the result was incorrect \cite{Novikov:1978tn} and was corrected in the publication \cite{Aliev:1983ra}. The sum rule in the publication \cite{Aliev:1983ra} is today considered the correct one in the $J=0$ channel. Unfortunately Narison published an additional version of the mixed condensate coefficient in the $J=0$ channel which is incorrect \cite{Narison:1987qc}. He subsequently improved his results step by step until he arrived at the correct one in \cite{Narison:2003td}. There have been additional publications in which the Wilson coefficients do not agree with other authors. In most cases the quark and gluon condensate coefficients are correct, but for higher dimensions the results differ. In the $J=1$ channel the situation is similar. This problem was also recognized by K\"ampfer et al. (see \cite{Kaempfer:2005}). Narison's result in \cite{Narison:1989aq} differs from Hayashigaki's \cite{Hayashigaki:2004gq} in more than the mixed condensate terms. The correct results in the $J=0$ and $J=1$ channels are given in \cite{Jamin:1992se} and \cite{Generalis:1990id}. However, most authors who recognize the ambiguities state that the difference does not change the results significantly \cite{Narison:2003td}.
The OPE used for further calculations is given by
\begin{eqnarray}
\Pi_{5}\left(q^2\right)=\frac{1}{\pi}\int_{m^2}^{\infty}\frac{Im\left[ \Pi_{5,pert}(s)\right] }{s-q^2}ds\nonumber\\+\frac{m^2}{m^2-q^2}\left[\frac{\mele{\alpha_{s}G^2}}{12\pi}-m\mele{\bar{\psi}_{i}\psi_{i}}\right]+\frac{m^3q^2}{(m^2-q^2)^3}\frac{1}{2}\mele{g\bar{\psi_{i}}\sigma^{\mu\nu}\frac{\lambda_{a}}{2}\psi_{i}G^{a}_{\mu\nu}}\nonumber\\+\frac{m^2}{(m^2-q^2)^2}\left[2-\frac{m^2}{m^2-q^2}-\left(\frac{m^2}{m^2-q^2}\right)^2\right]\frac{\pi}{6}\alpha_{s} \mele{\bar{\psi}_{i}\gamma_{\mu}\lambda_{a}\psi_{i}\sum_{i=u,d,s}\left(\bar{\psi}_{i}\gamma_{\mu}\lambda_{a}\psi_{i}\right)}
\label{hlscalar}
\end{eqnarray}
for the pseudo-scalar current and by
\begin{eqnarray}
\Pi_{V}\left(q^2\right)=\frac{1}{\pi}\int_{m^2}^{\infty}\frac{Im\left[ \Pi_{V,pert}(s)\right] }{s-q^2}ds\nonumber\\+\frac{m^2}{m^2-q^2}\left[-\frac{\mele{\alpha_{s}G^2}}{12\pi}-m\mele{\bar{\psi}_{i}\psi_{i}}\right]+\left(\frac{m^2}{m^2-q^2}\right)^3\frac{1}{m} \frac{1}{2}\mele{g\bar{\psi_{i}}\sigma^{\mu\nu}\frac{\lambda_{a}}{2}\psi_{i}G^{a}_{\mu\nu}}\nonumber\\-\frac{m^2}{(m^2-q^2)^2}\left[4+8\frac{m^2}{m^2-q^2}-3\left(\frac{m^2}{m^2-q^2}\right)^2\right]\frac{\pi}{18}\alpha_{s} \mele{\bar{\psi}_{i}\gamma_{\mu}\lambda_{a}\psi_{i}\sum_{i=u,d,s}\left(\bar{\psi}_{i}\gamma_{\mu}\lambda_{a}\psi_{i}\right)}
\label{hlvector}
\end{eqnarray}
for the vector current (see \cite{Jamin:1992se,Generalis:1990id}). The OPE for the axial partners can be derived from the OPEs above by replacing $m$ with $-m$. Thus the OPE of the scalar particles can be derived from the one of the pseudo-scalar particles, and the same holds for the axial-vector and vector particles. The terms that are changed by this procedure are the ones which contain odd powers of the mass. An inspection of the OPEs above shows that these are the terms which contain quarks in the condensates. Thus chiral symmetry breaking manifests itself through the quark condensates. For vanishing quark condensates the OPEs for the axial partners would be identical. Hence the interdoublet splitting is due to these terms. Another important aspect is the mass splitting due to the breaking of the heavy quark symmetry. Here the intradoublet mass splitting is mainly due to the mixed condensate term. In comparison with the finite mass corrections of section \ref{hqetcorrections} this is quite plausible, because the operator of the mixed condensate is nearly the chromomagnetic interaction term, which is responsible for the largest contribution to the finite mass corrections, and these corrections give the mass splittings between the $J=0$ and $J=1$ states.\\
The Wilson coefficient of the unit operator $\mathds{1}$ is written down as a dispersion integral. Though this notation may seem long-winded, it is very handy. In many cases the imaginary part of the perturbative expression of the current correlator is much simpler than the full expression of the current correlator. Moreover, if the OPE is applied to a QSR calculation and the continuum contribution has to be taken into account, it turns out that the notation quoted above implements the continuum effects very naturally. The reason is that on the phenomenological side of the OPE the continuum of the spectral function is approximated by the imaginary part of the perturbative piece of the two-point correlator, which is included in a dispersion relation. Hence, the imaginary part of the perturbative piece of the two-point correlator appears both on the left and the right hand side of the sum rule, and on both sides it is embedded in a dispersion integral. Thus it can be eliminated from the phenomenological side by a simple subtraction and be incorporated into the perturbative Wilson coefficient on the theoretical side. In conclusion, only the limits of integration in the perturbative Wilson coefficient change as the continuum contribution is accounted for.\\
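This bookkeeping can be made concrete with a small numerical sketch (a toy spectral density and hypothetical numbers, not the actual correlator): subtracting the continuum model from the phenomenological side is numerically identical to cutting the perturbative dispersion integral off at the threshold $t_c$.

```python
import math

def integrate(f, a, b, n=20000):
    """Simple composite trapezoidal quadrature on [a, b]."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h

# Toy perturbative spectral density and a Borel-type kernel; all numbers are
# hypothetical and only chosen so that the integrals converge quickly.
m2, t_c, tau, s_max = 1.69, 8.25, 1.0, 60.0   # s_max stands in for infinity
im_pi = lambda s: s                            # stands in for Im[Pi_pert(s)]
w = lambda s: im_pi(s) * math.exp(-s * tau)

full       = integrate(w, m2, s_max)   # phenomenological side, m^2 .. infinity
continuum  = integrate(w, t_c, s_max)  # continuum model, t_c .. infinity
subtracted = full - continuum
truncated  = integrate(w, m2, t_c)     # OPE side with the shifted upper limit

print(abs(subtracted - truncated))     # agrees up to quadrature error
```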
The perturbative part of the current correlator for the currents given in (\ref{hlcurrents}), with the mass of the light quarks set to zero, is given in \cite{Chetyrkin:2000mq}. In that paper a Mathematica package is presented that contains the perturbative part of the correlators. The second order corrections are necessary for the scalar and pseudo-scalar channel if large values of $q^2$ are considered, because the imaginary part of the correlator becomes negative for large $q^2$ if only the first order corrections are used. This problem is remedied when the second order corrections are included (see figure \ref{hl_scalar_pert}).
\begin{figure}[htbp]
\begin{center}
\includegraphics{hl_scalar_correlator.eps}
\caption{The correlator for the scalar current of a heavy-light system in the approximation $m_{light}=0$
to first (dashed line) and second order in $\alpha_{S}$ (full line). The expressions are nearly identical for $q^2<1000~GeV^2$. Above $q^2=1000~GeV^2$ the curves differ strongly from each other until the point is reached where the first order approximation becomes negative. In the case of a vector current, the correlators in first and second order remain nearly equal to each other over the whole $q^2$ range.\label{hl_scalar_pert}}
\end{center}
\end{figure}
The requirement of positivity for the imaginary part of the correlator stems from the connection between the spectral function and the imaginary part of the correlator. The spectral function has to be positive and is, up to a constant and a power of $q^2$, equal to the imaginary part of the correlator. Hence, a sign switch is forbidden. In fact, for small $q^2$ the expressions for the correlator in first and second order of $\alpha_{S}$ are nearly equal, while they differ strongly with growing $q^2$. This is a beautiful example of a calculation where higher order corrections in the coupling constant are necessary in order to obtain a valid approximation.\\
In the OPEs (\ref{hlscalar}) and (\ref{hlvector}) the quark mass effects, as outlined in section \ref{gluonmix}, are taken into account. It is a remarkable coincidence that in the limit of vanishing light quark masses the coefficient of the gluon and the quark condensate nearly coincide.
\subsubsection{Analysis with moment sum rules}
As the starting point for the analysis of such sum rules, moment sum rules and the narrow resonance plus continuum ansatz are used to evaluate the QSR for heavy-light systems. The qualitative result of such an analysis concerns the convergence of the sum rule. For D-mesons in the $J^P=0^-,1^-$ channels the sum rule converges, although the stability of the curves with respect to a shift in the parameters is weak. On the other hand, in the $J^P=0^+,1^+$ channels no convergence is seen (see figure \ref{0+moments}).
\begin{figure}[htbp]
\begin{center}
\includegraphics{dmeson_moments_0positive.eps}
\caption{Results of the moment sum rules for the $J^P=0^+$ D-mesons. The full line with diamonds corresponds to the set $q^2=-1GeV^2,m=1.3GeV,t_{c}=50GeV^2$ while the other curve corresponds to the set $q^2=-5GeV^2,m=1.3GeV,t_{c}=50GeV^2$. The condensate values are those from section \ref{condensates}. No convergence is found for these sum rules.\label{0+moments}}
\end{center}
\end{figure}
Hence, something is wrong with the sum rules in the positive parity channels. The moment sum rules in the channels with negative parity are also excluded, but why? In the example at hand the OPEs (\ref{hlscalar}) and (\ref{hlvector}) with the condensate values of section \ref{condensates} have been used. The mass of the c-quark and the threshold value are the only free parameters. The result should be independent of $q^2$. The fit of the free parameters should yield nearly equal values independent of the isospin or strangeness of the system. Unfortunately this is not the case; the differences in the parameters are too large. The plots in figure \ref{0-moments} show the masses for the negative parity states which are fitted to reproduce the right result. The criterion for the right result was to hit a value which lies around the correct value for D-mesons with isospin and for $D_{s}$-mesons with strangeness. This should lead to nearly equal c-quark masses, threshold values $t_{c}$ which are close for the two cases, and an independence of $q^2$ within a small error window. The threshold requirement is satisfied, but all others are not. In addition, the plateau for the $q^2=-1GeV^2$ sum rule is too small to be reliable. This does not exclude the sum rule, but it diminishes the trust in its reliability drastically; without further improvements the sum rule cannot be used. In the $J=1$ channels a similar behavior is found.\\
\begin{figure}[htbp]
\begin{center}
\includegraphics{dmeson_moments_0negative.eps}
\caption{Results from the moment sum rules for the $J^P=0^-$ D-meson with isospin. The full line with diamonds corresponds to the set $q^2=-1GeV^2,m=1.3GeV,t_{c}=50GeV^2$ while the other curve corresponds to the set $q^2=-5GeV^2,m=1.3GeV,t_{c}=50GeV^2$. The condensate values are those from section \ref{condensates}. The horizontal line gives the experimental mass of $0^-$ heavy-light systems with isospin.\label{0-moments}}
\end{center}
\end{figure}
In the case of B-mesons an analogous behavior is seen, but the convergence in the $J^P=0^-,1^-$ channels is much better. This is manifested in a wider plateau than in the D-meson case and in a better correlation of the parameters. Therefore the moment sum rules for the B-mesons can be and have been used. At this stage a warning is necessary. In his paper \cite{Narison:1988ep} from 1988 Narison calculated the mass splittings of the B-meson hypermultiplet with $J^P=0^+,0^-,1^+,1^-$ with the sum rules (\ref{hlscalar}) and (\ref{hlvector}) and concluded from this the existence of B-mesons with $J^P=0^+,1^+$. From today's viewpoint that conclusion has to be contradicted. The masses of the positive parity states have not been calculated directly, but by Narison's double moment sum rules, which calculate the quotient of the masses belonging to the states $J^P=0^+,0^-$ or $J^P=1^+,1^-$. Hence the mass splittings of the states $J^P=0^+,0^-$ and of the states $J^P=1^+,1^-$ can be calculated by his method, but only the masses of the states $J^P=0^-,1^-$ can be calculated directly by moment sum rules. However, no moment sum rule calculation of the D-meson masses has ever been published. The only published moment sum rule calculation concerning D-mesons was found in the book \cite{Narison:1989aq}, where an estimate for the decay constants of D-mesons based on moment sum rules is given. On the other hand, the calculation of the B-meson masses and decay constants by moment sum rules was published separately in a paper (see \cite{Narison:1989aq} and references therein). The reason for this is probably the bad convergence of the D-meson moment sum rules which have been used.\\
There exist many methods to fix the parameters which occur in the moment sum rules. The one chosen in this thesis is based on measurements of masses. The decay constants are not directly observable quantities, but the masses are. Hence, it can be tested whether the sum rule reproduces the experimental mass spectroscopy. If it reproduces the mass spectroscopy and fulfills the stability criteria, it can be used for the calculation of other quantities like the decay constants. In the case of the D-mesons the moment sum rule did not prove to be reliable. Thus it cannot be used for the calculation of the decay constants.\\
The moment sum rule in the $J^P=0^-$ channel is given by:
\begin{eqnarray}
M_{n}(q^2)=\frac{1}{n!}\left(\frac{d}{dq^2}\right)^n\Pi_{5}(q^2)\nonumber\\=\frac{1}{\pi}\int_{m^2}^{t_{c}}\frac{Im\left[\Pi_{5,pert}(s)\right]}{(s-q^2)^{n+1}}ds
+\frac{m^2}{(m^2-q^2)^{n+1}}\left[\frac{\mele{\alpha_{s}G^2}}{12\pi}-m\mele{\bar{\psi}_{i}\psi_{i}}\right]\nonumber\\ +\frac{m^3}{2}\left(\frac{(n+2)(n+1)}{(m^2-q^2)^{n+3}}q^2+\frac{(n+1)n}{(m^2-q^2)^{n+2}}\right)\frac{1}{4}\mele{g\bar{\psi_{i}}\sigma^{\mu\nu}\lambda_{a}\psi_{i}G^{a}_{\mu\nu}}\nonumber\\-\frac{m^2}{6}(n+1)\dfrac{n(n+8)m^4-3(n-6)q^2m^2-12q^4}{\left( m^2-q^2\right)^{n+4}}\frac{\pi}{6}\alpha_{s}\mele{\bar{\psi}_{i}\gamma_{\mu}\lambda_{a}\psi_{i}\sum_{i=u,d,s}\left(\bar{\psi}_{i}\gamma_{\mu}\lambda_{a}\psi_{i}\right)}.
\end{eqnarray}
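The structure of these moments can be checked with a few lines of stdlib Python. Saying that the $n$-th $q^2$-derivative of the pole term $m^2/(m^2-q^2)$ produces the kernel $m^2/(m^2-q^2)^{n+1}$ is equivalent to saying that these coefficients form the Taylor series of the pole around $q_0^2$, which is easy to verify numerically (illustrative mass value only):

```python
import math

# The quark-condensate term of the OPE is a simple pole in q^2.  Its n-th
# moment, (1/n!) d^n/d(q^2)^n, is claimed to be m^2/(m^2 - q^2)^(n+1).
# Summing these moments as a Taylor series around q0 must then rebuild the
# pole at a nearby point q1 -- a cheap consistency check of the kernel.
m2 = 1.3 ** 2          # (hypothetical charm mass)^2 in GeV^2
q0, q1 = -5.0, -4.5    # spacelike q^2 values in GeV^2

def moment(n, q2):
    return m2 / (m2 - q2) ** (n + 1)

series = sum(moment(n, q0) * (q1 - q0) ** n for n in range(60))
exact = m2 / (m2 - q1)
print(series, exact)   # the two values coincide to machine precision
```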
Calculations of D-meson masses based on the OPEs given in (\ref{hlscalar}) and (\ref{hlvector}) using moment sum rules are plagued by too many shortcomings, which prohibit high-precision analyses. Thus only rough estimates can be obtained by moment sum rules. The problem can be remedied by using Borel sum rules. The discussion of the results will follow in the next section.
\subsubsection{Analysis with Borel sum rules \label{hlborelope}}
The first step to obtain the Borel transformed sum rule is to transform the phenomenological part of the sum rule. This changes the integral kernel from a fraction to an exponential function, as shown in section \ref{dispersionrelations}. The imaginary part of the correlator stays untouched by the transformation. This is the simplest part of the transformation. The transformation of the OPE is in many cases much more work; here every Wilson coefficient has to be transformed. If the perturbative coefficient is given in the form of a dispersion integral, at least this one is simple to transform. All other coefficients need explicit treatment. The results for the OPEs (\ref{hlscalar}) and (\ref{hlvector}) are shown below.\\
Here are the Borel transformed Wilson coefficients for the pseudoscalar correlator.
\begin{enumerate}
\item{Borel transformation of $C_{\bar{q}q}$:}
\begin{eqnarray}
C_{\bar{q}q}=\frac{m^2}{m^2-q^2}\longrightarrow\frac{m^2}{M^2}e^{-\frac{m^2}{M^2}}.
\end{eqnarray}
\item{Borel transformation of $C_{mixed}$:}
\begin{eqnarray}
C_{mixed}=\frac{m^3}{2}\frac{q^2}{(m^2-q^2)^3}\longrightarrow\frac{m^3}{2M^4}\left(1-\frac{1}{2}\frac{m^2}{M^2}\right)e^{-\frac{m^2}{M^2}} \label{borelcmixed}.
\end{eqnarray}
\item{Borel transformation of $C_{4q}$:}
\begin{eqnarray}
C_{4q}=\frac{m^2}{(m^2-q^2)^2}\left[2-\frac{m^2}{m^2-q^2}-\left(\frac{m^2}{m^2-q^2}\right)^2\right]\frac{\pi}{6}\nonumber\\ \longrightarrow\frac{\pi}{6}\frac{m^2}{M^4}\left[ 2-\frac{1}{2}\frac{m^2}{M^2}-\frac{1}{6}\frac{m^4}{M^4}\right] e^{-\frac{m^2}{M^2}}.
\end{eqnarray}
\end{enumerate}
Here are the Borel transformed Wilson coefficients for the vector correlator.
\begin{enumerate}
\item{Borel transformation of $C_{\bar{q}q}$:}
\begin{eqnarray}
C_{\bar{q}q}=\frac{1}{m^2-q^2}\longrightarrow\frac{e^{-\frac{m^2}{M^2}}}{M^2}.
\end{eqnarray}
\item{Borel transformation of $C_{mixed}$:}
\begin{eqnarray}
C_{mixed}=\frac{m^3}{2}\frac{1}{(m^2-q^2)^3}\longrightarrow\frac{m^3}{4}\frac{e^{-\frac{m^2}{M^2}}}{M^6}.
\end{eqnarray}
\item{Borel transformation of $C_{4q}$:}
\begin{eqnarray}
C_{4q}=-\frac{1}{(m^2-q^2)^2}\frac{\pi}{18}\left[4+8\frac{m^2}{m^2-q^2}-3\left(\frac{m^2}{m^2-q^2}\right)^2\right]\nonumber\\\longrightarrow-\frac{\pi}{18}\left[ 4+4\frac{m^2}{M^2}-\frac{m^4}{2M^4}\right]\frac{e^{-\frac{m^2}{M^2}}}{M^4} .
\end{eqnarray}
\end{enumerate}
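All coefficients above follow the pattern $1/(m^2-q^2)^k \rightarrow e^{-m^2/M^2}/\left((k-1)!\,M^{2k}\right)$. Under one common convention the Borel transform is the scaling limit $\widehat{\mathcal{B}}f = \lim_{n\rightarrow\infty}\frac{(Q^2)^n}{(n-1)!}\left(-\frac{d}{dQ^2}\right)^n f$ at $Q^2=nM^2$ (with $Q^2=-q^2$), and since the derivatives of $(m^2+Q^2)^{-k}$ are known in closed form, the pattern can be checked numerically at large but finite $n$; a sketch with illustrative mass values:

```python
import math

# Borel transform as a scaling limit, evaluated at finite n via log-gamma to
# avoid overflow:  (Q^2)^n/(n-1)! * Gamma(k+n)/Gamma(k) * (m^2+Q^2)^(-k-n)
# at Q^2 = n M^2 should approach exp(-m^2/M^2) / ((k-1)! M^(2k)).
m, M, n = 1.3, 1.1, 200000     # hypothetical masses in GeV, n large but finite

def borel_finite_n(k):
    Q2 = n * M * M
    log_val = (math.lgamma(k + n) - math.lgamma(k) - math.lgamma(n)
               + n * math.log(Q2) - (k + n) * math.log(m * m + Q2))
    return math.exp(log_val)

def borel_limit(k):
    return math.exp(-m * m / (M * M)) / (math.factorial(k - 1) * (M * M) ** k)

for k in (1, 2, 3):            # the powers appearing in the coefficients above
    print(k, borel_finite_n(k), borel_limit(k))
```

The $k=1,2,3$ cases reproduce exactly the denominators $M^2$, $M^4$, $2M^6$ seen in the transformed coefficients.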
This leads to the OPE
\begin{eqnarray}
\widehat{\mathcal{B}}\Pi_{5}(Q^2)=
\frac{1}{M^2\pi}\int_{m^2}^{\infty}Im\left[ \Pi_{5,pert}(s)\right] e^{-\frac{s}{M^2}}ds+\left(\left[\frac{\mele{\alpha_{s}G^2}}{12\pi}-m\mele{\bar{\psi}_{i}\psi_{i}}\right]\right.\nonumber\\+\frac{m}{2M^2}\left(1-\frac{1}{2}\frac{m^2}{M^2}\right)\mele{g\bar{\psi_{i}}\sigma^{\mu\nu}\frac{\lambda_{a}}{2}\psi_{i}G^{a}_{\mu\nu}}\nonumber\\\left.+\frac{\pi}{6}\frac{1}{M^2}\left(2-\frac{1}{2}\frac{m^2}{M^2}-\frac{1}{6}\frac{m^4}{M^4}\right)\alpha_{s}\mele{\bar{\psi}_{i}\gamma_{\mu}\lambda_{a}\psi_{i}\sum_{i=u,d,s}\left(\bar{\psi}_{i}\gamma_{\mu}\lambda_{a}\psi_{i}\right)}\right)\frac{m^2}{M^2}e^{-\frac{m^2}{M^2}}=\nonumber\\
\frac{1}{M^2\pi}\int_{m^2}^{\infty}Im\left[ \Pi_{5,pert}(s)\right] e^{-\frac{s}{M^2}}ds+\left(\left[\frac{1.2\cdot10^{-2}GeV^4}{12}-m\mele{-1.5625\cdot10^{-2}GeV^3}\right]\right.\nonumber\\+\frac{m}{2M^2}\left(1-\frac{1}{2}\frac{m^2}{M^2}\right)\mele{-0.0125GeV^5}\nonumber\\\left.+\frac{\pi}{6}\frac{1}{M^2}\left(2-\frac{1}{2}\frac{m^2}{M^2}-\frac{1}{6}\frac{m^4}{M^4}\right)\left(-3.11\cdot 10^{-4}GeV^6 \right) \right)\frac{m^2}{M^2}e^{-\frac{m^2}{M^2}} \label{hlpseudoborelope}
\end{eqnarray}
for the pseudo scalar correlator and
\begin{eqnarray}
\widehat{\mathcal{B}}\Pi_{V}(Q^2)=\frac{1}{M^2\pi}\int_{m^2}^{\infty}Im\left[
\Pi_{V,pert}(s)\right]e^{-\frac{s}{M^2}}ds+\left(
\frac{-\mele{\alpha_{s}G^2}}{12\pi}-m\mele{\bar{\psi}_{i}\psi_{i}}+\frac{m^3}{4M^4}\mele{g\bar{\psi_{i}}\sigma^{\mu\nu}\frac{\lambda_{a}}{2}\psi_{i}G^{a}_{\mu\nu}}\right.\nonumber\\\left.-\frac{\pi}{18}\frac{1}{M^2}\left(4+4\frac{m^2}{M^2}-\frac{1}{2}\frac{m^4}{M^4}\right)
\alpha_{s}\mele{\bar{\psi}_{i}\gamma_{\mu}\lambda_{a}\psi_{i}\sum_{i=u,d,s}\left(\bar{\psi}_{i}\gamma_{\mu}\lambda_{a}\psi_{i}\right)}\right)\frac{1}{M^2}e^{-\frac{m^2}{M^2}}\nonumber\\
=\frac{1}{M^2\pi}\int_{m^2}^{\infty}Im\left[ \Pi_{V,pert}(s)\right] e^{-\frac{s}{M^2}}ds+\left(-\frac{1.2\cdot10^{-2}GeV^4}{12}-m\mele{-1.5625\cdot10^{-2}GeV^3}\right.\nonumber\\\left.+\frac{m^3}{4M^4}\mele{-0.0125GeV^5}-\frac{\pi}{18}\frac{1}{M^2}\left(4+4\frac{m^2}{M^2}-\frac{1}{2}\frac{m^4}{M^4}\right)\left(-3.11\cdot 10^{-4}GeV^6 \right)\right)\frac{1}{M^2}e^{-\frac{m^2}{M^2}} \label{hlvectorborelope}
\end{eqnarray}
for the OPE of the vector correlator. After the transformation the situation for the positive parity states changed drastically: the sum rules now converge. Moreover, the parameters of all four sum rules are now correlated in an acceptable way and the continuum threshold is much smaller. This is a welcome effect, because the threshold should lie close to the first radial excitation, which was not the case in the moment sum rule computations. The original analysis of the $J=0$ states in the strange D-meson channel was done by Narison (see \cite{Narison:2004th} and references therein). Hayashigaki performed an analysis of all four states \cite{Hayashigaki:2004gq}.\\
The results of these papers will now be discussed in order to serve as a basis for a further analysis. Narison's analysis can be classified as a conservative one. He uses the relative freedom in the choice of the continuum threshold $t_{c}$ and the c-quark pole mass $m_{c}$ to obtain agreement with the measurements in the $D_{s}$ channel for the $J=0$ states. The parameters of his choice are given by $t_{c}=(7.5\pm1.5)GeV^2$, i.e. $\sqrt{t_{c}}=(2.725\pm0.275)GeV$, and $m_{c}=1.46GeV$. Although the measurements and the analysis agreed, the analysis had a huge problem. The Borel curves for the D-meson states are expected to have a plateau, as those of classic mesons like the $\rho$ do \cite{Shifman:1978bx}, but this was not the case: all of Narison's curves had a hyperbola-like shape.\\
Hayashigaki performed a less conservative analysis. He used a larger value for the charm quark mass than Narison and another interval for the continuum threshold $t_{c}$. His analysis reproduces the masses of all particles from the hypermultiplet except for the $0^+$ states, for which he obtained masses larger than the experimental ones. This holds both for the channel with isospin and for the one with strangeness. He concludes that the $D_{s}(2317)$ is therefore a four quark state, while he claims no clear conclusion for the $0^+$ state in the channel with isospin. Narison made a comparison of his and Hayashigaki's work in his paper \cite{Narison:2003td} and argues against Hayashigaki. The difference between the two publications is probably due to a difference in the perturbative charm quark pole mass: Hayashigaki used a perturbative charm quark pole mass of $1.46GeV$, while Narison used $1.3GeV$. Another problem recognized during the work on this thesis concerns the sign of the mixed condensate coefficient in the $J=1$ channel. In Narison's book \cite{Narison:1989aq} he claims a negative sign in the case of a vector current, while Hayashigaki has a positive one \cite{Hayashigaki:2004gq}. Hence, Hayashigaki's calculations in the vector channel are in question, while Narison is not affected here because he never performed calculations for D-mesons in the vector channel. He did perform them for B-mesons, which are now also in question.
One thing holds for Hayashigaki's calculations in all cases except for the $1^+$ case: the curves are hyperbolas and no plateau is visible. Narison calculated only the $J=0$ channel, and there the same phenomenon occurred. The reason for this phenomenon is unclear. There are several possibilities:
\begin{enumerate}
\item The OPE could be too inaccurate. The terms could yield too crude approximations, or there may be terms missing.
\item The ansatz for the spectral function could be wrong.
\end{enumerate}
However, there must be something in the sum rules that is right. The likelihood of writing down a sum rule which by chance produces results that agree with the measurements is too small to seriously consider that possibility. Anyhow, it is reasonable to search for possibilities to improve the convergence of the sum rule.
\subsubsection{Testing new possibilities to improve the Borel sum rules for the D-meson hypermultiplet}
Recent measurements changed the picture of the spectral function of heavy-light systems. The narrow resonance ansatz is given by a single $\delta$-resonance and a continuum which is a step function. The distance between the resonance and the threshold of the continuum lies in the region of 0.5 to 1 GeV in the calculations performed by Narison and Hayashigaki. This distribution is justified by the $J/\psi$ and $\rho$ spectral functions, which have been measured and for which the assumption holds. During this work no justification for this approximation in other channels was found. In fact, in the axial-vector channel of D-mesons there are two cases where the narrow resonance plus continuum approximation does not hold. In those spectral functions two states lie very close to each other: the ground state and a second one, which may be a radial excitation. The channels are the s- and u-quark channels of $1^+$ D-mesons.\\
In the s-quark case the resonances are of nearly equal width and mass, while the u-quark case consists of a broad and a narrow state (see table \ref{spectralfunc}).
\begin{table}[htbp]
\begin{tabular}{||c|l|l|l|l||}
\hline
$J^P_{(I,S)}$&$1^+_{(\frac{1}{2},0)}$&$1^+_{(\frac{1}{2},0)}$&$1^+_{(0,-1)}$&$1^+_{(0,-1)}$\\
\hline
\hline
m (in GeV)&2.425 &2.427&2.459& 2.535\\
\hline
$\Gamma$ (in MeV)&58 &384&5.5 &2.3\\
\hline
\end{tabular}
\caption{Data of the spectra as they are given by the particle data group (2005). The quantum numbers of the heavy particles in the spectral function are not yet certain; the errors are omitted.\label{spectralfunc}}
\end{table}
The mass splitting is below $0.1GeV$. Therefore the narrow resonance plus continuum ansatz does not seem reasonable: the continuum would have to start close to the second resonance, and if the calculations are performed with Aliev-type sum rules no convergence is seen with such a small threshold value. Hence, it is reasonable to introduce a second resonance and to work with two resonances and a continuum. Moreover, the second resonance in the u-quark channel is not narrow but very broad, so the narrow resonance approximation is not reasonable any more. As an alternative a Breit-Wigner function is suitable.\\
Such measurements raise the question whether there may be a similar spectrum in the remaining channels $0^+$, $0^-$ and $1^-$. Unfortunately no measurements were found, but a theoretical prediction exists. Recently a new method for the calculation of excited states of heavy-light systems has been developed by M.F.M. Lutz. Based on this method J. Hofmann calculated the spectral functions for the $J^P=0^+,1^+$ states with isospin or strangeness. The results have something in common with the measured spectral functions. Hofmann calculated for six spectral functions the part around the ground state, for three channels: one with isospin, one with strangeness and one with anti-strangeness. In the channel with isospin he published results which can be compared with the measurements. In his publication \cite{Hofmann:2003je} the axial-vector channel with isospin is found, which corresponds to the u-quark channel. The calculations, which were published before the measurements were performed, agreed qualitatively with the data. Hence, the scheme seems to be valid in the positive parity doublet of the D-mesons with isospin. Moreover, he calculated further spectral functions. In the $0^+$ channel with isospin he predicts a spectral function which again has two resonances close to each other, where one is broad and the other is narrow. Therefore a check of the spectral functions with QSR is reasonable for several reasons. The quantum number assignment for the measured spectral function can be checked. Moreover, the model with which Lutz and Hofmann work does not have quarks and gluons but hadrons as basic degrees of freedom; QSR offer a method to check what QCD says about the predictions which have been made. Hence, a test of the method can be performed with such a calculation. Therefore the $0^+$ channel with isospin is checked (see the data in table \ref{juliandata}).
\begin{table}
\begin{tabular}{||c|l|l|l|l||}
\hline
$J^P_{(I,S)}$&$0^+_{(\frac{1}{2},0)}$&$0^+_{(\frac{1}{2},0)}$\\
\hline
\hline
m (in GeV)&2.255 &2.389\\
\hline
$\Gamma$ (in MeV)&360 &10\\
\hline
\end{tabular}
\caption{Data of the spectra as they were given in \cite{Hofmann:2003je} for the $0^+$ channel with isospin \label{juliandata}}
\end{table}
In order to perform the calculations the usual method has to be modified: the parameterization of the spectral function changes. In the case of the s-quark channel two narrow resonances can be used, and in the u-quark channel a narrow resonance and a Breit-Wigner resonance
\begin{eqnarray}
S(s,m,\Gamma)=\frac{1}{\pi} \frac{\sqrt{s}\,\Gamma}{(s-m^2)^2+s\Gamma^2}
\end{eqnarray}
where $\Gamma$ is the width of the resonance. The continuum is still approximated by the imaginary part of the correlator. Thus, the Breit-Wigner curves for the resonances are cut at the threshold $t_{c}$ and from there on the imaginary part of the correlator is used. The spectral function for the scalar D-meson is then given by
\begin{eqnarray}
\Pi\left(s\right)= S(s,2.255GeV,0.36GeV)+S(s,2.389GeV,0.01GeV)+Im\Pi_{0^+,pert}(s) \label{julianspectrum01}
\end{eqnarray}
and plotted in figure \ref{julianspectrum02}.
\begin{figure}[htbp]
\begin{center}
\includegraphics{julian_spectrum_0plus.eps}
\caption{Sketch of the imaginary part of the scalar correlator as predicted by \cite{Hofmann:2003je} (see the data in table \ref{juliandata}). The spectral function contains two resonances, a broad one and a narrow one, which lie close to each other. The continuum is approximated by the imaginary part of the correlator. In the case of the axial-vector D-mesons the sketch is similar. \label{julianspectrum02}}
\end{center}
\end{figure}
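A minimal numerical sketch of the ansatz (\ref{julianspectrum01}): the two Breit-Wigner resonances with the $0^+$ parameters of table \ref{juliandata}, with the perturbative continuum above $t_c$ replaced by a hypothetical constant, since the full expression for $Im\,\Pi_{0^+,pert}$ is not reproduced here.

```python
import math

def breit_wigner(s, m, gamma):
    """Breit-Wigner S(s, m, Gamma) in the form used in the text (GeV units)."""
    return (1.0 / math.pi) * math.sqrt(s) * gamma / ((s - m * m) ** 2 + s * gamma * gamma)

# Broad and narrow 0^+ resonances (masses/widths from the table above);
# the continuum level above t_c is a hypothetical stand-in for Im Pi_pert.
M1, G1 = 2.255, 0.360
M2, G2 = 2.389, 0.010
t_c, continuum_level = 8.25, 0.05

def spectral(s):
    res = breit_wigner(s, M1, G1) + breit_wigner(s, M2, G2)
    return res if s < t_c else res + continuum_level

# At its pole the Breit-Wigner takes the value 1/(pi * m * Gamma), so the
# narrow state towers over the broad one even though both carry unit strength.
print(breit_wigner(M1 * M1, M1, G1), 1.0 / (math.pi * M1 * G1))
```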
Due to the Breit-Wigner function the QSR cannot be solved analytically for the mass corresponding to the broad state. Fortunately, with tools like Mathematica this problem is easily manageable. The widths of the states will be treated as given by measurements or predictions. Thus the narrow resonances can also be approximated by a Breit-Wigner curve without much additional work. Figure \ref{julianspectrum02} and equation (\ref{julianspectrum01}) are not fully correct: the resonances have to carry factors which are given by squared matrix elements $\bra{0}j\ket{n}^2$. Those matrix elements have to be inserted into the spectral function
\begin{eqnarray}
\Pi\left(s\right)= \bra{0}j\ket{2.255}^2~S(s,2.255GeV,0.36GeV)+\bra{0}j\ket{2.389}^2~S(s,2.389GeV,0.01GeV)\nonumber\\+Im\Pi_{0^+,pert}(s) \label{julianspectrum03}.
\end{eqnarray}
The coefficients can be fitted to reproduce the mass spectroscopy, or more precisely their ratio can, due to the way the Borel sum rules are evaluated. As always, the spectral function enters the Borel transformed dispersion integral. In order to eliminate as much as possible of the matrix elements in front of the resonances, the logarithmic derivative of the dispersion integral is taken
\begin{eqnarray}
\mathcal{R}(\tau)=-\frac{d}{d\tau}\log\left(\frac{\tau}{\pi}\int Im\Pi(s)e^{-s\cdot\tau}ds\right).
\end{eqnarray}
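For a single narrow resonance, $Im\,\Pi(s)\propto\delta(s-m^2)$, this ratio reduces to $\mathcal{R}(\tau)=m^2-1/\tau$, the $1/\tau$ coming from the explicit factor $\tau$ in the definition, so $\mathcal{R}(\tau)+1/\tau$ extracts the resonance mass squared. A numerical sketch with a narrow Breit-Wigner stand-in (hypothetical width, no continuum):

```python
import math

def im_pi(s, m=2.255, gamma=0.005):
    """Narrow Breit-Wigner stand-in for the resonance part of Im Pi(s)."""
    return math.sqrt(s) * gamma / ((s - m * m) ** 2 + s * gamma * gamma)

def dispersion(tau, s_lo=3.0, s_hi=9.0, n=20000):
    """(tau/pi) * int Im Pi(s) exp(-s*tau) ds by the trapezoidal rule."""
    h = (s_hi - s_lo) / n
    acc = 0.5 * (im_pi(s_lo) * math.exp(-s_lo * tau)
                 + im_pi(s_hi) * math.exp(-s_hi * tau))
    for i in range(1, n):
        s = s_lo + i * h
        acc += im_pi(s) * math.exp(-s * tau)
    return tau / math.pi * acc * h

def ratio(tau, eps=1e-5):
    """R(tau) = -d/dtau log F(tau), by a central finite difference."""
    return -(math.log(dispersion(tau + eps))
             - math.log(dispersion(tau - eps))) / (2 * eps)

tau = 0.8
print(ratio(tau) + 1.0 / tau, 2.255 ** 2)  # nearly equal for a narrow resonance
```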
Hence a ratio between the original expression and the derivative of this expression is calculated. Thus one of the matrix elements can be factored out, leaving a ratio between the matrix elements. This reduces the number of a priori unknown quantities from three to two. However, there are three new parameters in the QSR: two widths, one for each particle, and the ratio of the matrix elements. In QSR calculations with narrow resonances the widths have of course been absent, and if only one resonance is considered only one matrix element enters the spectral function; this one cancels in the evaluation of the sum rule due to the ratio following from the logarithmic derivative.\\
Thus the phenomenological part of the sum rules is determined. As the theoretical part the OPEs (\ref{hlscalar}) and (\ref{hlvector}) are used; thereby a two-quark dominance of the states is assumed. The dispersion integral and the OPE are equal, and hence so are their logarithmic derivatives. This equation can be used to determine properties of the spectral function, assuming that the condensates are all known with sufficient accuracy. Overall six free parameters enter the sum rule.
\begin{enumerate}
\item The mass of the first resonance $M_{1}$.
\item The mass of the second resonance $M_{2}$.
\item The width of the first resonance $\Gamma_{1}$.
\item The width of the second resonance $\Gamma_{2}$.
\item The ratio of the matrix elements $r=\frac{\bra{0}j\ket{2.389}}{\bra{0}j\ket{2.255}}$.
\item The values of the continuum threshold $t_{c}$.
\end{enumerate}
Generally it is possible to measure everything except $t_{c}$ and the ratio of the matrix elements $r$; these quantities can be measured only in special cases. $t_{c}$ can be measured only for vector currents. A particle for which the matrix elements can be measured is the $\pi$ meson, where they can be determined from decays to a high accuracy. The goal is to check if it is possible to fit $r$ and $t_{c}$ so that the sum rule reproduces the spectral function. Depending on whether this is possible or not, statements concerning the physics can be made.\\
Before the results are presented a short review of the evaluation of Borel sum rules is given. The dispersion relation, with the phenomenological part of the sum rule on the left side and the theoretical part on the right side, is evaluated in order to plot the meson mass as a function of the Borel parameter $M$. There are two borders for the Borel parameter. The first is given by the theoretical part: for small $M$ the OPE is not valid. The second is given by the phenomenological part: for large $M$ the continuum contribution becomes dominant, and then the resonance properties can no longer be determined from the sum rule. The Borel window is a domain where the OPE is valid and the resonance dominates the phenomenological part of the sum rule. Unfortunately, it is not guaranteed that a Borel window exists. For small $M$ the Borel window is fixed by the requirement that the $d=6$ terms contribute less than $10-20\%$ to the OPE; for large $M$ it is fixed by the requirement that the contribution of the resonance to the dispersion integral is bigger than the contribution of the continuum
\begin{eqnarray}
\int_{0}^{t_{c}}ds~Im\Pi(s)e^{-\frac{s}{M^2}}\geq\int_{t_{c}}^{\infty}ds~Im\Pi(s)e^{-\frac{s}{M^2}}.
\end{eqnarray}
Hence the upper border of the Borel window depends on $t_{c}$, while the lower one does not. The Borel window determines the regime where the sum rules are reliable enough to determine the meson mass. The larger the Borel window is, the more reliable is the corresponding determination of the resonance properties.\\
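The upper border can be located explicitly in a toy model (all parameters hypothetical): a $\delta$-resonance with residue $f^2$ at $m^2$ against a perturbative-like continuum $Im\,\Pi \sim c\,s$ above $t_c$, whose Borel integral is analytic. Bisection then finds the Borel mass at which the two contributions are equal.

```python
import math

# Toy Borel-window criterion: delta resonance at m^2 with residue f2 versus
# a continuum Im Pi = c*s above t_c.  Hypothetical GeV-unit numbers, chosen
# only to illustrate how "resonance >= continuum" fixes the upper border.
m2, f2, c, t_c = 5.1, 0.05, 0.01, 8.25

def resonance(M2):
    return f2 * math.exp(-m2 / M2)

def continuum(M2):
    # int_{t_c}^inf c * s * exp(-s/M2) ds, evaluated analytically
    return c * M2 * math.exp(-t_c / M2) * (t_c + M2)

# The ratio resonance/continuum decreases monotonically in M^2 (since
# t_c > m^2), so bisection brackets the unique crossing point.
lo, hi = 0.5, 50.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if resonance(mid) >= continuum(mid):
        lo = mid
    else:
        hi = mid
print("upper border of the Borel window: M^2 ~", round(lo, 3), "GeV^2")
```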
\begin{table}[htbp]
\begin{tabular}{||c|l|l|l|l||}
\hline
$J^P_{(I,S)}$&$0^+_{(\frac{1}{2},0)}$&$0^+_{(\frac{1}{2},0)}$&$1^+_{(0,-1)}$&$1^+_{(0,-1)}$\\
\hline
\hline
m (in GeV)&$2.255\pm0.01$&$2.389\pm0.01$&$2.459\pm0.06$&$2.535\pm0.1$\\
\hline
$\Gamma$ (in MeV)&360&10&5.5 &2.3\\
\hline
$f$ (in MeV)&$148\pm 16$&$264\pm32$&$155\pm27$&$202\pm21$\\
\hline
$t_{c}$ (in GeV)&$8.25\pm0.75$&$8.25\pm0.75$&$9.875\pm1.125$&$9.875\pm1.125$\\
\hline
$r$ (in GeV)&$6\pm3$&$6\pm3$&$3\pm2$&$3\pm2$\\
\hline
$m_{c}$ (in GeV)&$1.15\pm0.01$&$1.15\pm0.01$&$1.2\pm0.02$&$1.2\pm0.02$\\
\hline
\end{tabular}
\begin{tabular}{||c|l|l||}
\hline
$J^P_{(I,S)}$&$1^+_{(\frac{1}{2},0)}$&$1^+_{(\frac{1}{2},0)}$\\
\hline
\hline
m (in GeV)&$2.425\pm0.080$ &$2.427\pm0.050$\\
\hline
$\Gamma$ (in MeV)&58 &384\\
\hline
$f$ (in MeV)&&\\
\hline
$t_{c}$ (in GeV)&$7.5\pm0.3$&$7.5\pm0.3$\\
\hline
$r$ (in GeV)&$0.3\pm0.25$&$0.3\pm0.25$\\
\hline
$m_{c}$ (in GeV)&$1.2\pm0.01$&$1.2\pm0.01$\\
\hline
\end{tabular}
\caption{Results from the Sum Rule analysis. Everything except for the widths has been determined from the Sum Rules.\label{results}}
\end{table}
As already discussed above, the parameters of the sum rules have to be fixed. For this purpose, experimental data on the particles are used: the parameters of the Sum Rule are adjusted to reproduce the experimental data. Moreover, the Sum Rule has to fulfill additional criteria in order to be reliable. The data which are used are the masses of the particles. The parameters are adjusted so as to shift the plateau of the corresponding Borel curve to the mass of the corresponding particle. In addition, the Borel curve is required to be stable against shifts in the parameters; this corresponds to an interval in which the parameters can be varied without changing the position of the plateau significantly. Finally, the plateau has to be located in the Borel window. A large Borel window is of course much more reliable than a small one. For the mass of the charm quark the running mass was used. The width of the particles was held fixed.\\
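The plateau criterion can be visualized in the same toy model. With the spectral function modeled as a pole at $s=m^2$ plus a continuum above $t_{c}$ (all numbers below are illustrative, not the fitted parameters of this analysis), the Borel mass is the ratio of the first two Borel moments; it flattens near the input mass where the resonance dominates and bends away once the continuum takes over, reproducing the qualitative shape of the curves discussed here.

```python
import numpy as np

# Toy Borel curve for the meson mass: with Im Pi(s) modeled as a pole plus a
# continuum above t_c, the "Borel mass" is the ratio of the first two Borel
# moments, m_eff^2(M^2) = int s ImPi e^{-s/M^2} ds / int ImPi e^{-s/M^2} ds.
# All parameter values are illustrative assumptions, not the fitted ones.
f2, m, t_c, c = 1.0, 2.4, 8.25, 0.1

def moments(M2):
    m2 = m * m
    pole0 = f2 * np.exp(-m2 / M2)                    # 0th moment, pole
    pole1 = f2 * m2 * np.exp(-m2 / M2)               # 1st moment, pole
    cont0 = c * M2 * np.exp(-t_c / M2)               # continuum, 0th moment
    cont1 = c * M2 * (t_c + M2) * np.exp(-t_c / M2)  # continuum, 1st moment
    return pole0 + cont0, pole1 + cont1

def m_eff(M2):
    b0, b1 = moments(M2)
    return np.sqrt(b1 / b0)

# Inside the window the curve flattens out near the input mass; for large M^2
# the continuum takes over and the curve rises away from the plateau.
for M2 in (1.5, 2.0, 2.5, 3.0, 20.0):
    print(M2, m_eff(M2))
```

The stability check described in the text corresponds to repeating this scan while varying the parameters within their intervals and verifying that the flat region does not move.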
The intervals for the parameters are given in table \ref{results}. In these intervals it is possible to reproduce the masses of the corresponding particles; hence, they can be used to determine the corresponding decay constants. After that the Borel window can be determined. Unfortunately, the Borel curve in the channel with $J^P=1^+$ and isospin shows no plateau in the plots for the decay constants. This admits only one interpretation: in that channel the spectral function cannot be described by the QSR which was used. The other channels have a plateau in the plot for the decay constant, although they do not have a plateau in the plots for the masses; the plots for the masses always have the shape of a hyperbola. The decay constants agree with earlier calculations. The minimal range for the Borel window in every plot is $\tau=0.3$--$1.0$, so all plateaus lie inside the Borel window. However, the Sum Rules for these states also show inconsistencies: the intervals for the parameters are small, far from the ideal case.
The curves for the masses look like a parabola and not like a plateau; some representative curves are shown in figures \ref{rep1} to \ref{rep6}. Due to these inconsistencies the sum rules are also regarded as unreliable. To be more precise, the Sum Rule in that channel is even less reliable than the other Sum Rules.\\
The results raise many questions. Even the improved spectral function did not improve the reliability of the Sum Rules; on the contrary, their reliability even got worse. Without the improved spectral function the Sum Rule in the channel with strangeness is more reliable than it is here. Whether the values of the decay constants are correct can only be settled by experiment. Unfortunately, no data from which the decay constants could be extracted are available. The source which is supposed to produce the correct result is lattice QCD. The results of these calculations are slightly lower than the lattice results, but in agreement with earlier Sum Rule calculations. At this point further discussion is not reasonable; only further investigations could help to solve the problems. The conclusions will offer a possibility to find such a solution.
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=1.5]{0plus_massen.eps}
\caption{Example of the Borel curves in the $J^P=0^+$ channel with isospin. On the left the state with m=2.255 GeV and on the right the state with m=2.389 GeV is shown. The lines correspond to the following parameters: $t_{c}=8.25$ for all lines, full line r=4, dashed line r=5 and dash-dotted line r=6. \label{rep1}}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.6]{me_0plus.eps}
\caption{Example of the Borel curves in the $J^P=0^+$ channel with isospin for the state with m=2.255 GeV. The lines correspond to the following parameters: $t_{c}=8.25$ for all lines, full line r=4, dashed line r=5 and dash-dotted line r=6.}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=1.5]{isospin_massen.eps}
\caption{Example of the Borel curves in the $J^P=1^+$ channel with isospin. On the left the state with m=2.425 GeV and on the right the state with m=2.427 GeV is shown. The lines correspond to the following parameters: $t_{c}=7.5$ for all lines, full line r=0.1, dashed line r=0.2 and dash-dotted line r=0.3.}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.6]{me_isospin.eps}
\caption{Example of the Borel curves in the $J^P=1^+$ channel with isospin for the state with m=2.425 GeV. The lines correspond to the following parameters: $t_{c}=7.5$ for all lines, full line r=0.1, dashed line r=0.2 and dash-dotted line r=0.3.}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=1.5]{strangeness_massen.eps}
\caption{Example of the Borel curves in the $J^P=1^+$ channel with strangeness. On the left the state with m=2.459 GeV and on the right the state with m=2.535 GeV is shown. The lines correspond to the following parameters: $t_{c}=9.75$ for all lines, full line r=1, dashed line r=2 and dash-dotted line r=3.}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.6]{me_strangeness.eps}
\caption{Example of the Borel curves in the $J^P=1^+$ channel with strangeness for the state with m=2.459 GeV. The lines correspond to the following parameters: $t_{c}=9.75$ for all lines, full line r=1, dashed line r=2 and dash-dotted line r=3.\label{rep6}}
\end{center}
\end{figure}
\newpage
\section{Conclusions}
The calculations made in this thesis use an improved ansatz for the spectral function. In most Sum Rule applications the spectral function is approximated by a single resonance and a continuum; in this work a spectral function with two resonances and a continuum is used. Within the QCD Sum Rule framework the calculations are improved when a more realistic spectral function is used. Due to the second resonance an additional parameter enters the theory. The properties of the resonances are known from experiment or from other theoretical approaches; hence, they are used to fix the parameters of the theory. On that basis the decay constants of the corresponding hadrons are calculated (see table \ref{results}). The results of these calculations and the shape of the corresponding Borel curves are used to extract statements concerning the analyzed hadrons.\\
The hadrons which are addressed are D-mesons. D-mesons are believed to have two valence quarks, a charm quark together with an up, down, or strange quark. Therefore, the Sum Rules used here are based on a two-quark structure. In particular, three systems are analyzed: a system with $J^P=0^+$ with charm and isospin, and two systems with $J^P=1^+$, one with charm and isospin and the other with charm and strangeness. The results are of quantitative and qualitative nature and can be divided into two groups.\\
The first group contains the D-mesons with $J^P=0^+$ and isospin and the D-meson with $J^P=1^+$ and strangeness. Although the Sum Rules for these particles can reproduce the data, they are not satisfactory: the OPE does not seem to reproduce the spectral function reliably, although it cannot be excluded that an OPE containing more terms would do so. The quantitative results for the D-mesons in the first group are the decay constants in the two-quark picture. The results for the decay constants of the resonances again have to be split into two groups. The first consists of the lightest resonances, the ground states so to speak; their values agree within error bars with earlier calculations. The second consists of the resonances which are higher in mass. No earlier calculations or estimates of their values have been found during this work; these properties have most probably never been calculated before. For these states the decay constants are always higher than for the ground-state particles.\\
The second group contains the D-mesons with $J^P=1^+$ and isospin. In that channel the OPE cannot reproduce the spectral function. In view of the experience gained during the work on this thesis, even an OPE with more terms should not change this situation. The calculation of decay constants was impossible: the sum rule in that channel did not show any sign of a plateau for the decay constant. This point was discussed in section \ref{sumrules}.\\
However, not long ago some papers on D-mesons and QCD Sum Rules appeared which implement a four-quark structure of D-mesons \cite{Kim:2005gt,Nielsen:2005ia}. In these publications many of the problems which plague the two-quark versions are absent. Hence, from the viewpoint of the current investigation, four-quark structures have to be taken into account in QSR calculations in the hope of obtaining Sum Rules which behave more reasonably. The minimal recommendation which can be extracted from this thesis is to consider Sum Rules for D-meson systems which implement both a two-quark and a four-quark structure of D-mesons in order to extract the mixing angle between those structures. This means that it is necessary to consider four-quark structures in addition to the standard two-quark ones.
\newpage
\section{Introduction}
It is of fundamental interest in operator algebras to analyze interplay
between a geometric or dynamical object and a $C^*$-algebra associated
with it.
For a branched covering, Deaconu and Muhly \cite{DM} introduced a
$C^*$-algebra associated with it using a r-discrete groupoid.
A typical example of a branched covering is a rational function regarded
as a self-map of the Riemann sphere $\hat{\mathbb C}$.
In order to capture information of the branched points for the complex
dynamical system arising from a rational function $R$,
the second and third-named authors \cite{KW1} introduced a
slightly different construction of a $C^*$-algebra
${\mathcal O}_R(\hat{\mathbb C})$
(resp. ${\mathcal O}_R(J_R)$ and ${\mathcal O}_R(F_R)$)
associated with $R$ on $\hat{\mathbb C}$
(resp. the Julia set $J_R$ and the Fatou set $F_R$ of $R$).
The $C^*$-algebra ${\mathcal O}_R(\hat{\mathbb C})$ is the Cuntz-Pimsner
algebra of a Hilbert bimodule over the $C^*$-algebra $C(\hat{\mathbb{C}})$
of the set of continuous functions and the other two are defined in a
similar way.
One of the purposes of the present paper is to discuss KMS states for
the gauge action on the $C^*$-algebra $\ORC$.
The structure of the KMS states reflects that of the singular points of $R$
as we expected.
We completely classify the KMS states for the gauge action of
${\mathcal O}_R(\hat{\mathbb C})$.
If $R$ has no exceptional points, then the gauge action has a phase transition
at $\beta = \log \deg R$ in the following sense:
In the region $0 \leq \beta < \log \deg R$, no KMS-state exists.
A unique KMS-state exists at $\beta = \log \deg R$, which is of type
$III_{1/\deg R}$ and corresponds to the Lyubich measure.
The extreme $\beta$-KMS states at $\beta > \log \deg R$
are parameterized by the branched points of $R$ and are factor states of
type I.
If $R$ has exceptional points, then there appear additional $\beta$-KMS
states for $0<\beta \leq \log \deg R$ parameterized by exceptional points.
We can recover the degree of $R$, the number of branched points,
and the number of exceptional points from the structure of the KMS states.
The orbits of exceptional points are distinguished by 0-KMS states.
We also classify the KMS states for the $C^*$-algebras associated with some
self-similar sets including the full tent map and the Sierpinski gasket by
a similar method.
Olsen-Pedersen \cite{OP} showed that a $\beta$-KMS state for the gauge action
of the Cuntz algebra ${\mathcal O}_n$ exists if and only if $\beta = \log n$
and that the $\log n$-KMS state is unique.
Since then, several authors have discussed KMS states for the gauge action
(and its generalization) of the Cuntz-Pimsner algebra
(and its generalization).
Here we content ourselves with only giving an incomplete list of
such works:
\cite{DS}, \cite{EFW}, \cite{Ev}, \cite{Ex1}, \cite{Ex2}, \cite{EL}, \cite{I},
\cite{KP}, \cite{KR}, \cite{LN}, \cite{M}, \cite{MWY}, \cite{OK}, \cite{PWY}.
Among the others, in this paper we follow Laca and Neshveyev's approach where
the structure of the KMS states is described in terms of a certain
Perron-Frobenius type operator.
We give an explicit description of the Perron-Frobenius type operator
in our cases, which allows us to perform detailed analysis of the KMS states.
This paper is an extended version of the preprint \cite{KW3}.
\section{Dynamical systems and Hilbert bimodules}
In this section, we recall our construction of the $C^*$-algebras associated
with rational functions in \cite{KW1} and self-similar sets in \cite{KW2},
which are constructed as the Cuntz-Pimsner algebras \cite{Pi}.
\subsection{The Cuntz-Pimsner algebra}
Let $A$ be a $C^*$-algebra and $X$ be a Hilbert right $A$-module.
We denote by $L(X)$ the algebra of the adjointable bounded operators
on $X$.
For $\xi$, $\eta \in X$, the ``rank one" operator $\theta _{\xi,\eta}$
is defined by $\theta _{\xi,\eta}(\zeta) = \xi(\eta|\zeta)_A$
for $\zeta \in X$.
The closure of the linear span of the rank one operators is denoted by $K(X)$.
A sequence $(u_n)_n$ in $X$ is said to be a countable basis of $X$ over $A$
if for any $x \in X$, the series
$\sum_{n=1}^{\infty} u_n(u_n|x)_A$ converges to $x$ in norm.
We note that $(u_n)_n$ converges unconditionally in the
sense that the net $(\sum_{n \in F} u_n(u_n|x)_A)_F$,
where $F$ runs all finite subsets of ${\mathbb N}$, converges
to $x$ in norm (see \cite{KPW} for details on countable bases).
We have $\|u_n \| \leq 1$ and the sequence
$(\sum_{k=1}^n \theta _{u_k,u_k})_n$ is an approximate unit for $K(X)$.
We say that
$X$ is a Hilbert bimodule over $A$ if $X$ is a Hilbert right $A$-
module with a *-homomorphism $\phi : A \rightarrow L(X)$. We always assume
that $X$ is full and $\phi$ is injective.
Let $F(X) = \bigoplus _{n=0}^{\infty} X^{\otimes n}$
be the full Fock module of $X$ with a convention $X^{\otimes 0} = A$.
For $\xi \in X$, the creation operator $T_{\xi} \in L(F(X))$ is defined by
\[
T_{\xi}(a) = \xi a \qquad \text{and } \
T_{\xi}(\xi _1 \otimes \dots \otimes \xi _n) = \xi \otimes
\xi _1 \otimes \dots \otimes \xi _n .
\]
We define $i_{F(X)}: A \rightarrow L(F(X))$ by
$$
i_{F(X)}(a)(b) = ab \qquad \text{and } \
i_{F(X)}(a)(\xi _1 \otimes \dots \otimes \xi _n) = \phi (a)
\xi _1 \otimes \dots \otimes \xi _n
$$
for $a,b \in A$. The Cuntz-Toeplitz algebra ${\mathcal T}_X$
is the C${}^*$-algebra acting on $F(X)$ generated by $i_{F(X)}(a)$
with $a \in A$ and $T_{\xi}$ with $\xi \in X$.
Let $j_K : K(X) \rightarrow {\mathcal T}_X$ be the homomorphism
defined by $j_K(\theta _{\xi,\eta}) = T_{\xi}T_{\eta}^*$.
We consider the ideal $I_X := \phi ^{-1}(K(X))$ of $A$.
Let ${\mathcal J}_X$ be the ideal of ${\mathcal T}_X$ generated
by $\{ i_{F(X)}(a) - (j_K \circ \phi)(a) ; a \in I_X\}$. Then
the Cuntz-Pimsner algebra ${\mathcal O}_X$ is defined as
the quotient ${\mathcal T}_X/{\mathcal J}_X$ .
Let $\pi : {\mathcal T}_X \rightarrow {\mathcal O}_X$ be the
quotient map.
We set $S_{\xi} = \pi (T_{\xi})$ and $i(a) = \pi (i_{F(X)}(a))$.
Let $i_K : K(X) \rightarrow {\mathcal O}_X$ be the homomorphism
defined by $i_K(\theta _{\xi,\eta}) = S_{\xi}S_{\eta}^*$. Then
$\pi((j_K \circ \phi)(a)) = (i_K \circ \phi)(a)$ for $a \in I_X$.
The Cuntz-Pimsner algebra ${\mathcal O}_X$ is
the universal C${}^*$-algebra generated by $i(a)$ with $a \in A$ and
$S_{\xi}$ with $\xi \in X$ satisfying that
$i(a)S_{\xi} = S_{\phi (a)\xi}$, $S_{\xi}i(a) = S_{\xi a}$,
$S_{\xi}^*S_{\eta} = i((\xi | \eta)_A)$ for $a \in A$,
$\xi, \eta \in X$ and $i(a) = (i_K \circ \phi)(a)$ for $a \in I_X$.
We usually identify $i(a)$ with $a$ in $A$.
We also identify $S_{\xi}$ with $\xi \in X$ and simply write $\xi$
instead of $S_{\xi}$.
There exists an action
$\alpha : {\mathbb R} \rightarrow \hbox{\rm Aut} \ {\mathcal O}_X$
defined by $\alpha_t(\xi) = e^{it}\xi$ for $\xi\in X$ and $\alpha_t(a)=a$
for $a\in A$, which is called the {\it gauge action}.
\subsection{The case of rational functions}
Let $R$ be a rational function with $N = \deg R$, that is,
if $R(z)=P(z)/Q(z)$ with relatively prime polynomials $P(z)$ and $Q(z)$,
the degree $\deg R$ is the maximum of those of $P(z)$ and $Q(z)$.
We regard $R$ as a $N$-fold branched covering map
$R : \hat{\mathbb C} \rightarrow \hat{\mathbb C}$
on the Riemann sphere $\hat{\mathbb C} = {\mathbb C}
\cup \{ \infty \}$.
The sequence $(R^n)_n$ of iterations of $R$
gives a complex dynamical system on $\hat{\mathbb C}$.
The Fatou set $F_R$ of $R$ is the maximal open subset of
$\hat{\mathbb C}$ on which $(R^n)_n$ is equicontinuous (or
a normal family), and the Julia set $J_R$ of $R$ is the
complement of the Fatou set in $\hat{\mathbb C}$.
We always assume that $R$ is of degree at least two.
Recall that a {\it branched point} of $R$ is a point
$z_0$ around which $R$ is not locally one to one.
It is a zero of the derivative $R'$ or
a pole of $R$ of order two or higher.
The image $w_0 =R(z_0)$ is called a {\it branch value} of $R$.
Using appropriate local charts, if $R(z) = w_0 + c(z - z_0)^n +
(\text{higher terms})$ with
$n \geq 1$ and $c \not= 0$ on some neighborhood of $z_0$,
then the integer $n = e(z_0)$ is called the
{\it branch index} of $R$ at $z_0$.
Thus $e(z_0) \geq 2$ if $z_0$ is a branched point and $e(z_0) = 1$
otherwise.
Therefore $R$ is an $e(z_0) :1$ map in a punctured neighborhood of $z_0$.
Let ${\mathcal B}(R)$ be the
set of branched points of $R$ and ${\mathcal C}(R)$ be the
set of the branch values of $R$.
Then the restriction $R: \hat{\mathbb C} \setminus R^{-1}({\mathcal C}(R))
\rightarrow \hat{\mathbb C} \setminus {\mathcal C}(R) $ is an
$N$-to-one regular covering.
Let $A= C(\hat{\mathbb C})$ and $X = C(\mathop{\rm {graph}}\nolimits R)$ be the set of continuous
functions on $\hat{\mathbb C}$ and $\mathop{\rm {graph}}\nolimits R$ respectively,
where $\mathop{\rm {graph}}\nolimits R = \{(x,y) \in \hat{\mathbb C}^2 ; y = R(x)\} $
is the graph of $R$.
Then $X$ is an $A$-$A$ bimodule by
$$
(a\cdot \xi \cdot b)(x,y) = a(x)\xi(x,y)b(y),\quad a,b \in A,\;
\xi \in X.$$
We define an $A$-valued inner product $(\ |\ )_A$ on $X$ by
$$
(\xi|\eta)_A(y) = \sum _{x \in R^{-1}(y)} e(x) \overline{\xi(x,y)}\eta(x,y),
\quad \xi,\eta \in X,\; y \in \hat{\mathbb C}.$$
Thanks to the branch index $e(x)$, the inner product above gives a continuous
function and $X$ is a full Hilbert bimodule over $A$ without completion.
The left action of $A$ is unital and faithful.
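As a concrete illustration, take $R(z) = z^2$. For $y \neq 0$ there are two simple preimages $\pm\sqrt{y}$ with $e(\pm\sqrt{y}) = 1$, while $y = 0$ has the single preimage $0$ with $e(0) = 2$, so that
\[
(\xi|\eta)_A(y) =
\begin{cases}
\overline{\xi(\sqrt{y},y)}\,\eta(\sqrt{y},y)
+ \overline{\xi(-\sqrt{y},y)}\,\eta(-\sqrt{y},y), & y \neq 0, \\
2\,\overline{\xi(0,0)}\,\eta(0,0), & y = 0.
\end{cases}
\]
As $y \to 0$ the two preimages merge into the branched point $0$ and the two summands coalesce into the single term weighted by $e(0) = 2$, so $(\xi|\eta)_A$ is indeed continuous at $y = 0$.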
Since the Julia set $J_R$ is completely invariant under $R$, i.e.,
$R(J_R) = J_R = R^{-1}(J_R)$, we can consider the restriction
$R|_{J_R} : J_R \rightarrow J_R$, which will be often denoted by
the same letter $R$.
Let $\mathop{\rm {graph}}\nolimits R|_{J_R} = \{(x,y) \in J_R \times J_R \ ; \ y = R(x)\} $
be the graph of the restriction map $R|_{J_R}$ and
$X(J_R) = C(\mathop{\rm {graph}}\nolimits R|_{J_R})$.
In the same way as above, $X(J_R)$ is a full Hilbert bimodule over $C(J_R)$.
Since the Fatou set $F_R$ is also completely invariant,
$X(F_R):=C_0(\mathop{\rm {graph}}\nolimits R|_{F_R})$ is a full Hilbert bimodule over $C_0(F_R)$.
\begin{defi}[\cite{KW1}]
The $C^*$-algebra
${\mathcal O}_R(\hat{\mathbb C})$ is defined
as the Cuntz-Pimsner algebra of the Hilbert bimodule
$X= C(\mathop{\rm {graph}}\nolimits R)$ over
$A = C(\hat{\mathbb C})$.
When the Julia set $J_R$ is not empty (for example
$\deg R \geq 2$), we define
the $C^*$-algebra ${\mathcal O}_R(J_R)$
as the Cuntz-Pimsner algebra of the Hilbert bimodule $
X= C(\mathop{\rm {graph}}\nolimits R|_{J_R})$ over $A = C(J_R)$.
When the Fatou set $F_R$ is not empty, the $C^*$-algebra
${\mathcal O}_R(F_R)$ is defined similarly.
\end{defi}
\subsection{The case of self-similar sets}
Let $(\Omega,d)$ be a separable, complete metric space $\Omega$ with a
metric $d$. Let $\gamma = (\gamma_1,\dots , \gamma_N)$ be a system of
continuous maps from $\Omega$ to itself. We say that $\gamma_i$ is
a proper contraction if
there exist positive constants $c_1$ and $c_2$
with $0 < c_1 \leq c_2 < 1$ satisfying the condition:
$$c_1 d(x,y)\leq d(\gamma_i(x),\gamma_i(y))\leq c_2 d(x,y),
\quad i=1,2,\dots,N,\quad \forall x,y \in \Omega.$$
We say that a non-empty compact set $K \subset \Omega$ is {\it self-similar}
(in a weak sense) with respect to the system
$\gamma = (\gamma_1,\dots , \gamma_N)$ if
$$
K=\bigcup_{i=1}^{N} \gamma_{i}(K).
$$
If the contractions are proper, then there exists a unique self-similar
set $K \subset \Omega$.
In this note we usually forget the ambient space $\Omega$ and
assume that $\gamma = (\gamma_1,\dots , \gamma_N)$ is a
system of continuous functions on a self-similar set $K$.
We use the following notations:
\begin{align*}
{\mathcal B}(\gamma) & = \{ x \in K | x =\gamma_j(y)=\gamma_{j'}(y)
\text{ for some } y
\in K \text{ and } j \ne j' \} \\
{\mathcal C}(\gamma) & = \{ y \in K | \gamma_j(y)=\gamma_{j'}(y) \text{ for some }
j \ne j' \}
\end{align*}
We call a point in ${\mathcal B}(\gamma)$ a branched point, and a point in
${\mathcal C}(\gamma)$ a branch value.
If $\gamma = (\gamma_1,\dots , \gamma_N)$ is a system
of branches of $R^{-1}$ for a certain map $R$, the terms are
compatible with those for $R$.
We set ${\mathcal G}$ to be the union of the cographs of
$\gamma_i$ for $i=1,2,\cdots,N$, that is,
\[
{\mathcal G} = {\mathcal G}(\{\gamma_j : j=1,2,\dots,N \}) :=
\bigcup _{i=1}^N \{(x,y) \in K^2 ; x = \gamma _i(y)\}.
\]
Let $A = C(K)$ and let $X = C({\mathcal G})$.
Then $X$ is an $A$-$A$ bimodule by
$$
(a\cdot f \cdot b)(x,y) = a(x)f(x,y)b(y),\quad
a,b \in A,\;f \in X.$$
We introduce an $A$-valued inner product $(\ |\ )_A$ on $X$ by
$$
(\xi|\eta)_A(y) = \sum _{i=1}^N
\overline{\xi(\gamma _i(y),y)}\eta(\gamma _i(y),y),\quad
\xi,\eta \in X,\; y \in K.$$
\begin{defi}[\cite{KW2}]
Let $(K,d)$ be a compact metric space
and $\gamma = (\gamma_1,\dots , \gamma_N)$ be a system of
proper contractions on $K$. Assume that $K$ is self-similar.
The $C^*$-algebra ${\mathcal O}_{\gamma}(K)$ is defined
as the Cuntz-Pimsner algebra ${\mathcal O}_X$ of the Hilbert bimodule
$X= C({\mathcal G})$ over $A = C(K)$.
\end{defi}
\section{KMS states}
First we recall some facts on KMS states on general
Cuntz-Pimsner algebras from Laca and Neshveyev \cite{LN}.
Let $A$ be a $C^*$-algebra and $X$ be a full Hilbert
$A$-$A$ bimodule with a non-degenerate left $A$-action.
Let $\alpha$ be the gauge action.
For $\beta>0$, we denote by $K_\beta(\alpha)$ the set of $\beta$-KMS states
for $({\mathcal O}_X,\alpha)$.
In the following we assume that there exists a countable basis
$\{u_i\}_{i=1}^\infty$ for $X$ as a right Hilbert $A$-module
and assume that
$N:=\sup_{n\in \mathbb{N}}||\sum_{i=1}^n(u_i|u_i)_A||$ is finite.
\begin{defi}[Perron-Frobenius type operators]
Let the notation be as above.
We introduce a Perron-Frobenius type operator $F:A^*\rightarrow A^*$,
which is a bounded positive map, by the following formula:
$$
F(\omega)(a)= \sum_{i=1}^\infty \omega((u_i|a \cdot u_i)_A),
\quad \omega\in A^*,\;a\in A.$$
For $\beta\geq 0$, we set $F_\beta = e^{-\beta} F$.
\end{defi}
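To make $F$ concrete, consider the self-similar setting of Section 2.3 in the special case where the cographs of the $\gamma_i$ are pairwise disjoint. Then the indicator-type functions $u_i$ with $u_i(\gamma_j(y),y) = \delta_{ij}$ form a finite basis, $(u_i|a\cdot u_i)_A(y) = a(\gamma_i(y))$, and $F(\omega)(a) = \sum_{i=1}^N \omega(a \circ \gamma_i)$ is a classical transfer operator. The following numerical sketch (the system and the test function are illustrative choices, not taken from the paper) checks that for the two inverse branches $\gamma_1(y) = y/2$ and $\gamma_2(y) = (y+1)/2$ of the doubling map on $K = [0,1]$, Lebesgue measure is a fixed point of $F_\beta$ exactly at $\beta = \log 2 = \log N$, and is contracted by the factor $N e^{-\beta}$ for other values of $\beta$.

```python
import numpy as np

# Inverse branches of the doubling map on K = [0,1]; their cographs are
# disjoint since gamma1(y) != gamma2(y) for every y (illustrative system).
branches = [lambda y: y / 2.0, lambda y: (y + 1.0) / 2.0]

def lebesgue(a, n=200000):
    """tau(a) = int_0^1 a(y) dy, midpoint rule."""
    y = (np.arange(n) + 0.5) / n
    return a(y).mean()

def F_beta(tau, a, beta):
    """F_beta(tau)(a) = e^{-beta} * sum_i tau(a o gamma_i)."""
    return np.exp(-beta) * sum(tau(lambda y, g=g: a(g(y))) for g in branches)

a = lambda y: np.cos(3.0 * y) + y ** 2     # an arbitrary test function
tau_a = lebesgue(a)

# Fixed point at beta = log N = log 2, strict contraction above it:
print(F_beta(lebesgue, a, np.log(2.0)), tau_a)
print(F_beta(lebesgue, a, 1.0) / tau_a)    # equals N e^{-beta} = 2/e here
```

The identity $\int_0^1 a(y/2)\,dy + \int_0^1 a((y+1)/2)\,dy = 2\int_0^1 a(y)\,dy$ is what makes Lebesgue measure an eigenvector of $F$ with eigenvalue $N = 2$ in this example.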
The operator $F$ was discussed in \cite{PWY} in the case where $X$ is finitely generated
and in \cite{LN} in the general case.
When $\tau$ is a finite positive trace on $A$, it is known that $F(\tau)$ is again
a finite positive trace, which does not depend on the choice of
$\{u_i\}_{i=1}^\infty$.
Therefore, for $a\in A^+$ and a finite positive trace $\tau$, we have
$$F(\tau)(a)=\sup\sum_{i=1}^n\tau((v_i|a\cdot v_i)_A),$$
where the supremum is taken over all finite families $v_1,\dots,v_n\in X$ satisfying
$\sum_{i=1}^n\theta_{v_i,v_i}\leq I$ (see \cite[Theorem 1.1]{LN}).
\begin{thm}[Laca-Neshveyev \cite{LN}]
Let $A$ and $X$ be as above and let $\alpha$ be the gauge action
on $\mathcal O_X$.
Then there is a bijective affine isomorphism
between $K_\beta(\alpha)$ and the set $T(A)_{\beta}$ of tracial states
$\tau$ on $A$ satisfying
the following conditions:
$$F_\beta(\tau)(a)=\tau(a),\quad \forall a\in I_X,\leqno (K1)$$
$$F_\beta(\tau)(a)\leq \tau(a),\quad \forall a\in A^+.\leqno (K2)$$
The correspondence is given by restriction.
\label{thm;K1K2}
\end{thm}
\begin{defi} Let $A$ and $X$ be as above.
Following Exel and Laca \cite{EL}, we say that
a finite positive trace $\tau _1$ on $A$ is of {\it finite type}
if there exists a finite positive trace $\tau _0$ on $A$ such that
$\tau _1 = \sum _{n=0}^{\infty} F_\beta^n(\tau_0)$ in the weak*
topology.
We denote by $T_f(A,F_\beta)$ the set of finite positive traces on
$A$ of finite type.
We say that a finite positive trace $\tau _2$ on $A$ is of
{\it infinite type} if $F_\beta (\tau_2) = \tau _2$.
We denote by $T_{i}(A,F_\beta)$
the set of finite positive traces on $A$ of infinite type.
A $\beta$-KMS state $\varphi$ for the gauge action $\alpha$ on $\mathcal O_X$
is said to be of {\it finite type} (resp. {\it infinite type})
if the restriction $\tau=\varphi|_A$ is of finite type
(resp. infinite type).
We denote by $K_\beta(\alpha)_f$ (resp. $K_\beta(\alpha)_i$)
the set of $\beta$-KMS states of finite (resp. infinite) type.
\end{defi}
Laca and Neshveyev \cite[Proposition 2.4]{LN} showed that
any tracial state $\tau \in T(A)_{\beta}$ is uniquely decomposed as
\begin{equation}
\tau= \tau_1 + \tau_2
\end{equation}
with $\tau_1 \in T_f(A,F_\beta)$ and
$\tau_2 \in T_{i}(A,F_\beta)$.
Moreover $\tau_1$ is given by
\begin{equation}\tau_1 = \sum _{n=0}^{\infty} F_\beta^n(\tau_0)
\end{equation}
with
\begin{equation}\tau_0 = \tau - F_\beta(\tau)\end{equation}
and $\tau_2$ is given by
\begin{equation}
\tau_2 = \lim _n F_\beta^n(\tau), \text{ in the weak* topology }.
\end{equation}
The above classification of the KMS states a priori depends on the construction
of ${\mathcal O}_X$ from $A$ and $X$, and we don't know whether there is an intrinsic
characterization of it in terms of the system $({\mathcal O}_X, \alpha)$.
Since the decomposition (3.1) is unique, we have
\begin{equation}
\mathrm{ex}(K_\beta(\alpha))=\mathrm{ex}(K_\beta(\alpha)_f)\cup
\mathrm{ex}(K_\beta(\alpha)_i),
\end{equation}
where for a convex set $C$, we denote by $\mathrm{ex}(C)$
the set of extreme points of $C$.
(3.3) shows that $\tau_0$ vanishes on $I_X$ thanks to the condition (K1).
Let $T(A/I_X)$ be the set of tracial states on $A$ that vanish
on $I_X$, which may be regarded as tracial states on $A/I_X$.
We denote by $T(A/I_X)_{\beta}$ the set of $\tau \in T(A/I_X)$
such that
$\sum_{n=0}^\infty F_\beta^n(\tau )$
converges in the weak$^*$-topology.
When $\beta>\log N$, we have $||F_\beta||<1$ and so
$T(A/I_X)=T(A/I_X)_{\beta}$.
For $\tau\in T(A/I_X)_{\beta}$, we set $\varphi_{\tau,\beta}$ to be
the $\beta$-KMS state corresponding to the tracial state
$\psi_{\tau,\beta}
:= m_{\tau,\beta}\sum_{n=0}^\infty F_\beta^n(\tau) \in T(A)_\beta,$
where $m_{\tau,\beta}$ is a normalizing constant. That is,
$m_{\tau,\beta} = (\sum_{n=0}^\infty F_\beta^n(\tau)(1))^{-1}$ and
$$
\varphi_{\tau,\beta}|_A
= m_{\tau,\beta}\sum_{n=0}^\infty F_\beta^n(\tau)
= \psi_{\tau,\beta}.
$$
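The series $\sum_{n=0}^\infty F_\beta^n(\tau)$ can be computed quite explicitly in examples. For a rational map, evaluating the inner product of Section 2.2 on a basis suggests that $F$ acts on point evaluations by $F(\delta_y)(a) = \sum_{x \in R^{-1}(y)} a(x)$, each distinct preimage counted once (the weight $e(x)$ in the inner product cancels against the normalization $\sum_i |u_i(x,y)|^2 = 1/e(x)$ forced by the basis property). The sketch below is an illustrative computation, not part of the paper: it takes $R(z) = z^2 - 2$, $\beta > \log 2$, and $\tau = \delta_0$, the evaluation at the branched point $0$; since the backward orbit of $0$ never meets a branched point, level $n$ consists of $2^n$ simple preimages and $m_{\tau,\beta} = 1 - 2e^{-\beta}$.

```python
import numpy as np

# R(z) = z^2 - 2; along the backward orbit of 0 all points stay in [-2, 2],
# so the preimages of w are the two real numbers +/- sqrt(w + 2).
def preimages(level):
    r = np.sqrt(level + 2.0)
    return np.concatenate([r, -r])

def psi(a, beta, depth=18):
    """Partial sum of m * sum_n e^{-n beta} sum_{x in R^{-n}(0)} a(x)."""
    m = 1.0 - 2.0 * np.exp(-beta)          # normalizing constant, beta > log 2
    level, total = np.array([0.0]), 0.0
    for n in range(depth + 1):
        total += np.exp(-n * beta) * a(level).sum()
        level = preimages(level)
    return m * total

beta = np.log(2.0) + 0.5                   # some beta above log(deg R)
print(psi(np.ones_like, beta))             # approximately 1: the state is normalized
print(psi(lambda x: x * x, beta))          # value of the state on a(z) = z^2
```

Truncating the Neumann series at depth $18$ leaves a geometric tail of order $(2e^{-\beta})^{19}$, which is negligible for this value of $\beta$.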
\begin{lemma} The map
$T(A/I_X)_{\beta}\ni \tau\mapsto \varphi_{\tau,\beta}\in K_\beta(\alpha)_f$
is a bijection, which is not necessarily affine, but sends
$\mathrm{ex}(T(A/I_X)_{\beta}) =
T(A/I_X)_{\beta}\cap \mathrm{ex}(T(A/I_X))$ onto $\mathrm{ex}(K_\beta(\alpha)_f)$.
\label{lemma;finite}
\end{lemma}
\begin{proof} If a positive functional $\omega$ is
dominated by a positive functional $\tau$ such that
$\sum_{n=0}^\infty F_\beta^n(\tau )$
converges, then $\sum_{n=0}^\infty F_\beta^n(\omega )$ also
converges. Hence $\mathrm{ex}(T(A/I_X)_{\beta}) =
T(A/I_X)_{\beta}\cap \mathrm{ex}(T(A/I_X))$.
The fact that the above map is bijective follows from the fact
that $\tau$ is uniquely given by
\[
\tau
=\bigl((\varphi_{\tau,\beta}|_A - F_{\beta}(\varphi_{\tau,\beta}|_A))(1)\bigr)^{-1}
(\varphi_{\tau,\beta}|_A - F_{\beta}(\varphi_{\tau,\beta}|_A)).
\]
Let $\tau\in T(A/I_X)_\beta$ and $\tau', \tau''\in T(A/I_X)$
satisfy $\tau=c_1\tau'+ c_2\tau''$
for some non-negative constants $c_1$ and $c_2$ with $c_1 + c_2 =1$.
Then $\tau'$ and $\tau''$ belong to $T(A/I_X)_\beta$, and we get
$$
\varphi_{\tau,\beta}
=\frac{m_{\tau,\beta}c_1}{m_{\tau',\beta}}\varphi_{\tau',\beta}
+ \frac{m_{\tau,\beta}c_2}{m_{\tau'',\beta}}\varphi_{\tau'',\beta}.
$$
Therefore if $\varphi_{\tau,\beta}$ is extreme, then so is $\tau$.
Conversely assume that
$\tau \in \mathrm{ex}(T(A/I_X)_{\beta})$ and that
there exist $\varphi',\varphi''\in K_\beta(\alpha)_f$ such that
$$\varphi_{\tau,\beta}=t_1\varphi'+t_2\varphi'', $$
for some non-negative constants $t_1,t_2$ with
$t_1 + t_2 =1$.
Then there exist
$\tau'$ and $\tau''\in T(A/I_X)_\beta$
such that $\varphi_{\tau',\beta}=\varphi'$ and
$\varphi_{\tau'',\beta}=\varphi''$. Put
$$c_1 = \frac{m_{\tau',\beta}t_1}{m_{\tau,\beta}}
\text{ and }
c_2 = \frac{m_{\tau'',\beta}t_2}{m_{\tau,\beta}}.
$$
Then
\begin{align*}
& m_{\tau,\beta}\sum_n F_{\beta}^n(\tau)
= \varphi_{\tau,\beta}|_A
= t_1\varphi'|_A+t_2\varphi''|_A \\
&= m_{\tau,\beta}c_1\sum_n F_{\beta}^n(\tau') +
m_{\tau,\beta}c_2\sum_n F_{\beta}^n(\tau'')
= m_{\tau,\beta}\sum_n F_{\beta}^n(c_1\tau' +c_2\tau''),
\end{align*}
which shows $\tau = c_1\tau' + c_2\tau''$.
Since $\tau$ is extreme, we get $\tau'=\tau''=\tau$ and so
$\varphi'=\varphi''=\varphi_{\tau,\beta}$, which finishes the proof.
\end{proof}
\begin{cor}
If $\beta>\log N$, then $\mathrm{ex}(K_\beta(\alpha))=
\{\varphi_{\tau,\beta}\}_{\tau\in \mathrm{ex}(T(A/I_X))}$.
\label{cor;parametrization}
\end{cor}
\begin{proof}
If $\beta>\log N$, then $||F_\beta||<1$ holds,
and so $K_\beta(\alpha)_i=\emptyset$.
\end{proof}
In the rest of this section, we investigate the type of the GNS
representation of a finite type $\beta$-KMS state $\varphi$
on ${\mathcal O}_X$ for the gauge action.
Let $(\pi_{\varphi},H_{\varphi},\Omega _{\varphi})$
be the GNS representation associated with $\varphi$.
Since $\{u_i\}_{i=1}^\infty$ is a countable basis for $X$,
the sequence $(a_n)_n := (\sum_{k=1}^n \theta _{u_k,u_k})_n$ is an
approximate unit for $K(X) \subset {\mathcal O}_X$.
Therefore $(\pi_{\varphi}(a_n))_n$ converges to a projection
$p \in \pi_{\varphi}({\mathcal O}_X)''$.
For $x\in X$, we have
$$p\pi_\varphi(S_x)=\sum_{i=1}^\infty
\pi_\varphi(S_{u_i(u_i|x)_A})=\pi_\varphi(S_x),
\text{ in the strong topology } .$$
Thus for any $a\in A$
$$(1-p)\pi_{\varphi}(a)p=
\sum_{i=1}^\infty (1-p)\pi_{\varphi}(S_{au_i}S_{u_i}^*) =0,
\text{ in the strong topology } .
$$
In particular, $p$ commutes with $\pi_{\varphi}(A)$.
We denote by $\hat{\varphi}$ the weakly continuous extension of
the state $\varphi$ to $\pi_{\varphi}({\mathcal O}_X)''$ given by
$\hat{\varphi} (T) = (T\Omega_{\varphi} | \Omega_{\varphi})$
for $T \in \pi_{\varphi}({\mathcal O}_X)''$.
\begin{lemma} Let $\varphi$ be a $\beta$-KMS state for $\alpha$ and
let the notation be as above.
Then the following are equivalent:
\begin{itemize}
\item [$(1)$] $\varphi$ is of infinite type.
\item [$(2)$] $\hat{\varphi} (p) = 1$.
\end{itemize}
\label{lemma;p=1}
\end{lemma}
\begin{proof}Since $\varphi$ is a $\beta$-KMS state,
for any $a \in A$,
\begin{align*}
(F_{\beta}(\varphi |_{A}))(a)
& = \sum_{n=1}^\infty \frac{1}{e^{\beta}}\varphi |_{A}((u_n |au_n)_A)
= \sum_{n=1}^\infty \frac{1}{e^{\beta}}\varphi (S_{u_n}^*aS_{u_n}) \\
& = \sum_{n=1}^\infty \varphi (S_{u_n}S_{u_n}^*a)
= \sum_{n=1}^\infty
(\pi_{\varphi}(S_{u_n}S_{u_n}^*a)\Omega_{\varphi} | \Omega_{\varphi}) \\
& = (p\pi_{\varphi}(a)\Omega_{\varphi} | \Omega_{\varphi})
= \hat{\varphi} (p\pi_{\varphi}(a)).
\end{align*}
Putting $a = I$, we get $(F_{\beta}(\varphi |_{A}))(1) = \hat{\varphi} (p)$.
Since $\varphi$ is a $\beta$-KMS state, the condition (K2) in Theorem \ref{thm;K1K2}
implies $F_{\beta}(\varphi |_{A}) \leq \varphi |_{A}$.
Therefore $1 - \hat{\varphi} (p) = 0$ if and only if
$\varphi |_{A}(1) - (F_{\beta}(\varphi |_{A}))(1) = 0$
if and only if $\varphi |_{A} = F_{\beta}(\varphi |_{A})$,
i.e., $\varphi$ is of infinite type.
\end{proof}
\begin{thm} In the above setting,
let $\tau\in \mathrm{ex}(T(A/I_X)_{\beta})$ and
$\varphi := \varphi_{\tau,\beta}$ be the corresponding extreme
$\beta$-KMS state of finite type.
Let $\tau'$ be a state on
$A/I_X$ defined by $\tau'(a + I_X) = \tau (a)$.
Then $\pi_{\varphi}({\mathcal O}_X)''$ is of type I (resp. type II) if and only
if $\pi_{\tau'}(A/I_X)''$ is of type I (resp. type II).
In particular, if $A$ is abelian, then $\varphi_{\tau,\beta}$
is a type I state.
\end{thm}
\begin{proof} Since the $\beta$-KMS state $\varphi$ is of
finite type, $\hat{\varphi} (p) \not= 1$ by Lemma \ref{lemma;p=1}.
Hence $p \not= I$.
Let $H_0 = (I-p)H_{\varphi}$.
Since $p$ commutes with $\pi_{\varphi}(A)$,
we can define a representation
$\rho : A \rightarrow B(H_0)$ by
$\rho (a) = (I-p)\pi_{\varphi}(a)|_{H_0}$.
For any $x \in X^{\otimes n}$
and $y \in X^{\otimes m}$, we have $(I-p)\pi_{\varphi}(S_xS_y^*) = 0$.
Therefore
\[
(I-p)\pi_{\varphi}({\mathcal O}_X)''(I-p)
=(I-p)\pi_{\varphi}(A)''(I-p)=(I-p)\pi_{\varphi}(A)''
\cong \rho (A)''
\]
Put $\Omega _0 := \frac{1}{\sqrt{1-\hat{\varphi} (p)}}(I-p)\Omega_{\varphi}$.
Then $\|\Omega _0 \| = 1$. For $a \in A$,
\begin{align*}
& (\rho (a)\Omega _0|\Omega _0)
= \frac{1}{1-\hat{\varphi} (p)}
((I-p)\pi_{\varphi}(a)\Omega_{\varphi}|\Omega_{\varphi})
= \frac{1}{1-\hat{\varphi} (p)}\hat{\varphi}((I-p)\pi_{\varphi}(a)) \\
& = \frac{1}{1-\hat{\varphi} (p)}
(\varphi |_{A}(a) - (F_{\beta}(\varphi |_{A}))(a))
= \tau (a).
\end{align*}
Since $\rho (A)\Omega_0 = (I-p)\pi_{\varphi}({\mathcal O}_X)\Omega_{\varphi}$,
the vector $\Omega_0$ is cyclic for $\rho$.
Therefore $\rho$ is unitarily equivalent to the GNS representation
$\pi _{\tau}$ of $A$ for $\tau$.
Hence $(I-p)\pi_{\varphi}({\mathcal O}_X)''(I-p)$ is isomorphic to
$\pi _{\tau}(A)''$.
Since $\pi_{\varphi}({\mathcal O}_X)''$ is a factor and $\pi _{\tau}(A)''$ is
isomorphic to $\pi _{\tau '}(A/I_X)''$, we get the statement.
\end{proof}
\section{Branched points and Perron-Frobenius type operators}
In this section we give a detailed description of the Perron-Frobenius type
operators $F$ introduced in the previous section for concrete examples.
We treat the Hilbert bimodules arising from rational functions and
self-similar sets in a unified setting, that is, a topological
relation in the sense of Brenken \cite{Br} and a topological quiver with
a weighted counting measure in the sense of Muhly-Solel \cite{MS} and
Muhly-Tomforde \cite{MT}.
Firstly we unify the notation of bimodules.
Let $K$ be a compact metric space and ${\mathcal G}$ be a closed subset of
$K\times K$.
For $(x,y)\in {\mathcal G}$, we set $p_1(x,y)=x$ and $p_2(x,y)=y$, and
${\mathcal G}_y=p_1p_2^{-1}(y)$.
We assume $p_1({\mathcal G})=p_2({\mathcal G})=K$ and
$N:=\sup_{y\in K} \#p_2^{-1}(y)<\infty.$
Let $A=C(K)$, $X=C({\mathcal G})$.
We introduce an $A$-$A$ bimodule structure into $X$ by
$(a\cdot \xi\cdot b)(x,y)=a(x)\xi(x,y)b(y).$
We assume that there exists a positive function $e$ on ${\mathcal G}$ such
that $e(x,y)\geq 1$ for all $(x,y)\in {\mathcal G}$ and
$$K\ni y\mapsto (\xi|\eta)_A(y):=\sum_{x\in {\mathcal G}_y}e(x,y)
\overline{\xi(x,y)}\eta(x,y)$$
is continuous for all $\xi,\eta\in X$.
\begin{lemma}\label{lemma;openneibourhood}
For every $(x_0,y_0)\in {\mathcal G}$, every open neighbourhood
$W$ of $(x_0,y_0)$, and every $\varepsilon>0$, there exist open
neighbourhoods $U$ of
$x_0$ and $V$ of $y_0$ such that $U\times V\subset W$,
$p_2((U\times V)\cap{\mathcal G})=V$, and
$$|e(x_0,y_0)-\sum_{x\in U\cap {\mathcal G}_y}e(x,y)|<\varepsilon,\quad
\forall y\in V.$$
In particular, $e$ is upper semicontinuous.
\end{lemma}
\begin{proof} Let ${\mathcal G}_{y_0}=\{x_0,x_1,\cdots,x_n\}$.
We choose open neighbourhoods $U_a$ of $x_a$ for $0\leq a\leq n$ and
open neighbourhood $V_0$ of $y_0$ such that
$U_0\times V_0\subset W$ and
$\overline{U_a}\cap \overline{U_b}=\emptyset$ for $a\neq b$.
Then there exists an open neighbourhood $V_1$ of $y_0$
with $ V_1\subset V_0$ such that
for all $y\in V_1$,
${\mathcal G}_y\subset \bigcup_{a=0}^nU_a.$
Indeed, if there were no such $V_1$, there would exist a sequence
$\{(x_k',y_k)\}_{k=1}^\infty$ in ${\mathcal G}$ such that $\{y_k\}_{k=1}^\infty$
converges to $y_0$ and $x_k'\notin U_a$ for all $a$.
An accumulation point of this sequence is of the form $(x',y_0)\in {\mathcal G}$
with $x'\notin U_a$ for all $a$, which is a contradiction.
Therefore $V_1$ as above exists.
Let $\xi\in X$ be such that $\xi(x,y)=1$ for $(x,y)\in (U_0\times V_1)\cap{\mathcal G}$ and
$\xi(x,y)=0$ for all $x\in \bigcup_{a=1}^nU_a$.
Then for $y\in V_1$, we have
$$(\xi|\xi)_A(y)=\sum_{x\in U_0\cap {\mathcal G}_y}e(x,y),$$
which is continuous and $(\xi|\xi)_A(y_0)=e(x_0,y_0)\geq 1$.
Note that by assumption ${\mathcal G}_y\neq \emptyset$ for each $y\in V_1$,
and so $U_0\cap {\mathcal G}_y\neq \emptyset$ for $y$ sufficiently close to $y_0$.
Replacing $U_0$ and $V_1$ with smaller neighborhoods
$U$ of $x_0$ and $V$ of $y_0$
respectively, we get the first half of the statement.
Since $e$ is a positive function, for all $(x,y)\in (U\times V)\cap {\mathcal G}$
we have
$e(x_0,y_0)+\varepsilon\geq e(x,y),$
which means that $e$ is upper semicontinuous.
\end{proof}
Since $e$ is an upper semicontinuous function on a compact set ${\mathcal G}$,
$e$ is bounded above.
Therefore $(\xi|\eta)_A$ gives an $A$-valued inner product of $X$ with
the corresponding norm equivalent to the uniform norm of $C({\mathcal G})$, and so
$X$ is a Hilbert bimodule over $A$.
Following \cite{KW3}, for $f\in C(K)$ we set
$\tilde{f}(y)=\sum_{x\in {\mathcal G}_y}f(x),$ which is not necessarily continuous.
Let $u_0,u_1,\cdots, u_n\in X$.
Then for $\xi\in X$ we have
\begin{eqnarray*}
\sum_{a=0}^n(\xi|\theta_{u_a,u_a}\xi)_A(y)
&=&\sum_{a=0}^n|(u_a|\xi)_A(y)|^2\\
&=&\sum_{x,x'\in {\mathcal G}_y}\sum_{a=0}^n
e(x',y)e(x,y)u_a(x',y)\overline{u_a(x,y)}\xi(x,y)
\overline{\xi(x',y)}.
\end{eqnarray*}
Let
$$A(y)_{x',x}=\sum_{a=0}^n \sqrt{e(x',y)e(x,y)}u_a(x',y)
\overline{u_a(x,y)}.$$
Then
\begin{equation}\sum_{a=0}^n\theta_{u_a,u_a}\leq I
\end{equation}
is equivalent to the condition that the matrix $(A(y)_{x',x})_{x',x}$
is dominated by $1$ for every $y$.
In particular, (4.1) implies
$$A(y)_{x,x}=\sum_{a=0}^n e(x,y)|u_a(x,y)|^2\leq 1,
\quad \forall x\in {\mathcal G}_y,$$
and so
\begin{equation}
\sum_{a=0}^n(u_a|f\cdot u_a)(y)
=\sum_{a=0}^n \sum_{x\in {\mathcal G}_y}f(x)e(x,y)|u_a(x,y)|^2
\leq \tilde{f}(y)
\end{equation}
for all positive $f\in C(K)$.
In what follows, we often identify an element in $C(K)^*_+$ with
the corresponding measure on $K$.
In our setting, the Perron-Frobenius type operator $F$ is a positive map
$F : C(K)^* \rightarrow C(K)^* $ given by
\begin{equation}
F(\mu)(f)=\sup_{u_a}\sum_{a=0}^n\mu((u_a|f\cdot u_a)),\quad
\mu\in C(K)^*_+,\; f\in C(K)_+,
\end{equation}
where the supremum is taken over all $u_0,u_1, \cdots,u_n\in X$
satisfying (4.1).
We have $||F||=N$.
Since $K$ is separable, $X$ is countably generated and has a countable
basis $\{v_i\}_{i=0}^\infty$.
We introduce an increasing sequence $\{\varphi_n\}_{n=0}^\infty$ of
positive maps of $C(K)$ by
$$\varphi_n(f)=\sum_{i=0}^n
(v_i|f\cdot v_i)_A,\quad f\in C(K).$$
Then $F$ is given by the limit
\begin{equation}F(\mu)(f)=\lim_{n\to\infty}\mu(\varphi_n(f)).
\end{equation}
\begin{thm} Let $F$ be as above. Then
$$F(\mu)(f)=\int_K\tilde{f}(y)d\mu(y),\quad f\in C(K)_+,\; \mu\in C(K)^*_+.$$
\label{thm;explicitPF}
\end{thm}
\begin{proof}
First, we show the statement for any Dirac measure
$\mu=\delta_{y}$ on $y\in K$.
Note that the above argument implies
$$F(\mu)(f)\leq \int_K\tilde{f}(y)d\mu(y),\quad f\in C(K)_+,\; \mu\in C(K)^*_+.$$
We fix $y_0\in K$ and $\varepsilon >0$.
Let ${\mathcal G}_{y_0}=\{x_0,x_1,\cdots, x_n\}$.
Applying Lemma \ref{lemma;openneibourhood}, we choose open neighbourhoods
$U_a$ of $x_a$ and $V$ of $y_0 $ such that
$U_a\cap U_b=\emptyset$ for $a\neq b$ and
$$\bigcup_{a=0}^nU_a\supset {\mathcal G}_y,\quad \forall y\in V,$$
$$|e(x_a,y_0)-\sum_{x\in U_a\cap {\mathcal G}_y}e(x,y)|<\varepsilon ,
\quad \forall y\in V.$$
Let $u_a\in X$ be a continuous non-negative function such that
$\mathrm{supp}(u_a)\subset U_a\times V$ and
$$u_a(x_a,y_0)=\frac{1}{\sqrt{e(x_a,y_0)+\varepsilon}},$$
$$0\leq u_a(x,y)\leq \frac{1}{\sqrt{e(x_a,y_0)+\varepsilon}},\quad \forall
(x,y)\in (U_a\times V)\cap {\mathcal G}.$$
For $y\in V$, we define the matrix $A(y)=(A(y)_{x',x})_{x',x\in{\mathcal G}_y}$
as before.
We set $I_a(y)=U_a\cap {\mathcal G}_y$ for $y\in V$ and
$A_a(y)=(A(y)_{x',x})_{x',x\in I_a(y)}$.
Then $A(y)$ is the direct sum of the $A_a(y)$'s.
Let $c\in \ell^2(I_a(y))$.
Then
\begin{eqnarray*}
\inpr{A_a(y)c}{c}&=&
\sum_{x',x\in I_a(y)}\sum_{b=0}^n
\sqrt{e(x',y)e(x,y)}u_b(x',y)\overline{c(x')}\overline{u_b(x,y)}c(x)\\
&=&|\sum_{x\in I_a(y)}\sqrt{e(x,y)} \;
\overline{u_a(x,y)}c(x)|^2\\
&\leq &||c||^2\sum_{x\in I_a(y)}e(x,y)|u_a(x,y)|^2\\
&\leq &||c||^2\sum_{x\in I_a(y)}\frac{e(x,y)}{e(x_a,y_0)+\varepsilon}
\leq ||c||^2,
\end{eqnarray*}
where we use Lemma \ref{lemma;openneibourhood}.
Thus $A_a(y)\leq 1$ and (4.1) is satisfied.
Since
\begin{eqnarray*}
\sum_{a=0}^n(u_a|f\cdot u_a)_A(y_0)
&=&\sum_{x\in {\mathcal G}_{y_0}}\sum_{a=0}^n f(x)e(x,y_0)|u_a(x,y_0)|^2 \\
&=&\sum_{x\in {\mathcal G}_{y_0}}f(x)\frac{e(x,y_0)}{e(x,y_0)+\varepsilon}
\geq \tilde{f}(y_0)\frac{1}{1+\varepsilon},
\end{eqnarray*}
and $\varepsilon>0$ is arbitrary, we get $F(\delta_{y_0})(f)=\tilde{f}(y_0)$.
The above computation implies that for every $f\in C(K)_+$ and every $y\in K$,
$\{\varphi_n(f)(y)\}_{n=0}^\infty$ increasingly converges to $\tilde{f}(y)$.
Thus, thanks to the bounded (or monotone) convergence theorem, we get
$$F(\mu)(f)=\lim_{n\to\infty}\int_K\varphi_n(f)(y)d\mu(y)
=\int_K\tilde{f}(y)d\mu(y),\quad \forall \mu\in C(K)^*_+,$$
which proves the statement for general $\mu$.
\end{proof}
We denote by $D(e)$ the set of discontinuous points of $e$.
For example, if $R$ is a rational function, ${\mathcal G} = \mathop{\rm {graph}}\nolimits R$ and
$e(z,R(z)) = e(z)$ is the branch index,
then $D(e)$ is the set $\{(z,R(z)) \in \mathop{\rm {graph}}\nolimits R |\; z \in {\mathcal B}(R) \}$
and $p_1(D(e)) = {\mathcal B}(R)$, where ${\mathcal B}(R)$ is the set of
branched points of $R$.
If $\gamma = (\gamma_1,\dots , \gamma_N)$ is a system of
proper contractions on $K$,
${\mathcal G} = \cup _{i=1}^N \{(x,y) \in K^2 ; x = \gamma _i(y)\}$,
and $e(x,y) =\# \{i | x = \gamma _i(y)\}$,
then $D(e) = \{(x,y) \in K^2 | x = \gamma _j(y) = \gamma _{j'}(y)
\text{ for some } j \ne j'\}$
and $p_1(D(e)) = {\mathcal B}(\gamma)$.
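For instance, for the rational function $R(z)=z^2$ on $\hat{\mathbb{C}}$,
the only critical points are $0$ and $\infty$, each with branch index $2$,
so in this case
$$D(e)=\{(0,0),(\infty,\infty)\},\qquad
p_1(D(e))={\mathcal B}(R)=\{0,\infty\},$$
and $e$ is identically $1$ off $D(e)$.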
\begin{lemma} Let $D(e)$ be the set of discontinuous points of $e$.
\begin{itemize}
\item [$(1)$] For every $(x_0,y_0)\in {\mathcal G}\setminus D(e)$,
there exist closed neighborhoods $F$ of $x_0$ and $G$ of $y_0$, and a
continuous map $\gamma:G\rightarrow F$ such that
$$(F\times G)\cap {\mathcal G}=\{(\gamma(y),y)\in K^2;\; y\in G\}.$$
\item [(2)] $D(e)$ is a closed set.
\end{itemize}
\label{lemma;discontinuous}
\end{lemma}
\begin{proof} (1)
Let $0<\varepsilon<1/2$ and take a neighborhood $W$ of
$(x_0,y_0)$ such that
$$|e(x_0,y_0)-e(x,y)|<\varepsilon,\quad \forall (x,y)\in W\cap {\mathcal G}.$$
By Lemma \ref{lemma;openneibourhood},
there exist closed neighborhoods $F$ of $x_0$ and $G$ of
$y_0$ such that $F\times G\subset W$,
$p_2((F\times G)\cap {\mathcal G})=G$, and
$$|e(x_0,y_0)-\sum_{x\in {\mathcal G}_y\cap F}e(x,y)|<\varepsilon,
\quad \forall y\in G.$$
Since $e(x,y)\geq 1$ for all $(x,y)\in {\mathcal G}$, the set
${\mathcal G}_y\cap F$ is a singleton for all $y\in G$.
Thus there exists a function $\gamma:G\rightarrow F$ such that
$$(F\times G)\cap {\mathcal G}=\{(\gamma(y),y);\; y\in G\}.$$
Since $(F\times G)\cap {\mathcal G}$ is closed, $\gamma$ is continuous.
(2) We choose $\xi\in X$ such that $\mathrm{supp}(\xi)\subset F\times G$,
$0\leq \xi\leq 1$, and $\xi(x,y)=1$ in a neighborhood $W_1$ of
$(x_0,y_0)$.
Then for $(x,y)\in W_1\cap {\mathcal G}$, we have
$$(\xi|\xi)_A(y)=\sum_{x\in F\cap {\mathcal G}_y}e(x,y)|\xi(x,y)|^2=e(\gamma(y),y),$$
which shows that $e$ is continuous on $W_1\cap {\mathcal G}$.
Thus $D(e)$ is a closed set.
\end{proof}
The following proposition shows that the set of singularities
corresponds to the ideal $I_X$ of $A$.
We can prove it using the preceding lemmas as in
\cite{KW1} and \cite{KW2}.
It is also shown in a general situation by Muhly and Tomforde
\cite[Corollary 3.12]{MT}.
\begin{prop} In the above situation
$$
I_X = \{f \in C(K) ; f|_{p_1(D(e))} = 0 \}
\cong C_0(K\setminus p_1(D(e))).
$$
\label{proposition;I_X}
\end{prop}
\begin{cor} Let the notation be as above. Then
\begin{itemize}
\item [$(1)$] If $\beta>\log N$, then there is a one-to-one
correspondence between $\mathrm{ex}(K_\beta(\alpha))_f
=\mathrm{ex}(K_\beta(\alpha))$ and $p_1(D(e))$.
\item [$(2)$] If $\beta=\log N$, then there is a one-to-one
correspondence between $K_\beta(\alpha)_i$ and the set of
Borel probability measures $\mu$ on $K$ satisfying
$$\int_K f(x)d\mu(x)=\frac{1}{N}\int_K\tilde{f}(x)d\mu(x),\quad \forall
f\in C(K).$$
In particular, such a measure $\mu$ satisfies
$\mu(\{y\in K;\;\#{\mathcal G}_y<N \})=0.$
\end{itemize}
\label{cor;characterization}
\end{cor}
\begin{proof} (1) follows from Proposition~\ref{proposition;I_X}
and Lemma~\ref{lemma;finite}.
The first statement of (2) follows from Theorem~\ref{thm;K1K2}.
The case with $f=1$ implies the second statement.
\end{proof}
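To spell out the last step, taking $f=1$ in the equality of (2) gives
$$\int_K\big(N-\#{\mathcal G}_y\big)\,d\mu(y)=0,$$
and since $\#{\mathcal G}_y\leq N$ the integrand is non-negative, so
$\#{\mathcal G}_y=N$ for $\mu$-almost every $y$.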
\section{Classification of KMS states in
the case of rational functions}
Throughout this section we assume that $R$ is a rational function of degree at least two.
We shall completely classify the KMS states for the gauge actions on the $C^*$-algebras
${\mathcal O}_R(\hat{\mathbb C})$ and ${\mathcal O}_R(J_R)$.
We can recover the degree of $R$, the number of branched points,
the number of exceptional points and the orbits of exceptional points
from the information of the structure of the KMS states.
Recall that for $f \in C(\hat{\mathbb C})$ and $ y \in \hat{\mathbb C}$,
we have
$$\tilde{f}(y)=\sum_{x\in R^{-1}(y)}f(x),$$
which is not necessarily continuous.
For a bounded regular Borel measure $\mu$ on $\hat{\mathbb C}$,
we denote by $\tau _{\mu} \in C(\hat{\mathbb C})^*$ the
corresponding positive linear functional (though
we often identify $\mu$ with $\tau _{\mu}$).
Then the Perron-Frobenius type
operator $F : C(\hat{\mathbb C})^* \rightarrow
C(\hat{\mathbb C})^* $ with norm $N = \deg R$ is given by
\[
F(\tau _{\mu})(f) = \int_{\hat{\mathbb C}} \tilde{f}(z) d\mu(z).
\]
In particular, for a Dirac measure $\delta _y$ on $y$ we get
\[
F(\delta _y) = \sum _{x \in R^{-1}(y)}\delta _x .
\]
In our particular situation, the conditions (K1) and (K2) in
Theorem \ref{thm;K1K2} take the following forms:
$$
\frac{1}{e^{\beta}}
\int_{\hat{\mathbb C}} \tilde{a}(z) d\mu(z)
=\int_{\hat{\mathbb C}} a(z) d\mu(z),\quad \forall a \in C_0(\hat{\mathbb{C}}\setminus {\mathcal B}(R)), \leqno (K1)
$$
$$\frac{1}{e^{\beta}}
\int_{\hat{\mathbb C}} \tilde{a}(z) d\mu(z)
\leq \int_{\hat{\mathbb C}} a(z) d\mu(z),\quad \forall a \in A_+,\leqno (K2)
$$
where we identify $C_0(\hat{\mathbb{C}}\setminus {\mathcal B}(R))$ with
$I_X=\{f \in A | f(z) = 0 \mbox{ for } z \in {\mathcal B}(R) \}$.
\begin{lemma}
If a measure $\mu$ satisfies the
conditions (K1) and (K2), then the point mass
$\mu(\{w\})$ for $w \in \hat{\mathbb C}$
must satisfy the following:
$$e^{-\beta}\mu(\{R(w)\}) = \mu(\{w\}),\quad
w \not\in {\mathcal B}(R), \leqno (K1)'$$
$$e^{-\beta}\mu(\{R(w)\}) \leq \mu(\{w\}),\quad w \in \hat{\mathbb C}.
\leqno (K2)'$$
\label{lemma;pointmassinequality}
\end{lemma}
\begin{proof} Let $w \not\in {\mathcal B}(R)$.
Then we can approximate the characteristic function $\chi_{\{w\}}$
by a monotone decreasing sequence $(a_n)_n$ of non-negative
functions in $I_X$. Thus
\begin{align*}
\mu(\{w\}) &
= \lim _n \int_{\hat{\mathbb C}} a_n(z) d\mu(z)
= \lim _n e^{-\beta} \int_{\hat{\mathbb C}}
\sum_{x\in R^{-1}(z)}a_n(x) d\mu(z) \\
& = e^{-\beta} \int_{\hat{\mathbb C}}
\sum_{x\in R^{-1}(z)} \chi_{\{w\}}(x) d\mu(z)
= e^{-\beta}\mu(\{R(w)\})
\end{align*}
by the Lebesgue convergence theorem.
The inequality (K2)$'$ is obtained similarly.
\end{proof}
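For later use, note that iterating (K1)$'$ along an orbit disjoint from
${\mathcal B}(R)$ gives, for every $n\geq 1$,
$$\mu(\{w\}) = e^{-n\beta}\mu(\{R^n(w)\}),\qquad
\text{provided } w, R(w),\dots,R^{n-1}(w)\notin {\mathcal B}(R),$$
which will be used in the proof of Proposition \ref{prop;pointmass}.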
Let us recall some facts on exceptional points for $R$.
For $z$ and $w$ in $\hat{\mathbb C}$, we define $z \sim _R w$ if
there exist non-negative integers $n$ and $m$ with
$R^n(z) = R^m(w)$. Then $\sim _R$ is an equivalence relation on
$\hat{\mathbb C}$. We denote the equivalence class containing
$z$ by $[z]_R$. We also define the {\it backward orbit}
$O^-(z)$ of $z$
by
\[
O^-(z) := \{ w \in \hat{\mathbb C} \;|\; R^n(w) = z \text{ for some
non-negative integer } n
\}.
\]
Then $O^-(z) \subset [z]_R$.
\begin{defi} A point $z$ in $\hat{\mathbb C}$ is
an {\it exceptional point} for $R$ if the backward orbit
$O^-(z)$ of $z$ is finite.
We denote by $E_R$ the set of exceptional points.
\end{defi}
It is known that $E_R$ is a subset of $F_R \cap {\mathcal B}(R)$ and that $z$ is
an exceptional point if and only if $[z]_R$ is finite.
A rational function $R$ of degree at least two has at most
two exceptional points by the Riemann-Hurwitz formula.
The reader is referred to \cite[section 4.1]{B} for basic properties of
exceptional points, including the following:
\begin{lemma} Let $R$ be a rational function of degree at least two.
Then exactly one of the following holds:
\begin{itemize}
\item[$(1)$] $E_R = \emptyset$.
\item[$(2)$] $E_R$ consists of one point $z_0$ and $\{z_0\} = [z_0]_R$.
After a suitable conjugation by a M\"{o}bius transformation, we may assume
that $z_0 = \infty$ and $R$ is a polynomial.
\item[$(3)$] $E_R$ consists of two points $z_0 \not= z_1$ and
$[z_0]_R = \{z_0\}$ and $[z_1]_R = \{z_1\}$.
After a suitable conjugation by a M\"{o}bius transformation, we may assume
$z_0 = 0$, $z_1 = \infty$, and $R(z) =z^d$ for some
positive integer $d$.
\item[$(4)$] $E_R$ consists of two points $z_0 \not= z_1$ and
$[z_0]_R = [z_1]_R = \{z_0,z_1\}$.
After a suitable conjugation by a M\"{o}bius transformation, we may assume
$z_0 = 0$, $z_1 = \infty$, and $R(z) =z^d$ for some negative integer $d$.
\end{itemize}
\label{lemma;exceptional}
\end{lemma}
The following proposition is crucial for the complete classification
of the KMS states.
\begin{prop}
Assume that a measure $\mu$ satisfies the conditions (K1) and (K2).
If $\mu$ has a point mass at $z$ for some $z \not\in E_R$,
then $\beta > \log N$.
\label{prop;pointmass}
\end{prop}
\begin{proof} Thanks to Lemma~\ref{lemma;pointmassinequality}, we have
$\mu(\{w\})>0$ for any $w\in O^-(z)$.
To prove the statement, it suffices to show that there exists
$w\in O^-(z)$ satisfying the following two conditions:
(1) $O^-(w)\cap {\mathcal C}(R)=\emptyset$, (2) $R^{-m}(w)\cap R^{-n}(w)=\emptyset$
for all distinct non-negative integers $m,n$.
Indeed, for such $w$ the set $R^{-n}(w)$ consists of $N^n$ points, and
Lemma~\ref{lemma;pointmassinequality} implies
\[
\sum _{n = 0} ^{\infty}
\Big(\frac{N}{e^{\beta}}\Big)^n {\mu}(\{w\}) \leq \mu(O^-(w))
\leq 1.
\]
Since the left-hand side is finite and $\mu(\{w\})>0$, we must have
$N/e^{\beta}<1$, that is, $\beta>\log N$.
Assume that (1) does not hold for any $w\in O^-(z)$.
Then, since ${\mathcal C}(R)$ is a finite set, there exist two positive integers
$m<n$ such that $R^{-m}(z)\cap R^{-n}(z)$ is not empty, and so
$z=R^{n-m}(z)$.
Let $p$ be the minimal positive integer such that $R^p(z)=z$ and
set $L=\{R^k(z)\}_{k=0}^{p-1}$.
For any $w\in O^-(z)$, the same argument as above shows that there
exists a positive integer $q$ such that $w=R^q(w)$.
Thus for a sufficiently large integer $k$, we get
$w=R^{qk}(w)\in L$.
Since $O^{-}(z)$ is an infinite set, this is a contradiction.
Therefore (1) holds for some $z_1\in O^-(z)$.
If (2) did not hold for any $w\in O^-(z_1)$, we would get
a contradiction in the same way, and so there exists $w\in O^-(z_1)$
satisfying (1) and (2).
\end{proof}
We now recall basic properties of the Lyubich measure.
\begin{defi}[Lyubich measure \cite{FLM}, \cite{L}]
Let $N = \deg R$ and $\delta _x$ be the Dirac measure on $x$
for $x \in \hat{\mathbb C}$.
For any $y \in \hat{\mathbb C} \setminus E_R$
and each $n \in {\mathbb N}$, we define
a probability measure $\mu _n^y$ on the Riemann sphere
$\hat{\mathbb C}$ by
$$
\mu _n^y = \sum _{x\in R^{-n}(y)}
N^{-n}e(x)e(R(x))\dots e(R^{n-1}(x)) \delta _x \ .
$$
The sequence $(\mu _n^y)_n$ converges
weakly to a measure $\mu ^L$, which is called
the {\it Lyubich measure}.
The measure $\mu ^L$ is independent
of the choice of $y \in \hat{\mathbb C} \setminus E_R$.
\end{defi}
The measure $\mu ^L$ has been studied by several authors, e.g.
Freire-Lopes-M\"{a}n\'{e} \cite{FLM} and Lyubich \cite{L}.
The support of $\mu^L$ is the Julia set
$J_R$ and $\mu^L$ is an invariant measure in the sense that
$\mu ^L(E) = \mu ^L(R^{-1}(E))$ for any Borel set $E$.
Hence $\int f(R(x)) d\mu ^L(x) = \int f(x) d\mu ^L(x)$. Moreover
$\mu ^L(R(E)) = N\mu^L(E)$ for any Borel set $E$ on which
$R$ is injective. The measure $\mu^L$ is the unique measure of
maximal entropy.
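For example, for $R(z)=z^N$ with $N\geq 2$ the Julia set is the unit
circle and $\mu^L$ is the normalized arc-length measure: for
$y\notin\{0,\infty\}=E_R$, the set $R^{-n}(y)$ consists of $N^n$ points
whose arguments are equidistributed modulo $2\pi$, so
$$\mu_n^y=\frac{1}{N^n}\sum_{x\in R^{-n}(y)}\delta_x
\;\longrightarrow\;\frac{d\theta}{2\pi}\quad\text{weakly}.$$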
We introduce an operator
$G: C(\hat{\mathbb C}) \rightarrow C(\hat{\mathbb C})$ by setting
\[
G(f)(w) = \sum_{z \in R^{-1}(w)} e(z)f(z)
\]
and the contraction $\overline{G} = N^{-1}G$.
The conjugate operator $G^*$ is
slightly different from $F$. But,
if $\mu$ has no point mass on ${\mathcal C}(R)$, then
$G^*$ satisfies $G^* \tau_{\mu} = F\tau_{\mu}$.
\begin{prop}[Lyubich \cite{L}]
For any $a \in C(\hat{\mathbb C})$ and compact subset
$K \subset \hat{\mathbb C}$ with $K \cap E_R = \emptyset$,
we have
\[
\lim_{m \to \infty} \|\overline{G}^m a - \int_{\hat{\mathbb C}} a(z) d\mu^L(z)
\|_K = 0,
\]
where $\Vert \ \Vert_K$ is the uniform norm on $K$.
\label{prop:Lyu}
\end{prop}
We use the symbol $\tau ^L$ for $\tau_{\mu^L}$ for simplicity.
\begin{prop}
Let $\mu$ be a bounded regular Borel measure on $\hat{\mathbb C}$.
Suppose that $\mu$ satisfies one of the following conditions:
\begin{itemize}
\item[(1)] $\mu$ satisfies the conditions (K1) and (K2) and
$\mu$ has no point mass on ${\mathcal B}(R)$.
\item [(2)] $\mu$ satisfies the condition (K1) and
$\mu$ has no point mass on ${\mathcal B}(R) \cup {\mathcal C}(R)$.
\end{itemize}
Then we have the following: If $\beta \not = \log N$, then $\mu = 0$.
If $\beta = \log N$, then $\mu$ coincides with
the Lyubich measure up to constant.
\label{prop;Lyubich}
\end{prop}
\begin{proof} Suppose that $\mu$ is non-zero.
We may and do assume that
$\mu$ is a probability measure.
First we consider the case (1). Since
$e^{-\beta}\mu(\{R(w)\}) \leq \mu(\{w\})$ for $w \in \hat{\mathbb C}$
by Lemma \ref{lemma;pointmassinequality},
$\mu$ has no point mass on ${\mathcal C}(R)$.
Therefore (K1) holds for all $a \in A$ and
we may consider the case (2).
For $a = 1$, we have $\tilde{a}(x) = N$ for a.e. $x$ with respect to
$\mu$, and hence (K1) forces $\beta = \log N$.
For any $a \in A$, we have
$$\tau_{\mu}(a) = F_{\beta}(\tau_{\mu})(a) =
\tau_{\mu}(\overline{G}(a)).$$
Note that $\mu(E_R) = 0$ holds as $E_R \subset {\mathcal B}(R)$.
For any $\varepsilon > 0$, we can choose a compact subset
$K \subset \hat{\mathbb C}$ satisfying $K \cap E_R = \emptyset$
and $\mu(K^c) \|a\| < \varepsilon $ by the regularity of $\mu$.
Thus
\begin{eqnarray*}\lefteqn{
| \tau_{\mu}(a) - \tau^L(a)|
= |\tau_{\mu}(\overline{G}^m(a)) - \tau^L(a)|} \\
&= & | \int_K \overline{G}^m(a)\,d\mu
+ \int_{K^c} \overline{G}^m(a)\,d\mu
- (\tau^L(a) \mu(K) + \tau^L(a) \mu(K^c))| \\
&\le & | \int_K \big(\overline{G}^m(a) - \tau^L(a)\big) \,d\mu|
+ | \int_{K^c} \overline{G}^m(a)\,d\mu |
+ \tau^L(a) \mu(K^c)
< 3 \varepsilon.
\end{eqnarray*} for sufficiently large $m$ thanks to
Proposition \ref{prop:Lyu}, which shows $\mu = \mu^L$.
\end{proof}
\begin{prop} The Lyubich measure $\mu^L$
extends to an infinite type $\log N$-KMS state
$\varphi^L$ on ${\mathcal O}_R(\hat{\mathbb C})$ for
the gauge action.
\end{prop}
\begin{proof}
Since the Lyubich measure has no point mass,
it satisfies $F(\tau^L)=G^*(\tau^L)$.
On the other hand, for any $x\in \hat{\mathbb{C}}$ we have
$$G^*(\mu_n^x)=N\mu_{n+1}^x,$$
which implies $G^*(\tau^L)=N\tau^L$.
Thus by Corollary \ref{cor;characterization},
the Lyubich measure $\mu ^L$ corresponds to a
$\log N$-KMS state $\varphi^L$.
\end{proof}
Now we are ready to state our classification results.
Since we already know the result for $\beta>\log N$ thanks to Corollary
\ref{cor;characterization}, we assume $0<\beta\le \log N$.
Proposition~\ref{prop;pointmass} implies that every finite type
$\beta$-KMS state, if it exists, arises from a point in $E_R$.
Let $\mu$ be a probability measure corresponding to an extreme
infinite type $\beta$-KMS state $\varphi$, which satisfies $F_\beta(\mu)=\mu$.
Let $\mu=\mu_a+\mu_d$ be the decomposition of $\mu$ into the
atomic part $\mu_a$ and the diffuse part $\mu_d$.
Then an argument similar to that in the proof of
Lemma~\ref{lemma;pointmassinequality} shows that
$F_\beta(\mu_a)$ is atomic and $F_\beta(\mu_d)$ is diffuse again,
and so we get $F_\beta(\mu_a)=\mu_a$ and $F_\beta(\mu_d)=\mu_d$.
Therefore either $\mu_a=0$ or $\mu_d=0$ holds.
If $\mu_d=0$, Proposition~\ref{prop;pointmass} and
Lemma~\ref{lemma;exceptional} imply that $\mu$ would be supported by
$E_R$.
Since $R^{-1}(x)$ is a singleton for $x\in E_R$, the total mass of
$F_\beta(\mu)$ would be $e^{-\beta}\mu(\hat{\mathbb{C}})<\mu(\hat{\mathbb{C}})$,
contradicting $F_\beta(\mu)=\mu$.
Therefore $\mu_a=0$ and Proposition \ref{prop;Lyubich} implies that
$\beta=\log N$ and $\mu$ is the Lyubich measure.
The above observation with an easy case-by-case analysis using
Lemma~\ref{lemma;exceptional} shows the following theorems:
\begin{thm}
For $\beta>0$, there exists an infinite type $\beta$-KMS state for
the gauge action $\alpha$ on ${\mathcal O}_R(\hat{\mathbb{C}})$ if and only if
$\beta=\log N$.
The set $K_{\log N}(\alpha)_i$ is a singleton consisting of
$\varphi^L$ given by the Lyubich measure.
\end{thm}
\begin{thm} For the finite type $\beta$-KMS states for the gauge action
$\alpha$ on ${\mathcal O}_R(\hat{\mathbb{C}})$, the following holds:
\begin{itemize}
\item [$(1)$]
For $\beta>\log N$, the set
$\mathrm{ex}(K_\beta(\alpha)_f)$ of the extreme finite type $\beta$-KMS states
$\varphi$ is parameterized by the branched points ${\mathcal B}(R)$.
For $w \in {\mathcal B}(R)$, the corresponding extreme $\beta$-KMS state
$\varphi _{\beta,w}$ is determined by
its restriction $\varphi _{\beta,w} |_{C(\hat{\mathbb C})}$,
i.e., the regular Borel measure $\mu_{\beta,w}$ on $\hat{\mathbb C}$
given by
\[
\mu_{\beta,w} = m_{\mu _{\beta,w}}\sum_{k=0}^{\infty} \frac{1}{e^{k\beta}}
\sum_{z \in R^{-k}(w)} \delta_z,
\]
where $m_{\mu _{\beta,w}}$ is the normalizing constant.
\item [$(2)$] For $0<\beta\leq \log N$, the set
$\mathrm{ex}(K_\beta(\alpha)_f)$ of the extreme finite type $\beta$-KMS states
coincides with $\{\varphi_{\beta,w}\}_{w\in E_R}$.
\end{itemize}
\label{thm;classification}
\end{thm}
Furthermore the set of all extreme $\beta$-KMS states, which depends only
on the system $({\mathcal O}_R(\hat{\mathbb{C}}),\alpha)$, is obtained just as
$\mathrm{ex}(K_\beta(\alpha))=\mathrm{ex}(K_\beta(\alpha)_f)\cup
\mathrm{ex}(K_\beta(\alpha)_i)$.
\begin{rema}
Corollary \ref{cor;characterization} shows that the GNS representation of
every extreme finite type KMS state with $\beta>0$ is a type I factor representation.
In Section 7, we show that the GNS representation of $\varphi^L$ is a factor
representation of type III$_{1/N}$.
\end{rema}
\begin{rema}
Theorem~\ref{thm;K1K2} still holds for $\beta=0$ if we
define a 0-KMS state to be an $\alpha$-invariant tracial state.
Such a state exists if and only if $E_R$ is not empty and we have
the following:
\begin{itemize}
\item [(1)] When $E_R$ consists of one point $w$, then there exists a unique
$\alpha$-invariant trace state $\varphi_w$.
The restriction of $\varphi_w$ to $C(\hat{\mathbb{C}})$ is given by
the Dirac measure $\delta_w$.
\item [(2)] When $E_R$ consists of two points $w_1$ and $w_2$ with $R(w_1)=w_1$
and $R(w_2)=w_2$, the set of $\alpha$-invariant trace states has exactly two
extreme points $\{\varphi_{w_i}\}_{i=1,2}$.
The restriction of $\varphi_{w_i}$ to $C(\hat{\mathbb{C}})$ is given by
$\delta_{w_i}$ for $i=1,2$.
\item [(3)] When $E_R$ consists of two points $w_1$ and $w_2$ with $R(w_1)=w_2$
and $R(w_2)=w_1$, then there exists a unique $\alpha$-invariant
trace $\varphi$.
The restriction of $\varphi$ to $C(\hat{\mathbb{C}})$ is given by
$$\frac{1}{2}(\delta_{w_1}+\delta_{w_2}).$$
\end{itemize}
Note that the GNS representations of these states are not factor representations.
It is routine work to show that they give finite type I von Neumann algebras.
\end{rema}
\begin{exam} Let $R(z)=z^N$ with $N\geq 2$.
Then $E_R=\{0,\infty\}={\mathcal B}(R)$ with $R(0)=0$ and $R(\infty)=\infty$.
For every $\beta>0$ and $w=0,\infty$, we have $\mu_{\beta,w}=\delta_w$.
\end{exam}
\begin{exam} Let $R(z)=z^{-N}$ with $N\ge 2$.
Then we have $E_R={\mathcal B}(R)=\{0,\infty\}$ with $R(0)=\infty$ and $R(\infty)=0$.
For every $\beta>0$ we have
\[
\mu_{\beta,0} = \frac{e^\beta}{e^{\beta} + 1} \delta _{0}
+ \frac{1}{e^{\beta} + 1} \delta _{\infty},
\]
\[
\mu_{\beta,\infty} = \frac{1}{e^{\beta}+ 1} \delta _{0}
+ \frac{e^\beta}{e^{\beta} + 1} \delta _{\infty}.
\]
\end{exam}
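These measures can be read off from the formula in
Theorem \ref{thm;classification}: since $R^{-k}(0)$ is $\{\infty\}$ for
odd $k$ and $\{0\}$ for even $k$,
$$\mu_{\beta,0}\;\propto\;\sum_{k\ \mathrm{even}}e^{-k\beta}\delta_0
+\sum_{k\ \mathrm{odd}}e^{-k\beta}\delta_\infty
=\frac{1}{1-e^{-2\beta}}\big(\delta_0+e^{-\beta}\delta_\infty\big),$$
and normalizing yields the stated expression; $\mu_{\beta,\infty}$ is
obtained by exchanging the roles of $0$ and $\infty$.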
\begin{exam} Let $R(z) = z^N +1$ with $N \ge 2$. Then
$E_R = \{\infty\}$ and ${\mathcal B}(R) = \{0,\infty\}$.
For any $\beta \geq 0$ the Dirac measure $\delta_{\infty}$ extends to a
$\beta$-KMS state $\varphi_{\beta,\infty}$.
The measure
\[
\mu_{\beta,0} =(1-\frac{N}{e^\beta})\sum_{k=0}^{\infty} (\frac{1}{e^{k\beta}}
\sum_{z \in R^{-k}(0)} \delta_z),
\]
corresponds to a $\beta$-KMS state $\varphi_{\beta,0}$ for $\beta > \log N$.
For $0 \le \beta < \log N$, the set of $\beta$-KMS states consists of one
point $\varphi_{\beta,\infty}$.
For $\beta =\log N$, the set of extreme $\beta$-KMS states
consists of two points $\varphi^L$ and $\varphi_{\beta,\infty}$.
For $\log N < \beta $, the set of extreme $\beta$-KMS states
consists of two points $\varphi_{\beta,\infty}$ and $\varphi_{\beta,0}$.
\end{exam}
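The normalizing constant above is accounted for by the fact that the
backward orbit of $0$ avoids the critical points $0$ and $\infty$
(the forward orbit $0\mapsto 1\mapsto 2\mapsto\cdots$ never returns to
$0$, and $\infty$ is fixed), so that $\#R^{-k}(0)=N^k$ and, for
$\beta>\log N$,
$$\Big(1-\frac{N}{e^{\beta}}\Big)\sum_{k=0}^{\infty}
\frac{\#R^{-k}(0)}{e^{k\beta}}
=\Big(1-\frac{N}{e^{\beta}}\Big)\sum_{k=0}^{\infty}
\Big(\frac{N}{e^{\beta}}\Big)^{k}=1.$$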
Finally we consider the $C^*$-algebras ${\mathcal O}_R = {\mathcal O}_R(J_R)$
associated with a rational function $R$ on the Julia set $J_R$,
which is purely infinite and simple \cite{KW1}.
\begin{thm}
For the gauge action $\alpha$ on the $C^*$-algebras
${\mathcal O}_R(J_R)$, the following hold:
\begin{itemize}
\item [$(1)$] For $0 < \beta < \log N$, there exist no $\beta$-KMS states.
\item [$(2)$] For $\beta = \log N$, there exists a unique $\beta$-KMS state
$\varphi^L$ and its restriction $\varphi^L |_{C(J_R)}$ is the Lyubich measure.
The GNS representation of $\varphi^L$ is a factor representation of
type $III_{1/N}$.
\item [$(3)$] For $\log N < \beta$, the set $\mathrm{ex}(K_\beta(\alpha))$
of extreme $\beta$-KMS states is parameterized by the branched points
${\mathcal B}(R) \cap J_R$.
For $w \in {\mathcal B}(R) \cap J_R$, the corresponding extreme $\beta$-KMS state
$\varphi _{\beta,w}$ is determined by its restriction
$\varphi _{\beta,w} |_{C(J_R)}$, which is given
by the same formula as in Theorem \ref{thm;classification}.
The GNS representation of $\varphi_{\beta,w}$ is a factor representation of
type $I$.
\end{itemize}
\end{thm}
\begin{proof} The proof follows from a similar argument as in the
case of ${\mathcal O}_R(\hat{\mathbb{C}})$ with the fact that the Julia set $J_R$ contains
no exceptional point by \cite{B}.
\end{proof}
\section{Classification of KMS states in the case of self-similar sets}
Let $(K,d)$ be a compact metric space and let
$\gamma=(\gamma_1,\gamma_2,\cdots,\gamma_N)$ be a system of proper
contractions satisfying $K=\bigcup_{i=1}^N\gamma_i(K)$.
In this section, we consider the structure of the KMS states for the
gauge action on ${\mathcal O}_X={\mathcal O}_\gamma (K)$.
Recall $A = C(K)$ and $I_X = \{f \in C(K) ; f|_{{\mathcal B}(\gamma)} = 0 \}
=C_0(K\setminus {\mathcal B}(\gamma))$ in this case.
The set ${\mathcal B}(\gamma)$ is finite if and only if ${\mathcal C}(\gamma)$ is finite.
For $y\in K$, we set $\gamma(y)=\bigcup_{i=1}^N\{\gamma_i(y)\}$.
The Borel function $\tilde{a}$ for $a \in A$ is
\begin{equation}
\tilde{a}(y) = \sum_{x \in \gamma(y)}a(x)
= \sum_{j=1}^N \frac{1}{e(\gamma_j(y),y)}a(\gamma_j(y)).
\end{equation}
Note that if ${\mathcal C}(\gamma)$ is not empty, $\tilde{a}$ is not necessarily
continuous.
Theorem \ref{thm;explicitPF} shows
\begin{equation}
F_\beta(\delta_y)=e^{-\beta}\sum_{j=1}^N
\frac{1}{e(\gamma_j(y),y)}\delta_{\gamma_j(y)}
= e^{-\beta}\sum_{x \in \gamma(y)}\delta_{x}.
\end{equation}
If a probability measure $\mu$ on $K$ satisfies (K2), we have
$\mu\geq F_\beta(\mu)\geq \mu(\{x\})F_\beta(\delta_x)$ and so,
the following holds:
\begin{equation}
\mu(\{\gamma_i(x)\})\geq e^{-\beta}\mu(\{x\}).
\label{pointmass}
\end{equation}
\begin{lemma} Let $\mu$ be a regular Borel probability measure
on $K$ satisfying (K1) and (K2). If $\mu({\mathcal C}(\gamma)) = 0$,
then $\log N \leq \beta$.
\label{lemma;pointmassless}
\end{lemma}
\begin{proof}
Suppose $\mu({\mathcal C}(\gamma))=0$. Then
\[
\tilde{1}(y)
= \sum_{j=1}^N \frac{1}{e(\gamma_j(y),y)}1(\gamma_j(y))
= N, \quad \text{a.e. $y$ with respect to $\mu$}.
\]
(K2) implies that
\[
\frac{N}{e^{\beta}} =
\frac{1}{e^{\beta}}
\int_{K} \tilde{1}(z) d\mu(z)
\leq \int_{K} 1 d\mu(z)
= 1 .
\]
Hence $\log N \leq \beta$.
\end{proof}
For $x\in K$ and $n\geq 0$, we define the $n$-th orbit of $x$ by
$$O_n(x)=\{\gamma_{i_1}\gamma_{i_2}\cdots\gamma_{i_n}(x)\in K;\:
1\leq i_1,i_2,\cdots,i_n\leq N\}$$
and set $O(x)=\bigcup_{n=0}^\infty O_n(x)$.
Note that if a measure $\mu$ satisfying (K2) has a point mass at $x$,
it has a point mass at $y$ for all $y\in O(x)$ thanks to (\ref{pointmass}).
\begin{lemma} Let $0<\beta\leq\log N$ and
let $\mu$ be a regular Borel probability measure
on $K$ satisfying (K1) and (K2).
Assume either $0<\beta<\log N$ or $\mu$ is of finite type.
Let $y\in K$.
If there exists $x\in O(y)$ such that $O(x)\cap {\mathcal C}(\gamma)=\emptyset$,
then $\mu$ has no point mass at $y$.
\label{lemma;selfsimilarpointmass}
\end{lemma}
\begin{proof}
First we assume $0 < \beta<\log N$.
If $\mu$ had a point mass at $y$, then
$\mu$ would have a point mass at $x$ as well and
we would have
$\mu\geq \mu(\{x\})F_\beta^n(\delta_x)$.
However, by assumption we get
\[
1 \geq \mu(\{x\})F_\beta^n(\delta _x)(K)
=\mu(\{x\}) (Ne^{-\beta})^n \rightarrow \infty,
\quad (n\to \infty),
\]
which is a contradiction.
Now we assume $\beta=\log N$ and the corresponding KMS state is of finite
type.
Then there is a measure $\nu$ supported by ${\mathcal B}(\gamma)$ such that
$$\mu=\sum_{n=0}^\infty F_\beta^n(\nu).$$
Suppose that $\mu$ has a point mass at $y$.
Then $\mu$ would have a point mass at $x$ as well, so there would exist
$n$ and $c>0$ such that $F_\beta^n(\nu)\geq c\delta_x$,
and so $F_\beta^{n+m}(\nu)\geq cF_\beta^m(\delta_x)$.
As we have $F_\beta^m(\delta_x)(1)=1$,
$$
1 = \mu(1) =\sum_{k=0}^\infty F_\beta^k(\nu)(1)
\geq \sum_{k=n}^\infty c = \infty.
$$
We get a contradiction.
\end{proof}
To describe infinite type KMS states, we recall the notion of
the Hutchinson measure for a self-similar set (see \cite{H} for details).
\begin{defi} Let $\overline{G}:C(K)\rightarrow C(K)$ be the unital positive
map defined by
$$\overline{G}(a)=\frac{1}{N}\sum_{i=1}^Na\cdot\gamma_i,\quad a\in C(K).$$
The {\it Hutchinson measure} is the unique $\overline{G}^*$-invariant probability measure.
\end{defi}
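As an aside, the action of the dual map $\overline{G}^*$ on point masses can be checked numerically. The Python sketch below (our illustration, not part of the text) uses the IFS $\gamma_1(y)=y/2$, $\gamma_2(y)=(y+1)/2$ on $[0,1]$, which reappears in a later example and whose Hutchinson measure is the normalized Lebesgue measure; the moments of $(\overline{G}^*)^n\delta_0$ approach the Lebesgue moments $1/2$ and $1/3$.

```python
from itertools import product

# Illustration: the IFS gamma_1(y)=y/2, gamma_2(y)=(y+1)/2 on [0,1],
# whose Hutchinson measure is the normalized Lebesgue measure.
gammas = [lambda y: y / 2, lambda y: (y + 1) / 2]
N = len(gammas)

def dual_G_iterate(y0, n):
    """Return the atoms of (G^*)^n applied to the point mass at y0."""
    atoms = []
    for word in product(range(N), repeat=n):
        y = y0
        for i in reversed(word):  # apply gamma_{i_1} o ... o gamma_{i_n}
            y = gammas[i](y)
        atoms.append((y, N ** (-n)))
    return atoms

def moment(atoms, k):
    return sum(w * y ** k for y, w in atoms)

atoms = dual_G_iterate(0.0, 12)
# First and second moments approach those of Lebesgue measure: 1/2 and 1/3.
print(moment(atoms, 1), moment(atoms, 2))
```

The iterates of $\overline{G}^*$ spread the unit mass uniformly over the dyadic points $k/2^n$, which is exactly weak-$*$ convergence to Lebesgue measure for this IFS.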
\begin{lemma} Unless $K$ consists of one point,
the Hutchinson measure $\mu^H$ has no point mass.
\label{lemma;nopintmass}
\end{lemma}
\begin{proof} Let $\mu^H=\mu_a+\mu_d$ be the decomposition of $\mu^H$
into the atomic part $\mu_a$ and the diffuse part $\mu_d$.
Then it is easy to show that $\overline{G}^*(\mu_a)$ is atomic and
$\overline{G}^*(\mu_d)$ is diffuse.
Thus either $\mu_a=0$ or $\mu_d=0$ holds thanks to the uniqueness of a
$\overline{G}^*$-invariant measure.
Suppose $\mu_d=0$.
We set $m=\max\{\mu^H\{x\};\;x\in K\}$ and $L=\{x\in K;\; \mu^H\{x\}=m\}$,
which is a finite set.
Since
$$\overline{G}^*(\delta_x)=\frac{1}{N}\sum_{i=1}^N\delta_{\gamma_i(x)},$$
and $\overline{G}^*(\mu^H)=\mu^H$, we have
\begin{eqnarray*} m(\#L)&=&\mu^H(L)=\overline{G}^*(\mu^H)(L)
=\frac{1}{N}\sum_{i=1}^N\sum_{y\in K}\mu^H\{y\}\delta_{\gamma_i(y)}(L)\\
&=&\frac{1}{N}\sum_{i=1}^N\sum_{x\in L,y\in K}\mu^H\{y\}\delta_{x,\gamma_i(y)}.
\end{eqnarray*}
This shows that $\gamma_i(L)=L$ for all $i$ and $L=K$.
Since $\gamma_i$ is a proper contraction and $L$ is a finite set,
we conclude that $K$ consists of one point.
\end{proof}
Note that when a measure $\mu$ satisfies $\mu({\mathcal C}(\gamma))=0$, we
have $F_\beta(\mu)=Ne^{-\beta}\overline{G}^*(\mu)$.
Thus when the Hutchinson measure $\mu^H$ satisfies $\mu^H({\mathcal C}(\gamma))=0$,
it gives rise to an infinite type $\log N$-KMS state, which we denote by
$\varphi^H$.
Thanks to Lemma~\ref{lemma;nopintmass}, the condition $\mu^H({\mathcal C}(\gamma))=0$
holds whenever ${\mathcal C}(\gamma)$ is a countable set.
Conversely, we have
\begin{lemma} Let $\varphi$ be a $\beta$-KMS state of
infinite type for $\beta=\log N$ and let $\mu$ be the corresponding
measure on $K$.
Then $\varphi=\varphi^H$ and $\mu^H({\mathcal C}(\gamma))=0$.
\label{lemma;Hutchinson}
\end{lemma}
\begin{proof} $F_\beta(\mu)=\mu$ implies
\begin{eqnarray*}
0&=&\int_K(N-\tilde{1}(x))d\mu(x)=\int_K \sum_{j=1}^N
\Big(1-\frac{1}{e(\gamma_j(x),x)}\Big)d\mu(x)\\
&=&\int_{{\mathcal C}(\gamma)} \sum_{j=1}^N
\Big(1-\frac{1}{e(\gamma_j(x),x)}\Big)d\mu(x),
\end{eqnarray*}
which shows $\mu({\mathcal C}(\gamma))=0$.
Thus $\mu=F_\beta(\mu)=\overline{G}^*(\mu)$ and so $\mu=\mu^H$.
\end{proof}
\begin{thm}
Let $\gamma=(\gamma_1,\gamma_2,\cdots,\gamma_N)$
be a system of proper contractions on a compact metric space $K$
such that $K=\bigcup_{i=1}^{N} \gamma_{i}(K)$.
Suppose either that ${\mathcal B}(\gamma)$ is empty or that
${\mathcal B}(\gamma)$ is a finite set and for every
$y\in {\mathcal C}(\gamma)$, there exists $x\in O(y)$ such that
$O(x)\cap {\mathcal C}(\gamma)=\emptyset$.
For the gauge action $\alpha$ on the $C^*$-algebras
${\mathcal O}_{\gamma}(K)$, the following hold:
\begin{itemize}
\item [$(1)$] For $0<\beta < \log N$, there exist no $\beta$-KMS states.
\item [$(2)$] For $\beta = \log N$, there exists a unique
$\beta$-KMS state $\varphi^H$ and its restriction $\varphi^H |_{C(K)}$ is
the Hutchinson measure $\mu^H$.
\item [$(3)$] For $\log N < \beta$, the set $\mathrm{ex}(K_\beta(\alpha))$
of extreme $\beta$-KMS states
is parameterized by the branched points ${\mathcal B}(\gamma)$.
\end{itemize}
\label{thm;selfsimilarkms}
\end{thm}
\begin{proof}
For $0< \beta<\log N$, Lemma \ref{lemma;pointmassless} and
Lemma \ref{lemma;selfsimilarpointmass} show that there is no
$\beta$-KMS state.
Assume that $\mu$ is a measure corresponding to a finite type
$\beta$-KMS state for $\beta=\log N$.
Then Lemma \ref{lemma;selfsimilarpointmass} implies $\mu({\mathcal C}(\gamma))=0$
and so $\mu(K)=F_\beta(\mu)(K)$ would hold.
However, this means that $\mu$ is of infinite type, which is a contradiction.
The rest follows from Lemma \ref{lemma;Hutchinson} and
Corollary \ref{cor;characterization}.
\end{proof}
\begin{rema}
In Section 7, we show that
the GNS representation of $\varphi^H$ is a factor representation of type
$III_{1/N}$.
\end{rema}
\begin{exam}[Inverse branches of the tent map]
Let $K=[0,1]$, $\gamma_1(y) = \frac{1}{2}y$,
and $\gamma_2(y) = 1 - \frac{1}{2}y$.
Then ${\mathcal B}(\gamma) =\{ \frac{1}{2}\}$ and ${\mathcal C}(\gamma) = \{1\}$.
Since $O(1/2)\cap {\mathcal C}(\gamma)=\emptyset$, the assumption of Theorem
\ref{thm;selfsimilarkms} is satisfied.
The Hutchinson measure $\mu^H$ is
the normalized Lebesgue measure on $[0,1]$.
Hence for $0< \beta < \log 2$, there exist no $\beta$-KMS states.
For $\beta = \log 2$, there exists a unique $\beta$-KMS state
$\varphi^H$ and its restriction $\varphi^H |_{C([0,1])}$ is
the normalized Lebesgue measure.
For $\log 2 < \beta$, there exists a unique $\beta$-KMS state
$\varphi_{\beta,1/2}$ and its restriction $\varphi_{\beta,1/2} |_{C([0,1])}$ is
\[
\mu_{\beta,1/2}=(1-\frac{2}{e^\beta})
\sum_{n=0}^{\infty} \frac{1}{e^{n\beta}}
\sum_{(j_1,\cdots,j_n) \in \{1,2\}^n}
\delta_{\gamma_{j_1}\cdots \gamma_{j_n}(1/2)}.
\]
\end{exam}
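As a sanity check on the displayed formula, the total mass of the truncated sums defining $\mu_{\beta,1/2}$ can be evaluated numerically; since the atoms $\gamma_{j_1}\cdots\gamma_{j_n}(1/2)$ are pairwise distinct here, level $n$ contributes mass $2^n e^{-n\beta}$ up to the normalizing constant. The Python sketch below (our illustration, with an arbitrary choice $\beta=2\log 2 > \log 2$) confirms that the mass tends to $1$.

```python
import math

# Numerical check for the tent-map example:
# gamma_1(y) = y/2, gamma_2(y) = 1 - y/2 on [0,1].
g1 = lambda y: y / 2
g2 = lambda y: 1 - y / 2

# The two branches collide exactly at y = 1: gamma_1(1) = gamma_2(1) = 1/2,
# so C(gamma) = {1} and the branched point is 1/2.
collision = g1(1.0) == g2(1.0) == 0.5

def truncated_mass(beta, depth):
    """Partial sums of the total mass of mu_{beta,1/2}; level n contributes
    2^n atoms, each of weight e^{-n beta} (the atoms are pairwise distinct)."""
    c = 1 - 2 * math.exp(-beta)
    return c * sum(2 ** n * math.exp(-n * beta) for n in range(depth))

beta = 2 * math.log(2)  # a sample value with beta > log 2
print(truncated_mass(beta, 40))  # tends to 1 as the depth grows
```

The atoms at different levels never coincide because the orbit of $1/2$ avoids the collision point $1$, which is what makes the geometric-series computation of the total mass valid.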
\begin{exam} Let $K=[0,1]$, $\gamma_1(y) = \frac{1}{2}y$,
and $\gamma_2(y) =\frac{1}{2}(y+1)$.
Then ${\mathcal B}(\gamma)=\emptyset$ and ${\mathcal C}(\gamma)=\emptyset$, and
the Hutchinson measure $\mu^H$ is the normalized Lebesgue measure on $[0,1]$.
The $C^*$-algebra ${\mathcal O}_{\gamma}(K)$ with $\alpha$
is isomorphic to the Cuntz algebra ${\mathcal O}_2$ with
the usual gauge action.
Thus a $\beta$-KMS state on ${\mathcal O}_{\gamma}(K)$ exists if and
only if $\beta = \log 2$, and its restriction to
$C([0,1])$ is the normalized Lebesgue measure.
\end{exam}
\begin{exam}[Sierpinski Gasket]
Let $\Omega$ be a regular triangle in ${\mathbb R}^2$
with three vertices
$c_1=(1/2,\sqrt{\,3}/2)$, $c_2 = (0,0)$ and $c_3=(1,0)$.
The midpoint of $c_1c_2$ is denoted by $b_1$, the midpoint of
$c_1c_3$ by $b_2$, and the midpoint of $c_2c_3$ by $b_3$.
We define proper contractions ${\gamma_i}$ for $i=1,2,3$ by
\[
{\gamma}_1(x,y) = \left( \frac{x}{2} + \frac{1}{4}, \frac{y}{2}
+ \frac{\sqrt{\,3}}{4}\right), \quad
{\gamma}_2(x,y) = \left( \frac{x}{2}, \frac{y}{2}\right), \quad
{\gamma}_3(x,y) = \left( \frac{x}{2} + \frac{1}{2},\frac{y}{2}
\right).
\]
Then the self-similar set $K$ associated with $\gamma$ is
the Sierpinski gasket.
It is known that $K$ is homeomorphic to
the Julia set $J_R$ of the rational function
$$R(z) = \frac{z^3-\frac{16}{27}}{z}.$$
However, these three contractions cannot be identified with the
inverse branches of $R$ because $\gamma _1(c_2) = \gamma _2(c_1)$.
Let
$\tilde{\gamma _1} = \gamma _1$,
$\tilde{\gamma _2} = \rho_{-\frac{2\pi}{3}} \circ \gamma _2$,
and $\tilde{\gamma _3} = \rho_{\frac{2\pi}{3}} \circ \gamma _3$,
where $\rho_{\theta}$ is a rotation by the angle $\theta$.
The self-similar set for
$\tilde{\gamma}=(\tilde{\gamma _1}, \tilde{\gamma _2}, \tilde{\gamma _3})$
is $K$ as well and they are inverse branches of a map $h: K \rightarrow K$,
which is conjugate to $R : J_R \rightarrow J_R$.
Therefore ${\mathcal O}_R(J_R)$ is isomorphic to
${\mathcal O}_{\tilde{\gamma}}(K)$,
which is known to be a purely infinite and simple $C^*$-algebra.
The $C^*$-algebras ${\mathcal O}_{\gamma}(K)$ and
${\mathcal O}_{\tilde{\gamma}}(K)$ have non-isomorphic $K$-groups and hence
they are not isomorphic (See \cite{KW1} for details).
Since ${\mathcal B}(\gamma)=\emptyset$ and ${\mathcal C}(\gamma)=\emptyset$, there exists
a $\beta$-KMS state for the gauge action on
${\mathcal O}_{\gamma}(K)$ if and
only if $\beta = \log 3$.
For $\tilde{\gamma}$, we have ${\mathcal B}(\tilde{\gamma})=\{b_1,b_2,b_3\}$,
${\mathcal C}(\tilde{\gamma}) =\{ c_1,c_2,c_3 \}$, and
the assumption of Theorem \ref{thm;selfsimilarkms} is satisfied.
Thus for $0<\beta < \log 3$, there exist no $\beta$-KMS states.
For $\beta = \log 3$, there exists a unique $\beta$-KMS state
whose restriction to $C(K)$ is given by the Hutchinson measure $\mu^H$.
For $\log 3 < \beta$, the set
$\mathrm{ex}(K_\beta(\alpha))$ of extreme $\beta$-KMS states is parameterized
by the branched points ${\mathcal B}(\gamma) = \{b_1,b_2,b_3\}$.
For $b_i \in {\mathcal B}(\gamma)$, the corresponding extreme $\beta$-KMS state
is given by the measure
\[
\mu_{\beta, b_i} = (1 -\frac{3}{e^{\beta}})
\sum_{n=0}^{\infty} \frac{1}{e^{n\beta}}
\sum_{(j_1,\cdots,j_n) \in \{1,2,3\}^n}
\delta_{\gamma_{j_1}\cdots \gamma_{j_n}(b_i)}.
\]
\end{exam}
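The coincidence $\gamma_1(c_2)=\gamma_2(c_1)$ invoked in the example above is easy to verify directly. The Python sketch below (our illustration, not part of the argument) evaluates the three contractions at the vertices and checks that both values equal the midpoint $b_1$ of $c_1c_2$, and that $c_2$ and $c_3$ are fixed points of $\gamma_2$ and $\gamma_3$ respectively.

```python
import math

# The Sierpinski-gasket contractions gamma_i from the example above.
s3 = math.sqrt(3)
g1 = lambda p: (p[0] / 2 + 0.25, p[1] / 2 + s3 / 4)
g2 = lambda p: (p[0] / 2, p[1] / 2)
g3 = lambda p: (p[0] / 2 + 0.5, p[1] / 2)

# Vertices of the triangle and the midpoint b_1 of c_1 c_2.
c1, c2, c3 = (0.5, s3 / 2), (0.0, 0.0), (1.0, 0.0)
b1 = ((c1[0] + c2[0]) / 2, (c1[1] + c2[1]) / 2)

# gamma_1(c_2) = gamma_2(c_1) = b_1, so the three contractions are not the
# inverse branches of a single map on K.
print(g1(c2), g2(c1), b1)
```

This collision is precisely why the branched-point set of $\tilde{\gamma}$ is $\{b_1,b_2,b_3\}$ once the rotations are inserted, while the plain system $\gamma$ has none.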
\section{Type III representations by Lyubich and Hutchinson measures}
In this section, we show that the KMS states arising from the Lyubich
measures and the Hutchinson measures give
type III$_{1/N}$ representations.
Our proof is based on the facts that the Cuntz algebra ${\mathcal O}_N$ arises from
the Bernoulli $N$-shift and that the unique KMS state of the gauge action
of ${\mathcal O}_N$ gives a type III$_{1/N}$ representation.
Let $N$ be an integer greater than 1 and $K_N=\{1,2,\cdots,N\}^{\mathbb{N}}$.
We denote by $\nu_N$ the Bernoulli measure with weight
$(\frac{1}{N},\dots,\frac{1}{N})$, i.e. the infinite
product measure of the uniform measure of $\{1,2,\cdots,N\}$.
Let $\sigma$ be the Bernoulli shift
$$\sigma(x_1,x_2,\cdots)=(x_2,x_3,\cdots).$$
For $j=1,2,\cdots,N$, we set $\rho_j$ to be the inverse branches of
$\sigma$, that is,
$$\rho_j(x_1,x_2,\cdots)=(j,x_1,x_2,\cdots).$$
Note that $\nu_N$ is nothing but the Hutchinson measure of
$\rho=(\rho_1,\rho_2,\cdots,\rho_N)$.
Let $Y$ be the $C(K_N)-C(K_N)$ bimodule giving ${\mathcal O}_{\rho}(K_N)$.
Since
$$\bigcup_{j=1}^N\{(\rho_j(y),y)\in K_N^2;\;y\in K_N\}
=\{(x,\sigma(x))\in K_N^2;\;x\in K_N\},$$
in what follows, we identify $Y$ with
$C(K_N)$ whose Hilbert $C^*$-bimodule structure is given by
$$(a\cdot f\cdot b)(x)=a(x)f(x)b(\sigma(x)),$$
$$(f|g)_{C(K_N)}(y)=\sum_{j=1}^N \overline{f(\rho_j(y))}{g(\rho_j(y))}.$$
We use the symbol $1_E$ for the characteristic function of a subset
$E\subset K_N$.
For $\xi=(\xi_1,\xi_2,\cdots,\xi_n)\in \{1,2,\cdots,N\}^n$,
we denote by $E_\xi$ the cylinder set
$$E_\xi=\{x\in K_N; \;x_k=\xi_k,\; 1\leq k\leq n\}.$$
It is known that the $C^*$-algebra ${\mathcal O}_\rho(K_N)$ is isomorphic to
the Cuntz algebra ${\mathcal O}_N$.
Indeed, ${\mathcal O}_\rho(K_N)$ is generated by $S_{1_{E_1}},S_{1_{E_2}},
\cdots,S_{1_{E_N}}$, which satisfy the Cuntz algebra relation
(see \cite[Proposition 4.1]{KW2} and \cite[Section 4]{PWY}).
Note that the isomorphism ${\mathcal O}_N\ni S_j\mapsto S_{1_{E_j}}\in {\mathcal O}_\rho(K_N)$
intertwines the gauge actions, and in consequence, the KMS states.
We give a $W^*$-version of this argument first.
\subsection{The case of Hutchinson measures}
Let $\gamma=(\gamma_1,\gamma_2,\cdots,\gamma_N)$ be
an $N$-tuple of proper contractions of a compact metric space $K$
satisfying the self-similar condition
$K=\bigcup_{j=1}^N\gamma_j(K).$
We assume that the Hutchinson measure $\mu^H$ satisfies
$\mu^H({\mathcal C}(\gamma))=0$ and denote by $\varphi^H$ the corresponding
KMS state of the gauge action $\alpha$ on ${\mathcal O}_\gamma(K)$ as before.
For $\xi=(\xi_1,\xi_2,\cdots,\xi_n)\in \{1,2,\cdots,N\}^n$, we use
the notations
$$\gamma_\xi=\gamma_{\xi_1}\cdot\gamma_{\xi_2}\cdots\gamma_{\xi_n}.$$
Since all the maps $\gamma_1,\gamma_2,\cdots,\gamma_N$ are proper contractions,
there exists a surjective continuous map
$q : K_N \rightarrow K$ such that
\[
\{q (x)\} = \bigcap_{n=0}^\infty\gamma_{x_1}\cdot
\gamma_{x_2}\cdots\gamma_{x_n}(K).
\]
We have
$q \circ \rho _i = \gamma _i \circ q$ for $i = 1, \dots , N$.
Since
$$
\sum_{j=1}^N{\gamma_j}_*q_*\nu_N=\sum_{j=1}^Nq_*{\rho_j}_*\nu_N=Nq_*\nu_N,$$
the uniqueness of the Hutchinson measure $\mu^H$ implies $\mu^H= q_* \nu_N$.
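The intertwining relation $q\circ\rho_i=\gamma_i\circ q$ can be checked on truncated sequences. The Python sketch below (an illustration under the binary IFS $\gamma_1(y)=y/2$, $\gamma_2(y)=(y+1)/2$, for which $q$ is essentially binary expansion) evaluates $q$ on finite words in $\{1,2\}$ and verifies the relation exactly.

```python
# Sketch of the coding map q for the IFS gamma_1(y)=y/2, gamma_2(y)=(y+1)/2
# on K = [0,1]: here q(x_1, x_2, ...) = sum_k (x_k - 1) 2^{-k}, computed on
# truncated sequences.
g = {1: lambda y: y / 2, 2: lambda y: (y + 1) / 2}

def q(word):
    """Coding map on a finite truncation of a sequence in {1,2}^N."""
    y = 0.0
    for x in reversed(word):  # q(x) = gamma_{x_1} gamma_{x_2} ... (0)
        y = g[x](y)
    return y

def rho(j, word):
    """Inverse branch of the shift: prepend the symbol j."""
    return (j,) + tuple(word)

w = (1, 2, 2, 1, 2, 1, 1, 2)
# Intertwining relation q . rho_j = gamma_j . q on truncations:
for j in (1, 2):
    print(q(rho(j, w)), g[j](q(w)))
```

For this IFS the two branches overlap only at the single point $1/2$, so $q$ identifies at most two sequences with each point, mirroring the usual ambiguity of binary expansions.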
\begin{lemma} For all $f,g\in L^2(K,\mu^H)$, the following holds:
$$(f|g)_{\mu^H}=\lim_{n\to\infty}\frac{1}{N^n}
\sum_{\xi\in \{1,2,\cdots,N\}^n}(f\cdot\gamma_{\xi}|1)_{\mu^H}
(1|g\cdot\gamma_\xi)_{\mu^H},$$
where $(\cdot|\cdot)_{\mu^H}$ is the inner product of $L^2(K,\mu^H)$.
\end{lemma}
\begin{proof} Since $q^*:L^2(K,\mu^H)\ni f\mapsto f\cdot q\in L^2(K_N,\nu_N)$
is an isometry satisfying $\rho_j^*q^*=q^*\gamma_j^*$,
it suffices to show the statement for $(K_N,\rho,\nu_N)$ instead of
$(K,\gamma,\mu^H)$.
Let ${\mathcal F}_n$ be the $\sigma$-field generated by the first $n$
coordinate functions of $K_N$, and $E(f|{\mathcal F}_n)$ be the conditional
expectation of $f$ given ${\mathcal F}_n$.
Then for $f\in L^1(K_N,\nu_N)$ and $\xi\in \{1,2,\cdots,N\}^n$,
we have
$$E(f|{\mathcal F}_n)(\xi)=\int_{K_N}f\cdot\rho_\xi(x)d\nu_N(x).$$
Thus for $f,g\in L^2(K_N,\nu_N)$, we get
$$\frac{1}{N^n}
\sum_{\xi\in \{1,2,\cdots,N\}^n}
(f\cdot\rho_\xi|1)_{\nu_N}(1|g\cdot\rho_\xi)_{\nu_N}
=(E(f|{\mathcal F}_n)|E(g|{\mathcal F}_n))_{\nu_N},$$
which tends to $(f|g)_{\nu_N}$ as $n$ goes to infinity.
\end{proof}
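For functions depending on finitely many coordinates, the identity in the proof above is an exact finite computation: if $f,g$ depend only on the first $m$ coordinates, then $E(f|{\mathcal F}_n)=f$ for $n\geq m$ and the cylinder-average sum already equals $(f|g)_{\nu_N}$. The Python sketch below (our illustration with $N=2$, $m=3$ and two sample functions) verifies this.

```python
from itertools import product

# Finite check of the martingale identity: f, g depend on the first m
# coordinates, so E(f|F_n) = f for n >= m and the cylinder-average sum
# recovers the L^2 inner product exactly.
N, m = 2, 3

def f(w): return w[0] + 2 * w[1] * w[2]   # sample tail-independent functions
def g(w): return 3 * w[0] - w[1]

def inner(u, v, n):
    """(u|v) in L^2(K_N, nu_N) for functions of the first n coordinates."""
    return sum(u(w) * v(w) for w in product((1, 2), repeat=n)) / N ** n

def cylinder_sum(n):
    total = 0.0
    for xi in product((1, 2), repeat=n):
        # f . rho_xi is constant once n >= m: it only sees the prefix xi.
        total += inner(lambda w: f(xi + w), lambda _: 1, m) * \
                 inner(lambda _: 1, lambda w: g(xi + w), m)
    return total / N ** n

print(cylinder_sum(m), inner(f, g, m))
```

For general $f,g\in L^2$ the same computation applied to $E(f|{\mathcal F}_n)$ and $E(g|{\mathcal F}_n)$ plus martingale convergence gives the limit statement of the lemma.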
Let $A=C(K)$ and $X$ be the $A-A$ bimodule giving ${\mathcal O}_\gamma(K)$.
We denote by $(\pi,H,\Omega)$ the GNS triple for $\varphi^H$.
We set $M=\pi({\mathcal O}_\gamma(K))''$.
Let $\hat{\varphi}^H$ be the normal extension of $\varphi^H$ to $M$ given by
$\hat{\varphi}^H(m) = (m\Omega|\Omega)$ for $m \in M$.
Since $\varphi^H$ is a KMS state, $\hat{\varphi}^H$ is faithful on $M$.
Let $\tau ^H$ be the trace on $A=C(K)$ given by $\mu^H$ and
let $(\pi_0,H_0,\Omega _0)$ be the GNS triple for $\tau^H$.
We can identify $H_0$ with $L^2(K,\mu ^H)$
and $\pi_0(a)$ with the multiplication operator by $a \in A$.
Thus $\pi_0$ extends to a normal representation of $L^\infty(K,\mu^H)$.
On the other hand, since $\tau^H$ is the restriction of $\varphi^H$ to $A$,
the Hilbert space $H_0$ is naturally identified with the closure of
$\pi(A)\Omega$ and the restriction $\pi|_A$ of $\pi$ to $A$ is
quasi-equivalent to $\pi_0$ \cite[Lemma 4.1]{I}.
Therefore $\pi|_A$ extends to a normal representation
of $B:=L^\infty(K,\mu^H)$, which is denoted by $\hat{\pi}$.
For a measurable function $f$ on $K$, we denote by $||f||_p$ the $L^p$-norm
of $f$ in $L^p(K,\mu^H)$.
For $m \in M$, we set
$||m||_2=||m\Omega||$.
It is well-known that the strong operator topology on the
unit ball $M^1$ of $M$ coincides with the topology given by $||\cdot||_2$,
and $M^1$ is complete with respect to $||\cdot||_2$.
Let $\{h_n\}_{n=1}^\infty$ be an increasing sequence of non-negative
functions in $A$ such that $h_n(x)=0$ for $x\in {\mathcal C}(\gamma)$
and $\{h_n(y)\}_{n=1}^\infty$ converges to 1 for all
$y\in K\setminus {\mathcal C}(\gamma)$.
For $1\leq j\leq N$, we introduce $u_{j,n}\in X$ by setting
$u_{j,n}(\gamma_j(x),x)=h_n(x)$ and
$u_{j,n}(\gamma_k(x),x)=0$ for $k\neq j$.
\begin{lemma} Let the notation be as above.
Then the sequence $\{\pi(S_{u_{j,n}})\}_{n=1}^\infty$ converges to
an isometry, say $\tilde{S}_j$, in the $*$ strong operator topology.
The isometries $\{\tilde{S}_j\}_{j=1}^N$ satisfy the Cuntz algebra relation.
For the modular automorphism group $\{\sigma^{\hat{\varphi}^H}_t\}_{t\in \mathbb{R}}$,
the following holds:
$\sigma^{\hat{\varphi}^H}_t(\tilde{S}_j)=e^{-t\log N\sqrt{-1}}\tilde{S}_j$,
$\forall t\in \mathbb{R}.$
\label{lemma;IIIH}
\end{lemma}
\begin{proof} By straightforward computation, we have $||S_{u_{j,m}}||\leq 1$ and
$$||\pi(S_{u_{j,m}}-S_{u_{j,n}})||_2^2=||h_m-h_n||_2^2,$$
$$||\pi(S_{u_{j,m}}^*-S_{u_{j,n}}^*)||_2^2=\frac{1}{N}||h_m-h_n||_2^2. $$
Since we assume that $\mu^H({\mathcal C}(\gamma))=0$, these imply that
$\{\pi(S_{u_{j,n}})\}_{n=1}^\infty$ and
$\{\pi(S_{u_{j,n}})^*\}_{n=1}^\infty$ are
Cauchy sequences with respect to
$||\cdot||_2$, and so $\{\pi(S_{u_{j,n}})\}_{n=1}^\infty$ converges in the
$*$ strong operator topology.
As we have $\pi(S_{u_{j,n}})^*\pi(S_{u_{k,n}})=\delta_{j,k}\pi(|h_n|^2)$, which converges
to $\delta_{j,k}1$ in the strong operator topology, the operators
$\{\tilde{S}_j\}_{j=1}^N$ are isometries with mutually orthogonal ranges.
Since $\sigma^{\hat{\varphi}^H}_t$ is a normal extension of
$\alpha_{-t\log N}$, we have
$\sigma_{t}^{\hat{\varphi}^H}(\tilde{S}_j)=e^{-t\log N\sqrt{-1}}\tilde{S}_j$,
which implies
$$\sum_{j=1}^N\hat{\varphi}^H(\tilde{S}_j\tilde{S}_j^*)
=\sum_{j=1}^N\frac{1}{N}\hat{\varphi}^H(\tilde{S}_j^*\tilde{S}_j)=1.$$
Since $\hat{\varphi}^H$ is faithful, this implies
$\sum_{j=1}^N\tilde{S}_j\tilde{S}_j^*=1.$
\end{proof}
\begin{thm} Let the notation be as above.
Then $M$ is the AFD type III$_{1/N}$ factor.
\label{theorem;IIIH}
\end{thm}
\begin{proof} Thanks to the above lemma, it suffices to show that
$M$ is generated by $\{\tilde{S}_j\}_{j=1}^N$.
Since $aS_{u_{j,n}}=S_{u_{j,n}}a\cdot\gamma_j$ holds for all $a\in A$, we get
$\pi(a)\tilde{S}_j=\tilde{S}_j\pi(a\cdot\gamma_j)$.
For $f\in X$, we define $f_j\in A$ by
$f_j(x)=f(\gamma_j(x),x)$.
Since $\mu^H({\mathcal C}(\gamma))=0$, we have $\tilde{S}_j^*\pi(S_f)=\pi(f_j)$, and so
$$\pi(S_f)=\sum_{j=1}^N\tilde{S}_j\pi(f_j).$$
Therefore the linear span of the elements of the form
$\tilde{S}_\eta \pi(a) \tilde{S}_\zeta^*$ with $a\in C(K)$,
$\eta\in \{1,2,\cdots,N\}^m$, and $\zeta\in \{1,2,\cdots,N\}^n$ is a dense
$*$-algebra of $M$,
where $\tilde{S}_\eta=\tilde{S}_{\eta_1}\tilde{S}_{\eta_2}\cdots \tilde{S}_{\eta_m}$.
We understand $\tilde{S}_\eta \pi(a)\tilde{S}_\zeta^*=\pi(a)$ for $\eta=\zeta=\emptyset$.
This observation shows that, to finish the proof, it suffices to prove
$\pi(A)\subset {\{\tilde{S}_j\}_{j=1}^N}''$.
For $a\in A$, we set
$$a_n=\sum_{\eta\in \{1,2,\cdots,N\}^n}\hat{\varphi}^H(\tilde{S}_\eta^*\pi(a)
\tilde{S}_\eta)\tilde{S}_\eta\tilde{S}_\eta^*
=\sum_{\eta\in \{1,2,\cdots,N\}^n}\hat{\varphi}^H(\pi(a\cdot\gamma_\eta))
\tilde{S}_\eta\tilde{S}_\eta^*.$$
Then $\{a_n\}_{n=1}^\infty$ is a bounded sequence in ${\{\tilde{S}_j\}_{j=1}^N}''$.
We show that $\{a_n\}_{n=1}^\infty$ converges to $\pi(a)$ in the strong
operator topology.
Indeed, we have
\begin{eqnarray*}
||\pi(a)-a_n||_2^2&=&\hat{\varphi}^H(\pi(|a|^2)
-\pi(a^*)a_n-a_n^*\pi(a)+|a_n|^2)\\
&=&||a||_2^2-\frac{1}{N^n}
\sum_{\eta\in \{1,2,\cdots,N\}^n}|(a\cdot\gamma_\eta|1)_{\mu^H}|^2.
\end{eqnarray*}
The lemma at the beginning of this subsection shows that the right-hand
side tends to $0$ as $n$ goes to infinity.
Thus $\pi(A)\subset {\{\tilde{S}_j\}_{j=1}^N}''$.
\end{proof}
\subsection{The case of Lyubich measures}
Let $R$ be a rational function with $N = \deg R \geq 2$,
$A=C(\hat{\mathbb{C}})$, and $X=C(\mathop{\rm {graph}}\nolimits R)$ be as before.
Since $\hat{\mathbb{C}}\ni z\mapsto (z,R(z))\in \mathop{\rm {graph}}\nolimits R$ is a homeomorphism,
we identify $X$ with $C(\hat{\mathbb{C}})$, whose $A-A$ bimodule structure is given
by
$$(a\cdot f\cdot b)(z)=a(z)f(z)b(R(z)),$$
$$(f|g)_A(z)=\sum_{w\in R^{-1}(z)}e(w)\overline{f(w)}g(w).$$
The following theorem relies on Heicklen-Hoffman's remarkable
result \cite{HH} saying that $(\hat{\mathbb{C}},R,\mu^L)$ is conjugate to
$(K_N,\sigma,\nu_N)$ as measurable dynamical systems.
\begin{thm} Let $R$ be a rational function with
$N = \deg R \geq 2$. Let $\mu^L$ be the Lyubich measure
and $\varphi^L$ be the corresponding
$\log N$-KMS state on ${\mathcal O}_R(\hat{\mathbb C})$
for the gauge action. Then the GNS representation of $\varphi^L$ is
a factor representation of type $III_{1/N}$.
\label{theorem;IIIL}
\end{thm}
\begin{proof}
Let $(\pi,H,\Omega)$ be the GNS triple for $\varphi^L$.
We set $M=\pi({\mathcal O}_R(\hat{\mathbb C}))''$.
Let $\hat{\varphi}^L$ be the normal extension of $\varphi^L$ to $M$.
We denote by $\hat{\pi}$ the normal extension of the restriction $\pi|_A$
to $B:=L^\infty(\hat{\mathbb{C}},\mu^L)$.
For $m \in M$, we set $||m||_2=||m\Omega||$.
We denote by $||\cdot||_p$ the $L^p$-norm of
$L^p(\hat{\mathbb{C}},\mu^L)=L^p(J_R,\mu^L)$.
For $f\in X$, the condition $\mu^L({\mathcal C}(R))=0$ implies
$$||f||_{\infty} \leq ||\pi(S_f)||=||\pi((f|f)_A)||^{1/2}
\leq \sqrt{N}||f||_\infty,$$
$$||\pi(S_f)||_2^2 =
\int_{\hat{\mathbb{C}}}\sum_{w\in R^{-1}(z)}e(w)|f(w)|^2d\mu^L(z)
=\int_{\hat{\mathbb{C}}} \sum_{w \in R^{-1}(z)}|f(w)|^2 d\mu ^L(z)
=N||f||_2^2.$$
The KMS condition implies $||\pi(S_f^*)||_2=||f||_2/\sqrt{N}$.
For $f\in L^\infty(\hat{\mathbb{C}},\mu^L)$, we choose a sequence
$\{f_n\}_{n=1}^\infty$ in $C(\hat{\mathbb{C}})$ satisfying the following
three conditions: (1) $\{||f_n||_\infty\}_{n=1}^\infty$ is dominated by
$||f||_\infty$, (2) $\{||f_n-f||_2\}_{n=1}^\infty$ converges to 0, and
(3) $\{f_n(z)\}_{n=1}^\infty$
converges to $f(z)$ for almost every $z$ with respect to $\mu^L$.
Then $\{\pi(S_{f_n})\}_{n=1}^\infty$ converges in the $*$ strong operator
topology, whose limit is denoted by $\hat{S}_f$.
Note that $\hat{S}_f$ does not depend on the choice of the sequence
$\{f_n\}_{n=1}^\infty$.
Let $g\in L^\infty(\hat{\mathbb{C}},\mu^L)$ and $\{g_n\}_{n=1}^\infty$ be a sequence
in $C(\hat{\mathbb{C}})$ satisfying the same properties as above for $g$ instead of
$f$.
Then
$$\hat{S}_f^*\hat{S}_g=\lim_{n\to\infty}\pi((f_n|g_n)_A),$$
where the limit is taken in the $*$ strong operator topology.
For $f,g\in B$, we define $(f|g)_B\in B$ by the same formula as the
$A$-valued inner product of $X$.
Then we get
\begin{equation}
\hat{S}_f^*\hat{S}_g=\hat{\pi}((f|g)_B),\quad f,g\in
L^\infty(\hat{\mathbb{C}},\mu^L).
\label{L1}
\end{equation}
In a similar way, we have
\begin{equation}
\hat{\pi}(a)\hat{S}_f\hat{\pi}(b)=
\hat{S}_{afb\cdot R},\quad f,a,b\in
L^\infty(\hat{\mathbb{C}},\mu^L).
\label{L2}
\end{equation}
Since $\sigma_t^{\hat{\varphi}^L}$ is a normal
extension of $\alpha_{-t\log N}$, we have
\begin{equation}
\sigma_{t}^{\hat{\varphi}^L}(\hat{\pi}(a))=\hat{\pi}(a),
\quad a\in L^\infty(\hat{\mathbb{C}},\mu^L).
\label{L3}
\end{equation}
\begin{equation}
\sigma_{t}^{\hat{\varphi}^L}(\hat{S}_f)=e^{-t\log N\sqrt{-1}}
\hat{S}_f,\quad f\in L^\infty(\hat{\mathbb{C}},\mu^L).
\label{L4}
\end{equation}
Thanks to Heicklen and Hoffman \cite{HH}, there exists a
von Neumann algebra isomorphism $\phi:L^\infty(K_N,\nu_N)\rightarrow
L^\infty(\hat{\mathbb{C}},\mu^L)$ satisfying
$\phi(f)\cdot R=\phi(f\cdot \sigma)$ for all
$f\in L^\infty(K_N,\nu_N)$.
We claim that there exists a representation $(\tilde{\pi},H)$
of ${\mathcal O}_{\rho}(K_N)$ satisfying
$$\tilde{\pi}(a)=\hat{\pi}(\phi(a)),\quad a\in C(K_N).$$
$$\tilde{\pi}(S_f)=\hat{S}_{\phi(f)},\quad f\in Y=C(K_N).$$
Thanks to (\ref{L1}) and (\ref{L2}), such a representation
would exist for the Cuntz-Toeplitz algebra ${\mathcal T}_Y$.
Since $Y$ has a finite basis $\{1_{E_j}\}_{j=1}^N$, to prove
the claim it suffices to show
$$\sum_{j=1}^N\hat{S}_{\phi(1_{E_j})}\hat{S}_{\phi(1_{E_j})}^*=I.$$
Indeed, thanks to the KMS condition we have
$$\hat{\varphi}^L(I-
\sum_{j=1}^N\hat{S}_{\phi(1_{E_j})}\hat{S}_{\phi(1_{E_j})}^*)
=1-\frac{1}{N}\sum_{j=1}^N
\hat{\varphi}^L(\hat{S}_{\phi(1_{E_j})}^*\hat{S}_{\phi(1_{E_j})})=0.$$
Since $\hat{\varphi}^L$ is faithful, we get the claim.
It is routine work to show that $\tilde{\pi}({\mathcal O}_\rho(K_N))''=M$,
and so $(\tilde{\pi},H, \Omega)$ is a cyclic representation.
(\ref{L3}) and (\ref{L4}) imply that
$(\tilde{\pi},H,\Omega)$ is unitarily equivalent
to the GNS representation of the unique $\log N$-KMS state
of the gauge action on ${\mathcal O}_\rho(K_N)$.
Thus Theorem~\ref{theorem;IIIH} shows that
$M$ is the AFD type III$_{1/N}$ factor.
\end{proof}
\section{Symmetry}
Since our construction of the Cuntz-Pimsner algebra from a dynamical system
is natural, a symmetry of the dynamical system gives rise to an
automorphism of the algebra.
In this section, we give a criterion for outerness of such automorphisms.
Quasi-free automorphisms on the Cuntz-Pimsner algebras for bimodules with
finite bases are studied by Katayama-Takehana \cite{KT} and
Zacharias \cite{Z} while the bimodules we treat in this paper do not
necessarily have finite bases.
Let $M$ be the AFD type III$_{1/N}$ factor acting on a Hilbert space.
Since $M$ arises from the GNS representation of the unique KMS state
of the gauge action on the Cuntz algebra ${\mathcal O}_N$,
we may find a generating set $\{S_j\}_{j=1}^N$ of $M$ consisting of isometries
satisfying the following conditions:
(1) they satisfy the Cuntz algebra relation and (2) there exists a cyclic and
separating vector $\Omega\in H$ for $M$ such that if we denote
the vector state for $\Omega$ by $\varphi$, then
$\sigma^\varphi_t(S_j)=N^{-t\sqrt{-1}}S_j$ holds for $t\in \mathbb{R}$ and
$j=1,2,\cdots,N$.
The canonical shift endomorphism $\Phi$ of $M$ is defined by
$$\Phi(x)=\sum_{j=1}^NS_jxS_j^*, \quad x\in M.$$
Let $B$ be the maximal abelian subalgebra of $M$ generated by
$\bigcup_{n=1}^\infty \{S_\xi S_{\xi}^*\}_{\xi\in \{1,2,\cdots,N\}^n}.$
\begin{lemma} Let the notation be as above.
Assume that $\theta$ is an automorphism of $M$ commuting with the modular
automorphism group $\{\sigma^\varphi_t\}_{t\in \mathbb{R}}$ such that
$\theta(B)=B$ and $\Phi\cdot\theta(x)=\theta\cdot \Phi(x)$ for all $x\in B$.
If there exists $b\in B$ satisfying $\theta(b)\neq b$, then $\theta$ is outer.
\label{lemma;outer}
\end{lemma}
\begin{proof} We first fix the notation.
Let ${\mathcal F}_n=\Phi^n(M)'\cap M$, which is isomorphic to
$M_{N^n}(\mathbb{C})$.
The centralizer $M_\varphi$ is a type II$_1$ factor generated by
$\bigcup_{n=1}^\infty{\mathcal F}_n$ and $M_\varphi$ satisfies $M\cap M_\varphi'=\mathbb{C}$.
$B$ is a maximal abelian subalgebra of $M_\varphi$ as well.
Suppose that there exists a unitary $u\in M$ satisfying
$\theta(x)=uxu^*$ for all $x\in M$.
We claim $u\in M_\varphi$.
For $m\in \mathbb{Z}$, we set
$$u_m=\frac{1}{T}\int_{0}^{T}N^{tm\sqrt{-1}}\sigma^\varphi_t(u)dt,$$
Then $u_mx=\theta(x)u_m$ holds for all $x\in M_\varphi$, which shows
$u_m^*u_m, u_mu_m^*\in M\cap M_\varphi'=\mathbb{C}$.
Thus $u_m$ is a multiple of a unitary.
If $u_m\neq 0$ for some $m\neq 0$, the KMS condition would imply
$\varphi(u_m^*u_m)\neq \varphi(u_mu_m^*)$, which is a contradiction.
Thus we get $u=u_0\in M_\varphi$.
Let $E_n$ be the trace preserving conditional expectation from
$M_\varphi$ onto ${\mathcal F}_n$.
Then $\{||u-E_n(u)||_2\}_{n=1}^\infty$ converges to 0.
For every $b\in B$, we have
\begin{eqnarray*}||b-\theta(b)||_2&=&||\Phi^n(b)-\Phi^n(\theta(b))||_2
=||\Phi^n(b)-\theta(\Phi^n(b))||_2\\
&=&||\Phi^n(b)u-u\Phi^n(b)||_2=||\Phi^n(b)(u-E_n(u))-(u-E_n(u))\Phi^n(b)||_2\\
&\leq& 2||u-E_n(u)||_2||b||,
\end{eqnarray*}
which shows $b=\theta(b)$.
This is a contradiction and $\theta$ is outer.
\end{proof}
Let $R$ be a rational function with $\deg R \geq 2$ and
let $\theta$ be a homeomorphism of $\hat{\mathbb{C}}$ commuting with $R$.
Then by naturality, we get an automorphism of ${\mathcal O}_{R}(\hat{\mathbb{C}})$ and
${\mathcal O}_R(J_R)$, which we denote by $\theta_{\hat{\mathbb{C}}}$ and
$\theta_{J_R}$ respectively.
Namely, we define the actions of $\theta$ on $a\in C(\hat{\mathbb{C}})$ and
$f\in X=C(\mathop{\rm {graph}}\nolimits R)$ by $\theta\cdot a(z)=a(\theta^{-1}(z))$ and by
$\theta\cdot f(z,R(z))=f(\theta^{-1}(z),R(\theta^{-1}(z)))$ respectively.
\begin{thm} Let the notation be as above and let $\mu^L$ be the
Lyubich measure.
If the automorphism of $L^\infty(\hat{\mathbb{C}},\mu^L)=L^\infty(J_R,\mu^L)$
induced by $\theta$ is non-trivial, then
$\theta_{\hat{\mathbb{C}}}$ and $\theta_{J_R}$ are outer.
\label{theorem;outer}
\end{thm}
\begin{proof} We use the notation in the proof of Theorem~\ref{theorem;IIIL}.
Since $\theta_{\hat{\mathbb{C}}}$ preserves $\varphi^L$, there exists an
automorphism $\theta_M$ of $M$ satisfying
$\theta_M\cdot\pi=\pi\cdot \theta_{\hat{\mathbb{C}}}$.
Since $\pi$ factors through ${\mathcal O}_R(J_R)$, to prove the theorem
it suffices to show that $\theta_M$ is outer.
Let $S_j=\hat{S}_{\phi(1_{E_j})}$ and
$$\Phi(x)=\sum_{j=1}^N S_jxS_j^*,\quad x\in M.$$
We claim that for any countable basis $\{u_n\}_{n=1}^\infty$ of $X$ and
$f\in L^\infty(\hat{\mathbb{C}},\mu^L)$, the following holds:
$$\Phi(\hat{\pi}(f))=\sum_{j=1}^\infty\pi(S_{u_j})\hat{\pi}(f)\pi(S_{u_j}^*).$$
Indeed, thanks to Lemma~\ref{lemma;p=1}, the right-hand side converges in
the strong operator topology.
Since $S_k^*\pi(S_{u_j})$ commutes with $\hat{\pi}(A)''$,
we have
\begin{eqnarray*}\pi(S_{u_j})\hat{\pi}(f)\pi(S_{u_j}^*)&=&\sum_{k=1}^N
S_kS_k^*\pi(S_{u_j})\hat{\pi}(f)\pi(S_{u_j}^*)
=\sum_{k=1}^N
S_k\hat{\pi}(f)S_k^*\pi(S_{u_j})\pi(S_{u_j}^*)\\
&=&\Phi(\hat{\pi}(f))\pi(S_{u_j})\pi(S_{u_j}^*),
\end{eqnarray*}
which shows the claim.
Since $\{\theta_{\hat{\mathbb{C}}}(u_j)\}_{j=1}^\infty$ is a basis of $X$ too,
we get $\theta_M(\Phi(\hat{\pi}(f)))=\Phi(\theta_M(\hat{\pi}(f)))$
for every $f\in L^\infty(\hat{\mathbb{C}},\mu^L)=\phi(L^\infty(K_N,\nu_N))$.
Now the statement follows from Lemma~\ref{lemma;outer} as
the proof of Theorem~\ref{theorem;IIIH} shows that $\pi(A)''$ coincides
with $B$ in Lemma~\ref{lemma;outer}.
\end{proof}
\begin{exam}
We denote by $G_R$ the group of M\"{o}bius
transformations commuting with $R$.
Then $G_R$ is a finite subgroup of $PSL(2,{\mathbb C})$
(see \cite[pp.\ 104]{B}) and $G_R$ is either a cyclic group,
a dihedral group or the group of symmetries of
a regular tetrahedron, octahedron or icosahedron.
It is known that
there exist exactly three rational functions
with $2 \leq \deg R \leq 30$ such that
$G_R$ is the group of symmetries of a regular
icosahedron, which is isomorphic to $A_5$
(see Doyle and McMullen \cite{DMc}).
Let $R(z) = z^n$ for $n \geq 2$ and let $\omega$ be a primitive $(n-1)$-st
root of unity. Then
\[
G_R = \{z, \omega z, \dots, \omega ^{n-2}z,
\frac{1}{z}, \frac{\omega}{z}, \dots, \frac{\omega^{n-2}}{z} \}.
\]
Thus $G_R$ is isomorphic to the dihedral group $D_{n-1}$ for
$n \geq 3$ and is isomorphic to ${\mathbb Z}/2{\mathbb Z}$ for
$n = 2$.
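Indeed, one checks directly that each listed map commutes with $R$:
writing $g_k(z)=\omega^k z$ and $h_k(z)=\omega^k/z$, the relation
$\omega^{n-1}=1$ gives $\omega^{kn}=\omega^k$, hence
\[
(R\circ g_k)(z)=\omega^{kn}z^n=\omega^k z^n=(g_k\circ R)(z),
\qquad
(R\circ h_k)(z)=\frac{\omega^{kn}}{z^n}=(h_k\circ R)(z).
\]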
Therefore the induced action of $G_R$ on ${\mathcal O}_{R}(\hat{\mathbb{C}})$ is transitive on
the set of extreme $\beta$-KMS states for $\beta > \log n$.
\end{exam}
\begin{exam} If $R$ has only real coefficients,
then $\theta(z)=\overline{z}$ commutes with $R$.
\end{exam}
\begin{rema} Let $\gamma=(\gamma_1,\gamma_2,\cdots,\gamma_N)$ be a system
of proper contractions on a compact metric space $K$ satisfying
the self-similarity condition.
Then for a homeomorphism $\theta$ of $K$ satisfying
$\theta\cdot\gamma_j\cdot\theta^{-1}=\gamma_{p(j)}$, where
$p$ is a permutation of $\{1,2,\cdots,N\}$, a result
similar to Theorem~\ref{theorem;outer} follows from Lemma~\ref{lemma;outer}.
\end{rema}
\section{Conclusions}
In this paper, we have presented an approach, extending the previous work of \cite{fielder2016decision}, which implements a cybersecurity safeguards selection model along with game-theoretic and Knapsack optimization tools. We have evaluated our model in a healthcare use case using the CIS group 17 controls, which address the implementation of security awareness and training programs for employees. The simulation results demonstrate that the Nash Safeguard Strategy comfortably outperforms common-sense selection strategies, such as the Weighted and Cautious ones, in terms of the Defender's expected utility over a large number of attacks. This work is our step towards integrating the developed Optimal Safeguards Tool (OST) within cybersecurity risk management and investment environments.
An interesting extension to this work would be to capture the real-world uncertainty about an Attacker's type, for example by considering a Bayesian game of application level selection. Furthermore, we plan to bring together several objective functions for the Knapsack to compare the performance of the investment strategies. As a next step, we aim at creating a use case with a larger set of safeguards in collaboration with healthcare organizations. We also aim at using well-known repositories of cybersecurity safeguards, such as the 20 CIS controls or a list of Privacy Enhancing Technologies (PETs), to support our research.
\section{Introduction}
In the last few years, several cybersecurity incidents have taken place
in the healthcare sector, including the WannaCry ransomware, which globally influenced the cybersecurity landscape\footnote{\scriptsize{\url{https://www.telegraph.co.uk/technology/2018/10/11/wannacry-cyber-attack-cost-nhs-92m-19000-appointments-cancelled}.}}. The 2018 Ponemon Cost of a Data Breach study\footnote{\scriptsize{\url{https://securityintelligence.com/series/ponemon-institute-cost-of-a-data-breach-2018}.}} shows that the healthcare industry has the
highest cost per record breached in a cyber incident, at \$408. This is almost twice the equivalent cost per record breached in the financial sector. This calls for the effective preparation of healthcare organizations in an ever-evolving cyber attack landscape. An example project addressing this from the perspective of training the users in the sector is the H2020 CUREX
project\footnote{\scriptsize{\url{https://cordis.europa.eu/project/rcn/220350/factsheet/en}.}}, which allows a healthcare provider to assess the realistic cybersecurity and privacy risks
they are exposed to \cite{mohammadi2019curex}.
Yet, a recent report from McKinsey\footnote{\scriptsize{\url{https://www.mckinsey.com/business-functions/risk/our-insights/cyber-risk-measurement-and-the-holistic-cybersecurity-approach}.}} states that
almost all companies systematically over-invest in the protection of assets
that carry no risk while, at the same time, they under-fund the protection of high-risk assets.
Furthermore, regarding the cost of cybersecurity controls, in a survey from
KPMG\footnote{\scriptsize{\url{https://advisory.kpmg.us/content/dam/advisory/en/pdfs/cyber-report-healthcare.pdf}.}}, 43\% of respondents stated that they did not increase their cybersecurity budget even though high-profile security breaches have been widely publicized. So, effective risk management is not only about assessing the risk correctly but also about selecting the controls that are optimal given the cost constraints of adopting them. To address the challenge of optimal control selection, in this paper we formulate a model and tool for suggesting mathematically optimal \textit{cyber hygiene} strategies that minimize the cyber risk.
Regarding cyber hygiene, we adopt the recent definition proposed by \cite{vishwanath2019cyber}, which relates it to ``\textit{the cyber security practices that online consumers should
engage in to protect the safety and integrity of their personal information on their
Internet enabled devices from being compromised in a cyber-attack.}''
Towards the goal of optimizing cyber hygiene, we extend the model presented in \cite{fielder2016decision} so that:
\begin{itemize}
\item the Attacker's target is a user group (focusing on social engineering attacks)
instead of an (asset, vulnerability) pair of the system;
\item the indirect cost of a safeguard's application depends not only on the
safeguard itself but also on the size of the user group (i.e., the number of users),
and more specifically it increases with the group size;
\item we adopt an aggregated risk model as the objective function of the Knapsack
optimization problem, rather than the weakest-link model, defending
against a variety of attacks that can, in total, cause the highest aggregated damage; and
\item we use a ``small'' healthcare case study as a preliminary example to evaluate the
Optimal Safeguards Tool (OST) against other common-sense approaches for a number of attacking strategies that have not been
simulated in \cite{fielder2016decision}.
\end{itemize}
Our analysis results show that the game-theoretic approach increases risk control
efficacy, by selecting an optimal combination of safeguard application levels, compared
with alternative common-sense approaches. In addition, our use case designed for the healthcare domain exhibits a number of interchangeably optimal investment strategies subject to a budget constraint under the framework of 0-1 Knapsack optimization.
The remainder of this paper is organized as follows. Section \ref{sec:relatedwork} presents
the related work in the fields of (i) user-oriented cybersecurity safeguards and (ii)
optimization of cybersecurity countermeasures, including security investments.
Section \ref{sec:model} presents both the game-theoretic model used to determine optimal cybersecurity safeguard plans and the optimization problem modeled and solved to derive the best ways to invest in these safeguards given a limited available budget.
In Section \ref{sec:evaluation}, we compare the game-theoretic defending
strategies against alternative common-sense approaches and
plot the results of the Knapsack optimization to illustrate the optimal
investment solutions. Finally, Section 5 concludes this paper by summarizing its
main contributions and highlighting future work to be undertaken to further improve
the performance and the usability of our model.
\section{Acknowledgments}
We thank the reviewers for their valuable feedback and comments.
\vspace{0.25cm}
\noindent Emmanouil Panaousis is partially supported by the European Commission as part of the CUREX project (H2020-SC1-FA-DTS-2018-1 under grant agreement No 826404). The work of Christos Laoudias has been partially supported by the CUREX project (under grant agreement No 826404), by the European Union's Horizon 2020 research and innovation programme (under grant agreement No 739551 (KIOS CoE)), and from the Republic of Cyprus through the Directorate General for European Programmes, Coordination and Development.
\bibliographystyle{unsrt}
\section{Model Evaluation}\label{sec:evaluation}
We have developed the proposed models
as part of the Optimal Safeguards Tool (OST) introduced in \cite{mohammadi2019curex}.
OST computes Nash Safeguards Plans as well as the Knapsack solutions, and
aims at offering realistic actionable advice to healthcare organizations.
The following is a case study based on the Center for Internet Security (CIS)
Control 17, ``Implement a Security Awareness and Training Program''.
\subsection{Use Case}
\subsubsection{User groups.}
Here, we assume a representative (non-exhaustive) set of three user groups,
denoted by $i$, in decreasing order of \textit{access privileges}:
\begin{itemize}
\item $i=1$; \textbf{ICT}: The information and communication technology
professionals responsible for the systems, networks and software.
They set up digital systems, support staff who use them, diagnose and
address faults, as well as set up and maintain security provisions.
In addition to the ICT infrastructure, they may also interact with medical
devices and electronic healthcare record systems. We consider the value of
corresponding assets that can be affected by an attack on this group to be
the highest possible, $A_1=100$ (e.g., \$100k). At the same time, due to limited interaction
with the public, this is the group with the lowest visibility to attacks
targeting humans, and as such we consider it the lowest-risk group, $R_1=0.2$.
\item $i=2$; \textbf{Clinical}: Nurses, doctors and other clinical staff have
access to medical devices and electronic healthcare records. We consider the
value of corresponding assets that can be affected by an attack on this group
to be $A_2=50$ (e.g., \$50k). As a result of visibility due to interaction with the patients
and presence on the hospital's website, this group has a moderate risk, $R_2=0.5$.
\item $i=3$; \textbf{Administration}: Receptionists, medical secretaries and other
administration roles involve access to electronic healthcare records. We consider
the value of corresponding assets that can be affected by an attack on this group
to be $A_3=25$ (e.g., \$25k). This group of users may have high interaction with the public
and volume of email traffic (e.g., appointment requests) and as such high risk,
$R_3=0.8$.
\end{itemize}
\begin{table}[th]
\begin{center}
\begin{tabular}{@{}lcccc@{}}
\toprule
Control level \textbackslash Role & & ICT & Clinical & Administration \\ \midrule
\multicolumn{1}{l|}{\multirow{2}{*}{Low (once per year)}} & \multicolumn{1}{c|}{E} & 0.35 & 0.3 & 0.3 \\
\multicolumn{1}{l|}{} & \multicolumn{1}{c|}{C} & 1 & 30 & 10 \\ \midrule
\multicolumn{1}{l|}{\multirow{2}{*}{Medium (twice per year)}} & \multicolumn{1}{c|}{E} & 0.6 & 0.5 & 0.5 \\
\multicolumn{1}{l|}{} & \multicolumn{1}{c|}{C} & 2 & 60 & 20 \\ \midrule
\multicolumn{1}{l|}{\multirow{2}{*}{High (once per month)}} & \multicolumn{1}{c|}{E} & 0.8 & 0.7 & 0.7 \\
\multicolumn{1}{l|}{} & \multicolumn{1}{c|}{C} & 12 & 360 & 120 \\ \bottomrule
\end{tabular}
\vspace{0.3cm}
\caption{Evaluation parameters for control CIS-17.4.}
\label{tab:cis174}
\end{center}
\vspace{-0.9cm}
\end{table}
We have assumed a user group ratio of size 1:30:10 that loosely follows the corresponding breakdown of the hospital workforce in the United States\footnote{\scriptsize{\url{https://www.bls.gov/oes/current/naics3\_622000.htm}.}}:
81,790 computer, information system and security managers and analysts; 2,437,540 healthcare practitioners; 737,750 receptionists, healthcare record information clerks and other office and administrative support staff.
\begin{table}[h]
\begin{center}
\begin{tabular}{@{}lcccc@{}}
\toprule
Control level \textbackslash Role & & ICT & Clinical & Administration \\ \midrule
\multicolumn{1}{l|}{\multirow{2}{*}{Low (Tests)}} & \multicolumn{1}{c|}{E} & 0.25 & 0.2 & 0.2 \\
\multicolumn{1}{l|}{} & \multicolumn{1}{c|}{C} & 1 & 30 & 10 \\ \midrule
\multicolumn{1}{l|}{\multirow{2}{*}{Medium (Videos)}} & \multicolumn{1}{c|}{E} & 0.7 & 0.6 & 0.6 \\
\multicolumn{1}{l|}{} & \multicolumn{1}{c|}{C} & 2 & 60 & 20 \\ \midrule
\multicolumn{1}{l|}{\multirow{2}{*}{High (Games)}} & \multicolumn{1}{c|}{E} & 0.6 & 0.5 & 0.5 \\
\multicolumn{1}{l|}{} & \multicolumn{1}{c|}{C} & 4 & 120 & 40 \\ \bottomrule
\end{tabular}
\vspace{0.3cm}
\caption{Evaluation parameters for control CIS-17.6.}
\label{tab:cis176}
\end{center}
\vspace{-1.5cm}
\end{table}
\subsubsection{Safeguards.}
As safeguards, we have considered a representative pair from the SANS Institute's CIS-17
group of critical
security controls\footnote{\scriptsize{\url{https://www.cisecurity.org/controls/implement-a-security-awareness-and-training-program/}.}}:
CIS-17.4 ``Update Awareness
Content Frequently'' and CIS-17.6 ``Train Workforce on Identifying Social Engineering
Attacks''. All values used in this case study, for these two safeguards, are
presented in Tables 2 and 3.
For CIS-17.4, we set the frequency of completion of
the updated training (once per year, twice per year, or once per month,
i.e., 12 times per year) as the level of control.
As indirect cost $C(j,i)$, we consider the total time spent in training
by the employees in group $i$ at application level $j$ (in this case, the training \textit{frequency}),
which is proportional to the size of the group and the frequency of the training.
This time can be translated to some financial cost (in \$) resulting from loss of productive
working hours. In this way, the indirect cost can be subtracted from the expected loss
comprising the final utility value of the Defender in each cell of the game utility matrix.
For CIS-17.6, we set the nature of the work-based training
(tests, videos, games) as the levels of control. Further, we set the corresponding efficacy values for each type to be roughly proportional to their importance in predicting user susceptibility
to semantic social engineering attacks.
Specifically, \cite{heartfield2016you} has identified work-based security training with videos as
the best predictor out of the three. In terms of efficacy values, we have differentiated slightly
between groups based on our perceived rate of adoption of controls in each one.
Specifically, we assume that adoption is greater for ICT than for clinical
and administration employees.
This is only for illustration purposes, so that the model can also take the group into account at each level of control.
We further assume that the primary indirect cost is the employee time required, with a ratio of 1:2:4 across the three control levels.
\subsection{Comparison with Alternative Defense Strategies}
In the following, we analyze the proposed model in two phases;
(i) the game-theoretic; and (ii) the 0-1 Knapsack optimization.
The \textit{first phase} evaluates different cybersecurity safeguard
selection strategies using the utility table of the investigated Cybersecurity
Safeguards Games (CSG) based on the use case discussed in the previous section.
To evaluate our approach, we have created a simulated environment
in Python which performs the attack sampling. For all comparisons performed, a sample
size of 1,000 attacks was used. Such a sample is referred to as a \textit{run} in the results.
In the following, we present the results, where 25 runs have been performed in each case and
the average Defender Utility (in \$) across the runs has been plotted.
More specifically, we have simulated
$\Gamma_{\sigma,2}$ and $\Gamma_{\sigma,3}$ (please see Table \ref{tab:list_symbols} for the notation)
for the two different safeguards presented
in the use case, i.e., CIS 17.4 (denoted as $\sigma=1$) and 17.6 (denoted as $\sigma=2$).
The games $\Gamma_{1,2}$, $\Gamma_{2,2}$ exhibit maximum safeguard application level of 2 (Medium),
while the games $\Gamma_{1,3}$, $\Gamma_{2,3}$ are investigated up to application level 3 (High).
Each CSG generates a utility table that we use to derive three different Defender application
level selection strategies:
\begin{itemize}
\item the \textit{Nash} Safeguard Strategy (NSS), as described in Section 3 and computed using
the open source \emph{Nashpy} Python
library\footnote{\scriptsize{\url{https://nashpy.readthedocs.io/en/stable/index.html}}}.
\item the \emph{Weighted} Safeguard Strategy (WSS), which distributes the choice of a safeguard level
over the weighted expected utility of the CSG by computing
probability $\delta_{\sigma,j}$ of choosing application level $j$ of safeguard $\sigma$
as follows:
\[
\delta_{\sigma,j}:= \frac{\sum_{i=1}^{|\mathcal{U}|} U_d(j,i)}
{\sum_{j=1}^{|\mathcal{L}|} \sum_{i=1}^{|\mathcal{U}|} U_d(j,i)}
\]
\item the \emph{Cautious} Safeguard Strategy (CSS), which always prefers the
\textit{highest} application level of a safeguard.
\end{itemize}
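To make the WSS construction concrete, the following minimal Python sketch computes the distribution $\delta_{\sigma,j}$ from a Defender utility table; the utility values below are illustrative, not those of the games above.

```python
# Sketch of the Weighted Safeguard Strategy (WSS): the probability of
# choosing application level j is the aggregated Defender utility of
# that level, normalised over all levels. Utility values are
# illustrative only.
def weighted_safeguard_strategy(U_d):
    """U_d[j][i]: Defender's utility for level j against user group i."""
    row_sums = [sum(row) for row in U_d]
    total = sum(row_sums)
    return [s / total for s in row_sums]

U_d = [[-20.0, -40.0, -25.0],   # level 1 (Low)
       [-15.0, -30.0, -20.0],   # level 2 (Medium)
       [-10.0, -25.0, -15.0]]   # level 3 (High)
delta = weighted_safeguard_strategy(U_d)
```

Note that with negative (loss-based) utilities the normalisation still yields a valid probability vector.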
\noindent Regarding adversarial strategies, we consider three profiles:
\begin{itemize}
\item the \textit{Nash} Attacker who plays the Nash Attacking Strategy (NAS), presented in Section 3 and computed using the \emph{Nashpy} Python library.
\item a \textit{Weighted} Attacker who plays the Weighted Attacking Strategy (WAS) by attacking a user group $i$ with probability $\frac{A_i}{\sum_{i\in \mc{U}} A_i}$, i.e., the Attacker targets the different user groups proportionally to the asset values they have access to.
\item the \textit{Opportunistic} Attacker who uniformly chooses the different user groups to attack.
\end{itemize}
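The attack sampling used in our simulated environment can be sketched as follows; this is a minimal illustration in which the group indices, seed and sample size are ours, with asset values taken from the use case.

```python
import random

# Sketch of the attack sampling: the Weighted Attacker (WAS) targets
# group i with probability A_i / sum(A), while the Opportunistic
# Attacker draws groups uniformly at random.
A = [100, 50, 25]        # asset values: ICT, Clinical, Administration

def weighted_attack_probs(assets):
    total = sum(assets)
    return [a / total for a in assets]

def sample_attacks(probs, n, rng):
    """Sample n attacks; returns the index of the targeted group each time."""
    return rng.choices(range(len(probs)), weights=probs, k=n)

rng = random.Random(0)                      # fixed seed for reproducibility
was = weighted_attack_probs(A)              # WAS distribution over groups
targets = sample_attacks(was, 1000, rng)    # one simulated run
opportunistic = sample_attacks([1 / 3] * 3, 1000, rng)
```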
\begin{figure}[h]
\centering
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{vs_nash.pdf}
\caption{}
\label{fig:utilites_against_nash_attacker}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{vs_weighted.pdf}
\caption{}
\label{fig:utilities_against_weighted_attacker}
\end{subfigure}
\newline
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{vs_op.pdf}
\caption{}
\label{fig:utilities_against_opportunistic_attacker}
\end{subfigure}
\caption{Game-theoretic optimization results: Average Utility of the Defender over 1,000 attacks for 25 runs,
for various CSGs.}
\label{fig:defender_utility_against_attacker}
\end{figure}
Figure \ref{fig:defender_utility_against_attacker} illustrates the performance
of NSS against WSS and CSS in terms of average Defender's utility over the 1,000 attacks
for 25 runs. In all cases, we contrast between Attackers who follow NAS and WAS.
\subsubsection{Nash Attacker.}
The results, in Figure \ref{fig:defender_utility_against_attacker}(a),
show that NSS outperforms both WSS and CSS when the Attacker chooses NAS.
More specifically, the percentage improvement values, seen when choosing NSS,
in comparison to WSS for the different games [$\Gamma_{1,2}, \Gamma_{1,3},
\Gamma_{2,2}, \Gamma_{2,3}$] are [20.2\%, 79.78\%, 16\%, 52.12\%], respectively.
Likewise, when choosing NSS over CSS, we observe improvement values
of [34.48\%, 87.07\%, 28.57\%, 62.26\%] for the different games [$\Gamma_{1,2}, \Gamma_{1,3},
\Gamma_{2,2}, \Gamma_{2,3}$], respectively.
\begin{remark}
These results demonstrate an average improvement of approximately $42\%$ of
NSS over WSS and $53\%$ over CSS.
\end{remark}
Comparably, the smallest average improvement for NSS over WSS is around 16\% when
playing Control 17.6 at the maximum application level of 2 ($\lambda=2$).
Likewise, the minimum improvement of NSS over CSS, approximately equal to 28\%, is
observed for the same control and $\lambda=2$.
On the other hand, the maximum improvement of NSS over CSS is approximately 87\%,
whereas the maximum improvement over WSS does not exceed 80\%, both for Control 17.4
and $\lambda=3$.
One of the primary reasons why naive-deterministic safeguard selection
approaches perform poorly compared with the Nash Defending strategy is that they fail to
incorporate the opponent's strategies. At the same time, we have considered CSG as a zero-sum game.
The class of zero-sum games offers a degree of freedom as it can be shown that assuming that
the adversary's intentions are exactly opposite to the defender's assets, i.e., the Attacker
seeks to cause maximum damage, any other incentive of the Attacker can only improve the
Defender's situation \cite{rass2018password}.
\subsubsection{Weighted Attacker.}
When the Weighted Attacking Strategy is simulated, the results demonstrate that NSS has
higher efficacy than WSS and CSS, apart from the single game $\Gamma_{2,2}$, in which both WSS and
CSS perform approximately 2\% better
than NSS (Figure \ref{fig:defender_utility_against_attacker}(b)).
This difference is negligible, making NSS
at least as good as the rest of the Defending strategies in all investigated games.
Despite the performance of NSS in $\Gamma_{2,2}$, for the rest of the games, NSS
performs significantly better than WSS and CSS.
The percentage improvement values, seen when choosing NSS,
in comparison to WSS and CSS for $[\Gamma_{1,2}, \Gamma_{1,3},
\Gamma_{2,2}, \Gamma_{2,3}]$ are [7.34\%, 70.25\%, -2.25\%, 32.29\%]
and [15.1\%, 80.44\%, -2.1\%, 44.79\%], respectively.
\begin{remark}
These results demonstrate an average improvement of approximately $28\%$ of
NSS over WSS and $34\%$ over CSS.
\end{remark}
The smallest average improvements for NSS over WSS and CSS are approximately
7\% (in $\Gamma_{1,2}$) and 15\% ($\Gamma_{1,2}$), respectively, and the maximum
average improvement values are 70\% (in $\Gamma_{1,3}$) and 80\% (in $\Gamma_{1,3}$).
\subsubsection{Opportunistic Attacker.}
Finally, when the Opportunistic Attacking Strategy is simulated, the results
demonstrate that NSS has higher efficacy over WSS and CSS
(Figure \ref{fig:defender_utility_against_attacker}(c)).
The percentage improvement values, seen when choosing NSS,
in comparison to WSS and CSS for $[\Gamma_{1,2}, \Gamma_{1,3},
\Gamma_{2,2}, \Gamma_{2,3}]$ are [13.3\%, 74.24\%, 5.4\%, 40.8\%]
and [23.73\%, 83.4\%, 12.33\%, 52.51\%], respectively.
\begin{remark}
These results demonstrate an average improvement of approximately $33\%$ of
NSS over WSS and $43\%$ over CSS.
\end{remark}
The smallest average improvements for NSS over WSS and CSS are approximately
5\% (in $\Gamma_{2,2}$) and 12\% ($\Gamma_{2,2}$), respectively, and the maximum
average improvement values are 74\% (in $\Gamma_{1,3}$) and 83\% (in $\Gamma_{1,3}$).
We notice that the highest improvements among the three different Attacking strategies
occur in the first scenario, where the Nash Attacker is simulated.
This was anticipated since, at the Nash Equilibrium, the Defender performs best against
a rational Attacker. Between the results for the Weighted and Opportunistic Attackers,
NSS is more efficient against an Opportunistic Attacker than against a Weighted one.
\subsection{Analysis of the Investment Problem}
The Knapsack optimization phase investigates the optimal investment in Nash Safeguards Plans
(NSPs) given a budget $B$ (for details refer to Section \ref{sec:investment_NSP}).
The Knapsack takes as input every NSP generated in the game-theoretic phase and recommends
a single solution which minimizes the aggregated risk over all the user groups while satisfying
the investment budget constraint. This differs from the weakest-link model investigated by
\cite{fielder2016decision}.
\begin{figure}[h]
\centering
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{k_c_vs_r_b1}
\caption{}
\label{fig:cost_vs_risk_budget40}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{k_c_vs_r_b4}
\caption{}
\label{fig:cost_vs_risk_budget100}
\end{subfigure}
\caption{Knapsack selection over available candidate solutions.}
\label{fig:cost_vs_risk}
\end{figure}
Figure \ref{fig:cost_vs_risk} presents the financial cost and the aggregated risk
over all users for each Knapsack candidate solution, i.e., each combination of NSPs,
for two different available budget values.
We notice that there are multiple Knapsack optimal
solutions, namely candidate solutions number $5,6,7$ and $8$. In the presence of
multiple optimal solutions, the Knapsack solver we have implemented chooses the first option.
For both budgets $40$ and $100$, the Knapsack optimization recommends investing in
both CIS controls $17.4$ and $17.6$ at application level $1$ i.e.,
Low (once per year) and Low (Tests), respectively. Note here that the small size of
the use case effectively prohibits high variability of the parametric values,
which led to the selection of only two control types.
Note that the plots for the Knapsack optimization only present the candidate solutions for the Nash Defender against the Nash Attacker, in contrast to the plots of the game-theoretic phase
(Figure \ref{fig:defender_utility_against_attacker}), which present all three Defender strategies. This choice was made because the Knapsack optimization does not involve the notion of a CSG. As a result, it does not optimize the overall indirect cost of safeguards when choosing NSPs, as was done in the previous phase. In addition, the Knapsack does not consider the behavior of the Attacker, treating all adversarial strategies as irrelevant to the Knapsack objective function.
\section{Optimal Cyber Hygiene Safeguards Model}\label{sec:model}
\subsection{System Model}
Our model assists in acquiring an optimal selection of safeguards using game theory and combinatorial optimization. Let $\mathcal{U}$ be the set of potential user groups consisting of the employees of a healthcare organization. Any employee of a user group susceptible to malicious activities can adopt any of the safeguards from the set of available safeguards $\mathcal{S}$ to improve their defense posture. However, each safeguard has a set of implementation levels $\mathcal{L}$, with each level having a different efficacy in improving the security posture of user groups.
Each user group $i$ is associated with an impact value which expresses the level of expected damage to the healthcare organization, given a successful attack against a user of group $i$. This impact is equivalent to the overall asset value associated with user group $i$ and may relate to \emph{confidentiality}, \emph{integrity}, and \emph{availability}. We further consider $A_i$ to be a random variable that expresses the overall value of the assets that the user group $i$ has access to. For simplicity, we let the users of a group have the same \textit{access privileges}, thus having access to assets of the same value. Users of different groups have different \textit{access privileges} due to their different roles (e.g., IT personnel, healthcare practitioners, and administration) and access to different assets. The vulnerability of a user group $i$, i.e., the probability of being compromised by an attack, is captured by the security level $S_i$ exhibited by the user group $i$. We assume that $S_i$ decreases as the number of applied safeguards and their application levels increase.
Furthermore, we denote by $R_i$ the threat occurrence, i.e., the probability of a threat attacking user group $i$, and by $L_i$ the expected loss associated with user group $i$. Using the well-known risk assessment formula, risk = (likelihood of being attacked) $\times$ (probability of success of this attack) $\times$ (probable loss) \cite{whitman2011principles}, we compute the risk as
\begin{equation}\label{eq:risk}
L_i = R_i \, S_i \, A_i .
\end{equation}
An attack against a user group $i$ is partially mitigated by the efficacy of the implemented cybersecurity safeguard. The efficacy parameter, modeled as a random variable, depends on the selected application level and the targeted user group, and is represented as $E \colon\mc{L} \times \mc{U} \rightarrow [0,1)$.
It is evident from real-world practices that different implementation levels work differently on different users, and this has motivated us to consider $E(j,i)$ rather than a single efficacy value for level $j$ against all user groups. Note that $E(j,i)$ is determined by the application level $j$ and the user group $i$. Due to the existence of 0-day vulnerabilities, we assume that $E(j,i) \neq 1$.
\begin{remark}
Different users have different likelihoods of adopting a measure. A cyber hygiene measure works only when it is adopted, and this adoption rate distinguishes human users from systems. For example, a user may decide not to implement a cyber hygiene measure due to poor usability (e.g., hard-to-remember complex passwords) even if the optimization framework recommends otherwise.
\end{remark}
Let $S(j,i)$ be the security level of a user group $i$ when level $j$ is implemented; it can be expressed as $S(j,i)=1-E(j,i)$. Replacing $L_i$ and $S_i$ by $L(j,i)$ and $S(j,i)$, respectively, in Equation~\ref{eq:risk}, we compute the \textit{cybersecurity loss} for a safeguard application level $j$ and target $i$ as
\begin{equation}\label{eq:loss}
L(j,i) = R_i \, A_i \, [1-E(j,i)] .
\end{equation}
Equation \ref{eq:loss} gives the expected damage of the Defender when a user group $i$ is successfully compromised, given that the investigated safeguard has been applied at level $j$.
While the application of a cybersecurity safeguard strengthens the defense of
the healthcare organization, it is associated with two types of cost, namely
\emph{indirect} and \emph{direct}. Examples of indirect costs are degraded system
performance and usability. We express the indirect cost of an application level $j$ on a user group $i$ by the random variable $C\colon \mc{L} \times \mc{U} \rightarrow \mathbb{Z}^+$. For any safeguard, the indirect cost increases with the level at which the safeguard is applied, i.e.,
\begin{equation}
j > j' \Rightarrow C(j,i) \geq C(j',i) .
\end{equation}
From the above, we derive the \textit{overall expected loss} of the
organization when application level $j$ is applied as
\begin{equation}
\sum_{i=1}^{|\mc{U}|} \left[ L(j,i) + C(j,i) \right] .
\end{equation}
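As a concrete check of this book-keeping, the following minimal sketch evaluates the per-group loss $L(j,i)=R_i A_i[1-E(j,i)]$ and the overall expected loss for CIS-17.4 applied at level Low, using the parameter values of the healthcare use case in Section \ref{sec:evaluation}.

```python
# Sketch: expected loss per group, L(j,i) = R_i * A_i * (1 - E(j,i)),
# and the overall expected loss including indirect costs, for safeguard
# CIS-17.4 applied at level Low (values from the use-case tables).
R = [0.2, 0.5, 0.8]          # threat occurrence per group
A = [100, 50, 25]            # asset values per group
E = [0.35, 0.30, 0.30]       # efficacy of level Low on each group
C = [1, 30, 10]              # indirect cost of level Low per group

def expected_loss(R, A, E):
    return [r * a * (1 - e) for r, a, e in zip(R, A, E)]

L = expected_loss(R, A, E)
overall = sum(L) + sum(C)    # overall expected loss at this level
```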
Each level also has a direct cost expressed by the random variable $F\colon \mc{L} \rightarrow \mathbb{Z^+}$ that maps the safeguards and application levels to the monetary cost of the plan. In this paper, the direct costs are constrained by the available investment budget of the organization. For reference purposes, the symbols used throughout this paper are described in Table \ref{tab:list_symbols}.
\begin{table}[t]
\footnotesize
\centering
\renewcommand*{\arraystretch}{1.1}
\begin{tabular}{c l}
\hline
\textbf{Symbol} & \textbf{Description} \\
\hline
$\mc{S}$ & Set of safeguards \\
$\mc{U}$ & Set of user groups \\
$\mc{L}$ & Set of safeguard implementation levels \\
$R_i$ & Probability of group $i$ to be attacked\\
$S_i$ & Security level of group $i$\\
$A_i$ & Asset value that group $i$ has access to\\
$\lambda$ & Maximum application level\\
$U_d$ & Utility of the Defender \\
$U_a$ & Utility of the Attacker \\
$\vec{\delta}_{\sigma,j}$ & Randomized Safeguard Strategy for safeguard $\sigma$ at application level $j$\\
$\vec{\alpha}$ & Randomized Attacking Strategy \\
$\vec{\alpha}(i)$ & Probability of attacking group $i$ \\
$L_i$ & Expected loss from group $i$ \\
$L(j,i)$ & Expected loss from group $i$ when choosing application level $j$\\
$L(\vec{\delta}_{\sigma,j},i)$ & Expected loss from group $i$ when choosing Safeguards Plan $\vec{\delta}_{\sigma,j}$\\
$C(j,i)$ & Indirect cost of level $j$ when applied to group $i$\\
$E(j,i)$ & Efficacy of application level $j$ on group $i$ \\
$E(\vec{\delta}_{\sigma,j},i)$ & Efficacy of safeguards plan $\vec{\delta}_{\sigma,j}$ on group $i$\\
$\Gamma_{\sigma,\lambda}$ & Cyber Safeguard Game for safeguard $\sigma$ and maximum application level $\lambda$\\
$\vec{\delta}^{NE}_{\sigma,\lambda}$ & Nash Safeguards Plan\\
$F(\vec{\delta}_{\sigma,j})$ & Financial cost of Safeguards Plan $\vec{\delta}_{\sigma,j}$\\
$F(\sigma,j)$ & Financial cost of safeguard $\sigma$ when applied at level $j$\\
$B$ & Available financial budget to invest in Nash Safeguards Plans\\
\hline
\end{tabular}
\vspace{0.3cm}
\caption{List of Symbols}
\label{tab:list_symbols}
\vspace{-0.8cm}
\end{table}
\subsection{Game-Theoretic Model for Selection of Safeguards Levels}
This section presents a formal model for the selection of safeguard implementation levels for each of the available safeguards. The Defender chooses to implement (or apply; in this paper we use the two terms interchangeably)
a cyber hygiene safeguard from $\mc{S}$, while the Attacker chooses to attack a user group from $\mc{U}$. The Defender must decide to apply this safeguard at a specific level (pure strategy) or a combination of different levels (mixed strategy), both from $\mc{L}$. The higher the level, the greater the applied degree of a cyber hygiene safeguard. We refer to the application of a safeguard $\sigma$ at a certain level $j$ as a \textit{cybersecurity safeguard plan}. This strategic interaction is modeled as a game where the Defender chooses the level of a safeguard to implement rather than the safeguards from $\mc{S}$.
We define the Cyber Safeguard Game (CSG) between Defender and Attacker as a \emph{one-shot, bimatrix} game of \emph{complete information} played for any of the safeguards, leading to a total of $|\mc{S}|$ independent games. For simplicity, we have assumed no inter-dependencies between the safeguards, i.e., each safeguard mitigates a portion of the overall risk inflicted by the Attacker \cite{smeraldi2014spend}.
The set of pure strategies of the Defender consists of all possible application levels, $j \in \mc{L}$, while the Attacker's pure strategies are the different user groups $i \in \mc{U}$ which could be targeted using attacks such as social engineering. Thus, in CSG a pure strategy profile is a pair of Defender and Attacker actions, $(j,i) \in \mc{L} \times \mc{U}$ giving a pure strategy space of size $|\mc{L}| \times |\mc{U}|$. For the rest of the paper, we adopt the convention where the Defender is the row player and the Attacker is the column player.
Each player's preferences are specified by a \emph{payoff function} defined as $U_d:(j,i) \rightarrow \mathbb{R_{-}}$ and $U_a:(j,i) \rightarrow \mathbb{R_{+}}$ for the Defender and Attacker, respectively, for the pure strategy profile
$(j,i)$. According to \cite{osborne1994course}, we define a \emph{preference relation} $\succsim$, when $i$ is chosen by the Attacker, by $j \succsim j'$ if and only if $U_d(j,i) \geq U_d(j',i)$. In general, given the set $\mc{L}$ of all available application levels of a safeguard, a rational Defender can choose a level (i.e., pure strategy) $j^*$ that is \emph{feasible}, that is $j^* \in \mc{L}$, and \emph{optimal} in the sense that $j^* \succsim j, \forall j \in \mc{L}, j\neq j^*$; alternatively, she solves the problem $\max_{j \in \mc{L}} U_d(j, i)$, for a user group $i \in \mc{U}$. Likewise, we define the preference relation for the Attacker, where $i \succsim i' \iff U_a(j,i)\geq U_a(j,i')$, for an application level $j \in \mc{L}$. CSG is a game defined for each cyber hygiene safeguard, and it is realistic to assume that not all levels may be available for selection by the Defender. Their availability depends on the investment budget of the Defender and the overall financial cost of the game solution.
To derive optimal strategies for the Defender, we deploy the notion of
\emph{mixed strategies}. Since players act independently, we can enlarge
their strategy spaces to allow them to base their decisions on the outcomes
of random events, creating uncertainty for the opponent about individual
strategic choices while maximizing their payoffs. Hence, both Defender and Attacker
deploy randomized (i.e., mixed) strategies. The mixed strategy $\vec{\delta}$
of the Defender is a probability distribution over the different application
levels (i.e.~pure strategies) where $\vec{\delta}(j)$ is the probability of
applying level $j$ under mixed strategy $\vec{\delta}$. We refer to a mixed strategy of the Defender as a \emph{Randomized Safeguard Strategy} (RSS). For the finite nonempty set $\mc{L}$, let $\Pi_{\mc{L}}$ represent the set of all probability distributions over it, i.e.,
\begin{eqnarray}\label{eq:set_probs_L}
\Pi_{\mc{L}} := \{\vec{\delta} \in
\mathbb{R}_{+}^{|\mc{L}|} \mid~\sum_{j \in \mc{L}} \vec{\delta}(j)=1 \} .
\end{eqnarray}
Therefore a member of $\Pi_{\mc{L}}$ is a mixed strategy of the Defender.
Likewise, the Attacker's mixed strategy is a probability distribution over the
different available user groups. This is denoted by $\vec{\alpha}$, where
$\vec{\alpha}(i)$ is the probability of attacking the $i$-th user
group under mixed strategy $\vec{\alpha}$. We refer to a mixed strategy of the Attacker as the \emph{Randomized Attacking Strategy} (RAS). Analogously to (\ref{eq:set_probs_L}),
we express $\Pi_{\mc{U}}$ as the set of all probability distributions over the set of all Attacker's pure strategies (i.e., given by $\mc{U}$). Therefore, a member of $\Pi_{\mc{U}}$ is a mixed strategy of the Attacker. From the above, the set of mixed strategy profiles of CSG is the Cartesian product of the individual mixed strategy sets, $\Pi_{\mc{L}} \times \Pi_{\mc{U}}$.
\begin{definition}(Support of RSS)
The support of $\vec{\delta}$ is the set of application levels
$\{j|\vec{\delta}(j)>0\}$, and it is denoted by $supp(\vec{\delta})$.
\end{definition}
\begin{definition}(Support of RAS)
The support of $\vec{\alpha}$ is the set of healthcare user groups
$\{i|\vec{\alpha}(i)>0\}$,~and it is denoted by $supp(\vec{\alpha})$.
\end{definition}
The above definitions state that the subset of application levels
(resp. user groups) that are assigned positive probability by the mixed
strategy $\vec{\delta}$ (resp. $\vec{\alpha}$) is called the \emph{support} of
$\vec{\delta}$ (resp. $\vec{\alpha}$). Note that a pure strategy is a special
case of a mixed strategy, in which the support is a single action.
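The support definitions above amount to collecting the actions with positive probability; the minimal sketch below does exactly that. The example vectors are illustrative only.

```python
# Minimal sketch of the support of a mixed strategy: the set of actions
# assigned probability > 0. Example vectors are illustrative.

def support(strategy):
    """Return the indices (actions) with positive probability."""
    return {action for action, p in enumerate(strategy) if p > 0}

delta = [0.0, 0.2, 0.8]   # an RSS over three application levels
alpha = [1.0, 0.0]        # a pure RAS: the support is a single user group

print(support(delta))  # {1, 2}
print(support(alpha))  # {0}
```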
Now that we have defined the mixed strategies of the players, we define
CSG as the finite strategic game
\begin{equation}
\Gamma:=\langle(\mathrm{Defender},~\mathrm{Attacker}),\Pi_{\mc{L}}
\times \Pi_{\mc{U}},~(U_d,U_a)\rangle .
\end{equation}
For a given mixed strategy profile $(\vec{\delta},\vec{\alpha}) \in \Pi_{\mc{L}} \times \Pi_{\mc{U}}$, we denote by $U_d(\vec{\delta},\vec{\alpha})$ and $U_a(\vec{\delta},\vec{\alpha})$ the expected payoff values of the Defender and Attacker, where the expectation is due to the independent randomization according to the mixed strategies $\vec{\delta}$ and $\vec{\alpha}$. This can be formally represented as
\begin{equation}\label{eq:util_def}
\begin{aligned}
U_d(\vec{\delta},\vec{\alpha}):=\sum_{j \in \mc{L}} \sum_{i\in \mc{U}} U_d(j,i)
\,\vec{\delta}(j) \,\vec{\alpha}(i) ,
\end{aligned}
\end{equation}
and similarly
\begin{equation}\label{eq:util_att}
\begin{aligned}
U_a(\vec{\delta},\vec{\alpha}):=\sum_{j\in \mc{L}} \sum_{i\in \mc{U}}
U_a(j,i) \,\vec{\delta}(j) \,\vec{\alpha}(i).
\end{aligned}
\end{equation}
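The expected payoffs (\ref{eq:util_def}) and (\ref{eq:util_att}) are bilinear forms over the two mixed strategies; the sketch below evaluates them directly. The payoff matrix and strategy vectors are illustrative and not taken from the paper.

```python
# Sketch of the expected-payoff computation for a mixed profile,
# U(delta, alpha) = sum_j sum_i U(j, i) * delta(j) * alpha(i).
# The payoff matrix below is an assumption for illustration.

def expected_payoff(payoff, delta, alpha):
    """Bilinear expected payoff over the two mixed strategies."""
    return sum(payoff[j][i] * delta[j] * alpha[i]
               for j in range(len(delta))
               for i in range(len(alpha)))

U_d = [[-10, -40],    # U_d(j, i) for two levels and two user groups
       [ -5, -20]]
delta = [0.25, 0.75]  # RSS over application levels
alpha = [0.5, 0.5]    # RAS over user groups

print(expected_payoff(U_d, delta, alpha))  # -15.625
```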
By using the preference relation we can say that, for an Attacker's mixed strategy $\vec{\alpha}$, the Defender prefers to follow the RSS $\vec{\delta}$
as opposed to $\vec{\delta}'$ (i.e., $\vec{\delta} \succsim \vec{\delta}'$),
if and only if $U_d(\vec{\delta},\vec{\alpha}) \geq U_d(\vec{\delta}',\vec{\alpha})$.
\begin{definition}
The Defender's (resp. Attacker's) best response to the mixed strategy
$\vec{\alpha}$ (resp. $\vec{\delta}$) of the Attacker (resp. Defender) is an RSS
(resp. RAS)
$\vec{\delta}^{\mathrm{BR}} \in \Pi_{\mc{L}}$ (resp. $\vec{\alpha}^{\mathrm{BR}} \in \Pi_{\mc{U}})$
such that $U_d(\vec{\delta}^{\mathrm{BR}},\vec{\alpha}) \geq U_d(\vec{\delta},\vec{\alpha}),
\forall \vec{\delta}\in \Pi_{\mc{L}}$ (resp. $U_a(\vec{\delta},\vec{\alpha}^{\mathrm{BR}})
\geq U_a(\vec{\delta},\vec{\alpha}), \forall \vec{\alpha}\in \Pi_{\mc{U}})$.
\end{definition}
\begin{remark}
The game-theoretic solutions that we propose in the next section involve \emph{randomization}. For instance, in a mixed equilibrium, each player's randomization leaves the other \emph{indifferent} across her randomization support. These choices can be deliberately randomized; however, this is not the
only interpretation of equilibria. For instance, the probabilities over the pure actions (i.e., pure selections of application levels or user groups) can represent (i) time averages of an ``adaptive'' player, (ii) a vector of fractions of a ``population'', where each player type adopts pure strategies and, (iii) a ``belief'' vector that each player holds about the other's behavior.
\end{remark}
\subsection{CSG solutions}\label{solutions}
Given the definition of CSG and its components, we derive optimal strategies for the Defender. First, we investigate the problem of determining best RSSs and RASs (i.e., mixed strategies), for the Defender and the Attacker respectively, when both players are strategic and play simultaneously.
As we have not explicitly defined the \emph{strategic type} of Attacker,
we consider different types of solutions based on various Attacker behaviors.
This analysis will allow us to draw robust conclusions regarding the
\emph{overall optimal} Defender strategy, which will minimize expected damages
\emph{regardless of the Attacker type}.
The most commonly used solution concept in game theory is that of
\emph{Nash Equilibrium} (NE) \cite{osborne1994course}. This concept
captures a steady state of the play of the CSG in which both Defender and
Attacker hold the correct expectation about the other players' behavior
and they act rationally. A NE dictates optimal responses to each other's
actions, keeping the others' strategies fixed, i.e., strategy profiles that
are resistant against unilateral deviations of players.
\begin{definition}
In any Cyber Safeguard Game, a mixed strategy profile
$(\vec{\delta}^{\mathrm{NE}},\vec{\alpha}^{\mathrm{NE}})$ of $\Gamma$ is a mixed NE if and only if
\begin{enumerate}
\item $\vec{\delta}^{\mathrm{NE}} \succsim \vec{\delta}, \forall \vec{\delta} \in \Pi_{\mc{L}}$,
when the Attacker chooses $\vec{\alpha}^{\mathrm{NE}}$, i.e.
\begin{eqnarray}
U_d(\vec{\delta}^{\mathrm{NE}},\vec{\alpha}^{\mathrm{NE}})\geq U_d(\vec{\delta},\vec{\alpha}^{\mathrm{NE}}), \quad \forall \vec{\delta}\in \Pi_{\mc{L}};
\end{eqnarray}
\item $\vec{\alpha}^{\mathrm{NE}}\succsim \vec{\alpha}, \forall \vec{\alpha} \in \Pi_{\mc{U}}$,
when the Defender chooses $\vec{\delta}^{\mathrm{NE}}$, i.e.
\begin{eqnarray}
U_a(\vec{\delta}^{\mathrm{NE}},\vec{\alpha}^{\mathrm{NE}})\geq U_a(\vec{\delta}^{\mathrm{NE}},\vec{\alpha}), \quad \forall \vec{\alpha}\in \Pi_{\mc{U}}.
\end{eqnarray}
\end{enumerate}
\end{definition}
\begin{definition}
The Nash Safeguards Plan (NSP),~denoted by $\vec{\delta}^{\mathrm{NE}}$, is a probability
distribution over the different levels, as determined by the NE of the CSG.
\end{definition}
\textbf{Example 1}. For a safeguard with 3 application levels including level 0, which
corresponds to not applying the safeguard at all, an NSP $(0,0.2,0.8)$ dictates
that 20\% of the users will be strengthened (e.g., trained) at $j=1$ (e.g.,
once when they join the organization), while 80\% of the users will receive
the safeguard at the higher level $j=2$ (e.g., attending training once per year).
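One way such an NSP might be realized in practice is to draw each user's level from the NSP distribution; the sketch below does this for the NSP of Example 1. The population size and random seed are arbitrary assumptions.

```python
import random

# Sketch of realizing the NSP (0, 0.2, 0.8) of Example 1 over a hypothetical
# user population: each user's level is drawn from the NSP distribution.

nsp = [0.0, 0.2, 0.8]            # probabilities of levels 0, 1, 2
levels = list(range(len(nsp)))

random.seed(7)
users = 1000
assignment = random.choices(levels, weights=nsp, k=users)

for j in levels:
    share = assignment.count(j) / users
    print(f"level {j}: {share:.1%} of users")
```

Because level 0 carries zero probability, no user is ever left without the safeguard, while the empirical shares of levels 1 and 2 approach 20\% and 80\% for a large population.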
\subsection{Optimality analysis}\label{analysis}
We model \emph{complete information} Nash CSGs, according to which both
players know the game matrix, which contains the utilities of both
players for each pure strategy profile. The utility function of the
Defender is determined by the probability of failing to protect a user group and the indirect costs of the chosen application levels.
We consider a \emph{zero-sum} CSG, where the Attacker's utility is the
opposite of the Defender's utility. The rationale behind the zero-sum CSG is that when the Defender is uncertain about the Attacker type, she considers the \emph{worst case scenario}, which can be formulated by a zero-sum game where the Attacker can cause her \emph{maximum damage}.
The idea behind such a zero-sum game is that the Attacker focuses on causing
maximum corruption to cyberspace, while the Defender aims at minimizing the damage. Because the Attacker's goal directly conflicts with the Defender's objective, game theory is a natural tool for studying the selection of safeguard application levels. While in most security situations the interests of the players are neither in strong conflict nor in complete identity, the zero-sum game provides important insights into the notion of ``optimal play'', which is closely related to the \emph{minimax theorem} \cite{minimax}.
In the zero-sum CSG,
\begin{equation}
\Gamma_0=\langle \{d,a\}, \mc{L} \times \mc{U}, \{U_d,-U_d\}\rangle ,
\end{equation}
the Attacker's gain is equal to the Defender's security loss, and vice versa.
We define the utility of the Defender in $\Gamma_0$ as
\begin{eqnarray}\label{eq:utility_defender_in_zs}
\small
U_d^{\Gamma_0}(j,i) := - w_L \, L(j,i) - w_C \, C(j,i).
\end{eqnarray}
The first term of (\ref{eq:utility_defender_in_zs}) is the expected
loss of the Defender inflicted by the Attacker when
attempting to compromise user group $i$, while the second term
expresses the aggregated indirect cost of the safeguard application
irrespective of the attacking strategy.
Here, $w_L, w_C \in [0,1]$ are importance weights, which allow
the Defender to set her preferences in terms of security loss
and indirect cost, respectively.
For a mixed profile $(\vec{\delta},\vec{\alpha})$, the utility of the Defender equals
\begin{equation}
\label{eq:mixed_payoff_def}
\small
\begin{aligned}
U_d^{\Gamma_0}(\vec{\delta},\vec{\alpha}) &\overset{(\ref{eq:util_def})}{=}
\sum_{j \in \mc{L}} \sum_{i \in \mc{U}} U_d^{\Gamma_0}(j,i) \vec{\delta}(j) \, \vec{\alpha}(i)\\
&\overset{(\ref{eq:utility_defender_in_zs})}{=}
\sum_{j\in \mc{L}} \sum_{i\in \mc{U}} [-w_L \, L(j,i) - w_C \, C(j,i)]\,\vec{\delta}(j) \,\vec{\alpha}(i) \\
&= - w_L \sum_{j\in \mc{L}} \sum_{i\in \mc{U}} L(j,i)\,\vec{\delta}(j) \,\vec{\alpha}(i)
- w_C \sum_{j\in \mc{L}} \sum_{i\in \mc{U}} C(j,i)\,\vec{\delta}(j) \,\vec{\alpha}(i) .
\end{aligned}
\end{equation}
As $\Gamma_0$ is a zero-sum game, the Attacker's utility
is given by $U_a^{\Gamma_0}(\vec{\delta}, \vec{\alpha})
= -\,U_d^{\Gamma_0}(\vec{\delta}, \vec{\alpha})$. Since the Defender's
equilibrium strategies maximize her utility, given that
the Attacker maximizes her own utility, we will refer to
them as \emph{optimal strategies}.
As $\Gamma_0$ is a two-person zero-sum game with a finite number of
actions for both players, according to Nash \cite{nash1950equilibrium},
it admits at least one NE in mixed strategies, and saddle points
correspond to Nash equilibria as discussed in \cite{alpcan2010network}
(p.\,42). The following result from \cite{basar1995dynamic}
establishes the existence of a saddle-point (equilibrium) solution
in the games we examine and summarizes its properties.
\begin{definition}[Saddle point of the CSG]
The $\Gamma_0$ Cyber Safeguard Game (CSG) admits a saddle point in mixed
strategies, $(\vec{\delta}^{\mathrm{NE}}_{\Gamma_0},\vec{\alpha}^{\mathrm{NE}}_{\Gamma_0})$, with the
property that
\begin{itemize}
\item $\vec{\delta}^{\mathrm{NE}}_{\Gamma_0}=\arg \max_{\vec{\delta} \in \Pi_{\mc{L}}} \min_{\vec{\alpha} \in \Pi_{\mc{U}}} U_d^{\Gamma_0}(\vec{\delta},\vec{\alpha})$, and
\item $\vec{\alpha}^{\mathrm{NE}}_{\Gamma_0}=\arg \max_{\vec{\alpha} \in \Pi_{\mc{U}}} \min_{\vec{\delta} \in \Pi_{\mc{L}}} U_a^{\Gamma_0}(\vec{\delta},\vec{\alpha})$.
\end{itemize}
Then, due to the zero-sum nature of the game, the minimax theorem \cite{minimax} holds, i.e.,
$\max_{\vec{\delta} \in \Pi_{\mc{L}}} \min_{\vec{\alpha} \in
\Pi_{\mc{U}}} U_d^{\Gamma_0}(\vec{\delta},\vec{\alpha})= \min_{\vec{\alpha} \in \Pi_{\mc{U}}}
\max_{\vec{\delta} \in \Pi_{\mc{L}}} U_d^{\Gamma_0}(\vec{\delta},\vec{\alpha})$ .
The pair of saddle-point strategies $(\vec{\delta}^{\mathrm{NE}}_{\Gamma_0},\vec{\alpha}^{\mathrm{NE}}_{\Gamma_0})$ are at the same time security strategies for the players, i.e., they ensure a minimum performance regardless of the actions of the other. Furthermore, if the game admits multiple saddle points (and strategies), they have the ordered interchangeability property, i.e., each player achieves the same performance level independently of the other player's choice of saddle-point strategy.
\end{definition}
The minimax theorem \cite{minimax} states that for zero-sum games, NE and minimax
solutions coincide. Therefore, $\vec{\delta}^{\mathrm{NE}}_{\Gamma_0} =
\arg\min_{\vec{\delta} \in \Pi_{\mc{L}}} \max_{\vec{\alpha} \in
\Pi_{\mc{U}}} U_a^{\Gamma_0} (\vec{\delta},\vec{\alpha})$.
This means that regardless of the strategy the Attacker
chooses, the NSP is the Defender's security strategy
that guarantees a minimum performance.
Formally, the Defender seeks to solve the following LP, in which the auxiliary variable $v$ represents the value of the game:
\begin{equation}
\label{eq:lp}
\small
\begin{aligned}
\max_{\vec{\delta},\, v} \quad & v \\
\text{subject to} \quad
& U_{d}^{\Gamma_0}(\vec{\delta},i) \geq v, \quad \forall i \in \mc{U}, \\
& \textstyle\sum_{j \in \mc{L}} \vec{\delta}(j) = 1, \\
& \vec{\delta} \geq 0 .
\end{aligned}
\end{equation}
At an optimum, $v$ equals the Defender's worst-case expected utility $\min_{\vec{\alpha} \in \Pi_{\mc{U}}} U_d^{\Gamma_0}(\vec{\delta},\vec{\alpha})$, which is attained at a pure Attacker strategy.
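In practice the LP in (\ref{eq:lp}) would be handed to an off-the-shelf solver. As a dependency-free illustration, the sketch below instead approximates the saddle point by \emph{fictitious play} (a swapped-in iterative technique, not the LP of the paper): each round, both players best-respond to the opponent's empirical mixture, and for zero-sum games the empirical frequencies converge to a Nash equilibrium. The payoff matrix is illustrative.

```python
# Fictitious-play sketch for approximating the zero-sum saddle point;
# this is an illustrative alternative to solving the LP directly.
# The payoff matrix passed in is an assumption, not data from the paper.

def fictitious_play(U_d, rounds=20000):
    n_levels, n_groups = len(U_d), len(U_d[0])
    def_counts = [0] * n_levels      # how often each level was played
    att_counts = [0] * n_groups      # how often each group was attacked
    j, i = 0, 0                      # arbitrary initial pure strategies
    for _ in range(rounds):
        def_counts[j] += 1
        att_counts[i] += 1
        # Defender best-responds to the Attacker's empirical mixture.
        j = max(range(n_levels),
                key=lambda r: sum(U_d[r][c] * att_counts[c] for c in range(n_groups)))
        # Attacker best-responds (payoff -U_d) to the Defender's mixture.
        i = max(range(n_groups),
                key=lambda c: -sum(U_d[r][c] * def_counts[r] for r in range(n_levels)))
    delta = [c / rounds for c in def_counts]
    alpha = [c / rounds for c in att_counts]
    return delta, alpha

# Matching-pennies-like payoffs: the unique equilibrium mixes both
# actions equally, so both empirical mixtures approach [0.5, 0.5].
delta, alpha = fictitious_play([[-1, 1], [1, -1]])
print(delta, alpha)
```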
\subsection{Multiple Games Per Safeguard}
Given that we have to allocate a budget in applying different
safeguards, we may come across the challenge of not having enough
monetary resources to select some of the equilibria of the CSG.
Therefore, one has to derive the financial cost of each equilibrium
and assess its feasibility by comparing that cost to the
available remaining budget. We refer to the ``remaining'' budget as
we expect that the Defender will have to select among a number of
equilibria, one per safeguard, as we show later in this section.
To provide to the Defender a wider variety, in terms of financial cost,
of equilibria per safeguard, we define a number of CSGs per safeguard. Each of these games has a different number of application levels available to the Defender.
Aligned with \cite{fielder2016decision}, for each safeguard $\sigma$, we
study $|\mc{L}|$ CSGs.
\begin{definition}
To differentiate among different safeguards and implementation levels,
we denote the CSG by $\Gamma_{\sigma,\lambda}$, where the
safeguard $\sigma$ can be applied up to level $\lambda \in \{0,1,\dots,|\mc{L}|\}$.
\end{definition}
Note that we allow $\lambda=0$ so that the Defender has the option to avoid selecting
a safeguard should this violate some budget constraints.
In this way, we obtain $|\mc{L}|$ NSPs per safeguard, each of a different
financial cost; a Knapsack optimisation is then used in the second phase of the model
to select among these equilibria, at most one per safeguard. Each $\Gamma_{\sigma,\lambda}$ is a game where
(i) Defender's pure strategies correspond to consecutive application levels
of safeguard $\sigma$ starting always from 0 and including all levels up to $\lambda$
and, (ii) Attacker's pure strategies are the different targets akin to user groups.
Figure \ref{fig:gt_illustration} illustrates the different Cybersecurity Safeguards Games
along with the utilities of the Defender.
\begin{figure*}[t]
\centering
\includegraphics[width=1\linewidth]{OST_concept}
\caption{Illustration of the safeguard-centered model of OST used to devise game-theoretic strategies for the Defender.}
\label{fig:gt_illustration}
\end{figure*}
Let $\vec{\delta}^{NE}_{\sigma,\lambda}$ be the equilibrium of $\Gamma_{\sigma,\lambda}$; then
\begin{equation}
\vec{\delta}^{NE}_{\sigma,\lambda} = [\delta^{NE}_{\sigma,0},
\delta^{NE}_{\sigma,1}, \dots,
\delta^{NE}_{\sigma,\lambda}] .
\end{equation}
Let $F(\vec{\delta}_{\sigma,\lambda})$ be the financial cost of the safeguards plan
$\vec{\delta}_{\sigma,\lambda}$, derived by summing the financial costs
of all application levels $j \in \{1,2,\dots,\lambda\}$ for safeguard $\sigma$,
each weighted by the corresponding probability $\delta_{\sigma,j}$ from
$\vec{\delta}_{\sigma,\lambda}$.
Let $F(\sigma,j)$ denote the financial cost of safeguard $\sigma$ when applied at level $j$; then
\begin{equation}
F(\vec{\delta}_{\sigma,\lambda}) = \sum_{j \in \{1,2,\dots,\lambda\}} \delta_{\sigma,j} \, F(\sigma,j) .
\end{equation}
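The cost of a plan is thus a probability-weighted sum of the per-level costs, as the following sketch shows; the NSP and per-level costs are illustrative numbers only.

```python
# Sketch of F(plan) = sum_j delta_j * F(sigma, j); level 0 ("do not apply")
# is assumed to cost nothing. All numbers are illustrative.

def plan_cost(delta, level_costs):
    """Probability-weighted financial cost of a safeguards plan."""
    return sum(p * c for p, c in zip(delta, level_costs))

delta = [0.0, 0.2, 0.8]          # NSP over levels 0, 1, 2
level_costs = [0, 100, 300]      # F(sigma, j) in monetary units

print(plan_cost(delta, level_costs))  # 0.2*100 + 0.8*300 = 260.0
```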
\subsection{Investment in Nash Safeguards Plans} \label{sec:investment_NSP}
Let $\mc{S}$ be the set of all available safeguards
to the Defender. We can solve all $|\mc{S}| \times |\mc{L}|$
CSGs and derive a set of equilibria per safeguard $\sigma$ represented as follows
\begin{equation}
\{\vec{\delta}^{NE}_{\sigma, 1},\vec{\delta}^{NE}_{\sigma, 2}, \dots, \vec{\delta}^{NE}_{\sigma, |\mc{L}|}\} .
\end{equation}
For all safeguards $\{1,2,\dots,|\mc{S}|\}$ the following set of sets of equilibria, i.e., NSPs, is available
\begin{equation}\label{eq:allNSPs}
\Big\{\{\vec{\delta}^{NE}_{1,0},\vec{\delta}^{NE}_{1,1}, \dots, \vec{\delta}^{NE}_{1,|\mc{L}|}\},
\{\vec{\delta}^{NE}_{2,0},\vec{\delta}^{NE}_{2,1}, \dots, \vec{\delta}^{NE}_{2,|\mc{L}|}\}, \dots,
\{\vec{\delta}^{NE}_{|\mc{S}|,0},\vec{\delta}^{NE}_{|\mc{S}|,1}, \dots, \vec{\delta}^{NE}_{|\mc{S}|,|\mc{L}|}\}\Big\} .
\end{equation}
Optimal budget allocation in cybersecurity can be tackled by combinatorial optimization as
previously investigated by Smeraldi and Malacaria \cite{smeraldi2014spend}.
We are concerned with the challenge of protecting multiple targets, in our case user groups,
with the use of a number of NSPs that interact with each other in different ways.
In the following, we model the challenge of investing in these different NSPs in a way that
at most one NSP per safeguard is chosen and the sum of financial costs of these NSPs fits an available cybersecurity budget. We have used 0-1 Knapsack Optimization to solve this
problem. As opposed to the solution provided in \cite{fielder2016decision}, we have chosen
the objective function of the Defender to consider the sum of expected losses incurred from
the different user groups being attacked. This is not to say that the weakest-link
model proposed in \cite{fielder2016decision} is no longer relevant; rather, we recognize the risk
of all user groups being targeted by an Attacker who aims to maximize the collective damage over
a number of assets, rather than to compromise only the most precious asset. We argue that such
a goal of maximizing aggregated damage is more typical of attacks like Advanced Persistent
Threats, where the aim is to maximize the Defender's overall loss in a number of different
ways.
The Knapsack Problem (KP) is an NP-hard problem \cite{pisinger2005hard}.
There are several applications of KP such as resource distribution,
investment decision making and budget controlling.
In our model, we define the KP as follows.
Assume a knapsack with a maximum capacity of $B$, which represents the budget of the Defender.
Given the set of all possible $|\mc{S}| \times |\mc{L}|$ NSPs shown in (\ref{eq:allNSPs}),
each Knapsack candidate solution consists of at most $|\mc{S}|$ NSPs, one per safeguard.
Each NSP reduces, to some degree, the overall cyber risk of the organization by reducing
the individual risk on each user group.
The problem is to select a subset of NSPs that maximizes the knapsack profit (i.e., the mitigated risk) without exceeding the
maximum capacity of the knapsack.
We define a candidate solution to our KP as a set
$\Psi = \{\vec{\delta}^{NE}_{\sigma,\lambda_\sigma} \mid \sigma\in\mc{S}\}$, where $\lambda_\sigma \in \mc{L}$ denotes the level chosen for safeguard $\sigma$.
A solution $\Psi$ thus takes exactly one equilibrium (i.e., cybersecurity plan) for each
safeguard as a \textit{policy for implementation/application}. To represent the cybersecurity
investment problem, we need to expand the definitions for both expected loss $L$ and effectiveness $E$
to incorporate the solutions of the different CSGs. Hence, we expand $L$ such that
$L(\vec{\delta}_{\sigma,\lambda},i)$ is the expected loss inflicted by compromising user group $i$ given the
application of the plan $\vec{\delta}_{\sigma,\lambda}$.
We also expand $E$ such that
$E(\vec{\delta}_{\sigma,\lambda},i)$ is the efficacy that $\vec{\delta}_{\sigma,\lambda}$ brings when applied to
user group $i$.
From equation (\ref{eq:loss}) the expected loss on user group $i$ when
NSP $\vec{\delta}_{\sigma,\lambda}$ is applied is given by
\begin{equation}\label{eq:loss_plan}
L(\vec{\delta}_{\sigma,\lambda},i) = R_i \, A_i \, [1-E(\vec{\delta}_{\sigma,\lambda},i)] .
\end{equation}
A natural approach is to use the KP to seek a set of NSPs that minimizes the aggregated expected risk
across all user groups. We assume that each NSP may protect more than one user group.
We then seek an optimal safeguards allocation for a series of user groups, each of which
can be protected by a different set of NSPs. The latter do not necessarily have
additive efficacy. The following illustrative example considers two NSPs and explains
how we combine their efficacy in a single formula that we then use in the KP
formulation.
\textbf{Example 2}.
By slightly abusing notation, assume two
NSPs $\vec{\delta}, \vec{\delta}'$ that mitigate 20\% and 30\% of the same
user group risk, respectively.
If the NSPs had additive efficacy the total expected loss
on user group $i$ when applying both $\vec{\delta}, \vec{\delta}'$
equals
$R_i \, A_i \, \Big\{1 -\big\{E(\vec{\delta},i) + E(\vec{\delta}',i)\big\}\Big\} =
R_i \, A_i \, (1-0.2-0.3) = 0.5 \, R_i \, A_i$.
In this paper, we assume a more conservative expected loss mitigation
function when combining two or more NSPs as follows
$R_i \, A_i \, \Big\{\big\{1-E(\vec{\delta},i)\big\} \,
\big\{1-E(\vec{\delta}',i)\big\}\Big\} =
R_i \, A_i \cdot (1-0.2) \, (1-0.3) =
R_i \, A_i \cdot 0.8 \cdot 0.7 =
0.56 \, R_i \, A_i$.
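The conservative combination of Example 2 multiplies the residual risk factors rather than adding efficacies, as the following sketch shows for the same 20\% and 30\% mitigation values.

```python
# Sketch of the conservative (multiplicative) combination of NSP efficacies
# from Example 2: residual risk factors (1 - E_k) are multiplied.

def residual_risk_factor(efficacies):
    """prod_k (1 - E_k): fraction of risk left after applying all NSPs."""
    factor = 1.0
    for e in efficacies:
        factor *= (1.0 - e)
    return factor

# Two NSPs mitigating 20% and 30% of the same group's risk:
print(residual_risk_factor([0.2, 0.3]))   # 0.8 * 0.7 = 0.56
# versus the additive assumption: 1 - 0.2 - 0.3 = 0.5
```

The multiplicative form always leaves at least as much residual risk as the additive one, which is why it is the more conservative choice.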
Given the above, if we represent the solution $\Psi$ by the bitvector $\vec{z}$,
we can then represent the 0-1 KP as
\begin{eqnarray}\label{eq:knapsack}
&& \min_{\vec{z}} \sum_{i=1}^{|\mc{U}|} A_i \, R_i
\prod_{\sigma=1}^{|\mc{S}|}
\Big[1 - \sum_{\lambda=0}^{|\mc{L}|} E(\vec{\delta}^{NE}_{\sigma,\lambda},i) \, z_{\sigma,\lambda} \Big] \nonumber \\
&& \text{s.t.}~\sum_{\sigma=1}^{|\mc{S}|} \sum_{\lambda=0}^{|\mc{L}|}
F(\vec{\delta}^{NE}_{\sigma,\lambda}) \, z_{\sigma,\lambda} \leq B, \nonumber \\
&& \sum_{\lambda=0}^{|\mc{L}|} z_{\sigma,\lambda}=1, \quad z_{\sigma,\lambda}\in\{0,1\}, \quad \forall \sigma=1,2,\dots,|\mc{S}| ,
\end{eqnarray}
where $B$ is the available budget of the Defender to be spent on cyber safeguards and
$z_{\sigma,\lambda}=1$ holds when $\vec{\delta}^{NE}_{\sigma,\lambda} \in \Psi$. Among KP solutions that all
minimize the overall expected loss, we choose the one with the lowest financial cost, as this is, overall, the best advice to the Defender, producing the same benefit at a lower price.
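For small instances, the per-safeguard structure of the KP can be illustrated by plain enumeration: one level choice per safeguard ($\lambda=0$ meaning the safeguard is not applied), keeping budget-feasible combinations and picking the one with the lowest aggregated expected loss, with ties broken by lower cost. All numbers below are assumptions for illustration; a real instance would use a dynamic-programming or ILP solver instead of brute force.

```python
import itertools

# Illustrative brute-force sketch of the 0-1 KP: all numbers are assumptions.
R = [0.5, 0.5]                               # attack probability per user group
A = [100, 200]                               # asset value per user group
# E[sigma][lam][i]: efficacy of NSP (sigma, lam) on group i; lam = 0 means
# the safeguard is not applied at all.
E = [[[0.0, 0.0], [0.3, 0.1], [0.5, 0.2]],   # safeguard 0, levels 0..2
     [[0.0, 0.0], [0.1, 0.4], [0.2, 0.6]]]   # safeguard 1, levels 0..2
F = [[0, 40, 90],                            # F(sigma, lam): financial cost
     [0, 30, 80]]
budget = 120

def expected_total_loss(choice):
    """Aggregated expected loss with multiplicative (conservative) efficacy."""
    total = 0.0
    for i in range(len(R)):
        residual = 1.0
        for sigma, lam in enumerate(choice):
            residual *= 1.0 - E[sigma][lam][i]
        total += R[i] * A[i] * residual
    return total

def choice_cost(choice):
    return sum(F[sigma][lam] for sigma, lam in enumerate(choice))

# Enumerate one level choice per safeguard (feasible for small instances).
feasible = [c for c in itertools.product(range(3), repeat=len(E))
            if choice_cost(c) <= budget]
best = min(feasible, key=lambda c: (expected_total_loss(c), choice_cost(c)))
print(best, expected_total_loss(best))
```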
\ifExtended
However, in general such a budget allocation is not a given and should be a
product of the optimisation. We can obtain the budget allocation by
noticing that the dynamic programming algorithm above actually finds an
optimal allocation of resources for all budgets up to $B$.
\fi
\section{Related Work}\label{sec:relatedwork}
This work has been inspired by a previous work of Fielder et al. \cite{fielder2016decision}
where the authors have proposed decision support methodologies for the optimal
choice of cybersecurity controls within an investment budget.
They have addressed cybersecurity investment decisions by proposing different
approaches: a game-theoretic approach, a combinatorial optimization approach,
and a mix of both called \textit{hybrid}.
This paper utilizes the latter method to recommend the optimal choice of safeguards
for healthcare organizations. In this section, we discuss two classes of work
relevant to this paper: literature on \textit{cyber hygiene in healthcare} -
more specifically on the \textit{user-oriented cybersecurity safeguards},
and literature on \textit{optimal selection of cybersecurity safeguards}.
Note that the literature covered on the optimal selection of cybersecurity
safeguards mainly highlights work beyond the literature covered in
\cite{fielder2016decision}.
\subsection{Cyber Hygiene in Healthcare}
There have been growing concerns that the existing cybersecurity
posture of healthcare organizations is insufficient, and this has
already impacted the confidentiality \cite{kruse2017cybersecurity}
and integrity of medical data \cite{fernandez2017shared}.
Further, many healthcare organizations are still using legacy
systems such as Windows XP and Windows NT $3.1$ which Microsoft
has long stopped
supporting\footnote{\scriptsize{\url{https://www.itpro.co.uk/public-sector/27740/nine-in-10-nhs-trusts-still-use-windows-xp}.}}, allowing
adversaries to easily breach the defenses (e.g., the WannaCry attacks
on the NHS\footnote{\scriptsize{\url{https://www.nao.org.uk/wp-content/uploads/2017/10/Investigation-WannaCry-cyber-attack-and-the-NHS-Summary.pdf}.}}).
In general, healthcare organizations being rich sources of valuable data and
relatively weaker security postures have become attractive targets for
cybercrime \cite{coventry2018cybersecurity}. The weaker security posture
that they exhibit is primarily due to a lack of adequate cybersecurity budget, resulting in
limited access to technology and expertise \cite{kotz2016privacy}.
Besides, investment in cybersecurity has not traditionally been considered essential
for healthcare systems, as the emphasis has predominantly been on providing
patient care and it was believed that there would be no motivation to
attack them.
On the other hand, the increasing use of \ac{IoT} technologies in healthcare has widened
the attack surface beyond electronic health record databases and privacy issues to physical
safety \cite{loukas2015cyber}. Alongside technical aspects, the role of the user in cybersecurity is paramount, as a significant proportion of attacks target the users directly through deceptive means such as application masquerading and spear-phishing. This is particularly the case in healthcare as deceiving a nurse, doctor, healthcare IT professional or administrator can
impact the privacy and physical safety of patients \cite{billingsley2016cybersecurity}.
With the increasing usage of technology, the role that humans play in underlying
security processes will continue to expand. Heartfield and Loukas \cite{heartfield2018detecting}
have developed a framework involving humans to effectively detect and report semantic social
engineering attacks against them. Their results illustrate that involving users significantly
improves the cyber threat detection rate, affirming the importance of the human in cybersecurity. This further suggests that humans should no longer be seen merely as a threat and/or vulnerability in cybersecurity.
Acknowledging the importance of the human in cybersecurity, along with the increase in the severity of breaches, security experts, policymakers and governments are urging organizations to improve cyber hygiene.
Such et al. \cite{such2019basic} have demonstrated that Cyber Essentials\footnote{\scriptsize{\url{https://www.gov.uk/government/publications/cyber-essentials-scheme-overview.}}}
has worked well for SMEs in mitigating threats that exploit vulnerabilities remotely using
commodity-level exploitation tools. From a human-cyber interaction perspective,
Vishwanath et al. \cite{vishwanath2019cyber} have demonstrated that cyber hygiene practices
positively impact individuals' cyber attitudes, which are pivotal to cyber safety.
These studies demonstrate that even general concepts of basic cyber hygiene work
in different organizational contexts and can convincingly reduce cyber risk.
Security training in healthcare has been studied for over 20 years. It ranges from an exploratory analysis of the factors that healthcare professionals need to focus on, up to highly targeted digital applications (e.g., \cite{zhou2018mobile}) and platforms for raising
awareness of healthcare data privacy and security risks.
Furnell et al. \cite{furnell1997addressing} discussed the necessity to promote information
security issues and the need for appropriate training and awareness initiatives in healthcare
institutions. They have highlighted factors to consider while designing training and awareness
programmes to familiarize healthcare personnel with basic security concepts and procedures.
The extent to which security training and awareness programmes work for different users
has been studied from multiple angles. It has been shown that, specifically for
deception-based attacks such as semantic social engineering \cite{heartfield2016taxonomy},
self-study and work-based training are considerably more effective than formal
education in cybersecurity \cite{heartfield2016you}. Besides, the perceived origin
of the training materials, i.e., whether from security experts, third-party agencies, or peers, can
have a large impact on security outcomes \cite{wash2018provides}.
\subsection{Optimal Selection of Cybersecurity Controls}
Cybersecurity has become a key factor in determining the growth of organizations
relying on information systems, as it is not only a defensive measure but has also
become a strategic decision providing a competitive advantage over rival firms.
Further, the potential loss due to cyber incidents has encouraged organizations
to imperatively consider cybersecurity investment decisions, especially in deriving the optimum level of investments between risk treatment options.
The objective of cybersecurity investment methodologies is to compute an optimal
distribution of the cybersecurity budget; one of the earliest studies on this was performed by Gordon and Loeb \cite{gordon2002economics}.
Beyond previous works such as \cite{fielder2016decision,fielder2018risk,fielder2014game}
and the related work investigated there, Nagurney et al. \cite{nagurney2017supply} have proposed a game-theoretic supply chain network model with retailers competing to maximize their expected profits. This maximization is based on determining optimal product transactions and cybersecurity investments under budget constraints. Along the direction of optimal cybersecurity investments, Wang \cite{wang2019integrated} investigated the balance between acquiring knowledge and expertise and deploying mitigation techniques. On the other hand, Chronopoulos et al. \cite{chronopoulos2017options} have adopted a real options approach to analyze the performance of optimal cybersecurity controls in organizations. In particular, the authors have analyzed the effects of the cost of cyber attacks and the time of arrival of cybersecurity controls on the organization's optimal strategy. Similar to these papers, our work also considers the choice of the optimal strategy based on the efficacy of the control in mitigating cyber risks.
In terms of methodology, the most closely related recent work on optimal cybersecurity investment is \cite{martinelli2018optimal}, where the authors have investigated the balance between investing in self-protection and cyber insurance. The key difference is that their optimization minimizes the expected risk and the cyber insurance premium, while our model optimizes over the efficacy of controls in mitigating the aggregated residual risk under the security investment budget. Besides this, our work
uses a unique combination of game theory and combinatorial optimization inspired by \cite{fielder2016decision}.
\section{Conclusions}~\label{sec:conclusions}
We proposed in this paper a fast Two-Stream Siamese Network that combines the discriminatory power of two distinctive and persistent features, the vehicle's shape and the registration plate, to address the problem of vehicle re-identification across non-overlapping cameras. Tests indicate that our network is more robust than other One-Stream Siamese architectures fed with the same features or with larger images. We also evaluated simple and complex CNNs within the Siamese Network to find a trade-off between efficiency and performance.
\section{Experiments}~\label{sec:experiments}
For our tests, we used 10 videos --- 5 from Camera
1 and 5 from Camera 2, each 20 minutes long --- recorded
with a frame resolution of $1920\times1080$ pixels at $30.15$ frames per second.
They are summarized in Table~\ref{tab:dataset}.
\begin{table}[!htpb]
\setlength{\tabcolsep}{1.9pt}
\renewcommand{\arraystretch}{1.0}
\centering
\caption{Dataset information: number of
vehicles and number of vehicles with a visible license plate in Cameras 1 and 2; number of vehicle
matchings between Cameras 1 and 2.}
\begin{tabular}{||c||c|c||c|c||c||}
\hline
\small Set & \multicolumn{2}{c||}{\small Camera 1} & \multicolumn{2}{c||}{\small Camera 2} & \small No.Match. \\ \hline
 & \small \#Vehicles & \small \#Plates & \small \#Vehicles & \small \#Plates & \\ \hline
\small 01 & \small 389 & \small 343 & \small 280 & \small 245 & \small 199 \\
\small 02 & \small 350 & \small 310 & \small 244 & \small 227 & \small 174 \\
\small 03 & \small 340 & \small 301 & \small 274 & \small 248 & \small 197 \\
\small 04 & \small 280 & \small 251 & \small 233 & \small 196 & \small 140 \\
\small 05 & \small 345 & \small 295 & \small 247 & \small 194 & \small 159 \\ \hline
\small Total & \small 1704 & \small 1500 & \small 1278 & \small 1110 & \small 869 \\ \hline
\end{tabular}
\label{tab:dataset}
\end{table}
There are multiple distinct occurrences of the same vehicle as it moves across the video. Therefore, instead of only 869 matchings as shown in Table~\ref{tab:dataset}, we can generate thousands of true matchings by taking the Cartesian product between the sequences of images of the same vehicle appearing in Cameras 1 and 2. This data augmentation is usually necessary for CNN training. Therefore, we used the MOSSE tracker~\cite{5539960} to extract the first $N$ occurrences of each license plate (see Fig.~\ref{fig:pairwise}).
Note that negative pairs are easier to generate, since we can use any combination of distinct vehicles from Camera 1 and 2.
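The pair-wise augmentation above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' code: the function and variable names are hypothetical, and each vehicle track is assumed to be the list of its $N$ cropped images.

```python
from itertools import product

def make_pairs(cam1_tracks, cam2_tracks, matches, lam=5):
    """Build labeled pair lists for Siamese training/testing.

    cam1_tracks / cam2_tracks: dict vehicle_id -> list of N image crops
    matches: ids of vehicles seen by both cameras
    lam: multiplier bounding the number of negative (non-matching) pairs
    """
    positives, negatives = [], []
    # Cartesian product of the N crops of the same vehicle -> N^2 positive pairs
    for vid in matches:
        positives.extend(product(cam1_tracks[vid], cam2_tracks[vid]))
    # Negative pairs combine crops of two *distinct* vehicles
    for v1, v2 in product(cam1_tracks, cam2_tracks):
        if len(negatives) >= lam * len(positives):
            break
        if v1 != v2:
            negatives.append((cam1_tracks[v1][0], cam2_tracks[v2][0]))
    return positives, negatives
```

As in the paper, $\lambda$ only inflates the negative side of the testing set; for training, the negatives would be truncated to the size of the positives to avoid class imbalance.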
\begin{figure}[!htb]
\centering
\begin{tikzpicture}
\draw(0.0,0.0) node[text centered, text width=2.2cm, inner sep=0pt] (cruz1-img1) {
\includegraphics[width=1.3cm,height=1.3cm]{./figures/two-stream/2980.png}
};
\draw(1.6,0.0) node[text centered, text width=2.2cm, inner sep=0pt] (cruz1-img2) {
\includegraphics[width=1.3cm,height=1.3cm]{./figures/two-stream/2981.png}
};
\draw(4.5,0.0) node[text centered, text width=2.2cm, inner sep=0pt] (cruz1-img6) {
\includegraphics[width=1.3cm,height=1.3cm]{./figures/two-stream/2985.png}
};
\draw(0.0,2.4) node[text centered, text width=2.2cm, inner sep=0pt] (cruz2-img1) {
\includegraphics[width=1.3cm,height=1.3cm]{./figures/two-stream/741.png}
};
\draw(1.6,2.4) node[text centered, text width=2.2cm, inner sep=0pt] (cruz2-img2) {
\includegraphics[width=1.3cm,height=1.3cm]{./figures/two-stream/742.png}
};
\draw(4.5,2.4) node[text centered, text width=2.2cm, inner sep=0pt] (cruz2-img6) {
\includegraphics[width=1.3cm,height=1.3cm]{./figures/two-stream/746.png}
};
\draw (cruz1-img1.north) -- (cruz2-img1.south);
\draw (cruz1-img1.north) -- (cruz2-img2.south);
\draw (cruz1-img1.north) -- (cruz2-img6.south);
\draw (cruz1-img2.north) -- (cruz2-img1.south);
\draw (cruz1-img2.north) -- (cruz2-img2.south);
\draw (cruz1-img2.north) -- (cruz2-img6.south);
\draw (cruz1-img6.north) -- (cruz2-img1.south);
\draw (cruz1-img6.north) -- (cruz2-img2.south);
\draw (cruz1-img6.north) -- (cruz2-img6.south);
\draw [decoration={brace},decorate] (-0.8,3.4) -- (5.3,3.4) node [pos=0.5,anchor=south,yshift=0.05cm] {$N$ images};
\path[->](3.1,2.4) node[] {\Large $\dots$};
\path[->](3.1,0.0) node[] {\Large $\dots$};
\path[->](-1.0,2.4) node[rotate=90] {Camera 1};
\path[->](-1.0,0.0) node[rotate=90] {Camera 2};
\path[->](0.0,-0.8) node[] {\small frame $j$};
\path[->](1.6,-0.8) node[] {\small frame $j$+1};
\path[->](4.5,-0.8) node[] {\small frame $j$+$N$};
\path[->](0.0,3.2) node[] {\small frame $i$};
\path[->](1.6,3.2) node[] {\small frame $i$+1};
\path[->](4.5,3.2) node[] {\small frame $i$+$N$};
\node[draw,align=left] at (6.4,1.3) { \small
All $N^2$ pairs\\
\small of each \\
\small vehicle are\\
\small used to train\\
\small or test\\
\small (not for both)};
\end{tikzpicture}
\caption{Pair-wise data augmentation only for positive (true) matchings: from $N$ images of two vehicle sequences we extract $N^2$ distinct pairs. The same procedure is applied for license plate and vehicle shapes.}
\label{fig:pairwise}
\end{figure}
We also adjusted another parameter, $\lambda$, that multiplies the number of negative pairs (non-matchings) in the testing set, to simulate the network in a real environment where it may see many more non-matching pairs than matching ones. In Table~\ref{tab:settings} we show some parameter settings for our experiments. Note, however, that we kept the same proportion of positive and negative pairs during training in order to avoid class imbalance.
\begin{table}[!htpb]
\setlength{\tabcolsep}{0.2pt}
\renewcommand{\arraystretch}{1.1}
\centering
\caption{Parameter settings used in our experiments.}
\begin{tabular}{||l||c|c||c|c||}
\hline
\small Settings & \multicolumn{2}{c||}{Training} & \multicolumn{2}{c||}{Testing} \\ \hline
 & \small \#positives & \small \#negatives & \small \#positives & \small \#negatives \\ \hline
\small $N = 3, \lambda = 5$ & \small 3867 & \small 3867 & \small 3903 & \small 19515 \\
\small $N = 10, \lambda = 10$ & \small 42130 & \small 42130 & \small 42707 & \small 427070 \\
\hline
\end{tabular}
\label{tab:settings}
\end{table}
The quantitative criteria we used to evaluate the architectures' performance are the precision $P$, recall $R$, accuracy $A$
and $F$-measure. As shown in Table~\ref{tb:results}, the Two-Stream Siamese outperforms two distinct One-Stream Siamese Networks: the first one, Siamese-Car, is fed only with the shape of the vehicles ($96 \times 96$ pixels); the second, Siamese-Plate, uses only patches of license plates ($96 \times 48$ pixels). Note that even when we increased the number of non-matching pairs in the negative testing set, $\lambda=10$, the $F$-measure of the Two-Stream Siamese was similar in both scenarios. The accuracy $A$ is usually much higher since the number of negative pairs is much larger. Some inference results are shown in Fig.~\ref{fig:results}.
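These criteria follow directly from the confusion counts of the pair classifier. The short sketch below (a hypothetical helper, not from the authors' code) also makes explicit why $A$ is inflated when negative pairs dominate the test set.

```python
def matching_metrics(tp, fp, fn, tn):
    """Precision, recall, F-measure and accuracy for pair classification.

    With lambda >> 1 the negative (non-matching) pairs dominate the test
    set, so tn is large and the accuracy A is inflated relative to the
    F-measure, as noted in the text.
    """
    p = tp / (tp + fp)                    # precision
    r = tp / (tp + fn)                    # recall
    f = 2 * p * r / (p + r)               # harmonic mean of P and R
    a = (tp + tn) / (tp + fp + fn + tn)   # accuracy
    return p, r, f, a
```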
\begin{table}[ht]
\renewcommand{\arraystretch}{1.1}
\setlength{\tabcolsep}{4pt}
\centering
\caption{Matching performance of the proposed Two-Stream Siamese (Small-VGG) against two One-Stream Siamese (Car and Plate with Small-VGG) by using different settings to generate image pairs.}
\label{tb:results}
\begin{tabular}{|l||c|c|c|c|} \hline
& \multicolumn{4}{c|}{$N = 3, \lambda = 5$} \\ \hline
\small Algorithm & \small $P$ &\small $R$ &\small $F$ &\small $A$ \\ \hline
\small Siamese-Car \hspace{3pt} (Stream 1) &\small 85.8\% &\small 93.1\% &\small 89.3\% &\small 96.3\% \\ \hline
\small Siamese-Plate (Stream 2) &\small 75.9\% &\small 81.8\% &\small 78.8\% &\small 92.6\% \\ \hline
\small Siamese \hspace{18pt} (Two-Stream) &\small 92.7\% &\small 93.0\% &\small 92.9\% &\small 97.6\% \\ \hline
& \multicolumn{4}{c|}{$N = 10, \lambda = 10$} \\ \hline
\small Algorithm & \small $P$ &\small $R$ &\small $F$ &\small $A$ \\ \hline
\small Siamese-Car \hspace{3pt} (Stream 1) &\small 92.4\% &\small 83.5\% &\small 87.8\% &\small 97.9\% \\ \hline
\small Siamese-Plate (Stream 2) &\small 86.8\% &\small 59.5\% &\small 70.6\% &\small 95.5\% \\ \hline
\small Siamese \hspace{18pt} (Two-Stream) &\small 94.7\% &\small 90.6\% &\small 92.6\% &\small 98.7\% \\ \hline
\end{tabular}
\end{table}
We also tried different CNNs in our Two-Stream Siamese; their performance is reported in Table~\ref{tb:results2}.
Furthermore, as can be seen in Fig.~\ref{fig:results2}, we also evaluated the performance of the proposed Two-Stream Siamese against two One-Stream Siamese versions fed with larger image patches ($224 \times 224$ pixels). Note that we achieved a higher $F$-measure by using two small image patches than a single larger patch containing both features. Another advantage is the Two-Stream Siamese training time: 1938 seconds per epoch ($N = 10$ and $\lambda = 10$) against 3441 seconds per epoch for the Siamese-Car using the same Small-VGG, and 4937 seconds with ResNet.
The experiments were carried out on an Intel i7 with 32GB DRAM and an Nvidia Titan Xp GPU.
\begin{table}[ht]
\renewcommand{\arraystretch}{1.1}
\setlength{\tabcolsep}{5pt}
\centering
\caption{Matching performance of the proposed Two-Stream Siamese with different CNN architectures.}
\label{tb:results2}
\begin{tabular}{|l||c|c|c|c|} \hline
& \multicolumn{4}{c|}{$N = 10, \lambda = 10$} \\ \hline
\small Siamese (Two-Stream) & \small $P$ &\small $R$ &\small $F$ &\small $A$ \\ \hline
\small CNN = Lenet5 \hspace{3pt} &\small 89.6\% &\small 85.2\% &\small 87.3\% &\small 97.8\% \\ \hline
\small CNN = Matchnet~\cite{matchnet2015} \hspace{3pt} &\small 94.5\% &\small 87.1\% &\small 90.7\% &\small 98.4\% \\ \hline
\small CNN = MC-CNN~\cite{mccnn} \hspace{3pt} &\small 89.0\% &\small 90.1\% &\small 89.6\% &\small 98.1\% \\ \hline
\small CNN = GoogleNet \hspace{3pt} &\small 88.8\% &\small 81.8\% &\small 85.1\% &\small 97.4\% \\ \hline
\small CNN = AlexNet \hspace{3pt} &\small 91.3\% &\small 86.5\% &\small 88.8\% &\small 98.0\% \\ \hline
\small CNN = Small-VGG \hspace{3pt} &\small 94.7\% &\small 90.6\% &\small 92.6\% &\small 98.7\% \\ \hline
\end{tabular}
\end{table}
\begin{figure}[!htb]
\hspace{-10pt}
\begin{tikzpicture}
\draw(-0.4,2.53) node[text centered, text width=3.2cm] (img1) {
{\small Vehicle (96$\times$96 pixels)}\\
\vspace{0.1cm}
\includegraphics[width=2.1cm,height=2.1cm]{./figures/two-stream/fig4_8336c2.png}
};
\draw(-0.4,0.3) node[text centered, text width=3.2cm] (img2) {
{\small Plate (96$\times$48 pixels)}\\
\vspace{0.1cm}
\includegraphics[width=2.1cm,height=1.0cm]{./figures/two-stream/fig4_8336p2.png}
};
\draw(4.35,1.5) node[text centered, text width=4.5cm] (img3) {
{\small Vehicle (patches 224$\times$224 pixels)}\\
\vspace{0.1cm}
\includegraphics[width=4.2cm,height=4.2cm]{./figures/two-stream/fig4_8336c2.png}
};
\draw(-0.2,-1.3) node[text centered, text width=4.0cm] (t1) {
{\small Siamese Two-Stream\\ (\textbf{Small-VGG})\\ $F$ = 92.6\% and $A$ = 98.7\%}};\\
\vspace{0.1cm}
\draw(4.35,-1.2) node[text centered, text width=4.4cm] (t2) {
{\small Siamese-Car (\textbf{Small-VGG}): $F$ = 88.1\% and $A$ = 97.9\%}};\\
\vspace{0.1cm}
\draw(1.5,1.4) node[text centered, text width=4.4cm] (vs) {
\includegraphics[scale=0.6]{./figures/two-stream/vs.png}
};
\draw(4.35,-2.0) node[text centered, text width=4.4cm] (t3) {
{\small Siamese-Car (\textbf{Resnet50}):\\ $F$ = 81.2\% and $A$ = 97.1\%}};
\end{tikzpicture}
\caption{Siamese Two-Stream versus Siamese-Car.}
\label{fig:results2}
\end{figure}
\begin{figure}[!htb]
\begin{tikzpicture}
\node[draw,align=left,text width=8.5cm,text height=0.2cm, inner sep=1pt] at (3.3,1.7) {
\small Siamese-Car (Stream 1): \textbf{matching} {\normalsize \cmark}\\
\small Siamese-Plate (Stream 2): \textbf{matching} {\normalsize \cmark}\\
\small Siamese (Two-Stream): \textbf{matching} {\normalsize \cmark} };
\draw(0.0,3.4) node[text centered, text width=2.0cm] (img1) {
\includegraphics[width=2.1cm,height=2.1cm]{./figures/two-stream/case1_cruz1_car.png}
};
\draw(2.2,3.4) node[text centered, text width=2.2cm] (img2) {
\includegraphics[width=1.6cm,height=1.5cm]{./figures/two-stream/case1_cruz1_plate.png}
};
\draw(4.35,3.4) node[text centered, text width=2.2cm] (img3) {
\includegraphics[width=2.1cm,height=2.1cm]{./figures/two-stream/case1_cruz2_car.png}
};
\draw(6.5,3.4) node[text centered, text width=2.2cm] (img4) {
\includegraphics[width=1.6cm,height=1.5cm]{./figures/two-stream/case1_cruz2_plate.png}
};
\node[draw,align=left,text width=8.5cm,text height=0.2cm, inner sep=1pt] at (3.3,5.1) {
\small Siamese-Car (Stream 1): \textbf{non-matching} { \normalsize \cmark} \\
\small Siamese-Plate (Stream 2): \textbf{matching} { \normalsize \xmark} \\
\small Siamese (Two-Stream): \textbf{non-matching} { \normalsize \cmark} };
\draw(0.0,6.8) node[text centered, text width=2.0cm] (img1) {
\includegraphics[width=2.1cm,height=2.1cm]{./figures/two-stream/case2_cruz1_car.png}
};
\draw(2.2,6.8) node[text centered, text width=2.2cm] (img2) {
\includegraphics[width=2.1cm,height=1.0cm]{./figures/two-stream/case2_cruz1_plate.png}
};
\draw(4.35,6.8) node[text centered, text width=2.2cm] (img3) {
\includegraphics[width=2.1cm,height=2.1cm]{./figures/two-stream/case2_cruz2_car.png}
};
\draw(6.5,6.8) node[text centered, text width=2.2cm] (img4) {
\includegraphics[width=2.1cm,height=1.0cm]{./figures/two-stream/case2_cruz2_plate.png}
};
\node[draw,align=left,text width=8.5cm,text height=0.2cm, inner sep=1pt] at (3.3,8.5) {
\small Siamese-Car (Stream 1): \textbf{matching} { \normalsize \xmark}\\
\small Siamese-Plate (Stream 2): \textbf{non-matching} { \normalsize \cmark}\\
\small Siamese (Two-Stream): \textbf{non-matching} { \normalsize \cmark} };
\draw(0.0,10.2) node[text centered, text width=2.0cm] (img1) {
\includegraphics[width=2.1cm,height=2.1cm]{./figures/two-stream/case3_cruz1_car.png}
};
\draw(2.2,10.2) node[text centered, text width=2.2cm] (img2) {
\includegraphics[width=2.1cm,height=1.0cm]{./figures/two-stream/case3_cruz1_plate.png}
};
\draw(4.35,10.2) node[text centered, text width=2.2cm] (img3) {
\includegraphics[width=2.1cm,height=2.1cm]{./figures/two-stream/case3_cruz2_car.png}
};
\draw(6.5,10.2) node[text centered, text width=2.2cm] (img4) {
\includegraphics[width=2.1cm,height=1.0cm]{./figures/two-stream/case3_cruz2_plate.png}
};
\node[draw,align=left,text width=8.5cm,text height=0.2cm, inner sep=1pt] at (3.3,11.9) {
\small Siamese-Car (Stream 1): \textbf{non-matching} { \normalsize \xmark} \\
\small Siamese-Plate (Stream 2): \textbf{non-matching} { \normalsize \xmark}\\
\small Siamese (Two-Stream): \textbf{non-matching} { \normalsize \xmark}};
\draw(0.0,13.6) node[text centered, text width=2.0cm] (img1) {
\includegraphics[width=2.1cm,height=2.1cm]{./figures/two-stream/case4_cruz1_car.png}
};
\draw(2.2,13.6) node[text centered, text width=2.2cm] (img2) {
\includegraphics[width=2.1cm,height=1.0cm]{./figures/two-stream/case4_cruz1_plate.png}
};
\draw(4.35,13.6) node[text centered, text width=2.2cm] (img3) {
\includegraphics[width=2.1cm,height=2.1cm]{./figures/two-stream/case4_cruz2_car.png}
};
\draw(6.5,13.6) node[text centered, text width=2.2cm] (img4) {
\includegraphics[width=2.1cm,height=1.0cm]{./figures/two-stream/case4_cruz2_plate.png}
};
\end{tikzpicture}
\caption{Inference results (testing set): from top-to-bottom an example where the three architectures failed (severe lighting conditions); Siamese-Car failed (similar vehicle shape); Siamese-Plate failed (similar license plate); and, at bottom, the three architectures found a correct matching.}
\label{fig:results}
\end{figure}
\section{Introduction}~\label{cha:intro}
This paper addresses the problem of matching moving vehicles that appear in two videos taken by cameras with non-overlapping fields of view (see Fig.~\ref{fig:setup}). This is a common sub-problem for several applications in intelligent transportation systems, such as enforcement of road speed limits, criminal investigations, monitoring of commercial transportation vehicles, and traffic management.
\begin{figure}[!htb]
\begin{tikzpicture}
\draw(0.0,1.7) node[] (image) {
\frame{\includegraphics[width=6.0cm]{./figures/curitiba_info3.png}}
};
\draw(3.3,3.2) node[text centered, text width=2.7cm, inner sep=0pt] (image) { \frame{\includegraphics[width=3.5cm]{./figures/cruz1.png}} };
\draw(3.3,0.2) node[text centered, text width=2.7cm, inner sep=0pt] (image) { \frame{\includegraphics[width=3.5cm]{./figures/cruz2.png}} };
\end{tikzpicture}
\caption{System setup: a traffic engineering company placed, at two different traffic lights, a pair of low-cost full-HD cameras, properly calibrated and time synchronized. In general, not every vehicle seen in one video appears in the other video.
} \label{fig:setup}
\end{figure}
Some of these applications traditionally use physical sensors placed over, near, or under the road, such as pressure-sensitive cables and inductive loop detectors~\cite{5763781,5659904}. However, these detectors present limitations, e.g., vehicles may enter or leave the road between the two measurement points. Other applications use optical character recognition (OCR) algorithms~\cite{hiercnn} to translate the license plate image regions into character codes, such as ASCII. However, this translation is not straightforward when two or more lanes are recorded at the same time, producing small license plate regions that are very hard to read. Recognition of vehicles by shape and color is not sufficiently reliable either, since vehicles of the same brand and model often look exactly the same \cite{she2004vehicle}.
For such reasons, our solution identifies vehicles across non-overlapping cameras using a hybrid strategy: we developed a Two-Stream Siamese Neural Network that is fed, simultaneously, with two of the most distinctive and persistent features available, the vehicle's shape and the registration license plate. For the fusion of the two streams, we concatenate the distance descriptors extracted from each single Siamese network and add fully connected layers for classification. We also show that the combination of small image patches produces a fast network that outperforms more complex architectures, even when they use higher-resolution image patches. The rest of this paper is organized as follows. In Sec.~\ref{sec:related}, we discuss the related work. In Sec.~\ref{sec:method}, we describe the Two-Stream Siamese Network. Experiments are reported in Sec.~\ref{sec:experiments}. Finally, in Sec.~\ref{sec:conclusions} we state the conclusions.
\section{Two-Stream Siamese Network}~\label{sec:method}
The inference flowchart of the proposed Two-Stream Siamese Network is shown in Fig.~\ref{fig:system-overview}. The left stream processes the vehicle's shape while the right stream processes the license plate. The network weights $W$ are shared only within each stream. We merged the distance vectors of each Siamese --- whose similarity is measured by an element-wise $\mathbb{L}_1$ distance --- and combined the strengths of both features by using a sequence of
fully connected layers with dropout regularization (20\%) in order to avoid over-fitting. Then, we used a softmax activation function to separate matching pairs from non-matching pairs.
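The fusion head just described can be sketched, shapes only, in NumPy. This is an illustrative forward pass with random weights, assuming each CNN has already produced a 512-dimensional embedding; it is not the trained model, and it omits dropout (which is inactive at inference anyway).

```python
import numpy as np

rng = np.random.default_rng(0)

def fc(x, n_out):
    """Toy fully connected layer with small random weights (shapes only)."""
    w = rng.standard_normal((x.shape[-1], n_out)) * 0.01
    return np.maximum(x @ w, 0.0)  # ReLU

def two_stream_head(shape_emb1, shape_emb2, plate_emb1, plate_emb2):
    """Per-stream element-wise distance, concatenation, FC stack, softmax."""
    d_shape = np.abs(shape_emb1 - shape_emb2)   # 512 distances (stream 1)
    d_plate = np.abs(plate_emb1 - plate_emb2)   # 512 distances (stream 2)
    x = np.concatenate([d_shape, d_plate])      # fusion -> 1024 values
    for n in (1024, 512, 256):                  # FC 1024 -> 512 -> 256
        x = fc(x, n)
    logits = x @ rng.standard_normal((256, 2))  # final FC with 2 units
    z = np.exp(logits - logits.max())
    return z / z.sum()                          # softmax: {match, non-match}
```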
\begin{figure}[!htb]
\begin{tikzpicture}
\tikzset{blockS/.style={draw, rectangle, text centered, drop shadow, fill=white, text width=1.0cm}}
\tikzset{blockL/.style={draw, rectangle, text centered, drop shadow, fill=white, text width=3.2cm}}
\tikzset{blockG/.style={draw, rectangle, text centered, drop shadow, fill=white, text width=3.6cm}}
\path[->](1.0,6.6) node[] {\textbf{\large Camera 1}};
\path[->](5.5,6.6) node[] {\textbf{\large Camera 2}};
\draw(0.0,4.9) node[text centered, text width=2.0cm] (img1) {
{\small Shape \\ 96$\times$96 pixels}\\
\vspace{0.1cm}
\includegraphics[width=2.0cm,height=2.0cm]{./figures/two-stream/car_2595_atc1189_cruz1.png}
};
\draw(2.2,4.9) node[text centered, text width=2.2cm] (img2) {
{\small Plate \\ 96$\times$48 pixels}\\
\vspace{0.1cm}
\includegraphics[width=2.0cm,height=1.0cm]{./figures/two-stream/placa_2595_atc1189_cruz1.png}
};
\draw(4.4,4.9) node[text centered, text width=2.2cm] (img3) {
{\small Shape \\ 96$\times$96 pixels}\\
\vspace{0.1cm}
\includegraphics[width=2.0cm,height=2.0cm]{./figures/two-stream/car_8414_atc1182_cruz2.png}
};
\draw(6.6,4.9) node[text centered, text width=2.2cm] (img4) {
{\small Plate \\ 96$\times$48 pixels}\\
\vspace{0.1cm}
\includegraphics[width=2.0cm,height=1.0cm]{./figures/two-stream/placa_8414_atc1182_cruz2.png}
};
\path[->](0.0,2.4) node[blockS] (cnn1) {
\textbf{CNN}
};
\path[->](2.2,2.4) node[blockS] (cnn2) {
\textbf{CNN}
};
\path[->](1.1,1.3) node[blockL] (d1) {
\textbf{Distance ($\mathbb{L}_1$)}
};
\draw [->] (cnn1) to [out=270,in=110] (d1);
\draw [->] (cnn2) to [out=270,in=70] (d1);
\path[->](4.4,2.40) node[blockS] (cnn3) {
\textbf{CNN}
};
\path[->](6.6,2.40) node[blockS] (cnn4) {
\textbf{CNN}
};
\path[->](5.5,1.3) node[blockL] (d2) {
\textbf{Distance ($\mathbb{L}_1$)}
};
\draw [->] (cnn3) to [out=270,in=110] (d2);
\draw [->] (cnn4) to [out=270,in=70] (d2);
\path[<->] (cnn1) edge[line width=0.3mm] node[fill=white, anchor=center, pos=0.5,font=\bfseries, inner sep=0pt] {W} (cnn2);
\path[<->] (cnn3) edge[line width=0.3mm] node[fill=white, anchor=center, pos=0.5,font=\bfseries, inner sep=0pt] {W} (cnn4);
\path[->](3.3,0.1) node[blockG] (fusion) {
\textbf{Concatenate (Fusion)}
};
\draw [->] (d1) to [out=270,in=130] (fusion);
\draw [->] (d2) to [out=270,in=50] (fusion);
\path[->](3.3,-0.7) node[blockG] (fc1) {
\textbf{Fully Connected (1024)}
};
\path[->](3.3,-1.5) node[blockG] (fc2) {
\textbf{Fully Connected (512)}
};
\path[->](3.3,-2.3) node[blockG] (fc3) {
\textbf{Fully Connected (256)}
};
\path[->](3.3,-3.1) node[blockG] (fc4) {
\textbf{Fully Connected (2)}
};
\draw [->] (fusion) to [] (fc1);
\draw [->] (fc1) to [] (fc2);
\draw [->] (fc2) to [] (fc3);
\draw [->] (fc3) to [] (fc4);
\draw [rounded corners=0.5cm, dashed] (3.5,0.8) rectangle (7.5,2.9) node [midway]{};
\path[->](0.2,0.6) node[] {\textbf{Stream 1}};
\draw [rounded corners=0.5cm, dashed] (-0.9,0.8) rectangle (3.0,2.9) node [midway]{};
\path[->](6.4,0.6) node[] {\textbf{Stream 2}};
\draw [->] (img1) to [out=270,in=90] (cnn1);
\draw [->] (img3) to [out=270,in=90] (cnn2);
\draw [->] (img2) to [out=270,in=90] (cnn3);
\draw [->] (img4) to [out=270,in=90] (cnn4);
\path[->](1.5,-4.2) node[] (out1) {\textbf{Matching}};
\path[->](5.2,-4.2) node[] (out2) {\textbf{\textcolor{red}{Non-Matching}}};
\draw [->] (fc4) to [out=210,in=90] (out1);
\draw [->,thick,red] (fc4) to [out=330,in=90] (out2);
\end{tikzpicture}
\caption{Inference flowchart of the proposed Two-Stream Siamese for Vehicle Matching.}
\label{fig:system-overview}
\end{figure}
We extracted the vehicle's rear end and license plate by using the real-time motion detector and algorithms described by Luvizon~\emph{et~al.}~\cite{Luvizon:2016,Minetto:2013}. The CNN used in our Siamese is shown in Fig.~\ref{fig:CNN}. Basically, it is a simplified VGG-based network~\cite{vgg}, with a reduced number of layers so as to save computational effort. Each CNN provides a vector with 512 features; each Distance ($\mathbb{L}_1$) block provides a vector with 512 distances; finally, Concatenate (Fusion) provides a vector with 1024 distances.
\begin{figure}[!htb]
\begin{tikzpicture}[scale=1.0]
\begin{scope}[scale=1.2]
\pic [fill=white, draw=black] at (0,0) {annotated cuboid={width=1, height=20, depth=16, units=mm}};
\pic [fill=darkgray!70, draw=black] at (0.6,0) {annotated cuboid={width=2, height=20, depth=16, units=mm}};
\pic [fill=blue!15, draw=black] at (0.9,-0.24) {annotated cuboid={width=2, height=17, depth=14, units=mm}};
\pic [fill=darkgray!70, draw=black] at (1.6,-0.24) {annotated cuboid={width=3, height=17, depth=14, units=mm}};
\pic [fill=blue!15, draw=black] at (2.0,-0.48) {annotated cuboid={width=3, height=14, depth=14, units=mm}};
\pic [fill=darkgray!70, draw=black] at (2.7,-0.48) {annotated cuboid={width=3, height=14, depth=14, units=mm}};
\pic [fill=blue!15, draw=black] at (3.1,-0.72) {annotated cuboid={width=3, height=11, depth=14, units=mm}};
\pic [fill=darkgray!70, draw=black] at (3.9,-0.72) {annotated cuboid={width=4, height=11, depth=14, units=mm}};
\pic [fill=blue!15, draw=black] at (4.4,-0.96) {annotated cuboid={width=4, height=8, depth=14, units=mm}};
\pic [fill=darkgray!70, draw=black] at (5.1,-0.96) {annotated cuboid={width=5, height=8, depth=14, units=mm}};
\pic [fill=blue!15, draw=black] at (5.8,-1.20) {annotated cuboid={width=5, height=5, depth=14, units=mm}};
\pic [fill=red, draw=black] at (6.8,-1.44) {annotated cuboid={width=8, height=2, depth=2, units=mm}};
{\scriptsize
\draw [decoration={brace,mirror},decorate] (-0.2,-1.70) -- (0.30,-1.70) node [pos=0.5,anchor=north,yshift=-0.05cm] {W$\times$H$\times$3};
\draw [decoration={brace},decorate] (0.7,0.62) -- (1.40,0.62) node [pos=0.5,anchor=south,yshift=0.05cm] {64 filters};
\draw [decoration={brace},decorate] (1.6,0.3) -- (2.6,0.3) node [pos=0.5,anchor=south,yshift=0.05cm] {128 filters};
\draw [decoration={brace},decorate] (2.7,0.05) -- (3.7,0.05) node [pos=0.5,anchor=south,yshift=0.05cm] {128 filters};
\draw [decoration={brace},decorate] (3.9,-0.2) -- (4.9,-0.2) node [pos=0.5,anchor=south,yshift=0.05cm] {256 filters};
\draw [decoration={brace},decorate] (5.1,-0.45) -- (6.4,-0.45) node [pos=0.5,anchor=south,yshift=0.05cm] {512 filters};
\path[->](0.2,0.6) node[] {Input};
\path[->](6.4,-1.8) node[] {FC (512 units)};
}
\end{scope}
\end{tikzpicture}
\caption{Small-VGG: a VGG-based convolutional neural network used in the Two-Stream Siamese Network. The dark gray boxes denote convolutions by using filter kernel size of $3 \times 3$; the light blue boxes denote $2 \times 2$ max-pooling layers; and, the red box depicts a fully connected layer.}
\label{fig:CNN}
\end{figure}
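The feature-map sizes implied by Fig.~\ref{fig:CNN} can be traced with a small helper. This is a hypothetical sketch assuming one $3\times3$ `same'-padded convolution per block followed by $2\times2$ max pooling; the figure does not fully specify the padding or the number of convolutions per block.

```python
def small_vgg_shapes(h, w):
    """Trace (height, width, channels) through the Small-VGG sketch:
    five conv blocks with 64/128/128/256/512 filters, each assumed to be a
    3x3 'same' convolution followed by 2x2 max pooling, then a 512-unit FC."""
    shapes = [(h, w, 3)]                 # RGB input patch
    for filters in (64, 128, 128, 256, 512):
        shapes.append((h, w, filters))   # 3x3 'same' conv keeps spatial size
        h, w = h // 2, w // 2            # 2x2 max pooling halves it
        shapes.append((h, w, filters))
    return shapes, 512                   # final FC embedding size
```

For the $96\times96$ shape patch this ends at a $3\times3\times512$ map before the 512-unit fully connected layer.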
\section{Related work}~\label{sec:related}
Vehicle re-identification is an active field of research with many algorithms and extensive bibliography~\cite{5763781,5659904,zhong2019poses,8296310,8265213,8036238,tang2017multi}.
The survey of Tian~{\emph{et~al.}}~\cite{6875912} listed this problem as an open challenge for intelligent transportation systems. Traditionally, algorithms for this task were based on the comparison of electromagnetic signatures. However, as observed by Ndoye~\emph{et~al.}~\cite{5659904}, such signature-matching algorithms are exceedingly complex and depend on extensive calibrations or complicated data models.
Video-based algorithms have proven to be powerful for vehicle re-identification~\cite{5659904,8265213,8036238,tang2017multi,8451776}. Such algorithms need to address \emph{fine-grained vehicle recognition} issues~\cite{7350898}, that is,
to distinguish between subordinate categories with similar visual appearance, caused by the huge number of car designs and models that look alike. As an attempt to solve these issues, many authors proposed to use hand-crafted image descriptors such as SIFT~\cite{Zhang2016}. Recently, inspired by the tremendous progress of Siamese Neural Networks, Tang~\emph{et~al.}~\cite{tang2017multi} proposed, in 2017, to fuse deep and hand-crafted features for vehicle re-identification in traffic surveillance environments by using a Siamese Triplet Network~\cite{10.1007/978-3-319-24261-3_7}. In 2018, Yan~\emph{et~al.}~\cite{8265213} proposed a novel deep learning metric, a Triplet Loss Function, that takes into account the inter-class similarity and intra-class variance of vehicle models, considering only the vehicle's shape. Also in 2018, Liu~\emph{et~al.}~\cite{8036238} proposed a coarse-to-fine vehicle re-identification algorithm that initially filters out the potential matchings by using hand-crafted and deep features based on shape and color, and then uses the license plates in a Siamese Network together with a spatiotemporal re-ranking to refine the search.
The idea of a two-stream convolutional neural network (CNN) is not new. Ye~\emph{et~al.}~\cite{Ye:2015:ETC:2671188.2749406} proposed an architecture that uses static video frames as input in one stream and optical flow features in the other stream for video classification. Chung~\emph{et~al.}~\cite{chung2017two} also proposed a two-stream architecture composed of two Siamese CNNs fed with spatial and temporal information extracted from RGB frames and optical flow vectors for person re-identification. Zagoruyko~\emph{et~al.}~\cite{zagoruyko2015} described distinct Siamese architectures for learning to compare image patches. In particular, their Central-Surround Two-Stream architecture is similar to the one proposed here.
Finally, some authors~\cite{5763781} use self-adaptive time-window constraints to define upper and lower bounds in order to reduce the search space and narrow down the potential matches. That is, they predict a time-window size based on the distance between cameras and the traffic conditions, e.g. free flow or congested. However, we are not trying to solve the travel time estimation problem here; thus, we considered the maximum number of true or false matchings available in order to evaluate the robustness of the architectures.
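A minimal sketch of such a time-window filter follows; the field names, speed bounds and the simple distance/speed model are hypothetical, for illustration only.

```python
def predict_window(distance_m, v_min_ms, v_max_ms):
    # Fastest plausible speed gives the lower bound on travel time,
    # slowest plausible speed gives the upper bound.
    return distance_m / v_max_ms, distance_m / v_min_ms

def time_window_candidates(detections, t_departure, lower_s, upper_s):
    # Keep only detections at the second camera whose elapsed time since
    # the departure at the first camera falls inside the predicted window.
    return [d for d in detections
            if lower_s <= d["t"] - t_departure <= upper_s]
```

For a congested scenario one would simply feed a lower speed range, widening the window.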
\section{SUPPLEMENTAL MATERIAL}
\textit{In this supplementary material, we first describe the simulation of the photon emission dynamics for the atoms in the cavity, which is used to estimate the probability of success and of false-positive detection events.
We then discuss peculiar features related to the center-of-mass of the two-entangled-species interferometer when compared to conventional dual-species atom interferometers in which the center-of-mass of the two species are independent.}
\section{Simulation of the vSTIRAP dynamics}
\subsection{Model}
The single photon which is detected when the $^{87}$Rb and $^{85}$Rb atoms are in the cavity is created by a vacuum-stimulated Raman adiabatic passage (vSTIRAP). To estimate the performance of the process under the particular conditions of our work, we simulate the vSTIRAP dynamics of the two atoms using a master equation approach. We follow the treatment given in pioneering theoretical \cite{Law1997, Kuhn1999} and experimental work \cite{Kuhn2002, Keller2004}. The method is related to coherent population transfer and electromagnetically induced transparency \cite{Bergmann1998,Fleischhauer2005}.
The coupling of an atomic transition between two levels $i\leftrightarrow j$ to the cavity is given by
\begin{equation}
g_{ij}(z)=\sqrt{\frac{2\pi\mu_{ij}^2}{2\hbar \lambda \epsilon_0 V}}f(z),
\end{equation}
where $\mu_{ij}$ is the relevant transition dipole matrix element, $V$ is the mode volume of the cavity and $f(z)$ is the dependence of the vacuum electric field of the cavity on the vertical position. We assume that the atoms fall through the central portion of the cavity mode, such that the longitudinal and transversal variation of the mode strength is negligible on the scale of the distribution of the atoms' trajectories.
We now describe the evolution of the electronic state of the atoms, starting from an initial state $|S\rangle$ and ending in the final state $|F\rangle$, which are stable ground states of the atoms. Electronic excitation is reduced by introducing a detuning from the relevant transition frequency $\omega_{ij}$.
The Hamiltonian for a single atom in the cavity is then given by
\begin{equation}
H/\hbar=\Delta_C a^\dagger a+\Delta_P |S\rangle \langle S|+\Omega(t)\left(\sigma_{SE}^\dagger + \sigma_{SE}\right)+g_{FE}(t)\left(\sigma_{FE}^\dagger a + \sigma_{FE} a^\dagger \right),
\end{equation}
where $\sigma_{ij}=|i\rangle\langle j|$ is the atomic inversion operator, and the detuning values for the classical pump field and the cavity are given by $\Delta_{P}=\omega_{P}-\omega_{SE}$ and $\Delta_{C}=\omega_{C}-\omega_{FE}$. The creation and annihilation operators $a^\dagger$ and $a$ relate to the cavity photon occupation.\\
The evolution of the system over time, including spontaneous emission of the atomic excited states and the transmission of generated photons through the cavity mirrors, is described by the master equation
\begin{equation}
\frac{\partial \rho}{\partial t}
=-\frac{i}{\hbar}\left[H,\rho\right]+D(\kappa,a)+D(\gamma_{ES},\sigma_{SE})+D(\gamma_{EF},\sigma_{FE}).
\label{mEq}\end{equation}
The decay evolution term for an operator $o$ and a decay rate $Y$ is given by $D(Y,o)=Y\left( 2 o\rho o^\dagger - o^\dagger o\rho -\rho o^\dagger o \right)$. The amplitude decay rates $\kappa$, $\gamma_{ES}$ and $\gamma_{EF}$ relate to the cavity field, atomic spontaneous decay to the initial state, and decay to the final state, respectively. Assuming that the cavity field decays only by transmission through the output mirror, the photon generation efficiency, i.e. the probability of stimulated emission into the cavity mode, is given by
\begin{equation}
P_{stim}=2\kappa\int_0^{\infty}Tr\left\lbrace a^\dagger a \rho \right\rbrace dt.
\end{equation}
Similarly, the probability of an atom undergoing spontaneous emission is given by
\begin{equation}
P_{spont}=2\left(\gamma_{ES}+\gamma_{EF}\right)
\int_0^{\infty}Tr\left\lbrace |E\rangle \langle E| \rho \right\rbrace dt.
\end{equation}
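As an illustration of Eq.~(\ref{mEq}) and of the two probability integrals above, the following is a heavily simplified sketch of the propagation: a single three-level atom coupled to one cavity mode, basis truncated to a single excitation, fourth-order Runge-Kutta integration, constant pump, and purely illustrative parameter values rather than those of our simulation.

```python
# Basis ordering: 0 = |S,0>, 1 = |E,0>, 2 = |F,1>, 3 = |F,0>.
N = 4

def zeros():
    return [[0j] * N for _ in range(N)]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

def dag(A):
    return [[A[j][i].conjugate() for j in range(N)] for i in range(N)]

def axpy(C, c, A):
    # C += c * A, in place.
    for i in range(N):
        for j in range(N):
            C[i][j] += c * A[i][j]

# Illustrative rates in units of 1/us (NOT the values used in the paper).
g, kappa, gam_ES, gam_EF = 2.0, 0.5, 0.3, 0.3
Omega, dP, dC = 1.0, 0.0, 0.0

a = zeros(); a[3][2] = 1.0 + 0j        # cavity annihilation: |F,1> -> |F,0>
s_SE = zeros(); s_SE[0][1] = 1.0 + 0j  # sigma_SE = |S><E|
s_FE = zeros(); s_FE[3][1] = 1.0 + 0j  # sigma_FE = |F><E|

H = zeros()
H[0][0] = dP; H[2][2] = dC
H[1][0] = H[0][1] = Omega              # Omega (sigma_SE^dag + sigma_SE)
H[1][2] = H[2][1] = g                  # g (sigma_FE^dag a + h.c.)

def dissipator(Y, o, rho):
    # D(Y,o) = Y (2 o rho o^dag - o^dag o rho - rho o^dag o).
    od = dag(o); odo = mul(od, o)
    D = zeros()
    axpy(D, 2 * Y, mul(mul(o, rho), od))
    axpy(D, -Y, mul(odo, rho))
    axpy(D, -Y, mul(rho, odo))
    return D

def drho(rho):
    out = zeros()
    axpy(out, -1j, mul(H, rho))
    axpy(out, 1j, mul(rho, H))
    for Y, o in ((kappa, a), (gam_ES, s_SE), (gam_EF, s_FE)):
        axpy(out, 1, dissipator(Y, o, rho))
    return out

rho = zeros(); rho[0][0] = 1.0 + 0j    # atom starts in |S>, cavity empty
dt, steps = 0.02, 600
P_stim = P_spon = 0.0
for _ in range(steps):
    k1 = drho(rho)
    r2 = zeros(); axpy(r2, 1, rho); axpy(r2, dt / 2, k1); k2 = drho(r2)
    r3 = zeros(); axpy(r3, 1, rho); axpy(r3, dt / 2, k2); k3 = drho(r3)
    r4 = zeros(); axpy(r4, 1, rho); axpy(r4, dt, k3); k4 = drho(r4)
    axpy(rho, dt / 6, k1); axpy(rho, dt / 3, k2)
    axpy(rho, dt / 3, k3); axpy(rho, dt / 6, k4)
    # Discretized versions of the two probability integrals.
    P_stim += 2 * kappa * rho[2][2].real * dt
    P_spon += 2 * (gam_ES + gam_EF) * rho[1][1].real * dt

trace = sum(rho[i][i] for i in range(N))
```

A useful sanity check is that the Lindblad form preserves the trace and hermiticity of $\rho$ throughout the integration.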
\subsection{Transitions and parameters}\label{subsec:setup}
We simplify our simulation to three levels by compounding the two decay channels occurring in transition $E \rightarrow S$ into a single effective decay back into state $|S\rangle$ (see Fig. \ref{fig:specPic} a), dashed gray lines). The two decay channels $E \rightarrow S$ and $E \rightarrow F$ are scaled with the square of the Clebsch-Gordan coefficients.
In $^{87}$Rb, $\gamma_{ES}=\gamma_{EF}$ (since $1/3 + 1/6 = 1/2$), and in $^{85}$Rb, $\gamma_{ES}=4\gamma_{EF}/5$ (since $1/3 + 1/9=4/9=4/5\times 5/9$).
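These branching ratios follow from simple sums of squared Clebsch-Gordan coefficients; as a quick arithmetic check (the individual weights are the ones quoted above):

```python
from fractions import Fraction as F

# 87Rb: the two E -> S channels carry weights 1/3 and 1/6, the E -> F
# channel carries 1/2, so gamma_ES / gamma_EF = 1.
w_ES_87 = F(1, 3) + F(1, 6)
w_EF_87 = F(1, 2)

# 85Rb: the E -> S channels carry 1/3 and 1/9, the E -> F channel
# carries 5/9, so gamma_ES / gamma_EF = 4/5.
w_ES_85 = F(1, 3) + F(1, 9)
w_EF_85 = F(5, 9)
```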
Note that while $\gamma_{EF}$ only results in the loss of an atom, $\gamma_{ES}$ can result in the detection of a photon and concurrent loss of the atom. Therefore the experimental data will have to be post-selected on sequences in which one photon and both atoms are detected within the designated spatial and temporal windows.
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{FigS1.pdf}
\caption{a) Schematic representation of the energy levels involved in the D$_1$ transitions in $^{85}$Rb \ and $^{87}$Rb \ (not to scale). The dashed horizontal line indicates the virtual level through which the Raman transitions are operated. The dashed gray lines are undesired spontaneous decay channels. The relevant Clebsch-Gordan coefficients are shown. The black arrows indicate the transitions resonant to the cavity, while the red and blue arrows denote the pump transitions.
b) Example trace of the photon emission probability as a function of time for the two isotopes (plain red line for $^{85}$Rb, dashed blue line for $^{87}$Rb), showing near-identical emission profiles.
The black lines show the pump fields $\Omega(t)$ for $^{85}$Rb \ and $^{87}$Rb.
}\label{fig:specPic}
\end{figure}
We envisage a cavity with similar parameters as used in \cite{Wilk2007}, but in a ring geometry, with coupling strength, field amplitude decay and atomic decay rates $\lbrace g,\kappa,\gamma \rbrace/2\pi=\lbrace 2.24,0.5,2.9 \rbrace\,$MHz for the $|F=2, m_F=2\rangle\leftrightarrow |F^\prime=3, m_F^\prime=3\rangle$ transition of the D$_1$ line of the $^{85}$Rb \ atoms.
The $|F=1, m_F=1\rangle\leftrightarrow |F^\prime=2, m_F^\prime=2\rangle$ transition of the D$_1$ line of $^{87}$Rb \ also couples to the cavity.
The $^{85}$Rb \ and $^{87}$Rb \ atoms enter the cavity in the states $|F=3, m_F=3\rangle$ and $|F=2, m_F=2\rangle$, respectively, and are driven with individual, $\pi$-polarized pump beams.
The coupling strength $g_{max}$ for $^{87}$Rb \ is reduced to $2\pi\times2.12\,$MHz because of its slightly smaller transition matrix element. This reduction can be compensated for by a small increase of the driving amplitude, in order to create indistinguishable wavepackets. The cavity is detuned by $\Delta/2\pi=1.367\,$GHz from the $^{85}$Rb \ transition and by $-\Delta$ from the $^{87}$Rb \ line, leading to identical emission frequencies. This setting is chosen as neighboring transitions are either far-detuned or forbidden (See Fig. \ref{fig:specPic} a)).
\subsection{Emission}\label{subsec:emission}
\begin{figure}[t!]
\centering
\includegraphics[width=1.\columnwidth]{suppl_pic_PsPf.pdf}
\caption{a) Log-linear plot of the probability of success $P_S$ versus the probability of a false-positive event $P_F$. $^{87}$Rb \ and $^{85}$Rb \ (blue and red dots) show near-identical behaviour. b) Logarithmic plot of the ratio of the two probabilities. For comparison, the dashed black line shows a linear dependence.}\label{fig:success}
\end{figure}
The cavity mode is assumed to have a waist of $w_0=40\,\mu$m. The atoms are prepared in the cavity (e.g. by laser cooling in optical tweezers) and are dropped from their trap shortly before the beginning of the driving fields, which occur on a time-scale $t_0 \lesssim 1$~ms after the drop.
We therefore assume that the atoms experience a constant coupling $g_{max}$ to the cavity mode.
The coupling to the cavity, together with a time-dependent excitation pulse amplitude, controls the shape of the photon pulse emitted by the atom-cavity system.
In order to properly control the emission characteristics, a number of limiting factors need to be considered. The cavity linewidth sets a suitable lower bound for the emission pulse bandwidth, and also for the Zeeman splitting required to lift the degeneracy between allowed two-photon transitions in the electronic state manifolds of the two atomic species.
We therefore examine Gaussian pulse envelopes of the type $\Omega(t)=\Omega_{max}e^{-((t-t_0)/\tau)^2}$ with durations between $\tau=2\,\mu$s and $\tau=10\,\mu$s, for which the aforementioned limits are not expected to have a significant effect.
We numerically explored the parameter space by varying the pump field Rabi frequency $\Omega_{max}$ (in the range of tens of MHz) and the pulse duration $\tau$, and computed the probability of stimulated emission, $P_{stim}$, and that of spontaneous emission, $P_{spon}$.
The numerical exploration indicates that the ratio $P_{stim}/P_{spon}\simeq 3.2$, which limits the probability of success.
Fig.~\ref{fig:success} a) shows the result of the numerical exploration (each dot corresponds to a value of the triple $\{\Omega_{max},\tau,t_0\}$) by displaying the success probability, $P_S=2\times P_{stim}\times(1-P_{stim})\times P_{coll}\times (1-P_{spon})^2$, versus the false-positive probability, $P_F=P_{stim}^2\times P_{coll}\times (1-P_{coll})$. The figure shows that $P_S > 14\,$\% can be achieved for these parameters. We emphasize that this value is limited purely by technological factors, such as the cavity finesse and mode field diameter. Fig. \ref{fig:success} b) shows the ratio of $P_S/P_F$, underlining the trade-off between the rate of successful and false-positive events.
As exemplary values, $P_{stim}=10.4\,$\% leads to $P_S=7.0\,$\% and $P_F=0.26\,$\%, and $P_{stim}=20.0\,$\% leads to $P_S=11.2\,$\% and $P_F=0.95\,$\%.
These values for $P_F$ and $P_S$ rely on a modest assumption for the photon detection efficiency, since modern superconducting detectors approach unity efficiency. The value of $P_{coll}=0.4$ assumes large path losses and scattering or absorption in the cavity output mirror. This value can also be improved by purely technical means.
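The exemplary values above follow directly from the stated expressions for $P_S$ and $P_F$. In the sketch below we take $P_{coll}=0.4$ and infer $P_{spon}=P_{stim}/3.2$ from the quoted ratio (an assumption, so last-digit rounding differences with the quoted numbers are expected):

```python
def probabilities(P_stim, P_coll=0.4, ratio=3.2):
    # P_spon inferred from the quoted P_stim / P_spon ratio (assumption).
    P_spon = P_stim / ratio
    P_S = 2 * P_stim * (1 - P_stim) * P_coll * (1 - P_spon) ** 2
    P_F = P_stim ** 2 * P_coll * (1 - P_coll)
    return P_S, P_F
```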
\section{Center-of-mass of the two-species interferometer}
The usual approach for describing the phase difference in an atom interferometer is the path integral formulation of quantum mechanics (see, e.g. Ref.~\cite{Storey1994}).
Following a semi-classical approach, it has been shown that the phase shift in a symmetric 3 light-pulse atom interferometer originates from the relative phase of the Raman lasers which is imprinted on the atomic wavefunction at the atom-laser interaction times \cite{Wolf2011}, such that the interferometer acts as an accelerometer \footnote{In a representation-free, full-quantum description, it has been shown that the interferometric phase originates from a product of non-commuting unitary operators which reflects the acceleration of the atom in the laser frame \cite{Schleich2013,Schleich2013a}.}.
We followed this semi-classical approach in our work: we evaluated the phase shifts imprinted on each atomic species by following the classical trajectories of the two atoms in the arms of the interferometer.
In single-atom interferometers, according to the midpoint theorem \cite{Antoine2003}, the interferometric phase can be computed by calculating the classical trajectory of the center-of-mass (COM) between the two arms of the interferometer, which is not a populated trajectory.
In dual-species single-atom interferometers, the trajectories of the COM associated with the two species are physically separated and independent.
In our proposal, remarkably, the COMs associated with the two possible states forming the superposition of Eq.~(1) of the main text follow different trajectories. Moreover, the two COMs associated with the two species are not equidistant from the total COM of the two-atom entangled state, within which interference occurs.
In this section, we analyze this non-local feature, specific to the two-atom interferometer, in more detail.
Consider the state $|\mathcal{A},\hbar \vec{k}_\mathcal{A}; \mathcal{B}, \vec{0}\rangle$ where atom $\mathcal{A}$ recoils with a velocity $\hbar \vec{k}_\mathcal{A}/m_\mathcal{A}$ and atom $\mathcal{B}$ is left unperturbed.
We call $COM_1(t)$ the center-of-mass trajectory for that state in the vertical ($z$) direction \footnote{We recall that the wavevectors in the $x$ (cavity) direction are the same for the two species within the cavity mode linewidth. Only the wavevectors associated with the pump beams in the $z$ direction are different.}. Conversely, we call $COM_2(t)$ the center-of-mass trajectory corresponding to the state $|\mathcal{A}, \vec{0}; \mathcal{B}, \hbar \vec{k}_\mathcal{B}\rangle$, where $\mathcal{B}$ recoils with the velocity $\hbar \vec{k}_\mathcal{B}/m_\mathcal{B}$.
If we concentrate on the first part of the interferometer ($0\leq t \leq T$), the trajectories are given by
\begin{equation}
COM_1(t) = \frac{\hbar k_{\mathcal{A}}}{m_{tot}} t \ \ , \ \ COM_2(t) = \frac{\hbar k_{\mathcal{B}}}{m_{tot}} t,
\label{eq:COM_state}
\end{equation}
with $m_{tot}=m_{\mathcal{A}}+m_{\mathcal{B}}$.
The trajectory of the total center-of-mass of the entangled state is:
\begin{equation}
COM_{tot}(t) = \frac{1}{2}\left[COM_1(t)+COM_2(t)\right] = \frac{\hbar k_{\mathcal{A}} + \hbar k_{\mathcal{B}}}{2m_{tot}} t,
\label{eq:COM_tot}
\end{equation}
for all times $0\leq t \leq 2T$.
The center-of-mass trajectories for each atomic species are $COM_\alpha(t)=\frac{\hbar k_\alpha}{2 m_\alpha}\, t$, where $\alpha = \mathcal{A},\mathcal{B}$.
The relative trajectories between the center-of-mass of each species and the total center-of-mass are:
\begin{eqnarray}
COM_{\mathcal{A}}(t)-COM_{tot}(t) & = & \frac{\hbar k_\mathcal{B} \times t}{2 m_{tot}} \left(\frac{m_\mathcal{B} k_\mathcal{A}}{m_\mathcal{A} k_\mathcal{B}} -1 \right) \nonumber \\
COM_{\mathcal{B}}(t)-COM_{tot}(t) & = & \frac{\hbar k_\mathcal{A} \times t}{2 m_{tot}} \left(\frac{m_\mathcal{A} k_\mathcal{B}}{m_\mathcal{B} k_\mathcal{A}} -1 \right).
\label{eq:distance_COM}
\end{eqnarray}
The center-of-mass trajectories associated to the two species are, in general, not equidistant from the total center-of-mass of the full state.
\begin{figure}[t!]
\centering
\includegraphics[width=1.\columnwidth]{illustration_COM_difference-v2.pdf}
\caption{Illustration of the splitting of the center-of-mass trajectory of each isotope (blue and red lines) from the trajectory of the total center-of-mass of the two-atom entangled state (cyan line). The right panel is a zoom to the final two-atom interferometer recombination point.
The time between the light pulses is $T=50$~ms.
Gravity is set to zero to emphasize the recoil effect.
}
\label{fig:distance_to_COM}
\end{figure}
With our choice of atomic species and transition lines ($D_1$ for both isotopes), the relative difference between the two wavevectors is small, $(k_{85}-k_{87})/k_{85}\sim 10^{-5}$, but the relative mass difference is comparatively larger, $(m_{87}-m_{85})/m_{87}\simeq 0.023$. From Eqs.~\eqref{eq:distance_COM}, the displacement between the center-of-mass of each atom and $COM_{tot}$ is macroscopic ($ 3.43$ and $-3.35 \ \mu$m for $^{85}$Rb \ and $^{87}$Rb, respectively), as mentioned in the main text (Ref.~[48]). This effect is illustrated in Fig.~\ref{fig:distance_to_COM}.
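The micrometric displacements quoted above can be reproduced from Eqs.~(\ref{eq:distance_COM}), assuming a common D$_1$ recoil wavevector ($\lambda\simeq 795$~nm for both isotopes, since their relative wavevector difference is $\sim 10^{-5}$) and evaluating the linear expressions at the recombination time $t=2T=100$~ms; small last-digit differences come from rounding the input constants.

```python
import math

hbar = 1.054571817e-34            # J s
u    = 1.66053906660e-27          # kg, atomic mass unit
m85, m87 = 84.911789738 * u, 86.909180527 * u
m_tot = m85 + m87
k = 2 * math.pi / 795e-9          # D1 recoil wavevector, k_85 ~ k_87 (assumption)
t = 2 * 50e-3                     # recombination time 2T for T = 50 ms

# Eqs. (distance_COM) with k_A = k_B = k:
d85 = hbar * k * t / (2 * m_tot) * (m87 / m85 - 1)   # COM_85 - COM_tot
d87 = hbar * k * t / (2 * m_tot) * (m85 / m87 - 1)   # COM_87 - COM_tot
```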
In dual-species single-atom interferometers, the interference occurs independently within the center-of-mass of each species, corresponding to the two C points in Fig.~\ref{fig:distance_to_COM}.
The displacement between the center-of-mass of the atoms has therefore no fundamental role, but is of technical concern in WEP tests because of gravity gradients.
In our two-atom interferometer, the interference occurs within the total center-of-mass of the two-atom state (end point of the cyan line in Fig.~\ref{fig:distance_to_COM}), which is separated from the classical recombination points associated with each species.
Such peculiar non-local effects could be magnified by using different transitions in the two atoms, such as the $D_2$ (780 nm) and $D_1$ (795 nm) lines for $^{85}$Rb \ and $^{87}$Rb \ respectively, or two different atomic species such as $^{85}$Rb and $^{133}$Cs (780 nm and 852 nm for the $D_2$ lines). In such scenarios, indistinguishability of the emitted photons can be engineered by means of wavelength conversion.
\section{Introduction}
\label{intro}
Hamiltonian systems in ${\mathbb R}^2$ that admit separation of variables were completely determined by Liouville \cite{Liouville1859} and Morera \cite{Morera}, and can be classified, see \cite{Perelomov}, into four different types according to the system of coordinates in which the separability is manifested: elliptic, polar, parabolic and Cartesian, respectively. Thus Type I Liouville systems in ${\mathbb R}^2$ are defined by natural Hamiltonians $H=K+{\cal U}$, $K=\frac{m}{2} \left( (\frac{dx_1}{dt})^2+(\frac{dx_2}{dt})^2\right)$, that in elliptic coordinates adopt the Liouville form \cite{Perelomov}.
In this work we shall establish an isomorphism between this kind of mechanical systems and Liouville systems in $S^2$ that are separable in sphero-conical coordinates, that, correspondingly, we shall call Type I Liouville systems in $S^2$.
This isomorphism will be constructed by mapping the configuration space $S^2$ by means of two gnomonic projections from the two $S^2$-hemispheres into two ${\mathbb R}^2$ planes, together with a redefinition of the physical time and the application of a linear transformation in the projecting planes. This procedure is a generalization of the method used in \cite{Gonzalez}, where the orbits of the two-fixed-center problem on $S^2$ \cite{Killing1885,Kozlov1992} were determined by inverting these transformations. The inspiration was taken from the work of Borisov and Mamaev \cite{Borisov2007}, itself based on the ideas of Albouy \cite{Al1,Al2}; the main novelty of \cite{Gonzalez} was the simultaneous consideration of two gnomonic projections in order to study the complete set of orbits, identifying each trajectory crossing the equator of $S^2$ with the conjunction of two planar unbounded orbits, one of the problem with two attractive centers and another corresponding to the system of the two associated repulsive centers.
The idea of projecting dynamical systems in constant positive curvature surfaces to planar ones goes back to Appell \cite{Appell1890,Appell1891} and has been developed in modern times by Albouy \cite{Al1,Al2,Al3,Albouy2013,Al4}, see also \cite{Borisov2016} for a detailed historical review and references on problems defined in spaces of constant curvature.
The structure of this paper is as follows: in Section 2 the gnomonic projections are constructed; Section 3 is devoted to describing the properties of Liouville Type I systems, both in $S^2$ and ${\mathbb R}^2$; in Section 4 the isomorphism is established; Section 5 contains several selected examples; and finally some comments and future perspectives are presented in the last section.
\section{Gnomonic projections from $S^2$ to ${\mathbb R}^2$}
\label{sec:1}
Let us consider the $S^2$ sphere embedded in ${\mathbb R}^3$, i.e. $(X,Y,Z)\in {\mathbb R}^3$, such that $X^2+Y^2+Z^2=R^2$. Standard spherical coordinates in $S^2$:
\[
X=R \sin \theta \cos \varphi\ ,\quad Y=R \sin \theta \sin \varphi \ ,\quad Z=R \cos \theta
\]
$\theta\in [0,\pi]$, $\varphi\in [0,2\pi)$, allow us to write the metric tensor in $TS^2$ (i.e. the restriction of Euclidean metric in $T{\mathbb R}^3$ to the sphere) in standard form:
\[
ds^2=R^2 \left( d\theta^2+\sin^2\theta \, d\varphi^2\right)\label{metrica1}
\]
The gnomonic projections from the North/South hemispheres: $S^2_+=\{ (X,Y,Z)\in S^2/ Z>0\}$, $S^2_-=\{ (X,Y,Z)\in S^2/ Z<0\}$, to the ${\mathbb R}^2$ plane, with respect to the points $(0,0,\pm R)$, are defined by the change of variables
\[
\Pi_\pm: S_\pm^2 \longrightarrow {\mathbb R}^2\quad \Rightarrow \quad \left\{ \begin{array}{ll} x=\frac{R}{Z} X = R \tan \theta \cos \varphi \\ & \\ y=\frac{R}{Z} Y =R \tan \theta \sin \varphi \end{array}\right.\ ,\quad \varphi\in[0,2\pi)
\]
where $\theta \in \left[ 0, \frac{\pi}{2} \right)$ in the case of $\Pi_+$ and $\theta \in \left( \frac{\pi}{2},\pi\right] $ for $\Pi_-$. The inverse maps $\Pi_{\pm}^{-1}: {\mathbb R}^2 \longrightarrow S_{\pm}^2$, read:
\[
X=\frac{\pm Rx}{\sqrt{R^2+x^2+y^2}}\ ,\ Y=\frac{\pm Ry}{\sqrt{R^2+x^2+y^2}}\ ,\ Z=\frac{\pm R^2}{\sqrt{R^2+x^2+y^2}}
\]
where the upper signs correspond to $\Pi_+^{-1}$ and the lower signs to $\Pi_-^{-1}$; the signs of $X$ and $Y$ flip together with that of $Z$, since $x=\frac{R}{Z}X$ and $y=\frac{R}{Z}Y$ with $Z<0$ on $S^2_-$.
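A quick numerical round-trip check of the two projections and their inverses (note that for $\Pi_-^{-1}$ the signs of $X$ and $Y$ must flip together with that of $Z$, since $x=RX/Z$ with $Z<0$; the sample points are arbitrary):

```python
import math

R = 1.0

def project(X, Y, Z):
    # Gnomonic projection Pi_+ (Z > 0) or Pi_- (Z < 0): x = RX/Z, y = RY/Z.
    return R * X / Z, R * Y / Z

def unproject(x, y, sign):
    # Inverse map; sign = +1 for Pi_+^{-1}, sign = -1 for Pi_-^{-1}.
    r = math.sqrt(R**2 + x**2 + y**2)
    return sign * R * x / r, sign * R * y / r, sign * R**2 / r

P = (0.3, -0.4, math.sqrt(1 - 0.25))    # point on the unit sphere, Z > 0
Q = (0.3, -0.4, -math.sqrt(1 - 0.25))   # mirror point, Z < 0
```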
The projections $\Pi_{\pm}$ define two copies of the Riemannian manifold $({\mathbb R}^2,g)$ where the metric tensor $g$ in each copy is given by:
\begin{equation}
ds^2=\frac{R^2}{(R^2+x^2+y^2)^2} \left( (R^2+y^2) dx^2 -2xy\, dx \, dy +(R^2+x^2) dy^2\right)\label{metrica2}
\end{equation}
with associated Christoffel symbols: $ \Gamma_{22}^1= \Gamma_{11}^2=0$,
\[
\Gamma_{11}^1=2\Gamma_{12}^2=2\Gamma_{21}^2= \frac{-2x}{R^2+x^2+y^2}\, ,\quad \Gamma_{22}^2=2\Gamma_{12}^1=2\Gamma_{21}^1= \frac{-2y}{R^2+x^2+y^2}\ \ .
\]
\begin{figure}
\begin{center} \includegraphics[height=7cm]{doblegnomonic}
\caption{Gnomonic projections $\Pi_+$ and $\Pi_-$.}
\end{center}
\label{fig:1}
\end{figure}
Gnomonic projections map geodesics in $S^2$ into straight lines in ${\mathbb R}^2$. In fact the geodesic equations for the metric (\ref{metrica2}):
\[
\nabla_{\dot{\bf x}} {\dot{\bf x}} =0 \Rightarrow \left\{ \begin{array}{lcc} \ddot{x}+\Gamma_{11}^1 \dot{x}^2+2\Gamma_{12}^1 \dot{x}\dot{y} +\Gamma_{22}^1 \dot{y}^2 &=& 0\\ \ddot{y}+\Gamma_{11}^2 \dot{x}^2+2\Gamma_{12}^2 \dot{x}\dot{y} +\Gamma_{22}^2 \dot{y}^2 &=& 0 \end{array}\right.
\]
where ${\bf x}\equiv (x(t),y(t))\in {\mathbb R}^2$ and dots represent derivative with respect to $t$, can be converted, by changing from physical to local (or \textit{projected}) time, into trivial standard form:
\[
d\tau = \frac{R^2+x^2+y^2}{R^2} dt \quad \Rightarrow \quad x^{\prime\prime}=0\ ,\quad y^{\prime\prime}=0
\]
where primes denote derivation with respect to $\tau$.
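As a numerical illustration that the gnomonic projection maps geodesics to straight lines, one can project an arc of an arbitrarily tilted great circle lying in $S^2_+$ and check that the images are collinear (the particular basis vectors and arc parameters below are illustrative):

```python
import math

R = 1.0

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

# Orthonormal basis of an arbitrary (tilted) great-circle plane.
v1 = normalize((1.0, 2.0, 3.0))
w = (0.0, 1.0, 1.0)
d = sum(a * b for a, b in zip(w, v1))
v2 = normalize(tuple(a - d * b for a, b in zip(w, v1)))

def circle_point(t):
    # Point of the great circle; lies in S^2_+ for the t values used below.
    return tuple(R * (math.cos(t) * a + math.sin(t) * b)
                 for a, b in zip(v1, v2))

def project(P):
    X, Y, Z = P
    return R * X / Z, R * Y / Z

p1, p2, p3 = (project(circle_point(t)) for t in (-0.3, 0.1, 0.4))
# Collinearity: the 2D cross product of the two chords must vanish.
cross = ((p2[0] - p1[0]) * (p3[1] - p1[1])
         - (p2[1] - p1[1]) * (p3[0] - p1[0]))
```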
Given a mechanical problem in $S^2$ defined by a potential function ${\cal U}$, the projection of Newton equations in $S^2_+$ or $S^2_-$ to $({\mathbb R}^2,g)$ can be written as:
\begin{equation}
\nabla_{\dot{\bf x}} {\dot{\bf x}} =-{\rm grad} \, {\cal U}({\bf x})\label{equation1}
\end{equation}
where ${\bf x}\equiv (x,y)$, and covariant derivatives and the gradient are associated to the $g$ metric (\ref{metrica2}). Changing to projected time, equations (\ref{equation1}) will be written as:
\begin{equation}
{\bf x}'' = -{\rm grad} \, {\cal U}({\bf x})\Rightarrow \left\{ \begin{array}{lcl} x^{\prime\prime}&=&-g^{11}\frac{\partial{\cal U}}{\partial x}-g^{12}\frac{\partial{\cal U}}{\partial y} \\ y^{\prime\prime}&=& -g^{21}\frac{\partial{\cal U}}{\partial x}-g^{22}\frac{\partial{\cal U}}{\partial y}\end{array} \right. \label{newton}
\end{equation}
where $g^{ij}$ denote the components of $g^{-1}$, the inverse of the metric $g$.
We now pose the following question: is it possible to understand equations (\ref{newton}) as Newton equations for a mechanical system in the Euclidean ${\mathbb R}^2$ plane with time $\tau$? In other words: does there exist a function ${\cal V}(x_1,x_2)$ such that the equations
\begin{equation}
x_1^{\prime\prime}=-\frac{\partial{\cal V}}{\partial x_1} \quad , \quad x_2^{\prime\prime}=-\frac{\partial{\cal V}}{\partial x_2} \label{newton1}
\end{equation}
are equivalent to (\ref{newton})?
The answer was given by Albouy \cite{Al1} and developed explicitly by Borisov and Mamaev \cite{Borisov2007} for the case of the Killing problem restricted to the Northern hemisphere, i.e. the problem of two Kepler centers in $S^2_+$. The equivalence (trajectory isomorphism) was achieved in this concrete case via the linear transformation $x_1=x$, $x_2=\frac{1}{\sigma}y$, for an adequate value of the parameter $\sigma$, in equations (\ref{newton}). Moreover, in \cite{Borisov2007} this isomorphism was extended to other mechanical systems and in general to systems admitting separation of variables in sphero-conical coordinates in $S^2_+$. In \cite{Gonzalez} the equivalence for the Killing problem was applied to the complete sphere by considering the two projections $\Pi_+$ and $\Pi_-$ simultaneously. A delicate point is the gluing of the inverse projections at the equator of the sphere: orbits crossing the equator have to be described by the differentiable gluing of two pieces coming from unbounded orbits in each of the two planes, respectively.
In this work, following \cite{Borisov2007}, we shall show that these results are valid for the class of Type I Liouville systems in the whole $S^2$, i.e. systems separable in sphero-conical coordinates in $S^2$, which are transformed by the gnomonic projections and the linear transformation into Liouville systems of Type I in ${\mathbb R}^2$ (separable in elliptic coordinates) with respect to the ``non-physical'' time $\tau$.
\section{Liouville type I systems in $S^2$ and ${\mathbb R}^2$}
We shall refer to Hamilton-Jacobi separable spherical systems in sphero-conical coordinates as Liouville dynamical systems of Type I in $S^2$, in analogy with the planar case for elliptic coordinates, see \cite{Perelomov}.
Sphero-conical coordinates $U\in(\bar\sigma,1)$, $V\in (-\bar\sigma,\bar\sigma)$ describe points on the $S^2$ sphere by means of the geodesic distances $R\theta_1$ and $R\theta_2$ from the particle position to two fixed points that we choose, without losing generality, as: $F_1=(R\bar{\sigma},0,R\sigma)$, $F_2=(-R\bar{\sigma},0,R\sigma)$, $\sigma=\cos \theta_f$, $\bar{\sigma}=\sin \theta_f$, see Figure 2, in the form:
\[
\theta_1={\rm arccos}\left(\sigma \cos\theta+\bar{\sigma}\sin\theta\cos\varphi\right)\ ,\quad \theta_2={\rm arccos}\left(\sigma \cos\theta-\bar{\sigma}\sin\theta\cos\varphi\right)
\]
Sphero-conical coordinates are thus defined by replicating on the sphere the \lq\lq gardener\rq\rq \ construction which allowed Euler to define elliptic coordinates in ${\mathbb R}^2$:
\[
U=\sin\frac{\theta_1+\theta_2}{2}\ ,\quad V=\sin\frac{\theta_2-\theta_1}{2}
\]
and the change of coordinates is the following:
\[
X=\frac{R}{\bar{\sigma}} UV\ ,\quad Y^2=\frac{R^2}{\sigma^2\bar{\sigma}^2} (U^2-\bar{\sigma}^2)(\bar{\sigma}^2-V^2)\ ,\quad Z^2=\frac{R^2}{\sigma^2} (1-U^2)(1-V^2) \ .
\]
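These relations can be checked numerically by starting from a point given in spherical coordinates, computing the focal angles $\theta_1,\theta_2$ and the coordinates $(U,V)$, and verifying the three formulas above (the numerical values of $\theta_f$, $\theta$ and $\varphi$ are arbitrary illustrative choices):

```python
import math

R, theta_f = 1.0, 0.7
sigma, sbar = math.cos(theta_f), math.sin(theta_f)

# A generic point in the northern hemisphere, in spherical coordinates.
theta, phi = 0.5, 0.8
X = R * math.sin(theta) * math.cos(phi)
Y = R * math.sin(theta) * math.sin(phi)
Z = R * math.cos(theta)

# Geodesic polar angles from the two foci, then the coordinates (U, V).
t1 = math.acos(sigma * math.cos(theta) + sbar * math.sin(theta) * math.cos(phi))
t2 = math.acos(sigma * math.cos(theta) - sbar * math.sin(theta) * math.cos(phi))
U = math.sin((t1 + t2) / 2)
V = math.sin((t2 - t1) / 2)
```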
\begin{figure}
\begin{center}
\includegraphics[height=5cm]{trigo2}
\caption{Position of the particle in the sphere relative to two fixed points or foci.}
\end{center}
\label{fig:1}
\end{figure}
The kinetic energy of a particle moving on the $S^2$ sphere as configuration space, expressed in sphero-conical coordinates, reads:
\[
K=\frac{mR^2}{2}\, \left[ \frac{U^2-V^2}{(1-U^2)(U^2-\bar{\sigma}^2)} \left( \frac{dU}{dt}\right)^2 +\frac{U^2-V^2}{(1-V^2)(\bar{\sigma}^2-V^2)} \left( \frac{dV}{dt}\right)^2 \right] \ ,
\]
where $t$ is the physical time; we stress that $K$ is singular on the Equator, i.e., on the circle $Z=0$, equivalently $U=1$.
Changing from physical to local time, $d\varsigma = \frac{dt}{U^2-V^2}$, the Kinetic energy is rewritten as:
\[
K=\frac{m R^2}{2(U^2-V^2)} \left[ \frac{1}{(1-U^2)(U^2-\bar{\sigma}^2)} \left( \frac{dU}{d\varsigma}\right)^2 +\frac{1}{(1-V^2)(\bar{\sigma}^2-V^2)} \left( \frac{dV}{d\varsigma}\right)^2 \right]\, .
\]
We define a natural dynamical system as Liouville of Type I in $S^2$ if the potential energy is a function of the form:
\begin{equation}
{\cal U}(U,V) = \frac{1}{U^2-V^2} \left( F(U)+G(V) \right) \quad .\label{typeIS2}
\end{equation}
Potentials of this kind, with regular enough functions $F(U)$ and $G(V)$, give rise to equations of motion which are separable in the $U$ and $V$ evolutions.
Systems of this type are automatically completely integrable. The first integral of motion, the mechanical energy $E=K+{\cal U}$, leads to the separated expressions:
\[
- \frac{m R^2\left( \frac{dV}{d\varsigma}\right)^2}{2(1-V^2)(\bar{\sigma}^2-V^2)}\, -G(V) -E V^2 \, =\, \frac{m R^2 \left( \frac{dU}{d\varsigma}\right)^2}{2(1-U^2)(U^2-\bar{\sigma}^2)}\, +F(U) - E U^2
\]
which must therefore be equal to a constant $-\Omega$, a second invariant in involution with the energy. Rearranging these expressions we finally reduce the equations of motion to the uncoupled system of first-order ODEs:
\begin{eqnarray}
\left( \frac{dU}{d\varsigma}\right)^2 &=& \frac{2}{mR^2}\ (1-U^2)(U^2-\bar{\sigma}^2)\, (-\Omega+E \, U^2 - F(U)) \label{ode} \\ \left( \frac{dV}{d\varsigma}\right)^2 &=& \frac{2}{mR^2}\ (1-V^2)(\bar{\sigma}^2-V^2)\, (\Omega-E \, V^2 - G(V)) \label{vode} \quad .
\end{eqnarray}
This system is immediately integrated via the quadratures:
\begin{eqnarray}
\varsigma -\varsigma_0 &=& \pm R\sqrt{\frac{m}{2}}\int_{\bar\sigma}^U \frac{d \tilde{U}}{\sqrt{(1-\tilde{U}^2)(\tilde{U}^2-\bar{\sigma}^2)\, (-\Omega+E \, \tilde{U}^2 - F(\tilde{U}))}} \label{quadrature1} \\ \nonumber
\\
\varsigma -\varsigma_0 &=& \pm R\sqrt{\frac{m}{2}}\int_{-\bar\sigma}^V \frac{d \tilde{V}}{\sqrt{ (1-\tilde{V}^2)(\bar{\sigma}^2-\tilde{V}^2)\, (\Omega-E \, \tilde{V}^2 - G(\tilde{V}))}} \quad . \label{quadrature2}
\end{eqnarray}
and the orbits are found by inversion, if possible, of these integrals. The physical time can be recovered by integrating the expression:
\[
t=\int_{\varsigma_0}^{\varsigma} (U(\bar{\varsigma})^2-V(\bar{\varsigma})^2) d\bar{\varsigma}
\]
Liouville Type I systems in ${\mathbb R}^2$ are separable in elliptic coordinates \cite{Perelomov}. Recall that Euler elliptic coordinates in ${\mathbb R}^2$ relative to the foci: $f_1=(a,0)$, $f_2=(-a,0)$ are defined as half the sum and half the difference of the distances from the particle position to the foci:
\begin{equation}
u=\frac{r_1+r_2}{2a } \, , \ v=\frac{r_2-r_1}{2 a}
\, ; \ r_1=\sqrt{(x_1-a)^2+x_2^2} \, , \ r_2=\sqrt{(x_1+a)^2+x_2^2} \ . \label{euler}
\end{equation}
The new coordinates vary in the intervals: $-1<v<1$, $1<u<\infty$. In terms of these coordinates the particle position is defined to be
\begin{equation}
x_1=a uv \ ,\quad x_2^2= a^2\, (u^2-1)(1-v^2) \quad ,\label{elliptic}
\end{equation}
implying a two-to-one map from ${\mathbb R}^2$ to the infinite ``rectangle": $(-1,1)\times (1, +\infty)$. The Kinetic energy with respect to the local time $d\zeta = \frac{dt}{u^2-v^2}$ in this coordinate system reads
\[
K=\frac{m a^2}{2(u^2-v^2)}\, \left( \frac{1}{u^2-1} \left( \frac{du}{d\zeta}\right)^2 +\frac{1}{1-v^2} \left( \frac{dv}{d\zeta}\right)^2 \right)
\]
and the potential provides a Liouville Type I system in ${\mathbb R}^2$ if it is of the form
\begin{equation}
{\cal V}(u,v) = \frac{1}{u^2-v^2} \left( f(u)+g(v) \right) \ ,\label{sepellip}
\end{equation}
for arbitrary but sufficiently regular functions $f(u)$ and $g(v)$. A standard separability process leads to the uncoupled first-order ODEs:
\begin{eqnarray}
\left( \frac{du}{d\zeta}\right)^2 &=& \frac{2}{ma^2}\ (u^2-1)\, (-\lambda+h \, u^2 -f(u)) \label{ueq}\\
\left( \frac{dv}{d\zeta}\right)^2 &=& \frac{2}{ma^2}\ (1-v^2)\, (\lambda-h \, v^2 - g(v))\label{veq}
\end{eqnarray}
depending on the energy $h$ and the second constant of motion $\lambda$.
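The consistency of the two coordinate descriptions can be checked numerically. The following snippet is an illustrative verification, not part of the original derivation: it computes $u$ and $v$ from the defining distances (\ref{euler}) for an arbitrary point of the plane and confirms the parametrization (\ref{elliptic}):

```python
import math

def euler_elliptic(x1, x2, a):
    """Euler elliptic coordinates (u, v) of the point (x1, x2)
    relative to the foci f1 = (a, 0), f2 = (-a, 0)."""
    r1 = math.hypot(x1 - a, x2)
    r2 = math.hypot(x1 + a, x2)
    return (r1 + r2) / (2 * a), (r2 - r1) / (2 * a)

a = 1.7
x1, x2 = 0.9, -1.3                  # arbitrary test point off the focal axis
u, v = euler_elliptic(x1, x2, a)

assert u > 1 and -1 < v < 1         # ranges of the new coordinates
assert math.isclose(a * u * v, x1)                          # x1 = a u v
assert math.isclose(a**2 * (u**2 - 1) * (1 - v**2), x2**2)  # x2^2 relation
print("elliptic coordinate identities verified")
```

The identities hold exactly, up to floating-point roundoff, for any point off the focal segment.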
\section{Trajectory Isomorphism between Liouville Type I systems in $S^2$ and ${\mathbb R}^2$}
The gnomonic projection $\Pi_+$ from $S^2_+$ to ${\mathbb R}^2$ allows us to write the cartesian coordinates $(x,y)$ in terms of the sphero-conical ones:
\[
x=\frac{R\sigma}{\bar{\sigma}} \, \frac{UV}{\sqrt{1-U^2}\sqrt{1-V^2}}\ ,\quad y^2=\frac{R^2}{\bar{\sigma}^2} \, \frac{(U^2-\bar{\sigma}^2)(\bar{\sigma}^2-V^2)}{(1-U^2)(1-V^2)}
\]
The re-scaling $x_1=x$, $x_2=\frac{y}{\sigma}$ permits us to re-write these expressions in terms of Euler elliptic coordinates in the form (\ref{elliptic}) for $a=\frac{R\bar{\sigma}}{\sigma}$. Note that with this choice we have that $\Pi_+(F_1)=(a,0)=f_1$ and $\Pi_+(F_2)=(-a,0)=f_2$. Thus the ${\mathbb R}^2$ plane with coordinates $(x_1,x_2)$ is equivalently described in terms of the sphero-conical coordinates or in the elliptic form (\ref{elliptic}) via the identifications:
\begin{equation}
u=\frac{\sigma U}{\bar{\sigma} \sqrt{1-U^2}}\ , \quad v=\frac{\sigma V}{\bar{\sigma} \sqrt{1-V^2}}\label{identification1}
\end{equation}
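As a quick sanity check of the identification (\ref{identification1}), the following illustrative snippet verifies numerically that the re-scaled projected coordinates take the Euler elliptic form (\ref{elliptic}) with $a=\frac{R\bar{\sigma}}{\sigma}$. It assumes the normalization $\sigma^2+\bar{\sigma}^2=1$ of the sphero-conical parameters used throughout:

```python
import math

# Illustrative numerical check of the identification (u, v) <-> (U, V).
# Assumes the normalization sigma^2 + sbar^2 = 1.
R, sigma = 2.0, 0.6
sbar = math.sqrt(1 - sigma**2)
a = R * sbar / sigma                    # focal parameter of the projected plane

U, V = 0.9, 0.3                         # sbar < U < 1,  |V| < sbar
u = sigma * U / (sbar * math.sqrt(1 - U**2))
v = sigma * V / (sbar * math.sqrt(1 - V**2))

# Sphero-conical expressions for the gnomonically projected coordinates
x  = (R * sigma / sbar) * U * V / (math.sqrt(1 - U**2) * math.sqrt(1 - V**2))
y2 = (R**2 / sbar**2) * (U**2 - sbar**2) * (sbar**2 - V**2) / ((1 - U**2) * (1 - V**2))

# After the re-scaling x1 = x, x2 = y / sigma they take the Euler elliptic form
x1, x22 = x, y2 / sigma**2
assert math.isclose(x1, a * u * v)
assert math.isclose(x22, a**2 * (u**2 - 1) * (1 - v**2))
print("identification (u,v) <-> (U,V) verified")
```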
Let us consider a Liouville Type I system in $S^2$ with potential (\ref{typeIS2}) and the corresponding separated first-order equations (\ref{ode},\ref{vode}) with respect to the local time $\varsigma$ in $S^2$. The chain of changes from this local time $\varsigma$ to the elliptic local time $\zeta$, via going back to the physical time $t$, changing to the projected time $\tau$ and finally defining $\zeta$ in terms of $\tau$: $d\zeta = \frac{d\tau}{u^2-v^2}$, can be simply summarized in the form:
\[
d\zeta= \bar{\sigma}^2\, d\varsigma
\]
Using this expression and the identification (\ref{identification1}) one easily realizes that equations (\ref{ode},\ref{vode}) are equivalent to equations (\ref{ueq},\ref{veq}), and reciprocally, provided that the respective constants of motion are related through the equations:
\[
h=\frac{E-\Omega}{\sigma^2} \quad , \quad \lambda =\frac{\Omega}{\bar{\sigma}^2} \quad ,
\]
and the potential energy in ${\mathbb R}^2$ is obtained from the potential energy in $S^2$, and vice versa,
via the identities:
\begin{equation}
f(u)=\frac{\bar{\sigma}^2 u^2+\sigma^2}{\sigma^2 \bar{\sigma}^2}\, F(U(u))\ , \quad
g(v)=\frac{\bar{\sigma}^2 v^2+\sigma^2}{\sigma^2 \bar{\sigma}^2}\, G(V(v)) \label{fyg}
\end{equation}
This establishes the prescription to pass back and forth between a Liouville Type I separable system in $S^2_+$ with a given physical time $t$ and a Liouville Type I system in ${\mathbb R}^2$ with respect to a ``non-physical'' time $\tau$.
An analogous procedure relative to the projection $\Pi_-$ can be developed for $S^2_-$, and thus Newton equations on $S^2_{\pm}$ are equivalent to Newton equations (\ref{newton1}) in the Euclidean planes. To be completely described in this projected picture, the orbits of a system with $S^2$ as configuration space require the determination of the orbits of two planar systems.
In order to clarify the relationship between the local times needed for separability in $S^2$ and ${\mathbb R}^2$, we include a table showing all the changes of time variables:
\begin{center}
\entrymodifiers={++[F]}
\xymatrix@R=1.0cm{
{\begin{array}{c} \txt{\scriptsize $S^2_\pm$, Newton Equations for ${\cal U}$} \\ \txt{\scriptsize Physical time $t$} \end{array}} \ar[d]^{\txt {\scriptsize Sphero-Conical coor.}} \ar[r]^{\txt{\scriptsize Gnomonic Proj.}} &
{\begin{array}{c} \txt{\scriptsize $({\mathbb R}^2,g)$, Newton Eqs. for ${\cal U}$} \\ \txt{\scriptsize Physical time $t$} \end{array}} \ar[d]^{\txt{\scriptsize ``Projected" time $\tau$}}\\ {\begin{array}{c} \txt{\scriptsize Separable problem in $S^2_\pm$} \\ \txt{\scriptsize Physical time $t$} \end{array}} \ar[d]^{\txt{\scriptsize Local time $\varsigma$}} & { \txt{\scriptsize ${\mathbb R}^2$, System of second order ODE. Projected time $\tau$} } \ar[d]^{\txt{\scriptsize Linear transf.}} \\ { {\begin{array}{c} \txt{\scriptsize Separated First Order Eqs. in $S^2_\pm$} \\ \txt{\scriptsize Local time $\varsigma$} \end{array}}} \ar@2{<->}[ddr] & { \txt{\scriptsize $({\mathbb R}^2,\delta_{ij})$, Newton Eqs. for ${\cal V}$. Time $\tau$} }\ar[d]^{\txt {\scriptsize Elliptic coor.}}\\ * { } & { \txt{\scriptsize Separable problem in ${\mathbb R}^2$. Time $\tau$} } \ar[d]^{\txt{\scriptsize Local time $\zeta$}} \\ {\text{\scriptsize $d\zeta = \bar{\sigma}^2 \, d\varsigma$ } } & {\begin{array}{c} \txt{\scriptsize Separated First Order Eqs. in ${\mathbb R}^2$} \\ \txt{\scriptsize Projected-Local time $\zeta$} \end{array}}
}
\end{center}
\section{Gallery of selected examples}
\subsection{The Neumann system}
The Neumann system \cite{Neumann} consists of a particle constrained to move in a $S^2$ sphere of radius $R$ subjected to maximally anisotropic linear attraction towards the center of the sphere. The potential energy is:
\begin{equation}
{\cal U}(X,Y,Z)= a X^2+b Y^2+c Z^2 \ , \quad a>b>c>0 \quad , \label{Neumann}
\end{equation}
where the couplings $a,b,c$ may be redefined as $\frac{m \omega^2}{2}=a-c$, $0<\sigma^2=\frac{b-c}{a-c}<1$, to easily show that the Neumann
problem is a Liouville Type I system in $S^2$ since the potential energy in sphero-conical coordinates is of the standard form (\ref{typeIS2}) with:
\[
F(U)=\frac{m\omega^2}{2}R^2\left(U^2-\bar{\sigma}^2\right)U^2 \ , \quad G(V)=\frac{m\omega^2}{2}R^2\left(\bar{\sigma}^2-V^2\right) V^2 \quad .
\]
The sigma parameter fixing the position of the foci measures in this case the asymmetry between the intensity of the elastic forces in the $X$ and $Y$ directions. Consequently, the orbits of a particle in the Neumann problem are determined by evaluating the quadratures (\ref{quadrature1},\ref{quadrature2}) with this choice of $F(U)$ and $G(V)$. Both integrals can be written in the compact form:
\begin{eqnarray}
&& \varsigma-\varsigma_0
=\pm \frac{\sqrt{m}\,R}{2} \int_{{\cal X}_0}^{\cal X} \, \frac{d \tilde{{\cal X}}}{\sqrt{P_5(\tilde{{\cal X}})}} \label{hyperell} \\ && \hspace{-1cm}
P_5({\cal X})=-{\cal X}(1-{\cal X})({\cal X}-\bar{\sigma}^2)\left(m\omega^2 R^2{\cal X}^2-(2E +m \omega^2R^2\bar{\sigma}^2){\cal X}+2\Omega\right) \nonumber
\end{eqnarray}
where a new integration variable ${\cal X}$ has been introduced: ${\cal X}=U^2$ for quadrature (\ref{quadrature1}) and ${\cal X}=V^2$ for (\ref{quadrature2}). (\ref{hyperell}) is a hyperelliptic integral of genus 2, and obtaining explicit expressions for these orbits requires the use of rank 2 Theta functions, see \cite{Dubrovine}, \cite{Enolski}.
Taking into account the symmetry of the Neumann potential (\ref{Neumann}), the corresponding planar potentials ${\cal V}(x_1,x_2)$ will have identical expressions in both $\Pi_+(S^2_+)$ and $\Pi_-(S^2_-)$ planes. Applying (\ref{fyg}) we obtain the potential (\ref{sepellip}) with:
\[
f(u)=\frac{m\omega^2R^2\bar{\sigma}^2}{2}\, \frac{u^2(u^2-1)}{\bar{\sigma}^2 u^2+\sigma^2}\ ,\quad g(v)=\frac{m\omega^2R^2\bar{\sigma}^2}{2} \ \frac{v^2(1- v^2)}{\bar{\sigma}^2 v^2+\sigma^2}
\]
which in Cartesian coordinates corresponds to the potential function:
\begin{equation}
{\cal V}(x_1,x_2)= \frac{m\omega^2 R^2}{2} \left( \frac{x_1^2+\sigma^2 x_2^2}{R^2+x_1^2+\sigma^2 x_2^2} \right) \label{Neumannplano}
\end{equation}
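The chain from the sphero-conical data to the planar potential can be verified numerically. The snippet below is an illustrative check, assuming the normalization $\sigma^2+\bar{\sigma}^2=1$: it applies the map (\ref{fyg}) to the Neumann functions $F$ and $G$, builds the separable form (\ref{sepellip}), and compares it with the Cartesian expression (\ref{Neumannplano}):

```python
import math

# Illustrative check (not from the paper): applying (fyg) to the Neumann
# functions F, G reproduces the planar potential in Cartesian form.
# Assumes the normalization sigma^2 + sbar^2 = 1.
m, w, R, sigma = 1.3, 0.7, 2.0, 0.6
sbar = math.sqrt(1 - sigma**2)
a = R * sbar / sigma
C = 0.5 * m * w**2 * R**2

F = lambda U: C * (U**2 - sbar**2) * U**2
G = lambda V: C * (sbar**2 - V**2) * V**2

U, V = 0.9, 0.3                          # sbar < U < 1, |V| < sbar
u = sigma * U / (sbar * math.sqrt(1 - U**2))
v = sigma * V / (sbar * math.sqrt(1 - V**2))

f = (sbar**2 * u**2 + sigma**2) / (sigma**2 * sbar**2) * F(U)
g = (sbar**2 * v**2 + sigma**2) / (sigma**2 * sbar**2) * G(V)
V_elliptic = (f + g) / (u**2 - v**2)     # separable form in elliptic coordinates

x1, x2sq = a * u * v, a**2 * (u**2 - 1) * (1 - v**2)
V_cartesian = C * (x1**2 + sigma**2 * x2sq) / (R**2 + x1**2 + sigma**2 * x2sq)

assert math.isclose(V_elliptic, V_cartesian)
print("projected Neumann potential verified")
```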
Thus orbits for (\ref{Neumann}) lying in $S^2_+$ or $S^2_-$ are in one-to-one correspondence with bounded orbits of the planar system (\ref{Neumannplano}), whereas orbits that cross the equator have to be recovered from the projected pictures as the gluing of unbounded orbits of the two planar copies.
\begin{figure*}
\begin{center}
\includegraphics[height=4cm]{Neumannb} \hspace{0.8cm} \includegraphics[height=4cm]{Neumann2}
\caption{An orbit of the Neumann problem and its corresponding planar orbit.}
\end{center}
\label{fig:neumann}
\end{figure*}
\subsection{The Killing system}
In the Killing system \cite{Killing1885,Kozlov1992}, see also \cite{Gonzalez} and references therein, a massive particle is forced to move on an $S^2$-sphere of radius $R$ under the action of a gravitational field created by two (e.g. attractive, $\gamma_1>0$, $\gamma_2>0$) Keplerian centers. Fixing the centers at the points $F_1$ and $F_2$ defined above, the potential energy reads:
\[
{\cal U}(\theta_1,\theta_2)=-\frac{\gamma_1}{R}{\rm cotan}\, \theta_1-\frac{\gamma_2}{R}{\rm cotan}\, \theta_2 \quad ,
\]
and thus the test mass feels the presence of two attractive centers in the North hemisphere and two (repulsive) ones located at the antipodal points in the South hemisphere with identical strengths. In sphero-conical coordinates the potential energy is written with two different expressions depending on the hemisphere considered. In both cases ${\cal U}_{\pm}(U,V)$ is of the Liouville Type I in $S^2$ form (\ref{typeIS2}), with:
\[
F_{\pm}(U)=\mp \frac{\gamma_1+\gamma_2}{R}U\sqrt{1-U^2} \quad , \quad G(V)=-\frac{\gamma_1-\gamma_2}{R} V\sqrt{1-V^2} \quad .
\]
Applying the general procedure explained above, the dynamics in the $S^2_+$ hemisphere can be described by the planar potential:
\[
{\cal V}_+(x_1,x_2)= -\frac{\gamma_1}{\sigma^2\, \sqrt{(x_1-\frac{R\bar{\sigma}}{\sigma})^2+x_2^2}}-\frac{\gamma_2}{\sigma^2 \sqrt{(x_1+\frac{R\bar{\sigma}}{\sigma})^2+x_2^2}}
\]
that corresponds to the problem of two attractive centers in ${\mathbb R}^2$. In a parallel way, the problem in the South hemisphere is orbitally equivalent to the planar problem:
\[
{\cal V}_-(x_1,x_2)= \frac{\gamma_2}{\sigma^2\, \sqrt{(x_1-\frac{R\bar{\sigma}}{\sigma})^2+x_2^2}}+\frac{\gamma_1}{\sigma^2 \sqrt{(x_1+\frac{R\bar{\sigma}}{\sigma})^2+x_2^2}}\ ,
\]
i.e. the planar potential of two repulsive centers where the roles of the points $(\pm \frac{R\bar{\sigma}}{\sigma}, 0)$, and thus the strengths of the centers in modulus, are
interchanged with respect to the attractive potential ${\cal V}_+(x_1, x_2)$ in $\Pi_+(S^2_+)$.
In \cite{Gonzalez} a complete analysis of the different types of orbits for this problem is performed, including the integration and inversion of the involved elliptic integrals, which leads to explicit expressions in terms of Jacobi elliptic functions for all the available regimes in the bifurcation diagram. Two examples of planetary-type orbits are represented in Figure 4. In Figure 5 we show their corresponding orbits in the projected planar systems.
\begin{figure*}
\begin{center}
\includegraphics[height=5cm]{tp1aFS} \includegraphics[height=5cm]{tpcFS}
\caption{a) Planetary orbit in $S^2_+$. b) Closed orbit in $S^2$ that crosses the equator.}
\end{center}
\label{fig:killing-sphere}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[height=2.7cm]{tp1aFP} \hspace{0.5cm} \includegraphics[height=2.7cm]{tpcFPN}\includegraphics[height=2.7cm]{tpcFPS}
\caption{a) Planetary orbit projected in ${\mathbb R}^2$. b) and c) Projections of the closed orbit on $\Pi_+(S^2_+)$ and $\Pi_-(S^2_-)$ respectively.}
\end{center}
\label{fig:killing-plane}
\end{figure*}
\subsection{Inverse gnomonic projection of the Garnier system from ${\mathbb R}^2$ to $S^2$}
The Garnier system \cite{Garnier,Perelomov} corresponds to a planar anharmonic oscillator which is isotropic in the quartic power of the distance to the center but anisotropic in the quadratic term. Using non-dimensional coordinates and couplings, the potential energy is defined to be:
\[
{\cal V}(x_1,x_2)= \frac{1}{2}\left(x_1^2+x_2^2-1\right)^2+\frac{a^2}{2}x_2^2 \quad , \qquad 0<a<1 \quad .
\]
Changing to Euler elliptic variables (\ref{elliptic}), it is easily seen that this is a Liouville Type I system in ${\mathbb R}^2$, since the potential energy takes the form (\ref{sepellip}) where:
\[
f(u)=\frac{a^4}{2} ( u^2-1) \left( u^2-\frac{1}{a^2}\right)^2\ , \quad g(v)=\frac{a^4}{2} ( 1-v^2) \left( v^2-\frac{1}{a^2}\right)^2 \quad .
\]
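The separability claim is easy to verify numerically. The following illustrative snippet, which takes the focal parameter of the elliptic coordinates equal to the coupling $a$ as implied by the formulas above, checks that $(f(u)+g(v))/(u^2-v^2)$ reproduces the Garnier potential:

```python
import math

# Illustrative check: with the stated f(u), g(v), the elliptic-coordinate
# form (f(u)+g(v))/(u^2-v^2) reproduces the Garnier potential. The focal
# parameter of the elliptic coordinates is taken equal to the coupling a.
aa = 0.5                                   # coupling, 0 < a < 1
f = lambda u: 0.5 * aa**4 * (u**2 - 1) * (u**2 - 1 / aa**2)**2
g = lambda v: 0.5 * aa**4 * (1 - v**2) * (v**2 - 1 / aa**2)**2

u, v = 1.3, 0.4                            # u > 1, |v| < 1
x1 = aa * u * v
x2sq = aa**2 * (u**2 - 1) * (1 - v**2)
V = 0.5 * (x1**2 + x2sq - 1)**2 + 0.5 * aa**2 * x2sq

assert math.isclose((f(u) + g(v)) / (u**2 - v**2), V)
print("Garnier separation verified")
```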
The quadratures are thus
\[
\varsigma-\varsigma_0=\mp \, a \int_1^u \, \frac{d z}{\sqrt{(z^2-1)P_6(z) }} \ , \quad \varsigma-\varsigma_0=\mp a \int_{-1}^v \, \frac{d z}{\sqrt{(1-z^2)\tilde{P}_6(z)}} \quad ,
\]
where the sixth order polynomials read:
\[
P_6(u)=-f(u)+h u^2-\lambda\quad ,\qquad \tilde{P}_6(v)=-g(v)-hv^2+\lambda \quad .
\]
The change of integration variable $z^2 = \bar{z}$ reduces both integrals to the identical canonical form:
\[
\int \frac{d\bar{z}}{\sqrt{2 \bar{z} (1-\bar{z}) (a^4 \bar{z}^3-a^2(a^2+2) \bar{z}^2+(1+2a^2-2h)\bar{z} +2\lambda-1)}}
\]
i.e., they are hyperelliptic integrals of genus 2.
The inverse gnomonic projection leads us to the Liouville Type I separable system in $S^2_+$ characterized by the rational functions:
\[
F(U)=\frac{(U^2-\bar{\sigma}^2) (1-2U^2)^2}{2(1-U^2)^2} \ , \quad G(V)=\frac{(\bar{\sigma}^2-V^2) (1-2V^2)^2}{2(1-V^2)^2}
\]
where, in this non-dimensional setting, we have identified the parameters in the form: $a=\frac{\bar{\sigma}}{\sigma}$.
The corresponding potential in terms of $(X,Y,Z)\in S^2_+$ is:
\[
{\cal U}(X,Y,Z)=\frac{1}{2\sigma^2} \left( \frac{1- \bar{\sigma}^2 X^2}{Z^4}- \frac{1+3\sigma^2}{Z^2}\right)
\]
which is singular on the Equator. Thus in this case, even if we extend ${\cal U}(X,Y,Z)$ to the whole $S^2$ sphere, the orbits cannot cross the Equator, and unbounded planar orbits are mapped into spherical trajectories that approach the Equator asymptotically.
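As an illustrative cross-check, assuming the normalization $\sigma^2+\bar{\sigma}^2=1$, the following snippet verifies numerically that mapping the planar Garnier function $f(u)$ back through the inverse of (\ref{fyg}) reproduces the rational function $F(U)$ quoted above:

```python
import math

# Illustrative check of the inverse map (R = 1, a = sbar/sigma):
#   F(U) = sigma^2 sbar^2 / (sbar^2 u^2 + sigma^2) * f(u(U))
# should reproduce the quoted rational F(U). Assumes sigma^2 + sbar^2 = 1.
sigma = 0.6
sbar = math.sqrt(1 - sigma**2)
aa = sbar / sigma
f = lambda u: 0.5 * aa**4 * (u**2 - 1) * (u**2 - 1 / aa**2)**2

U = 0.9                                    # sbar < U < 1
u = sigma * U / (sbar * math.sqrt(1 - U**2))
F_mapped = sigma**2 * sbar**2 / (sbar**2 * u**2 + sigma**2) * f(u)
F_quoted = (U**2 - sbar**2) * (1 - 2 * U**2)**2 / (2 * (1 - U**2)**2)

assert math.isclose(F_mapped, F_quoted)
print("inverse gnomonic map of the Garnier system verified")
```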
\section{Summary and further comments}
In this report we have analyzed separable classical Hamiltonian systems in a unified way. We have focused on systems with two degrees of freedom for which the configuration space is either an $S^2$ sphere or the Euclidean plane ${\mathbb R}^2$.
In the first case, that we denote as Liouville Type I in $S^2$, we have selected those systems for which the Hamilton-Jacobi equation is separable in
sphero-conical coordinates. In the planar case the separability
of the HJ equation is demanded in Euler elliptic coordinates, thus restricting ourselves to Liouville Type I systems
in ${\mathbb R}^2$.
The main contribution of this work is the construction of a bridge between Liouville Type I systems in
$S^2$ and in ${\mathbb R}^2$. The path is traced following the gnomonic projection from both the North and South hemispheres to the plane.
The idea is inspired by the connection between the two-Keplerian-center problems in $S^2$ and in ${\mathbb R}^2$ established in \cite{Al1,Al2,Borisov2007}. We provide a geometric structure to the Borisov-Mamaev map which allows us to extend the idea to any Liouville Type I system. As particular cases we construct the bridge between the Neumann problem and its partner in
the plane, besides reconstructing in this geometric setting the Borisov-Mamaev map between the Killing problem, two Keplerian centers in $S^2$,
and the Euler problem, two Keplerian centers in ${\mathbb R}^2$. Moreover, we also consider a distinguished Liouville Type I system in ${\mathbb R}^2$, the Garnier system, and its mapping back to $S^2$ by using the inverse of the gnomonic projection. A remarkable fact emerges: the two-center problems in $S^2$ and ${\mathbb R}^2$ exhibit
separable potentials written in terms of either trigonometric or polynomial functions but with identical strengths, up to a constant: in both manifolds Keplerian potentials arise.
The results of this work can be extended to the quantum framework. It would be very interesting to analyze the relation between separable Schr\"odinger equations in $S^2$ and the corresponding projected equations in ${\mathbb R}^2$. Connecting paths between the classical and quantum worlds are provided by the WKB quantization procedure.
Finally, it is worth recalling that the search for solitary waves arising in $(1+1)$-dimensional relativistic scalar field theories is tantamount to solving an analogue mechanical system. In this framework, the application of the equivalence between separable systems in $S^2$ and ${\mathbb R}^2$ could be a fruitful source of information about the links between solitary waves in non-linear and linear sigma models, \cite{Alonso,Alonso1,Alonso2,Alonso3}.
\section*{Acknowledgements}
The authors thank the Spanish Ministerio de Econom\'{\i}a y Competitividad (MINECO) for financial support under grant MTM2014-57129-C2-1-P and the Junta de Castilla y Le\'on for the grant VA057U16.
Modern web browsers are becoming powerful platforms for advanced application development \cite{Hahn2012b,Ginsburg2011}. New advances in core web application technologies such as the modern web browsers' universal support of ECMAScript 5 (and 6) \cite{khan2014using}, CSS3 and HTML5 APIs have made it much more feasible to implement powerful middle-ware platforms for data management and powerful graphical rendering, as well as real-time communication purely in client-side JavaScript \cite{mwalongo2016state,bernal2017reusable}. The last decade has seen a slow, but steady, shift to fully distributed solutions using web-standards \cite{eckersley2003neuroscience,millan2014open,sherif2014cbrain,wood2014harnessing}, closely tracking the expressiveness of the JavaScript programming language. Web-based solutions are especially appealing as they do not require the installation of any client-side software other than a standard web browser, which enhances accessibility and usability.
Unrelated to the rise of web technologies, another emerging trend is the rapid adoption of containerization technologies. These have enabled the concept of \emph{compute} portability in a similar sense to \emph{data} portability. Just as data can be moved from place to place, containerization allows the operations on that data to also be moved from place to place.
To our knowledge, no web-based platform currently exists that provides data \emph{and} compute agnostic services (some services, such as CBRAIN \cite{sherif2014cbrain} and LONI~\cite{Rex:2003} provide conceptually similar approaches, but do not have deep connectivity to typical hospital database repositories), in particular collection, management, and real-time sharing of medical data, as well as access to pipelines that process that data. In this paper, we introduce {\emph{CHIPS}}{} (\textbf{C}loud \textbf{H}ealthcare \textbf{I}mage \textbf{P}rocessing \textbf{S}ervice). {\emph{CHIPS}}{} is a novel web-based medical data storage and data processing workflow service that provides strict data security while also facilitating secure, real-time interactive collaboration over the Internet and internal Intranets.
{\emph{CHIPS}}{} is able to seamlessly collect data from typical sources found in hospitals (such as Picture Archive and Communications Systems, PACS) and easily export to approved cloud storage. {\emph{CHIPS}}{} not only manages data collection and organization, but it also provides a large (and expanding) library of pipelines to analyze imported data, and the containerized compute can execute in a large variety of remote resources. {\emph{CHIPS}}{} provides for persistent record and management of activity in {\emph{feeds}}{} as well as for powerful visualization of data. In particular, it makes use of the popular {\tt XTK} toolkit which was also developed by our team at the Fetal-Neonatal Neuroimaging and Developmental Science Center, Boston Children’s Hospital\footnote{\URL{http://fnndsc.babymri.org}} for the in-browser rendering and visualization of medical image data and can be freely downloaded from the web\footnote{\URL{http://goxtk.com}} \cite{haehn2014neuroimaging}.
\section{Architectural Overview}
\subsection{Scope}
\begin{wrapfigure}{l}{6.5cm}
\centering
\includegraphics[width=6cm]{CHRIS_expand}
\caption{\label{fig:chris_expand} {\emph{CHIPS}}{} connects multiple input PACS sources to multiple ``cloud" compute nodes.}
\noindent \hrulefill
\vspace{-30pt}
\end{wrapfigure}
The creation of {\emph{CHIPS}}{} has been motivated by both clinical and research needs. On the clinical side, {\emph{CHIPS}}{} was built to provide clinicians with easy access to large amounts of data (especially from hospital image databases like Picture Archive and Communications Systems -- PACS), to provide for powerful collaboration, and to allow for easy access to a library of analysis processes or pipelines. On the research side, {\emph{CHIPS}}{} was designed to allow computational researchers to test and develop new algorithms for image processing across heterogeneous platforms, while allowing life science researchers to focus on their research protocols and data processing, without needing to spend time on the minutiae of performing data analysis.
The system design is highly distributed, as shown in Figure \ref{fig:chris_expand}, which shows a {\emph{CHIPS}}{} deployment connected to multiple input sources and multiple compute sources. Though the figure suggests a single, discrete central point, components of {\emph{CHIPS}}{} do reside on each input (PACS) and compute location.
\subsection{Distributed Component Design}
\begin{wrapfigure}{r}{6.5cm}
\centering
\vspace{-40pt}
\includegraphics[width=6cm]{chris_distributed_workflow}
\caption{\label{fig:chris_distributed_workflow} \small The internal {\emph{CHIPS}}{} logical architecture.}
\noindent \hrulefill
\vspace{-20pt}
\end{wrapfigure}
Architecturally, {\emph{CHIPS}}{} is not a single monolithic system, but a distributed collection of interconnected components, including a front-end webserver and web-based UI; a core RESTful back-end central server that provides access to all data, feeds, users, etc.; a DICOM/PACS interface; a set of independent RESTful microservices that handle inter-network data IO and also remote process management; and a core cloud-based computational platform that orchestrates offloading of image processing pipelines to some remote cloud-based compute -- see Figure \ref{fig:chris_distributed_workflow}.
The top red box of Figure \ref{fig:chris_distributed_workflow} contains the \emph{PACS node} and represents the Hospital image data repository. The second blue box, labeled {\it Web-entry point and data hosting node}, contains the main {\emph{CHIPS}}{} backend and is presented as being in a ``cloud'' (i.e. some resource that is accessible from the Internet). Finally, the bottom yellow box is shown on a separate ``cloud'' to emphasize that it is topologically distinct from the {\it Web-entry point}.
The logical relationships between data (represented as the rectangles with a tree structure) and compute elements denoted by the named hexagons is shown by either data connectors (thick blue arrows) or control connections (single line arrows). In the syntax of the diagram, the stylized cloud icon touching some of the boxes denotes that these compute elements are controlled by a REST API, while the sphere icon denotes web-access.
A remote compute element is denoted by {\tt plugin}, which is controlled by a {\tt manage} component. In the most abstract sense, the {\tt plugin} processes an input data structure and outputs a transformed data structure (the two tree graphs as shown). File transfer between the data cloud and compute cloud is performed by the file {\tt IO} handler component. A {\tt query/retrieve} process in the data cloud connects to an authentication process, {\tt auth}, in the Hospital network, while on-the-fly anonymization of DICOM images is handled by the anonymizer process {\tt anon}. Finally, the {\tt dispatcher} is a component that determines which compute node (or cloud) is best suited for the data analysis at hand. The circle icon attached to the {\tt manage} and {\tt plugin} icons indicates that the attached process can provide real-time feedback information to other software agents about the controlled process via its own REST interface.
\subsection{Pervasive containerization}
{\emph{CHIPS}}{} is designed as a distributed system, and the underlying components are containerized (currently using Docker\footnote{\URL{https://www.docker.com}}). In Figure \ref{fig:chris_distributed_workflow}, the {\it Main CHIPS web interface} and associated backend database are housed within a single container\footnote{\URL{https://github.com/FNNDSC/ChRIS_ultron_backEnd}}. Input data and processed results are accessible in the hosting node and volume mapped as appropriate to this back end.
Other components of {\emph{CHIPS}}{} in the web-entry node are similarly containerized. This includes the {\tt manage}\footnote{\URL{https://github.com/FNNDSC/pman}} block, which is responsible for spawning processes on the underlying system. Not only does {\tt manage} provide the means to start and stop processes, but it also tracks the execution state, termination state, and standard output/error streams of the process. The {\tt manage} component has a REST interface through which clients can start/stop and query processes.
Also containerized are the {\tt IO}\footnote{\URL{https://github.com/FNNDSC/pfioh}} component, which can transfer entire directory trees across network boundaries from one system to another, and the {\tt dispatch}\footnote{\URL{https://github.com/FNNDSC/swarm}} component, which can orchestrate multiple processing jobs as handled by {\tt manage}. The {\tt plugin} container houses the particular compute to perform on a given set of data, and is spawned by the {\tt manage} component under direction of the {\tt dispatch}. Since the compute typically occurs on a system separate from the data hosting node, the {\tt IO} containers perform the necessary transmission of data to this compute system, as well as the retrieval of resultant data back to the data node, allowing the web container to present (and visualize) results to the user.
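The process-management pattern implemented by the {\tt manage} component can be sketched in a few lines. The toy class below is purely illustrative and does not reproduce {\tt pman}'s actual API: it spawns a job in a background thread, records its execution state, termination code and standard output, and lets clients poll that record, which is conceptually what the REST interface exposes:

```python
import subprocess, sys, threading, uuid

class MiniProcessManager:
    """Toy sketch of a pman-like job manager (illustrative only, not pman's API)."""
    def __init__(self):
        self._jobs, self._threads = {}, {}

    def start(self, cmd):
        # Register the job, then run it in a background thread.
        job_id = str(uuid.uuid4())
        self._jobs[job_id] = {"status": "started", "stdout": None, "returncode": None}
        t = threading.Thread(target=self._run, args=(job_id, cmd))
        self._threads[job_id] = t
        t.start()
        return job_id

    def _run(self, job_id, cmd):
        proc = subprocess.run(cmd, capture_output=True, text=True)
        self._jobs[job_id].update(status="finished", stdout=proc.stdout,
                                  returncode=proc.returncode)

    def wait(self, job_id):
        self._threads[job_id].join()

    def status(self, job_id):
        # In CHIPS this query would arrive over the REST interface.
        return dict(self._jobs[job_id])

mgr = MiniProcessManager()
jid = mgr.start([sys.executable, "-c", "print('processed')"])
mgr.wait(jid)
```

In the real system the job would be a containerized {\tt plugin} launched under direction of {\tt dispatch}, and {\tt status} queries would arrive over HTTP rather than as local method calls.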
\section{UI Considerations}
\begin{wrapfigure}{l}{7.5cm}
\centering
\vspace{-30pt}
\includegraphics[width=6cm]{home-close.png}
\caption{{\emph{CHIPS}}{} home page with a ``cards" organization.}
\noindent \hrulefill
\label{fig:ChIPS}
\vspace{-10pt}
\end{wrapfigure}
Figure \ref{fig:ChIPS} shows the home page view on first logging into the system. Studies that have been ``sent" to {\emph{CHIPS}}{} appear in their own ``cards" on the user's home page with a small visualization of a representative image set of the study. Various controls on this home page allow users to organize/tag ``cards" in specific projects (or folders), remove cards, bookmark them for easy access, etc. New cards can be generated by clicking on the \textcircled{+} icon and choosing an activity (such as PACS Query/Retrieve), and any card can be seamlessly shared with other users of the system.
\begin{wrapfigure}{r}{7.5cm}
\centering
\vspace{-20pt}
\includegraphics[width=6cm]{feed-data-viewer.png}
\caption{\label{fig:feed-viewer} \small Visualizing pulled and processed data.}
\noindent \hrulefill
\vspace{-50pt}
\end{wrapfigure}
On selecting a given feed, the core image data in that feed is visualized in a rich, web-based viewer -- see Figure \ref{fig:feed-viewer}. Various tabs and elements of the feed view provide different perspectives on the data, and also provide the ability to annotate notes, or add comments. As in the feed view, a \textcircled{+} icon is also present, and if selected, opens a ribbon of ``plugins" (or ``apps") to run on the data contained in the feed. For example, certain plugins might perform a surface reconstruction of the brain surface with tissue segmentation (for example, a FreeSurfer plugin).
The interface semantics within a feed are straightforward: a user clicks on the feed and enters the top level data view. Once a plugin from the \textcircled{+} is applied, the feed data is processed accordingly. When the plugin is completed, its output files are also organized in the feed in a logical tree view (accessible via the left ``Data'' tab) in a manner akin to an email thread. In this manner, the thread of execution from data $\rightarrow$ plugin $\rightarrow$ data is defined -- in effect building a workflow.
Any image visualized can also be shared in real-time using collaboration features built into the viewer library and leveraging the Google Drive API and Google Realtime API \cite{bernal2017reusable}.
\section{Big Data Infrastructure}
\begin{wrapfigure}{l}{10.5cm}
\centering
\includegraphics[width=10cm]{bigData_creation}
\caption{\label{fig:bigData} \small Big data pre-processing.}
\noindent \hrulefill
\vspace{-10pt}
\end{wrapfigure}
An important component of {\emph{CHIPS}}{} lies in creating a foundation suitable for future support of ``data mining". Recently, the term \emph{Big Data} has come into common parlance, especially in the context of informatics \cite{Provost2013,Swan2013,JCP:JCP24662}. Despite the term and the use of \emph{Big}, the concept often refers to the use of predictive analytics and other advanced data analytics tools that extract meaning from sets of data, and does not necessarily refer to the particular size of the data set.
In healthcare, big data analytics has impacted the field in very specific areas such as clinical risk intervention, waste and care variability reduction, and automated reporting. However, as a field, biomedical imaging has not especially benefited from big data approaches due to the unstructured nature of image data, complexity of results from analysis in terms of data formats (again usually unstructured), simple quality issues such as noise in image acquisitions, etc.
{\emph{CHIPS}}{} constructs a framework to allow big data methods to be used in this image space. Consider that the incoming source data to {\emph{CHIPS}}{} are DICOM images that by their nature contain a large amount of meta information, most of which is non PHI and will be left unchanged by the anonymization processes. Information about the scanning/imaging protocol, acquisition parameters, as well as certain non-PHI demographics such as patient sex and age can be meaningfully databased. Moreover, the application of an analysis pipeline to an image data-set can in turn result in large amounts of meaningful data that can be databased and associated with the incoming source data. For example, FreeSurfer, which is dockerized as a plugin in the {\emph{CHIPS}}{} system produces volumetric segmentations and surface reconstructions on raw input MRI T1 weighted data \cite{Dale99corticalsurface-based,Fischl99corticalsurface-based,freesurfer}.
In Figure \ref{fig:bigData} input raw DICOM (purple block) and output processed data from the DICOMs (green block) are shown. A {\tt DICOM tag extraction} process extracts the image meta data and associates this information with the particular image record. DICOM data is regularly formatted and easily extracted. Importantly, for the output data, and assuming the output data is a 3D surface reconstruction and tables of brain parcellation volume values, a {\tt structured analysis} process regularizes all this information into meta data that will be added to the space of data pertaining to this image record. This processing will lay the groundwork on which data analytics can explore and mine for relations between (for example) input acquisition parameters and pipeline output results, or simply mine across output results for hidden trends in data trajectories (for example volumetric changes with age or sex).
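The first step of Figure \ref{fig:bigData} can be sketched as a split of the DICOM header into PHI, destined for the anonymizer, and non-PHI meta data, destined for the database. The snippet below is purely illustrative: the header is modeled as a plain dictionary keyed by standard DICOM attribute keywords, and the short PHI list shown is not the exhaustive set a production anonymizer would use (a real deployment would read the tags with a DICOM library and follow a full de-identification profile):

```python
# Illustrative sketch of the DICOM tag extraction step: split a header into
# PHI (to be stripped by the anonymizer) and non-PHI meta data (to database).
# Keys are standard DICOM attribute keywords; the PHI set here is a toy subset.
PHI_KEYWORDS = {"PatientName", "PatientID", "PatientBirthDate", "OtherPatientIDs"}

def extract_meta(header):
    phi = {k: v for k, v in header.items() if k in PHI_KEYWORDS}
    meta = {k: v for k, v in header.items() if k not in PHI_KEYWORDS}
    return phi, meta

header = {
    "PatientName": "Doe^Jane",          # PHI: removed on anonymization
    "PatientID": "12345",               # PHI
    "PatientSex": "F",                  # non-PHI demographic, databased
    "PatientAge": "031D",               # non-PHI
    "Modality": "MR",                   # acquisition meta data
    "MagneticFieldStrength": "3.0",
    "RepetitionTime": "2400",
}
phi, meta = extract_meta(header)
```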
\section{Conclusion and Future Directions}
{\emph{CHIPS}}{} is a distributed system that provides a single, cloud-based, access point to a large family of services. These include: (a) accessing medical image data securely from participating institutions with authenticated access and built-in anonymization of collected image data; (b) organizing collected data in a modern UI that allows for easy data management and sharing; (c) performing processing on images by dispatching data to remote clouds and controlling/managing remote execution on these resources; (d) powerful real-time collaboration on images using secure third-party services (such as the Google Realtime API); and (e) intuitive construction of medical image processing workflows. {\emph{CHIPS}}{} is not only a medical data management system, but strives to improve the quality of healthcare by allowing clinical users the ability to easily perform value-added processing and sharing of data and information. Current and future directions for {\emph{CHIPS}}{} include facilitating the construction of big-data frameworks and allowing for users to simply construct experiments for data analytics and various machine learning pipelines.
All analysis and development conducted by the {\emph{CHIPS}}{} system at the Boston Children's Hospital was conducted under relevant Institutional Review Board approval, which governed access to image data and controlled the scope of sharing of such data.
\bibliographystyle{splncs03}
\section{\label{sec:intro}Introduction}
Dynamics of quantum systems with the states frequently monitored by a
measurement apparatus can be rather controversial and is a subject of
constant interest over the years
\cite{patil15,blok14,murch13,hatridge13,wiseman11,ashhab10,guerlin07,gleyzes07,murch08,gammelmark13,pechen06,gordon13,
mackrory10,wiseman10:book,paz-silva12}. Every measurement in the
measurement sequence may be tuned to cause only a partial collapse of the system's
wave function, with the collapse effect being arbitrarily weak (so-called
``weak measurements'') \cite{hatridge13,
gleyzes07,caves87,oreshkov05,varbanov07}.
In another context, the term ``weak measurement'' was introduced by
Aharonov, Albert, and Vaidman (AAV) \cite{aharonov88} as being
attributed to a measurement of a continuous degree of freedom
(e.~g. an electron position) coupled to a discrete one (spin) via
post-selection. These two approaches were recently shown to be
equivalent \cite{dressel12,lundeen12}.
As a result of repetitive application of weak measurements --- the
situation which is sometimes referred to as weak Zeno measurements
(WZM) --- a kind of stochastic ``quantum trajectory'' arises
\cite{murch13,belavkin89,gisin84,wiseman96,wiseman10:book} due to
the unpredictable character of every particular measurement outcome.
Just repeating the weak measurements, without any further action on
the system, allows one to control the system state in various ways. For
instance, one can achieve an arbitrary state from any other one by
repeating weak measurements in different bases
\cite{ashhab10,gillett10,mackrory10,wiseman11,karasik11,gordon13,
blok14}. Alternatively, by allowing the strength of the weak
measurements (in the AAV sense) to depend on the spatial coordinate, one
can create a ``potential wall'' which may reflect a particle
\cite{mackrory10,gordon13}. It should be noted that these control
mechanisms work if the initial state is known in advance.
In contrast, if we repeat a weak measurement of a single qubit in a
fixed basis, the resulting dynamics was up to now believed to be very
trivial. The resulting quantum trajectory just stochastically
approaches one of the two qubit's basis states $\ket{0}$ or $\ket{1}$.
In this article we add a new dimension to this seemingly trivial
dynamics by allowing the measurement strength to be changed in time
and in a state-conditional way.
We observe that the stochastic
equations describing quantum trajectories in such a simple
one-dimensional system are in fact quite similar to the ones describing
motion of a small Brownian particle in a fluid flow (overdamped
Brownian motion)
\cite{gisin84,haenggi09,reimann02,braun:book04}.
One of the striking phenomena in such flows, as well as in many
other stochastic systems, is the so-called stochastic ratchet. Namely, by
varying the potential acting on the Brownian particle in time and/or
space in a periodic way, it is possible to create an effective
additional force, even though the potential exerts no average force
\cite{haenggi09,reimann02,braun:book04}.
Such ratchets are encountered in many stochastic
systems of different nature
\cite{wu14,haenggi09,velez08,souza06,villegas05,reimann02,braun:book04}.
Brownian ratchets are deeply connected to the so-called
Parrondo games, where two or more lossy games are
combined to give a winning one
\cite{parrondo96,allison02,wu14}. Very recently,
the notions of weak Parrondo games and weak Brownian
ratchets were introduced in \cite{wu14} to describe the situation when
two or more lossy games are played together to give a merely less lossy (but
still not winning) one.
We remark that, although both Parrondo games and Brownian ratchets
were considered in the context of quantum systems
\cite{pawela13,chandrashekar11,bulger08,flitney02,reimann02}, this was
up to now done via some direct action on the system itself.
In contrast,
in the present article
we discuss the situation of control without any direct action on the
system.
Here we show how the effective potential arising in WZM dynamics can be
modified by changing the strength of the measurement periodically in
time and in a state-conditioned fashion (that is, in dependence on the
current system state). We demonstrate that the stochastic ratchet
effect, albeit weak, is possible in such a situation. We
also demonstrate a dynamic
localization of the state in the case when the measurement strength
vanishes at some particular system state.
A ``false basis state'' may appear, which ``attracts'' the stochastic
trajectories in similar way as the true basis states do.
The article is organized as follows: in Sec.~\ref{sec:discr} we
introduce the system under consideration and derive the master
equation governing the probability distribution on the line between
the two basis states; in Sec.~\ref{sec:contin} we derive the equation for
the continuous case taking into account conditionally-dependent
measurement strength; in Secs.~\ref{sec:ratchets}-\ref{sec:localiz}
we investigate the effects related to the conditionally modified
measurement strength. Finally, the
conclusions and discussion are presented in Sec.~\ref{sec:concl}.
\section{Discrete dynamics}
\label{sec:discr}
\begin{figure*}[ht]
\begin{center}
\includegraphics[width=0.75\textwidth]{f1_bb}
\caption{ \label{f1}
(a) The model of weak measurements of the qubit $\ket{q}$ using
an ancilla $\ket{a}$ (initially in the state $\ket{0}$) and its
rotations $R_y$, followed by the measurement of $\ket{a}$. After
the measurement, the process is repeated with the same or
another ancilla in the state $\ket{0}$. In (b), the stochastic
process induced by repeated application of (a) using $\theta$
coordinates (left y-axis) and $x$-coordinates given by
\refeq{eq:1} (right y-axis) vs. measurement number $n$ is shown.
The $x$-coordinates are better suited to studying the
asymptotic behavior as $n\to\infty$. }
\end{center}
\end{figure*}
\subsection{The setting}
Our model of weak Zeno measurements is
depicted in \reffig{f1}(a) and uses projective measurements of
an ancilla qubit $\ket{a}$ to realize the weak ones of $\ket{q}$. The system is prepared in the state $\ket{q}\otimes\ket{0}$
with $\ket{q} = \cos\theta \ket{0} + \sin\theta\ket{1}$ for some
$\theta$. First, we apply a rotation $R_y(2\delta)$ to
the ancilla state; the rotation $R_y$ is conditioned on
$\ket{q}=\ket{1}$ and is defined as:
\begin{gather}R_y(\delta)\ket{0} \mapsto \cos\delta \ket{0} +
\sin\delta\ket{1}, \\R_y(\delta)\ket{1} \mapsto \cos\delta \ket{1} -
\sin\delta\ket{0}.\end{gather}
Afterwards, we unconditionally apply the rotation $R_y$ by
some other angle $\alpha$ and finally we measure the ancilla qubit. If
$\alpha\ne0$ and $\delta$ is small, the resulting measurement modifies
$\ket{q}$ only slightly. After the measurement, we repeat the whole procedure
using another ancilla in the initial state $\ket{0}$ (or the same
ancilla returned to the state $\ket{0}$).
The state of the whole system
$\ket{q}\otimes\ket{0}$ [see \reffig{f1}(a)] is transformed by two
$R_y$ operators described above as:
\begin{gather}
\label{eq:0}
\ket{q}\otimes\ket{0} \mapsto \ket{q_0} \otimes \ket{0} + \ket{q_1}
\otimes \ket{1}, \\ \ket{q_i} = \sum_j b_{ij} \ket{j},
\end{gather}
where $b_{ij}$ is the $(i+1,j+1)$th element of the matrix $b$ defined as:
\begin{equation}
\label{eq:4}
b =
\begin{pmatrix}
\cos\theta\cos\alpha & \sin\theta\cos{(\delta+\alpha)} \\
\cos\theta\sin\alpha & \sin\theta\sin{(\delta+\alpha)}
\end{pmatrix}.
\end{equation}
If the measurement of $\ket{a}$ gives 0, the system state is
reduced to $\ket{q} = \ket{q_0}/\sqrt{p_0}$ and
in the opposite case to $\ket{q} = \ket{q_1}/\sqrt{p_1}$, where
\begin{gather}
\label{eq:2}
p_0 = \cos^2\alpha \cos^2\theta + \sin^2\theta\cos^2{(\delta
+\alpha)},\\
p_1 = \cos^2\theta\sin^2\alpha + \sin^2\theta\sin^2{(\delta+ \alpha
)}.
\label{eq:2a}
\end{gather}
The probabilities of the corresponding outcomes are $p_0$ and
$p_1$.
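A minimal numerical sketch of one iteration of this scheme (our own illustration; the function name and the parameter values below are arbitrary choices, not taken from the text):

```python
import numpy as np

def weak_measurement_step(theta, delta, alpha, rng):
    """One iteration of the scheme of Fig. 1(a): build the amplitude
    matrix b, sample the ancilla outcome with probabilities p_0, p_1,
    and collapse |q> accordingly; returns (outcome, new theta)."""
    b = np.array([
        [np.cos(theta) * np.cos(alpha), np.sin(theta) * np.cos(delta + alpha)],
        [np.cos(theta) * np.sin(alpha), np.sin(theta) * np.sin(delta + alpha)],
    ])
    p = (b ** 2).sum(axis=1)              # outcome probabilities; p[0] + p[1] = 1
    outcome = int(rng.random() < p[1])
    q = b[outcome] / np.sqrt(p[outcome])  # renormalized post-measurement state
    return outcome, float(np.arctan2(q[1], q[0]))
```

Iterating this function on the returned $\theta$ reproduces the stochastic process of \reffig{f1}(b).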
The above description can be reformulated in the form of a generalized
measurement formalism, with the measurement operators
\begin{eqnarray}
\label{eq:6}
\mathcal{B}_0 = b_{11}\ket{0}\bra{0} + b_{12}\ket{1}\bra{1},\\
\mathcal{B}_1 = b_{21}\ket{0}\bra{0} + b_{22}\ket{1}\bra{1},
\label{eq:6a}
\end{eqnarray}
so that $\sum_j \mathcal{B}_j^\dag\mathcal{B}_j =1$ and the state
after the measurement with the result $j$ is transformed as:
$\ket{q}\to \mathcal{B}_j\ket{q}/\sqrt{p_j}$.
The resulting process is a (classical) one-dimensional random walk
along the axis $\theta$ as shown in \reffig{f1}(b). It spends most of
the time in the vicinity of the limiting states $\ket{q}=\ket{0}$ and
$\ket{q}=\ket{1}$ ($\theta=0$, $\pi/2$). It is thus useful to
introduce parabolic coordinates \cite{oreshkov05} as:
\begin{equation}
\label{eq:1}
x= \operatorname{atanh}{\left(-\cos(2\theta)\right)}; \, \theta = \arcsin{\sqrt{\frac{1+\tanh{x}}{2}}}.
\end{equation}
In this coordinate system, $\theta=0$ corresponds to $x=-\infty$ and
$\theta=\pi/2$ corresponds to $x=+\infty$ [cf. \reffig{f1}(b)].
Using $x$ instead of $\theta$ allows us to expand these vicinities into
semi-infinite intervals.
We also note that if we measure $\ket{q}$ directly (instead of
$\ket{a}$), the probability to find $\ket{q}$ in the state $\ket{1}$
will be:
\begin{equation}
\Pi(x) = \sin^2\theta = (1+\tanh x)/2.\label{eq:pi}
\end{equation}
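The coordinate change and the probability $\Pi(x)$ can be checked numerically (an illustration of our own; the function names are arbitrary):

```python
import numpy as np

def x_of_theta(theta):
    # x = atanh(-cos(2 theta)); note tanh(x) = 2 sin^2(theta) - 1
    return np.arctanh(-np.cos(2.0 * theta))

def theta_of_x(x):
    # inverse map back to the angle theta in (0, pi/2)
    return np.arcsin(np.sqrt((1.0 + np.tanh(x)) / 2.0))

def Pi(x):
    # probability to find |q> in |1>: Pi(x) = sin^2(theta) = (1 + tanh x)/2
    return (1.0 + np.tanh(x)) / 2.0
```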
\subsection{Classical random walk interpretation}
Now, for the sake of simplicity, we exclude the situation when
$\ket{q}$ is exactly in one of the basis states $\ket{0}$ or $\ket{1}$
at the beginning of the process. In this case, the equations above
allow us to define the process as a one-dimensional random walk on the
line $x\in (-\infty,+\infty)$ in the following way: assuming that at
the $n$th iteration step the system is in the point $x_n$, at the
$n+1$ step it will be in the point either
$x_{n+1} = x_{n} + \epsilon_0$ or $x_{n+1} = x_{n} + \epsilon_1$
(depending on the measurement outcome), each of the two variants
occurring with the probabilities $p_i(x_n)$, $i=0,1$. A few realizations
of this random walk are shown in \reffig{f1}(b).
The probabilities $p_i(x)$ can be rewritten in $x$-coordinates as:
\begin{gather}
\label{eq:p0x}
p_0(x) = \Pi(x) \cos^2{(\delta
+\alpha)} + \left(1-\Pi(x)\right) \cos^2\alpha, \\
\label{eq:p1x}
p_1(x) = \Pi(x)\sin^2{(\delta+ \alpha )} + \left(1-\Pi(x)\right)
\sin^2\alpha,
\end{gather}
where $\Pi(x)$ is defined by \refeq{eq:pi}.
The step sizes $\epsilon_i$, $i=0,1$, do not depend on $x$ and are given
by (see details in Appendix~\ref{app:1}):
\begin{gather}
\label{eq:epsx00}
\epsilon_0 = \operatorname{atanh}{\left(\frac{2\cos^2{(\alpha+\delta)}}{\cos^2{(\delta
+\alpha)} + \cos^2\alpha}-1\right)} , \\
\label{eq:epsx10}
\epsilon_1 =
\operatorname{atanh}{\left(\frac{2\sin^2{(\alpha+\delta)}}{\sin^2{(\delta
+\alpha)} +\sin^2\alpha}-1\right)}.
\end{gather}
\refeqs{eq:p0x}{eq:epsx10} obviously define a Markovian random
walk.
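A minimal simulation of this random walk (an illustration of our own, with arbitrarily chosen values of $\alpha$ and $\delta$) may look as follows:

```python
import numpy as np

# illustrative parameters (not taken from the article)
alpha, delta = 0.3, 0.05
c0, c1 = np.cos(alpha)**2, np.cos(alpha + delta)**2
s0, s1 = np.sin(alpha)**2, np.sin(alpha + delta)**2
# state-independent step sizes epsilon_0, epsilon_1
eps = np.array([np.arctanh(2*c1/(c1 + c0) - 1.0),
                np.arctanh(2*s1/(s1 + s0) - 1.0)])

def walk(x0, n, rng):
    """Trajectory x_0, ..., x_n of the measurement-induced random walk."""
    xs = [x0]
    for _ in range(n):
        Pi = (1.0 + np.tanh(xs[-1])) / 2.0
        p1 = Pi*s1 + (1.0 - Pi)*s0        # outcome-1 probability; p0 = 1 - p1
        xs.append(xs[-1] + eps[int(rng.random() < p1)])
    return np.array(xs)
```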
\subsection{Conditionally varied measurement parameters}
Suppose we know the initial state of the system $x_0$ exactly and are
able to perform all the rotations also exactly. In this case, the
subsequent positions $x_n$ of the qubit on the $x$-line can also be
calculated exactly since we know the measurement outcomes $M_n=0,1$
and thus the step sizes $\epsilon_i$ at every $n$. We may now
introduce the state-dependent dynamics by allowing the parameters
$\delta$, $\alpha$ to be dependent on the step number $n$ and the
state of the system at the last step $x_n$. That is, we may take some
(pre-defined) functions of two arguments $\alpha(n,x)$, $\delta(n,x)$
and at every step select the parameters for the next step
$\alpha_{n+1}$, $\delta_{n+1}$ as: $\alpha_{n+1} = \alpha(n,x_n)$,
$\delta_{n+1} = \delta(n,x_n)$. In this way, the parameters $p_i$,
$\epsilon_i$ of our random walk are also some pre-defined functions of
$n$, $x_n$ defined by \refeqs{eq:p0x}{eq:epsx10}.
The functions $\alpha(n,x)$, $\delta(n,x)$ may be quite
arbitrary. They add new degrees of freedom to our system, leading, as
we will see, to rather interesting new dynamics.
We remark also that in quantum control schemes \cite{wiseman10:book}
the information about the current state of the system is often used by
feeding it back into the system via modification of the system's
Hamiltonian. In contrast, in our case, only the parameters of the
measurement itself, but not the parameters of system, are changed.
\subsection{Master equation}
Using \refeqs{eq:0}{eq:4} or \refeqs{eq:6}{eq:6a} it is easy to obtain
an equation governing the evolution of the probability density
function (pdf) $P(n,x)$, describing the probability $P$ of $\ket{q}$
to appear in the vicinity of $x$ at the step $n$. Since our qubit
$\ket{q}$ always remains in a pure state which is fully described by
its coordinate $x$ (or, equivalently, by $\theta$), such a master
equation is just another way to express the dynamics of
$\ket{q}$. It provides essentially the same information as
\refeqs{eq:0}{eq:4} or \refeqs{eq:6}{eq:6a}. This reformulation will,
however, be useful in the next sections when we consider the stochastic
ratchet behavior.
We start from the general case with no assumption about the particular
coordinate system. We use the variable $y$, by which we may mean
any of the coordinates $x$, $\theta$ or $\Pi$ mentioned before. We
introduce furthermore the measure $d\mathcal{P}(n,y) = P(n,y)dy$ which
expresses simply the total probability to find $\ket{q}$ in the
interval $[y,y+dy]$. Then, by definition of our process, using the
Markov property and the formula for total probability
\cite{gardiner09:book} we obtain the following relation:
\begin{align}
\nonumber
d\mathcal{P}(n+1,y) = p_0(y_0(y))
d\mathcal{P}(n,y_0(y)) + \\
p_1(y_1(y)) d\mathcal{P}(n,y_1(y)),
\label{eq:9}
\end{align}
where $y_i(y)$, $i=0,1$ are defined in an implicit way as
$y = y_i+\epsilon_i(y_i)$. This expression is valid for an arbitrary
(also varying) step size, that is, also for the state-conditioned
trajectories as they were defined above in the previous section. In
the case of $x$-coordinates ($y\equiv x$) we obtain straightforwardly
the following expression for $P(n,x)$:
\begin{align}
\nonumber
P(n+1,x) = p_0(x_0(x))
x'_0(x)P(n,x_0(x)) + \\
p_1(x_1(x)) x'_1(x) P(n,x_1(x)),
\label{eq:10}
\end{align}
where $x = x_i(x)+\epsilon_i(x_i(x))$, $x'_i(x) = dx_i(x)/dx$.
In particular, for a constant measurement strength we have
$\epsilon_i=\mathrm{const}$ and $x'_i(x)=1$, and therefore:
\begin{align}
\nonumber
P(n+1,x) = p_0(x-\epsilon_0)P(n,x-\epsilon_0) + \\
p_1(x-\epsilon_1) P(n,x-\epsilon_1).
\label{eq:me}
\end{align}
Conserved quantities of \refeq{eq:9} or
\refeq{eq:10} are very important. The most obvious one is the average
value of $\Pi$ at the $n$th step,
$\langle \Pi\rangle_n \equiv \int_{-\infty}^{+\infty} \Pi(x)
P(n,x)dx$, which represents the a priori probability to find $\ket{q}$
in the state $\ket{1}$ if we perform a projective measurement of
$\ket{q}$ after the $n$-th step of our process. One can show that from
\refeq{eq:10} it follows that:
\begin{equation}
\label{eq:uppi_av}
\langle \Pi\rangle_{n+1} = \langle \Pi\rangle_n,
\end{equation}
and thus for any $n$, $\langle \Pi\rangle_n = \langle
\Pi\rangle_0$.
\refeq{eq:uppi_av} can be obtained by substituting \refeq{eq:10} into
the definition of $\langle \Pi\rangle_{n+1}$, giving thus
\begin{align}
\nonumber
& \langle \Pi\rangle_{n+1} = \int_{-\infty}^{+\infty} \Pi(x) P(n+1,x)
dx = \\ \label{eq:av1a}
& \int_{-\infty}^{+\infty} P(n,x)\left\{p_0(x) \Pi(x) +
p_1(x) \Pi(x) \right\} dx,
\end{align}
where we made a replacement $x'_i(x)dx\to dx_i$ and the variable
change $x_i(x)\to x$ in both parts of the integral. Since
$p_0(x)+p_1(x)=1$, this gives \refeq{eq:uppi_av}. We remark that
\refeq{eq:uppi_av} is universal, that is, it is valid for any choice of the
measurement parameters, even if they vary in dependence on the step
$n$ or the current position $x_n$.
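Because the step sizes $\epsilon_i$ are state-independent, after $n$ steps the walker can only sit at the $n+1$ lattice points $x_0+k\epsilon_0+(n-k)\epsilon_1$, so the conservation law can be verified exactly by propagating the probabilities on this lattice (a numerical sketch of our own, with arbitrary parameter values):

```python
import numpy as np

alpha, delta = 0.3, 0.05             # illustrative values
c0, c1 = np.cos(alpha)**2, np.cos(alpha + delta)**2
s0, s1 = np.sin(alpha)**2, np.sin(alpha + delta)**2
eps0 = np.arctanh((c1 - c0) / (c1 + c0))
eps1 = np.arctanh((s1 - s0) / (s1 + s0))

def Pi(x):
    return (1.0 + np.tanh(x)) / 2.0

def p0(x):
    return Pi(x)*c1 + (1.0 - Pi(x))*c0

# k counts outcome-0 results; the walker sits at x0 + k*eps0 + (n-k)*eps1
x0, nsteps = 0.4, 30
dist = {0: 1.0}                      # exact distribution at n = 0
averages = []
for n in range(nsteps):
    avg = sum(w * Pi(x0 + k*eps0 + (n - k)*eps1) for k, w in dist.items())
    averages.append(avg)             # <Pi>_n, should stay constant
    new = {}
    for k, w in dist.items():
        x = x0 + k*eps0 + (n - k)*eps1
        q0 = p0(x)
        new[k + 1] = new.get(k + 1, 0.0) + w * q0         # outcome 0
        new[k] = new.get(k, 0.0) + w * (1.0 - q0)         # outcome 1
    dist = new
```

Within floating-point accuracy, the sequence of averages is constant, in agreement with the conservation law.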
\section{Continuous diffusive limit}
\label{sec:contin}
\subsection{General equation}
The continuous limit arises if we let the measurement strength tend to
zero. In this case, instead of the discrete equation \refeq{eq:10}, a
continuous equation arises, with the step numbers $n$ being mapped to
a continuous ``time'' $t$. If the measurement strength is constant
(independent of $n$) and if this constant strength tends to zero, the
corresponding limit is universal, that is, does not depend on the
measurement strength and on the particular measurement procedure. The
dynamics in such ``unconditional'' continuous limit is often described
by the stochastic Schr\"odinger equation or by the master equation for
the density matrix
\cite{belavkin89,gisin84,wiseman96,brun02a,oreshkov05,varbanov07,wiseman10:book}.
Nevertheless, to our knowledge, a consideration general enough to
include time- and conditionally-varied measurements was presented
only very recently in \cite{bauer13}. Earlier works dealt only with
the case of measurements of equal strength or at least the strength
which is not explicitly time dependent (but might depend on time
indirectly via the outcome of the previous measurement)
\cite{oreshkov05,varbanov07}. Instead of directly writing the
resulting equation according to \cite{bauer13}, we will proceed, for the
sake of a self-contained presentation, from the master equation for
$P(n,x)$, derived in the previous section, to the corresponding
continuous limit described by the Fokker-Planck (FP) equation. The FP
approach used here is also different from \cite{bauer13} where Ito
calculus is used, but Ito and FP approaches are, of course, equivalent
\cite{gardiner09:book}. We use the latter because of the
straightforward connection to the methods used in the theory of
stochastic ratchets \cite{reimann02}.
That is, our goal here is to derive the FP equation in the case which
includes the walk with conditionally varying measurement parameters
$\delta$, $\alpha$ which depend on the outcomes of all previous
measurements and also on $n$.
The transition to continuous time can be done as follows: We
introduce ``time'' $t$ such that each step of our process corresponds
to a small interval
$\tau_n=\tau(\delta_n(n,x_{n-1}),\alpha_n(n,x_{n-1}))$, that is, we
replace $n=\sum^n_{i=1} 1$ by
\begin{equation}
t\equiv \sum_{i=1}^n\tau_i\label{eq:5}
\end{equation}
and allow $\tau_i(x_i)$ to tend to zero for every $i$, $x_i$. We do
not assume that all $\tau_i$ are equal. In our case, as
$\tau_i(x_i)\to 0$, we can expect that
$P(t,x)\equiv \left.P(n,x)\right|_{n\to t}$ changes at every step only
slightly, and we can then expand $P(t,x)$ in a series as:
\begin{equation}
P(t+\tau_n,x)\approxeq P(t,x) +
\tau_n \partial_tP(t,x).\label{eq:3}
\end{equation}
To be allowed to do this we must assume that, independently of $n$,
the step size
$\epsilon_{i,n}=\epsilon_i(\delta_n(n,x_{n-1}),\alpha_n(n,x_{n-1}))$
defined in \refeqs{eq:epsx00}{eq:epsx10} goes to zero as
$\tau_n\to 0$. In particular, this is the case if $\delta_n \to 0$,
$\alpha_n=\mathrm{const}_n>0$ for all $n$. Thus, for small enough
$\delta_n$, we may assume:
\begin{gather}
\label{eq:alpha_n}
\alpha_n=\mathrm{const}(n,x),\\
\label{eq:delta}
\delta_n=\delta g_\delta(x_{n-1},n),\\
\tau_n=\delta^2 g_\tau(x_{n-1},n),
\label{eq:delta_tau}
\end{gather}
where we introduced the parameter $\delta\to0$ which describes how
fast $\delta_n$ and $\tau_n$ approach zero; $g_{\tau}(x,n)>0$,
$g_{\delta}(x,n)$ are some functions which do not depend on $\delta$
and which we can choose at will.
That is, we require that all $\tau_n$, $\delta_n$ tend to zero as
$O(\delta^2)$ and $O(\delta)$ respectively. This template is taken
from the consideration of the case with the constant step size as
shown in Appendix \ref{app:2}. The functions $g_{\tau}(x,n)>0$ and
$g_{\delta}(x,n)$ provide ``form-factors'', which determine the
strength of measurement in dependence on the system position $x$ and
$n$.
Using Eqs.~(\ref{eq:5}),(\ref{eq:delta}),(\ref{eq:delta_tau}), we
define a
function
$g(x,t)$ as:
\begin{equation}
\label{eq:25}
g(x,t) =
\left.\frac{g_\delta(x_{n-1},n)}{g_\tau(x_{n-1},n)}\right|_{n\to
t;x_{n-1}\to x }.
\end{equation}
Using Eqs.~(\ref{eq:3}),(\ref{eq:25}) we derive, in a rather standard
way, the FP equation (see Appendix \ref{app:3} for details and a
description of the general procedure in \cite{gardiner09:book}):
\begin{gather}
\label{eq:14}
\partial_t P(t,x) = -\partial_x J(t,x),\\
J(t,x) = \mu(t,x) P(t,x)
- \partial_{x}\left(D(t,x) P(t,x) \right).
\label{eq:14a}
\end{gather}
Here
\begin{equation}
\label{eq:mu:lim}
\mu(x,t) = g(x,t)^2\tanh{(x)}, \, D(x,t)= g(x,t)^2/2,
\end{equation}
have now the meaning of the drift and diffusion coefficients,
respectively. In different coordinates, like $\theta$ or $\Pi$, the FP
equation retains its form; only the drift and diffusion coefficients
are modified (see Appendix~\ref{app:4}).
This FP equation, as said, describes the dynamics of the pure state
$\ket{q}$ whose position on the line between $\ket{0}$ and $\ket{1}$
is described by the coordinate $x$. The stochastic distribution of the
position $x$ is due to the unpredictable character of the weak measurement
sequence. The same FP equation also describes a heavily damped
Brownian particle moving in the potential
\begin{equation}
V(x,t) = -\int_0^x \mu(x',t)dx',\label{eq:30}
\end{equation}
\cite{reimann02,doering98}.
The average drift velocity
$\langle \dot{x}(t)\rangle \equiv \int_{-\infty}^{+\infty}
\dfrac{dx}{dt} P(x,t) dx$ can be obtained also as an average of
$J(x,t)$:
\begin{equation}
\langle \dot{x}(t)\rangle = \int_{-\infty}^{+\infty} J(x,t) dx.
\label{eq:31}
\end{equation}
Note that here the ensemble average is assumed, and
$\langle \dot{x}(t)\rangle$ depends on $t$. For the case of $g(x)=1$,
that is, if the step size in our random walk is state-independent, we
have:
\begin{equation}
\label{eq:15}
\mu(x) = \tanh(x), \, V(x)=-\ln{(\cosh{x})},\, D=\frac12.
\end{equation}
The corresponding functions $\mu$, $D$, $V$ are shown in
\reffig{fig:diff}.
The solution of \refeqs{eq:14}{eq:14a},(\ref{eq:15}) with the initial
condition $P(0,x) = \delta(x-X)$, where $\delta(x-X)$ is the Dirac
delta-function localized at an arbitrary point $X$, can be found
analytically \cite{gisin84}:
\begin{equation}
\label{eq:21}
P(t,x) = \frac{1}{\sqrt{2 \pi t}} \frac{\cosh x}{\cosh X}
\exp{\left(-\frac{t^2+(x-X)^2}{2 t}\right)}.
\end{equation}
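As a consistency check (our own numerical illustration, not part of the original derivation), one can verify by direct quadrature that this solution stays normalized and that $\langle\Pi\rangle$ stays equal to its initial value $\Pi(X)$:

```python
import numpy as np

def P(t, x, X):
    """Analytic solution of the FP equation with mu = tanh(x), D = 1/2
    and the initial density localized at X."""
    return (np.cosh(x) / np.cosh(X) / np.sqrt(2.0*np.pi*t)
            * np.exp(-(t**2 + (x - X)**2) / (2.0*t)))

x = np.linspace(-40.0, 40.0, 400001)
dx = x[1] - x[0]
t, X = 1.0, 2.0
p = P(t, x, X)
norm = p.sum() * dx                   # should equal 1
Pi = (1.0 + np.tanh(x)) / 2.0
avg_Pi = (Pi * p).sum() * dx          # conserved: should equal Pi(X)
```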
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.5\textwidth]{fig-diff}
\caption{ \label{fig:diff}
Diffusion $D(x)$ (solid blue line), drift coefficient $\mu(x)$
(dashed red line) and the effective potential $V(x)$ (dotted
yellow line, normalized to a constant $c=0.1$ for better
visibility) in dependence on $x$ according to \refeq{eq:15}. The
asymptotic coordinates $x_\lar$, $x_\rar$ defined in
\refeqs{eq:xleft}{eq:xright} are shown, with $X=-10$ in this
case. Asymptotic coordinates are useful when $|x|$ is large,
that is, as the step $n\to\infty$: $D(x)$, $\mu(x)$ and $V(x)$
are significantly simplified for large $|x|$.
}
\end{center}
\end{figure}
\subsection{Asymptotic FP equation}
To make the semi-analytic approach described in \cite{reimann02} work
(as described in the next section), we have to find the conditions
where, assuming $g=\mathrm{const}$ in \refeq{eq:15}, we have also
$\mu=\mathrm{const}$. In our equations this is generally not the case
because of the $\tanh(x)$ factor. However, as one can see from
\reffig{fig:diff} and from \refeq{eq:15}, the deviation from this
condition decreases exponentially with $|x|$ because $|\tanh(x)|$
approaches 1 exponentially fast. Also, as one can see from
\refeq{eq:21}, if we take the initial starting point $X$ far away from
the origin $x=0$, $P(t,x)$ behaves very much like a normal Gaussian
distribution which shifts with time with the constant unit speed away
from $x=X$, and expands with the variance $\sigma^2=t$.
This allows us to consider the asymptotic behavior, as the initial
point $X$ and thus $x$ are far enough from the origin $x=0$. We thus
introduce ``shifted'' coordinates $x_\lar$, $x_\rar$ as (see also
\reffig{fig:diff}):
\begin{gather}
\label{eq:xleft}
x_\lar = x+X,\\
x_\rar = x-X,
\label{eq:xright}
\end{gather}
where $X\gg0$ is a large arbitrary number. We will call them
``asymptotic coordinates''. For the variables defined in this way, neglecting
the terms which are exponentially small in $|X|$, we have from
\refeq{eq:mu:lim}:
\begin{gather}
\label{eq:32}
\mu(x_\lar,t) = -g(x_\lar,t)^2, \,
\mu(x_\rar,t) = g(x_\rar,t)^2,\\
\label{eq:32a}
D(x_\lar,t) = g(x_\lar,t)^2/2, \,
D(x_\rar,t) = g(x_\rar,t)^2/2,
\end{gather}
that is, the factor $\tanh{(x)}$, which was present in the drift
coefficient in \refeq{eq:15}, disappears.
In the following, we will consider only the case when $x\to-\infty$,
and, correspondingly, we restrict ourselves to the variable $x_\lar$
(cf. \reffig{fig:diff}). The dynamics for the case of $x\to+\infty$
is obviously analogous, only the overall drift direction will be the
opposite, as \refeq{eq:32} indicates. The asymptotic FP equation for
this case coincides with the original one \refeqs{eq:14}{eq:14a}, only
written in asymptotic coordinates $x\to x_\lar$:
\begin{gather}
\label{eq:14as}
\partial_t P(t,x_\lar) = -\partial_{x_\lar} J(t,x_\lar),\\
J(t,x_\lar) = \mu(t,x_\lar) P(t,x_\lar)
- \partial_{x_\lar}\left(D(t,x_\lar) P(t,x_\lar) \right).
\label{eq:14asa}
\end{gather}
\subsection{FP equation for periodically varying potential}
In this section we focus on the case when $g(x_\lar,t)$ changes in
space and time periodically. We assume $g$ to have period $L$ in space
$x_\lar$. In our new asymptotic coordinates, a reformulation of the FP
equation \refeqs{eq:32}{eq:14asa} that takes advantage of such
periodicity is possible \cite{reimann02}. Namely, we define the
reduced quantities:
\begin{gather}
\label{eq:ptilde}
\tilde{P}(x_\lar,t) = \sum_{j=-\infty}^{+\infty}
P(x_\lar+j L,t),\\
\label{eq:jtilde}
\tilde{J}(x_\lar,t) = \sum_{j=-\infty}^{+\infty}
J(x_\lar+j L,t).
\end{gather}
Obviously, $\tilde{P}(x_\lar,t)$ and $\tilde{J}(x_\lar,t)$ are finite
and defined in the range $x_\lar\in[-L/2,L/2]$. Moreover, from
\refeqs{eq:ptilde}{eq:jtilde} one can see that $\tilde{P}$,
$\tilde{J}$ are periodic in $x_\lar$:
\begin{equation}
\label{eq:34}
\tilde{P}(x_\lar,t) = \tilde{P}(x_\lar+L,t),\, \tilde{J}(x_\lar,t) = \tilde{J}(x_\lar+L,t).
\end{equation}
\begin{figure}[tbph!]
\begin{center}
\includegraphics[width=\columnwidth]{prob-const-dist}
\caption{ \label{fig:prob-const-dist}
The dynamics of the reduced asymptotic probability density
$\tilde{P}(t,x_\lar)$ (a) and the current density
$\tilde{J}(t,x_\lar)$ (b) for $g(t,x_\lar)=1$ obtained by direct
simulations of \refeqs{eq:14b}{eq:14c}, assuming periodic
boundary conditions and initial conditions described in text. In
contrast to the initial variables $P$, $J$, the reduced variables
$\tilde{P}$, $\tilde{J}$ do have a steady-state, which is in
this case a homogeneous distribution.}
\end{center}
\end{figure}
In the asymptotic variables $\{x_\lar,t\}$, as it follows from
\refeqs{eq:32}{eq:32a}, $\mu(x_\lar,t)$ and $D(x_\lar,t)$ are periodic
in space with the same period $L$ (which, by the way, is not true for
$\mu$ and $D$ written using the original variable $x$). Under these
circumstances the FP equation written for $\tilde{P}$, $\tilde{J}$
remains the same as for $P$, $J$. That is, we have:
\begin{gather}
\label{eq:14b}
\partial_t \tilde{P}(t,x_\lar) = -\partial_{x_\lar} \tilde{J}(t,x_\lar),\\
\tilde{J}(x_\lar,t) = \mu(t,x_\lar) \tilde{P}(t,x_\lar)
- \partial_{x_\lar}\left(D(t,x_\lar) \tilde{P}(t,x_\lar) \right) ,
\label{eq:14c}
\end{gather}
where the coefficients remain the same as before, that is, are given by \refeqs{eq:32}{eq:32a}.
The advantage of such a reformulation is that now we can consider only
the finite interval in $x_\lar$ from, say, $-L/2$ to $L/2$. Besides,
the equation for the average drift velocity \refeq{eq:31} also retains
its form:
\begin{equation}
\label{eq:33}
\langle \dot{x}_\lar(t)\rangle = \int_{-L/2}^{L/2} \tilde{J}(x_\lar,t) dx_\lar.
\end{equation}
Remarkably, the direct definition of $\langle \dot{x}_\lar(t)\rangle$ as
the average of $\dot{x}_\lar$ with the probability distribution
$\tilde P(x_\lar,t)$ is not valid anymore.
As an illustration of the dynamics appearing in the reduced equations,
we show in \reffig{fig:prob-const-dist} the dynamics of
$\tilde{P}(x_\lar,t)$, $\tilde{J}(x_\lar,t)$ for the case of
$g(x_\lar,t) = \mathrm{const} = 1$ obtained using direct numerical
simulations of \refeqs{eq:14b}{eq:14c} with the initial condition
$P(x_\lar,t)\propto \exp{(-x_\lar^2/0.1)}$ and periodic boundary
conditions. The figure shows rather rapid homogenization of
$\tilde{P}(x_\lar,t)$, $\tilde{J}(x_\lar,t)$ in space because of the
action of diffusion.
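The simulation just described can be sketched with a simple explicit, conservative finite-volume scheme (an illustration of our own with arbitrary numerical parameters, not the code used for the figure):

```python
import numpy as np

# explicit finite-volume integration of the reduced FP equation on the
# periodic interval [-L/2, L/2) for g = 1, i.e. mu = -1, D = 1/2 in the
# left asymptotic region
L, N = 1.0, 64
dx = L / N
mu, D = -1.0, 0.5
dt = 0.25 * dx**2 / D                  # well below the explicit stability bound
x = (np.arange(N) + 0.5) * dx - L / 2.0
P = np.exp(-x**2 / 0.1)
P /= P.sum() * dx                      # normalized initial density

for _ in range(int(3.0 / dt)):
    Pr = np.roll(P, -1)                                  # periodic right neighbor
    J = mu * 0.5 * (P + Pr) - D * (Pr - P) / dx          # flux through cell faces
    P -= dt * (J - np.roll(J, 1)) / dx                   # conservative update

drift = J.sum() * dx                   # average drift: the integral of the current
```

The density relaxes to the homogeneous steady state $\tilde P = 1/L$, for which the average drift reduces to $\mu$.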
This homogenization illustrates an important peculiarity of the
reduced quantities: although the initial variables $J$, $P$ have no
steady-state in their dynamics, the reduced quantities $\tilde J$,
$\tilde P$ do have a steady-state. In the case of
\reffig{fig:prob-const-dist} this steady state is simply a constant
which depends neither on $t$ nor on $x_\lar$.
\begin{figure}[tbph]
\begin{center}
\includegraphics[width=\columnwidth]{ratchet}
\caption{ \label{fig:ratchet}
Schematic representation of the Brownian ratchet effect. Without
a ratchet effect ($g=1$), a constant drift force $\mu$ leads to
a current $\langle \dot x \rangle=\mu$ (black thin line). In
contrast, when $g$ changes in space and/or time (but still
$\langle g^2\rangle=1$), the average current
$\langle \dot x \rangle$ can be modified even though the
average force $\langle \mu \rangle$ remains the same (blue solid
and black dotted lines). Although in many other systems the
Brownian ratchet effect can change the average direction of
motion (see black dotted line), it is not possible in the
present case -- the type of ratchet which we call ``weak
ratchet''. In this latter case, $|\langle \dot x \rangle|$ can be
modified but its sign is not reversed (see blue line). The
asymptotic values of $\mu$ and $\langle \dot x \rangle$ for
$x\to-\infty$ (that is, assuming asymptotic coordinates
$x_\lar$) are marked by dashed lines. Weak ratchet effect in the
asymptotic case is marked by a red arrow.}
\end{center}
\end{figure}
\section{Brownian ratchets}
\label{sec:ratchets}
\begin{figure*}[tbph]
\begin{center}
\includegraphics[width=0.75\textwidth]{prob-spt-dist-2x2-18e}
\caption{ \label{fig:spt-ratchet}
Brownian ratchet effect for $g(x_\lar,t)$ varying in space and
time. (a) $g(x_\lar,t)$, as given by
\refeqs{eq:37}{eq:22b}. (b), (c) the reduced asymptotic
probability density $\tilde{P}(x_\lar)$ and the current density
$\tilde{J}(x_\lar)$ obtained by direct simulations of
\refeqs{eq:14b}{eq:14c} with periodic boundary conditions and
initial conditions described in text. (d) the averaged current
$\langle \dot x_\lar(t)\rangle$ in dependence on time.}
\end{center}
\end{figure*}
One of the most interesting phenomena in Brownian flows is the
possibility of so-called stochastic ratchets
\cite{haenggi09,reimann02}. Namely, by manipulating dynamically the
potential $V(x,t)$ in a Brownian flow, one can have nonzero average
motion $\langle \dot{x}\rangle\ne0$ even in the case when the average
force, $\langle\mu\rangle\equiv\int \mu(x,t)dx$ is exactly zero (for
every $t$). Here, to simplify the notation, we denote by
$\langle \dot{x}\rangle$ the time- and space average defined as:
\begin{equation}
\label{eq:avxt}
\langle \dot{x}\rangle \equiv \langle \bar{\dot{x}}(\infty) \rangle,
\end{equation}
where:
\begin{equation}
\label{eq:23}
\langle \bar{\dot{x}}(t)\rangle \equiv
\frac1t\int_0^t\langle \dot{x}(\tau)\rangle\,d\tau
\end{equation}
is the ``moving average'' in time of the space average.
Alternatively, one can speak about a ratchet effect if a nonzero
initial force $\mu\ne0$ can be canceled or even reversed by
introducing some periodic modulations of the potential. Both of these
definitions are visualized in \reffig{fig:ratchet}.
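As a purely illustrative numerical sketch of the definitions \refeq{eq:avxt} and \refeq{eq:23} (not part of the derivation; the Langevin form $dx=\mu\,dt+\sqrt{2D}\,dW$ with constant $\mu=-1$ and the arbitrary value $D=1/2$ are our assumptions here), one can check that the moving time average of the ensemble-averaged velocity converges to $\mu$:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, D = -1.0, 0.5                  # assumed constant drift and diffusion
dt, n_steps, n_traj = 1e-3, 10000, 400

# Euler-Maruyama increments of dx = mu*dt + sqrt(2*D)*dW
dx = mu * dt + np.sqrt(2 * D * dt) * rng.standard_normal((n_steps, n_traj))
x_hist = np.cumsum(dx, axis=0)     # trajectories starting from x(0) = 0

# moving time average of the ensemble-averaged velocity, Eq. (eq:23):
# for x(0) = 0 it reduces to <x(t)>/t
t = dt * np.arange(1, n_steps + 1)
avg_xdot = x_hist.mean(axis=1) / t
print(avg_xdot[-1])                # converges to mu = -1
```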
In our case it is quite clear that the average flow defined by
$\mu=-1$ (in the asymptotic case $x\to x_\lar$) cannot be
reversed. Otherwise, one could tune the potential, for every
particular trajectory, in such a way that the current system state is
forced to move in the direction opposite to $\mu$, and thus bring the
system to either of the states $\ket{0}$, $\ket{1}$ at will, which
would violate the conservation law of $\langle\Pi\rangle$ given by
\refeq{eq:uppi_av}.
Nevertheless, one can try to find a Brownian ratchet effect in a weak
sense, that is, to find such a function $g(x_\lar,t)$ that the
asymptotic value of $\langle\dot x_\lar\rangle>-1$, despite
$\langle\mu\rangle=-1$. The notion of a weak ratchet effect, in
comparison to a ``normal'' stochastic ratchet, is visualized in
\reffig{fig:ratchet} (blue line). Weak ratchets are in close
correspondence to the weak Parrondo games, where the combination of
lossy games leads to a less lossy one, but still not to a winning one
\cite{wu14}.
Of course, one can always obtain $\mu(x_\lar,t)=0$ by simply putting
$g=0$, that is, by reducing the step size of the random walk to zero.
Here, however, we want to investigate effects that are independent of
such a crude step-size reduction. To ensure this, we will always take
$g$ such that
\begin{equation}
\label{eq:35}
\langle g(t)^2\rangle = 1,
\end{equation}
where we define $\langle g(x_\lar,t)^2\rangle$ as:
\begin{equation}
\label{eq:36}
\langle g(t)^2\rangle \equiv
\frac{1}{L}\int_{-L/2}^{L/2} g(x_\lar,t)^2
dx_\lar.
\end{equation}
This condition excludes the possibility of reducing
$\langle\dot x_\lar\rangle$ by reducing the measurement strength
globally. That is, if one reduces the measurement strength near some
point, one has to increase it in the vicinity of another one.
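A minimal sketch of how the normalization \refeq{eq:35}--\refeq{eq:36} can be imposed on a discretized profile (the grid and the example profile are illustrative choices of ours):

```python
import numpy as np

L = 2 * np.pi
x = np.linspace(-L / 2, L / 2, 1024, endpoint=False)

g_raw = 1.0 - 0.6 * np.sin(x)            # some unnormalized profile
# rescale so that the spatial mean of g^2 equals one, Eqs. (eq:35)-(eq:36)
g = g_raw / np.sqrt(np.mean(g_raw**2))

print(np.mean(g**2))                     # -> 1.0
```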
We will now try to construct a function $g$, periodic in time and
space, which allows us to reduce $|\langle \dot x_\lar \rangle|$,
making it as small as possible.
We remark that several types of stochastic ratchets have been
considered in the literature (see \cite{reimann02,braun:book04} and
references therein); the classification is based on the functional
form of $D$ and $\mu$. In many commonly studied hydrodynamic Brownian
flows $\mu$ and $D$ can be varied quite independently -- in contrast
to our situation, where independent variation of $D$ and $\mu$ is
impossible because of the common factor $g$. Our situation closely
resembles hydrodynamic Brownian ratchets with varying friction
\cite{reimann02,luchsinger00,lancon01,krishnan92}. The most studied
class of ratchets is the so-called pulsating one, where $\mu$ may vary
in space and time, whereas $D$ is a constant. On the other hand, the
situations when both $\mu$ and $D$ vary in space or, alternatively, in
time have also been considered, under the names of Seebeck and
temperature ratchets, respectively. They can be mapped, by a suitable
change of variables, to the pulsating ratchets.
In our case, as one can see, the situation when $g$ changes in
time but not in space provides no possibility for any ratchet
effect. Namely, in this case \refeq{eq:33} can be calculated directly
by integrating \refeq{eq:14c} with the boundary conditions
\refeq{eq:34}, giving $\langle \dot x_\lar(t)\rangle=-1$. We then
have $\tilde P\to\mathrm{const}=1/L$, that is, full homogenization of
$\tilde P$ takes place, exactly as in the case of $g=1$.
We therefore consider the cases when $\mu$ and $D$ change both in
time and space, or only in space. For the presence of the ratchet
effect, the symmetry of $\mu$ and $D$ is of critical
importance. In general, ``almost all'' functions, except the ones
possessing certain particular symmetry properties, allow the ratchet
effect \cite{reimann02,braun:book04}. Nevertheless, no analytical
symmetry relation is known, to our knowledge, for the case when both
$\mu$ and $D$ are arbitrary functions of space and time. For the case
when $\mu$ and $D$ are only space-dependent, the situation is simpler
and is discussed in the next section.
One of the most well-known types of ratchets is the on-off tilting
ratchet, where the diffusion $D$ is constant and the asymmetric
potential $V$ is switched on and off. At the on-stage, the particle
moves to the minimum of the potential and therefore becomes well
localized. When the potential is switched off, diffusion leads to a
broadening of the particle's wave packet. Switching the potential on
again makes the particle feel the force, which pushes it in a certain
direction. If the potential is asymmetric, the force is also
asymmetric, leading to an average current.
With the above in mind, we first probe functions $g(x_\lar,t)$
of the following form:
\begin{equation}
\label{eq:37}
g(x_\lar,t)=C(t)\left\{1+F(x_\lar)f(t)\right\},
\end{equation}
where the function $f(t)$ is periodic in time which models the
switching on and off behavior, and $F(x)$ is periodic in space, but
might be asymmetric. The normalizing constant $C(t)$ is obtained from
\refeq{eq:35}. To start with, we will try the following functions:
\begin{gather}
\label{eq:22}
f(t) = (\sign{(\sin{(t)})}-1)/2, \\
F(x_\lar) = a\left[\sin{(x_\lar)}+b\sin{(2
x_\lar)}\right], \label{eq:22a}\\
a=-0.6,\: b=-0.5.\label{eq:22b}
\end{gather}
Here, $f(t)$ works as a switcher which is active only during half of
the period, $a$ determines the ``amplitude'' of the periodic
potential, whereas $b$ is selected in such a way that the shape of $g$
resembles a ``saw-tooth'', in order to introduce some spatial
asymmetry into the profile of $g$. Indeed, this shape of $F$ is simply
the decomposition of the ideal saw-tooth shape
$F_s(x)=\sum_{n=1}^{\infty}(-1)^n\sin{(n x)}/n$ truncated after the
second term.
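For concreteness, the profile \refeq{eq:37} with \refeqs{eq:22}{eq:22b} can be assembled as follows, with $C(t)$ fixed at each $t$ by the normalization \refeq{eq:35} (the grid resolution is an arbitrary choice of ours):

```python
import numpy as np

a, b = -0.6, -0.5
L = 2 * np.pi
x = np.linspace(-L / 2, L / 2, 1024, endpoint=False)

def F(x):
    # truncated saw-tooth decomposition, Eq. (eq:22a)
    return a * (np.sin(x) + b * np.sin(2 * x))

def f(t):
    # on-off switcher active during half of the period, Eq. (eq:22)
    return (np.sign(np.sin(t)) - 1.0) / 2.0

def g(x, t):
    raw = 1.0 + F(x) * f(t)
    C = 1.0 / np.sqrt(np.mean(raw**2))   # normalizing constant from Eq. (eq:35)
    return C * raw

for t in (0.5, 2.0, 4.0):
    print(t, np.mean(g(x, t)**2))        # each spatial mean of g^2 equals 1
```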
The resulting dynamics is plotted in \reffig{fig:spt-ratchet}. Namely,
the shape of $g$ is presented in \reffig{fig:spt-ratchet}(a) whereas
\reffig{fig:spt-ratchet}(b) and \reffig{fig:spt-ratchet}(c) show the
temporally- and spatially resolved probability and current. To
calculate \reffig{fig:spt-ratchet}, the boundary and initial
conditions were taken as in \reffig{fig:prob-const-dist}.
One can see from \reffig{fig:spt-ratchet}(b) that when the
space-varying potential is on, the probability density $\tilde P$
concentrates in the regions where $g$ (and thus $D$, $\mu$) is
minimal. When it is off, the wave packet starts to diverge (and at the
same time is moving to the negative direction of $x_\lar$).
This behavior is thus different from typical pulsating ratchets in the
sense that when the potential is switched on, the particle is
localized in the minima of $g$ (and thus of $D$ and $\mu$) and not in
the minima of the potential. In \reffig{fig:spt-ratchet}(d) one can
see that the current $\langle \dot x_\lar(t)\rangle$ approaches, after
a short transition process, a stationary regime of oscillations in
time with a period $2\pi$. The long time behavior of the average of
$\langle \bar{ \dot x}_\lar(t)\rangle$ given by \refeq{eq:23} is shown
in \reffig{fig:ratchet-effect}, where it is seen that this average
approaches $\approx -0.86$ instead of $-1$ as in the case of constant
$g=1$, $\mu=-1$, thus clearly showing the ratchet effect in this
process.
To check the stability of the effect, simulations were made for
different functions $f(t)$, $F(x)$. For instance, in
\reffig{fig:ratchet-effect} the case with $b=0$ and
$f(t)=\sign{(\sin{(t)})}$ is also plotted. In this case, the ratchet
effect is definitely smaller but still persists. As said above, the
condition \refeq{eq:35} excludes the effect of bare step-size
reduction in this random walk, demonstrating that the ratchet effect
is a dynamical phenomenon independent of the step size.
\begin{figure}[tbph!]
\includegraphics[width=\columnwidth]{avj-ratchet-18e}
\caption{ \label{fig:ratchet-effect}
Dependence of the temporal average
$\langle \bar{ \dot{ x}}_\lar\rangle$ given by \refeq{eq:23} on
the averaging time interval $t$. In the case of constant $g=1$
(orange dashed line) this quantity quickly approaches $-1$ (no
ratchet effect), whereas in the case of varying measurement
strength with the parameters \refeqs{eq:37}{eq:22b} (blue solid
line) it approaches $\approx -0.86$, demonstrating a weak
Brownian ratchet. The effect strength depends on the function
shape. For instance, green dotted line shows the case
\refeq{eq:37} with the spatial dependence $F(x)$ given by
\refeq{eq:22a} with $a=-0.8$, $b=0$, and $f(t)=\sign{\sin{t}}$.
}
\end{figure}
\section{Seebeck ratchets and dynamical localization}
\label{sec:localiz}
\begin{figure*}[tbph!]
\begin{center}
\includegraphics[width=0.75\textwidth]{prob-sp-dist-2x2}
\caption{ \label{fig:sp-ratchet}
Seebeck ratchet and dynamical localization effect for
$g(x_\lar)$ dependent only on spatial coordinate. (a)
$g(x_\lar,t)$ given by \refeq{eq:37} with $C(t)=\mathrm{const}$,
$f(t)=1$, and other parameters defined by \refeq{eq:22a},
\refeq{eq:22bb}. (b), (c) Reduced asymptotic probability
density $\tilde{P}(x_\lar)$ and the current density
$\tilde{J}(x_\lar)$ obtained by direct simulations of
\refeqs{eq:14b}{eq:14c} with periodic boundary conditions and
initial conditions described in text. (d) Spatially averaged
current $\langle \dot x_\lar\rangle$ in dependence on time. }
\end{center}
\end{figure*}
In general, the ratchet effect can appear if $D$ and $\mu$ change only
in space. In this case, the diffusion $D$ and the potential
$V$ defined by \refeq{eq:30} may be not in phase \cite{reimann02},
which in our case is typically fulfilled, since $D\sim\mu$ and
$\mu=-\partial_x V$ (that is, if $D\sim\sin x$, then $V\sim\cos x$;
see also \reffig{fig:V-sp}). Such ratchets are typically known as
Seebeck ones \cite{reimann02}. For Seebeck ratchets, an analytical
condition exists which determines the absence of the ratchet
effect. In particular, if we consider the case with no average force
($\langle \mu \rangle=0$) and if $\int_{-L/2}^{L/2}\mu(x)/D(x)dx=0$,
no ratchet effect is present \cite{vankampen1988,landauer88}. In our
case, $\langle \mu \rangle\ne0$, so that the condition above cannot be
applied directly. Nevertheless, we can, by the replacement
$x_\lar\to x_\lar$, $t\to t-x_\lar$, reduce our equation to the case
with $\langle \mu \rangle=0$; in this case we have
$\langle \dot x_\lar\rangle\to \langle \dot x_\lar\rangle +1$.
Afterwards, we can apply the above criterion, which gives us the
criterion for the absence of the ratchet effect for our case in the
form:
\begin{equation}
\label{eq:seebeck_cr}
\frac{1}{L}\int_{-L/2}^{L/2}\frac{1}{g^2(x_\lar)}dx_\lar =1.
\end{equation}
That is, for a typical function $g$ (which satisfies \refeq{eq:35}) we
should expect the presence of a ratchet effect, unless
\refeq{eq:seebeck_cr} is also valid. An exemplary profile of $g$
which we use to test the Seebeck ratchet numerically is given by
\refeq{eq:37} with $f(t)=1$ and $F(x_\lar)$ defined by \refeq{eq:22a}
with:
\begin{equation}
a=-0.8,\: b=0,\label{eq:22bb}
\end{equation}
and is shown in \reffig{fig:sp-ratchet}(a). For such a function $g$,
as one can see in \reffig{fig:sp-ratchet}(b), the average current
$\langle \dot x_\lar\rangle$ can also be larger than $-1$; in the case
of \reffig{fig:sp-ratchet} it approaches $\approx -0.2$ as one can see
in \reffig{fig:sp-ratchet}(d). In this case, the initial distribution
is quickly rearranged to a stationary (but inhomogeneous) one.
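As a numerical sanity check (the grid discretization being our choice), the left-hand side of \refeq{eq:seebeck_cr} can be evaluated for this profile; it comes out as $\approx 6.1$, far from unity, so by the criterion a ratchet effect is indeed expected:

```python
import numpy as np

a = -0.8                                   # parameters (eq:22bb), b = 0
L = 2 * np.pi
x = np.linspace(-L / 2, L / 2, 4096, endpoint=False)

raw = 1.0 + a * np.sin(x)                  # Eq. (eq:37) with f(t) = 1
g = raw / np.sqrt(np.mean(raw**2))         # normalization (eq:35)

criterion = np.mean(1.0 / g**2)            # lhs of Eq. (eq:seebeck_cr)
print(criterion)                           # ~6.1, far from 1: ratchet expected
```

For this profile the integral can also be done in closed form, giving $(1+a^2/2)/(1-a^2)^{3/2}$, which the discretized value reproduces.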
Now, again, the system is located mostly near the minimum of $g$. This
allows an interpretation of the Seebeck ratchet effect in the present
case in terms of a ``dynamical localization''. Namely, let us
examine the potential $V(x_\lar)$ shown in \reffig{fig:V-sp} (dotted
yellow line). One can see that $V(x_\lar)$ has a flat region
(where it is almost constant) close to $x_\lar=\pi/2$. That is, there
is almost no effective force at that point. If our effective
``particle'' approaches this region, it nearly stops. Nevertheless,
the ``particle'' still experiences a small drift in the negative
direction of $x_\lar$.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=\columnwidth]{V-sp}
\caption{ \label{fig:V-sp}
Diffusion $D(x)$ (solid blue line), shift $\mu(x)$ (dashed red
line) and the effective potential $V(x)$ (dotted yellow line,
normalized to a constant $c=0.1$ for better visibility) in
dependence on $x_\lar$ with $g(x_\lar)$ being time-independent, that is, given by \refeq{eq:37} with $C(t)=\mathrm{const}$, $f(t)=1$, and other parameters defined by \refeq{eq:22a}, \refeq{eq:22bb}.
}
\end{center}
\end{figure}
Going one step further, we now consider the case when $g=0$ at some
point. In this case we also expect the localization shown in the
previous example, but more interesting dynamics will appear as well,
as we will see below. In general, to observe localization it is not
necessary to take a periodic potential as in the previous example. We
now consider the global dynamics related to localization, and
therefore return from the ``asymptotic coordinate'' $x_\lar$ to the
initial coordinate $x$, and thus to the FP equation as written in
\refeqs{eq:14}{eq:14a}. We assume also, for simplicity, that $g(x)$
approaches zero at only one single point $X$, that is, $g\to 0$ as
$x\to X$. Returning to our initial qubit, the state $\ket{X}$ is
given by
\begin{gather}
\label{eq:41}
\ket{X}=A\ket{0}+B\ket{1};\\
A=\sqrt{\Pi(X)},
\,B=\sqrt{1-\Pi(X)},\label{eq:41a}
\end{gather}
where $\Pi(X)$ is given by \refeq{eq:pi}. As we will see later, the
trajectory cannot cross $\ket{X}$ in this case. A state $\ket{x}$
located between $\ket{0}$ and $\ket{X}$ will approach either
$\ket{0}$ or $\ket{X}$ as $t\to\infty$. Analogously, a state located
initially between $\ket{X}$ and $\ket{1}$ will approach either
$\ket{X}$ or $\ket{1}$ (see \reffig{fig:pm}(b)). We note the
similarity of this limiting dynamics to the initial system with the
state-independent coupling strength $g=1$ (where the limiting states
are $\ket{0}$ and $\ket{1}$, see \reffig{fig:pm}(a)). One can make
this analogy exact by considering the FP equation in the coordinate
$\Pi(x)$ defined in \refeq{eq:pi}, which is given by (see also
Sec.~\ref{app:4}):
\begin{gather}
\label{eq:16aa}
\partial_t P(t,\Pi) = \partial_{\Pi\Pi}\left(D(\Pi)P(t,\Pi)\right)\\
D(\Pi) = 2(\Pi-1)^2\Pi^2g^2(\Pi), \label{eq:16aaa}
\end{gather}
so that the drift coefficient $\mu=0$ in these
coordinates. We remark that for $\ket{0}$ and $\ket{1}$
($\Pi=0$ and $\Pi=1$, respectively) $D(\Pi)=0$.
Let us denote $\Pi(X)\equiv \Pi_X$; we thus have
$g(\Pi_X)=0$. We also consider only the case when the initial state is
between $\ket{0}$ and $\ket{X}$, that is, $x(t=0)<X$ and
$\Pi(t=0)<\Pi_X$ (see \reffig{fig:pm}(b), red lines). In this case we
can obviously define a function $\tilde g(\Pi)$ such that:
\begin{equation}
\label{eq:7}
g(\Pi) = \frac{\Pi_X-\Pi}{1-\Pi}\tilde g(\Pi),
\end{equation}
which is possible without singularities since $1-\Pi>1-\Pi_X>0$.
Here, $\tilde g\ge 0$ no longer necessarily approaches zero as
$\Pi\to\Pi_X$.
Now, by making a variable change:
\begin{equation}
\tilde\Pi=\Pi/\Pi_X,\, \tilde t = \Pi^2_Xt,\label{eq:8}
\end{equation}
we arrive at a new FP equation:
\begin{gather}
\label{eq:16bb}
\partial_{\tilde t} P(\tilde t,\tilde \Pi)
= \partial_{\tilde\Pi\tilde\Pi}\left(\tilde D(\tilde\Pi)P(\tilde t,\tilde\Pi)\right),\\
\tilde D(\tilde\Pi) = 2(\tilde\Pi-1)^2\tilde\Pi^2\tilde g^2(\tilde\Pi), \label{eq:16bbb}
\end{gather}
where $\tilde g(\tilde\Pi)=\tilde g(\tilde\Pi \Pi_X)$. One can see that
\refeqs{eq:16aa}{eq:16aaa} and \refeqs{eq:16bb}{eq:16bbb} are
completely equivalent. That is, the dynamics of the random walk
between $\ket{0}$ and $\ket{X}$ and between $\ket{0}$ and $\ket{1}$
can be mapped one-to-one onto each other. In particular, the dynamics
of the random walk with $\tilde g=1$, that is, with
$g(\Pi) = (\Pi_X-\Pi)/(1-\Pi)$, is completely equivalent to the
dynamics of the simplest random walk with $g=1$. The system with
$\tilde g=1$ behaves near $\ket{X}$ in the same way as the system with
$g=1$ behaves near the state $\ket{1}$; for instance, the time of
arrival at the point $\ket{X}$ is infinite. This is obviously true for
any other bounded function $\tilde g$ obeying \refeq{eq:35} and such
that $\tilde g>0$.
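The equivalence can also be verified directly at the level of the diffusion coefficients: under \refeq{eq:8}, derivatives scale as $\partial_{\Pi\Pi} = \Pi_X^{-2}\partial_{\tilde\Pi\tilde\Pi}$ and $\partial_t = \Pi_X^2\partial_{\tilde t}$, so \refeq{eq:16aa} maps to \refeq{eq:16bb} provided $D(\Pi) = \Pi_X^4\,\tilde D(\Pi/\Pi_X)$. A quick numerical check of this identity for $\tilde g = 1$ (the value of $\Pi_X$ below is arbitrary):

```python
import numpy as np

Pi_X = 0.6
Pi = np.linspace(0.0, Pi_X, 501)[:-1]        # states between |0> and |X>

g = (Pi_X - Pi) / (1.0 - Pi)                 # Eq. (eq:7) with g_tilde = 1
D = 2.0 * (Pi - 1.0)**2 * Pi**2 * g**2       # Eq. (eq:16aaa)

Pi_t = Pi / Pi_X                             # rescaled coordinate, Eq. (eq:8)
D_t = 2.0 * (Pi_t - 1.0)**2 * Pi_t**2        # Eq. (eq:16bbb) with g_tilde = 1

print(np.max(np.abs(D - Pi_X**4 * D_t)))     # zero up to roundoff
```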
The mapping also allows us to calculate straightforwardly the
probability of the outcomes $\ket{0}$ or $\ket{X}$ (resp. $\ket{X}$
or $\ket{1}$) as $t\to\infty$. In our initial system with $g=1$, the
a priori probabilities to end in $\ket{0}$ or $\ket{1}$ are given by
$\Pi(x)$ and $1-\Pi(x)$, respectively, and, according to
\refeq{eq:uppi_av}, they do not depend on the measurement strength
$g(x,t)$ (unless $g$ approaches zero somewhere). By rescaling the
latter situation using \refeq{eq:8} we see that, starting from the
state $\ket{x}$, we reach $\ket{0}$ or $\ket{X}$ with the
probabilities $\tilde \Pi(x)=\Pi(x)/\Pi_X$ and
$1-\tilde \Pi(x)=1-\Pi(x)/\Pi_X$, respectively. This probability also
does not depend on the measurement strength (unless it approaches
zero somewhere else at $x<X$). In the same way, if $\ket{x}$ is
between $\ket{X}$ and $\ket{1}$, the probabilities to reach
$\ket{X}$ or $\ket{1}$ are $(\Pi(x)-\Pi_X)/(1-\Pi_X)$ and
$(1-\Pi(x))/(1-\Pi_X)$, respectively.
\begin{figure}[htbp!]
\begin{center}
\includegraphics[width=0.4\textwidth]{pm}
\caption{ \label{fig:pm}
The limits $t\to\infty$ of the weak measurement sequence in
the case of $g=1$ (a) and in the case of $g(x,t)$
such that $g(x)\to0$ as $x\to X$ (b); the coordinate $X$
corresponds to the qubit state $\ket{X}$. In the former case,
an arbitrary state
$\ket{q}$ approaches either $\ket{0}$ or $\ket{1}$, whereas in case
(b) the state may also have
$\ket{X}$ as a limiting point. Some states
(such as $\ket{q}$) tend to either $\ket{0}$ or $\ket{X}$,
while the others (such as $\ket{q'}$) approach $\ket{1}$ or $\ket{X}$.
}
\end{center}
\end{figure}
\section{Conclusions}
\label{sec:concl}
In the present article we have considered quantum trajectories
resulting from a sequence of weak measurements in the simplest
one-dimensional setting, assuming a measurement strength that
depends on the step number $n$ and on the current state of the
system, described by the coordinate $x$ on the line. Of course, the
current state cannot be inferred from the measurement directly, in
contrast to classical systems. Nevertheless, if the initial state and
the parameters of the weak measurements are known, all subsequent
positions of the system can be inferred from the sequence of the
measurement outcomes, and thus the conditional measurement strength
is well defined.
Such a measurement process, in the limit of infinitely small steps,
leads to diffusive dynamics with both the drift and the diffusion
depending on the coordinate $x$ and time $t$. In fact, the dynamics
arising in this case is quite similar to that of, for instance, an
overdamped Brownian particle in a flow with a varying friction
coefficient. In this article we discussed the nontrivial dynamics
arising from this analogy.
For instance, an exciting phenomenon arising in Brownian flows is the
stochastic ratchet effect, which makes it possible to ``rectify''
Brownian motion using a periodically varying potential. Such a
potential does not introduce any net force by itself, but
nevertheless allows pushing particles in the direction opposite to
the flow. As shown here, in our case we can achieve only a weak form
of the stochastic ratchet effect: we cannot reverse the overall drift
direction of the quantum trajectories, but only slow down this
motion. The ratchet effect manifests itself in the localization of
the ``particle'' in the areas where the measurement strength is
reduced and thus the effective force is minimal.
Finally, we considered the case when the step size approaches zero as
the system approaches some state $\ket{X}$. No quantum trajectory can
cross the singular point arising in this case. Moreover, the
trajectories approach such a singularity in infinite time, in a
similar way as they approach the ``normal'' basis states. The FP
equation demonstrates a remarkable self-similarity in this case: an
arbitrary quantum walk between any two consecutive zeros can be
mapped onto a quantum walk between $\ket{0}$ and $\ket{1}$ with
non-vanishing measurement strength.
The effects predicted here can be tested in measurement-only
quantum control settings, such as, for instance, the one recently
realized experimentally using defect-in-diamond-based qubits
\cite{blok14}, but also in other setups where weak quantum
measurement or control has been realized, for instance for
photon-based \cite{gillett10}, ultracold-atom-based
\cite{patil15,murch08} or superconducting qubits \cite{murch13}.
\section*{Acknowledgment}
The author is thankful to Nieders.\ Vorab, project ZN3061, and to the
German Research Foundation (DFG), project BA 4156/4-1, for
financial support.
The low rank approximation of matrices is a crucial component in many data mining applications today. In addition to functioning as a stand alone technique for dimensionality reduction \cite{cohen2015dimensionality}, denoising \cite{nguyen2013denoising}, signal processing \cite{fazel2008compressed}, data compression \cite{pmlr-v70-anderson17a}, and more, it has also been incorporated into more complex algorithms as a computational subroutine \cite{liu2013tensor,parikh2014proximal}. As part of large scale modern data processing, low rank approximations help to reveal important structural information in the raw data and to transform the data into forms that are more efficient for computation, transmission, and storage.
The singular value decomposition (SVD) is a matrix factorization of both theoretical and practical importance, and it has a number of useful properties related to matrix nearness and rank. In particular, it is used to identify nearby matrices of lower rank, and, leaving aside the question of computational complexity, it is known that the rank-$k$ truncated SVD is the ``gold standard'' for approximating a matrix by another matrix of rank at most $k$ \cite{eckart1936approximation}.
While procedures for computing the exact rank-$k$ truncated SVD have existed since the 1960s \cite{golub1965calculating}, the computational cost of these algorithms are prohibitive at the scale of many of today's datasets. The recent applications of low rank matrix approximation techniques to big-data problems differ in both the computation efficiency requirement and the accuracy requirement of the algorithms. Firstly, we are increasingly leaving behind the era of moderately sized matrices and entering an age of web-scale datasets and big-data applications. The matrices arising from such are often extraordinarily large, exceeding the order of $10^6$ in one or both of the dimensions \cite{talwalkar2013large,mazumder2010spectral,cohen2012survey}, and have much higher computational efficiency demands on the algorithms. Secondly, while the truncated SVD may be the final desired object for previous scientific computing questions, for big-data applications, it is usually an intermediate representation for the overall classification or regression task. Empirically, the final accuracy of the task only weakly depends on the accuracy of the matrix approximation \cite{gu2015subspace}. Thus, while previous variants of truncated SVD algorithms focused on computing up to full double precision, newer iterations of these algorithms aimed at big-data applications can comfortably get by with only $2$-$3$ digits of accuracy.
These considerations have led to the development of randomized variants of traditional SVD algorithms suited to large, sparse matrices, in particular randomized subspace iteration (RSI) and randomized block Lanczos (RBL) \cite{drineas2006fast,rokhlin2009randomized,halko2011finding,musco2015randomized}. By applying either a randomized sketching or projecting operation on the original matrix, these algorithms balance reducing computational complexity with producing an acceptably accurate approximation. While empirically they have shown to be effective and have been widely adopted by popular software packages, e.g. \cite{pedregosa2011scikit}, there has been scant new theoretical work on the convergence guarantees of the latter algorithm, the better performing but more complicated randomized block Lanczos algorithm.
In this paper, we present novel theoretical convergence results concerning the rate of singular value convergence for the RBL algorithm, along with numerical experiments supporting these results. Our analysis presents a unified singular value convergence theory for variants of the Block Lanczos algorithm, for all valid parameter choices of block size $b$. To our knowledge, all previous results in the literature are applicable only for the choice of $b \geq k$, the target rank. We present a generalized theorem, applicable to all block sizes $b$, which coincide asymptotically with previous results for the case $b \geq k$, while providing equally strong rates of convergence for the case $b < k$.
In Section~\ref{sec:backbround}, we present the randomized block Lanczos algorithm and discuss some previous convergence results for this algorithm. In Section~\ref{sec:theoretical_results}, we dive into our main theoretical result and its derivation, followed by corollaries for special cases. In Section~\ref{sec:numerical_experiments}, we investigate the behavior of this algorithm for different parameter settings and empirically verify the results of the previous section. Finally, we give concluding remarks in Section~\ref{sec:conclusions}.
\section{Background} \label{sec:backbround}
\subsection{Preliminaries}
Throughout this paper, our analysis assumes exact arithmetic.
We denote matrices by bold-faced uppercase letters, e.g. $\mathbf{M}$, entries of matrices by the plain-faced lowercase letter that the entry belongs to, e.g. $m_{11}$, and block submatrices by the bold-faced or script-faced uppercase letter that the submatrix belongs to subscripted by position, possibly with subscripts, e.g. $\mathbf{M}_{11}$, $\mathcal{M}_{11}$ or $\mathbf{M}_{a \times b}$. Double numerical subscripts denote the position of the element or the submatrix, i.e. $\mathbf{M}_{11}$ and $m_{11}$ are the topmost leftmost subblock or entry of $\mathbf{M}$ respectively. $m \times n$ subscripts denote the dimensions of a submatrix, when such information is relevant, i.e. $\mathbf{M}_{a \times b}$ denote a subblock of $\mathbf{M}$ that has dimensions $a \times b$.
Constants are denoted by script-faced uppercase or lowercase Greek letters, e.g. $\mathcal{C}$ or $\alpha$, when they are asymptotically insignificant, i.e. constant with respect to the convergence parameter.
The SVD of a matrix $\mathbf{A}$ is defined as the factorization
\begin{equation}
\mathbf{A} = \mathbf{U} \mathbf{\Sigma} \mathbf{V}^T
\end{equation}
where $\mathbf{U} = \begin{bmatrix} \mathbf{u}_1 & \cdots & \mathbf{u}_n \end{bmatrix}$ and $\mathbf{V} = \begin{bmatrix} \mathbf{v}_1 & \cdots & \mathbf{v}_n \end{bmatrix}$ are orthogonal matrices whose columns are the set of left and right singular vectors respectively, and $\mathbf{\Sigma}$ is a diagonal matrix whose entries $\mathbf{\Sigma}_{ii} = \sigma_i$ are the singular values ordered descendingly $\sigma_1 \geq \cdots \geq \sigma_n \geq 0$.
The rank-$k$ truncated SVD of a matrix is defined as
\begin{equation}
\mathrm{svd}_k\left( \mathbf{A} \right) = \mathbf{U}_k \mathbf{\Sigma}_k \mathbf{V}_k^T
\end{equation}
where $\mathbf{U}_k = \begin{bmatrix} \mathbf{u}_1 & \cdots & \mathbf{u}_k \end{bmatrix}$ and $\mathbf{V}_k = \begin{bmatrix} \mathbf{v}_1 & \cdots & \mathbf{v}_k \end{bmatrix}$ contain the first $k$ left and right singular vectors respectively, and $\mathbf{\Sigma}_k = \mathrm{diag}(\sigma_1, \cdots, \sigma_k)$.
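As a reference point for the algorithms discussed below, the truncated SVD can be computed from a dense SVD (the use of numpy here is an implementation choice of ours, not a prescription of the paper):

```python
import numpy as np

def svd_k(A, k):
    """Rank-k truncated SVD of A: returns U_k, the top-k singular values, V_k."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k], s[:k], Vt[:k, :].T

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 30))
U_k, s_k, V_k = svd_k(A, 5)
A_k = U_k @ np.diag(s_k) @ V_k.T

# Eckart-Young: the spectral-norm error of the best rank-k approximation
# equals sigma_{k+1}
err = np.linalg.norm(A - A_k, 2)
sigma_next = np.linalg.svd(A, compute_uv=False)[5]
print(err, sigma_next)
```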
The $i$th singular value of an arbitrary matrix $\mathbf{M}$ is denoted by $\sigma_i(\mathbf{M})$, or simply $\sigma_i$ when the matrix in question is clear from context.
The $p$th degree Chebyshev polynomial is defined by the recurrence
\begin{align}
T_0(x) &\equiv 1 \\
T_1(x) &\equiv x \\
T_p(x) &\equiv 2x T_{p-1}(x) - T_{p-2}(x)
\end{align}
Alternatively, they may be expressed as
\begin{equation}
T_p(x) = \frac{1}{2} \left( \left(x + \sqrt{x^2 - 1}\right)^{p} + \left(x + \sqrt{x^2 - 1} \right)^{-p} \right)
\end{equation}
for $\vert x \vert > 1$, and estimated as
\begin{equation}
T_p(1+\epsilon) \approx \frac{1}{2} \left( 1 + \epsilon + \sqrt{2 \epsilon} \right)^p
\end{equation}
for $p$ large and $\epsilon$ small.
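A quick numerical cross-check of the three-term recurrence against the closed form for $|x|>1$ (the sample points below are arbitrary):

```python
import numpy as np

def cheb_recurrence(p, x):
    # T_0 = 1, T_1 = x, T_p = 2x T_{p-1} - T_{p-2}
    t_prev, t_cur = np.ones_like(x), np.asarray(x, dtype=float)
    if p == 0:
        return t_prev
    for _ in range(p - 1):
        t_prev, t_cur = t_cur, 2 * x * t_cur - t_prev
    return t_cur

def cheb_closed(p, x):
    # closed form, valid for |x| > 1
    y = x + np.sqrt(x**2 - 1)
    return 0.5 * (y**p + y**(-p))

x = np.array([1.1, 1.5, 2.0])
print(cheb_recurrence(6, x))
print(cheb_closed(6, x))     # matches the recurrence values
```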
\subsection{The Algorithm}
The randomized block Lanczos algorithm is a straightforward combination of the classical block Lanczos algorithm \cite{golub1977block} with the added element of a randomized starting matrix $\mathbf{V} = \mathbf{A} \mathbf{\Omega}$.
The pseudocode for this algorithm is outlined in Algorithm~\ref{alg:blk_lanczos}. Of the parameters of the algorithm, $k$ (target rank) is problem dependent, while $b$ (block size) and $q$ (number of iterations) are chosen by the user to control the quality and computational cost of the approximation. The algorithm requires the choices of $b$ and $q$ to satisfy $qb \geq k$, to ensure that the Krylov subspace is at least $k$-dimensional.
\begin{algorithm}
\caption{randomized block Lanczos algorithm pseudocode}
\label{alg:blk_lanczos}
\begin{algorithmic}[1]
\Require
$\begin{array}{ll}
\mathbf{A} \in \mathbb{R}^{m \times n} & \\
\mathbf{\Omega} \in \mathbb{R}^{n \times b} & \textrm{, random Gaussian matrix} \\
k & \textrm{, target rank} \\
b & \textrm{, block size} \\
q & \textrm{, number of Lanczos iterations}
\end{array}$
\Ensure $\begin{array}{ll}
\mathbf{B}_k \in \mathbb{R}^{m \times n} & \textrm{, a rank-$k$ approximation to $\mathbf{A}$}
\end{array}$
\State Form the block column Krylov subspace matrix \hspace{1cm} $\mathbf{K} = \begin{bmatrix} \mathbf{A} \mathbf{\Omega} & (\mathbf{A}\mathbf{A}^T) \mathbf{A} \mathbf{\Omega} & \cdots & (\mathbf{A}\mathbf{A}^T)^{q}\mathbf{A}\mathbf{\Omega} \end{bmatrix}$.
\State Compute an orthonormal basis $\mathbf{Q}$ for the column span of $\mathbf{K}$, using e.g. $\mathbf{Q}\mathbf{R} \leftarrow \mathrm{qr}(\mathbf{K})$.
\State Project $\mathbf{A}$ onto the Krylov subspace by computing \hspace{1cm} $\mathbf{B} = \mathbf{Q} \mathbf{Q}^T \mathbf{A}$.
\State Compute $k$-truncated SVD $\mathbf{B}_k = \mathrm{svd}_k \left( \mathbf{B} \right) = \mathrm{svd}_k \left( \mathbf{Q}\mathbf{Q}^T \mathbf{A} \right) = \mathbf{Q} \cdot \mathrm{svd}_k \left( \mathbf{Q}^T \mathbf{A} \right)$.
\State Return $\mathbf{B}_k$.
\end{algorithmic}
\end{algorithm}
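A minimal numpy sketch of Algorithm~\ref{alg:blk_lanczos} in its naive mathematical form (variable names and the synthetic test matrix with a decaying spectrum are ours; as discussed next, a practical implementation needs the three-term recurrence and reorthogonalization):

```python
import numpy as np

def randomized_block_lanczos(A, k, b, q, seed=0):
    """Naive sketch of Algorithm 1 (exact-arithmetic view, no reorthogonalization)."""
    assert q * b >= k, "Krylov subspace must be at least k-dimensional"
    n = A.shape[1]
    Omega = np.random.default_rng(seed).standard_normal((n, b))
    # block Krylov matrix K = [A*Omega, (A A^T) A*Omega, ..., (A A^T)^q A*Omega]
    block = A @ Omega
    blocks = [block]
    for _ in range(q):
        block = A @ (A.T @ block)
        blocks.append(block)
    Q, _ = np.linalg.qr(np.hstack(blocks))       # orthonormal basis for span(K)
    # B_k = Q * svd_k(Q^T A)
    U, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U[:, :k]) @ np.diag(s[:k]) @ Vt[:k, :]

# synthetic test matrix with an exponentially decaying spectrum
rng = np.random.default_rng(1)
G = rng.standard_normal((100, 80))
U, _, Vt = np.linalg.svd(G, full_matrices=False)
A = U @ np.diag(np.exp(-np.arange(80) / 5.0)) @ Vt

B_k = randomized_block_lanczos(A, k=10, b=5, q=4)   # note b < k here
s_approx = np.linalg.svd(B_k, compute_uv=False)[:10]
s_exact = np.linalg.svd(A, compute_uv=False)
print(np.max(np.abs(s_approx - s_exact[:10]) / s_exact[:10]))
```

Note that the sketch works even with block size $b$ smaller than the target rank $k$, as long as $qb \geq k$, which is exactly the regime the present analysis covers.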
We present the algorithm pseudocode in this form in order to highlight the mathematical ideas at the core of this algorithm. It is well known that a naive implementation of any Lanczos algorithm is plagued by loss of orthogonality of the Lanczos vectors due to roundoff errors \cite{paige_article}. A practical implementation of Algorithm~\ref{alg:blk_lanczos} should involve, at the very least, a reorganization of the computation to use the three-term recurrence and bidiagonalization \cite{golub}, and reorthogonalization of the Lanczos vectors at each step using one of the numerous schemes that have been proposed \cite{golub,parlett,simon}.
\subsection{Previous Work}
Historically, the classical Lanczos algorithm was developed as an eigenvalue algorithm for symmetric matrices. Its convergence analysis focused on theorems concerning the approximation quality of the approximant's eigenvalues as a function of $k$, the target rank. The analysis relied heavily on the analysis of the $k$-dimensional Krylov subspace and the choice of the associated degree-$k$ Chebyshev polynomial. Classical results in this line of inquiry include those by Kaniel \cite{kaniel1966estimates}, Paige \cite{paige1971computation}, Underwood \cite{underwood1975iterative}, and Saad \cite{saad_article}.
More recently, while there has been much work on the analysis of randomized algorithms, such efforts have focused mostly on RBL's simpler cousins, such as randomized power iteration and randomized subspace iteration \cite{halko2011finding,gu2015subspace}. The exception is \cite{musco2015randomized}. To our knowledge, this is one of the few works that provide a convergence analysis for randomized block Lanczos and the first that gives ``gap''-independent theoretical bounds for this algorithm. The analysis found therein is restricted to the case where the block size $b$ is chosen to be at least $k$, the desired target rank. Our theoretical analysis gives a more generally applicable convergence bound, encompassing both the case $1 \leq b < k$ and the case $b \geq k$. In the latter case, our theoretical results coincide with those in \cite{musco2015randomized}. In the former case, we show that rapid convergence is assured for any block size $b$ larger than the largest singular value cluster size. We draw attention to this distinction in choosing the block size parameter $b$: in our numerical experiments, we show that smaller choices of $b$ are generally favored.
Our current work is based partially on the analysis found in \cite{gu2015subspace}, which established aggressive multiplicative convergence bounds for the randomized subspace iteration algorithm, for both singular values and normed (Frobenius, spectral) matrix convergence. These bounds depend on both the singular value gap and the number of iterations taken by the algorithm: the former is a property of the matrix in question, and the latter is proportional to the computational complexity of the algorithm. The analysis presented in that work is linear algebraic in nature, drawing on deterministic matrix analysis, as well as expectation bounds on random Gaussian matrices and their concentration of measure characteristics. Our current work employs similar methods and achieves bounds of a similar form. While the details differ, core ideas, such as creating an artificial ``gap'' in the spectrum and choosing an opportune orthonormal basis for the analysis, are the same.
\section{Theoretical Results} \label{sec:theoretical_results}
\subsection{Problem Statement}
Given an arbitrary matrix $\mathbf{A} \in \mathbb{R}^{m \times n}$ and a target rank $k \leq \mathrm{rank}(\mathbf{A})$, the goal of a low-rank matrix approximation algorithm is to compute a matrix $\mathbf{B}_k \in \mathbb{R}^{m \times n}$ of rank at most $k$ that approximates $\mathbf{A}$ well.
There are many ways to ask and answer the question, ``how good of an approximation is $\mathbf{B}_k$ to the original $\mathbf{A}$?'' In particular, for various low-rank approximation algorithms, the answer has been provided in terms of normed approximation error \cite{halko2011finding,gu2015subspace,musco2015randomized,xiao2016spectrum}, singular subspace error \cite{chen2009lanczos,li2015convergence}, and singular value error \cite{gu2015subspace,saad_article}.
In this paper, we focus on the singular value error for the randomized block Lanczos algorithm. As $\mathbf{B}$ is an orthogonal projection of $\mathbf{A}$ in Alg.~\ref{alg:blk_lanczos}, by the Cauchy interlacing theorem for singular values, we immediately have the upper bound
\begin{equation}
\sigma_j \geq \sigma_j(\mathbf{B}_k)
\end{equation}
for $j = 1, \cdots, k$.
The optimal value is achieved, of course, by the rank-$k$ truncated SVD of $\mathbf{A}$, which reproduces the leading singular values exactly:
\begin{equation}
\sigma_j(\mathrm{svd}_k(\mathbf{A})) = \sigma_j
\end{equation}
for $j = 1, \cdots, k$.
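This optimality of the truncated SVD is easy to confirm numerically; the following sketch (on a random Gaussian test matrix of our own choosing) verifies that the rank-$k$ truncation reproduces the leading singular values exactly and zeroes out the rest:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 25))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 7
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # rank-k truncated SVD of A
s_k = np.linalg.svd(A_k, compute_uv=False)    # spectrum of the approximant
```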
We will show that the randomized block Lanczos algorithm provides competitive accuracy, producing singular value estimates that are at least some fraction of the optimum:
\begin{equation}
\sigma_j \geq \sigma_j(\mathbf{B}_k) \geq \frac{\sigma_j}{\sqrt{1 + \{\text{some convergence factor} \}}}
\end{equation}
where the convergence factor tends to $0$ as the algorithm iterates.
\subsection{Key Results}
Our convergence analysis will show that if the randomized block Lanczos algorithm converges, then the $k$ desired singular values of the approximation $\mathbf{B}_k$ converge to the corresponding true singular values of $\mathbf{A}$ exponentially in the number of iterations $q$. Moreover, convergence occurs as long as the block size $b$ is chosen to be larger than the maximum cluster size for the $k$ relevant singular values.
We present our main results here and delay their proofs to Subsection~\ref{ssec:analysis}. Our main theorem is as follows.
\begin{theorem}
\label{thm:svconvergence}
Let $\mathbf{B}_k$ be the matrix returned by Alg.~\ref{alg:blk_lanczos}. Assume that $\mathbf{\Omega}$ is chosen such that the two conditions in Remark~\ref{lst:conditions} hold. For any choices of $r, s \geq 0$, and any parameter choice $b, q$ satisfying $k+r = (q-p)b \geq k$, for $j = 1, \cdots, k$,
\begin{equation}
\sigma_j \geq \sigma_j (\mathbf{B}_k) \geq \frac{\sigma_{j+s}}{\sqrt{1 + \mathcal{C}^2 T_{2p+1}^{-2} \left( 1 + 2 \cdot \frac{\sigma_j - \sigma_{j+s+r+1}}{\sigma_{j+s+r+1}} \right) }} \label{eqn:svconvergence_withs}
\end{equation}
where $\mathcal{C}$ is a constant that is independent of $q$.
\end{theorem}
This inequality shows that for all valid choices of parameters $b, q$, the convergence of the approximate singular values is governed by the growth of the Chebyshev polynomial term
\begin{equation}
T_{2p+1} \left( 1 + 2 \cdot \frac{\sigma_j - \sigma_{j+s+r+1}}{\sigma_{j+s+r+1}} \right)
\end{equation}
with the bound holding across all choices of the analysis parameters $s, r$.
Theorem~\ref{thm:svconvergence} admits the following corollaries about two special choices for the block size parameter $b$, where the constants in each case can be expressed in an algebraically closed form.
\begin{corollary}[Special case: $b = 1$]
For any choices of $r, s \geq 0$ satisfying $k+r = (q-p) \geq k$, for $j = 1, \cdots, k$,
\begin{equation}
\sigma_j \geq \sigma_j \left(\mathbf{B}_k \right) \geq \frac{\sigma_{j+s}}{\sqrt{1 + \mathcal{C}_{b=1} T_{2p+1}^{-2} \left(1 + 2 \cdot \frac{\sigma_j - \sigma_{j+s+r+1}}{\sigma_{j+s+r+1}} \right)}}
\end{equation}
where
\begin{equation}
\mathcal{C}_{b = 1} = \left( \max_{\substack{1 \leq u \leq k \\ j+r+1 \leq v \leq n}} \frac{\widehat{\omega}_v}{\widehat{\omega}_u} \right)^2 \cdot \left( \sum_{u=1}^j \sum_{v = j+r+1}^{n} \prod_{\substack{t=1 \\ t \neq u}}^{j+r} \left( \frac{\sigma_v^2 - \sigma_t^2}{\sigma_u^2 - \sigma_t^2} \right)^2 \right)
\end{equation}
is a constant independent of $q$.
\end{corollary}
\begin{corollary}[Special case: $b \geq k + r$] \label{cor:bkr}
For any choices of $r, s \geq 0$, for $j = 1, \cdots, k$,
\begin{equation}
\sigma_j \geq \sigma_j\left( \mathbf{B}_k \right) \geq \frac{\sigma_{j+s}}{\sqrt{1 + \mathcal{C}_{b\geq k+r}^2 T_{2q+1}^{-2} \left( 1 + 2 \cdot \frac{\sigma_j - \sigma_{j+s+r+1}}{\sigma_{j+s+r+1}} \right)}}
\end{equation}
where
\begin{equation}
\mathcal{C}_{b\geq k+r} = \left\Vert \widetilde{\mathbf{\Omega}}_{41} \right\Vert_2 \left\Vert \widetilde{\mathbf{\Omega}}^{-1}_{11} \right\Vert_2
\end{equation}
is a constant independent of both $q$, the iteration parameter, and $\mathbf{\Sigma}$, the spectrum of $\mathbf{A}$.
\end{corollary}
Choosing the analysis parameters $r, s$ optimally, we arrive at a result that coincides asymptotically with the conclusions reached in \cite{musco2015randomized}.
\begin{theorem}
Let $\mathbf{B}_k$ be the matrix returned by running Alg.~\ref{alg:blk_lanczos} with the block size $b = k$. Assume $\mathbf{\Omega}$ is chosen such that $\widetilde{\mathbf{\Omega}}_{11}$ is nonsingular. Then, for $j = 1, \cdots, k$
\begin{equation}
\sigma_j \geq \sigma_j\left(\mathbf{B}_k \right) \geq \sigma_j e^{\mathcal{O}\left( -\frac{\log (\mathcal{A}(4q+2))^2}{(4q+2)^2} \right)}
\end{equation}
where $\mathcal{A} = 2\mathcal{C}_{b\geq k+r}$ is a constant independent of $q$.
\end{theorem}
Finally, from Theorem~\ref{thm:svconvergence} we may derive the following result, which states that for certain matrices with singular spectrum rapidly decaying to $0$, the RBL algorithm converges superlinearly.
\begin{theorem}
\label{thm:superlinear}
Assume the singular value spectrum of $\mathbf{A}$ decays such that $\sigma_i \rightarrow 0$. Let $\mathbf{B}_k$ be the rank $k$ approximation of $\mathbf{A}$ returned by Alg.~\ref{alg:blk_lanczos}. Assume additionally that the hypothesis and notation of Theorem~\ref{thm:svconvergence} hold. Then
\begin{equation}
\sigma_j (\mathbf{B}_k) \rightarrow \sigma_j
\end{equation}
superlinearly in $q$, the number of iterations.
\end{theorem}
This theorem validates long-observed empirical behavior of block Lanczos algorithms. In Section~\ref{sec:numerical_experiments}, we show two examples of typical data matrices whose spectra fall under this regime, along with the expected superlinear convergence behavior.
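As an illustrative sketch of this behavior (on a synthetic diagonal matrix of our own choosing, not the data matrices of Section~\ref{sec:numerical_experiments}), a naive NumPy transcription of Alg.~\ref{alg:blk_lanczos} shows the top-$k$ singular value error shrinking monotonically, and rapidly, in $q$ when the spectrum decays polynomially:

```python
import numpy as np

def sv_error(A, k, b, q, seed=0):
    """Max top-k singular value error of a naive randomized block Lanczos run."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], b))
    blocks = [A @ Omega]                      # K = [A W, (A A^T) A W, ...]
    for _ in range(q):
        blocks.append(A @ (A.T @ blocks[-1]))
    Q, _ = np.linalg.qr(np.hstack(blocks))
    s_hat = np.linalg.svd(Q.T @ A, compute_uv=False)[:k]
    s_true = np.linalg.svd(A, compute_uv=False)[:k]
    return np.max(s_true - s_hat)

# Polynomially decaying test spectrum sigma_i = 1/i^2.
A = np.diag(1.0 / np.arange(1.0, 101.0) ** 2)
errs = [sv_error(A, k=5, b=5, q=q) for q in range(1, 5)]
```

Because the same starting block yields nested Krylov subspaces as $q$ grows, the error sequence is nonincreasing by construction; the polynomial decay makes it shrink quickly.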
\subsection{Intuition}
Our analysis makes use of the following three ideas:
\begin{figure}
\centering
\caption{Chebyshev polynomials $T_n(x)$ grow much faster than monomials of the same degree $M_n(x) = x^n$ in the interval $\vert x \vert > 1$.}
\label{fig:chebyshev}
\includegraphics[scale=0.6]{chebyshev}
\end{figure}
\begin{figure}
\centering
\caption{Auxiliary analysis parameters $r, s$ are adjusted to create a sufficient singular spectrum ``gap'' to drive convergence.}
\label{fig:params}
\begin{tikzpicture}[scale=0.7]
\draw [thick, -] (0,0) -- (12,0);
\draw [thick, -] (1.44, -0.2) -- (1.44, 0.2);
\node[align=center] at (1.44, -0.5) {0};
\draw [thick, -] (2.88, -0.2) -- (2.88, 0.2);
\node[align=center] at (2.88, -0.5) {$\sigma_n$};
\draw [thick, -] (6.00, -0.2) -- (6.00, 0.2);
\node[align=center] at (6.00, -0.5) {$\sigma_{k+s+r+1}$};
\draw [thick, -] (8.40, -0.2) -- (8.40, 0.2);
\node[align=center] at (8.40, +0.5) {$\sigma_{k+s}$};
\draw [thick, -] (8.64, -0.2) -- (8.64, 0.2);
\node[align=center] at (8.64, -0.5) {$\sigma_k$};
\draw [thick, -] (10.08, -0.2) -- (10.08, 0.2);
\node[align=center] at (10.08, -0.5) {$\sigma_1$};
\draw [thin, -] (8.60, -0.1) -- (8.60, 0.1);
\draw [thin, -] (8.55, -0.1) -- (8.55, 0.1);
\draw [thin, -] (8.50, -0.1) -- (8.50, 0.1);
\draw [thin, -] (8.45, -0.1) -- (8.45, 0.1);
\draw[decoration={brace,mirror,raise=5pt},decorate] (6.00,-0.7) -- (8.64,-0.7);
\node[align=center] at (7.44, -1.5) {``gap''};
\end{tikzpicture}
\end{figure}
\begin{enumerate}
\item the growth behavior of Chebyshev polynomials, a traditional ingredient in the analysis of Lanczos iteration methods, (Fig.~\ref{fig:chebyshev})
\item the choice of a clever orthonormal basis for the Krylov subspace, an idea adapted from \cite{gu2015subspace},
\item the creation of a spectrum ``gap'', by separating the spectrum of $\mathbf{A}$ into those singular values that are ``close'' to $\sigma_k$, and those that are sufficiently smaller in magnitude, using auxiliary analysis parameters $r,s$. (Fig.~\ref{fig:params})
\end{enumerate}
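The first of these ingredients is easy to quantify. For instance, using NumPy's Chebyshev utilities (degree and evaluation point chosen arbitrarily for illustration), the degree-$9$ Chebyshev polynomial already dwarfs the monomial of the same degree just outside $[-1, 1]$:

```python
import numpy as np
from numpy.polynomial import chebyshev

x, n = 1.05, 9
T_n = chebyshev.chebval(x, [0] * n + [1])   # coefficient vector selects T_9
M_n = x ** n                                # monomial of the same degree
# For x > 1, T_n(x) = cosh(n * arccosh(x)), which grows like (x + sqrt(x^2 - 1))^n.
```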
\subsection{Analysis} \label{ssec:analysis}
We are interested in the column span of the Krylov subspace matrix $\mathbf{K}$. Let the singular value decomposition of $\mathbf{A}$ be denoted as $\mathbf{A} = \mathbf{U} \mathbf{\Sigma} \mathbf{V}^T$. Then, we may write
\begin{align}
\mathbf{K} &= \begin{bmatrix} \mathbf{A} \mathbf{\Omega} & (\mathbf{A} \mathbf{A}^T) \mathbf{A} \mathbf{\Omega} & \cdots & (\mathbf{A} \mathbf{A}^T)^q \mathbf{A} \mathbf{\Omega} \end{bmatrix} \nonumber \\
&= \begin{bmatrix} \mathbf{U} \mathbf{\Sigma} \mathbf{V}^T \mathbf{\Omega} & \mathbf{U} \mathbf{\Sigma}^{2+1} \mathbf{V}^T \mathbf{\Omega} & \cdots & \mathbf{U} \mathbf{\Sigma}^{2q+1} \mathbf{V}^T \mathbf{\Omega} \end{bmatrix} \nonumber \\
&= \mathbf{U} \mathbf{\Sigma} \begin{bmatrix} \widehat{\mathbf{\Omega}} & \widehat{\mathbf{\Sigma}} \widehat{\mathbf{\Omega}} & \cdots & \widehat{\mathbf{\Sigma}}^q \widehat{\mathbf{\Omega}} \end{bmatrix}
\end{align}
where for notational convenience we have defined the quantities $\widehat{\mathbf{\Omega}} \equiv \mathbf{V}^T \mathbf{\Omega}$ and $\widehat{\mathbf{\Sigma}} \equiv \mathbf{\Sigma}^2$.
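This factorization of $\mathbf{K}$ can be sanity-checked numerically; the small sketch below (dimensions arbitrary) rebuilds $\mathbf{K}$ from $\mathbf{U} \mathbf{\Sigma} \begin{bmatrix} \widehat{\mathbf{\Omega}} & \widehat{\mathbf{\Sigma}} \widehat{\mathbf{\Omega}} & \cdots \end{bmatrix}$ and compares it against the directly computed Krylov matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, b, q = 12, 8, 2, 3
A = rng.standard_normal((m, n))
Omega = rng.standard_normal((n, b))

# K built directly: [A W, (A A^T) A W, ..., (A A^T)^q A W]
blocks = [A @ Omega]
for _ in range(q):
    blocks.append(A @ (A.T @ blocks[-1]))
K = np.hstack(blocks)

# K rebuilt via the SVD factorization K = U S [W_hat, S_hat W_hat, ..., S_hat^q W_hat]
U, s, Vt = np.linalg.svd(A, full_matrices=False)
W_hat = Vt @ Omega          # \widehat{Omega} = V^T Omega
S_hat = np.diag(s ** 2)     # \widehat{Sigma} = Sigma^2
cols, W = [], W_hat
for _ in range(q + 1):
    cols.append(W)
    W = S_hat @ W
K2 = U @ np.diag(s) @ np.hstack(cols)
```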
We ``factor out'' the component of the Krylov subspace that drives convergence from the component that is related to the initial starting subspace but independent of $q$. To this end, define for $0 \leq p \leq q$,
\begin{equation}
\mathbf{K}_p \equiv \mathbf{U} T_{2p+1} ( \mathbf{\Sigma} ) \begin{bmatrix} \widehat{\mathbf{\Omega}} & \widehat{\mathbf{\Sigma}} \widehat{\mathbf{\Omega}} & \cdots & \widehat{\mathbf{\Sigma}}^{q-p} \widehat{\mathbf{\Omega}} \end{bmatrix} \label{eqn:kp}
\end{equation}
The matrices $\mathbf{K}$ and $\mathbf{K}_p$ are related as
\begin{equation}
\mathrm{span}\left\{ \mathbf{K}_p \right\} \subseteq \mathrm{span} \left\{ \mathbf{K} \right\}
\end{equation}
This inclusion holds because $T_{2p+1}$ is an odd polynomial of degree $2p+1$, so each block column of $\mathbf{K}_p$ is a linear combination of the block columns of $\mathbf{K}$. In light of this, and since Step 3 of Alg.~\ref{alg:blk_lanczos} is a projection, we are justified in working with $\mathbf{K}_p$ in our analysis instead of the more complicated $\mathbf{K}$.
Next, we multiply $\mathbf{K}_p$ by a specially constructed, full rank matrix $\mathbf{X}$. This operation will preserve the subspace spanned by the columns of $\mathbf{K}_p$, but align, as much as possible, the first $k$ columns to the direction of the leading $k$ singular vectors.
For all $0 \leq p \leq q$, let
\begin{equation}
\mathbf{V}_p \equiv \begin{bmatrix} \widehat{\mathbf{\Omega}} & \widehat{\mathbf{\Sigma}} \widehat{\mathbf{\Omega}} & \cdots & \widehat{\mathbf{\Sigma}}^{q-p} \widehat{\mathbf{\Omega}} \end{bmatrix}
\end{equation}
denote the generalized Vandermonde matrix from Eqn.~\ref{eqn:kp} and partition this matrix as follows:
\begin{equation}
\mathbf{V}_p = \begin{bmatrix} \mathbf{V}_{11} & \mathbf{V}_{12} \\ \mathbf{V}_{21} & \mathbf{V}_{22} \\ \mathbf{V}_{31} & \mathbf{V}_{32} \\ \mathbf{V}_{41} & \mathbf{V}_{42} \end{bmatrix} \label{eqn:vp}
\end{equation}
where the blocks in the first dimension are sized $k, s, r, t = n - (k+s+r)$ and the blocks in the second dimension are sized $k, r$. Intuitively, $s$ is used to handle duplicate or clustered singular values, while $r$ is used to create the ``gap'' that drives convergence (Fig.~\ref{fig:params}). With this partition, we view the convergence behavior as an accentuation of the ``gap'' by the appropriate Chebyshev polynomial.
We show the existence of (at least one) special non-singular $\mathbf{X} \in \mathbb{R}^{(k+r) \times (k+r)}$ such that
\begin{align}
\mathbf{K}_p \mathbf{X} &= \mathbf{U} T_{2p+1}(\mathbf{\Sigma}) \mathbf{V}_p \mathbf{X} \\
&= \mathbf{U} \begin{bmatrix} \mathbf{Q}_{11} & \widehat{\mathbf{V}}_{12} \\ \mathbf{Q}_{21} & \widehat{\mathbf{V}}_{22} \\ \mathbf{0} & \widehat{\mathbf{V}}_{32} \\ \mathbf{H} & \widehat{\mathbf{V}}_{42} \end{bmatrix} \label{eqn:gapmatrix}
\end{align}
with $\begin{bmatrix} \mathbf{Q}_{11} \\ \mathbf{Q}_{21} \end{bmatrix}$ a column orthogonal matrix. Notice the ``gap'' in the $(3,1)$ block of size $r$ is created by using $\mathbf{X}$ to align the columns of $\mathbf{K}_p$.
We explicitly construct such an $\mathbf{X}$. Partition
\begin{align}
\mathbf{X} &= \begin{bmatrix} \mathbf{X}_{11} & \mathbf{X}_{12} \\ \mathbf{X}_{21} & \mathbf{X}_{22} \end{bmatrix} \label{eqn:xblocked} \\
\mathbf{\Sigma} &= \begin{bmatrix} \mathbf{\Sigma}_1 & & & \\ & \mathbf{\Sigma}_2 & & \\ & & \mathbf{\Sigma}_3 & \\ & & & \mathbf{\Sigma}_4 \end{bmatrix}
\end{align}
where each dimension of $\mathbf{X}$ is sized $k, r$, and each dimension of $\mathbf{\Sigma}$ is sized $k, s, r, t=n-(k+s+r)$. Then,
\begin{align}
T_{2p+1}(\mathbf{\Sigma}) \mathbf{V}_p \mathbf{X} \equiv \left[ \begin{array}{c|c} \begin{pmatrix} \widehat{\mathbf{V}}_{11} \\ \widehat{\mathbf{V}}_{21} \end{pmatrix} & \cdots \\ \hline \widehat{\mathbf{V}}_{31} & \cdots \\ \hline \widehat{\mathbf{V}}_{41} & \cdots \end{array} \right]
\end{align}
where
\begin{align*}
\begin{pmatrix} \widehat{\mathbf{V}}_{11} \\ \widehat{\mathbf{V}}_{21} \end{pmatrix} &= \begin{pmatrix} T_{2p+1}(\mathbf{\Sigma}_1) & \\ & T_{2p+1}(\mathbf{\Sigma}_2) \end{pmatrix} \begin{pmatrix} \mathbf{V}_{11} & \mathbf{V}_{12} \\ \mathbf{V}_{21} & \mathbf{V}_{22} \end{pmatrix} \begin{pmatrix} \mathbf{X}_{11} \\ \mathbf{X}_{21} \end{pmatrix} \\
\widehat{\mathbf{V}}_{31} &= T_{2p+1}(\mathbf{\Sigma}_3) (\mathbf{V}_{31} \mathbf{X}_{11} + \mathbf{V}_{32} \mathbf{X}_{21}) \\
\widehat{\mathbf{V}}_{41} &= T_{2p+1}(\mathbf{\Sigma}_4) (\mathbf{V}_{41} \mathbf{X}_{11} + \mathbf{V}_{42} \mathbf{X}_{21})
\end{align*}
Setting
\begin{equation}
\mathbf{X}_{21} = - \mathbf{V}_{32}^{-1} \mathbf{V}_{31} \mathbf{X}_{11} \label{eqn:x21def}
\end{equation}
ensures that the $r \times k$ block $\widehat{\mathbf{V}}_{31} = \mathbf{0}$, and causes the $(k+s) \times k$ block above it to become
\begin{align*}
\begin{pmatrix} \widehat{\mathbf{V}}_{11} \\ \widehat{\mathbf{V}}_{21} \end{pmatrix} = \begin{bmatrix} T_{2p+1}(\mathbf{\Sigma}_1) & \\ & T_{2p+1}(\mathbf{\Sigma}_2) \end{bmatrix} \begin{bmatrix} \mathbf{V}_{11} - \mathbf{V}_{12} \mathbf{V}_{32}^{-1} \mathbf{V}_{31} \\ \mathbf{V}_{21} - \mathbf{V}_{22} \mathbf{V}_{32}^{-1} \mathbf{V}_{31} \end{bmatrix} \mathbf{X}_{11}
\end{align*}
We can then take the QR factorization
\begin{equation}
\widetilde{\mathbf{Q}}\widetilde{\mathbf{R}} = \begin{bmatrix} T_{2p+1}(\mathbf{\Sigma}_1) & \\ & T_{2p+1}(\mathbf{\Sigma}_2) \end{bmatrix} \begin{bmatrix} \mathbf{V}_{11} - \mathbf{V}_{12} \mathbf{V}_{32}^{-1} \mathbf{V}_{31} \\ \mathbf{V}_{21} - \mathbf{V}_{22} \mathbf{V}_{32}^{-1} \mathbf{V}_{31} \end{bmatrix} \label{eqn:rtildedef}
\end{equation}
and set
\begin{equation}
\mathbf{X}_{11} = \widetilde{\mathbf{R}}^{-1} \label{eqn:x11def}
\end{equation}
This ensures that
\begin{equation}
\begin{pmatrix} \widehat{\mathbf{V}}_{11} \\ \widehat{\mathbf{V}}_{21} \end{pmatrix}^T \begin{pmatrix} \widehat{\mathbf{V}}_{11} \\ \widehat{\mathbf{V}}_{21} \end{pmatrix} = \left( \widetilde{\mathbf{Q}}\widetilde{\mathbf{R}}\widetilde{\mathbf{R}}^{-1} \right)^T \left( \widetilde{\mathbf{Q}}\widetilde{\mathbf{R}}\widetilde{\mathbf{R}}^{-1} \right) = \mathbf{I}
\end{equation}
Let Eqn.~(\ref{eqn:x11def}) and Eqn.~(\ref{eqn:x21def}) define $\mathbf{X}_{11}$ and $\mathbf{X}_{21}$ respectively as
\begin{equation}
\begin{bmatrix} \mathbf{X}_{11} \\ \mathbf{X}_{21} \end{bmatrix} = \begin{bmatrix} \mathbf{I} \\ -\mathbf{V}_{32}^{-1} \mathbf{V}_{31} \end{bmatrix} \widetilde{\mathbf{R}}^{-1} \label{eqn:xdefcol2}
\end{equation}
We specify
\begin{equation}
\begin{bmatrix} \mathbf{X}_{12} \\ \mathbf{X}_{22} \end{bmatrix} \equiv \begin{bmatrix} \mathbf{X}_{11} \\ \mathbf{X}_{21} \end{bmatrix}^{\perp} \label{eqn:xdefcol1}
\end{equation}
to provide a complete description of $\mathbf{X}$ which satisfies Eqn.~\ref{eqn:gapmatrix}.
\begin{remark}
In order for the above derivation and thus Eqn.~(\ref{eqn:xdefcol1}) and Eqn.~(\ref{eqn:xdefcol2}) to be valid, the following conditions must hold: $\mathbf{\Omega}$ is chosen to allow
\begin{itemize}
\item $\mathbf{V}_{32}$ to be non-singular and thus invertible,
\item $\mathbf{V}_{11}-\mathbf{V}_{12} \mathbf{V}_{32}^{-1} \mathbf{V}_{31}$ to be non-singular and thus $\widetilde{\mathbf{R}}$ to be invertible. Note that this expression is the Schur complement of the $(k+r) \times (k+r)$ matrix $\begin{bmatrix} \mathbf{V}_{11} & \mathbf{V}_{12} \\ \mathbf{V}_{31} & \mathbf{V}_{32} \end{bmatrix}$ with respect to the $\mathbf{V}_{32}$ block.
\end{itemize}
\label{lst:conditions}
\end{remark}
We present a first result on a lower bound for the singular value of $\mathbf{B}_k$.
\begin{lemma}
\label{lem:svconvlemma}
Let $\mathbf{B}_k$ be the matrix returned by Alg.~\ref{alg:blk_lanczos}, let $\mathbf{H}$ be as defined in Eqn.~(\ref{eqn:gapmatrix}), and assume that the two conditions in Remark~\ref{lst:conditions} hold. Then,
\begin{equation}
\sigma_k(\mathbf{B}_k) \geq \frac{\sigma_{k+s}}{\sqrt{1 + \Vert \mathbf{H} \Vert_2^2}}
\end{equation}
\end{lemma}
\begin{proof}
The matrix returned by Alg.~\ref{alg:blk_lanczos} is the $k$-truncated SVD of $\mathbf{Q}\mathbf{Q}^T \mathbf{A}$, where the columns of $\mathbf{Q}$ are an orthonormal basis for the column span of $\mathbf{K}$. By construction, it follows that
\begin{equation}
\sigma_k(\mathbf{B}_k) \geq \sigma_k \left( \widehat{\mathbf{Q}}_p \widehat{\mathbf{Q}}_p^T \mathbf{A} \right) \label{eqn:sveqn1}
\end{equation}
where $\widehat{\mathbf{Q}}_p$ contains columns that form an orthonormal basis for the column span of $\mathbf{K}_p \mathbf{X}$.
In particular, let $\widehat{\mathbf{Q}}_p \widehat{\mathbf{R}}_p$ be the QR factorization of $\mathbf{K}_p \mathbf{X}$, partitioned as follows:
\begin{equation}
\mathbf{K}_p \mathbf{X} = \widehat{\mathbf{Q}}_p \widehat{\mathbf{R}}_p = \begin{bmatrix} \widehat{\mathbf{Q}}_1 & \widehat{\mathbf{Q}}_2 \end{bmatrix} \begin{bmatrix} \widehat{\mathbf{R}}_{11} & \widehat{\mathbf{R}}_{12} \\ & \widehat{\mathbf{R}}_{22} \end{bmatrix} \label{eqn:kpxqrfact}
\end{equation}
where the block dimensions are sized $k, r$, as appropriate.
We can then write
\begin{align*}
\widehat{\mathbf{Q}}_p &\widehat{\mathbf{Q}}_p^T \mathbf{A} \\
&= \widehat{\mathbf{Q}}_p \begin{bmatrix} \widehat{\mathbf{Q}}_1^T \\ \widehat{\mathbf{Q}}_2^T \end{bmatrix} \mathbf{U} \left[ \begin{array}{c|c} \begin{pmatrix} \mathbf{\Sigma}_1 & \\ & \mathbf{\Sigma}_2 \\ \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{pmatrix} & \begin{pmatrix} \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \\ \mathbf{\Sigma}_3 & \\ & \mathbf{\Sigma}_4 \end{pmatrix} \end{array} \right] \mathbf{V}^T \\
&= \widehat{\mathbf{Q}}_p \left[ \begin{array}{c|c} \widehat{\mathbf{Q}}_1^T \mathbf{U} \begin{pmatrix} \mathbf{\Sigma}_1 & \\ & \mathbf{\Sigma}_2 \\ \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{pmatrix} & \widehat{\mathbf{Q}}_{1}^T \mathbf{U} \begin{pmatrix} \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \\ \mathbf{\Sigma}_3 & \\ & \mathbf{\Sigma}_4 \end{pmatrix} \\ \hline \widehat{\mathbf{Q}}_2^T \mathbf{U} \begin{pmatrix} \mathbf{\Sigma}_1 & \\ & \mathbf{\Sigma}_2 \\ \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{pmatrix} & \widehat{\mathbf{Q}}_2^T \mathbf{U} \begin{pmatrix} \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \\ \mathbf{\Sigma}_3 & \\ & \mathbf{\Sigma}_4 \end{pmatrix} \end{array} \right] \mathbf{V}^T
\end{align*}
By the Cauchy interlacing theorem for singular values, it follows that
\begin{equation}
\sigma_k \left( \widehat{\mathbf{Q}}_p \widehat{\mathbf{Q}}_p^T \mathbf{A} \right) \geq \sigma_k \left( \widehat{\mathbf{Q}}_1^T \mathbf{U} \begin{pmatrix} \mathbf{\Sigma}_1 & \\ & \mathbf{\Sigma}_2 \\ \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{pmatrix} \right) \label{eqn:sveqn2}
\end{equation}
We can compare the first $k$ columns of Eqn.~(\ref{eqn:kpxqrfact}) with the expression in Eqn.~(\ref{eqn:gapmatrix}) to see that
\begin{equation}
\widehat{\mathbf{Q}}_1 \widehat{\mathbf{R}}_{11} = \mathbf{U} \begin{bmatrix} \mathbf{Q}_{11} \\ \mathbf{Q}_{21} \\ \mathbf{0} \\ \mathbf{H} \end{bmatrix} \label{eqn:r11def}
\end{equation}
which helps us to write
\begin{align}
\widehat{\mathbf{Q}}_1^T \mathbf{U} \begin{pmatrix} \mathbf{\Sigma}_1 & \\ & \mathbf{\Sigma}_2 \\ \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{pmatrix} &= \left( \mathbf{U} \begin{pmatrix} \mathbf{Q}_{11} \\ \mathbf{Q}_{21} \\ \mathbf{0} \\ \mathbf{H} \end{pmatrix} \widehat{\mathbf{R}}_{11}^{-1} \right)^T \mathbf{U} \begin{pmatrix} \mathbf{\Sigma}_1 & \\ & \mathbf{\Sigma}_2 \\ \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{pmatrix} \nonumber \\
&= \widehat{\mathbf{R}}_{11}^{-T} \begin{pmatrix} \mathbf{Q}_{11} \\ \mathbf{Q}_{21} \\ \mathbf{0} \\ \mathbf{H} \end{pmatrix}^T \begin{pmatrix} \mathbf{\Sigma}_1 & \\ & \mathbf{\Sigma}_2 \\ \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{pmatrix} \nonumber \\
&= \widehat{\mathbf{R}}_{11}^{-T} \begin{bmatrix} \mathbf{Q}_{11}^T \mathbf{\Sigma}_1 & \mathbf{Q}_{21}^T \mathbf{\Sigma}_2 \end{bmatrix} \label{eqn:sveqn3}
\end{align}
On the other hand,
\begin{align}
\sigma_{k+s} &= \sigma_k \left( \sigma_{k+s} \begin{bmatrix} \mathbf{Q}_{11}^T & \mathbf{Q}_{21}^T \end{bmatrix} \right) \nonumber \\
&\leq \sigma_k \left( \begin{bmatrix} \mathbf{Q}_{11}^T \mathbf{\Sigma}_1 & \mathbf{Q}_{21}^T \mathbf{\Sigma}_2 \end{bmatrix} \right) \nonumber \\
&= \sigma_k \left( \widehat{\mathbf{R}}_{11}^T \widehat{\mathbf{R}}_{11}^{-T} \begin{bmatrix} \mathbf{Q}_{11}^T \mathbf{\Sigma}_1 & \mathbf{Q}_{21}^T \mathbf{\Sigma}_2 \end{bmatrix} \right) \nonumber \\
&\leq \Vert \widehat{\mathbf{R}}_{11}^T \Vert_2 \, \sigma_k \left( \widehat{\mathbf{R}}_{11}^{-T} \begin{bmatrix} \mathbf{Q}_{11}^T \mathbf{\Sigma}_1 & \mathbf{Q}_{21}^T \mathbf{\Sigma}_2 \end{bmatrix} \right) \label{eqn:sveqn4}
\end{align}
Combining Eqns.~(\ref{eqn:sveqn1}),~(\ref{eqn:sveqn2}),~(\ref{eqn:sveqn3}), and~(\ref{eqn:sveqn4}), we obtain
\begin{equation}
\sigma_k(\mathbf{B}_k) \geq \frac{\sigma_{k+s}}{\Vert \widehat{\mathbf{R}}_{11}^T \Vert_2}
\end{equation}
With the help of Eqn.~(\ref{eqn:r11def}),
\begin{align}
\widehat{\mathbf{R}}_{11}^T \widehat{\mathbf{R}}_{11} &= \widehat{\mathbf{R}}_{11}^T \left(\mathbf{U}^T \widehat{\mathbf{Q}}_1 \right)^T \left(\mathbf{U}^T \widehat{\mathbf{Q}}_1 \right) \widehat{\mathbf{R}}_{11} \\
&= \begin{bmatrix} \mathbf{Q}_{11} \\ \mathbf{Q}_{21} \end{bmatrix}^T \begin{bmatrix} \mathbf{Q}_{11} \\ \mathbf{Q}_{21} \end{bmatrix} + \mathbf{H}^T \mathbf{H} \\
&= \mathbf{I} + \mathbf{H}^T \mathbf{H}
\end{align}
which completes the proof.
\end{proof}
We are now in a position to provide the proof of Theorem~\ref{thm:svconvergence}.
\begin{proof}
With an eye toward Lemma~\ref{lem:svconvlemma}, we proceed by providing a bound for $\Vert \mathbf{H} \Vert_2^2$.
\begin{align*}
& \, \Vert \mathbf{H} \Vert_2^2 \\
=& \, \sigma_1 \left( \mathbf{H} \mathbf{H}^T \right) \\
=& \, \sigma_1 \Bigg( T_{2p+1}(\mathbf{\Sigma}_4) (\mathbf{V}_{41}-\mathbf{V}_{42}\mathbf{V}_{32}^{-1}\mathbf{V}_{31}) \left(\widetilde{\mathbf{R}}^T \widetilde{\mathbf{R}} \right)^{-1} \\
&(\mathbf{V}_{41}-\mathbf{V}_{42}\mathbf{V}_{32}^{-1}\mathbf{V}_{31})^T T_{2p+1}(\mathbf{\Sigma}_4) \Bigg) \\
=& \, \sigma_1 \Bigg( T_{2p+1}(\mathbf{\Sigma}_4) (\mathbf{V}_{41}-\mathbf{V}_{42}\mathbf{V}_{32}^{-1}\mathbf{V}_{31}) \nonumber \\
&\Bigg( \begin{bmatrix} \mathbf{V}_{11} - \mathbf{V}_{12} \mathbf{V}_{32}^{-1} \mathbf{V}_{31} \\ \mathbf{V}_{21}-\mathbf{V}_{22}\mathbf{V}_{32}^{-1} \mathbf{V}_{31} \end{bmatrix}^T \begin{bmatrix} T_{2p+1}^2(\mathbf{\Sigma}_1) & \\ & T_{2p+1}^2(\mathbf{\Sigma}_2) \end{bmatrix} \\
&\hspace{4cm}\begin{bmatrix} \mathbf{V}_{11} - \mathbf{V}_{12} \mathbf{V}_{32}^{-1} \mathbf{V}_{31} \\ \mathbf{V}_{21}-\mathbf{V}_{22}\mathbf{V}_{32}^{-1} \mathbf{V}_{31} \end{bmatrix} \Bigg)^{-1} \nonumber \\
&(\mathbf{V}_{41}-\mathbf{V}_{42}\mathbf{V}_{32}^{-1}\mathbf{V}_{31})^T T_{2p+1}(\mathbf{\Sigma}_4) \Bigg) \\
\leq& \, \sigma_1 \Bigg( T_{2p+1}(\mathbf{\Sigma}_4) (\mathbf{V}_{41}-\mathbf{V}_{42}\mathbf{V}_{32}^{-1}\mathbf{V}_{31}) \nonumber \\
&\Big( (\mathbf{V}_{11}-\mathbf{V}_{12}\mathbf{V}_{32}^{-1}\mathbf{V}_{31})^T T_{2p+1}^2(\mathbf{\Sigma}_1) (\mathbf{V}_{11}-\mathbf{V}_{12}\mathbf{V}_{32}^{-1}\mathbf{V}_{31}) \Big)^{-1} \nonumber \\
&(\mathbf{V}_{41}-\mathbf{V}_{42}\mathbf{V}_{32}^{-1}\mathbf{V}_{31})^T T_{2p+1}(\mathbf{\Sigma}_4) \Bigg) \\
=& \, \Vert T_{2p+1}(\mathbf{\Sigma}_4) (\mathbf{V}_{41}-\mathbf{V}_{42}\mathbf{V}_{32}^{-1} \mathbf{V}_{31}) \\
&(\mathbf{V}_{11} -\mathbf{V}_{12}\mathbf{V}_{32}^{-1}\mathbf{V}_{31})^{-1} T_{2p+1}^{-1}(\mathbf{\Sigma}_1) \Vert_2^2 \\
\leq& \, T_{2p+1}^{-2}\left( 1 + 2 \cdot \frac{\sigma_k-\sigma_{k+s+r+1}}{\sigma_{k+s+r+1}} \right) \\
&\Vert (\mathbf{V}_{41}-\mathbf{V}_{42}\mathbf{V}_{32}^{-1} \mathbf{V}_{31}) (\mathbf{V}_{11} -\mathbf{V}_{12}\mathbf{V}_{32}^{-1}\mathbf{V}_{31})^{-1} \Vert_2^2
\end{align*}
The $1 + 2 \cdot \frac{\sigma_k - \sigma_{k+s+r+1}}{\sigma_{k+s+r+1}}$ factor is interpreted as shifting the Chebyshev polynomial $T_{2p+1}$ onto the interval $[0, \sigma_{k+s+r+1}]$, so that the tail of the singular spectrum is bounded by $1$ and convergence is driven by the growth of the Chebyshev polynomial on the $[\sigma_k, \cdots, \sigma_1]$ part of the spectrum that we are interested in.
Repeating the previous argument for $1 \leq j \leq k$ completes the proof for the bound on $\sigma_j(\mathbf{B}_k)$.
\end{proof}
Due to space constraints, we omit the proofs of the corollaries of Theorem~\ref{thm:svconvergence}. They are similar in flavor to the proof above and involve constructions of specifically chosen $\mathbf{X}$ matrices in each case.
We close by providing the proof for Theorem~\ref{thm:superlinear}.
\begin{proof}
The statement of the theorem is equivalent to the statement that
\begin{equation}
\mathcal{C} \, T_{2p+1}^{-1}\left( 1+2 \cdot \frac{\sigma_j - \sigma_{j+r+1}}{\sigma_{j+r+1}} \right) \rightarrow 0
\end{equation}
superlinearly. For notational convenience we assume $\sigma_j$ is not a multiple singular value and we have chosen $s=0$; otherwise, the following argument can be made for the largest choice of $s$ such that $\sigma_{j+s} = \sigma_j$.
Recall that a sequence $\left\{ a_n \right\}$ converges superlinearly to $a$ if
\begin{equation}
\lim_{n \rightarrow \infty} \frac{\vert a_{n+1} - a \vert}{\vert a_n - a \vert} = 0
\end{equation}
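For example (a standard illustration, unrelated to any particular matrix), the sequence $a_n = 1/n!$ converges to $0$ superlinearly, since the successive ratios $a_{n+1}/a_n = 1/(n+1)$ themselves tend to $0$:

```python
import math

a = [1.0 / math.factorial(n) for n in range(1, 12)]     # a_n = 1/n!
ratios = [a[i + 1] / a[i] for i in range(len(a) - 1)]   # equals 1/(n+1)
```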
For any fixed $j = 1, \cdots, k$, define
\begin{align*}
a_q &\equiv \mathcal{C}(r) T^{-1}_{2\left(q+1 - \frac{k+r}{b} \right) + 1} \left(1 + 2 \cdot \frac{\sigma_j - \sigma_{j+r+1}}{\sigma_{j+r+1}} \right)
\end{align*}
where we have explicitly specified the dependence of the constant $\mathcal{C}$ on the analysis parameter $r$, and expressed $p$ in terms of $q$. We approximate
\begin{align*}
a_q &\approx \mathcal{C}(r) \cdot 2 \left( 1 + g + \sqrt{2g} \right)^{-\left( 2\left(q+1 - \frac{k+r}{b} \right) + 1\right)} \\
&\approx 2 \, \mathcal{C}(r) \cdot \left(1 + g + \sqrt{2g}\right)^{-2\left( 1-\frac{k+r}{b} \right)-1} \cdot \left(1+g+\sqrt{2g}\right)^{-2q} \\
&\text{where } \, g = 2 \cdot \frac{\sigma_j - \sigma_{j+r+1}}{\sigma_{j+r+1}} = 2 \cdot \left( \frac{\sigma_j}{\sigma_{j+r+1}} - 1 \right)
\end{align*}
Then we argue that $a_{q+1}/a_q \rightarrow 0$ as follows.
\begin{align}
\frac{a_{q+1}}{a_q} &= \frac{1}{\left( 1+g+\sqrt{2g} \right)^2} \leq \frac{1}{(1+g)^2}
\end{align}
Since we assume a spectrum such that $\sigma_i \rightarrow 0$ eventually, it is possible to choose $r$ sufficiently large such that $1/(1+g)^2$ is arbitrarily small.
\end{proof}
Rigorously, the above argument applies only to infinite-dimensional operators, as in the finite-dimensional case, $r \leq n$ cannot be chosen arbitrarily large. However, numerous previous works have noted that in practice, convergence does tend to exhibit superlinear behavior for certain types of spectra \cite{saad1994theoretical}.
\section{Numerical Experiments} \label{sec:numerical_experiments}
\subsection{Computational Complexity}
We give an arithmetic complexity accounting of the randomized block Lanczos algorithm. The initialization of the random starting matrix $\mathbf{\Omega}$ takes $\mathcal{O}(nb)$ floating-point operations (flops). In Step 1, the formation of the Krylov matrix $\mathbf{K}$ consists of one matrix multiplication $\mathbf{A}\mathbf{\Omega}$ along with $2q$ accumulated applications of either $\mathbf{A}$ or $\mathbf{A}^T$, for a total of $\mathcal{O}(mnbq)$ flops. The orthonormal basis $\mathbf{Q}$ of $\mathbf{K}$ can be computed using a standard Householder QR factorization, which has complexity $\mathcal{O}(m(bq)^2)$. Finally, Steps 3 and 4 consist of first forming $\mathbf{Q}^T\mathbf{A}$ for $\mathcal{O}(mnbq)$ flops, then computing its truncated SVD factorization. Because the size of this matrix is $qb \times n$ and we expect $qb \approx k$ to be small, we assume its SVD is computed with a non-specialized dense matrix algorithm, using $\mathcal{O}(n(bq)^2)$ flops. The final step of projecting the left $k$ singular vectors onto $\mathbf{Q}$ takes an additional $\mathcal{O}(m(bq)^2)$ flops.
Overall, the computational complexity of Algorithm~\ref{alg:blk_lanczos} is $\mathcal{O}(mnbq + (m+n)(bq)^2)$. The first term dominates the computations and is the result of performing the matrix multiplications for the computation of the Lanczos block vectors. Fortunately, matrix multiplication is a highly optimized and highly tuned part of many matrix computation libraries, especially for suitably chosen block sizes.
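As a concrete, unoptimized sketch of the steps counted above, the following NumPy code forms the block Krylov matrix, orthonormalizes it, and extracts a rank-$k$ approximation. The function name and structure are ours for illustration only; a careful implementation would add reorthogonalization for numerical stability.

```python
import numpy as np

def randomized_block_lanczos(A, k, b, q, seed=None):
    """Illustrative sketch: rank-k SVD approximation via a block Krylov basis."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, b))          # random start, O(nb)
    # Step 1: K = [A W, (A A^T) A W, ..., (A A^T)^{q-1} A W], O(mnbq) flops
    block = A @ Omega
    blocks = [block]
    for _ in range(q - 1):
        block = A @ (A.T @ block)                # two applications per pass
        blocks.append(block)
    K = np.hstack(blocks)                        # m x (bq)
    # Step 2: orthonormal basis via Householder QR, O(m (bq)^2) flops
    Q, _ = np.linalg.qr(K)
    # Steps 3-4: small (bq x n) projection and its dense SVD
    B = Q.T @ A
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    # Lift the leading k left singular vectors back through Q
    return Q @ U[:, :k], s[:k], Vt[:k, :]
```

On an exactly rank-$k$ matrix with $b \geq k$ this recovers the top singular values to machine precision; on general matrices the accuracy is governed by the convergence bounds above.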
We draw attention to the fact that the parameters $b$ and $q$ only appear together as the product $bq$ in our computational complexity count. This suggests that we may freely vary $b$ and $q$: as long as they vary inversely so that the product $bq$ remains constant, the cost of running Algorithm~\ref{alg:blk_lanczos} remains comparable. (In practice, this holds only for $b > 1$, owing to the efficiency of BLAS2 and BLAS3 operations compared with BLAS1 operations.) Given the comparable computational complexity, and assuming the conditions for the convergence of Algorithm~\ref{alg:blk_lanczos} are met, we need not privilege the block size choice $b = k$. In fact, we show empirically that in many cases it is advantageous to choose block sizes $b$ strictly smaller than $k$.
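Under this accounting, one application of $\mathbf{A}$ or $\mathbf{A}^T$ to a block of $b$ vectors costs $b$ matrix-vector products, so building the Krylov matrix costs $b + 2(q-1)b = (2q-1)b$ MATVECs in total. The short tally below (our own, matching the count in the text) illustrates that for fixed $bq$ the totals are nearly identical across block sizes, differing only by the single initial multiply.

```python
def krylov_matvecs(b, q):
    # One multiply A @ Omega plus 2(q-1) applications of A or A^T,
    # each acting on a block of b vectors: (2q - 1) * b MATVECs total.
    return (2 * q - 1) * b

# Fixed product bq = 64: totals equal 2*bq - b, nearly constant in b.
for b, q in [(1, 64), (2, 32), (4, 16), (8, 8)]:
    print(f"b={b:2d} q={q:2d} bq={b * q} matvecs={krylov_matvecs(b, q)}")
```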
\subsection{Activities and Sports Dataset}
The Activities and Sports Dataset consists of motion sensor data for $8$ subjects performing $19$ daily/sports activities, each for $5$ minutes, sampled at a $25$ Hz frequency. This dataset can be found at \cite{altun2010comparative}.
The matrix associated with this dataset is dense and of dimensions $\mathbf{A} \in \mathbb{R}^{9120 \times 5625}$, where each row is a sample and each entry is a double-precision float. Figure~\ref{fig:asd_spec} shows a plot of the first $500$ singular values of $\mathbf{A}$. As is typical for data matrices, this matrix exhibits spectrum decay on the order of $\sigma_j = \frac{1}{j^\tau}$, and our theory suggests that in this case we should observe superlinear convergence for the RBL algorithm.
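The decay exponent $\tau$ can be estimated from a computed spectrum by a least-squares fit in log-log coordinates. The snippet below demonstrates the fit on synthetic power-law values; with real data one would substitute the computed singular values for the synthetic array.

```python
import numpy as np

# Synthetic power-law spectrum sigma_j = j^(-tau); replace with the
# computed singular values of A when fitting a real matrix.
tau = 1.5
j = np.arange(1, 501)
sigma = j ** (-tau)

# log(sigma_j) = -tau * log(j) + const, so -slope estimates tau.
slope, _ = np.polyfit(np.log(j), np.log(sigma), 1)
tau_hat = -slope
```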
\begin{figure}
\centering
\includegraphics[scale=0.6]{asd_spec}
\caption{First $500$ singular values of the Daily Activities and Sports Matrix. }
\label{fig:asd_spec}
\end{figure}
In this set of experiments, we investigate the convergence of a single singular value with respect to the number of iterations, in addition to the effect of the block size on convergence. We run the RSI and RBL algorithms on the Activities and Sports Dataset matrix with a target rank of $k = 200$, and examine the convergence of $\sigma_1$, $\sigma_{100}$, and $\sigma_{200}$. The results of these experiments are in Figures~\ref{fig:asd_j1},~\ref{fig:asd_j100}, and~\ref{fig:asd_j200}.
\begin{figure}
\centering
\includegraphics[scale=0.6]{bidiagj1}
\caption{$k=200$ approximation of the Daily Activities Dataset, convergence of $\sigma_{1}$. }
\label{fig:asd_j1}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.6]{bidiagj100}
\caption{$k=200$ approximation of the Daily Activities Dataset, convergence of $\sigma_{100}$. }
\label{fig:asd_j100}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.6]{bidiagj200}
\caption{$k=200$ approximation of the Daily Activities Dataset, convergence of $\sigma_{200}$. }
\label{fig:asd_j200}
\end{figure}
Each of these plots represents the convergence of a particular singular value. In each plot, each line corresponds to a single block-size setting $b$ for either the RSI or the RBL algorithm. The $y$-axis is on a log scale and shows
\begin{equation}
\text{rel. err.} = \frac{\sigma_j - \sigma_j(\mathbf{B}_k)}{\sigma_j}
\end{equation}
the relative error of the singular value under examination. The $x$-axis is on a linear scale and shows the number of matrix-vector multiplications (MATVECs), a proxy measure for computational complexity. Markers on each line represent successive iterations of the algorithm. In these plots, down and to the left is good: we seek parameter settings that give good convergence at lower computational cost. We observe that, as expected, RSI converges linearly and RBL converges superlinearly. These trends are most clearly seen in Figure~\ref{fig:asd_j200} and are also present in Figure~\ref{fig:asd_j100}. The convergence of $\sigma_1$ in Figure~\ref{fig:asd_j1} is extremely rapid, reaching double precision in $2$-$5$ iterations for all block sizes. In all cases, for both RBL and RSI, it appears that at the same computational complexity, choosing a smaller block size $b$ leads to more rapid convergence. For example, in Figure~\ref{fig:asd_j200}, we observe that for $\sigma_{200}$ to converge to a relative error of $\sim 10^{-5}$, taking $b = 1$ uses half the number of MATVECs required by $b = k = 200$.
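To make the plotted quantity concrete, the following self-contained sketch (a synthetic matrix with geometric spectrum decay, not the experimental code) tracks the relative error of $\sigma_k$ over successive randomized subspace iterations; the error shrinks by a roughly constant factor per iteration, i.e. linearly on a log scale.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 120, 60, 10

# Synthetic matrix with geometrically decaying spectrum sigma_j = 2^(-j+1).
U, _ = np.linalg.qr(rng.standard_normal((m, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s_true = 0.5 ** np.arange(n)
A = U @ (s_true[:, None] * V.T)

# Randomized subspace iteration with block size b = k, recording the
# relative error of sigma_k after each iteration.
Q, _ = np.linalg.qr(A @ rng.standard_normal((n, k)))
rel_err = []
for _ in range(5):
    Q, _ = np.linalg.qr(A @ (A.T @ Q))           # one power iteration
    sk = np.linalg.svd(Q.T @ A, compute_uv=False)[k - 1]
    rel_err.append((s_true[k - 1] - sk) / s_true[k - 1])
```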
\subsection{Eigenfaces Dataset}
The Eigenfaces dataset is available from the AT\&T Laboratories' Database of Faces \cite{samaria1994parameterisation}, and consists of $10$ different face images of each of $40$ subjects at $92 \times 112$ pixel resolution, varying in lighting, facial expression, and other details. The widely cited technique for processing this data is principal component analysis (PCA), where it was observed that each face can be composed in large part from a few prominent ``Eigenfaces'' \cite{turk1991face}.
The associated matrix is dense, formed by vectorizing each face image as a column vector. It has dimensions $\mathbf{A} \in \mathbb{R}^{10304 \times 400}$ and is of full numerical rank. The spectrum of this matrix spans $5$ orders of magnitude and decays extremely rapidly, as is typical of data matrices. In fact, as seen in Figure~\ref{fig:ef_spec}, it decays nearly to zero within the first $50$ largest singular values.
\begin{figure}
\centering
\includegraphics[scale=0.6]{ef_spec}
\caption{Spectrum of the Eigenfaces Matrix. }
\label{fig:ef_spec}
\end{figure}
We repeat the experiments of the previous section. Here, we use the RSI and RBL algorithms to compute rank-$k=100$ approximations of the Eigenfaces matrix, and examine the convergence of $\sigma_{100}$. The result appears in Figure~\ref{fig:ef_j100}.
\begin{figure}
\centering
\includegraphics[scale=0.6]{eigenfaces_k100_j100}
\caption{$k=100$ approximation of the Eigenfaces Dataset, convergence of $\sigma_{100}$. }
\label{fig:ef_j100}
\end{figure}
We observe behavior similar to that seen for the Daily Activities and Sports matrix: the RSI algorithm exhibits linear convergence while the RBL algorithm exhibits superlinear convergence, and smaller block sizes $b$ appear to converge more quickly for a fixed number of flops.
\section{Conclusions} \label{sec:conclusions}
In this paper, we have derived a novel convergence result for the randomized block Lanczos algorithm. We have shown that for all block sizes, the singular value approximation accuracy of this algorithm converges geometrically in the number of iterations, with a rate that is asymptotically superior to that achieved by the randomized subspace iteration algorithm. We have also shown that for a matrix with spectrum decaying to zero, the RBL algorithm converges superlinearly. Additionally, we have provided numerical results in support of our analysis.
The current work is largely theoretical in nature, and there continues to be a need for quality implementations of the randomized block Lanczos algorithm to aid its wider adoption. To this end, continuations of the current work might include such a (possibly parallelized) implementation, along with further investigation of practical choices for the block size parameter $b$, balancing the evident preference for a smaller $b$ for convergence against the advantages of a larger $b$ for computational efficiency and numerical stability.
\bibliographystyle{IEEEtran}